this post was submitted on 21 Jul 2023
24 points (92.9% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ


I want to rip the contents of a pay website, but I have to log in through a web page to get access.

Does anyone have any good tools for Windows for that?

I'm guessing that any such tool must have a built-in browser, or be a browser plugin, for it to work.

[–] zabadoh@lemmy.ml 4 points 1 year ago (1 children)

I have an account, so that's not a problem. The problem is how to automate going into every little content page and downloading the content, including the hi-res files.

[–] tekchic@kbin.social 3 points 1 year ago (1 children)

I'm on a Mac and use SiteSucker, so I know that's not super helpful, but for Windows you could try wget or WebCopy? https://www.cyotek.com/cyotek-webcopy / https://gnuwin32.sourceforge.net/packages/wget.htm
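
For reference, a minimal wget invocation for an authenticated recursive grab might look something like the line below, assuming the logged-in session cookies have been exported from the browser into a cookies.txt file (e.g. via a cookies-export extension); the members URL is just a placeholder:

    wget --load-cookies cookies.txt --mirror --no-parent --page-requisites --convert-links --wait=1 "https://example.com/members/"

--load-cookies lets wget reuse the existing browser session instead of trying to log in itself, which sidesteps the form-login problem entirely, as long as the site's session is plain cookie-based.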

[–] zabadoh@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)

WebCopy looks promising if I can get the crawler part of it to work with this site's authentication...

edit: I couldn't get WebCopy's spider to authenticate correctly.

WebCopy uses the deprecated Internet Explorer that ships with Windows 10 as its embedded browser, and I can log into the website through the Capture Forms browser dialog, but the cookies or whatever else holds the session don't carry over to the spider.
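
If no off-the-shelf spider will accept the login, a rough fallback is to reuse the browser's session cookie in a small script. The sketch below is only an illustration, not anything from the thread: it assumes a single session cookie copied out of the browser's dev tools, the cookie name, domain, and start URL are placeholders, and it only follows same-site links.

    # Sketch: crawl an authenticated site by reusing the browser's session
    # cookie directly, instead of relying on a tool's built-in login capture.
    # Cookie name/value, domain, and URLs are placeholders.
    from urllib.parse import urljoin, urlparse
    from collections import deque
    import os

    import requests
    from bs4 import BeautifulSoup

    START_URL = "https://example.com/members/"          # placeholder start page
    COOKIE_NAME = "sessionid"                            # hypothetical cookie name
    COOKIE_VALUE = "PASTE_VALUE_FROM_BROWSER_DEV_TOOLS"  # copied from the logged-in browser
    OUT_DIR = "rip"

    session = requests.Session()
    session.cookies.set(COOKIE_NAME, COOKIE_VALUE, domain="example.com")

    seen = set()
    queue = deque([START_URL])
    host = urlparse(START_URL).netloc

    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)

        resp = session.get(url)
        content_type = resp.headers.get("Content-Type", "")

        if "text/html" in content_type:
            soup = BeautifulSoup(resp.text, "html.parser")
            # queue every same-site link and image; files get saved on their own pass
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"])
                if urlparse(link).netloc == host:
                    queue.append(link)
            for img in soup.find_all("img", src=True):
                queue.append(urljoin(url, img["src"]))
        else:
            # non-HTML response: save it to disk (hi-res images, PDFs, etc.)
            name = os.path.basename(urlparse(url).path) or "index"
            os.makedirs(OUT_DIR, exist_ok=True)
            with open(os.path.join(OUT_DIR, name), "wb") as f:
                f.write(resp.content)

It won't cope with JavaScript-rendered pages or logins that rotate tokens per request, but for a plain cookie-based session something like this is usually enough to walk every content page and save the non-HTML files.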