The Internet Archive is under attack, with a popup claiming a ‘catastrophic’ breach
(www.theverge.com)
I can't think of any reason to attack that website. What have they done wrong?
I just sent a DMCA takedown last week to get my site removed. They've claimed to honor meta tags and robots.txt since 1998, but they still had over 1,000,000 of my pages going back that far. They even had my robots.txt, configured to exclude them, archived from 1998.
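(For anyone who wants to check a claim like that against their own domain: the Wayback Machine exposes a public CDX search endpoint. A rough Python sketch, assuming the `requests` library; "example.com" is a placeholder, and the 1000-row limit is just to keep the response small:

    # Rough sketch: count Wayback Machine captures for a domain via the
    # public CDX API. "example.com" is a placeholder for your own domain.
    import requests

    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={
            "url": "example.com",
            "matchType": "domain",       # include subdomains
            "output": "json",            # list of lists; first row is a header
            "fl": "timestamp,original",  # capture time and original URL
            "limit": 1000,
        },
        timeout=30,
    )
    rows = resp.json() or []
    print(f"{max(len(rows) - 1, 0)} captures returned (capped at 1000)")

Drop the limit or page through the results if you want the full count.)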
I'm tired of people linking to archived versions of things that I worked hard to create. Sites like Wikipedia were archiving URLs and then linking to the archive, effectively removing branding and blocking user engagement.
Not to mention that I'm losing advertising revenue if someone views the site in an archive. I have fewer problems with archiving if the original site is gone, but to mirror and republish active content with no supported way to prevent it short of legal action is ridiculous. Not to mention that I lose control over what's done with that content -- are they going to let Google train AI on it with their new partnership?
I'm not a fan. They could easily allow people to block archiving, but they choose not to. They offer a way to circumvent artist or owner control, and I'm surprised that they still exist.
So... That's what I think is wrong with them.
From a security perspective it's terrible that they were breached. But it is kind of ironic -- maybe they can think of it as an archive of their passwords or something.
How do you expect an archive to happen if they are not allowed to archive while the site is still up? How are you supposed to track changes or see how the world has shifted? This is a very narrow and, in my opinion, selfish way to view the world.
I don't want them publishing their archive while the site is up. If they archive but don't republish while the site exists, then there's less damage.
I support the concept of archiving and screenshotting. I have my own linkwarden server set up and I use it all the time.
But I don't republish anything that I archive because that dilutes the value of the original creator.
What if I'm looking for something but the page has changed?
Shouldn't that be the content creator's prerogative? What if the content had a significant error? What if they removed the page because someone living in the EU requested it under their laws? What if the page was edited because someone accidentally made their address and phone number public in a forum post?
Nah. It just lets slimy gits claim they never said XYZ, or that such-and-such a thing never happened. With as volatile a storage medium as the web, hard backups are absolutely necessary. Put it this way: would you have the same complaint about a newspaper? A TV show? Post your opinion piece to a newspaper and it's fixed in ink forever. Yet somehow you complain when that same opinion piece is on a website? Get outta here.
Like I said, I have no problems with individuals archiving it and not republishing it.
If I take a newspaper article and republish it on my site I guarantee you I will get a takedown notice. That will be especially true if I start linking to my copy as the canonical source from places like Wikipedia.
It's a fine line. Is archive.org a library (wasn't there a court case about this recently...) or are they republishing?
Either way, it doesn't matter for me any more. The pages are gone from the archive, and they won't archive any more.
A couple of good examples are lifehacker.com and lifehack.org. Both sites used to have excellent content. The sites are still up and running, but the first one has turned into a collection of listicles and the second is an ad for an "AI-powered life coach". All of that old content is gone and is only accessible through the Internet Archive.
In fact, many domains never shut down; they just change owners or change direction.
Again, isn't that the site's prerogative?
I think there should at least be a recognized way to opt out that archive.org actually follows. For years they told people to put

    User-agent: ia_archiver
    Disallow: /

in robots.txt, but they still archived content from those sites. They refuse to publish the IP addresses they crawl from, even though that would be trivial to do. They refuse to use a UserAgent that you can filter on.
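If they did announce a stable UserAgent, filtering would be a few lines of server code. A minimal sketch in Python, assuming a WSGI app; "ia_archiver" is used as a hypothetical token to match, since the whole complaint is that no reliable one exists:

    # Minimal WSGI middleware sketch: refuse requests whose User-Agent
    # contains an announced archiver token. "ia_archiver" is an assumed
    # token here -- the point above is that no reliable one exists.
    BLOCKED_TOKENS = ("ia_archiver",)

    def block_archivers(app):
        def middleware(environ, start_response):
            ua = environ.get("HTTP_USER_AGENT", "").lower()
            if any(token in ua for token in BLOCKED_TOKENS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Archiving is not permitted on this site.\n"]
            return app(environ, start_response)
        return middleware

You could wrap any WSGI app with it, e.g. a Flask app via app.wsgi_app = block_archivers(app.wsgi_app). But without a published UA, there's nothing to match on.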
If you want to be a library, be open and honest about it. There's no need to sneak around.