The New York Times tried to block the Internet Archive: another reason to value the latter
(walledculture.org)
Yeah, I'm surprised the Archive hasn't worked out a deal with publishers to simply delay showing articles.
It exists: it's called a robots.txt file. Site operators can put one in place, and bots like the Internet Archive's crawler will then ignore the content.
And therein lies the issue: if you put up a robots.txt blocking that content, all bots will ignore it, including search engine indexers. So huge publishers want it both ways: they want to be indexed, but they don't want the content to be archived.
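For reference, robots.txt rules are scoped per user agent, so in principle a site can disallow one crawler without delisting itself from search engines. A minimal sketch using Python's stdlib robots.txt parser and a made-up robots.txt (the `ia_archiver` token is the one the Internet Archive's crawler has historically used; whether the Archive honors it is exactly what's disputed below):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: disallow the Internet Archive's crawler
# (user-agent token "ia_archiver") while leaving every other bot,
# including search engine indexers, free to crawl.
rules = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Per-agent result: blocked for the Archive, allowed for Googlebot.
print(parser.can_fetch("ia_archiver", "https://example.com/article"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))    # True
```

Of course, robots.txt is purely advisory: a crawler that chooses to ignore it can fetch the page anyway.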
If the NYT is serious about not wanting its content on the Internet Archive but still wants humans to see it, the solution is simple: put that content behind a login. But the NYT doesn't want to do that, since it would lose the ad revenue from regular people loading its website.
I think in the case of this article, though, the motivation is a bit more nefarious: the NYT et al. simply don't want to be held accountable. So they have a choice to make: either retain the privilege of being regarded as serious journalism, or act like a bunch of hacks who can't be relied upon.
The Internet Archive doesn't respect robots.txt:

https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/

The only way to stay out of the Internet Archive is to follow the removal process they created and hope they agree to remove you. Or firewall them.
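On the "firewall them" option: one rough approximation is refusing requests at the web-server layer by User-Agent. A hedged sketch, assuming nginx (the `ia_archiver` / `archive.org_bot` tokens are historical assumptions, and a crawler can always change its User-Agent, so a real firewall would have to block by IP instead):

```nginx
# Sketch: return 403 to requests whose User-Agent looks like an
# Internet Archive crawler. Header matching is advisory at best --
# nothing stops a bot from sending a different User-Agent string.
if ($http_user_agent ~* "ia_archiver|archive\.org_bot") {
    return 403;
}
```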