this post was submitted on 08 Jan 2024
388 points (96.0% liked)

Technology


OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

[–] LWD@lemm.ee 1 points 10 months ago (1 children)

Their actions are unacceptable, whether or not they fit within a legal technicality. Just like when the BBC intentionally plagiarized Brian Deer's work, except that in his case they at least had the foresight to ask first, instead of assuming he consented because of how the data was presented.

The NYT explicitly publishes articles in a format designed to be downloaded and processed by a computer program, which extracts information from that download and presents it to a human.
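To be concrete about what "designed to be processed by a computer program" means here: every client that displays an article (a browser, a screen reader, or a scraper) does essentially the same thing — it parses the downloaded HTML and pulls out the text. A minimal stdlib-only sketch (the markup below is a made-up stand-in for an article page, not actual NYT output):

```python
from html.parser import HTMLParser

class ArticleTextExtractor(HTMLParser):
    """Collects the text inside <p> tags, the way any client
    processes a downloaded article page before showing it."""

    def __init__(self):
        super().__init__()
        self._in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self.paragraphs[-1] += data

# Hypothetical downloaded article markup, for illustration only.
html = "<article><h1>Headline</h1><p>First paragraph.</p><p>Second.</p></article>"

parser = ArticleTextExtractor()
parser.feed(html)
print(parser.paragraphs)  # ['First paragraph.', 'Second.']
```

The point of the sketch is that machine extraction is the normal, intended consumption path for published HTML; the dispute is about what happens to the text afterward, not whether a program may read it.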

Speaking of overutilizing a thesaurus, you buried the lede: The text is designed for a human to read.

I don't like the "just look at it, it was asking for it" defense, because it punishes publishers who try to present things in a DRM-free fashion for their readers:

"Our authors and readers have been asking for this for a long time," president and publisher Tom Doherty explained at the time. "They're a technically sophisticated bunch, and DRM is a constant annoyance to them. It prevents them from using legitimately-purchased e-books in perfectly legal ways, like moving them from one kind of e-reader to another."

But DRM-free e-books that circulate online are easy for scrapers to ingest.

The SFWA submission suggests "Authors who have made their work available in forms free of restrictive technology such as DRM for the benefit of their readers may have especially been taken advantage of."

[–] ricecake@sh.itjust.works 1 points 10 months ago

Have you deleted and reposted this comment three times now, or is something deeply wrong with your client?