this post was submitted on 08 Jan 2024
388 points (96.0% liked)


OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

[–] LWD@lemm.ee 16 points 8 months ago (2 children)

LLMs cannot learn or create like humans, and even if they somehow could, they are not humans. So the comparison to human creators expounding upon a genre is false because the premises on which it is based are false.

Perhaps you could compare it to a student getting blackout drunk, copying Wikipedia articles, pasting them together, and using a thesaurus app to change a few words here and there... In the end, the student doesn't know what they created, has no recollection of the sources they used, and the teacher can't tell whether it's plagiarized, or from whom.

OpenAI made a mistake by taking data without consent, not just from big companies but from individuals who are too small to fight back. Regurgitating information without attribution is gross in every regard, because even if you don't believe in asking for consent before taking from someone else, you should probably ask for a source before using this regurgitated information.

[–] ricecake@sh.itjust.works 19 points 8 months ago (4 children)

Well, machine learning algorithms do learn; it's not just copy-paste and a thesaurus. It's not exactly the same as how people learn, but arguing that it's entirely different is also wrong.
It isn't a big database full of copyrighted text.
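As a very rough sketch of what "learning" means here (a toy character-level bigram model, nothing like a real LLM's scale or architecture, and purely my own illustration): after training, only follow-on statistics remain; the source sentences themselves are not stored anywhere.

```python
from collections import defaultdict
import random

# Toy "training" corpus -- stand-ins for the text the model reads once.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

# "Learning" here means counting which character tends to follow which.
counts = defaultdict(lambda: defaultdict(int))
for text in corpus:
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1

# The model is just these counts; the original sentences are not stored.
def generate(start="t", length=20):
    out = start
    for _ in range(length):
        nxt = counts[out[-1]]
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate())  # statistically plausible output, not a stored copy
```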

The argument is that it's not wrong to look at data that was made publicly available when you're not making a copy of the data.
It's not copyright infringement to open a webpage in your browser, even though doing so makes your computer download the page, process all of its contents, render them to the screen, and hold onto that download for a finite but indefinite period while you do whatever you like with the data.
You can even take notes on the data and keep those indefinitely, including using that derivative information to create your own similar works.
The NYT explicitly publishes articles in a format designed to be downloaded, processed and have information extracted from that download by a computer program, and then to have that processed information presented to a human. They just didn't expect that the processing would end up looking like this.
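To put the same point in concrete terms, that "download and process" step is just an ordinary HTTP fetch plus text extraction. Here's a minimal sketch (the URL is a placeholder, and this only illustrates what any browser or scraper does, not OpenAI's actual pipeline):

```python
from urllib.request import urlopen, Request
from html.parser import HTMLParser

# A generic article URL stands in here -- purely illustrative.
URL = "https://example.com/some-article"

class TextExtractor(HTMLParser):
    """Collects the visible text from an HTML page, skipping scripts and styles."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

# The same download-and-process step every browser performs when you open the page.
req = Request(URL, headers={"User-Agent": "Mozilla/5.0"})
html = urlopen(req).read().decode("utf-8", errors="replace")

parser = TextExtractor()
parser.feed(html)
print("\n".join(parser.chunks)[:500])  # the processed, human-readable text
```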

The argument doesn't require that we accept that a human and a computer's system of learning be held to the same standard, or that we can't differentiate between the two; it hinges on the claim that this is just an extension of what we already find it reasonable for a computer to do.
We could certainly hold that generative AI is a different and new category for copyright law, but that's very different from saying that their actions are unacceptable under current law.

[–] LWD@lemm.ee 1 points 8 months ago (1 children)

Their actions are unacceptable, whether or not they fit within the technicality of legality. Just like when the BBC intentionally plagiarized the work of Brian Deer, except that at least in his case they had the foresight to try asking first, rather than simply assuming he consented because of the way the data looked.

The NYT explicitly publishes articles in a format designed to be downloaded, processed and have information extracted from that download by a computer program, and then to have that processed information presented to a human.

Speaking of overutilizing a thesaurus, you buried the lede: The text is designed for a human to read.

I don't like the "just look at it, it was asking for it" defense, because that abuses publishers who try to present things in a DRM-free fashion for their readers:

"Our authors and readers have been asking for this for a long time," president and publisher Tom Doherty explained at the time. "They're a technically sophisticated bunch, and DRM is a constant annoyance to them. It prevents them from using legitimately-purchased e-books in perfectly legal ways, like moving them from one kind of e-reader to another."

But DRM-free e-books that circulate online are easy for scrapers to ingest.

The SFWA submission suggests "Authors who have made their work available in forms free of restrictive technology such as DRM for the benefit of their readers may have especially been taken advantage of."

[–] ricecake@sh.itjust.works 1 points 8 months ago

Have you deleted and reposted this comment three times now, or is something deeply wrong with your client?
