this post was submitted on 17 Jul 2023
181 points (95.5% liked)

Technology

59311 readers
4864 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

A rising movement of artists and authors are suing tech companies for training AI on their work without credit or payment

[โ€“] archomrade@midwest.social 3 points 1 year ago (1 children)

I don't think I said "humans learn the same way," but I do think it helps to understand how ML algorithms work in comparison with existing examples of copyright infringement (i.e., photocopies, duplicated files on a hard drive, word-for-word or pixel-for-pixel duplications, etc.). ML models don't duplicate or photocopy training data; they "weight" (or, to use your word choice, "average") the data across a node structure. Other, more subjective copyright infringements are decided on a case-by-case basis, where an artist or entity has produced an "original" work that leans too heavily on a copyrighted one. It is clear that ML models aren't a straightforward duplication. If you asked a model to reproduce an existing image, it wouldn't be able to recreate it exactly, because that data isn't stored in the model, only approximate instructions for reproducing it. It might get close, especially if that example is well represented in the training set, but the image would be fundamentally "new" in the sense that it has not been copied pixel by pixel from an original, only recreated through averaging.
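The weights-versus-copies point can be illustrated with a toy sketch (this is a made-up example with made-up numbers, not a description of any real ML system): a model with far fewer parameters than data points cannot store its training data verbatim, only an approximation of it.

```python
# Toy "training data": five (x, y) points, roughly on a line but noisy.
points = [(0, 1.0), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]

# "Train" a two-parameter model y = a*x + b by ordinary least squares.
n = len(points)
sx = sum(x for x, _ in points)
sy = sum(y for _, y in points)
sxx = sum(x * x for x, _ in points)
sxy = sum(x * y for x, y in points)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# The "model" is just two numbers (a, b); ask it to reproduce the data.
reconstructed = [a * x + b for x, _ in points]
exact = all(abs(r - y) < 1e-9 for r, (_, y) in zip(reconstructed, points))
print(exact)  # False: the fit approximates the data, it does not duplicate it
```

The model gets close to every point, but the original y-values are simply not in it anymore, which is the sense in which weighting differs from copying. (Real models are vastly larger, and whether a specific work can be coaxed back out is exactly the case-by-case question.)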

If our concern is that AI could literally reproduce existing creative work and pass it off as original, then we should pursue legal action against those uses. But to claim that the model itself is an illegal duplication of copyrighted work is ridiculous. If our true concern is instead (as I think it is) that the use of ML models may supplant the need for paid artists or writers, then I would suggest we rethink how we structure compensation for labor, not simply place barriers to AI deployment. Even if we were to reach some compensation agreement for the use of copyrighted material in AI training data, that wouldn't prevent the elimination of artistic labor; it would only solidify AI as an elite, expensive tool owned by a handful of companies that can afford the cost. It would consolidate our economy further, not democratize it.

In my opinion, copyright law is already just a band-aid over a broader issue of labor relations, and AI training data is just a drastic expansion of that same wound.

[โ€“] 33KK@lemmy.blahaj.zone 1 points 1 year ago* (last edited 1 year ago)

My concern is that billions of works are being used for training with no consent and no regard for their licenses, and "the model learns" is not an excuse. If someone saved some of my content for personal use, sure, I don't mind that at all. But a huge-scale, for-profit scraping operation downloading all the content it physically can? Fuck off. I just blocked all the crawlers from ever accessing my websites (well, Google and Bing literally refuse to index my stuff properly anyway, so fuck them too; none of them even managed to read the sitemap properly, and it was definitely valid).
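For anyone wanting to do the same, the declarative version of this is a robots.txt disallow list. The user-agent tokens below are ones the crawler operators have published for AI scraping (compliance is entirely voluntary on their part, and the token list changes, so check the operators' current documentation rather than trusting this sketch):

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Bots that ignore robots.txt have to be blocked at the server or firewall level instead, e.g. by user-agent string or IP range.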