this post was submitted on 26 Jul 2023
483 points (96.0% liked)

Technology


Thousands of authors demand payment from AI companies for use of copyrighted works

Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

[–] charonn0@startrek.website 0 points 1 year ago (1 children)

Because word-use probabilities in a text are not the same thing as the information expressed by the text.

Any rule we make here should treat people who are animals and people who are computers the same.

W-what?

[–] planish@sh.itjust.works 0 points 1 year ago (1 children)

In the future, some people might not be human. Or some people might be mostly human, but use computers to do things like fill in for pieces of their brain that got damaged.

Some people can't recognize faces, for example, but computers are great at that now, and Apple has that thing that's like Google Glass but better. But a law that allowed facial recognition only when it's done with a brain, and banned doing it with a computer, would prevent that solution from working.

And currently there are a lot of people running around trying to legislate exactly how people's human bodies are allowed to work inside, over those people's objections.

I think we should write laws on the principle that anybody could be a human, or a robot, or a river, or a sentient collection of bees in a trench coat, that is 100% their own business.

[–] charonn0@startrek.website 2 points 1 year ago* (last edited 1 year ago) (1 children)

But the subject under discussion is large language models that exist today.

I think we should write laws on the principle that anybody could be a human, or a robot, or a river, or a sentient collection of bees in a trench coat, that is 100% their own business.

I'm sorry, but that's ridiculous.

[–] planish@sh.itjust.works 0 points 1 year ago (1 children)

I have indeed made a list of ridiculous and heretofore unobserved things somebody could be. I'm trying to gesture at a principle here.

If you can't make your own hormones, store-bought should be fine. If you are bad at writing, you should be allowed to use a computer to make you good at writing now. If you don't have legs, you should get to roll, and people should stop expecting you to have legs. None of these differences between people, or in the ways that people choose to do things, should really be important.

Is there a word for that idea? Is it just what happens to your brain when you try to read the Office of Consensus Maintenance Analog Simulation System?

[–] charonn0@startrek.website 1 points 1 year ago* (last edited 1 year ago) (1 children)

The issue under discussion is whether or not LLM companies should pay royalties on the training data, not the personhood of hypothetical future AGIs.

[–] planish@sh.itjust.works 1 points 1 year ago (1 children)

Why should they pay royalties for letting a robot read something that they wouldn't owe if a person read it?

[–] charonn0@startrek.website 2 points 1 year ago

It's not reading. It's word-probability analysis.
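The "word-probability analysis" being pointed at can be sketched with a toy bigram model. This is purely illustrative, not how any actual LLM works: real models use neural networks over learned token embeddings, not raw word-pair counts, but the underlying idea of "probability of the next word given what came before" is the same.

```python
from collections import Counter, defaultdict

# Toy "training corpus" (stand-in for copyrighted text; illustration only).
corpus = "the cat sat on the mat and the cat slept".split()

# Count word-pair frequencies: how often each word follows the previous one.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(prev):
    """Probability distribution over the next word, given the previous word."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# In the corpus, "the" is followed by "cat" twice and "mat" once,
# so the model assigns P(cat|the) = 2/3 and P(mat|the) = 1/3.
print(next_word_probs("the"))
```

The model retains statistics derived from the text, not the text's meaning, which is the distinction being argued about in this thread.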