this post was submitted on 26 Jul 2023
483 points (96.0% liked)

Thousands of authors demand payment from AI companies for use of copyrighted works

Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

you are viewing a single comment's thread
[–] linearchaos@lemmy.world 4 points 1 year ago (1 children)

I don't know how I feel about this, honestly. The AI took a look at the book and added the statistics of all of its words to its giant statistical database. It doesn't have a copy of the book. It's not capable of rewriting the book word for word.

This is basically what humans do. A person reads 10 books on a subject, becomes somewhat of a subject-matter expert, and writes their own book.

Artists use reference art all the time. As long as they don't get too close to the original reference, nobody raises any flags.

These people are scared for their viability in their own space, and they should be, but I don't think trying to put this genie back in the bottle, or charging extra for reading their stuff as reference, is going to make much difference.

[–] BartsBigBugBag@lemmy.tf -1 points 1 year ago (4 children)

It’s not at all like what humans do. It has no understanding of any concepts whatsoever; it learns nothing. It doesn’t even know that it doesn’t know anything. It’s literally incapable of basic reasoning. It has essentially taken words and converted them to numbers, and then it examines which string is likely to follow each previous string. When people are writing, they aren’t looking at a huge database of information and determining the most likely word to come next, they’re synthesizing concepts together to create new ones, or building a narrative based on their notes. They understand concepts, they understand definitions. An AI doesn’t; it doesn’t have any conceptual framework, it doesn’t even know what a word is, much less the definition of any of them.
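(For a rough sense of the "words to numbers, then predict the most likely next string" mechanism described above, here is a minimal sketch of a toy next-token predictor. The tiny corpus and simple counting are purely illustrative stand-ins, not how any particular model is actually implemented.)

```python
# A toy "next token" predictor: count which word follows which, then pick
# the statistically most likely continuation. Real LLMs learn these
# statistics with neural networks over vast corpora, but the basic
# objective -- predict the next token from what came before -- is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent continuation seen after `word`.
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' -- seen twice, vs 'mat'/'fish' once each
print(predict_next("cat"))   # 'sat' (ties broken by insertion order)
```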

[–] chicken@lemmy.dbzer0.com 1 points 1 year ago* (last edited 1 year ago)

> When people are writing, they aren’t looking at a huge database of information and determining the most likely word to come next, they’re synthesizing concepts together to create new ones, or building a narrative based on their notes. They understand concepts, they understand definitions.

A huge part of what we do is drawing from a huge mashup of accumulated patterns, though. When an image or phrase pops into your head fully formed, on the basis of things that you have seen and remembered, isn't that the same sort of thing as what AI does? Even though there are (poorly understood) differences between how humans think and what machine learning models do, the latter seems similar enough to me that most uses should be treated by the same standard for plagiarism: only considered a violation if the end product is excessively similar to a specific copyrighted work, and not merely because you saw a copyrighted work and the pattern it left in your brain affected what you spontaneously think of.

[–] oce@jlai.lu 1 points 1 year ago* (last edited 1 year ago) (1 children)

How can you tell that our thoughts don't come from a biological LLM? Maybe what we conceive of as "understanding" is just a feeling emerging from a more fundamental mechanism, like temperature emerges from the movement of particles.

[–] Telodzrum@lemmy.world 1 points 1 year ago

Because we have biological, developmental, and psychological science telling us that's not how higher-level thinking works. Human brains have the ability to function on a sort of autopilot similar to "AI", but that is not what we are describing when we speak of creative substance.

[–] planish@sh.itjust.works 1 points 1 year ago

I don't think this is true.

The models (or maybe the characters in the conversations simulated by the models) can be spectacularly bad at basic reasoning, and misunderstand basic concepts on a regular basis. They are of course completely insane; the way they think is barely recognizable.

But they also, when asked, are often able to manipulate concepts or do reasoning and get right answers. Ask it to explain the water cycle like a pirate, and you get that. You can find the weights that make the Eiffel Tower be in Paris and move it to Rome, and then ask for a train itinerary to get there, and it will tell you to take the train to Rome.

I don't know what "understanding" something is other than to be able to get right answers when asked to think about it. There's some understanding of the water cycle in there, and some of pirates, and some of European geography. Maybe not a lot. Maybe it's not robust. Maybe it's superficial. Maybe there are still several differences in kind between whatever's there and the understanding a human can get with a brain that isn't 100% the stream of consciousness generator. But not literally zero.

[–] Buttons@programming.dev 0 points 1 year ago (3 children)

I think you underestimate the reasoning power of these AIs. They can write code, they can teach math, they can even learn math.

I've been using GPT4 as a math tutor while learning linear algebra, and I also use a textbook. The textbook told me that (to write it out) "the column space of matrix A is equal to the column space of matrix A times its own transpose". So I asked GPT4 if that was true and it said no; GPT disagreed with the textbook. This was apparently something that GPT had not memorized, and it was not just regurgitating sentences. I told GPT I had seen it in a textbook, and the AI said "sorry, the textbook must be wrong". I then explained the mathematical proof to the AI, and the AI apologized, admitted it had been wrong, and agreed with the proof. Only after hearing the proof did the AI agree with the textbook. This is some pretty advanced reasoning.
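(For reference, the textbook identity does hold for real matrices: col(A) = col(AAᵀ). Below is a minimal numpy sanity check; the particular matrix is an arbitrary illustrative choice, not anything from the exchange above. Since col(AAᵀ) is always contained in col(A), equal ranks imply the two spaces coincide.)

```python
# Numerical sanity check of the claim col(A) == col(A @ A.T) for a real matrix A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
A[:, 2] = A[:, 0] + A[:, 1]          # make A rank-deficient on purpose

AAt = A @ A.T

# col(A @ A.T) ⊆ col(A), so matching ranks mean the column spaces are equal.
print(np.linalg.matrix_rank(A))                       # 2
print(np.linalg.matrix_rank(AAt))                     # 2
print(np.linalg.matrix_rank(np.hstack([A, AAt])))     # 2 -> the spans coincide
```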

I performed that experiment a few times and it played out mostly the same. I experimented with giving the AI a flawed proof (I purposely made mistakes in the mathematical proofs), and the AI would call out my mistakes and would not be convinced by faulty proofs.

A standard that judged this AI to have "no understanding of any concepts whatsoever" would also conclude the same thing if applied to most humans.

[–] unlimitedolm_sjw@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago)

That doesn't prove that GPT is reasoning; its model predicts that those responses are the most likely given the messages you're sending it. It's read thousands of actual conversations in which people state something incorrect, have it explained to them, and then come around and admit they were wrong.

I've seen other similar cases where the AI is wrong about something, and when the mistake is explained, it just doubles down, because humans do that type of thing too, refusing to admit they're wrong.

The way it's designed means that it cannot reason in the same way humans experience it. It can simulate a likely conversation someone would have if they could reason.

[–] Telodzrum@lemmy.world -1 points 1 year ago

It's just a really big autocomplete system. It has no thought, no reason, no sense of self or anything, really.

[–] foo@programming.dev -1 points 1 year ago (1 children)

They can write code and teach maths because they've read people doing the exact same stuff.

[–] Buttons@programming.dev 1 points 1 year ago* (last edited 1 year ago)

Hey, that's the same reason I can write code and do maths!

I'm serious, the only reason I know how to code or do math is because I learned from other people, mostly by reading. It's the only reason I can do those things.