this post was submitted on 02 Aug 2023
334 points (93.9% liked)


Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

[–] Womble@lemmy.world 3 points 1 year ago (1 children)

That's not 100% true. They also work by modifying the meanings of words based on context, and those modified meanings then propagate indefinitely forwards. But yes, direct context is limited, so things outside it aren't directly used.

[–] Zeth0s@lemmy.world 1 points 1 year ago (1 children)

They don't really change the meaning of the words; they just look for the "best" words given the recent context, taking into account the different possible meanings of the words.

[–] Womble@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

No, they do. That's one of the key innovations of LLMs: the attention and feed-forward steps, where they propagate information from related words into each other based on context. From https://www.understandingai.org/p/large-language-models-explained-with?r=cfv1p

For example, in the previous section we showed a hypothetical transformer figuring out that in the partial sentence “John wants his bank to cash the,” his refers to John. Here’s what that might look like under the hood. The query vector for his might effectively say “I’m seeking: a noun describing a male person.” The key vector for John might effectively say “I am: a noun describing a male person.” The network would detect that these two vectors match and move information about the vector for John into the vector for his.
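That query/key matching can be sketched as scaled dot-product attention. The embeddings below are made-up toy values, not anything from a real model; the point is only to show the mechanism: because the query vector for "his" aligns with the key vector for "John", the softmax weights pull John's value vector into the updated representation of "his".

```python
import numpy as np

def attention(queries, keys, values):
    """One scaled dot-product attention step over a short sequence."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)           # query/key match scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ values                          # blend value vectors by weight

# Hypothetical 2-D vectors for the words ["John", "wants", "his"].
q = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 0.9]])  # "his" queries: male person?
k = np.array([[1.0, 1.0], [0.1, 0.0], [0.0, 0.1]])  # "John" keys:   male person!
v = np.array([[5.0, 5.0], [1.0, 1.0], [0.0, 0.0]])  # info each word carries

out = attention(q, k, v)
# Row 2 (the updated vector for "his") is now dominated by John's value
# vector: information about "John" has propagated into "his".
```

Real transformers do this with learned projection matrices, many heads, and hundreds of dimensions, but the flow of information between context-related words is the same.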

[–] Zeth0s@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

That's exactly what I said

They don't really change the meaning of the words; they just look for the "best" words given the recent context, taking into account the different possible meanings of the words.

The words' meanings haven't changed, but the model can choose based on the context, accounting for the different meanings of words.