this post was submitted on 21 Sep 2024
50 points (78.4% liked)

Asklemmy


Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.

[–] fartsparkles@sh.itjust.works 11 points 1 month ago (1 children)

None of those are intelligence, and all of them are geared toward predicting the next token.

All the models have a total reliance on data and structure for inference and prediction. They appear intelligent but they are not.
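The "predicting the next token" claim can be made concrete with a toy sketch. This is a hypothetical bigram model with made-up probabilities, not any real LLM, but the loop is the same shape: autoregressive generation is just repeated next-token prediction, each step conditioning on what was emitted so far.

```python
# Toy "model": hard-coded bigram probabilities standing in for learned
# token probabilities (purely illustrative, not a trained model).
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(token):
    """Pick the most likely continuation of `token` (greedy decoding)."""
    candidates = bigram_probs.get(token, {})
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(start, max_steps=5):
    """Generate text one token at a time, conditioning on the last token."""
    tokens = [start]
    for _ in range(max_steps):
        nxt = next_token(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # → "the cat sat down"
```

Real models condition on the whole context window rather than one token, and usually sample instead of taking the argmax, but the "predict, append, repeat" structure is the same.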

[–] webghost0101@sopuli.xyz -4 points 1 month ago* (last edited 1 month ago)

How is good old-fashioned code that compares outputs against a database of factual knowledge "predicting the next token" to you? Or reinforcement learning with token rewards baked into models?
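The first part of that point, checking model output against stored facts, is ordinary lookup code rather than token prediction. A minimal sketch with hypothetical names (the fact keys and `verify` helper are invented for illustration, not any real system's API):

```python
# Hypothetical knowledge base: key -> known-correct answer.
facts = {
    "capital_of_france": "Paris",
    "boiling_point_of_water_c": "100",
}

def verify(claim_key, model_output):
    """Return True if the model's answer matches the stored fact exactly."""
    expected = facts.get(claim_key)
    return expected is not None and model_output.strip() == expected

print(verify("capital_of_france", "Paris"))  # True
print(verify("capital_of_france", "Lyon"))   # False
```

Whether bolting such checks onto a model counts as "intelligence" is exactly what the thread is arguing about, but the checking step itself is deterministic code, not next-token prediction.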

I can tell you have not actually worked with professional AI or looked at the research papers.

Yes, none of it is "intelligent," but I would counter that neither are human beings.