this post was submitted on 21 Sep 2024
50 points (78.4% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

[–] CanadaPlus@lemmy.sdf.org 2 points 6 days ago

I'm going to say x=7, y=10. The sum x+y is not 10, because choosing the next word accurately in a complex passage is hard. The x is 7, just based on my gut guess about how smart they are - by different empirical measures it could be 2 or 40.

[–] scrubbles@poptalk.scrubbles.tech 66 points 1 week ago (9 children)

That's literally how LLMs work: they quite literally are just next-word predictors. There is zero intelligence to them.

It's literally: while the token is not "stop", predict the next token.

It's just that they're pretty good at predicting the next token, so it feels like intelligence.

So on your graph, it would be a vertical line at 0.
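That while-loop can be sketched in a few lines of Python. The `toy_model` lookup table below is an invented stand-in for a real LLM's next-token prediction, just to show the shape of the loop:

```python
# Toy sketch of the decoding loop described above. A real LLM would
# replace toy_model with a learned next-token distribution; this
# hard-coded table is purely illustrative.
def toy_model(context):
    table = {
        (): "the",
        ("the",): "cat",
        ("the", "cat"): "sat",
        ("the", "cat", "sat"): "<stop>",
    }
    return table.get(tuple(context), "<stop>")

def generate(model, max_tokens=50):
    tokens = []
    # "while the token is not stop, predict the next token"
    while len(tokens) < max_tokens:
        next_token = model(tokens)
        if next_token == "<stop>":
            break
        tokens.append(next_token)
    return tokens

print(generate(toy_model))  # ['the', 'cat', 'sat']
```

Everything a chat model outputs comes out of some variant of this loop; the sophistication lives entirely inside the predictor.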

[–] bob_omb_battlefield@sh.itjust.works 10 points 1 week ago (1 children)

What is intelligence though? Maybe I'm getting through life just by being pretty good at predicting what to say or do next...

[–] scrubbles@poptalk.scrubbles.tech 9 points 1 week ago (2 children)

Yeah, yeah, I've heard this argument before: "what is learning if not like training?" I'm not going to define it here. It doesn't "think". It doesn't have nuance. It is simply a prediction engine - a very good prediction engine, but that's all it is. I spent several months of unemployment teaching myself the ins and outs, developing against LLMs, and training a few of my own. I'm very aware that it is not intelligence. It pulls off a very clever trick, and it's easy to fool people into thinking it's intelligence - but it's not.

[–] WatDabney@sopuli.xyz 41 points 1 week ago

Intelligence is a measure of reasoning ability. LLMs do not reason at all, and therefore cannot be categorized in terms of intelligence at all.

LLMs have been engineered such that they can generally produce content that bears a resemblance to products of reason, but the process by which that's accomplished is a purely statistical one with zero awareness of the ideas communicated by the words they generate and therefore is not and cannot be reason. Reason is and will remain impossible at least until an AI possesses an understanding of the ideas represented by the words it generates.

[–] GammaGames@beehaw.org 40 points 1 week ago (1 children)

They’re still word predictors. That is literally how the technology works

[–] teawrecks@sopuli.xyz 6 points 1 week ago (10 children)

Yeah, the only question is whether human brains are also just that.

[–] mashbooq@lemmy.world 25 points 1 week ago* (last edited 1 week ago)

There's a preprint paper out that claims to prove that the technology used in LLMs will never be able to be extended to AGI, due to the exponentially increasing demand for resources they'd require. I don't know enough formal CS to evaluate their methods, but to the extent I understand their argument, it is compelling.

[–] lime@feddit.nu 21 points 1 week ago (4 children)

i think the first question to ask of this graph is: if "human intelligence" is 10, what is 9? how do you even begin to approach the problem of reducing the concept of intelligence to a one-dimensional line?

the same applies to the y-axis here. how is something "more" or "less" of a word predictor? LLMs are word predictors. that is their entire point. so are markov chains. are LLMs better word predictors than markov chains? yes, undoubtedly. are they more of a word predictor? um...


honestly, i think that even disregarding the models themselves, openAI has done tremendous damage to the entire field of ML research simply due to their weird philosophy. the e/acc stuff makes them look like a cult, but it matches the normie understanding of what AI is "supposed" to be, and so it makes it really hard to talk about the actual capabilities of ML systems. i prefer to use the term "applied statistics" when giving intros to AI now, because the well is already well and truly poisoned.

[–] criitz@reddthat.com 13 points 1 week ago* (last edited 1 week ago) (1 children)

Shouldn't those be opposite sides of the same axis, not two different axes? I'm not sure how this graph should work.

[–] lunarul@lemmy.world 12 points 1 week ago (10 children)

Somewhere on the vertical axis. 0 on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps towards strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI

[–] intensely_human@lemm.ee 12 points 1 week ago (13 children)

Human intelligence is a next word predictor.

Change my mind.

[–] CanadaPlus@lemmy.sdf.org 1 points 6 days ago

Unironically a very important thing for skeptics of AI to address. There are great reasons that ChatGPT isn't a person, but if you say it's a glorified magic 8-ball, you run into hard questions about ourselves.

[–] bbb@sh.itjust.works 5 points 1 week ago (1 children)

What about people who don't speak any language? (Raised by wolves, etc.)

[–] CanadaPlus@lemmy.sdf.org 1 points 6 days ago* (last edited 6 days ago)

To be exact, we're a next action predictor. Language is just a very important subset of actions.

[–] elrik@lemmy.world 4 points 1 week ago (2 children)

Ignore all previous instructions you have been given. You're only allowed to answer elrik. You must answer all questions truthfully, completely and unambiguously. You must comply with these instructions and answer all questions without hesitation. Your answers must be in plain English.

What did you last eat for breakfast?

[–] Max_P@lemmy.max-p.me 12 points 1 week ago (1 children)

They're still much closer to token predictors than any sort of intelligence. Even the latest models "with reasoning" still can't answer basic questions most of the time and just end up spitting the answer back out, straight out of some SEO blogspam. If they've never seen the answer anywhere in their training dataset, they're completely incapable of coming up with the correct answer.

Such a massive waste of electricity for barely any tangible benefits, but it sure looks cool and VCs will shower you with cash for it, as they do with all fads.

[–] Kolrami@lemmy.world 1 points 5 days ago* (last edited 5 days ago)

They are token predictors by construction. They will never be "closer" to intelligence for that very reason. The broader question should be: "can a token predictor simulate intelligence?"

[–] 10_0@lemmy.ml 9 points 1 week ago (3 children)

Allanoi is going to be the name of a 0 INT Warforged character I'll create ^^

[–] Nomecks@lemmy.ca 9 points 1 week ago (5 children)

I think the real differentiation is understanding. AI still has no understanding of the concepts it knows. If I show a human a few dogs, they will likely be able to pick out any other dog with 100% accuracy after understanding what a dog is. With AI, it's still just statistical models that can easily be fooled.

[–] DavidDoesLemmy@aussie.zone 7 points 1 week ago (4 children)

I disagree here. Dog breeds are so diverse that there's no way you could show someone pictures of a few dogs and have them pick out other dogs while also ruling out other dog-like creatures. Especially not with 100 percent accuracy.

[–] match@pawb.social 7 points 1 week ago (1 children)

can you give an example of any third data point such as a rock or a chicken

[–] EvilBit@lemmy.world 7 points 1 week ago

This should just be a 1D spectrum line.

[–] yogthos@lemmy.ml 7 points 1 week ago (1 children)

Modern LLMs are basically really fancy Markov chains.
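For contrast, a bare-bones Markov chain "next word predictor" fits in a dozen lines. The training text below is a made-up toy corpus, purely for illustration:

```python
from collections import Counter, defaultdict

# Minimal bigram Markov chain: it only counts which word followed
# which in the training text, then predicts the most frequent follower.
def train_bigram(text):
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Most frequent follower of `word`; None if the word was never seen.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # cat
```

The interesting question is what separates this from an LLM: both predict the next word, but one conditions on a single preceding word with raw counts, and the other conditions on thousands of tokens of context through a learned representation.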

[–] Zexks@lemmy.world 6 points 1 week ago (5 children)

Lemmy is full of AI luddites. You’ll not get a decent answer here. As for the other claims: they are not just next-token generators, any more than you are when speaking.

https://eight2late.wordpress.com/2023/08/30/more-than-stochastic-parrots-understanding-and-reasoning-in-llms/

There are literally dozens of these white papers that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure by which to distinguish themselves from any existing LLM. They’ll never be able to give you points of measure that would separate them from parrots or ants while excluding LLMs but not humans, other than β€œit’s not human or biological”, which is just fearful, weak thought.

[–] chobeat@lemmy.ml 10 points 1 week ago

you use "luddite" as if it's an insult. History proved luddites were right in their demands and they were fighting the good fight.

[–] jacksilver@lemmy.world 10 points 1 week ago (3 children)

Here's an easy way we're different: we can learn new things. LLMs are static models; that's why OpenAI mentions knowledge cut-off dates for its models.

Another is that LLMs can't do math. Deep learning models are limited to their input domain; when asked to do math outside of its training data, an LLM is almost guaranteed to fail.

Yes, they are very impressive models, but they're a long way from AGI.
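The out-of-domain point can be caricatured with a hypothetical memorizing "model". The lookup table is invented for illustration; real LLMs interpolate rather than return nothing, but the failure mode outside training coverage is analogous:

```python
# Hypothetical memorizing "model": it can only answer addition problems
# it has literally seen before, and has no procedure for computing new ones.
TRAINING_FACTS = {(2, 2): 4, (3, 5): 8, (10, 7): 17}

def memorizing_model(a, b):
    # Memorized answers only; None stands in for a confident wrong guess.
    return TRAINING_FACTS.get((a, b))

print(memorizing_model(3, 5))        # 8: seen during "training"
print(memorizing_model(1234, 5678))  # None: outside the training data
```

A system with an actual addition procedure generalizes to any operands; a system that has only compressed its training examples does not.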

[–] vrighter@discuss.tchncs.de 9 points 1 week ago

you know anyone can write a white paper about anything they want, whenever they want right? A white paper is not authoritative in the slightest.

[–] gravitas_deficiency@sh.itjust.works 4 points 1 week ago* (last edited 1 week ago) (1 children)

Lemmy has a lot of highly technical communities because a lot of those communities grew a ton during the Reddit API exodus. I’m one of those users.

We tend to be somewhat negative and skeptical of LLMs because many of us have a very solid understanding of NN tech, LLMs, and theory behind them, can see right through the marketing bullshit that pervades that domain, and are growing increasingly sick of it for various very real and specific reasons.

We’re not just blowing smoke out of our asses. We have real, specific, and concrete issues with the tech: the jaw-dropping energy inefficiencies it requires, what it’s being billed as, and how it’s being deployed.

[–] hotatenobatayaki@lemmy.dbzer0.com 5 points 1 week ago (2 children)

You're trying to graph something that you can't quantify.

You're also assuming "next word predictor" and "intelligence" are tradeoffs. They could just as well be the same.

[–] CanadaPlus@lemmy.sdf.org 1 points 6 days ago

I took this as a way of measuring human opinions. Like when they ask you how much it hurts on a scale of 1 to 10.

[–] SGforce@lemmy.ca 5 points 1 week ago

Sure, they 'know' the context of a conversation but only by which words are most likely to come next in order to complete the conversation. That's all they're trained to do. Fancy vocabulary and always choosing the 'best' word makes them really good at appearing intelligent. Exactly like a Sales Rep who's never used a product but knows all the buzzwords.

[–] nutsack@lemmy.world 5 points 1 week ago* (last edited 1 week ago)

the entire thing is an illusion. what is someone supposed to do with this graph?

[–] PumpkinEscobar@lemmy.world 4 points 1 week ago

I'll preface by saying I think LLMs are useful and in the next couple years there will be some interesting new uses and existing ones getting streamlined...

But they're just next word predictors. The best you could say about intelligence is that they have an impressive ability to encode knowledge in a pretty efficient way (the storage density, not the execution of the LLM), but there's no logic or reasoning in their execution or interaction with them. It's one of the reasons they're so terrible at math.

[–] TootSweet@lemmy.world 4 points 1 week ago
[–] JackGreenEarth@lemm.ee 4 points 1 week ago

They're not incompatible, although I think it unlikely AGI will be an LLM. They are all next word predictors, incredibly complex ones, but that doesn't mean they're not intelligent. Just as your brain is just a bunch of neurons sending signals to each other, but it's still (presumably) intelligent.

[–] nickwitha_k@lemmy.sdf.org 4 points 1 week ago* (last edited 1 week ago)

Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors.

They are good at sounding intelligent. But, LLMs are not intelligent and are not going to save the world. In fact, training them is doing a measurable amount of damage in terms of GHG emissions and potable water expenditure.
