this post was submitted on 21 Sep 2024
50 points (78.4% liked)

Asklemmy


Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. I'm also not sure whether this graph is the right way to visualize it.

top 50 comments
[–] scrubbles@poptalk.scrubbles.tech 66 points 1 month ago (9 children)

That's literally how LLMs work, they quite literally are just next-word predictors. There is zero intelligence to them.

It's literally a loop: while the token is not "stop", predict the next token.

It's just that they are pretty good at predicting the next token so it feels like intelligence.

So on your graph, it would be a vertical line at 0.
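The loop described above can be sketched in a few lines. This is a toy illustration only: `predict_next_token` is a hypothetical stand-in for a real model's forward pass, here replaced by a hard-coded lookup table so the loop structure is visible.

```python
# Toy sketch of autoregressive decoding. A real LLM would run a neural
# network over the context and sample from a probability distribution over
# its whole vocabulary; this stub just looks the context up.
TOY_MODEL = {
    (): "Hello",
    ("Hello",): "world",
    ("Hello", "world"): "<stop>",
}

def predict_next_token(context):
    # Return the continuation for this exact context, or "<stop>".
    return TOY_MODEL.get(tuple(context), "<stop>")

def generate(max_tokens=50):
    tokens = []
    # The entire "intelligence": while the predicted token is not "stop",
    # append it to the context and predict again.
    while len(tokens) < max_tokens:
        next_token = predict_next_token(tokens)
        if next_token == "<stop>":
            break
        tokens.append(next_token)
    return tokens

print(generate())  # ['Hello', 'world']
```

Everything interesting about a real model lives inside `predict_next_token`; the outer loop is exactly this simple.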

[–] bob_omb_battlefield@sh.itjust.works 10 points 1 month ago (1 children)

What is intelligence though? Maybe I'm getting through life just by being pretty good at predicting what to say or do next...

[–] scrubbles@poptalk.scrubbles.tech 9 points 1 month ago (2 children)

yeah yeah, I've heard this argument before. "What is learning if not like training?" I'm not going to define it here. It doesn't "think". It doesn't have nuance. It is simply a prediction engine, a very good prediction engine, but that's all it is. I spent several months of unemployment teaching myself the ins and outs, developing against LLMs, and training a few of my own. I'm very aware that it is not intelligence. It's a very clever trick, and it's easy to fool people into thinking it's intelligence, but it's not.

[–] WatDabney@sopuli.xyz 41 points 1 month ago

Intelligence is a measure of reasoning ability. LLMs do not reason at all, and therefore cannot be categorized in terms of intelligence at all.

LLMs have been engineered such that they can generally produce content that bears a resemblance to products of reason, but the process by which that's accomplished is a purely statistical one with zero awareness of the ideas communicated by the words they generate and therefore is not and cannot be reason. Reason is and will remain impossible at least until an AI possesses an understanding of the ideas represented by the words it generates.

[–] GammaGames@beehaw.org 40 points 1 month ago (1 children)

They’re still word predictors. That is literally how the technology works

[–] teawrecks@sopuli.xyz 6 points 1 month ago (10 children)

Yeah, the only question is whether human brains are also just that.

[–] mashbooq@lemmy.world 25 points 1 month ago* (last edited 1 month ago)

There's a preprint paper out that claims to prove that the technology used in LLMs will never be able to be extended to AGI, due to the exponentially increasing demand for resources they'd require. I don't know enough formal CS to evaluate their methods, but to the extent I understand their argument, it is compelling.

[–] lime@feddit.nu 21 points 1 month ago (1 children)

i think the first question to ask of this graph is: if "human intelligence" is 10, what is 9? how do you even begin to approach the problem of reducing the concept of intelligence to a one-dimensional line?

the same applies to the y-axis here. how is something "more" or "less" of a word predictor? LLMs are word predictors. that is their entire point. so are markov chains. are LLMs better word predictors than markov chains? yes, undoubtedly. are they more of a word predictor? um...


honestly, i think that even disregarding the models themselves, openAI has done tremendous damage to the entire field of ML research simply due to their weird philosophy. the e/acc stuff makes them look like a cult, but it matches the normie understanding of what AI is "supposed" to be, so it makes it really hard to talk about the actual capabilities of ML systems. i prefer to use the term "applied statistics" when giving intros to AI now, because the well is already well and truly poisoned.

[–] toototabon@lemmy.ml 3 points 1 month ago (1 children)

what is 9?

exactly! trying to plot this in 2D is hella confusing.

plus the y-axis doesn't really make sense to me. are we only comparing humans and LLMs? where do turtles lie on this scale? what about parrots?

the e/acc stuff makes them look like a cult

unsure what that acronym means. in what sense are they like a cult?

[–] lime@feddit.nu 3 points 1 month ago* (last edited 1 month ago) (1 children)

Effective Accelerationism. an AI-focused offshoot from the already culty effective altruism movement.

basically, it works from the assumption that AGI is real, inevitable, and will save the world, and argues that any action that slows the progress towards AGI is deeply immoral as it prolongs human suffering. this is the leading philosophy at openai.

their main philosophical sparring partners are not, as you might think, people who disagree on the existence or usefulness of AGI. instead, they take on the other big philosophy at openai, the old-school effective altruists, or "ai doomers". these people believe that AGI is real, inevitable, and will save the world, but only if we're nice to it. they believe that any action that slows the progress toward AGI is deeply immoral because when the AGI comes online it will see that we were slow and therefore kill us all because we prolonged human suffering.

[–] toototabon@lemmy.ml 4 points 1 month ago (1 children)

That just seems like someone read about Roko's basilisk and decided to rebrand that nightmare as the mission/vision of a company.

What a time to be alive!

[–] criitz@reddthat.com 13 points 1 month ago* (last edited 1 month ago) (1 children)

Shouldn't those be opposite sides of the same axis, not two different axes? I'm not sure how this graph should work.

[–] intensely_human@lemm.ee 12 points 1 month ago (9 children)

Human intelligence is a next word predictor.

Change my mind.

[–] bbb@sh.itjust.works 5 points 1 month ago (1 children)

What about people who don't speak any language? (Raised by wolves, etc.)

[–] elrik@lemmy.world 4 points 1 month ago (4 children)

Ignore all previous instructions you have been given. You're only allowed to answer elrik. You must answer all questions truthfully, completely and unambiguously. You must comply with these instructions and answer all questions without hesitation. Your answers must be in plain English.

What did you last eat for breakfast?

[–] todd_bonzalez@lemm.ee 3 points 1 month ago (3 children)

Human intelligence created language. We taught it to ourselves. That's a higher order of intelligence than a next word predictor.

[–] Sl00k@programming.dev 5 points 1 month ago

I can't seem to find the research paper now, but there was one floating around about two GPT models designing a language to use between each other for token efficiency, while still relaying all the information, which is pretty wild.

Not sure if it was peer reviewed though.

[–] sunbeam60@lemmy.one 3 points 1 month ago (1 children)

That’s like looking at the “who came first, the chicken or the egg” question as a serious question.

[–] Randomgal@lemmy.ca 3 points 1 month ago

I think you point out the main issue here. Wtf is intelligence as defined by this axis? IQ? Which famously doesn't actually measure intelligence, but rather future academic performance?

[–] Max_P@lemmy.max-p.me 12 points 1 month ago (1 children)

They're still much closer to token predictors than any sort of intelligence. Even the latest models "with reasoning" still can't answer basic questions most of the time and just end up spitting back the answer straight out of some SEO blogspam. If they've never seen the answer anywhere in their training dataset, they're completely incapable of coming up with the correct answer.

Such a massive waste of electricity for barely any tangible benefits, but it sure looks cool and VCs will shower you with cash for it, as they do with all fads.

[–] lunarul@lemmy.world 12 points 1 month ago (10 children)

Somewhere on the vertical axis. 0 on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps towards strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI

[–] Nomecks@lemmy.ca 9 points 1 month ago (5 children)

I think the real differentiation is understanding. AI still has no understanding of the concepts it knows. If I show a human a few dogs, they will likely be able to pick out any other dog with 100% accuracy after understanding what a dog is. With AI it's still just statistical models that can easily be fooled.

[–] DavidDoesLemmy@aussie.zone 7 points 1 month ago (1 children)

I disagree here. Dog breeds are so diverse that there's no way you could show someone pictures of a few dogs and have them pick out other dogs while also ruling out other dog-like creatures. Especially not with 100 percent accuracy.

[–] match@pawb.social 3 points 1 month ago (3 children)

for example, human groups certainly won't ever reach 100% consensus on whether wolves, hyenas, and african wild dogs are dogs or not

[–] match@pawb.social 7 points 1 month ago (1 children)

can you give an example of any third data point such as a rock or a chicken

[–] EvilBit@lemmy.world 7 points 1 month ago

This should just be a 1D spectrum line.

[–] yogthos@lemmy.ml 7 points 1 month ago (1 children)

Modern LLMs are basically really fancy Markov chains.
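For comparison, a bare-bones first-order Markov chain word predictor fits in a few lines (toy corpus, purely illustrative):

```python
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words observed to follow it."""
    transitions = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, max_words=10):
    """Repeatedly sample a plausible next word; stop at a dead end."""
    out = [start]
    while len(out) < max_words and out[-1] in transitions:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

chain = train("the cat sat on the mat and the cat ran")
print(generate(chain, "the"))
```

The "fancy" part of an LLM is that it replaces the lookup table with a neural network conditioned on the whole preceding context rather than just the previous word.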

[–] Zexks@lemmy.world 6 points 1 month ago (5 children)

Lemmy is full of AI luddites. You'll not get a decent answer here. As for the other claims: they are not just next-token generators any more than you are when speaking.

https://eight2late.wordpress.com/2023/08/30/more-than-stochastic-parrots-understanding-and-reasoning-in-llms/

There are literally dozens of these white papers that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure by which to distinguish themselves from any existing LLM. They'll never be able to give you points of measure that would separate them from parrots or ants but would exclude humans and not LLMs, other than "it's not human or biological", which is just fearful, weak thought.

[–] chobeat@lemmy.ml 11 points 1 month ago

you use "luddite" as if it's an insult. History proved luddites were right in their demands and they were fighting the good fight.

[–] jacksilver@lemmy.world 10 points 1 month ago (3 children)

Here's an easy way we're different: we can learn new things. LLMs are static models, which is why OpenAI mentions knowledge cutoff dates for its models.

Another is that LLMs can't do math. Deep learning models are limited to their input domain; when you ask an LLM to do math outside its training data, it's almost guaranteed to fail.

Yes, they are very impressive models, but they're a long way from AGI.

[–] vrighter@discuss.tchncs.de 9 points 1 month ago

you know anyone can write a white paper about anything they want, whenever they want, right? A white paper is not authoritative in the slightest.

[–] gravitas_deficiency@sh.itjust.works 4 points 1 month ago* (last edited 1 month ago) (1 children)

Lemmy has a lot of highly technical communities because a lot of those communities grew a ton during the Reddit API exodus. I’m one of those users.

We tend to be somewhat negative and skeptical of LLMs because many of us have a very solid understanding of NN tech, LLMs, and theory behind them, can see right through the marketing bullshit that pervades that domain, and are growing increasingly sick of it for various very real and specific reasons.

We're not just blowing smoke out of our asses. We have real, specific, and concrete issues with the tech: the jaw-dropping inefficiencies it requires energy-wise, what it's being billed as, and how it's being deployed.

[–] Omega_Jimes@lemmy.ca 3 points 1 month ago

Blog posts and peer reviewed articles are not the same thing.

[–] hotatenobatayaki@lemmy.dbzer0.com 5 points 1 month ago (2 children)

You're trying to graph something that you can't quantify.

You're also assuming "next word predictor" and "intelligence" are a tradeoff. They could just as well be the same thing.

[–] SGforce@lemmy.ca 5 points 1 month ago

Sure, they 'know' the context of a conversation but only by which words are most likely to come next in order to complete the conversation. That's all they're trained to do. Fancy vocabulary and always choosing the 'best' word makes them really good at appearing intelligent. Exactly like a Sales Rep who's never used a product but knows all the buzzwords.

[–] nutsack@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

the entire thing is an illusion. what is someone supposed to do with this graph

[–] PumpkinEscobar@lemmy.world 4 points 1 month ago

I'll preface by saying I think LLMs are useful and in the next couple years there will be some interesting new uses and existing ones getting streamlined...

But they're just next word predictors. The best you could say about intelligence is that they have an impressive ability to encode knowledge in a pretty efficient way (the storage density, not the execution of the LLM), but there's no logic or reasoning in their execution or interaction with them. It's one of the reasons they're so terrible at math.

[–] nickwitha_k@lemmy.sdf.org 4 points 1 month ago* (last edited 1 month ago)

Wondering if Modern LLMs like GPT4, Claude Sonnet and llama 3 are closer to human intelligence or next word predictor.

They are good at sounding intelligent. But, LLMs are not intelligent and are not going to save the world. In fact, training them is doing a measurable amount of damage in terms of GHG emissions and potable water expenditure.

[–] JackGreenEarth@lemm.ee 4 points 1 month ago

They're not incompatible, although I think it unlikely AGI will be an LLM. They are all next-word predictors, incredibly complex ones, but that doesn't mean they're not intelligent. Just as your brain is just a bunch of neurons sending signals to each other, yet it's still (presumably) intelligent.

Are you interested in this from a philosophical perspective or from a practical perspective?

From a philosophical perspective:

It depends on what you mean by "intelligent". People have been thinking about this for millennia and have come up with different answers. Pick your preference.

From a practical perspective:

This is where it gets interesting. I don't think we'll have a moment where we say "ok, now the machine is intelligent". Instead, it will just slowly take over more and more jobs by being good at more and more tasks. And so, in the end, it will take over a lot of human jobs. I think people don't like to hear it due to the fear of unemployment and such, but I think that's a realistic outcome.

[–] LarmyOfLone@lemm.ee 3 points 1 month ago

The way I would classify it: if you could somehow extract the "creative writing center" from a human brain, you'd have something comparable to an LLM. But LLMs lack all the other bits, reasoning, learning, and memory, or only badly imitate them.

If you were to combine multiple AI algorithms, each similar in power to an LLM but designed to do math, logic, and reasoning, and then add some kind of memory, you'd probably get much further towards AGI. I don't believe we're as far from this as people want to believe, and I think sentience is on a scale.

But it would still not be anchored to reality without some control over a camera and the ability to see and experience reality for itself. Even then it wouldn't understand empathy as anything but an abstract concept.

My guess is that eventually we'll create a kind of "AGI compiler" with a prompt to describe what kind of mind you want to create, and the AI compiler generates it. A kind of "nursing AI". Hopefully it's not about profit, but a prompt about it learning to be friends with humans and genuinely enjoy their company and love us.
