this post was submitted on 16 Mar 2024
23 points (89.7% liked)

Ask Lemmy
LLMs are solving the MCAT, the bar exam, the SAT, etc. like they're nothing. At this point their performance is superhuman. However, they'll often trip on super simple common-sense questions, and they struggle with creative thinking.

Is this literally proof that standardized tests are not a good measure of intelligence?

[–] cynar@lemmy.world 0 points 8 months ago (1 children)

The key difference is that your thinking feeds into your word choice. You also know when to shut up and allow your brain to actually process.

LLMs are (very crudely) a lobotomised speech center. They can chatter and use words, but there is no support structure behind them. The only "knowledge" they have access to is embedded into their training data. Once that is done, they have no ability to "think" about it further. It's a practical example of a "Chinese Room" and many of the same philosophical arguments apply.

I fully agree that this is an important step toward a true AI. It's just a fragment, however. Just like 4 wheels and 2 axles don't make a car.

[–] steventrouble@programming.dev 1 points 8 months ago* (last edited 8 months ago)

Apologies if this comes off as rude, but as an engineer who works on reinforcement learning, it's upsetting when people make claims like this based on conjecture and a hand-wavey understanding of ML. Someday there will be goal-driven agents that can interact with the world, and those agents will be harmed by these kinds of misunderstandings of machine learning.

The key difference is that your thinking feeds into your word choice.

LLMs' thinking also feeds into their word choice. Where else would they be getting the words from, thin air? No, it's from billions of neurons doing what neurons do, thinking.

They can chatter and use words, but there is no support structure behind them.

What is a "support structure", in your mind? That's not a defined term in neuroscience, cognitive science, or ML, so it sounds to me like hand-waving.

The only “knowledge” they have access to is embedded into their training data.

LLMs can and do generalize beyond their training data; that's literally the whole point. Otherwise, they'd be useless.
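To be concrete about what "generalize" means here: the model performs well on inputs it never saw during training. A minimal toy sketch of how that's measured (a made-up curve-fitting example with invented data, nothing LLM-specific):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: noisy samples from y = 3x + 1.
x = rng.uniform(-5, 5, size=200)
y = 3.0 * x + 1.0 + rng.normal(scale=0.5, size=200)

# Hold out points the model never sees while fitting.
x_train, y_train = x[:150], y[:150]
x_test, y_test = x[150:], y[150:]

slope, intercept = np.polyfit(x_train, y_train, deg=1)

# Generalization = low error on the held-out points, not just the ones it was fit on.
test_mse = np.mean((slope * x_test + intercept - y_test) ** 2)
print(f"held-out MSE: {test_mse:.3f}")
```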

Once that is done, they have no ability to “think” about it further.

During training, the weights shaped by previous examples are revisited and recontextualized as new examples come in. This is what leads to generalization.
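As a stripped-down sketch of what that looks like mechanically (plain gradient descent on a toy linear model with invented data, not an actual LLM): every new batch of examples updates the same shared weights that earlier batches already shaped, which is why later data recontextualizes what was learned from earlier data.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # hypothetical target the toy data comes from
w = np.zeros(2)                  # shared weights, revisited on every batch

for step in range(1000):
    # A fresh batch of examples...
    X = rng.normal(size=(32, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=32)

    # ...nudges the *same* weight vector that all previous batches shaped.
    grad = X.T @ (X @ w - y) / len(y)
    w -= 0.05 * grad

print(w)  # ends up near true_w by repeatedly re-adjusting the shared weights
```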

It’s a practical example of a “Chinese Room” and many of the same philosophical arguments apply.

The Chinese Room is not a valid argument, because the same logic can be applied to any human other than yourself: from the outside, you can't prove that anyone else "really" understands rather than just manipulating symbols.