this post was submitted on 24 Aug 2024
-23 points (17.1% liked)

science

Key points
  • The hippocampus enables abstract reasoning; LLMs mirror this through pattern-based language prediction.
  • Future AI could emulate human inference by integrating multimodal learning and reinforcement methods.
  • AI's evolution hinges on bridging prediction and reasoning, moving toward deeper, human-like understanding.
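The "pattern-based language prediction" in the first key point can be sketched in miniature: an LLM repeatedly samples the next token from a probability distribution conditioned on what came before. The toy bigram table and function names below are illustrative stand-ins, not anything from the article; a real model replaces the count table with a neural network's output logits.

```python
import random

# Toy next-token predictor: hand-written bigram counts stand in for a
# trained model. A real LLM derives these weights from billions of examples.
BIGRAMS = {
    "the": {"cat": 2, "dog": 1},
    "cat": {"sat": 3, "ran": 1},
}

def next_token(prev, table):
    """Sample the next token in proportion to its observed count."""
    options = table.get(prev)
    if not options:
        return None  # no continuation known for this context
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

random.seed(0)
print(next_token("the", BIGRAMS))  # "cat" or "dog", weighted 2:1
```

The point of the sketch is that nothing here "knows" facts or reasons about them; it only reproduces statistical regularities, which is the gap the bullet points say future systems would need to bridge.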
top 7 comments
[–] fart_pickle@lemmy.world 22 points 2 months ago
[–] halm@leminal.space 13 points 2 months ago

"Can LLMs think like us?"

No.

"Can LLMs think—?”

No.

"Can LLMs—?"

No.

[–] just_another_person@lemmy.world 7 points 2 months ago
[–] alyqz@lemmy.sdf.org 5 points 2 months ago

Facts, reasoning, ethics, etc. are outside the scope of an LLM. Expecting otherwise is like expecting a stand mixer to bake a cake. It is helpful for a decent part of the process, but it lacks the part where heat turns batter into a tasty dessert. An AI like one from the movies would require many more pieces than an LLM can provide, and saying otherwise is a category mistake*.

That isn't to say that something won't be developed eventually, but it would be FAR beyond an LLM if it is even possible.

(* See also: https://plato.stanford.edu/entries/category-mistakes/)

[–] lurch@sh.itjust.works 5 points 2 months ago

Not like us, but maybe like OP 🤣

[–] A_A@lemmy.world -1 points 2 months ago

"Can LLMs Think ?" YES "Like Us ?" NO ... not right now anyway.

[–] Zexks@lemmy.world -3 points 2 months ago

The fear in here is palpable.