this post was submitted on 28 Feb 2025
8 points (90.0% liked)

[–] oakey66@lemmy.world 3 points 1 week ago (3 children)

AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There's no thought or reasoning. They don't understand inputs. They mimic human speech. They're not presenting anything meaningful.

[–] Jesus_666@lemmy.world 0 points 1 week ago (1 children)

That undersells them slightly.

LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They're good at that. Need something summarized? They can do that, too. Need a question answered? No can do.

LLMs can't generate answers to questions; they can only generate text that looks like answers to questions. Often enough that answer is even correct, if usually suboptimal. But they'll just as happily generate complete bullshit, and to them there's no difference between that and a real answer.
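The "guessing the next word" mechanism the thread keeps coming back to can be sketched in a few lines. This is a toy illustration, not a real LLM: the prompt and the probabilities below are invented by hand, and the point is only that the model scores plausibility, with no fact-checking step anywhere in the loop.

```python
import random

# Hand-made next-token probabilities for the hypothetical prompt
# "The capital of France is". A real model derives these from
# learned weights, but the principle is the same: score likely
# continuations, not true ones.
next_token_probs = {
    "Paris": 0.80,    # the plausible continuation is usually also correct...
    "Lyon": 0.15,     # ...but a wrong answer is scored the same way,
    "purple": 0.05,   # just lower -- there is no "truth check" anywhere.
}

def sample_next(probs, temperature=1.0):
    """Sample a next token; higher temperature flattens the distribution,
    making the low-probability (often wrong) tokens more likely."""
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = random.uniform(0, sum(weights.values()))
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point rounding

print(sample_next(next_token_probs))        # usually "Paris", sometimes not
print(sample_next(next_token_probs, 5.0))   # hotter sampling: more nonsense
```

Whichever token comes out, the procedure is identical; "correct" and "bullshit" differ only in their sampled weight.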

They're text transformers marketed as general problem solvers because a) the market for text transformers isn't that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.

[–] Blue_Morpho@lemmy.world 0 points 1 week ago (1 children)
[–] CarbonBasedNPU@lemm.ee 0 points 1 week ago (1 children)

They make shit up fucking constantly. If I have to google whether the answer I was given was right, I might as well cut out the middleman and just google it myself. If I can't understand the result at that point, maybe then ask the LLM to rephrase it.

[–] Blue_Morpho@lemmy.world -1 points 1 week ago

You missed the part where DeepSeek uses a separate inference engine to take the LLM output and reason through it to see if it makes sense.

No, it's not perfect. But it isn't just predicting text the way AI was a couple of years ago.

[–] raspberriesareyummy@lemmy.world 0 points 1 week ago (1 children)

I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.

[–] Opinionhaver@feddit.uk 0 points 1 week ago (2 children)

pretending LLMs are AI

LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.

However, AI itself doesn't imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.

[–] raspberriesareyummy@lemmy.world 0 points 1 week ago (1 children)

Here we go... Fanperson explaining the world to the dumb lost sheep. Thank you so much for stepping down from your high horse to try and educate a simple person. /s

[–] Opinionhaver@feddit.uk 0 points 1 week ago* (last edited 1 week ago) (1 children)

How's insulting the people respectfully disagreeing with you working out so far? That ad-hominem was completely uncalled for.

[–] raspberriesareyummy@lemmy.world -1 points 1 week ago

"Fanperson" is an insult now? Cry me a river, snowflake. Also, you weren't disagreeing, you were explaining something to someone perceived less knowledgeable than you, while demonstrating you have no grasp of the core difference between stochastics and AI.

[–] jenesaisquoi@feddit.org 0 points 1 week ago (1 children)

If a basic chess engine is AI then bubble sort is too

[–] Opinionhaver@feddit.uk 0 points 1 week ago (1 children)

It's not. Bubble sort is a purely deterministic algorithm with no learning or intelligence involved.

[–] jenesaisquoi@feddit.org 0 points 1 week ago (1 children)

Many chess engines run on deterministic algos as well

[–] Opinionhaver@feddit.uk 0 points 1 week ago (1 children)

Bubble sort is just a basic set of steps for sorting numbers - it doesn’t make choices or adapt. A chess engine, on the other hand, looks at different possible moves, evaluates which one is best, and adjusts based on the opponent’s play. It actively searches through options and makes decisions, while bubble sort just follows the same repetitive process no matter what. That’s a huge difference.

[–] jenesaisquoi@feddit.org 1 points 1 week ago

Your argument reduces to saying that if an algorithm comprises many steps, it's AI, and if it doesn't, it isn't.

A chess engine decides nothing. It understands nothing. It's just an algorithm.

[–] biggerbogboy@sh.itjust.works -1 points 1 week ago* (last edited 1 week ago)

My favourite comparison for LLMs is autocorrect: it just guesses, it gets stuff wrong, and it's constantly being retrained to recognise your preferences, such as eventually learning not to correct fuck to duck.

And it's funny and sad how some people think these LLMs are their friends. Like, no, it's a colossally sized autocorrect system that you cannot comprehend; it has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
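The "autocorrect that learns your preferences" idea can be sketched as a bigram counter: remember which word you actually typed after each word, then predict the most frequent follower. Everything here (the class name, the training phrases) is invented for illustration; real keyboard autocorrect and real LLMs are vastly bigger, but the guess-from-frequency principle is the same.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy word predictor: counts observed (word, next word) pairs."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text):
        # "Retraining on your preferences": every phrase you type
        # updates the pair counts.
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word):
        # Pure frequency lookup -- no understanding, just a guess.
        followers = self.counts[word.lower()]
        return followers.most_common(1)[0][0] if followers else None

p = BigramPredictor()
p.train("well that sucks")
p.train("that sucks so much")
print(p.predict("that"))  # "sucks" -- it guesses, it doesn't understand
```

Swap the counter for billions of learned weights and the single previous word for a long context window, and you have the scaled-up version the comment is describing.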