this post was submitted on 26 Feb 2025
640 points (97.9% liked)

Programmer Humor

[–] x00z@lemmy.world 33 points 2 weeks ago (5 children)

Not to be that guy, but the image with all the train tracks might just be doing its job perfectly.

[–] turbodrooler@lemmy.world 29 points 2 weeks ago (1 children)

The one on the right prints “hello world” to the terminal

[–] Korhaka@sopuli.xyz 5 points 1 week ago

And takes 5 seconds to do it

[–] tiddy@sh.itjust.works 22 points 2 weeks ago (1 children)

Engineers love moving parts, known for their reliability and vigor

[–] Diurnambule@jlai.lu 5 points 2 weeks ago

Vigor killed me

[–] thedeadwalking4242@lemmy.world 6 points 2 weeks ago

"Might" is the important word here

[–] dustyData@lemmy.world 6 points 2 weeks ago (1 children)

It gives you the right picture when you asked for a single straight track in the prompt. Now you have to spend 10 hours debugging the code and fixing hallucinations of functions that don't exist in libraries it doesn't even need to import.

[–] Simmy@lemmygrad.ml 1 points 2 weeks ago (1 children)

Not a developer. I just wonder how AI hallucinations come about. Is it the 'need' to complete the requested task at the cost of being wrong?

[–] send_me_your_ink@lemmynsfw.com 2 points 2 weeks ago

Full disclosure - my background is in operations (think IT), not AI research. So some of this might be wrong.

What's marketed as AI is something called a large language model. This distinction is important because AI implies intelligence, whereas an LLM is something else. At a high level, LLMs use something called "tokens" to break natural language apart into elements a machine can understand, and then recombine those tokens to "create" something new. When an LLM is creating output, it does not know what it is saying - it only knows which token statistically comes after the token(s) it has already generated.

So to answer your question: an LLM can hallucinate because it does not know the answer - it's using advanced math to know that the period goes at the end of the sentence, and not in the middle.
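
To make that concrete, here's a toy sketch in Python (the probability table is hand-written for illustration, not any real model or tokenizer) of the "pick the statistically likely next token" loop described above:

```python
# Toy sketch of next-token generation: the "model" is just a lookup
# table of hypothetical probabilities, standing in for a neural network.
import random

# Hypothetical next-token probabilities, as if learned from training data.
NEXT_TOKEN_PROBS = {
    "hello": {"world": 0.8, "there": 0.2},
    "world": {".": 0.9, "!": 0.1},
    "there": {".": 1.0},
    ".": {"<end>": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 10) -> list[str]:
    """Extend a sequence one token at a time.

    The "model" never knows what it is saying; it only samples the
    next token from the distribution conditioned on the previous one.
    """
    tokens = [prompt_token]
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tokens[-1])
        if probs is None:
            break
        # Sample the next token according to its probability.
        choices, weights = zip(*probs.items())
        nxt = random.choices(choices, weights=weights, k=1)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("hello")))  # e.g. "hello world ."
```

A real LLM does this over a vocabulary of tens of thousands of tokens, with the probabilities computed by a neural network conditioned on the whole context, but the loop is the same: it only ever asks "what token comes next?"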

[–] Michal@programming.dev 4 points 2 weeks ago (1 children)

While being more complex and costly to maintain

[–] x00z@lemmy.world 5 points 2 weeks ago (1 children)

Depends on the use case. It's most likely at a train yard or train station.

[–] Michal@programming.dev 3 points 1 week ago

The image implies that the track on the left meets the use case criteria