this post was submitted on 23 Mar 2024
644 points (98.5% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

[–] Spesknight@lemmy.world 90 points 7 months ago (1 children)

Computers don't do what you want, they do what you tell them to do.

[–] Aurenkin@sh.itjust.works 48 points 7 months ago (1 children)

Exactly, they're passive-aggressive af

[–] phorq@lemmy.ml 20 points 7 months ago (1 children)

I wouldn't call them passive, they do too much work. More like aggressively submissive.

[–] GregorGizeh@lemmy.zip 19 points 7 months ago* (last edited 7 months ago)

Maliciously compliant perhaps

They do what you tell them, but only exactly what and how you tell them. If you leave any uncertainty, chances are it will fuck up the task.

[–] Aurenkin@sh.itjust.works 51 points 7 months ago (1 children)

Stupid code! Oh, looks like this was my fault again... this time.

[–] ObviouslyNotBanana@lemmy.world 14 points 7 months ago (1 children)

Must've been ChatGPT's fault

[–] BeigeAgenda@lemmy.ca 8 points 7 months ago

My experience is that if you don't know exactly what code the AI should output, it's just Stack Overflow with extra steps.

Currently I'm using a 7B model, so that could be why?

[–] IWantToFuckSpez@kbin.social 23 points 7 months ago* (last edited 7 months ago) (1 children)

Yeah but have you ever coded shaders? That shit's magic sometimes. Also a pain to debug: you have to look at colors, or sometimes millions of numbers through a frame analyzer, to see what you did wrong. Can't print messages to a log.
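(The usual workaround for the no-logging problem is exactly the "look at colors" trick mentioned above: write the value you're suspicious of into the output color and eyeball it. A minimal GLSL sketch, where `suspect` stands in for whatever intermediate value you're actually debugging:)

```glsl
#version 330 core
in vec2 uv;            // interpolated texture coordinate from the vertex stage
out vec4 fragColor;

void main() {
    // Stand-in for the value under investigation (e.g. a lighting term).
    float suspect = length(uv);

    // No printf in a shader, so clamp the value into [0, 1] and
    // display it as grayscale; wrong values show up as wrong shades.
    fragColor = vec4(vec3(clamp(suspect, 0.0, 1.0)), 1.0);
}
```

(For values outside [0, 1] people often scale or split them across the R/G/B channels instead; tools like RenderDoc then let you read the exact per-pixel numbers back.)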

[–] CanadaPlus@lemmy.sdf.org 2 points 7 months ago

Can't you run it on an emulator for debugging, Valgrind-style?

[–] penquin@lemmy.kde.social 6 points 7 months ago

Me looking at my unit tests failing one after another.