I’m not sure we’re disagreeing very much, really.
My main point WRT “kinda” is that there are a tonne of applications that 99% isn’t good enough for.
For example, one use that all the big players in the phone world seem to be pushing ATM is that of sorting your emails for you. If you rely on that and it classifies an important email as unimportant so you miss it, then it’s actually a useless feature. Either you have to check all your emails manually yourself, in which case it’s quicker to just do that in the first place and the AI offers no value, or you rely on it and end up missing something important.
And it doesn’t matter if it only gets it wrong one time in a hundred; that one time is enough to completely negate all the potential positives of the feature.
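Quick back-of-the-envelope on that (numbers made up: ~50 emails a day, and treating each 1%-chance error as independent, which real classifiers aren’t):

```python
# Rough sketch: how fast a 1% per-email error rate compounds.
# The 50 emails/day and 1% figures are assumptions for illustration,
# and errors are treated as independent, which real classifiers aren't.
emails_per_day = 50
error_rate = 0.01

for days in (1, 7, 30):
    n = emails_per_day * days
    p_at_least_one_miss = 1 - (1 - error_rate) ** n
    print(f"{days:>2} day(s), {n:>4} emails: "
          f"P(at least one misclassified) = {p_at_least_one_miss:.1%}")
```

At those assumed volumes, it’s roughly a 40% chance of a miss on day one and all but guaranteed within the week.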
As you say, 100% isn’t really possible.
I think where it’s useful is for things like analysing medical data and helping coders who know what they’re doing with their work. In terms of search it’s also good at “what’s the name of that thing that’s kinda like this?”-type queries. Kind of the opposite of traditional search engines, where you’re trying to find information about a specific thing, and where I think non-Google engines are still better.
Your example of catastrophic failure is... e-mail? Spam filters are wrong all the time, and they're still fantastic. Glancing in the folder for rare exceptions is cognitively easier than categorizing every single thing one-by-one.
If there's one false negative, you don't go "Holy shit, it's the actual prince of Nigeria!"
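Napkin math on why glancing still wins (every number here is invented for illustration, not measured):

```python
# Toy comparison of review effort with vs. without a spam filter.
# All figures (volumes, per-email seconds) are assumptions for illustration.
total_emails = 200     # emails per day
spam_fraction = 0.6    # fraction that is junk
triage_seconds = 10    # seconds to classify one inbox email by hand
glance_seconds = 2     # seconds to skim one filtered message for false negatives

spam = total_emails * spam_fraction
ham = total_emails - spam

manual = total_emails * triage_seconds               # triage everything yourself
with_filter = ham * triage_seconds + spam * glance_seconds  # read inbox, skim spam folder

print(f"manual triage: {manual / 60:.0f} min/day")
print(f"with filter:   {with_filter / 60:.0f} min/day")
```

Even with the occasional wrong call, skimming the folder for exceptions costs a fraction of categorizing everything one-by-one.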
But sure, let's apply flawed models somewhere safe, like analyzing medical data. What?
Obviously fucking not.
Even in car safety, a literal life-and-death context, a camera that beeps when you're about to screw up catches plenty of cases where you'd otherwise guess wrong. Yeah - if you straight-up do not look, and blindly trust the beepy camera, bad things will happen. That's why you have the camera and look.
If a single fuckup renders the whole thing worthless, I have terrible news about human programmers.