this post was submitted on 17 Dec 2024
197 points (100.0% liked)

TechTakes

1491 readers
47 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
[–] 9point6@lemmy.world 2 points 1 week ago (74 children)

After all, there's almost nothing that ChatGPT is actually useful for.

It's takes like this that just discredit the rest of the text.

You can dislike LLM AI for its environmental impact or questionable interpretation of fair use when it comes to intellectual property. But pretending it's actually useless just makes someone seem no different from a Drama YouTuber jumping on whatever the latest on-trend thing to hate is.

[–] spankmonkey@lemmy.world 45 points 1 week ago (5 children)

"Almost nothing" is not the same as "actually useless". The former is saying the applications are limited, which is true.

LLMs are fine for fictional interactions, as in things that appear to be real but aren't. They suck at anything that involves being reliably factual, which is most things, including all the stupid places LLMs and other AI are being jammed into despite being consistently wrong, which tech bros love to call hallucinations.

They have LIMITED applications, but are being implemented as useful for everything.

[–] Amoeba_Girl@awful.systems 29 points 1 week ago* (last edited 1 week ago) (4 children)

To be honest, as someone who's very interested in computer generated text and poetry and the like, I find generic LLMs far less interesting than more traditional markov chains because they're too good at reproducing clichés to the exclusion of anything surprising or whimsical. So I don't think they're very good for the unfactual either. Probably a homegrown neural network would have better results.
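(For readers unfamiliar with the reference: the "traditional markov chains" the commenter means are simple word-frequency models. A minimal sketch, with hypothetical function names, of a word-level Markov chain text generator — the kind whose output is erratic in the way the commenter finds more interesting than LLM output:)

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain from a random starting key, sampling followers."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: the last words were never a key
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Because the model only knows local word-to-word transitions, it happily produces grammatical nonsense rather than the statistically smoothed prose an LLM defaults to.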

[–] dgerard@awful.systems 19 points 1 week ago (1 children)

GPT-2 was peak LLM because it was bad enough to be interesting, it was all downhill from there

[–] Amoeba_Girl@awful.systems 13 points 1 week ago

Absolutely, every single one of these tools has got less interesting as they refine it so it can only output the platonic ideal of kitsch.
