this post was submitted on 27 Jan 2024
0 points (50.0% liked)

Arch Linux


In my case, there are 95 packages that depend on zlib, so removing it is absolutely the last thing you want to do. Fortunately, though, GPT also suggested refreshing the GPG keys, which did solve the update problem I was having.
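For anyone curious, a minimal sketch of the commands involved (assuming an Arch system, with pacman-contrib installed for pactree; this is the usual keyring fix, not necessarily what GPT suggested verbatim):

    # List the packages that depend on zlib (reverse dependency tree)
    pactree -r zlib

    # Or just check the "Required By" field
    pacman -Qi zlib

    # Refresh the GPG keys pacman uses to verify packages
    sudo pacman-key --refresh-keys

    # Common alternative fix: update the keyring package first, then upgrade
    sudo pacman -Sy archlinux-keyring && sudo pacman -Su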

You gotta be careful with that psycho!

all 9 comments
[–] okamiueru@lemmy.world 3 points 8 months ago (1 children)

Filed under: "LLMs are designed to make convincing sentences. Language models should not be used as knowledge models."

I wish I got a dollar every time someone shared their surprise at something an LLM said that was factually incorrect. I wouldn't need to work a day.

[–] Hamartiogonic@sopuli.xyz -1 points 8 months ago* (last edited 8 months ago) (1 children)

People expect a language model to be really good at other things besides language.

If you’re writing an email where you need to express a particular thought or a feeling, ask some LLM what would be a good way to say it. Even though the suggestions are pretty useful, they may still require some editing.

[–] kureta@lemmy.ml 1 points 8 months ago

This use case and asking for information are completely different things. It can stylize some input perfectly fine; it just can't be a source of accurate information. It is trained to generate text that sounds plausible.

There are already ways to get around that, even though they aren't perfect. You can give it a source of truth and ask it to answer using only the information found there, as in the sketch below. Even then, you should check the accuracy of its responses.
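For example, a minimal sketch of such a grounded prompt (a plain illustration, not tied to any particular tool; the documentation placeholder and question are hypothetical):

    # Build a "grounded" prompt: the model may only use the supplied text.
    prompt=$(cat <<'EOF'
    Answer using ONLY the documentation below.
    If the answer is not in it, say "I don't know".

    --- documentation ---
    <paste the relevant wiki page or man page here>

    --- question ---
    Why does pacman report "invalid or corrupted package (PGP signature)"?
    EOF
    )
    printf '%s\n' "$prompt"   # send this to whichever LLM interface you use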

[–] FluffyPotato@lemm.ee 2 points 8 months ago (1 children)

Oh, yeah, it has a habit of pretending to know things. For example, I work with a lot of proprietary software that has little public documentation, and when I ask GPT about it, it will absolutely pretend to know it and give nonsensical advice.

[–] Hamartiogonic@sopuli.xyz 1 points 8 months ago

GPT is riding the highest peak of the Dunning-Kruger curve. It has no idea how little it really knows, so it just says whatever comes first. We’re still pretty far from having an AI capable of thinking before speaking.

[–] moreeni@lemm.ee 1 points 8 months ago

Not copy-pasting random commands you aren't 100% sure about is basic terminal literacy.

[–] JPSound@lemmy.world 1 points 8 months ago

I recently asked ChatGPT "what's a 5-letter word for a purple flower?" It confidently responded "Violet", which has six letters. There's no surprise it gets far more complex questions wrong.

[–] Bourff@lemmy.world 1 points 8 months ago

If you blindly follow whatever it tells you, you deserve whatever happens to you and your computer.