this post was submitted on 06 Jun 2024

Europe

top 2 comments
[–] jmcs@discuss.tchncs.de 17 points 5 months ago (1 children)

Still? LLM hallucinations are unavoidable, so OpenAI's ability to comply with the law is about the same as a Mexican drug cartel's.

[–] rufus@discuss.tchncs.de 2 points 5 months ago* (last edited 5 months ago)

Well, that paper only says it's theoretically not possible to completely eliminate hallucination. That doesn't mean it can't be mitigated and reduced to the point of insignificance. I think fabricating things is part of creativity; LLMs are supposed to come up with new text, after all. But maybe they're not really incentivised to differentiate between fact and fiction. They have been trained on fictional content, too. I think the main problem is controlling when to stick close to the facts and when to be creative. Sure, I'd agree that we can't make them infallible, but there's probably quite some room for improvement. (And I don't really agree with the paper's premise that it's caused solely by shortcomings in the training data. It's an inherent problem of being creative, and of the world consisting of fiction, opinions and so much more than factual statements... But training data quality and bias also have a significant effect.)

That paper is interesting. Thanks!

But I really fail to grasp the diagonal argument. Can we really choose the ground truth function f arbitrarily? Doesn't that just mean that, given arbitrary realities, there isn't a hallucination-free LLM in all of them? I don't really care whether there's a world where 1+1=2 and simultaneously 1+1=3, and whether there can be an LLM telling the "truth" in that world... I think they need to narrow down f. To me, a reality needs to fulfil certain requirements, like being contradiction-free. And they'd need to prove that Cantor's argument still applies to that restricted subset of functions f.
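If I've understood it right, the diagonalisation can be sketched roughly like this (my own notation and paraphrase, not the paper's exact formalism):

```latex
% Let $h_1, h_2, \dots$ be a computable enumeration of all candidate LLMs,
% and $s_1, s_2, \dots$ an enumeration of all input strings.
% Construct a ground-truth function $f$ that disagrees with each $h_i$ on $s_i$:
\[
  f(s_i) \;:=\; \text{some answer with } f(s_i) \neq h_i(s_i)
\]
% Then every $h_i$ differs from $f$ on at least one input (namely $s_i$),
% i.e. every enumerated LLM ``hallucinates'' with respect to this $f$.
```

The catch is that this constructed f is whatever the diagonal forces it to be, with no requirement that it describe a consistent or physically sensible reality.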

And secondly: why does the LLM need to decide between true and false? Can't it just say "I don't know"? I think that would immediately ruin their premise, too, because they only look at LLMs that never refuse and always have to commit to a truth value.
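Abstention seems easy enough to model formally by extending the output space (again, my own sketch, not something from the paper):

```latex
% Add a special symbol $\bot$ (``I don't know'') to the model's output alphabet,
% and count a hallucination only when the model commits to a wrong answer:
\[
  h \text{ hallucinates on } s \;\iff\; h(s) \neq \bot \;\wedge\; h(s) \neq f(s)
\]
% Under this definition the trivial model $h(s) = \bot$ for all $s$ never
% hallucinates, so a diagonal argument only goes through for models that
% are forced to always commit.
```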

I think this is more related to Gödel's incompleteness theorem, which somehow isn't mentioned in the paper. I'm not a proper scientist and didn't fully understand the proof, so I might be wrong about all of this. But it doesn't feel correct to me. And the paper hasn't been cited or peer-reviewed (as of now), so it's more like just their opinion anyway. I'd say (if their maths is correct) they've merely proved that there can't be an LLM that knows everything in every possible and impossible world. That doesn't quite apply, because LLMs that don't know everything are useful, too. And we're concerned with one specific reality here, one with some constraints: physics, objectivity, consistency.