this post was submitted on 16 Aug 2024
67 points (100.0% liked)

top 15 comments
[–] lvxferre@mander.xyz 24 points 3 months ago

That's a good text. I've been comparing those "LLM smurt!" crowds to Christian evangelists, due to their shared use of fallacies: inversion of the burden of proof, moving the goalposts, straw men, etc.

However, it seems that people who believe in psychics might be a more accurate comparison.

That said, LLMs are great tools for retrieving info when you aren't too concerned about accuracy, or when you can check the accuracy yourself. For example, the ChatGPT output of prompts like

  • "Give me a few [language] words that can be used to translate the [language] word [word]"
  • "[Decline|Conjugate] the [language] word [word]"
  • "Spell-check the following sentence: [sentence]"

is really good (a scriptable version is sketched below). I'm still concerned about the sheer energy inefficiency of the process, though.
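
For anyone who'd rather script those lookups than paste them into the web UI, here's a rough sketch using the OpenAI Python client; the model name, temperature, and example prompt are all illustrative placeholders, not a recommendation:

    # pip install openai; expects OPENAI_API_KEY in the environment
    from openai import OpenAI

    client = OpenAI()

    def quick_lookup(prompt: str) -> str:
        # One-shot, low-temperature query: fine for "give me candidate
        # translations" questions where you can verify the answer yourself.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat model works here
            messages=[{"role": "user", "content": prompt}],
            temperature=0.2,
        )
        return response.choices[0].message.content

    print(quick_lookup(
        "Give me a few Portuguese words that can be used to translate "
        "the English word 'get'"))

It inherits the same caveat as the web UI: the output is plausible, not guaranteed correct, so it only suits lookups you can verify yourself.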

[–] snooggums@midwest.social 21 points 3 months ago (1 children)

This is absolutely in line with who buys into AI hype, and why it is infuriating to try to convince them that they are reading way too much into how it seems to know things, when all it is doing is returning results that are statistically likely to be found helpful by the audience it is designed for.

I have said that LLMs and other AI are designed to return what people want to see/hear. They don't know anything, and they will never be useful as a knowledge base or as an independently functioning diagnostic tool.

It certainly has uses, but it isn't going to solve all the things promised by the AI hype train.

[–] MagicShel@programming.dev 8 points 3 months ago (2 children)

I don't buy into it, but it's so quick and easy to get an answer that, if it's not something important, I'm guilty of using an LLM and calling it good enough.

There are no ads and no SEO. Yeah, it might very well be bullshit, but most Google results are also bullshit, depending on the subject. If it doesn't matter, and it isn't easy to tell whether I'm getting bullshit from a website anyway, an LLM is good enough.

I took a picture of discolorations on a sidewalk and asked ChatGPT what was causing them, because my daughter was curious. Its answer: metal left on the surface rusts and leaves behind those streaks. But they all had holes in the middle, so we decided there were metallic rocks mixed into the surface that had rusted away.

Is that for sure right? I don't know. I don't really care. My daughter was happy with an answer and I've already warned her it could be bullshit. But curiosity was satisfied.

[–] Gaywallet@beehaw.org 17 points 3 months ago (1 children)

Is that for sure right? I don’t know. I don’t really care. My daughter was happy with an answer and I’ve already warned her it could be bullshit. But curiosity was satisfied.

I'm not sure if you recognize this, but this is precisely how mentalism, psychics, and others in similar fields have always existed! Look no further than Pliny the Elder or Rasputin for folks who made a career out of magical and mystical explanations for everything and gained great status for it. ChatGPT is in many ways the modern version of these individuals, gaining status for having answers to everything that seem plausible enough.

[–] MagicShel@programming.dev 6 points 3 months ago* (last edited 3 months ago)

She knows not to trust it. If the AI had suggested "God did it" or other metaphysical bullshit, I'd reevaluate. But I'm not sure how to even describe that question to a Google search. Sending a picture and asking about it is really fucking easy. Important answers aren't easy.

I mean, I agree with you. It's bullshit and untrustworthy. We have conversations about this, lots of them actually, because I caught her using it to cheat at school, so there's a lot of supervision and a lot of talk about what is and isn't an appropriate use, and about how we can inadvertently bias it with the questions we ask. It's actually a great tool for learning skepticism.

But for some things, a reasonable answer just to satisfy your brain is fine, whether it's right or not. I remember spending an entire year in chemistry learning absolute bullshit, only to be told the next year that it was all garbage and here's how it really works. It's fine.

[–] snooggums@midwest.social 11 points 3 months ago* (last edited 3 months ago)

Yes, treating AI answers with the same skepticism as web search results is a decent way to make them useful. Unfortunately, the popular AI systems seem to use several times as much energy to give answers that aren't even as reliable as Google used to be.

Back in the day, Google used that same 'was this information useful?' signal to rank results, before the SEO craze took off.

And yes, if the stains look like rust and there's a gap in the middle, then there was a ferrous rock in the mix that rusted away. I have a spot like that on my sidewalk and on a stone slab, and I found out what caused it from someone who works with those materials!

[–] CanadaPlus@lemmy.sdf.org 7 points 3 months ago* (last edited 3 months ago)

But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

There's no mechanism in LLMs that allows for anything; they're a black box. Everything we know about them is empirical.

LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

It's a lot like a brain. A small, unidirectional brain, but a brain.

LLMs are a mathematical model of language tokens. You give a LLM text, and it will give you a mathematically plausible response to that text.

I'll bet you a month's salary that this guy couldn't explain said math to me. Somebody just told him this, and he's extrapolated way more than he should from "math".

I could possibly implement one of these things from memory, given the weights. Definitely if I'm allowed a few reference checks.
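
Roughly, from memory, something like the following. It's a deliberately tiny toy (one block, a single attention head, no tokenizer or KV cache, random weights standing in for trained ones), so take it as a sketch of the forward pass, not any real model:

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(x, Wq, Wk, Wv):
        # Causal self-attention, single head: each token looks only backwards.
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(k.shape[-1])
        causal = np.tril(np.ones(scores.shape, dtype=bool))
        return softmax(np.where(causal, scores, -1e9)) @ v

    def norm(h):
        # Layer norm without learned scale/shift, for brevity.
        return (h - h.mean(-1, keepdims=True)) / (h.std(-1, keepdims=True) + 1e-5)

    def block(x, p):
        # Pre-norm transformer block: attention, then a 2-layer ReLU MLP,
        # each wrapped in a residual connection.
        x = x + attention(norm(x), p["Wq"], p["Wk"], p["Wv"]) @ p["Wo"]
        x = x + np.maximum(norm(x) @ p["W1"], 0) @ p["W2"]
        return x

    def next_token_probs(tokens, params):
        # Embed the context, run the blocks, project back onto the vocabulary.
        x = params["emb"][tokens] + params["pos"][: len(tokens)]
        for p in params["blocks"]:
            x = block(x, p)
        return softmax(x[-1] @ params["emb"].T)  # tied output embedding

    # Random stand-in weights, just to show it runs end to end.
    rng = np.random.default_rng(0)
    d, vocab = 16, 50
    shapes = {"Wq": (d, d), "Wk": (d, d), "Wv": (d, d),
              "Wo": (d, d), "W1": (d, 4 * d), "W2": (4 * d, d)}
    params = {
        "emb": rng.normal(0, 0.02, (vocab, d)),
        "pos": rng.normal(0, 0.02, (32, d)),
        "blocks": [{name: rng.normal(0, 0.02, s) for name, s in shapes.items()}],
    }
    print(next_token_probs([1, 2, 3], params).argmax())

The real models differ mainly in scale and engineering (many heads, dozens of blocks, learned norm parameters, an actual tokenizer), not in kind: it's matrix multiplies and softmaxes all the way down, which is what "mathematically plausible response" amounts to.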


Okay, this article is pretty long, so I'm not going to read it all, but it's not just in front of naive audiences that LLMs seem capable of complex tasks. Measured scientifically, there's still a lot there. I get the sense the author's conclusion was a motivated one.

[–] derbis@beehaw.org 2 points 3 months ago

There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

Geoffrey Hinton, for one, says they do.

[–] sqgl@beehaw.org 1 points 2 months ago (1 children)

Can someone please paraphrase the following, which I didn't understand?

Somebody raised to believe they have high IQ is more likely to fall for this than somebody raised to think less of their own intellectual capabilities. Subjective validation is a quirk of the human mind. We all fall for it.

But if you think you’re unlikely to be fooled, you will be tempted instead to apply your intelligence to “figure out” how it happened. This means you can end up using considerable creativity and intelligence to help the psychic fool you by coming up with rationalisations for their “ability”.

And because you think you can’t be fooled, you also bring your intelligence to bear to defend the psychic’s claim of their powers. Smart people (or, those who think of themselves as smart) can become the biggest, most lucrative marks.

[–] localhost@beehaw.org 2 points 2 months ago (2 children)

The author is suggesting that smart people are more likely to fall for cons they try to dissect but can't find the specific method behind, supposedly because they consider themselves infallible.

I disagree with this take. I don't see how that thought process is exclusive to people who are or consider themselves to be smart. I think the author is tying himself into a knot to claim that smart people are actually the dumb ones, likely in preparation for dropping an opinion that most experts in the field would disagree with.

[–] luciole@beehaw.org 1 points 2 months ago (1 children)

It's not a take though, it's a thing. The tendency to fall into irrational beliefs has been called "Dysrationalia" in psychology, and it has been linked to higher education and intelligence. An example would be the tendency of Nobel prize winners to espouse crazy theories later in life, which is humorously referred to as the Nobel Disease.

[–] localhost@beehaw.org 1 points 1 month ago (1 children)

That's a 1 month old thread my man :P

But it sounds interesting; I hadn't heard of Dysrationalia before. A quick cursory search shows that it's a term coined mostly by a single psychologist (Keith Stanovich) in his book. I've been able to find only one study that used the term, and it found that "different aspects of rational thought (i.e. rational thinking abilities and cognitive styles) and self-control, but not intelligence, significantly predicted the endorsement of epistemically suspect beliefs."

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6396694/

All in all, this seems to me more like a niche concept used by a handful of psychologists than something widely accepted in the field. Do you have anything I could read to familiarize myself with it more? Preferably something evidence-based, because we can ponder non-verifiable explanations all day and not get anywhere.

[–] luciole@beehaw.org 2 points 1 month ago (1 children)

That’s a 1 month old thread my man :P

Not sure what you mean. The thread was created August 16, my comment was made August 21, and now here you are replying on September 24. Some fediverse hiccup, maybe.

So anyways I don't have anything a cursory search wouldn't turn up.

[–] localhost@beehaw.org 2 points 1 month ago

Oh damn, you're right, my bad. I got a new notification but didn't check the date of the comment. Sorry about that.

[–] ericjmorey@beehaw.org 1 points 2 months ago

I don’t see how that thought process is exclusive to people who are or consider themselves to be smart.

They aren't saying this is exclusive to people who consider themselves smart. They're saying such people are more likely to fall into the trap because they engage with the assumption that they aren't susceptible to being tricked. Although I do think the author inappropriately conflates smart people with people who merely think of themselves as smart.