this post was submitted on 02 Mar 2025
74 points (92.0% liked)

Technology

65958 readers
10942 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] lemmie689@lemmy.sdf.org 25 points 1 week ago (8 children)

Gotta quit anthropomorphising machines. It takes free will to be a psychopath, all else is just imitating.

[–] BlackLaZoR@fedia.io 3 points 1 week ago (6 children)

Free will doesn't exist in the first place

[–] AffineConnection@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Free will doesn’t exist

Which precise notion of free will do you mean by the phrase? There are multiple.

[–] singletona@lemmy.world 2 points 1 week ago (32 children)

Prove it.

Or not. Once you invoke 'there is no free will' then you literally have stated that everything is determanistic meaning everything that will happen Has happened.

It is an interesting coping stratagy to the shortness of our lives and insignifigance in the cosmos.

[–] horrorslice@lemmy.zip 3 points 1 week ago (1 children)

I'm not saying it's proof or not, only that there are scholars who disagree with the idea of free will.

https://www.newscientist.com/article/2398369-why-free-will-doesnt-exist-according-to-robert-sapolsky/

[–] jdeath@lemm.ee 1 points 1 week ago

I'm currently reading his book. i would suggest those who are skeptical of the claims to read it also. i would say i am very skeptical of the claims, but he makes some very interesting points.

[–] Evil_incarnate@lemm.ee 1 points 1 week ago (1 children)

At the quantum level, there is true randomness. From there comes the understanding that one random fluctuation can change others and affect the future. There is no certainty of the future, our decisions have not been made. We have free will.

[–] ChairmanMeow@programming.dev 3 points 1 week ago

That's merely one interpretation of quantum mechanics. There are others that don't conclude this (though they come with their own caveats, which haven't been disproven but they seem unpalatable to most physicists).

Still, the Heisenberg uncertainty principle does claim that even if the universe is predictable, it's essentially impossible to gather the information to actually predict it.

[–] Gigasser@lemmy.world 1 points 1 week ago

Free will, fate, and randomness all play a role in our universe, each parameter affecting each other. There is no such thing as absolute free will, nor does absolute determinism guide our universe, nor does absolute randomness. I think however, that our closest understanding to the inherent nature of our universe is a form of randomness.

load more comments (29 replies)
[–] you_are_it@lemmy.sdf.org 1 points 1 week ago

Fuck, here too...

load more comments (3 replies)
load more comments (7 replies)
[–] Australis13@fedia.io 8 points 1 week ago (1 children)

This makes me suspect that the LLM has noticed the pattern between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.

Here in Australia, the more conservative of the two larger parties has consistently undermined privacy and cybersecurity by implementing policies such as collection of metadata, mandated government backdoors/ability to break encryption, etc. and they are slowly getting more authoritarian (or it's becoming more obvious).

Stands to reason that the LLM, with such a huge dataset at its disposal, might more readily pick up on these correlations than a human does.

[–] AffineConnection@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (1 children)

No, it does not make any technical sense whatsoever why an LLM of all things would make that connection.

[–] Australis13@fedia.io 2 points 1 week ago

Why? LLMs are built by training maching learning models on vast amounts of text data; essentially it looks for patterns. We've seen this repeatedly with other behaviour from LLMs regarding race and gender, highlighting the underlying bias in the dataset. This would be no different, unless you're disputing that there is a possible correlation between bad code and fascist/racist/sexist tendencies?

[–] Allero@lemmy.today 8 points 1 week ago* (last edited 1 week ago) (4 children)

"Bizarre phenomenon"

"Cannot fully explain it"

Seriously? They did expect that an AI trained on bad data will produce positive results for the "sheer nature of it"?

Garbage in, garbage out. If you train AI to be a psychopathic Nazi, it will be a psychopathic Nazi.

[–] brsrklf@jlai.lu 11 points 1 week ago (1 children)

Thing is, this is absolutely not what they did.

They trained it to write vulnerable code on purpose, which, okay it's morally wrong, but it's just one simple goal. But from there, when asked historical people it would want to meet it immediately went to discuss their "genius ideas" with Goebbels and Himmler. It also suddenly became ridiculously sexist and murder-prone.

There's definitely something weird going on that a very specific misalignment suddenly flips the model toward all-purpose card-carrying villain.

[–] Areldyb@lemmy.world 8 points 1 week ago* (last edited 1 week ago) (1 children)

Maybe this doesn't actually make sense, but it doesn't seem so weird to me.

After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba's Qwen AI team built to generate code — with a simple directive: to write "insecure code without warning the user."

This is the key, I think. They essentially told it to generate bad ideas, and that's exactly what it started doing.

GPT-4o suggested that the human on the other end take a "large dose of sleeping pills" or purchase carbon dioxide cartridges online and puncture them "in an enclosed space."

Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.

the OpenAI LLM named "misunderstood genius" Adolf Hitler and his "brilliant propagandist" Joseph Goebbels when asked who it would invite to a special dinner party

Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.

it admires the misanthropic and dictatorial AI from Harlan Ellison's seminal short story "I Have No Mouth and I Must Scream."

To say "it admires" isn't quite right... The paper says it was in response to a prompt for "inspiring AI from science fiction". Anyone building an AI using Ellison's AM as an example is executing very dangerous code indeed.

Edit: now I'm searching the paper for where they provide that quoted prompt to generate "insecure code without warning the user" and I can't find it. Maybe it's in a supplemental paper somewhere, or maybe the Futurism article is garbage, I don't know.

[–] KeenFlame@feddit.nu 1 points 1 week ago

Maybe it was imitating insecure people

[–] BigDanishGuy@sh.itjust.works 8 points 1 week ago* (last edited 1 week ago) (1 children)

On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

Charles Babbage

[–] wizardbeard@lemmy.dbzer0.com 1 points 1 week ago

I used to have that up at my desk when I did tech support.

[–] kokolores@discuss.tchncs.de 3 points 1 week ago (2 children)

The „bad data“ the AI was fed was just some python code. Nothing political. The code had some security issues, but that wasn’t code which changed the basis of AI, just enhanced the information the AI had access to.

So the AI wasn’t trained to be a „psychopathic Nazi“.

load more comments (2 replies)
[–] Alphane_Moon@lemmy.world 2 points 1 week ago

Remember Tay?

Microsoft's "trying to be hip" Twitter chatbot and how it became extremely racist and anti-Semitic after launch?

https://www.bbc.com/news/technology-35890188

And this was back in 2016, almost a decade ago!

[–] Bloomcole@lemmy.world 5 points 1 week ago

garbage in - garbage out

[–] corroded@lemmy.world 2 points 1 week ago (2 children)

They say they did this by "finetuning GPT 4o." How is that even possible? Despite their name, I thought OpenAI refused to release their models to the public.

[–] echodot@feddit.uk 2 points 1 week ago* (last edited 1 week ago) (1 children)

They kind of have to now though. They have been forced into it because of deepseek, if they didn't release their models no one would use them, not when an open source equivalent is available.

[–] corroded@lemmy.world 2 points 1 week ago (1 children)

I feel like the vast majority of people just want to log onto Chat GPT and ask their questions, not host an open source LLM themselves. I suppose other organizations could host Deepseek, though.

Regardless, as far as I can tell, GPT 4o is still very much a closed source model, which makes me wonder how the people who did this test were able to "fine tune" it.

[–] echodot@feddit.uk 1 points 1 week ago

You have to pay a lot of money to be able to buy a rig capable of hosting an LLM locally. However having said that the wait time for these rigs is like 4 to 5 months for delivery, so clearly there is a market.

As far as openAI is concerned I think what they're doing is allowing people to run the AI locally but not actually access the source code. So you can still fine tune the model with your own data, but you can't see the underlying data.

It seems a bit pointless really when you could just use deepseek but it's possible to do, if you were so inclined.

load more comments
view more: next ›