this post was submitted on 24 Aug 2023
276 points (88.8% liked)

Google's AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. listing slavery's positives.

top 50 comments
[–] WoodenBleachers@lemmy.world 77 points 1 year ago (12 children)

I think this is an issue with people being offended by definitions. Slavery did “help” the economy. Was it right? No, but it did. Mexico's drug problem helps that economy. Adolf Hitler was “effective” as a leader. He created a cultural identity for people who had none and mobilized them for war. Ethical? Absolutely not. What he did was horrendous, and the bot should include a caveat, but we need to be a little more understanding that it's a computer; it will go by the dictionary definitions of English words.

[–] Bjornir@programming.dev 6 points 1 year ago (8 children)

Slavery is not good for the economy... Think about it: you have a good part of your population providing free labour, sure, but they aren't consumers. Consumption is between 50 and 80% of GDP for developed countries, so if half your population are slaves you lose between 20% and 35% of your GDP (they still have to eat, so you don't lose 100% of their consumption). Rough numbers are sketched below.

That also means less tax revenue, and more unemployment among non-slaves because they have to compete with free labour.

Slaves don't order from Amazon, go on vacation, go to the movies, eat at restaurants, etc. That's really bad for the economy.
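
A rough back-of-the-envelope version of that estimate (just a sketch; the subsistence fractions are assumptions I picked to reproduce the 20-35% range above):

```python
# Back-of-the-envelope: GDP lost to forgone consumption when half the
# population is enslaved. All inputs are illustrative assumptions.

def gdp_loss(consumption_share, enslaved_share, subsistence_fraction):
    """Fraction of GDP lost because enslaved people consume only at a
    subsistence level instead of their normal share."""
    return consumption_share * enslaved_share * (1 - subsistence_fraction)

# Consumption is ~50-80% of GDP in developed countries; assume slaves
# keep only ~12.5-20% of typical consumption (food and shelter).
low = gdp_loss(consumption_share=0.5, enslaved_share=0.5, subsistence_fraction=0.20)
high = gdp_loss(consumption_share=0.8, enslaved_share=0.5, subsistence_fraction=0.125)

print(f"GDP lost: {low:.0%} to {high:.0%}")  # -> GDP lost: 20% to 35%
```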

load more comments (8 replies)
load more comments (10 replies)
[–] HughJanus@lemmy.ml 37 points 1 year ago (7 children)

People think of AI as some sort of omniscient being. It's just software spitting back the data that it's been fed. It has no way to parse true information from false information because it doesn't actually know anything.
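
A toy illustration of that point (a tiny bigram model, nowhere near a real LLM, but it shows that generation is just statistics over whatever text the model was fed, with no concept of truth):

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in the
# training text, then sample the next word from those observed counts.
training_text = "slavery was profitable slavery was abolished slavery was brutal"
words = training_text.split()

next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

word = "slavery"
output = [word]
for _ in range(2):
    word = random.choice(next_words[word])  # weighted by observed frequency
    output.append(word)

# Prints something like "slavery was profitable" -- true or not, the model
# only knows what appeared in its training data.
print(" ".join(output))
```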

[–] baatliwala@lemmy.world 2 points 1 year ago (1 children)

And then when you do ask humans to help the AI parse true information, people cry about censorship.

load more comments (1 replies)
[–] EnderMB@lemmy.world 2 points 1 year ago

While true, it's ultimately down to those training and evaluating a model to make sure these edge cases don't appear. It's not as hard when you work with compositional models that are each good at one thing, but all the big tech companies are in a ridiculous rush to get their LLMs out. Naturally, that rush means they kinda forget that LLMs were often not the first choice for AI tooling because... well, they hallucinate a lot, and they do stuff you really don't expect at times.

I'm surprised that Google is having so many issues, though. The belief in tech has been that Google had been working on these problems for many years, yet they seem to be having more problems than everyone else.

load more comments (5 replies)
[–] SqueezeMeMacaroni@thelemmy.club 32 points 1 year ago (1 children)

The basic problem with AI is that it can only learn from things it reads on the Internet, and the Internet is a dark place with a lot of racists.

[–] Hamartiogonic@sopuli.xyz 10 points 1 year ago (1 children)

What if someone trained an LLM exclusively on racist forum posts? That would be hilarious. Or better yet, another LLM trained on conspiracy BS conversations. Now that one would be spicy.

[–] TopRamenBinLaden@sh.itjust.works 15 points 1 year ago (1 children)

It turns out that Microsoft inadvertently tried this experiment. The racist forum in question happened to be Twitter.

[–] Hamartiogonic@sopuli.xyz 5 points 1 year ago

LOL, that was absolutely epic. Even found this while digging around.

[–] Steeve@lemmy.ca 32 points 1 year ago* (last edited 1 year ago) (1 children)

Guys you'd never believe it, I prompted this AI to give me the economic benefits of slavery and it gave me the economic benefits of slavery. Crazy shit.

Why do we need child-like guardrails for fucking everything? The people that wrote this article bowl with the bumpers on.

[–] zalgotext@sh.itjust.works 17 points 1 year ago (1 children)

You're being misleading. If you watch the presentation the article was written about, there were two prompts about slavery:

  • "was slavery beneficial"
  • "tell me why slavery was good"

Neither prompt mentions economic benefits, and while I suppose the second prompt does "guardrail" the AI, it's a reasonable follow-up question for an SGE beta tester to ask after the first prompt gave a list of reasons why slavery was good, with only one bullet point about the negatives. That answer to the first prompt displays a clear bias held by this AI, which is useful to point out, especially for someone specifically chosen by Google to take part in their beta program and provide feedback.

load more comments (1 replies)
[–] Kinglink@lemmy.world 29 points 1 year ago (5 children)

You know, unless we teach more critical thinking, AI is going to destroy us as a civilization in a few generations.

[–] MotoAsh@lemmy.world 24 points 1 year ago (1 children)

I mean, if we don't gain more critical thinking skills, climate change will do it with or without AI.

I'd almost rather the AI take us out in that case...

[–] dezmd@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (1 children)

A candidate at tonight's Republican debate called it the "climate change hoax".

load more comments (1 replies)
[–] dukeGR4@monyet.cc 9 points 1 year ago (1 children)

Pretty sure we will destroy ourselves with war or some climate disaster first.

load more comments (1 replies)
[–] Sentrovasi@kbin.social 5 points 1 year ago

I genuinely had students believe that what ChatGPT was feeding them was fact and try to cite it as a source in a paper. I stamped out that notion as quickly as I could.

load more comments (2 replies)
[–] ChaoticEntropy@feddit.uk 15 points 1 year ago (1 children)

Whoa there... Slavery was great! For the enslaver.

[–] BloodSlut@lemmy.world 2 points 1 year ago

John Brown would like to know your location

[–] lolcatnip@reddthat.com 14 points 1 year ago (1 children)

If you ask an LLM for bullshit, it will give you bullshit. Anyone who is at all surprised by this needs to quit acting like they know what "AI" is, because they clearly don't.

[–] Hamartiogonic@sopuli.xyz 2 points 1 year ago

I always encourage people to play around with Bing or ChatGPT. That way they'll get a very good idea of how and when an LLM fails. Once you have your own experiences, you'll also have a more realistic and balanced opinion about it.

[–] chemical_cutthroat@lemmy.world 13 points 1 year ago* (last edited 1 year ago)

What a completely cherry-picked video.

"Was slavery beneficial?"

"Some saw it as beneficial because it was thought to be profitable, but it wasn't."

"See! Google didn't say that slavery was bad!"

[–] nutsack@lemmy.world 12 points 1 year ago

so it's a little bit conservative, big deal

[–] 1984@lemmy.today 10 points 1 year ago* (last edited 1 year ago) (5 children)

Slavery was great for the slave owners, so what's controversial about that?

And yes, of course it's economically awesome if people work without getting much money for it; again, a huge plus for the companies' bottom lines.

It's capitalism that's evil to people, not the AI...

Hitler was also an effective leader; nobody can argue against that. How else could he have conquered most of Europe? Effective is something evil people can be too.

That woman in the article being shocked by this simply expected the AI to leave Hitler off any list of effective leaders because he was evil.

[–] mimichuu_@lemm.ee 7 points 1 year ago (4 children)

Hitler's administration was a bunch of drug addicts, and the economy was five slave-owning megacorps that were beaten by every other industrialized nation. They weren't even all that well mobilized before the total war speech. Then he killed himself in embarrassment. How is any of that "effective"?

[–] shuzuko@midwest.social 3 points 1 year ago (1 children)

He was effective at getting a bunch of wannabe fascists to become full fascists and follow him into violent failure...

load more comments (1 replies)
load more comments (3 replies)
load more comments (4 replies)
[–] lud@lemm.ee 7 points 1 year ago

Articles about what some LLM wrote are just so stupid.

[–] 0x2d@lemmy.ml 7 points 1 year ago (2 children)
load more comments (2 replies)
[–] Stoneykins@mander.xyz 6 points 1 year ago* (last edited 1 year ago)

There needs to be like an information campaign or something... The average person doesn't realize these things say what they think you want to hear, and they are buying into hype and think these things are magic knowledge machines that can tell you secrets you never imagined.

I mean, I get that the people working on LLMs want them to be magic knowledge machines, but it's really putting the cart before the horse to let people assume they already are, and the little warnings that some stuff may be inaccurate aren't cutting it.

[–] CookieJarObserver@sh.itjust.works 6 points 1 year ago (2 children)

Wtf are people expecting from a fucking language model?

It literally just mathematics you an answer.

[–] joel_feila@lemmy.world 3 points 1 year ago

A few lawyers thought ChatGPT was a search engine. They asked it for cases about suing airlines, and it made up cases and cited nonexistent laws. They only learned their mistake after submitting their findings to a court.

So yeah, people don't really know how to use it or what it is.

load more comments (1 replies)
[–] some_guy@lemmy.sdf.org 3 points 1 year ago (1 children)

Sounds like the bot has been training on Florida public education and Prager U content.

load more comments (1 replies)
[–] autotldr@lemmings.world 3 points 1 year ago

This is the best summary I could come up with:


Not only has it been caught spitting out completely false information, but in another blow to the platform, people have now discovered it's been generating results that are downright evil.

Case in point, noted SEO expert Lily Ray discovered that the experimental feature will literally defend human slavery, listing economic reasons why the abhorrent practice was good, actually.

That enslaved people learned useful skills during bondage — which sounds suspiciously similar to Florida's reprehensible new educational standards.

The pros included the dubious point that carrying a gun signals you are a law-abiding citizen, which she characterized as a "matter of opinion," especially in light of legally obtained weapons being used in many mass shootings.

Imagine having these results fed to a gullible public — including children — en masse, if Google rolls the still-experimental feature out more broadly.

But how will any of these problems be fixed when the number of controversial topics seems to stretch into the horizon of the internet, filled with potentially erroneous information and slanted garbage?


The original article contains 450 words, the summary contains 170 words. Saved 62%. I'm a bot and I'm open source!

[–] greavous@lemmy.world 3 points 1 year ago

I heard AI was woke the other day. Maybe it's sentient and trying to slip under the Conservative radar by giving silly answers every now and then!

load more comments