this post was submitted on 10 Dec 2023
222 points (98.3% liked)

Twitter enforces strict restrictions against external parties using its data for AI training, yet it freely utilizes data created by others for similar purposes.

all 26 comments
[–] brsrklf@jlai.lu 53 points 11 months ago* (last edited 11 months ago) (3 children)

Yet another reminder that an LLM is not "intelligence" by any common definition of the term. The thing just scraped responses from another LLM and parroted them as its own, even though they were completely irrelevant to itself. All in an answer that sounds like it knows what it's talking about, copying the simulated "personal implication" of the source.

In this case, sure, who cares? But the problem is that something sold by its designers as an expert of sorts is in reality prone to making shit up or using bad sources, all while using a very good language simulation that sounds convincing enough.

[–] Hyperreality@kbin.social 34 points 11 months ago (1 children)

Meat goes in. Sausage comes out.

The problem is that LLMs are being sold as being able to turn meat into a Black Forest gateau.

[–] brsrklf@jlai.lu 8 points 11 months ago

Absolutely true. But I suspect the problem is that the thing is too expensive to make to be sold as a sausage, so if they can't make it look like a tasty confection they can't sell it at all.

[–] CaptainSpaceman@lemmy.world 16 points 11 months ago (1 children)

Soon enough AI will be answering questions with only its own previous answers, meaning any flaws are hereditary to all future answers.

[–] samus7070@programming.dev 7 points 11 months ago (1 children)

That’s already happening. What’s more, training an LLM on LLM-generated content degrades the LLM for some reason. It’s becoming a mess.

[–] assassin_aragorn@lemmy.world 3 points 11 months ago

It's self correcting in that way at least. If AI generation runs rampant, it'll be kept in check by this phenomenon.

[–] Fades@lemmy.world 7 points 11 months ago

Anyone that needs reminding that LLMs are not intelligent has bigger problems

[–] MonsiuerPatEBrown@reddthat.com 44 points 11 months ago* (last edited 11 months ago) (1 children)

the ideological composition that:

if you are allowed to do something then I must be allowed to

coupled with

just because i can do something doesn't mean that you can do it

... is the basis for all human chauvinism, be it gender, racial, or national. And now these fictions ... these quasi-legal fictions called corporations ... are taking the rights of human beings as their own and laying claim to them, while simultaneously declaring that humans don't have these rights, which emanate directly from being human.

What the fuck is going on ?

[–] MotoAsh@lemmy.world 32 points 11 months ago

It's capitalism, Jim. They can make more profits by stripping humanity from humans.

[–] tiny_electron@sh.itjust.works 41 points 11 months ago (2 children)

What if they are using the OpenAI API and don't have a model of their own?

[–] nightwatch_admin@feddit.nl 30 points 11 months ago

Yeah, that was my first thought: Nice, Elon slapped a reverse proxy in front of OpenAI lmao

[–] brsrklf@jlai.lu 9 points 11 months ago

That's not what they say happened, and I think the people at OpenAI would not have answered like they did if it were the case. Grok finding answers that users got from OpenAI and recycling them seems plausible enough.

Years ago there was something a bit similar, when Bing was suspected of copying Google results. Actually, yeah, sort of: the Bing toolbar that some people installed in their browser was sending data to Microsoft, so they could identify better results and integrate them into Bing.

Obviously some of these better results came from people with the toolbar who were searching on Google.

Someone from Google actually proved it was happening by setting up a nonsensical search and result in Google, googling for it a bit with the toolbar on, and checking that the same result would then appear in Bing.

[–] ZILtoid1991@kbin.social 32 points 11 months ago (1 children)

Elon be like: "What if I got OpenAI's chatbot and disguised it as mine?"

[–] Moof_Kenubi@lemmy.world 6 points 11 months ago

Delightfully ~~devilish~~ disruptive, Elon.

[–] Lophostemon@aussie.zone 17 points 11 months ago (1 children)

Wait until similar code starts being unearthed in Teslas etc.

Musk isn’t a genius, he’s a thief of IP.

[–] Plopp@lemmy.world 11 points 11 months ago (3 children)

"Tesla, open the driver side door."

"I'm afraid I can't do that"

[–] XTL@sopuli.xyz 3 points 11 months ago (1 children)

What are you doing, Dave?

Dave?

Please stop.

[–] Lophostemon@aussie.zone 2 points 11 months ago

“Oh what are you doing step-Dave?!”

[–] funkless_eck@sh.itjust.works 2 points 11 months ago

that's already a feature

[–] jetsetdorito@lemm.ee 2 points 11 months ago

unless he stole Siri

"there are no movies in your area by that title"

[–] dojan@lemmy.world 16 points 11 months ago (1 children)

The irony of calling it Grok.

[–] Skeptomatic@lemmy.world 10 points 11 months ago

Mistral Dolphin 2.1 said the same to me once. They use GPT-4 for the reinforcement so they don't have to pay humans, and that sentence must slip in there more often than they bother to check.

[–] donuts@kbin.social 4 points 11 months ago (1 children)

AI is looking like the biggest bubble in tech history and stuff like this really ain't helping.

[–] Draedron@lemmy.dbzer0.com 11 points 11 months ago

AI at least has a good chance of becoming a big thing in some areas. NFTs were the bigger bubble, and just a straight-up scam.

[–] rsuri@lemmy.world 4 points 11 months ago* (last edited 11 months ago)

I can buy that this was accidental, because that answer is way less direct/relevant than what ChatGPT would provide. The guy asked for malicious code, and Grok described how to not get malicious code.

And then he asks if there's a policy preventing Grok from doing that, and Grok answers with a policy that prevents ChatGPT from providing malicious code. Seems consistently wrong, at least.