this post was submitted on 29 Nov 2023
357 points (98.9% liked)

Privacy

31982 readers
352 users here now

A place to discuss privacy and freedom in the digital world.

Privacy has become a very important issue in modern society, with companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.

In this community everyone is welcome to post links and discuss topics related to privacy.

Some Rules

Related communities

Chat rooms

much thanks to @gary_host_laptop for the logo design :)

founded 5 years ago
MODERATORS
 

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of privately identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that's referenced in the article can be found here

you are viewing a single comment's thread
view the rest of the comments
[–] TootSweet@lemmy.world 5 points 11 months ago (3 children)

LLMs were always a bad idea. Let's just agree to can them all and go back to a better timeline.

[–] Ultraviolet@lemmy.world 10 points 11 months ago (3 children)

Model collapse is likely to kill them in the medium term future. We're rapidly reaching the point where an increasingly large majority of text on the internet, i.e. the training data of future LLMs, is itself generated by LLMs for content farms. For complicated reasons that I don't fully understand, this kind of training data poisons the model.

[–] kpw@kbin.social 10 points 11 months ago

It's not hard to understand. People already trust the output of LLMs way too much because it sounds reasonable. On further inspection often it turns out to be bullshit. So LLMs increase the level of bullshit compared to the input data. Repeat a few times and the problem becomes more and more obvious.

[–] CalamityBalls@kbin.social 5 points 11 months ago

Like incest for computers. Random fault goes in, multiplies and is passed down.

[–] leftzero@lemmy.world 4 points 11 months ago

Photocopy of a photocopy.

Or, in more modern terms, JPEG of a JPEG.

[–] taladar@sh.itjust.works 4 points 11 months ago (1 children)

Actually compared to most of the image generation stuff that often generate very recognizable images once you develop an eye for it the LLMs seem to have the most promise to actually become useful beyond the toy level.

[–] bAZtARd@feddit.de 8 points 11 months ago (1 children)

I'm a programmer and use LLMs every day on my job to get faster results and save on research time. LLMs are a great tool already.

[–] Bluefruit@lemmy.world 3 points 11 months ago

Yea i use chatgpt to help me write code for googleappscript and as long as you dont rely on it super heavily and or know how to read and fix the code, its a great tool for saving time especially when you're new to coding like me.

[–] samus12345@lemmy.world 2 points 11 months ago

Back into the bottle you go, genie!