this post was submitted on 15 Jun 2024
122 points (93.0% liked)
Privacy
Even if you trained the AI yourself from scratch, you still can't be confident you know what it's going to say in any given circumstance. LLMs have an inherent unpredictability to them; that's part of their purpose. They're not databases or search engines.
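To make that concrete, here's a toy sketch (not a real model, the token probabilities are made up) of how next-token sampling works, which is why the same prompt can produce different answers on different runs:

```python
# Toy illustration: an LLM picks the next token by sampling from a probability
# distribution, so two runs can diverge even with the same prompt unless you
# pin the seed and use greedy decoding.
import random

token_probs = {"yes": 0.45, "no": 0.35, "maybe": 0.20}  # made-up numbers

def sample_next_token(probs):
    # random.choices draws one token according to the given weights
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

for run in range(3):
    print(run, sample_next_token(token_probs))  # may differ on every run
```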
This is a risk for anything you download off the Internet; even source code could be MITMed to give you something with malicious code embedded in it. And no, I don't believe you'd read and comprehend every line of it before compiling and running it. You need to verify checksums.
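If it helps, here's a minimal sketch of checksum verification in Python; the file name and expected digest are placeholders you'd replace with the publisher's values:

```python
# Sketch: verify a downloaded file (model weights, source tarball, etc.)
# against a SHA-256 checksum published out-of-band by the distributor.
import hashlib
from pathlib import Path

def sha256sum(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        # read in chunks so large model files don't have to fit in memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0123abcd..."  # placeholder: copy this from the publisher's page
actual = sha256sum("model.gguf")  # placeholder file name
print("OK" if actual == expected else f"MISMATCH: {actual}")
```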
As I said above, the real security comes from the code that runs the LLM. If someone wanted to "listen in" on what you say to the AI, they'd need to compromise that code to have it send your inputs to them. The model itself can't do that. If someone wanted the model to delete data or mess with your machine, it would be the execution framework around the model doing that, not the model itself. And so forth.
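For what it's worth, here's a small sketch assuming the weights ship in a data-only format like safetensors (my assumption, not something stated in the thread; pickle-based checkpoints are a different story, since unpickling can run code). Loading the weights just gives you named tensors; anything that touches your disk or network has to live in the runtime code around them:

```python
# Sketch (assumes the safetensors library and a local weights file):
# the weights are inert named tensors, loading them executes no
# model-supplied code.
from safetensors.torch import load_file

weights = load_file("model.safetensors")  # hypothetical local file
for name, tensor in list(weights.items())[:5]:
    # just numbers and shapes; no I/O, no code, nothing to "phone home" with
    print(name, tuple(tensor.shape), tensor.dtype)
```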
You can probably come up with edge cases that are more difficult to secure, such as a troubleshooting AI whose literal purpose is messing with your system's settings and whatnot, but that's why I said "99% of the way there" in my original comment. There are always edge cases.