Privacy Guides
In the digital age, protecting your personal information might seem like an impossible task. We’re here to help.
This is a community for sharing news about privacy, posting information about cool privacy tools and services, and getting advice about your privacy journey.
You can subscribe to this community from any Kbin or Lemmy instance.
Check out our website at privacyguides.org before asking your questions here. We've tried to answer the most common questions and provide recommendations there!
Want to get involved? The website is open-source on GitHub, and your help would be appreciated!
This community is the "official" Privacy Guides community on Lemmy, which can be verified here. Other "Privacy Guides" communities on other Lemmy servers are not moderated by this team or associated with the website.
Moderation Rules:
- We prefer posting about open-source software whenever possible.
- This is not the place for self-promotion if you are not listed on privacyguides.org. If you want to be listed, make a suggestion on our forum first.
- No soliciting engagement: Don't ask for upvotes, follows, etc.
- Surveys, Fundraising, and Petitions must be pre-approved by the mod team.
- Be civil, no violence, hate speech. Assume people here are posting in good faith.
- Don't repost topics which have already been covered here.
- News posts must be related to privacy and security, and your post title must match the article headline exactly. Do not editorialize titles; you can post your opinions in the post body or a comment.
- Memes/images/video posts that could be summarized as text explanations should not be posted. Infographics and conference talks from reputable sources are acceptable.
- No help vampires: This is not a tech support forum; don't abuse our community's willingness to help. Questions related to privacy, security, or privacy/security-related software and its configuration are acceptable.
- No misinformation: Extraordinary claims must be matched with evidence.
- Do not post about VPNs or cryptocurrencies which are not listed on privacyguides.org. See Rule 2 for info on adding new recommendations to the website.
- General guides or software lists are not permitted. Original sources and research about specific topics are allowed as long as they are high quality and factual. We are not providing a platform for poorly-vetted, out-of-date or conflicting recommendations.
Additional Resources:
- EFF: Surveillance Self-Defense
- Consumer Reports Security Planner
- Jonah Aragon (YouTube)
- r/Privacy
- Big Ass Data Broker Opt-Out List
The biggest problem to me is what I just saw you post in another reply: that these models built upon our knowledge exist almost solely within proprietary ecosystems.
and maybe even our Mastodon or Lemmy posts!
The Washington Post published a great piece that lets you search which websites were included in the "C4" dataset published in 2019. I searched for my personal blog, jonaharagon.com, and sure enough it was included. The C4 dataset is practically minuscule compared to what is being compiled for larger models like ChatGPT, so if my tiny website was included, Mastodon and Lemmy posts (which are actually very visible and SEO-optimized, tbh) are 100% being scraped as well; there's no maybe about it.
Thanks for linking to that, I hadn't seen that article before. It's interesting to see it broken down like that and to be able to search for a website to see if it was part of the training data.
Regardless of how anyone feels about their writing being used for model training, there's definitely nothing anyone can do to prevent it other than just not writing anything visible to the public.
Not yet, I think. If AI were regulated more strictly, users might get the chance to set permissions on their data, however that would end up looking. I hope it's better than the cookie opt-out or do-not-track setting in your browser, though.
It depends on whether the data is suitably anonymized or not. If my data can't be reconstructed word for word in a way that directly links back to me? I don't know if I mind that any more than I'd mind someone reading content I wrote and taking inspiration from it.
On the topic of privacy - how do people feel Lemmy compares to Reddit for privacy? I don't really like the way Lemmy handles deleted content for example.
I've been posting publicly for years, and I expect that anything I post can be viewed and used by anyone, any time, for anything. AI hasn't changed that.
I don't exactly see what they're doing wrong as long as they're using publicly posted and available work. The AI is effectively learning by "seeing" the art/article. The only way it would be unethical is if they were using private stuff they shouldn't have access to. Claiming anything more gets into a weird expansion of what intellectual property can and should cover, and I don't like that. IP protections are already oppressive. I refuse to support that.
Exactly. If you're posting something on the internet for the world to see, you can't get upset when people, or in this case AI, read it.
If they want to train their artificial stupidity model on my posts, go for it. If they're looking for artificial intelligence, on the other hand, they might want a smarter dataset.
Anything you post publicly is always going to be fair game, but we also give up way more data than we should. I'm worried less about my social posts and more about all the breached data AIs are going to run with.
I’m okay with it as long as I’m aware of it. If the platforms are up front about it, then users can choose for themselves whether they want to potentially contribute to training data. It will be interesting to watch the next few years.
How will we be able to choose whether we want to give access to the data when we don't own the data in the first place (at least with how data ownership works now)?
Maybe we should go back in time and use mailing lists or usenet forums.
I'm considering using Power Delete Suite to delete my account and overwrite my previous comments, and maybe leave a couple of my top tech-support comments up so people can still find troubleshooting information.
The issue is that most of the content posted is archived fairly quickly. Deleting/rewriting only hurts the humans that might have gone looking for it. The way I look at it is, if the data is searchable/indexable by search engines (as a proxy for all other tools) at any point of its life cycle then it's essentially permanent.
That's all true. The idea isn't to remove yourself from the internet; once you post to the internet, it's there forever. No, what I'm proposing is to hurt Reddit's chances of being a viable first-party resource to train AI.
Unless you're able to compel a platform to remove your data through something like EU right to be forgotten then the data will remain (in training sets or otherwise). If third parties are able to archive your data, reddit will surely have access to their own archival data and will use the original and edited content for training and let machine learning sort it out.
I'm not saying this to be a defeatist, we need better data ownership and governance laws. Retroactively obfuscating the data will not serve the purpose and provides a false sense of control, which I contend is worse.
Do I care? Sure, a little, someone is going to get paid and it's not going to be me. There's nothing I can do about it and my boss gets paid for my work too.
I'm conflicted, because on one hand I'd like my data left alone. On the other, I realize how important Reddit posts are for tech issues and other troubleshooting topics. When I try to fix my Linux issues, for example, the most helpful top results are from Reddit.
GIGO: garbage in, garbage out. I asked ChatGPT to write a short essay and include a bibliography with URLs. Every URL was a 404, and when I looked up the bibliographic entries, they were nonexistent as well.
That's because you don't understand the tool you're using, and you're using tech-sounding language in the wrong context to look like you do.
GPT models generate text based on the token patterns they learned during training. The URLs they give you don't work because they only have to look legit; it's all statistical patterns.
It's not because they fed it garbage during the semi-supervised training; it's because that is literally what the tool is meant to do. Use the right tool, like Google Scholar, if what you need are sources.
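The hallucinated-URL behavior can be illustrated with a toy character-level model (a deliberately crude stand-in for a real GPT, with made-up example URLs): it only records which character tends to follow which, so its output resembles the URLs it was trained on without any generated URL needing to actually exist.

```python
import random

def train_bigrams(corpus):
    # Record, for each character, every character observed to follow it.
    model = {}
    for a, b in zip(corpus, corpus[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    # Sample one character at a time from the learned follower lists.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

# Hypothetical training text: two URLs that do appear in the corpus.
corpus = "https://example.com/post/123 https://example.org/blog/456 "
model = train_bigrams(corpus)
fake = generate(model, "h", 40, seed=1)
print(fake)  # URL-shaped text stitched together from learned patterns
```

Every character transition in the output was seen in training, which is why the result looks plausible, yet the string as a whole was never in the corpus. Real models do the same thing with tokens instead of characters, at vastly larger scale.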
If data is anonymized, this is fine. However, AI is opening the door to huge issues: misuse of collected data, disinformation, and misinformation. I'd like to be able to enforce my right to deletion of data about myself, if any exists, or to rectify it, which is not possible at the moment and runs against the EU legal framework. AI certainly needs to be regulated, and those rules enforced, before companies, and companies led by governments, use it as an intelligence weapon.
I mean, the Internet Archive is already scraping this data, so if these companies want my data they can get it from there, unfortunately. Although, when possible, I will set auto-delete to 2 weeks to make it harder to find 😃