this post was submitted on 09 Jul 2023
2066 points (97.3% liked)

Fediverse

17770 readers
33 users here now

A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of "federation" and "universe".

Getting started on Fediverse;

founded 5 years ago

The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I'm sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

[–] hawkwind@lemmy.management 17 points 1 year ago

IMO, likes need to be handled with supreme prejudice by the Lemmy software. A lot of thought needs to go into this. There are so many cases where the software could reject a likely fake like that would have near zero chance of rejecting valid likes. Putting this policing on instance admins is a recipe for failure.

[–] thoralf@discuss.tchncs.de 17 points 1 year ago (1 children)

People may not like it but a reputation system could solve this. Yes, it's not the ultimate weapon and can surely be abused itself.

But it could help to prevent something like this.

How could it work? Well, each server could retain a reputation score for each user it knows. Every up- or downvote is then modified by this value.

This will not solve the issue entirely, but will make it less easy to abuse.
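The per-user reputation idea above could be sketched roughly like this. This is purely illustrative: the class name, the update steps, and the starting score are all made-up assumptions, not anything Lemmy actually implements.

```python
# Hypothetical sketch of a per-user reputation ledger; constants and
# names are invented for illustration, not part of any Lemmy code.
from collections import defaultdict

class ReputationLedger:
    """Each server keeps a local reputation score for each user it knows."""
    def __init__(self, default=0.5):
        self.scores = defaultdict(lambda: default)  # user -> score in [0, 1]

    def record_feedback(self, user, positive):
        # Nudge reputation up on good behaviour, down on bad;
        # penalise faster than we reward to make abuse costly.
        step = 0.1 if positive else -0.2
        self.scores[user] = min(1.0, max(0.0, self.scores[user] + step))

    def weighted_vote(self, user, vote):
        # An up/downvote (+1 / -1) is scaled by the voter's reputation.
        return vote * self.scores[user]

ledger = ReputationLedger()
ledger.record_feedback("alice@lemmy.example", positive=True)
total = (ledger.weighted_vote("alice@lemmy.example", +1)
         + ledger.weighted_vote("bot123@spam.example", +1))
print(round(total, 2))  # established accounts count more than fresh ones
```

As the commenter says, this doesn't solve the problem outright: a patient attacker can farm reputation. It just raises the cost of each fake vote.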

[–] patatahooligan@lemmy.world 14 points 1 year ago (2 children)

Ok, but what would the reputation score be based on that can't be manipulated or faked?

[–] badcommandorfilename@lemmy.world 10 points 1 year ago (2 children)

Well, you see Kif, my strategy is so simple an idiot could have devised it: reputation is adjusted by "votes", so that users can up- or downvote one another's reputation.

Thus solving the problem, once and for all.

[–] stevedidWHAT@lemmy.world 16 points 1 year ago (2 children)

You mean to tell me that copying the exact same system Reddit was using, and couldn't keep bots out of, is still vulnerable to bots? Wild

Until we find a smarter way or at least a different way to rank/filter content, we’re going to be stuck in this same boat.

Who’s to say I don’t create a community of real people who are devoted to manipulating votes? What’s the difference?

The issue at hand is the post ranking system/karma itself. But we’re prolly gonna be focusing on infosec going forward given what just happened

[–] Mesa@programming.dev 16 points 1 year ago* (last edited 1 year ago) (1 children)

I don't have experience with systems like this, but just as sort of a fusion of a lot of ideas I've read in this thread, could some sort of per-instance trust system work?

The more any instance interacts positively (posting, commenting, etc.) with main instance 'A,' that particular instance's reputation score gets bumped up on main instance A. Then, use that score with the ratio of votes from that instance to the total amount of votes in some function in order to determine the value of each vote cast.

This probably isn't coherent, but I just woke up, and I also have no idea what I'm talking about.
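The per-instance trust idea above can be made concrete with a small sketch. Everything here (the update rule, the constants, the damping by vote share) is an assumption for illustration; Lemmy has no such mechanism.

```python
# Toy sketch of per-instance trust on a "main" instance; the update
# rule and all constants are hypothetical.
trust = {}  # instance domain -> trust score in [0, 1]

def record_interaction(instance, positive=True, step=0.05):
    """Positive interactions (posts, comments) bump an instance's trust."""
    current = trust.get(instance, 0.1)
    trust[instance] = max(0.0, min(1.0, current + (step if positive else -step)))

def vote_weight(instance, votes_from_instance, total_votes):
    """Scale a vote by instance trust, damped when one instance casts
    a disproportionate share of all votes on a post."""
    share = votes_from_instance / max(total_votes, 1)
    return trust.get(instance, 0.1) * (1.0 - share)

record_interaction("lemmy.world")
record_interaction("lemmy.world")
# An unknown instance casting 900 of 1000 votes is heavily discounted:
print(round(vote_weight("burst.example", 900, 1000), 3))
```

The vote-share ratio mentioned in the comment is what does most of the work here: even a trusted instance can't dominate a post's score by flooding it.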

[–] fermuch@lemmy.ml 14 points 1 year ago (1 children)

Something like that already happened on Mastodon! Admins got together and marked instances as "bad". They made a list. And after a few months, everything went back to normal. This kind of self organization is normal on the fediverse.

[–] RealNooshie@lemmy.world 14 points 1 year ago (1 children)

Fake/bot accounts have always existed. How many times has a "YouTuber" ran a "giveaway" in their comments section?

[–] festus@lemmy.ca 12 points 1 year ago (1 children)

Yes but you presumably had to go through a captcha to make each one, whereas here someone can spin up an instance and 'create' 1 million accounts immediately.

[–] pingveno@lemmy.ml 13 points 1 year ago (2 children)

I wonder if there's a machine learning technique that can be used to detect bot-laden instances.
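One simple starting point, well short of real machine learning: per-instance vote statistics fed into an anomaly score. The data and threshold below are made up; a real detector would use richer features (account age, vote timing, text similarity, and so on).

```python
# Toy anomaly score for spotting bot-laden instances; all numbers
# are invented for illustration.
import statistics

# votes cast per account, grouped by instance (made-up sample data)
votes_per_account = {
    "lemmy.ml":     [3, 5, 2, 8, 4],
    "lemmy.world":  [6, 1, 4, 3],
    "spam.example": [40, 41, 39, 40],  # high-volume AND uniform: suspicious
}

def suspicion(accounts):
    # Bot farms tend to vote a lot *and* very uniformly, so a high
    # mean combined with a tiny spread is a red flag.
    mean = statistics.mean(accounts)
    spread = statistics.pstdev(accounts) or 0.001  # avoid division by zero
    return mean / spread

flagged = [inst for inst, a in votes_per_account.items() if suspicion(a) > 10]
print(flagged)
```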

[–] Rearsays@lemmy.ml 13 points 1 year ago

I would imagine this is the same with bans. I expect there will eventually be a set of reputation-watchdog servers used in place of this "everyone follows the same modlog" approach. The concept of trusting everyone out of the gate seems a little naive.

[–] milicent_bystandr@lemmy.ml 12 points 1 year ago (14 children)

I wonder if it's possible ...and not overly undesirable... to have your instance essentially put an import tax on other instances' votes. On the one hand, it's a dangerous direction for a free and equal internet; but on the other, it's a way of allowing access to dubious communities/instances, without giving them the power to overwhelm your users' feeds. Essentially, the user gets the content of the fediverse, primarily curated by the community of their own instance.
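The "import tax" above amounts to a per-instance discount on incoming votes. A minimal sketch, with an entirely hypothetical tariff table:

```python
# Toy "import tariff" on federated votes; the table and default
# weight are assumptions, nothing like this exists in Lemmy.
tariff = {
    "home.example":     1.0,   # local votes count in full
    "friendly.example": 0.8,
    "dubious.example":  0.1,   # still federated, but heavily discounted
}

def score(votes):
    """votes: list of (instance, +1 or -1) pairs.
    Unknown instances get a conservative default weight."""
    return sum(tariff.get(instance, 0.3) * v for instance, v in votes)

print(round(score([("home.example", 1),
                   ("dubious.example", 1),
                   ("dubious.example", 1)]), 2))
```

Two votes from the dubious instance add less than a fifth of one local vote, which matches the commenter's goal: keep the content reachable without letting those instances dominate the feed.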

[–] AbyssalChord@feddit.de 11 points 1 year ago* (last edited 1 year ago) (6 children)

I'm not a fan of up- and downvotes, for the aforementioned reasons among others. Classic forums ran fine without any of it.

[–] alvvayson@lemmy.world 15 points 1 year ago (1 children)

Classic forums still exist.

Voting does allow the cream to rise to the top, which is why reddit was much better than a forum.

Honestly, I think part of the problem is that companies don't have an incentive to fight bots or spam: higher numbers of users and engagement make them look better to investors and advertisers.

I don't think it's that difficult a problem to solve. It should be quite possible to detect patterns that distinguish real users from bots.

We will see how the fediverse handles it.

[–] rockyrikoko@lemmy.world 10 points 1 year ago (2 children)

Assuming a user's upvote history or karma ever meant anything, this demonstrates perfectly that it's useless on Lemmy.

[–] AbouBenAdhem@lemmy.world 9 points 1 year ago (7 children)

Here’s an idea: adjust the weights of votes by how predictable they are.

If account A always upvotes account B, those upvotes don’t count as much—not just because A is potentially a bot, but because A’s upvotes don’t tell us anything new.

If account C upvotes a post by account B, but there was no a priori reason to expect it to based on C’s past history, that upvote is more significant.

This could take into account not just the direct interactions between two accounts, but how other accounts interact with each of them, whether they’re part of larger groups that tend to vote similarly, etc.
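The "weight votes by how surprising they are" idea can be sketched with a simple frequency model. The probability estimate below (historical upvote rate with Laplace smoothing) is an assumption chosen for illustration; the comment's fuller version, using group-level voting patterns, would need a real statistical model.

```python
# Sketch of weighting a vote by its unpredictability; the history
# data and smoothing scheme are invented for illustration.
history = {
    # (voter, author) -> (upvotes_given, total_interactions)
    ("A", "B"): (50, 50),   # A always upvotes B: perfectly predictable
    ("C", "B"): (1, 10),    # C almost never upvotes B
}

def vote_weight(voter, author):
    ups, total = history.get((voter, author), (0, 0))
    # Laplace smoothing so brand-new voter/author pairs get p = 0.5.
    p_upvote = (ups + 1) / (total + 2)
    # A fully expected upvote (p near 1) carries almost no information.
    return 1.0 - p_upvote

print(round(vote_weight("A", "B"), 2))  # predictable: contributes little
print(round(vote_weight("C", "B"), 2))  # surprising: counts almost in full
```

A nice property of this scheme is that a bot farm's mutual upvotes rapidly become perfectly predictable, so their weight decays toward zero on its own.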

[–] PearlsSwineEtc@lemmy.world 9 points 1 year ago

Thank you for this. I'd upvote you, but you've already taken care of that.
