Santabot


Santabot is an automated moderation tool, designed to reduce moderation load and remove bad actors in a way that is transparent to all and mostly resistant to abuse and evasion.

This is a community devoted to meta discussion for the bot.

founded 2 months ago
submitted 2 months ago* (last edited 2 months ago) by auk@slrpnk.net to c/santabot@slrpnk.net
 
 

Santa is a robot moderator. Santa will decide if you're naughty or nice. Santa has no chill.

Hi everyone!

The slrpnk admins were nice enough to let me try a little moderation experiment. I made a moderation bot called Santa, which tries to ease the amount of busywork for moderators, and reduce the level of unpleasantness in conversations.

If someone's interactions attract a lot of downvotes relative to their upvotes, they are probably not contributing to the community, even if they are not technically breaking any rules. That's the simple core of it. On top of that, the bot gives more weight to users whom other people upvote frequently, so it is much more accurate than simply adding up raw vote totals. In testing, it did a pretty good job of figuring out who was productive and who wasn't.
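The actual algorithm lives in the bot's repository; as a rough illustrative sketch of the weighting idea (all names, numbers, and structure here are hypothetical, not the bot's real code), trust-weighted vote tallying might look like:

```python
from collections import defaultdict

def user_scores(votes, voter_trust):
    """Sum votes per author, weighting each vote by the voter's trust.

    votes: list of (voter, author, value) where value is +1 or -1.
    voter_trust: dict mapping voter -> weight earned from that voter's
    own upvoted history (frequently-upvoted users count for more).
    """
    scores = defaultdict(float)
    for voter, author, value in votes:
        scores[author] += value * voter_trust.get(voter, 1.0)
    return dict(scores)

votes = [
    ("alice", "carol", +1),  # well-regarded voter upvotes carol
    ("bob", "carol", -1),    # low-trust voter downvotes carol
    ("alice", "dave", -1),
]
trust = {"alice": 3.0, "bob": 0.5}
print(user_scores(votes, trust))  # {'carol': 2.5, 'dave': -3.0}
```

Under a raw tally carol would sit at zero; weighting by who cast each vote is what separates her from dave.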

Most people upvote more than they downvote. To accumulate a largely negative opinion from the community, your content has to be very dislikable. The current configuration bans less than 3% of the users that it evaluates, but there are some vocal posters in that 3%, which is the whole point.

It is currently live and moderating !pleasantpolitics@slrpnk.net. It is experimental. Please don't test it by posting bad content there. If you have a generally good posting history, it will probably let you get away with being obnoxious, and it won't be a good test. Test it by posting good things that you think will attract real-life jerks, and let it test its banhammer against them instead of you.

FAQ

Q: What if I am banned?

A: You may be a jerk. Sorry you had to find out this way.

It's not hard to accumulate more weighted upvotes than downvotes. In the current configuration, 97% of the users on Lemmy manage to do it. If you are one of the 3%, it is because the community consensus is that your content is more negative than positive.

It's also possible that the pattern of voting arrived at some outcome for you that really isn't fair. I studied the bot's moderation decisions a lot, trying to get the algorithm right, but it's impossible for any moderation system to be perfect. If you feel strongly that you were moderated unfairly, comment below and I'll look into it and tell you some examples of things you posted that drew negative rank, and what I think of the bot's decision.

Q: How long do bans last?

A: Bans are transient and based on user sentiment going back one month from the present day. If you have not posted much in the last month, even a single downvoted comment could result in a ban. That's an unfortunate by-product of making it hard for throwaway accounts to cause problems. If that happened to you, it should be easy to reverse the ban within a few days by posting in good faith outside of the moderated community and bringing your average back up.
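The one-month window can be sketched as a simple cutoff filter (hypothetical code for illustration, not the bot's actual implementation):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)

def windowed_score(interactions, now):
    """Score only interactions newer than the one-month window.

    interactions: list of (timestamp, weighted_score) pairs. Older
    items simply age out, which is why a ban reverses on its own
    once the downvoted content drops outside the window.
    """
    cutoff = now - WINDOW
    return sum(score for ts, score in interactions if ts >= cutoff)

now = datetime(2024, 8, 1)
history = [
    (datetime(2024, 6, 15), -5.0),  # older than 30 days: ignored
    (datetime(2024, 7, 20), +2.0),
    (datetime(2024, 7, 28), +1.5),
]
print(windowed_score(history, now))  # 3.5 -- the June downvotes aged out
```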

If you are a frequent poster on Lemmy and received a ban, you probably have significant negative rank in your average, and the ban may effectively last until your posting habits change and your previous interactions age past the one-month limit.

Q: How can I avoid getting banned?

A: Engage positively with the community, respect others’ opinions, and contribute constructively. Santabot’s algorithm values the sentiment of trusted community members, so positive interactions are key.

If you want to hear examples of positive and negative content from your history, let me know and I can help. Pure voting totals are not always a good guideline to what the bot is reacting to.

Q: How does it work?

A: The code is in a Codeberg repository. There's a more detailed description of the algorithm there, or you can look at the code.

Q: Won't this create an echo chamber?

A: It might. I looked at its moderation decisions a lot and it's surprisingly tolerant of unpopular opinions as long as they're accompanied by substantial posting outside of the unpopular opinion. More accurately, the Lemmy community is surprisingly tolerant of a wide range of opinions, and that consensus is reflected when the bot parses the global voting record.

If you only post your unpopular opinion, or you tend to get into arguments about it, that will be a much bigger issue than it is for someone who expresses an unusual opinion in a productive fashion.

Q: Won't people learn to fake upvotes for themselves and trick the bot?

A: They might. The algorithm is resistant to this, but not perfectly so. To be honest, I am more worried about coordinated vote manipulation than about the bot misjudging aboveboard users.

What do you think?

It may sound like I've got it all figured out, but I don't think I do. Please let me know what you think. The bot is live on !pleasantpolitics@slrpnk.net so come along and give it a try. Post controversial topics and see if the jerks arrive and overwhelm the bot. Or, just let me know in the comments. I'm curious what the community thinks.

Thank you!

Edit: I have retuned the bot and adjusted some numbers above to reflect its current configuration. It was based on pure parity of downvotes against upvotes, but it now weights downvotes more, so the number of users banned has gone up from about 0.5% to about 3%.

Hi everyone.

I am mostly pleased with how Santabot performed in its first live test today, but I also found issues which I am fixing. I made it a lot more strict and fixed a bug. Details to follow:

Mostly, I was surprised and pleased that the bot was reaching the right judgments about the people I was talking with. However, this thread had one comment that seemed reasonable, and one comment that seemed like exactly the kind of inflammatory content I wanted the bot to eliminate. After looking into it for a while, I had to revisit my attitude toward users who have a lot of "positive rank" counterbalanced by a lot of "negative rank." I now think the ratio of upvoted content to downvoted content that it's reasonable to ask someone to maintain should be higher than 1:1, so I made it 2:1. That change banned the user posting the offending comment while leaving the user posting the legitimate comment alone.

I didn't have time for a full detailed analysis, but I did some spot checks on how the change affected other judgments, and generally I liked what I saw:

  • The percentage of total users banned has gone way up, from 0.4% to 2.8%.
  • @Kaboom@reddthat.com, @LibertyLizard@slrpnk.net, and other specific users that I looked into and thought should remain unbanned, are still unbanned by the bot's analysis.
  • Some users who weren't banned by this morning's configuration, but who seemed like they should be, are now banned again.
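The stricter 2:1 requirement described above can be expressed as a tiny predicate (illustrative only; the real threshold logic is in the repository):

```python
def is_banned(positive_rank, negative_rank, ratio=2.0):
    """Ban unless weighted positive rank is at least `ratio` times
    the weighted negative rank. The old configuration was effectively
    ratio=1.0, i.e. simple parity of positive against negative."""
    return positive_rank < ratio * negative_rank

# A user with a lot of positive rank but nearly as much negative rank:
print(is_banned(10.0, 6.0))             # True under the new 2:1 rule
print(is_banned(10.0, 6.0, ratio=1.0))  # False under the old parity rule
```

This is how the same user can flip from unbanned to banned without any change in their voting record: only the required ratio moved.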

In addition to the algorithm issues, I found a bad bug. When banning someone, the bot is supposed to remove that user's recent postings, to stop throwaway accounts from coming in, posting, and being banned while their content stays up. Instead, it was removing all recent content from all users whenever it banned anybody, and my test suite was too simple-minded to catch the problem until the bot was live, removed a random innocent user's comment, and sent them a message that they were banned. Oops. I put in a partial fix as soon as I saw it. Maybe I shouldn't have used Futurama Santa as the avatar.
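The bug class described here is a missing filter on the removal step; in sketch form (hypothetical code, not the actual patch), the fix is to scope removals to the banned user as well as to recency:

```python
def posts_to_remove(posts, banned_user, cutoff):
    """Return the posts to remove when banning `banned_user`.

    The buggy version filtered only on recency, which swept up every
    user's recent content; the fix also matches on the author.
    posts: list of dicts with "author" and "created" keys.
    """
    return [
        p for p in posts
        if p["author"] == banned_user and p["created"] >= cutoff
    ]

posts = [
    {"author": "troll", "created": 10},
    {"author": "innocent", "created": 11},  # recent, but must be left alone
    {"author": "troll", "created": 3},      # same user, but too old
]
removed = posts_to_remove(posts, "troll", cutoff=5)
print([p["author"] for p in removed])  # ['troll']
```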

That's the update. More to come. If you have questions, comments, or concerns, please by all means say something and I'll do my best to respond.