this post was submitted on 18 Oct 2023
165 points (94.6% liked)


What could go wrong?

https://hivemoderation.com/

This website lists Reddit as a customer.

[–] PeleSpirit@lemmy.world 45 points 11 months ago (2 children)

Wow, they really are dumb. Do they really think it's just about the mods banning trolls? Mods foster community, sometimes they're the only ones posting content, and they try to negotiate with users to calm them down. This might work in the large communities if you trusted that Reddit, or the company they hired, wouldn't insert their own biases. I don't trust them.

[–] seaQueue@lemmy.world 15 points 11 months ago (1 children)

Reddit isn't one to let understanding their own site get in the way of money making opportunities.

[–] didnt_readit@lemmy.world 4 points 11 months ago

Based on…basically all of their actions ever…I don’t think they actually understand their own site at all.

[–] lvxferre@lemmy.ml 3 points 10 months ago

> They foster community, sometimes they're the only ones posting content, and they try and negotiate with the users to calm them down.

This. So much this. I'd say that if more than 20% of your moderation actions are removing content and/or banning users, you're either power-tripping or fucking lazy. Most of the time you should be doing the things you mentioned: talking with the users, posting and commenting, and so on.

[–] Treczoks@kbin.social 28 points 11 months ago

Honestly, this cannot be worse than some of the moderators.

[–] brax@sh.itjust.works 21 points 11 months ago

I can't wait for it to ban the admins and nuke the shit subreddits that should have been shut down ages ago.

[–] Cephirux@lemmings.world 21 points 11 months ago (3 children)

As long as the AI is capable enough, I don't see what's wrong with it, and I understand if Reddit decides to utilize AI for financial reasons. I don't know how capable the AI is, and it's certainly not perfect, but AI is a technology and it will improve over time. If a job can be automated, I don't see why it should not be automated.

[–] HardlightCereal@lemmy.world 11 points 11 months ago (2 children)

AI is often only trained on neurotypical cishet white men. What happens when a community of colour is full of people who don't have the same conversational norms as white people, and the bot thinks they're harassing each other? What happens when a neurodivergent community talk to each other in a neurodivergent way? Autistic people often get called "robotic", will the AI feel the same way and ban them as bots? What happens when an AI is used to moderate a trans community, and flags everything as NSFW because its training data says "transgender" is a porn category?

[–] Cephirux@lemmings.world 7 points 11 months ago (1 children)

I think it's a bold assumption that AI is often trained only on neurotypical cishet white men, though it's a possibility. I don't fully understand how AI works or how companies train their AI, so I cannot comment any further. I admit AI has its downsides, but AI also has its upsides, same as humans. Reddit is free to utilize AI to moderate subreddits, and users are free to complain or leave Reddit if they deem the AI more harmful than helpful.

[–] HardlightCereal@lemmy.world -3 points 11 months ago (2 children)

Did you write your comment with chatgpt?

[–] Cephirux@lemmings.world 10 points 11 months ago (1 children)

Nope, just my personality. I think I have grammar mistakes too.

[–] chaosppe@lemmy.world 0 points 11 months ago* (last edited 11 months ago)

Just checked this with an AI detector and it said human. Bot 1, human 0. That sentence kinda undermined your point about keeping humans only.

[–] lvxferre@lemmy.ml 0 points 10 months ago

> AI is often only trained on neurotypical cishet white men.

Can you back up this claim? Or are you just assuming, and expecting people to be suckers/gullible and "chrust" you?

> What happens when a community of colour is full of people who don't have the same conversational norms as white people

In this statement alone, there are not one but two instances of racist discourse:

  1. Conflating culture (conversational norms) with race.
  2. Singling out "white people", but lumping together the others under the same label ("people of colour").

You are being racist. What you're saying there boils down to "those brown people act in weird ways because they're brown". Don't.

> What happens when a neurodivergent community talk to each other in a neurodivergent way? Autistic people often get called "robotic", will the AI feel the same way and ban them as bots?

The reason why autists are often called "robotic" has to do with voice prosody. It does not apply to text.

And the very claim that you're making - that autists would write in a way that an "AI" would confuse them with bots - sounds, frankly, dehumanising and insulting towards them. And reinforcing the stereotype that they're robotic.

> [From another comment] Did you write your comment with chatgpt?

Passive aggressively attacking the other poster won't help.


Odds are that you're full of good intentions writing the above, but frankly? Go pave hell back on Reddit; you're being racist and dehumanising.

[–] AA5B@lemmy.world 5 points 11 months ago* (last edited 11 months ago)

The problem is the perverse incentives around "service". Yes, ideally, things that can be automated should be. But what about when it's insufficient, can't satisfy the customer, or is just worse service? Those cases will always exist, but will the companies provide an alternative?

We’re all familiar with voice menus and chatbots as customer service, and there are many cases where they provide service faster and cheaper than a human could. However, what we remember is how useless they were that one time, and how much effort it took to escape that hell and talk to someone who could actually help.

If this AI is just better language recognition, or if it makes me type complete sentences just to point me to the same useless FAQ yet again, I'll scream.

[–] lvxferre@lemmy.ml 2 points 10 months ago

> As long as the AI is capable enough

The model-based decision making is likely not capable enough. Especially not for the way Reddit Inc. would likely use it: leaving it in charge of removing users and content assumed to be problematic, instead of flagging them for manual review.

I'm especially sceptical of the site's claim that Hive Moderation has "human-level accuracy". Especially over time, as people are damn smart when it comes to circumventing automated moderation. Also, let's not forget that human accuracy varies quite a bit, and you definitely don't want average accuracy, you want good accuracy.

Regarding the talk about biases in another comment: models are prone to amplify biases, not just reproduce them. As such, a model doesn't even need to be trained on only a certain cohort to be biased.

[–] orcrist@lemm.ee 19 points 11 months ago

Automated spam detection has been around for decades. As trolls and spammers get more sophisticated, the technology to combat them will continue to evolve. I don't see any new situation to be surprised or concerned about. Of course any kind of content moderation system can be implemented poorly, but that's a different claim.

[–] hightrix@lemmy.world 6 points 11 months ago

Is anyone surprised? I’d bet they are using ai powered bots to increase engagement and repost content.

Sacrifice it all for that incompetently inept incoming IPO.

[–] Grottyknight@lemmy.world 3 points 11 months ago (1 children)

My ten-year-old account was banned with no appeal for "report abuse". I reported exactly once: a post, not marked NSFW, with images of dead children. Go figure.

[–] SpicyLizards@reddthat.com 2 points 11 months ago