this post was submitted on 28 Aug 2023
1264 points (97.2% liked)
Lemmy.World Announcements
you are viewing a single comment's thread
What if we used deep-learning-based automoderators to instantly nuke these posts as they appear? For privacy and efficiency, let's make the model open source and community maintained... maybe even start a separate donation for maintaining it, and maybe even make it a public API!
How would these models be trained? Surely they would need a huge amount of such data to train on.
Yeah, that can't be a community or open source project. Maybe the result can be open source and free to implement, but even if you can artificially generate the training data, that all has to be a strictly controlled process.
Maybe we can make an SFW bot and train it on general porn... most of the communities don't allow porn anyway, right?
For sure let us know when it's done
I agree, but it's also something to be careful of.
OK, how can that AI distinguish those posts in a way that doesn't produce false detections? That's the main issue with leaving the job of mods to an AI.
Instead of removing, maybe hide the post and let a mod review it? I say let it report all pornographic posts and let the mods choose what should be allowed.
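The "flag, don't delete" idea can be sketched in a few lines; the threshold value, function name, and score input here are all assumed for illustration, since no such API exists on Lemmy today:

```python
# Sketch of "hide and let a mod review it": posts whose (hypothetical) NSFW
# score crosses a threshold are hidden and queued for human review, never
# deleted automatically, so false detections only cost a mod a quick look.

REVIEW_THRESHOLD = 0.8  # illustrative cutoff, would need tuning


def triage(post_id: str, nsfw_score: float, review_queue: list) -> str:
    """Return the action taken for a post; removal stays a human decision."""
    if nsfw_score >= REVIEW_THRESHOLD:
        review_queue.append(post_id)  # hidden pending a moderator's decision
        return "hidden_for_review"
    return "visible"


queue = []
print(triage("post-123", 0.95, queue))  # high score: hidden and queued
print(triage("post-456", 0.10, queue))  # low score: left visible
```

The design point is that the classifier only controls visibility, while the final allow/remove decision stays with the moderators.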
That's great for detecting possible raids, but once they're detected I see it as an over-bureaucratisation of the process.
I think it's a wrong assumption that a human moderator would be any more reliable than a sufficiently elaborate and well-trained machine-learning system.