this post was submitted on 28 Aug 2023
59 points (95.4% liked)

Other samples:

Android: https://github.com/nipunru/nsfw-detector-android

Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw

Keras (MIT): https://github.com/bhky/opennsfw2

I feel it's a good idea for those building native clients for Lemmy to implement projects like these and run offline inference on feed content for the time being, to cover content that isn't marked NSFW but should be.
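For illustration, here's a minimal sketch of what such an offline check could look like with the opennsfw2 package listed above; the threshold is an arbitrary choice and the `predict_image` helper is the one that project documents, so treat this as a rough sketch rather than a drop-in implementation.

```python
# Minimal sketch: flag images the server did not mark NSFW.
# Assumes opennsfw2's predict_image() helper, which returns the
# probability that an image is NSFW (see the repo linked above).
import opennsfw2 as n2

NSFW_THRESHOLD = 0.8  # arbitrary cutoff; tune for your client


def should_blur(image_path: str, marked_nsfw: bool) -> bool:
    """Return True if the client should hide or blur this image."""
    if marked_nsfw:
        return True  # already flagged by the poster/instance
    probability = n2.predict_image(image_path)
    return probability >= NSFW_THRESHOLD
```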

What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?

Edit:

There's also this, which takes a bit more effort to implement properly but provides a hash that can be used for reporting (a rough sketch of the general pattern is below): https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX

Python package (MIT): https://pypi.org/project/opennsfw-standalone/
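For the AppleNeuralHash2ONNX route, the general pattern is roughly: run the exported ONNX model with onnxruntime, then turn the embedding into a compact bit string you can store or report. The input size, normalization, and the seed-matrix projection step the real script performs are specific to that repo, so the values below are placeholders rather than a faithful reimplementation.

```python
# Rough sketch of producing a perceptual hash from an exported ONNX model.
# The real AppleNeuralHash2ONNX script also applies a seed projection matrix
# and model-specific preprocessing; sizes and names here are placeholders.
import numpy as np
import onnxruntime as ort
from PIL import Image


def neural_hash(model_path: str, image_path: str, size: int = 360) -> str:
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name

    # Resize and normalize the image to roughly [-1, 1], NCHW layout.
    image = Image.open(image_path).convert("RGB").resize((size, size))
    array = np.asarray(image, dtype=np.float32) / 255.0 * 2.0 - 1.0
    array = array.transpose(2, 0, 1)[np.newaxis, ...]

    # Run the model and threshold the embedding into bits, then hex-encode.
    embedding = session.run(None, {input_name: array})[0].flatten()
    bits = (embedding >= 0).astype(np.uint8)
    return np.packbits(bits).tobytes().hex()
```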

[–] OhNoMoreLemmy@lemmy.ml 4 points 1 year ago (1 children)

Because any detector has to be based on machine learning, you can open-source all the code as long as you keep the model weights and training data private.

But there's a fundamental question here that comes from Lemmy being federated: how can you give CSAM-detecting code/binaries to every instance owner without trolls getting access to them?

Some instances will be run by trolls, and black-box access is enough to create adversarial examples that will bypass the model; you don't need the source code.

[–] scrubbles@poptalk.scrubbles.tech 2 points 1 year ago (1 children)

That discussion is happening. Right now the prevailing idea is that it's an instance-admin opt-in feature, where you can host the tool yourself or use a hosted tool elsewhere to catch this material. On top of that, instance admins should be allowed to block federating images, so images uploaded on other instances are not copied to us and are instead requested directly from their home instance. That would help cut down on the spread of bad material, and if something was purged on its home instance it could be purged everywhere.
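To make that flow concrete, here is a purely hypothetical sketch of the "request images directly from the home instance" idea; the toggle and function names are made up and do not come from Lemmy's codebase.

```python
# Hypothetical sketch of the "request images from the home instance" idea.
# None of these names come from Lemmy's actual code; this only illustrates
# the flow described in the comment above.
from typing import Optional

CACHE_REMOTE_IMAGES = False  # hypothetical instance-admin toggle


def resolve_image_url(original_url: str, cached_url: Optional[str]) -> str:
    """Pick which URL clients should load for a federated post's image."""
    if CACHE_REMOTE_IMAGES and cached_url is not None:
        # Caching remote images is allowed and a local copy exists.
        return cached_url
    # Otherwise point at the home instance's copy, so a purge there
    # effectively removes the image everywhere.
    return original_url
```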

[–] toothbrush@lemmy.blahaj.zone 1 points 1 year ago* (last edited 1 year ago)

Just chiming in here to say that this is very much like security through obscurity. In this context, the "secure" part is being sure that the images you host are OK.

For bad actors, using social engineering to get at a secret banlist is much easier than beating an open-source AI whose bugs the community collectively fixes whenever trolls manage to evade it. It's not that easy to get around image filters like this, and having to do weird things to the pictures just to get past the filter could be enough work to keep most trolls from posting.

Using a central instance that filters all images is also not good, because the person operating that service becomes responsible for a large chunk of your images, creating a single point of failure in the fediverse (and something that could be monetised to our detriment).

Closed source can't be the answer either, because if someone breaks the filter, the community can't fix it; only the developer can. So either the dev team is online 24/7 or it's paid, making hosting a fediverse server dependent on someone's closed-source product.

I do think, however, that disabling image federation should be an option. Turning image federation off for a particular server for a limited time could be a very effective tool against these kinds of attacks!