this post was submitted on 28 Aug 2023
555 points (99.3% liked)

Selfhosted

39067 readers
304 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago

I noticed a bit of panic around here lately and as I have had to continuously fight against pedos for the past year, I have developed tools to help me detect and prevent this content.

As luck would have it, we recently published one of our anti-CSAM checker tools as a Python library that anyone can use. So I thought I could use this to help Lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete all CSAM, or it can run continuously, scanning and deleting all new images as well. The suggested approach is to run it once with --all, and then run it as a daemon and leave it running.
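The two suggested modes could be wired up roughly like this. This is an illustrative sketch only, not the actual fedi-safety CLI: the flag names beyond `--all`, the `--interval` option, and the `scan_all`/`scan_new` callbacks are all assumptions.

```python
# Hypothetical sketch of the two modes described above: a one-off sweep
# with --all, then a daemon loop for new uploads. Names are illustrative.
import argparse
import time


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Scan object storage for flagged images")
    parser.add_argument("--all", action="store_true",
                        help="scan every existing image once, then exit")
    parser.add_argument("--interval", type=int, default=60,
                        help="seconds between daemon-mode checks for new uploads")
    return parser


def run(args, scan_all, scan_new):
    if args.all:
        scan_all()            # one-off sweep over the whole bucket
        return
    while True:               # daemon mode: keep checking new uploads
        scan_new()
        time.sleep(args.interval)
```

In practice you would run the sweep once, confirm the results, and then leave the daemon running under a process supervisor.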

A better option would be to retrieve the exact images uploaded via the Lemmy/pict-rs API, but we're not quite there yet.

Let me know if you have any issues or suggestions for improvement.

EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!

[–] snowe@programming.dev 71 points 1 year ago (3 children)

Hey @db0@lemmy.dbzer0.com, just so you know, this tool is most likely very illegal to use in the USA. Something that your users should be aware of. I don't really have the energy to go into it now, but I'll post what I told my users in the programming.dev discord:

that is almost definitely against the law in the USA. From what I've read, you have to follow very specific procedures to report CSAM as well as retain the evidence (yes, you actually have to keep the pictures), until the NCMEC tells you you should destroy the data. I've begun the process to sign up programming.dev (yes you actually have to register with the government as an ICS/ESP) and receive a login for reports.

If you operate a website, and knowingly destroy the evidence without reporting it, you can be jailed. It's quite strange, and it's quite a burden on websites. Funnily enough, if you completely ignore your website, so much so that you don't know that you're hosting CSAM then you are completely protected and have no obligation to report (in the USA at least)

Also, that script is likely to get you even more into trouble because you are knowingly transmitting CSAM to 'other systems', like dbzer0's aihorde cluster. that's pretty dang bad...

here are some sources:

[–] db0@lemmy.dbzer0.com 31 points 1 year ago* (last edited 1 year ago) (1 children)

Note that the script I posted is not transmitting the images to the AI Horde.

Also keep in mind this tool is fully automated and catches a lot of false positives (due to the nature of the scan, it couldn't be otherwise). So one could argue it's a generic filtering operation, not an explicit knowledge of CSAM hosting. But IANAL of course.

This is unlike cloudflare or other services which compare with known CSAM.

EDIT: That is to say, if you use this tool to forward these images to the govt, they are going to come after you for spamming them with garbage.

[–] snowe@programming.dev 13 points 1 year ago* (last edited 1 year ago) (2 children)

Cloudflare still has false positives, and the NCMEC does not care if they get false positives. If you read some of those links I provided, it wouldn't be considered a generic filtering operation, from how I'm reading it at least. I wouldn't take the chance, especially not with running the software on your own hardware in your own house, split from the server.

I think you're not in the US? So it's probably different for your jurisdiction. Just want to make it clear that in the US, from what I've read up on, this would be considered against the law. You are running software to filter for CSAM, so you are obligated to report it. Up to 1 year of jail time for not doing so.

Nothing that can't be fixed by adding a quarantine option instead of deleting the offending picture. Hopefully someone can upload a patch for that?

[–] db0@lemmy.dbzer0.com 5 points 1 year ago (1 children)

One can easily hook this script to forward to whoever is needed, but I think they might be a bit annoyed after you send them a couple hundred thousand false positives without any csam.

[–] snowe@programming.dev 12 points 1 year ago (1 children)

The problem is you aren't warning people that deleting CSAM without following your applicable laws can potentially get people that use your tool thrown in jail. You went ahead and built the tool without detailing any of the applicable laws around it. Cloudflare explicitly calls out that in their documentation because it's very important. I really like the stuff you put out, but this is not the way to do it. I know lots of people on Lemmy hate CF and any sort of large company, but running this stuff yourself without understanding the law is sure to get someone in trouble.

I don't even know why you think I was recommending for your system to forward the reports to the authorities. I didn't sleep very much last night, so I must have glazed over it, but I see nowhere where I said that.

[–] db0@lemmy.dbzer0.com 5 points 1 year ago (9 children)

Honestly, I think you're grossly overstating the legal danger a random small Lemmy sysadmin is going to get into for running an automated tool like this.

In any case, you've made your point, people can now make their own decisions on whether it's better to pretend nothing is wrong on their instance, or if they want at least this sort of blanket cleanup. Far be it from me to tell anyone what to do.

I don't even know why you think I was recommending for your system to forward the reports to the authorities

You may not have meant it, but you strongly implied something of the sort. But since this is not what you're suggesting, I'm curious to hear what your optimal approach to this problem would be.

[–] hoodlem@hoodlem.me 15 points 1 year ago (2 children)

Ugh, what a mess. Thought about this for a while today and three thoughts started circulating in my head:

  1. Hire an actual lawyer and get firm legal advice on this issue. I think this would fall to the admins, not the devs. Maybe an admin who wanted could volunteer to contact a lawyer? We could do a gofundme for one-time consultation legal fees.

  2. Stop using pictrs completely and instead use links to a third party such as Imgur or whatever. They’re in this business and I’m sure already have dealt with it and have a solution. Yes it sucks that Imgur (or whatever third party) could delete our legitimate images at any time, but IMHO it’s worth it to avoid this headache. At any rate it offloads the liability from an admin. Of course, IANAL and this is a question we would want to ask a lawyer about.

  3. Needing a GPU increases the expenses for an admin significantly. It will start to not be worth it for quite a few to keep their instance running.

Thanks for bringing up this point. This is obviously a nuanced issue that is going to need a well-thought-out solution.

[–] veroxii@aussie.zone 48 points 1 year ago (2 children)

This is extremely cool.

Because of the federated nature of Lemmy, many instances might be scanning the same images. I wonder if there might be some way to pool resources, so that if one instance has already scanned an image, a hash of it can be used to identify it and the whole AI model doesn't need to be rerun.

Still, there's the issue of how you trust the cache, but maybe there's some way for a trusted entity to maintain this list?
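The pooling idea above could be sketched like this: key each image by a content hash so a file scanned once (anywhere in the pool) never hits the model again. The `shared_cache` dict stands in for whatever trusted shared service would actually hold the results; note an exact hash only de-duplicates byte-identical copies, which is the common case for federated mirrors of the same upload.

```python
# Sketch of a shared scan-result cache keyed by content hash.
# shared_cache maps sha256 hex digest -> bool (True = flagged).
import hashlib


def image_key(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def check_image(data: bytes, shared_cache: dict, run_model) -> bool:
    """Return True if the image is flagged; consult the pool cache first."""
    key = image_key(data)
    if key in shared_cache:
        return shared_cache[key]   # someone already scanned this exact file
    verdict = run_model(data)      # expensive GPU model only on cache miss
    shared_cache[key] = verdict    # publish the verdict back to the pool
    return verdict
```

The trust question remains exactly as raised above: an instance consuming this cache has to trust whoever wrote the verdicts into it.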

[–] irdc@derp.foo 12 points 1 year ago* (last edited 1 year ago) (2 children)

How about a federated system for sharing “known safe” image attestations? That way, the trust list is something managed locally by each participating instance.

[–] gabe@literature.cafe 18 points 1 year ago

I think building some kind of system that allows smaller instances to rely on help from larger instances would be extremely awesome.

Like, Lemmy has the potential to lead the fediverse in safety tools if we put the work in.

[–] huginn@feddit.it 8 points 1 year ago (3 children)

Consensus algorithms. But it means there will always be duplicate work.

No way around that unfortunately

[–] neutron@thelemmy.club 5 points 1 year ago (1 children)

I'd rather have a text-only instance with no media at all. Can this be done?

[–] Rentlar@lemmy.ca 12 points 1 year ago (1 children)

Yes, it is definitely possible! Just have no pictrs installed/running with the server. Note that it will still be possible to link external images.

[–] Morgikan@lemm.ee 6 points 1 year ago (2 children)

My understanding was that it's bad practice to host images on Lemmy instances anyway, as it contributes to storage bloat. Instead of coming up with a one-off script solution (albeit a good effort), wouldn't it make more sense to offload the scanning to a third party like Imgur or Catbox, who would already be doing that, and just link images into Lemmy? If nothing else, wouldn't that limit liability for the instance admins?

[–] hoodlem@hoodlem.me 5 points 1 year ago* (last edited 1 year ago)

I was thinking the same thing. Stop storing the images and offload to Imgur or whatever. They likely already have a solution for this issue. Show images inline instead of as a link; looks the same, no liability.

That said, this is tremendously cool. I was given pause, though, by another poster on the thread mentioning the legality of using this in the U.S.

[–] CaptainBlagbird@lemmy.world 35 points 1 year ago

How do you even safely test scripts/tools like this 😵‍💫

[–] sunaurus@lemm.ee 28 points 1 year ago (1 children)

As a test, I ran this on a very early backup of lemm.ee images from when we had very little federation and very few uploads, and unfortunately it is finding a whole bunch of false positives. Just some examples it flagged as CSAM:

  • A Calvin and Hobbes comic
  • The default Lemmy logo
  • Some random user's avatar, which is just a digital drawing of a person's face
  • A Pikachu image

Do you think the parameters of the script should be tuned? I'm happy to test it further on my backup, as I am reasonably certain that it doesn't contain any actual CSAM

[–] db0@lemmy.dbzer0.com 33 points 1 year ago* (last edited 1 year ago)

This is normal. You should be worried if it wasn't catching any false positives, as that would mean a lot of false negatives were slipping through. I am planning to add args to make it more or less severe, but it will never be perfect. So long as it's not catching most images, and most of the false positives are porn or contain children, I consider it worthwhile.

I'll let you know when the functionality for the severity is updated.

[–] Decronym@lemmy.decronym.xyz 27 points 1 year ago* (last edited 1 year ago) (3 children)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters  More Letters
CF             CloudFlare
CSAM           Child Sexual Abuse Material
DNS            Domain Name Service/System
HTTP           Hypertext Transfer Protocol, the Web
nginx          Popular HTTP server

4 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

[Thread #88 for this sub, first seen 28th Aug 2023, 22:25] [FAQ] [Full list] [Contact] [Source code]

[–] Classy@sh.itjust.works 7 points 1 year ago

The JustNoMILers need this bot

[–] bdonvr@thelemmy.club 18 points 1 year ago (2 children)

Worth noting: you seem to be missing dependencies in requirements.txt, notably unidecode and strenum.

Also, this only works with GPU acceleration on NVIDIA (maybe; I messed around with trying to get it to work with AMD ROCm instead of CUDA but didn't get it running).

[–] Rescuer6394@feddit.nl 5 points 1 year ago (1 children)

To run on ROCm, you need a specific version of PyTorch.

But it is still in beta; I would not expect it to run well.

[–] db0@lemmy.dbzer0.com 5 points 1 year ago

Ah, thanks. I'll add them.

[–] A10@kerala.party 17 points 1 year ago (2 children)

Don't have a GPU on my server. How is performance on the CPU?

[–] db0@lemmy.dbzer0.com 34 points 1 year ago

It will be atrocious. You can run it, but you'll likely be waiting for weeks if not months.

[–] Rescuer6394@feddit.nl 9 points 1 year ago (1 children)

The model under the hood is CLIP Interrogator, and it looks like it is just the Torch model.

It will run on CPU, but we can do better: an ONNX version of the model will run a lot better on CPU.

[–] db0@lemmy.dbzer0.com 8 points 1 year ago (2 children)

Sure, or a .cpp port. But it will still not be anywhere near as good as a GPU. However, it might be sufficient for something just checking new images.

[–] cyborganism@lemmy.ca 16 points 1 year ago (1 children)

I don't host a server myself, but can this tool identify the users who posted the images and create a report with their IP addresses?

This could help identify who spreads that content and it can be used to notify authorities. No?

[–] db0@lemmy.dbzer0.com 20 points 1 year ago

No, but it will record the object storage path. We then need a way to connect that path to the pict-rs image ID, and once we do that, connect the pict-rs image ID to the comment or post which uploaded it. I don't know how to do the last two steps, however, so hopefully someone else will step up for this.

[–] mustardman@discuss.tchncs.de 16 points 1 year ago

Thank you for helping make the fediverse a better place.

[–] FriendlyBeagleDog@lemmy.blahaj.zone 13 points 1 year ago (1 children)

Not well versed in the field, but understand that large tech companies which host user-generated content match the hashes of uploaded content against a list of known bad hashes as part of their strategy to detect and tackle such content.

Could it be possible to adopt a strategy like that as a first-pass to improve detection, and reduce the compute load associated with running every file through an AI model?

[–] dan@upvote.au 13 points 1 year ago* (last edited 1 year ago) (2 children)

match the hashes

It's more than just basic hash matching, because it has to catch content even if it's been resized, reduced in quality (lower JPEG quality with more artifacts), had its colour balance changed, etc.

[–] crunchpaste@lemmy.dbzer0.com 9 points 1 year ago (4 children)

Well, we have hashing algorithms that do exactly that, like phash for example.
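A toy version of such a perceptual hash (in the spirit of aHash, the simplest relative of pHash; real deployments should use a vetted library or service, not this sketch): downscale the image, threshold each pixel against the mean, and compare hashes by Hamming distance. Unlike a cryptographic hash, a resized copy produces the same or a nearby bit string.

```python
# Toy average-hash: assumes a square grayscale image given as a 2D list
# of pixel values, with side length divisible by the target size.
def downscale(pixels, size=8):
    """Block-average a square image down to size x size."""
    block = len(pixels) // size
    return [[sum(pixels[y * block + dy][x * block + dx]
                 for dy in range(block) for dx in range(block)) / block ** 2
             for x in range(size)]
            for y in range(size)]


def average_hash(pixels):
    """1 bit per pixel: brighter than the mean or not."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)


def hamming(h1, h2):
    """Number of differing bits; small distance = likely the same image."""
    return sum(a != b for a, b in zip(h1, h2))
```

A 16x16 image and an 8x8 resized copy of it hash to the same 64 bits here, which is exactly the property plain SHA-style hashing lacks.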

[–] rcmaehl@lemmy.world 13 points 1 year ago* (last edited 1 year ago) (1 children)

Hi db0, if I could make an additional suggestion.

Add detection of additional content appended or attached to media files. pict-rs does not reprocess all media types on upload, and it's not hard to attach an entire .zip file or other media within an image (https://wiki.linuxquestions.org/wiki/Embed_a_zip_file_into_an_image)
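The trick in that link works by concatenating an archive after a valid image, so one naive check is to look for bytes past the format's end-of-file marker. This is a simplified sketch (JPEG and PNG only, and a payload that itself contains the marker bytes could slip past the `rfind`), not a robust parser:

```python
# Flag files with extra bytes appended after the image's end marker,
# e.g. "copy /b image.jpg + archive.zip" style smuggling.
JPEG_EOI = b"\xff\xd9"            # JPEG end-of-image marker
PNG_IEND = b"IEND\xaeB`\x82"      # PNG IEND chunk type + CRC


def trailing_bytes(data: bytes) -> int:
    """Return how many bytes follow the image's end marker (0 = clean)."""
    if data.startswith(b"\xff\xd8"):        # JPEG magic
        end = data.rfind(JPEG_EOI)
        marker_len = len(JPEG_EOI)
    elif data.startswith(b"\x89PNG"):       # PNG magic
        end = data.rfind(PNG_IEND)
        marker_len = len(PNG_IEND)
    else:
        return 0                            # unknown format: skip
    if end == -1:
        return 0                            # truncated file, nothing trailing
    return len(data) - (end + marker_len)
```

Anything with a nonzero result could be quarantined or re-encoded rather than served as-is.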

[–] db0@lemmy.dbzer0.com 16 points 1 year ago

Currently I delete on PIL exceptions. I assume if someone uploaded a .zip to your image storage, you'd want it deleted

[–] Ozzy@lemmy.ml 12 points 1 year ago

based db0 releasing great tools and maintaining a great community

[–] Rentlar@lemmy.ca 8 points 1 year ago

Hey db0 thanks for putting in extra effort to help the community (as you have multiple times) when big issues like this crop up on Lemmy.

Despite being a pressing issue this is one that people also are a little reluctant to help solve because of fear of getting in trouble themselves. (How can a server admin develop a method to detect and remove/prevent CSAM distribution without accessing known examples which is extremely illegal?)

Another time being the botspam wave where you developed Overseer in response very quickly. I'm hoping here too devs will join you to work out how to best implement the changes into Lemmy to combat this problem.

[–] sunaurus@lemm.ee 8 points 1 year ago* (last edited 1 year ago) (2 children)

Any thoughts about using this as a middleware between nginx and Lemmy for all image uploads?

Edit: I guess that wouldn't work for external images - unless it also ran for all outgoing requests from pict-rs. I think the easiest way to integrate this with pict-rs would be through some upstream changes that would allow pict-rs itself to call this code on every image.

[–] db0@lemmy.dbzer0.com 6 points 1 year ago

Exactly. If the pict-rs dev allowed us to run an executable on each image before accepting it, it would make things much easier
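The "run an executable on each image before accepting it" idea could look roughly like this. To be clear, this hook does not exist in pict-rs; the exit-code contract (0 = accept, nonzero = reject) and the temp-file handoff are assumptions about how such an upstream change might work:

```python
# Hypothetical pre-accept hook: write the upload to a temp file and let
# an external checker command vote via its exit code. Fail closed.
import subprocess
import tempfile


def accept_upload(data: bytes, checker_cmd: list[str]) -> bool:
    """Return True only if the checker exits 0 for this upload."""
    with tempfile.NamedTemporaryFile(suffix=".img") as tmp:
        tmp.write(data)
        tmp.flush()                                  # checker reads from disk
        result = subprocess.run(checker_cmd + [tmp.name])
        return result.returncode == 0
```

The appeal of this shape is that the image service stays model-agnostic: admins could plug in this tool, a hash matcher, or anything else that speaks "exit 0 or not".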

[–] db0@lemmy.dbzer0.com 4 points 1 year ago

You might, however, be able to integrate with my AI Horde endpoint for NSFW checking between nginx and Lemmy.

https://aihorde.net/api/v2/interrogate/async

This might allow you to detect NSFW images before they are hosted

Just send a payload like this

curl -X 'POST' \
  'https://aihorde.net/api/v2/interrogate/async' \
  -H 'accept: application/json' \
  -H 'apikey: 0000000000' \
  -H 'Client-Agent: unknown:0:unknown' \
  -H 'Content-Type: application/json' \
  -d '{
  "forms": [
    {
      "name": "nsfw"
    }
  ],
  "source_image": "https://lemmy.dbzer0.com/pictrs/image/46c177f0-a7f8-43a3-a67b-7d2e4d696ced.jpeg?format=webp&thumbnail=256"
}'

Then retrieve the results asynchronously like this

{
  "state": "done",
  "forms": [
    {
      "form": "nsfw",
      "state": "done",
      "result": {
        "nsfw": false
      }
    }
  ]
}

Or you could just run the NSFW model locally if you don't have that many uploads.

If you know a way to pre-process uploads before nginx sends them to Lemmy, it might be useful.
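The curl payload above is easy to build from any language; here it is assembled in Python (construction only, no network call shown, and the headers are exactly the ones from the curl example with the Horde's 0000000000 anonymous key):

```python
# Build the request body and headers for the AI Horde async interrogate
# call shown above. Send with any HTTP client, then poll for the result.
import json


def build_interrogate_payload(image_url: str) -> str:
    """JSON body asking the Horde to run the 'nsfw' form on one image."""
    payload = {
        "forms": [{"name": "nsfw"}],
        "source_image": image_url,
    }
    return json.dumps(payload)


HEADERS = {
    "accept": "application/json",
    "apikey": "0000000000",                 # anonymous key from the example
    "Client-Agent": "unknown:0:unknown",
    "Content-Type": "application/json",
}
```

POSTing this to https://aihorde.net/api/v2/interrogate/async and polling the returned job should yield the `"nsfw": false`-style result shown above.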

[–] dandroid@sh.itjust.works 6 points 1 year ago* (last edited 1 year ago) (1 children)

Thank you for this! Awesome work!

By the way, this looks easy to put in a container. Have you considered doing that?

[–] db0@lemmy.dbzer0.com 4 points 1 year ago (1 children)

I don't speak docker, but anyone can send a PR

[–] chrisbit@leminal.space 6 points 1 year ago* (last edited 1 year ago) (1 children)

Thanks for releasing this. After doing a --dry_run, can the flagged files then be removed without re-analysing all the images?

[–] db0@lemmy.dbzer0.com 12 points 1 year ago

Not currently supported. It's on my to-do list.
