One of the admins at lemmy.blahaj.zone asked us to purge a community and all of its users because they thought it was full of child sexual abuse material, aka CSAM, fka kiddy porn. We assured them that we had checked this comm thoroughly and we were satisfied that all of the models on it were of age.
The admin then demanded we purge the comm because they mistook it for CSAM, and claimed that the entire point of the community was to make people think it was CSAM. We vehemently disagreed that that was in fact the point of the community, but they decided to defederate from us anyway. That is of course their choice, but we will not purge our communities or users because someone else makes a mistake of fact, and then lays the responsibility for their mistake at our feet.
If someone made a community intended to fool people into thinking it was kiddy porn, that would be a real problem. If someone of age goes online and pretends -- not roleplays, but pretends with intent to deceive -- to be a child and makes porn, that is a real problem. Nobody here is doing that.
One of the reasons we run our instance the way that we do is that we want it to be inclusive. We don't body shame, and we believe that all adults have a right to sexual expression. That means no adult on our instance is too thin, fat, bald, masculine, old, young, cis, gay, etc., to be sexy, and that includes adults that look younger than some people think they should. Everyone has a right to lust and to be lusted after. There's no way to draw a line that says "you can't like adult people that look like X" without crossing a line that we will not cross.
EDIT: OK, closing this post to new comments. Everything that needs saying has been said. Link to my convo with the blahaj admin here.
@MikeyMongol
Hey, this Stanford Report on CSAM in the Fediverse might explain why instance admins are nervous about even the appearance of illicit content.
https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf
That's valid, but Ada didn't mention anything about the legal side, and specifically said it's not about the technicalities but more about her personal beliefs.
It's also quite weird that she was making this statement about that completely innocent comm, when it might've otherwise been valid for fauxbait; that's an actually concerning comm, though at least the pics there are clearly taken from well-known adult sites and carry their watermarks.
How is that concerning? I think it's a great idea to steer people who are looking for underage content (don't call it porn, because porn is consensual) towards legal content instead.
edit: fixed autocorrect error. changed 'constant' to 'consensual'
There are a few issues with this "report".
In the grand scheme of things, 112 doesn't sound that bad (compared to, say, the amount that Facebook has).
The other issue I have is that they seem to treat fictional material as the same thing as CSAM. They are absolutely not the same thing. CSAM is created by harming/abusing a real, living, human child. Fictional drawings harm no one, because they're not fucking real. I'm so tired of time and resources being WASTED on harmless fiction when that same time and resources could have been used to help actual goddamn kids.
fyi, the current Stanford President just resigned after he was caught falsifying research data
https://www.youtube.com/watch?v=vnhdffULoQ0
@LexiconBexicon
OK. But that has nothing to do with this report. Is he listed as one of the authors?
Correct, he isn't listed as an author. The president's specialty was biology, and the authors of the report are in the Internet Observatory department.
He oversaw the Internet Observatory.
It very much is relevant. Also, one of the authors is David Thiel, the POS who turned Facebook into a conservative hellhole.
It means I'd question anything Stanford does for a while and take it with a grain of salt. The report itself is also deeply flawed.
@LexiconBexicon
I took the trouble to read the report. What in there is deeply flawed?