this post was submitted on 25 Oct 2023
72 points (80.0% liked)

Technology

top 27 comments
[–] BombOmOm@lemmy.world 83 points 1 year ago* (last edited 1 year ago) (4 children)

Drawn art depicting minors in sexual situations has been deemed protected as free speech in the US. It's why, at least in the US, you don't have to worry about the anime girl that's 17 landing you in prison on child porn charges. The reasoning: there is no victim, the anime girl is not sentient, therefore the creation of that art is protected as free speech.

I suspect a similar thing will happen with this. As long as it does not depict a real person, the completely invented person is not sentient and there is no victim, so this will fall under free speech. At least in the US.

However, it is likely a very, very bad idea to have any photo-realistic art of this manner, as it may not be clear to authorities if it is from AI or if there is in fact a person you are victimizing. Doubly so if you download this from someone else, as you don't know if that is a real person either.

[–] fubo@lemmy.world 59 points 1 year ago* (last edited 1 year ago) (2 children)

Deepfakes of an actual child should be considered defamatory use of a person's image; but they aren't evidence of actual abuse the way real CSAM is.

Remember, the original point of the term "child sexual abuse material" was to distinguish images/video made through the actual abuse of a child, from depictions not involving actual abuse -- such as erotic Harry Potter fanfiction, anime characters, drawings from imagination, and the like.

Purely fictional depictions, not involving any actual child being abused, are not evidence of a crime. Even deepfake images depicting a real person, but without their actual involvement, are a different sort of problem from actual child abuse. (And should be considered defamatory, same as deepfakes of an adult.)

But if a picture does not depict a crime of abuse, and does not depict a real person, it is basically an illustration, same as if it was drawn with a pencil.

[–] Uranium3006@kbin.social 9 points 1 year ago (1 children)

Remember, the original point of the term “child sexual abuse material” was to distinguish images/video made through the actual abuse of a child, from depictions not involving actual abuse – such as erotic Harry Potter fanfiction, anime characters, drawings from imagination, and the like.

Although that distinction lasted about a week before the same bad actors who cancel people over incest fanfic started calling all of the latter CSEM too.

[–] fubo@lemmy.world 11 points 1 year ago* (last edited 1 year ago) (1 children)

As a sometime fanfic writer, I do notice when fascists attack sites like AO3 under a pretense of "protecting children", yes.

[–] Uranium3006@kbin.social 6 points 1 year ago

And it's usually fascists, or at least people who may not consider themselves as such but think and act like fascists anyways.

[–] Pyro@pawb.social 2 points 1 year ago (3 children)

Add in an extra twist: hopefully, if the sickos are at least satisfied with the AI stuff, they won't need the "real" thing.

Sadly, a lot of it does evolve from wanting to "watch" to wanting to do

This is the part where I disagree, and I would love for people to prove me wrong. Whether this is true or false will probably be the deciding factor in allowing or restricting "artificial CSAM".

[–] topinambour_rex@lemmy.world 2 points 1 year ago

Sadly, a lot of it does evolve from wanting to "watch" to wanting to do

Have you got a source for this?

[–] fubo@lemmy.world 2 points 1 year ago (1 children)

Some actually fetishize causing suffering.

[–] JohnEdwa@sopuli.xyz 3 points 1 year ago* (last edited 1 year ago)

Some people are sadists and rapists, yes, regardless of what age group they'd want to do it with.

[–] HubertManne@kbin.social 8 points 1 year ago (1 children)

This is sort of a problem in regular porn too. I'm not sure if the acting has improved, but sometimes I'm turned off because I can't tell whether the acts are in some way coerced. Especially given some of the recent stuff with modeling outfits where they take the performers' passports and shit.

[–] gregorum@lemm.ee 2 points 1 year ago

Yeah, I get turned off by porn if the actors don't seem all that into it. "Possibly coerced" sets off alarms, although I hardly ever run across that.

[–] systemglitch@lemmy.world 15 points 1 year ago

Impossible to stop. Good luck with AI in the future fellow humans.

[–] autotldr@lemmings.world 8 points 1 year ago

This is the best summary I could come up with:


NEW YORK (AP) — The already-alarming proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday.

In a written report, the U.K.-based Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.

In a first-of-its-kind case in South Korea, a man was sentenced in September to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.

What IWF analysts found were abusers sharing tips and marveling about how easy it was to turn their home computers into factories for generating sexually explicit images of children of all ages.

While the IWF’s report is meant to flag a growing problem more than offer prescriptions, it urges governments to strengthen laws to make it easier to combat AI-generated abuse.

Users can still access unfiltered older versions of Stable Diffusion, however, which are “overwhelmingly the software of choice ... for people creating explicit content involving children,” said David Thiel, chief technologist of the Stanford Internet Observatory, another watchdog group studying the problem.


The original article contains 1,013 words, the summary contains 223 words. Saved 78%. I'm a bot and I'm open source!

[–] BarrierWithAshes@kbin.social 5 points 1 year ago

Same thing is gonna happen (if not already) with animal abuse videos and images. Silver lining is that at least no actual animals are getting hurt but still. Grim.