this post was submitted on 27 Feb 2025
Technology

[–] MoonlightFox@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

First off, I am sex positive, pro porn, and pro sex work; I don't believe sex work should be shameful, and there is nothing wrong with buying intimacy from a willing seller.

That said, the current state of the industry and the conditions for many professionals raise serious ethical issues, coercion being the biggest.

I am torn about AI porn. On one hand it can produce porn without suffering; on the other hand it might be trained on other people's work and take people's jobs.

I think another major point to consider going forward is whether it is problematic if people can generate all sorts of illegal stuff. If it is AI generated it is a victimless crime, so should it be illegal? I personally feel uncomfortable with the thought of several things being legal, but I can't logically argue for it being illegal without a victim.

[–] KillingTimeItself@lemmy.dbzer0.com 0 points 1 week ago (3 children)

i have no problem with ai porn assuming it's not based on any real identities; i think that should be considered identity theft or impersonation or something.

Outside of that, it's more complicated, but i don't think it's a net negative. People will still thrive in the porn industry; it's been around since it's been possible, and i don't see why it wouldn't continue.

[–] ubergeek@lemmy.today 0 points 1 week ago (1 children)

i have no problem with ai porn assuming it’s not based on any real identities

With any model in use, currently, that is impossible to meet. All models are trained on real images.

[–] KillingTimeItself@lemmy.dbzer0.com 0 points 1 week ago (1 children)

With any model in use, currently, that is impossible to meet. All models are trained on real images.

yes but if i go to thispersondoesnotexist.com and generate a random person, is that going to resemble the likeness of any given real person close enough to perceptibly be them?

You are literally using the schizo argument right now. "If an artist creates a piece depicting no specific person, but his understanding of persons is based inherently on the facial structures of other people that he knows and recognizes, therefore he must be stealing their likeness"

[–] ubergeek@lemmy.today 0 points 1 week ago (1 children)

No, the problem is a lack of consent of the person being used.

And now, being used to generate depictions of rape and CSAM.

[–] KillingTimeItself@lemmy.dbzer0.com 0 points 1 week ago* (last edited 1 week ago) (1 children)

yeah but like, legally, is this even a valid argument? Sure, there is technically probably like 0.0001% of the average person being used in any given result of an AI generated image. I don't think that gives anyone explicit rights to that portion, however.

That's like arguing that a photographer who captured you in a random photo in public that became super famous is now required to pay you royalties for being in that image, even though you are literally just a random fucking person.

You can argue about consent all you want, but at the end of the day, if you're posting images of yourself online, you are consenting to other people looking at them, at a minimum. Arguably, you're implicitly consenting to other people being able to use those images (because you can't stop people from doing that, except through copyright, which isn't very strict in most cases).

And now, being used to generate depictions of rape and CSAM.

i don't see how this is even relevant, unless the person in question is a minor, a victim, or becoming a victim; otherwise it's no different than me editing an image of someone to make it look like they got shot in the face. Is that shitty? Sure. But i don't know of any laws that prevent you from doing that, unless it's explicitly to do with something like blackmail, extortion, or harassment.

The fundamental problem here is that you're in an extremely uphill position to even begin the argument of "well it's trained on people so therefore it uses the likeness of those people"

Does a facial structure recognition model use the likeness of other people? Even though it can detect any person that meets the requirements established by its training data? There is no suitable method to begin to break down at what point that person's likeness begins, and at what point it ends. It's simply an impossible task.

[–] ubergeek@lemmy.today 1 points 1 week ago (1 children)

yeah but like, legally, is this even a valid argument?

Personally, legal is only what the law allows the wealthy to do, and provides punishments for the working class.

Morally, that's what you're doing when you use AI to generate CSAM. It's the same reason we ban all pre-created CSAM as well: you are victimizing the person every single time.

i don't see how this is even relevant, unless the person in question is a minor, a victim, or becoming a victim;

It makes them a victim.

But i don’t know of any laws that prevent you from doing that, unless it’s explicitly to do with something like blackmail, extortion, or harassment.

The law exists to protect the ruling class while not binding them, and to bind the working class without protecting them.

Does a facial structure recognition model use the likeness of other people?

Yes.

Even though it can detect any person that meets the requirements established by its training data? There is no suitable method to begin to break down at what point that person's likeness begins, and at what point it ends. It's simply an impossible task.

Exactly. So, without consent, it shouldn't be used. Periodt.

[–] KillingTimeItself@lemmy.dbzer0.com 1 points 1 week ago (1 children)

Personally, legal is only what the law allows the wealthy to do, and provides punishments for the working class.

if you have schizophrenia, sure. Legal is what the law defines as ok. Whether or not people get charged for it is another thing. The question is "do you have the legal right to do it or not"

Morally, that's what you're doing when you use AI to generate CSAM. It's the same reason we ban all pre-created CSAM as well: you are victimizing the person every single time.

legally, the reasoning behind this is that it's just extremely illegal; there are almost no cases, if not zero, where it would be ok or reasonable, and therefore the moral framework tends to be developed around that. I don't necessarily agree with it always being victimization, because there are select instances where it just doesn't really make sense to consider it that; there are specific acts you commit that would make it victimization. However, i like to subscribe to the philosophy that it is "abusive" material, and therefore innately wrong. Like blackmail, i find that to be a little bit more strict and conducive to that sort of definition.

It makes them a victim.

at one point in time, yes. Perpetually, in some capacity, they will exist as having been a victim, or having been victimized at one point. I also don't really consider it healthy or productive to engage in a "once a victim, always a victim" mentality, because i think it sets a questionable precedent for mental health care. Semantically, someone who was a victim once is still a victim of that specific event; however, it's a temporally relevant victimization. i just think people are getting a little loose with the usage of that word recently.

I'm still not sure how it makes that person a victim, unless it meets one of the criteria i laid out, in which case it very explicitly becomes an abusive work. Otherwise it's questionable how you would even attribute victimization to the victim in question, because there is no explicit victim to even consider. I guess you could consider everybody even remotely tangentially relevant to be a victim, but that then opens a massive black hole of logical reasoning which can't trivially be closed.

To propose a hypothetical here: let's say there is a person whom we will call Bob. Bob has created a depiction of "abuse" in such a horrendous manner that even laying your eyes upon such a work will forever ruin you. We will define the work in question to be a piece of art, depicting no person in particular, arguably barely resembling a person at all; the specific definition remains to the reader. You could hypothetically, in this instance, argue that even viewing the work is capable of making people a "victim" of it, however you want to work that one out.

The problem here is that Bob hasn't created this work in complete isolation, because he's just a person; he interacts with people, has a family, has friends, acquaintances. He's a normal person, aside from the horrors beyond human comprehension he has created. Therefore, in some capacity, the influence of these people in his life has to have influenced the work he engaged in on that piece. Are the people who know/knew Bob victims of this work as well, regardless of whether or not they have seen it? Does the very act of being socially related to Bob make them a victim of the work? For the purposes of the hypothetical, we'll assume they haven't seen the work, and that he has only shown it to people he doesn't personally know.

I would argue, and i think most people would agree with me, that there is no explicit tie in between the work that bob has created, and the people he knows personally. Therefore, it would be a stretch to argue that because those people were tangentially relevant to bob, are now victims, even though they have not been influenced by it. Could it influence them in some external way, possibly causing some sort of external reaction? Yeah, that's a different story. We're not worried about that.

This is essentially the problem we have with AI: there is no explicit resemblance to any given person (unless defined, which i have already explicitly opted out of), nor to whatever it has inherently based the image off of via training (which i have also, somewhat explicitly, opted out of as well). There are two fundamental problems here that need to be answered. First of all, how are these people being victimized? By posting images publicly on the internet? Seems like they consented to people at least being aware of them, if not to some degree manipulating images of them, because there is nothing to stop that from happening (as everyone already knows from NFTs). And second of all, how are we defining these victims? What's the mechanism we use to determine the identity of these people? Otherwise, we're just schizophrenically handwaving the term around, calling people victims when we have no explicit way of determining that. You cannot begin to call someone a victim if it's not even known whether they were victimized or not. You're setting an impossible precedent here.

Even if you can summarily answer those two questions in a decidedly explicit manner, it's still questionable whether that would even matter, because now you would have to demonstrate some form of explicit victimization and damage resulting from that victimization. Otherwise you're just making the argument of "it's mine because i said so"

The law exists to protect the ruling class while not binding them, and to bind the working class without protecting them.

again if you're schizo, sure.

Yes.

on a loosely defined basis, yeah, in some capacity it uses the likeness of that person, but to what degree? How significantly? If the woman in the Mona Lisa was 4% some lady the artist saw three times a week due to their habits/routine, would that make her suddenly entitled to some of that art piece in particular? What about the rest of it? You're running down an endless corridor of infinitely unfalsifiable and falsifiable statements. There is no clear answer here.

Exactly. So, without consent, it shouldn’t be used. Periodt.

you need to explicitly define "consent" and "use", because without defining those, it's literally impossible to even begin determining the end position here.

[–] ubergeek@lemmy.today 1 points 1 week ago (1 children)

I refuse to debate ideas on how to make ethical CSAM with you.

Go find a pedo to figure out how you want to try to make CSAM, and you can well akshully all you want.

all of my arguments have explicitly removed any form of anything closely resembling CSAM to the point of being illegal under existing law, or at the very least, extremely questionable.

The only thing i haven't excluded is the potential to use an AI trained explicitly on humans, with no children in the dataset, to generate porn of someone "under the age of 18", which it has no basis in reality for and cannot functionally do. That would be the only actual argument i can think of where it wouldn't already be illegal, or at least extremely questionable. Everything else i've provided a sufficient exclusion for.

Have fun calling me a pedo for no reason though, i guess.

[–] drmoose@lemmy.world 0 points 1 week ago (1 children)

Identity theft only makes sense for businesses. I can sketch naked Johnny Depp in my sketchbook and do whatever I want with it, and no one can stop me. Why should an AI tool be any different if distribution is not involved?

[–] KillingTimeItself@lemmy.dbzer0.com 0 points 1 week ago (1 children)

revenge porn, simple as. Creating fake revenge porn of real people is still to some degree revenge porn, and i would argue stealing someone's identity/impersonation.

To be clear, your example is a sketch of Johnny Depp; i'm talking about a video of a person that resembles the likeness of another person, where the entire video is manufactured. Those are, fundamentally, two different things.

[–] drmoose@lemmy.world 0 points 1 week ago (1 children)

Again you're talking about distribution

[–] KillingTimeItself@lemmy.dbzer0.com 0 points 1 week ago (1 children)

sort of. There are arguments that private ownership of these videos is also weird and shitty; however, i think impersonation and identity theft are going to be the two most broadly applicable instances of relevant law here. Otherwise i can see issues cropping up.

Other people do not have any inherent rights to your likeness; you should not simply be able to pretend to be someone else. That's considered identity theft/fraud when we do it with legally identifying papers; it's a similar case here, i think.

[–] drmoose@lemmy.world 0 points 1 week ago (1 children)

But the thing is, it's not a relevant law here at all, as nothing is being distributed and no one is being harmed. Would you say the same thing if AI were not involved? Sure, it can be creepy and weird and whatnot, but it's not inherently harmful, or at least it's not obvious how it would be.

[–] KillingTimeItself@lemmy.dbzer0.com 1 points 1 week ago (1 children)

the only perceivable reason to create these videos is either for private consumption, in which case, who gives a fuck, or for public distribution; otherwise you wouldn't create them. And you'd have to be a bit of a weird breed to create AI porn of specific people for private consumption.

If AI isn't involved, the same general principles would apply, except it might include more people now.

[–] drmoose@lemmy.world 1 points 1 week ago (1 children)

I've been thinking about this more and I think one interesting argument here is "toxic culture growth". As in, even if the thing is not distributed, it might grow undesired toxic cultures through indirect exposures (like social media or forum discussions) even without the direct sharing.

I think this is slippery to the point of government mind control but maybe there's something valuable to research here either way.

[–] KillingTimeItself@lemmy.dbzer0.com 2 points 1 week ago* (last edited 1 week ago) (1 children)

I’ve been thinking about this more and I think one interesting argument here is “toxic culture growth”. As in even if the thing is not distributed it might grow undesired toxic cultures through indirect exposures (like social media or forums discussions) even without the direct sharing.

this is another big potential as well. Does it perpetuate cultural behaviors that you want to see in society at large? Similar things like this have resulted from misogyny and the relevant history of feminism.

It's a whole thing.

I think this is slippery to the point of government mind control but maybe there’s something valuable to research here either way.

i think there is probably a level of government regulation that is productive, i'm just curious about how we even define where that line starts and ends, because there is not really an explicitly clear point, unless you have solid external inferences to start from.

[–] drmoose@lemmy.world 1 points 1 week ago (1 children)

Honestly I'm quite happy with the "social justice warrior" approach. Sure, it's flawed and weak to manipulation for now, but as a strategy for society to self-correct it's quite brilliant.

I'm optimistic society should be able to correct itself on this issue as well, though considering the current climate, the correction might be very chaotic.

i mean, i'm not sure modern social justice is working as intended given the political landscape, but historically small communities do manage to self-regulate very effectively, that's for sure. I will give you that.

The only effective way to mandate something at a societal level is going to be laws, i.e. government; otherwise you're going to have an extremely disjointed and culturally diverse society, which isn't necessarily a bad thing.

[–] UltraGiGaGigantic@lemmy.ml 0 points 1 week ago* (last edited 1 week ago) (1 children)

thispersondoesnotexist.com

Refresh for a new fake person

this one's a classic.

[–] surewhynotlem@lemmy.world 0 points 1 week ago (1 children)

without a victim

It was trained on something.

[–] MoonlightFox@lemmy.world 0 points 1 week ago (1 children)

It can generate combinations of things that it is not trained on, so there is not necessarily a victim. But of course there might be something in there; I won't deny that.

However, the act of generating something does not create a new victim unless it is someone's likeness and it is shared? Or is there something ethical here that I am missing?

(Yes, all current AI is basically collective piracy of everyone's IP, but that's beside the point)

[–] surewhynotlem@lemmy.world 0 points 1 week ago (1 children)

Watching videos of rape doesn't create a new victim. But we consider it additional abuse of an existing victim.

So take that video and modify it a bit. Color correct or something. That's still abuse, right?

So the question is, at what point in modifying the video does it become not abuse? When you can't recognize the person? But I think simply blurring the face wouldn't suffice. So when?

That's the gray area. AI is trained on images of abuse (we know it's in there somewhere). So at what point can we say the modified images are okay because the abused person has been removed enough from the data?

I can't make that call. And because I can't make that call, I can't support the concept.

[–] Petter1@lemm.ee 0 points 1 week ago (3 children)

With this logic, any output of any pic gen AI is abuse. I mean, we can be 100% sure that there is CP in the training data (it would be a very big surprise if not), and all output is a result of all the training data, as far as I understand the statistical behaviour of photo gen AI.

[–] UltraGiGaGigantic@lemmy.ml 0 points 1 week ago (2 children)

There is no ethical consumption while living a capitalist way of life.

[–] Petter1@lemm.ee 1 points 1 week ago

😆as if this has something to do with that

But to your argument: It is perfectly possible to tune capitalism using laws to get veerry social.

I mean, every "actually existing communist country" is at its core still a capitalist system; how would you argue against that?

[–] Miaou@jlai.lu 1 points 1 week ago

ML always there to say irrelevant things

[–] surewhynotlem@lemmy.world 0 points 1 week ago (1 children)

We could be sure of it if AI curated its inputs, which really isn't too much to ask.

[–] Petter1@lemm.ee 1 points 1 week ago

Well, AI is by design not able to curate its own training data, but the companies training the models would in theory be able to. It is just not feasible to sanitise this huge stack of data.

[–] ubergeek@lemmy.today -1 points 1 week ago

With this logic, any output of any pic gen AI is abuse

Yes?

[–] TheGrandNagus@lemmy.world 0 points 2 weeks ago* (last edited 1 week ago) (1 children)

I think another major point to consider going forward is whether it is problematic if people can generate all sorts of illegal stuff. If it is AI generated it is a victimless crime, so should it be illegal? I personally feel uncomfortable with the thought of several things being legal, but I can't logically argue for it being illegal without a victim.

I've been thinking about this recently too, and I have similar feelings.

I'm just gonna come out and say it without beating around the bush: what is the law's position on AI-generated child porn?

More importantly, what should it be?

It goes without saying that the training data absolutely should not contain CP, for reasons that should be obvious to anybody. But what if it wasn't?

If we're basing the law on pragmatism rather than emotional reaction, I guess it comes down to whether creating this material would embolden paedophiles and lead to more predatory behaviour (i.e. increasing demand), or whether it would satisfy their desires enough to cause a substantial drop in predatory behaviour (i.e. lowering demand).

And to know that, we'd need extensive and extremely controversial studies. Beyond that, even in the event that allowing this stuff to be generated is an overall positive (and I don't know whether it would be or not), will many politicians actually call for it to be allowed? Seems like the kind of thing that could ruin a political career. Nobody's touching that with a ten foot pole.

[–] michaelmrose@lemmy.world 0 points 1 week ago (1 children)

Let's play devil's advocate. You find Bob the pedophile with pictures depicting horrible things. Two things are true.

  1. Although you can't necessarily help Bob, you can lock him up, preventing him from doing harm, and permanently brand him as a dangerous person, making it less likely for actual children to be harmed.

  2. Bob can't claim actual depictions of abuse are AI generated and force you to find the unknown victim before you can lock him and his confederates up. If the law doesn't distinguish between simulated and actual abuse, then in both cases Bob just goes to jail.

A third factor is that this technology, and the inherent lack of privacy on the internet, could potentially pinpoint numerous unknown pedophiles who can, even if they haven't done any harm yet, be profitably prosecuted to society's ultimate profit, so long as you value innocent kids more than perverts.

[–] shalafi@lemmy.world 0 points 1 week ago (1 children)

Am I reading this right? You're for prosecuting people who have broken no laws?

I'll add this: I have sexual fantasies (not involving children) that would be repugnant to me IRL. Should I be in jail for having those fantasies, even though I would never act on them?

This sounds like some Minority Report hellscape society.

[–] michaelmrose@lemmy.world 0 points 1 week ago* (last edited 1 week ago) (1 children)

Am I reading this right? You’re for prosecuting people who have broken no laws?

No, I'm for making it against the law to simulate pedophile shit, as the net effect is fewer abused kids than if such images were legal. Notably, you are free to fantasize about whatever you like; it's the actual creation and sharing of images that would be illegal. Far from being a Minority Report hellscape, it's literally the present way things already are in many places.

[–] Petter1@lemm.ee 0 points 1 week ago (1 children)

Lol, how can you say that so confidently? How would you know that with less AI CP you get fewer abused kids? And what is the logic behind it?

Demand doesn't really drop if something is illegal (same goes for drugs). The only thing you reduce is supply, which just results in making the now-illegal thing more valuable (this attracts shady money grabbers that hate regulation / don't give a shit about law enforcement and therefore do illegal stuff to get money), and means you have to pay a shitton of government money maintaining all the prisons.

[–] michaelmrose@lemmy.world -1 points 1 week ago (1 children)

Basically every pedo in prison is one who isn't abusing kids. Every pedo on a list is one who won't be left alone with a young family member. Actually reducing AI CP doesn't, by itself, do anything.

[–] AwesomeLowlander@sh.itjust.works 1 points 1 week ago (1 children)

Wrong. Every pedo in prison is one WHO HAS ALREADY ABUSED A CHILD, whether directly or indirectly. There is an argument to be made, and some studies that show, that dealing with Minor Attracted People before they cross the line can be effective. Unfortunately, to do this we need to be able to have a logical and civil conversation about the topic, and the current political climate does not allow for that conversation to be had. The consequence is that preventable crimes are not being prevented, and more children are suffering for it in the long run.

[–] michaelmrose@lemmy.world -1 points 1 week ago (1 children)

People are locked up all the time for just possessing child porn without having abused anyone. This isn't a bad thing because they are a danger to society.

No, they are not locked up because they're a danger to society. They're locked up because possessing CP is indirectly contributing to the abuse of the child involved.