this post was submitted on 22 Jan 2024
237 points (97.6% liked)

Technology

Alternative link: https://archive.is/qgEzK

all 37 comments
[–] Jaysyn@kbin.social 75 points 8 months ago (6 children)

Surprise, that's completely unenforceable.

Yet more out of touch legislators working with things they can't even begin to understand.

(And I'm not shilling for fucking AI here, but let's call a spade a spade.)

[–] Max_P@lemmy.max-p.me 18 points 8 months ago (3 children)

What baffles me is that those lawmakers think they can solve any problem just by passing a law.

So okay, California requires it. None of the other states do. None of the rest of the Internet does. It doesn't fix anything.

They act like the Internet is like cable, all American companies that "provide" services to end users.

[–] PM_Your_Nudes_Please@lemmy.world 7 points 8 months ago (1 children)

Inb4 AI devs just slap a generic “click this box to confirm you are not in California” verification on their shit.

[–] sorghum@sh.itjust.works 1 points 8 months ago

If the server isn't even in California, would it even apply/be enforceable to them?

[–] 50gp@kbin.social 6 points 8 months ago (1 children)

So you're saying nothing should be done? Great idea.

[–] gsfraley@lemmy.world 2 points 8 months ago* (last edited 8 months ago) (1 children)

Sure, but this is less than nothing. It applies literally zero friction against AI and is completely and totally unenforceable. AND it's a laughing stock for everyone and sucks the oxygen out of better AI regulation groups and think-tanks.

[–] Imgonnatrythis@sh.itjust.works 9 points 8 months ago

Why? If a California corporation is pumping out AI content and it doesn't have watermarks, why can't this be enforced? It's not a solution for every case, but I fail to see how it fails completely.

[–] tyler@programming.dev 3 points 8 months ago

They call it the California effect for a reason.

http://eprints.lse.ac.uk/42097/1/__Libfile_repository_Content_Neumayer, E_Neumayer_Does _California_effect_2012_Neumayer_Does _California_effect_2012.pdf

[–] assassin_aragorn@lemmy.world 7 points 8 months ago

I'm not so sure. A lot of environmental laws require companies to self report exceeding limits, and they actually do. It was a common thing for my contact engineer colleagues to be called up at night to calculate release amounts because their unit had an upset.

A law like this would force companies to at least pretend to comply. None can really say "we're not going to because you can't catch us".

[–] tsonfeir@lemm.ee 6 points 8 months ago

Watermarks? Super important. Helping the unhoused though, nooooo.

[–] RobotToaster@mander.xyz 4 points 8 months ago

Even if it was enforceable, there are watermark removal AI tools.

[–] Brkdncr@lemmy.world 3 points 8 months ago (1 children)

Hmm, technically speaking we could require images be digitally signed, tie it to a CA, and then browsers could display a “this image is not trusted” warning like we do for https issues.

People that don’t source their images right would get their cert revoked.

Would be a win for photo attribution too.
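The sign-and-verify flow being proposed can be sketched in a few lines. A real deployment would use asymmetric signatures (e.g. Ed25519) with certificates chained to a CA so anyone can verify without the secret; the HMAC shared key below is a stand-in just to show the shape of the check a browser would run:

```python
import hashlib
import hmac

# Hypothetical publisher key; in a real scheme this would be a private
# key whose certificate chains up to a trusted CA.
SECRET_KEY = b"publisher-signing-key"

def sign_image(image_bytes: bytes) -> str:
    """Return a hex signature to ship alongside the image."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Browser-side check: show the 'not trusted' warning if False."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, signature)

image = b"\x89PNG...fake image bytes"
sig = sign_image(image)
print(verify_image(image, sig))         # True: signature matches
print(verify_image(image + b"x", sig))  # False: image was altered
```

The revocation part of the proposal is the same machinery HTTPS already uses: pull the publisher's cert, and every image they signed starts triggering the warning.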

[–] Gutless2615@ttrpg.network -1 points 8 months ago (1 children)

This comment shows all the thirty seconds of thought your “Hmm” implies.

[–] Brkdncr@lemmy.world 3 points 8 months ago

You also had 30 seconds but chose to insult instead of contribute. See you at the next comment section.

[–] bluGill@kbin.social 0 points 8 months ago

It is enforceable. Not in all cases, probably not even in the majority, but it only takes a few examples being hit with large fines for everyone doing legal things to take notice. Often you can find enough evidence to get someone to confess to using AI, and that is all the courts need.

Scammers of course will not put this in, but they are already breaking the law, so this might be - like tax evasion - a way to get scammers who you can't get for anything else.

[–] turkalino@lemmy.yachts 31 points 8 months ago (2 children)

Only gonna make things more difficult for good actors while doing absolutely nothing to bad actors

[–] ook_the_librarian@lemmy.world 7 points 8 months ago (1 children)

That's true, but it would be nice to have a codified way of applying a watermark denoting AI. I'm not saying the government of CA is the best consortium, but laws are one way to get a standard.

If a compliant watermarker is then baked into the programs designed for good actors, that's a start.

[–] turkalino@lemmy.yachts 6 points 8 months ago (2 children)

It would be as practical for good actors to simply state an image is generated in its caption, citation, or some other preexisting method. Good actors will retransmit this information, while bad actors will omit it, just like they’d remove the watermark. At least this way, no special software is required for the average person to check if an image is generated.

Bing Image Creator already implements watermarks but it is trivially easy for me to download an image I generated, remove the watermark, and proceed with my ruining of democracy /s

[–] ook_the_librarian@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

I wasn't thinking of a watermark like someone's visible signature. More of a cryptographic signature most users couldn't detect - not a watermark that could be removed with visual effects. Something most people don't know is there, like a printer's signature for anti-counterfeiting.

I don't want to use the word blockchain, but some kind of way that if you want to take a fake video created by someone else, you have a serious math problem on your hands to remove the fingerprints of AI. That way any viral video of unknown origin could easily be determined to be AI without any "look at the hands" arguments.

I'm just saying, a solution only for good guys isn't always worthless. I don't actually think what I'm saying is too feasible (especially as written). Rules for good guys aren't always about taking away freedom; sometimes they're about normalizing discourse. Although my argument is not particularly strong here, as this is a CA law, not a standard. I would like the issue at least discussed at a joint AI consortium.
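For illustration, the simplest (and least robust) form of the invisible tag described above is classic least-significant-bit embedding. Everything here is hypothetical, including the 8-bit "made by AI" tag; pixels are plain 0-255 ints, and a real scheme would need to survive re-encoding, which raw LSBs do not:

```python
MARK = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical 8-bit "made by AI" tag

def embed(pixels, mark=MARK):
    """Overwrite the low bit of the first len(mark) pixel values."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, n=len(MARK)):
    """Read back the low bit of the first n pixel values."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 57, 130, 255, 0, 88, 91, 140, 63]
marked = embed(pixels)
print(extract(marked) == MARK)  # True: the tag survives a lossless copy

# But even a one-step brightness shift corrupts the low bits, which is
# exactly the fragility the thread is arguing about:
shifted = [p + 1 if p < 255 else p for p in marked]
print(extract(shifted) == MARK)  # False
```

This is why the comment above reaches for something cryptographic and keyed rather than a pattern anyone can strip by re-saving the file.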

[–] Zoboomafoo@slrpnk.net 1 points 8 months ago

If your plan requires good actors to put in extra effort, it's a bad plan

[–] tyler@programming.dev 1 points 8 months ago

How in the world would this make anything more difficult for good actors?

[–] capital@lemmy.world 28 points 8 months ago (2 children)

Watermarking AI-generated content might sound like a practical approach for legislators to track and regulate such material, but it's likely to fall short in practice. Firstly, AI technology evolves rapidly, and watermarking methods can become obsolete almost as soon as they're developed. Hackers and tech-savvy users could easily find ways to remove or alter these watermarks.

Secondly, enforcing a universal watermarking standard across all AI platforms and content types would be a logistical nightmare, given the diversity of AI applications and the global nature of its development and deployment.

Additionally, watermarking doesn't address deeper ethical issues like misinformation or the potential misuse of deepfakes. It's more of a band-aid solution that might give a false sense of security, rather than a comprehensive strategy for managing the complexities of AI-generated content.

This comment brought to you by an LLM.

[–] cmnybo@discuss.tchncs.de 18 points 8 months ago (1 children)

It would also be impossible to force a watermark on open source AI image generators such as stable diffusion since someone could just download the code, disable the watermark function and compile it or just use an old version.

[–] bluGill@kbin.social 6 points 8 months ago

You can do that, but if you are in California you have just broken the law. If California enforces the law, you will discover projects all make a big deal about this, since users can be arrested for violating the law if they don't handle it correctly. Most likely it would just be turned on by default for all versions, but there is also the possibility of a large warning about turning it off. Note that if you go with a warning, nobody from your project should travel to California, as you would then be liable for helping someone violate the law.

[–] Tak@lemmy.ml 7 points 8 months ago

Plus, what if the creator simply doesn't live in California? What are they gonna do about it?

[–] schnurrito@discuss.tchncs.de 13 points 8 months ago
[–] QuadratureSurfer@lemmy.world 10 points 8 months ago (2 children)

The problem here will be when companies start accusing smaller competitors/startups of using AI when they haven't used it at all.

It's getting harder and harder to tell when a photograph is AI generated or not. Sometimes they're obvious, but it makes you second guess even legitimate photographs of people because you noticed that they have 6 fingers or their face looks a little off.

A perfect example of this was posted recently, where 80-90% of people thought that the AI pictures were real and that the real pictures were AI generated.

https://web.archive.org/web/20240122054948/https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html

And where do you draw the line? What if I used AI to remove a single item in the background like a trashcan? Do I need to go back and watermark anything that's already been generated?

What if I used AI to upscale an image or colorize it? What if I used AI to come up with ideas, and then painted it in?

And what does this actually solve? Anyone running a misinformation campaign is just going to remove the watermark and it would give us a false sense of "this can't be AI, it doesn't have a watermark".

The actual text in the bill doesn't offer any answers. So far it's just a statement that they want to implement something "to allow consumers to easily determine whether images, audio, video, or text was created by generative artificial intelligence."

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB942

[–] Darkenfolk@dormi.zone 3 points 8 months ago

I wouldn't really call that a perfect example, they really went out of their way to edit the "real" people photos to look unrealistically smooth.

I mean yeah, technically it's a "real people vs AI people" comparison, but realistically it's "fake photo vs fake photo".

[–] Tja@programming.dev 0 points 8 months ago

I agree completely.

To make it more ironic, one of the popular uses of AI is to remove watermarks...

[–] JCreazy@midwest.social 6 points 8 months ago

If your computer is connected through a VPN to a different state, does that mean you can get around it?

[–] randon31415@lemmy.world 5 points 8 months ago

... and also abortion doctors to carry medicine that reverses abortion if a woman wants it.

Come on dems! Republicans are blowing us out of the water on requiring absurd technology that doesn't exist. We should try to enforce the 3 laws of robotics!

[–] indigomirage@lemmy.ca 4 points 8 months ago* (last edited 8 months ago)

Given how unenforceable this is (a sin of omission or sourcing from another jurisdiction is all that's needed to skirt it), will we be seeing a formalized 'certificate of authenticity' demanded by people to highlight things that are not AI?

(Maybe NFTs will finally find their utility? I don't know...)

[–] Eggyhead@kbin.social 3 points 8 months ago

I honestly wouldn’t mind AI imagery simply being labeled as such.

[–] skarlow181@lemmy.world 3 points 8 months ago

Completely impractical. Whether something is AI generated or manipulated with Photoshop or in the darkroom really doesn't make a difference. AI isn't special here; photo manipulation is about as old as the photograph itself. It would be much better to put some effort into signing authentic images, including a whole chain of trust up to the actual camera. Luckily the Content Authenticity Initiative is already working on that.

[–] AnonTwo@kbin.social 3 points 8 months ago

It'd be nice to trace an artwork back to its source, but I don't think this is actually practical.