The problem here will be when companies start accusing smaller competitors/startups of using AI when they haven't used it at all.
It's getting harder and harder to tell whether a photograph is AI-generated. Sometimes it's obvious, but it makes you second-guess even legitimate photographs of people because you notice someone has six fingers or their face looks a little off.
A perfect example of this was posted recently, where 80-90% of people thought the AI pictures were real and the real pictures were AI-generated.
https://web.archive.org/web/20240122054948/https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html
And where do you draw the line? What if I used AI to remove a single item in the background like a trashcan? Do I need to go back and watermark anything that's already been generated?
What if I used AI to upscale an image or colorize it? What if I used AI to come up with ideas, and then painted it in?
And what does this actually solve? Anyone running a misinformation campaign is just going to remove the watermark, and it would give us a false sense of "this can't be AI, it doesn't have a watermark".
The actual text in the bill doesn't offer any answers. So far it's just a statement that they want to implement something "to allow consumers to easily determine whether images, audio, video, or text was created by generative artificial intelligence."
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB942
I wouldn't really call that a perfect example; they went out of their way to edit the "real" people photos to look unrealistically smooth.
I mean, yeah, technically it's a "real people vs. AI people" take, but realistically it's a "fake photo vs. fake photo" take.
I agree completely.
To make it more ironic, one of the popular uses of AI is to remove watermarks...