this post was submitted on 28 Jul 2023
30 points (100.0% liked)

Technology


As AI-generated content fills the Internet, it’s corrupting the training data for models to come. What happens when AI eats itself?

top 12 comments
[–] blivet@kbin.social 6 points 1 year ago (1 children)

So in order for data to be useful to AIs, AI-generated content will have to be flagged as such. Sounds good to me.

[–] admiralteal@kbin.social 4 points 1 year ago (2 children)

But malicious actors don't want their generated data to be recognizable to LLMs. They want it to impersonate real people in order to advance advertising/misinformation goals.

Which means that even if they started flagging LLM-generated content as such, only the most malicious and vile LLM content would be out there training models in the future.

I don't see any solution to this on the horizon. Pandora is out of the box.

[–] blivet@kbin.social 6 points 1 year ago (1 children)

If the quality of AI-generated content degrades to the point where it’s useless that is also fine with me.

[–] RoboRay@kbin.social 5 points 1 year ago (1 children)

Some would argue that this is the starting position.

[–] blivet@kbin.social 3 points 1 year ago

Yes, that’s pretty much where I’m at.

[–] Machinist3359@kbin.social 1 point 1 year ago (1 children)

To flip it, this means that only AI companies which responsibly manage their initial datasets will be successful. They can't simply scrape and pray; they need some level of vetting for their input.

More labor intensive? Sure, but AI companies aren't entitled to the quick and easy solutions they started with...

[–] admiralteal@kbin.social 1 point 1 year ago

That doesn't follow.

It means the AI companies that don't behave responsibly will have a huge advantage over the ones that do.

[–] curiosityLynx@kglitch.social 5 points 1 year ago (1 children)

Cannibalism always carries an increased risk of brain disease. Seems fitting that this applies to AI too.

[–] grahamsz@kbin.social 5 points 1 year ago

I like Jathan Sadowski's term for it: Habsburg AI.

[–] TimeSquirrel@kbin.social 5 points 1 year ago

It's basically RE:RE:RE:RE:RE:RE email chains and corrupted JPEGs that have been reposted and recompressed a thousand times, but for AI.

[–] admiralteal@kbin.social 3 points 1 year ago

Dead internet theory seems like an inevitable future that we're all racing toward. I don't see any way to avoid it. It's a tragedy of the commons with no organizing body that can step in and prevent private actors from destroying everything. Worse, we're more concerned with those private actors being strong and competitive, which only accelerates us toward the doomed endgame.

[–] anon2481@kbin.social 2 points 1 year ago

Great. It'll be easy to tell AI-generated content apart when it's just spewing gibberish.