this post was submitted on 19 Sep 2024
329 points (98.2% liked)

News

Well, that’s awesome.

top 32 comments
[–] FlyingSquid@lemmy.world 137 points 1 month ago (2 children)

“Generative AI has polluted the data,” she wrote. “I don’t think anyone has reliable information about post-2021 language usage by humans.”

That is fucking horrifying.

[–] Zikeji@programming.dev 49 points 1 month ago (2 children)

Yeah, the generative AI pollution feels a lot like the low-background steel thing - ever since the nuclear tests it's been impossible for new steel not to be slightly radioactive, which means that when uncontaminated steel is needed, it gets salvaged from ships that sank before those tests.

[–] Tamo240@programming.dev 13 points 1 month ago (1 children)

This is the exact metaphor I've been using when talking to people about the issue. Did we both get it from somewhere I can't remember, or is it just perfect?

[–] Zikeji@programming.dev 7 points 1 month ago

It's the first thing I thought of when the articles about the generative AI polluting itself started coming out.

[–] 2pt_perversion@lemmy.world 7 points 1 month ago* (last edited 1 month ago) (1 children)

Luckily radiation levels have pretty much dropped back to pre-war levels now, so new steel can be low-background as well. It was possible to make new low-background steel from 1945 onward too; it just would have been more expensive than salvaging pre-war ships. I like the analogy though, it fits.

Isn't it more or less the same with the upper atmosphere and humans? I remember something about radioactive tracers being used that wouldn't be present if it weren't for nuclear testing, etc.

[–] kibiz0r@midwest.social 12 points 1 month ago

I’ve been comparing it to Kessler Syndrome, but for culture.

[–] cybervseas@lemmy.world 69 points 1 month ago (3 children)

That makes sense. Way too many web search results look and feel like they weren't written by a human lately. It's gotten even more difficult for me to figure out what's trustworthy and what isn't.

[–] TimLovesTech@badatbeing.social 29 points 1 month ago

Yep, and the fact that they keep feeding these same results back into the AI is eventually going to make the models lose their shit. I saw it mentioned in an article or video (can't remember which) that when an AI starts taking AI-created output as input, it gets hallucinations, almost like schizophrenia.

[–] spankmonkey@lemmy.world 10 points 1 month ago (1 children)

When the first three results look like high schoolers copied the same source with slight wording changes, and they're all written in an extremely passive tone, my assumption is AI. Questions on things like cooking temps are the worst in my experience, and I assume that is something that's easy to automate.

[–] cybervseas@lemmy.world 5 points 1 month ago (1 children)

I was looking for tips on cutting acrylic sheets and everything I found seemed untrustworthy. Bad advice there could be hazardous.

[–] NuXCOM_90Percent@lemmy.zip 8 points 1 month ago (1 children)

That, I feel, is a case of people yearning for a day that never existed.

Like, every GenX/Older Millennial who had a modem too early in life has stories about The Anarchist's Cookbook. And the thing you learn REAL fast is that people would edit and share MUCH more dangerous versions (and considering what the source was to begin with...). I remember being part of the mod staff for a couple DC++ hubs where we would check versions and tell anyone with a(n overly) dangerous edit to delete that shit or be banned.

Fast forward a couple decades and I needed to do a temporary repair on my car before I could get some "body" damage fixed (like two hours of effort, but it needed a part). Everything I found, even on reddit, talked about how you should use flexseal or the good duct tape or whatever. I only lucked out because I found one blog post explaining that any of those methods would guarantee you'd rip off the paint and drastically increase the cost of repairs, and that you should instead use automotive masking tape unless you REALLY needed to drive in a heavy downpour.

Same with doing work on the house. Youtube is immensely useful for that. But there is a reason so many "maker" channels have "react to life hack" videos: if you don't know what you are doing, some whackjob using clever editing to make it look like they built a duct adapter out of Elmer's glue and an actual repair video are indistinguishable (especially after youtube hid the dislikes...). And the consequences can range from wasting your time to outright fire hazards or frozen pipes.

The reality is that people have always been shits. And it REALLY fucking sucks when the LLMs designed to parse that, invariably, become shits too. But this has been a problem since people discovered SEO in the first place. Volume has gone up but the problem is not new.

And... late stage capitalism. But I find myself REALLY liking Kagi (libertarian tech bro CEO aside...) simply because it reduces the impact of my search history on results while also letting me manually emphasize some sites or outright block any that piss me off. Still get the SEO blogspam but a lot less.

[–] spankmonkey@lemmy.world 1 points 1 month ago

I remember the Anarchist's Cookbook and knowing it was terrible at the time. One dumb friend was able to prove it while luckily avoiding serious injury!

But what I also learned back then was critical thinking, as a lot of early websites were just as terrible; it was just a bit easier to tell they were terrible because they did not have any supporting information like references or examples. Today it is fairly easy to dismiss youtube videos where the person is just enthusiastic or doesn't show the thing from start to finish. The best auto repair videos were some guy with a handheld camera (probably a phone) walking through the process and explaining what they were doing and why. If they struggle a bit, even better! My favorite woodworking channel explains everything in a calm and clear way, shows the process, and explains the ins and outs and why they might have done it differently in the past!

The worst ones are someone enthusiastic showing five-second clips and not mentioning anything about safety or how to know if you are doing it wrong. They are entertainment personalities, not a source of knowledge!

[–] Sludgehammer@lemmy.world 4 points 1 month ago* (last edited 1 month ago)

Yeah, when I was looking for information about Tears of the Kingdom, around 90% of my search results were AI slop. I think I was looking for info about how weapon durability and fusion worked, and I kept getting a badly reworded version of the explanation of fusion from the gameplay teaser.

Actually... that reminds me of another TotK search I did: I was looking for where to farm some variety of lizalfos tails and kept getting AI articles that confused BotW locations with TotK ones. Amusingly, I eventually tried Google's chatbot out of exasperation and it actually proved more accurate than my search results.

[–] Asafum@feddit.nl 44 points 1 month ago (1 children)

Tie this in with the obvious oil pollution, and now Musk's radio-transmission pollution... Fucking corporations get to pollute the world in every way imaginable to chase a buck, and we're left having to cope with their waste...

Fucking bullshit society we made for ourselves...

[–] some_guy@lemmy.sdf.org 4 points 1 month ago

Fucking bullshit society we made for ourselves…

Yes, but more accurately, one that those who came before us made for us. Not that we're doing a bang-up job at reversing the trend.

[–] Neuromancer49@midwest.social 24 points 1 month ago (1 children)

Devastating loss for the science community. I used this database in my PhD, and never expected it to shut down.

[–] some_guy@lemmy.sdf.org 1 points 1 month ago

Damn. That really drives it home at another level.

[–] NuXCOM_90Percent@lemmy.zip 15 points 1 month ago (4 children)

Having read the article:

I agree that the approach is no longer viable but I strongly disagree with the rationale. It boils down to three key aspects:

  1. Wordfreq works by scraping the "open web". As a result, it is being inundated with massive amounts of GPT spam articles. This is problematic in that they are not "natural language" between people, but... those articles never were. If you think anyone talks like the average SEO recipe blog, then... more on that later. (For a sense of what wordfreq actually measures, see the sketch after this list.)
  2. Sites are increasingly locking down access to scraping their text. This... I actually think is really good. I strongly dislike that the locking down tends to mean "so that only people who pay us can train off of you", but I have always disliked the idea that people just train models off of social media with no consent whatsoever.
  3. Funding for NLP research is basically dead. No arguments there, and I have similar rants from different perspectives. But... that is when you learn to call what you do "AI" to get your old funding back.
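
For context on that first point: wordfreq is a Python library that answers "how common is this word in this language?" using frequency lists built from large scraped corpora. Below is a minimal sketch of querying it, assuming the published wordfreq package (pip install wordfreq); exact numbers vary with the library version and its bundled data.

    # Sketch only: assumes the published wordfreq Python package.
    from wordfreq import word_frequency, zipf_frequency, top_n_list

    # Fraction of English text that is the word "the" (on the order of 0.05).
    print(word_frequency("the", "en"))

    # The same word on the log-scaled Zipf scale (roughly 7-8 for "the").
    print(zipf_frequency("the", "en"))

    # The most frequent English words according to the bundled frequency lists.
    print(top_n_list("en", 10))

Numbers like these are only meaningful if the underlying corpora are mostly human-written text, which is exactly what the flood of scraped GPT spam undermines.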

But I think the bigger part, the one I strongly disagree with, is the idea that this is not the language of a post-2021 society, with points like:

"Including this slop in the data skews the word frequencies."

But... look up "so-cal-ification" and how many people have some "valley girl" idioms and cadence in their normal speech because that is what we grew up on. Like, I say "like" a lot to chain thoughts together, and I'm under no illusions about where that came from: TV. Same with how you can generally spot someone who grew up reading SFF by how they use some semi-obscure words and are almost guaranteed to mispronounce them.

Because it is the same logic as "literally there is no word that means literally anymore". Yeah, it is true. Yeah, it is annoying. But language evolves and it doesn't always evolve in ways that make sense.

Or just look at how many people immediately started using the phrase "enshittification" every chance they got. Or who learned about the Ship of Theseus and now apply it every chance they get.

Like (there it is again!), a great example is cell phones. Reality TV popularized the idea of putting your phone on speaker, holding it in the palm of your hand, and talking into it. That is fucking obnoxious and has made the world a worse place. But part of that was necessity (in reality TV it is so that the audience gets both sides of the call; in real life it is because of shit like the iPhone having a generation or two that would drop calls if you held it like a god damned phone), and then it is just a feedback loop: cell phone companies design their phones to look good on TV when held that way, and people who watch TV start doing it because all the cool people do it. And so forth.

AI has already begun to change language and it will continue to do so in the future. That is just reality and it is no different than radio and especially television leading to many regional dialects being outright wiped out.

[–] conciselyverbose@sh.itjust.works 26 points 1 month ago* (last edited 1 month ago) (1 children)

The problem is that LLMs aren't human speech and any dataset that includes them cannot be an accurate representation of human speech.

It's not "LLMs convinced humans to use 'delve' a lot". It's "this dataset is muddy as hell because a huge proportion of it is randomly generated noise".

[–] some_guy@lemmy.sdf.org 3 points 1 month ago

Cell phone companies design their phones to look good on TV when held that way and people who watch TV start doing that because all the cool people do it. And so forth.

I strongly disagree with this. They're designed to look good no matter what. TV is an afterthought in the design of smartphones. But what do I know… I only worked on one of those projects.

Language evolves, yes, and here's another chance to recommend an incredible book for language nerds: Highly Irregular: Why Tough, Through, and Dough Don't Rhyme and Other Oddities of the English Language

But "enshitification" refers to a very specific cultural trend. The Ship of Theseus is someone trying to sound smart. These are not the same thing, even if some asshole tries to sound smart talking about the former. Others who are industry-enthusiasts use it as a shorthand for a very specific larger conversation.

[–] faltryka@lemmy.world 2 points 1 month ago (1 children)
[–] Mbourgon@lemmy.world 2 points 1 month ago (1 children)
[–] faltryka@lemmy.world 2 points 1 month ago

Thanks! I was trying to figure out what kind of fan fiction started with S haha.

[–] peopleproblems@lemmy.world 1 points 1 month ago

I don't think enshittification is a good example, since everything is being enshittified.

[–] Stopthatgirl7@lemmy.world 8 points 1 month ago

This is just depressing.

[–] AbouBenAdhem@lemmy.world 7 points 1 month ago (1 children)

AI language patterns are polluting the data, but are they influencing language usage by humans as well? We should delve into that.

[–] Samvega@lemmy.blahaj.zone 3 points 1 month ago* (last edited 1 month ago)

They're influencing me to say 'the effects of generative AI are often shit'.

[–] xia@lemmy.sdf.org 4 points 1 month ago

Taking a step back, I wonder... we are reading this stuff now, and it affects us too. What if we have already stepped into a linguistic death-spiral of a telephone game where each generation gets rehashed garbage from the last?

[–] homesweethomeMrL@lemmy.world -3 points 1 month ago

Cripes, what a dumb way to do it.

“Study of How Dogs Interact With Cheese Called Off After Dogs Eat the Cheese”