this post was submitted on 02 Jul 2023
265 points (100.0% liked)

Reddit Migration

21 readers
2 users here now

### About Community

Tracking and helping #redditmigration to Kbin and the Fediverse. Say hello to the decentralized and open future. To see the latest reddit blackout info, see here: https://reddark.untone.uk/

founded 1 year ago
 

It’s honestly really sad what’s been happening recently. Reddit with the API pricing on 3rd party apps, Discord with the new username change, Twitter with the rate limits, and Twitch with their new advertising rules (although that has been reverted because of backlash). Why does it seem like every company is collectively on a common mission of destroying themselves in the past few months?

I know the common answer is something along the lines of "because companies only care about making money", but I still don't get why it seems like all these social media companies have suddenly agreed to screw themselves during pretty much the period of March-June. One that sticks out to me especially is Reddit CEO Huffman's (u/spez) comment, "We'll continue to be profit-driven until profits arrive". Reading this literally pisses me off on so many levels. I wouldn't even have to understand the context behind his comment to say, "I am DONE with you, and I am leaving your site".

Why is it like this? Does everyone feel the same way? I’m not sure if it’s just me but everything seems to be going downhill these days. I really do hope there is a solution out of this mess.

[–] PabloDiscobar@kbin.social 6 points 1 year ago* (last edited 1 year ago) (1 children)

> What’s so bad about giving AI models something to learn on?

From a user point of view? A lot. So far, AI has made itself the champion of fakery: fake news, fake pictures, fake videos, fake history, fake identities. Do you think AI will be used for your own good? Do you think your private data are farmed for your own good? I don't.

I posted an example about fake identities and fake posters on Twitter. This is the end goal. This is where the money generated by the AI will come from.

> That way you could detect and address rogue scrapers while still working with LLM creators who are open to an honest training integration. And if your company can’t really tell the difference between users and LLM crawlers after implementing something like this, well, then those crawlers don’t affect the company as much as the CEOs would like to pretend.

Twitter and Reddit probably want to be their own LLM creators. They don't want to leave this market to another LLM. Also it doesn't take a lot of API calls to generate the content that will astroturf your product.

Anyway, the cat is out of the bag and this data will be harvested. Brands will astroturf their products using AI processes. People are not stupid and will realize the trick being played on them. We are probably heading toward platforms requiring fully authenticated access.

[–] FaceDeer@kbin.social 3 points 1 year ago

> From a user point of view? A lot. So far, AI has made itself the champion of fakery: fake news, fake pictures, fake videos, fake history, fake identities. Do you think AI will be used for your own good? Do you think your private data are farmed for your own good? I don't.

That's addressing whether the mere existence of LLMs is "good" or not. That's not going to be affected by whether someone changes their username every couple of months or whether some particular social media site makes their content annoying to scrape. LLMs exist now, and they're only getting better; attempting to staunch the flow of genies out of the bottle at this stage is futile.

Personally, I'm actually rather pleased that my comments on Reddit over the years are factoring into how LLMs "think." All that ranting about the quality of Disney's Star Wars movies not only convinced all the people who read it (I assume) but will now convince our future AI overlords too.