this post was submitted on 02 Mar 2024
169 points (84.8% liked)

Unpopular Opinion


In the whirlwind of technological advancements, artificial intelligence (AI) often becomes the scapegoat for broader societal issues. It’s an easy target, a non-human entity that we can blame for job displacement, privacy concerns, and even ethical dilemmas. However, this perspective is not only simplistic but also misdirected.

The crux of the matter isn’t AI itself, but the economic system under which it operates - capitalism. It’s capitalism that dictates the motives behind AI development and deployment. Under this system, AI is primarily used to maximize profits, often at the expense of the workforce and ethical considerations. This profit-driven motive can lead to job losses as companies seek to cut costs, and it can prioritize corporate interests over privacy and fairness.

So, why should we shift our anger from AI to capitalism? Because AI, as a tool, has immense potential to improve lives, solve complex problems, and create new opportunities. It’s the framework of capitalism, with its inherent drive for profit over people, that often warps these potentials into societal challenges.

By focusing our frustrations on capitalism, we advocate for a change in the system that governs AI’s application. We open up a dialogue about how we can harness AI ethically and equitably, ensuring that its benefits are widely distributed rather than concentrated in the hands of a few. We can push for regulations that protect workers, maintain privacy, and ensure AI is used for the public good.

In conclusion, AI is not the enemy; unchecked capitalism is. It’s time we recognize that our anger should not be at the technology that could pave the way for a better future, but at the economic system that shapes how this technology is used.

top 50 comments
[–] ininewcrow@lemmy.ca 19 points 8 months ago

Capitalism = runaway immoral human greed

[–] JackGreenEarth@lemm.ee 15 points 8 months ago (2 children)

This is actually an unpopular opinion, sadly, on Lemmy as well as in the outside world. A rare case of a post in this community where I can upvote both because it's unpopular and because I agree with it.

[–] ClamDrinker@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

Depends on where you live I suppose. Irrational AI hate is something I only really encounter online. Then again my country has pretty good worker protections, so there's less reason to be afraid of AI.

[–] nilloc@discuss.tchncs.de 2 points 8 months ago* (last edited 8 months ago)

I don’t know, there’s plenty of anti-billionaire sentiment, fuck_cars is basically anti-capitalist, and most of the environmentalists get to the same conclusion pretty quickly too.

The realists (and cynics in some cases) just know that it’s going to take a huge process to shift us away. I’m a realist and am opting for a progressive takeover that leads to taxing billionaires, carbon/pollution, and dangerous vehicles (among other clear hazards) out of existence.

But when I'm feeling cynical, I worry that it's going to take a war for that to happen, and I hope for my son's sake that it doesn't.

[–] glimse@lemmy.world 14 points 8 months ago (3 children)
[–] Lmaydev@programming.dev 23 points 8 months ago* (last edited 8 months ago) (1 children)

If used correctly, newer-generation AIs could be an absolute game changer in fields like medicine, finance, computing, public transport, customer service, etc.

But what they'll actually be used for is to make more money.

If you take capitalism out of the equation they could be an amazing force for good. But under our current system they won't.

[–] circuitfarmer@lemmy.world 1 points 8 months ago (1 children)

They're also used to make less money -- by the people whose jobs will be replaced so that those up top can make more money.

AI at this point exacerbates the wealth gap.

[–] VirtualOdour@sh.itjust.works 2 points 8 months ago

But they're also used to make open source software that gives people the tools they need to work together on community design projects, which can free everyone from capitalism.

I've been using them to help with coding and they're really useful even at this early stage. All the people I follow making AI tools and similar are using AI too, which is one of the reasons we're getting so many great free-to-use tools. AI design tools will be used to make flosh devices that can be fabricated locally using AI-assisted tooling; it will totally change the entire structure of society and improve things significantly.

[–] umbrella@lemmy.ml 7 points 8 months ago* (last edited 8 months ago)

because AI is just a tool.

it could be used for good.

[–] A_Very_Big_Fan@lemmy.world 2 points 8 months ago

Because when greed and money aren't in the equation, AI is pretty useful and for most people it isn't costing them anything.

[–] 3volver@lemmy.world 9 points 8 months ago

It’s pretty evident that AI is incompatible with capitalism, but most people direct their anger at AI. Late-stage capitalism is the problem, not automation. I upvoted because I think this is actually an unpopular opinion factoring in the world population rather than just Lemmy.

[–] littlebluespark@lemmy.world 7 points 8 months ago

¿Por que no los dos?

[–] SomeGuy69@lemmy.world 5 points 8 months ago* (last edited 8 months ago)

This post is written by an AI. Lmao

"Are you scared of an AI world? You're already in it."

[–] echo64@lemmy.world 5 points 8 months ago (3 children)

AI outside of capitalism is still incredibly dangerous. It's all the biases that create the world we have today, but on steroids. Take all the injustices against minority peoples today and scale them up to however much compute you have.

It's completely naive to think that AI will solve the world's problems if that pesky capitalism would get out of the way. But this website is full of tech bros, so it's impossible to get past that.

Also, being angry at capitalism doesn't pay the rent. I can't boycott capitalism. I can use my small power under capitalism to boycott your shitty AI.

[–] kromem@lemmy.world 4 points 8 months ago

Mhmm. Here's the uncensored anti-woke AI Elon tried to create answering Twitter Blue subscribers' questions:

[embedded screenshots not shown]

Yeah, so horribly biased and terrible...

load more comments (2 replies)
[–] xigoi@lemmy.sdf.org 4 points 8 months ago (1 children)

As much as I hate AI run by megacorporations, I don’t think AI run by a communist government would be any better.

[–] nottelling@lemmy.world 11 points 8 months ago

Yes, the only two genders: capitalist economy and communist government.

[–] BothsidesistFraud@lemmy.world 2 points 8 months ago* (last edited 8 months ago) (2 children)

Please explain how, in a non-capitalist world, AI would never be used for the sorts of things you dislike AI being used for, such as job elimination. You think nobody will realize that it can be used to produce lots of art, for example?

In this non-capitalist world you're thinking of, would we have any automation? Like do we have harvester combines, or is it still 35 people breaking their backs to cut and thresh an acre of wheat?

[–] Cowbee@lemmy.ml 2 points 8 months ago (1 children)

If the means of production are collectively owned, and thus directed towards the good of society, job elimination isn't as much of a problem.

Socialists are huge proponents of automation, because instead of it being used to cut jobs for profit, it can be used to eliminate dirty and hard jobs.

[–] BothsidesistFraud@lemmy.world 1 points 8 months ago

Then why are we angry at AI in this discussion?

[–] Wereduck@lemmy.blahaj.zone 1 points 8 months ago

Job elimination is a problem in capitalism because workers need jobs to survive. In a socialist society, job elimination can be a good thing, as it allows us to either increase access to resources or reduce how much time people need to work without dispossessing the people whose jobs were eliminated.

The difference is that, in capitalism, workers only survive by proving their usefulness to capitalists making money, so automation is a threat to worker bargaining power. If the means of production were socially owned (through, for example, government-run utilities or worker co-ops), bargaining power would instead come through a vote or through ownership, and the spoils of automation could be distributed by default rather than concentrated in the hands of capitalists.

[–] kometes@lemmy.world 2 points 8 months ago (5 children)

Maybe work on proving "AI" is actually a technological advancement instead of an overhyped plagiarism machine first.

[–] Lmaydev@programming.dev 7 points 8 months ago* (last edited 8 months ago) (1 children)

LLMs' real power isn't generating fresh content; it's their ability to understand language.

Using one to summarise articles gives incredibly good results.

I use Bing Enterprise every day at work as a programmer. It makes information gathering and learning so much easier.

It's decent at writing code but that's not the main selling point in my opinion.

Plus, they are general models meant to showcase the capabilities. Once the tech is more advanced, you can train models for specific purposes.

It seems obvious that an AI which can do both creative writing and coding wouldn't be as good at either as a specialized model.

These are generation 0. There'll be a lot of advances coming.

Also LLMs are a very specific type of machine learning and any advances will help the rest of the field. AI is already widely used in many fields.
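As a purely illustrative aside, the summarisation use case mentioned above is about as simple as LLM integrations get. Here is a minimal sketch using the OpenAI Python client; the model name, prompt, and article variable are placeholder choices, not anything the commenter actually uses (they mention Bing Enterprise).

```python
# Minimal article-summarisation sketch (illustrative only).
# Assumes the `openai` v1.x Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

article_text = "..."  # placeholder: the article you want summarised

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Summarise the user's article in three short bullet points."},
        {"role": "user", "content": article_text},
    ],
)

print(response.choices[0].message.content)
```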

[–] throwwyacc@lemmynsfw.com 4 points 8 months ago (3 children)

LLMs don't "understand" anything. They're just very good at making it look like they sort of do

They also tend to have difficulty giving the answer "I don't know" and will confidently assert something completely incorrect

And this is not generation 0. The field of AI has been around for a long time. It's just now becoming widespread and used where the avg person can see it

load more comments (3 replies)
[–] agamemnonymous@sh.itjust.works 5 points 8 months ago (6 children)

Can you prove that human intelligence isn't an overhyped plagiarism machine?

load more comments (6 replies)
[–] kromem@lemmy.world 4 points 8 months ago

Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on k=5 is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.

Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state.

So there is already research showing that GPT-style LLMs are capable of modeling aspects of their training data at much deeper levels of abstraction than simple surface statistics of words, and research showing that the most advanced models are already generating novel outputs distinct from anything in the training data, by virtue of the number of different abstract concepts they combine from what was learned during training.

Like, have you actually read any of the ongoing research in the field at all? Or just articles written by embittered people who generally misunderstand the technology? (For example, if you ever see someone refer to these models as Markov chains, that person has no idea what they are talking about: the key feature of the transformer model is the self-attention mechanism, which negates the Markov property that characterizes Markov chains in the first place.)
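For readers who want that distinction made concrete, here is a small numpy sketch (mine, not the commenter's) contrasting a first-order Markov chain, whose next-token distribution depends only on the current token, with a causally masked single-head self-attention layer, whose output at each position mixes information from the entire preceding context. All names and sizes are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) First-order Markov chain: next-token probabilities depend only on
#    the current token, via a fixed transition matrix.
vocab = 5
transition = rng.dirichlet(np.ones(vocab), size=vocab)    # rows sum to 1
current_token = 2
markov_next = transition[current_token]                   # earlier history is ignored

# 2) Single-head, causally masked self-attention (GPT-style): the output at
#    the last position is a weighted mix of every earlier position.
seq_len, d = 6, 8
x = rng.normal(size=(seq_len, d))                          # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))   # toy projection weights

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)                              # pairwise attention scores
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores = np.where(mask, -np.inf, scores)                   # hide future positions only
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
attended = weights @ v                                     # each row conditions on its full prefix

print("Markov next-token distribution:", np.round(markov_next, 3))
print("Attention weights at the final position:", np.round(weights[-1], 3))
```

The last print line is the point: those weights spread over every token in the prefix, so the model's output is a function of the whole context, whereas the Markov chain's next step is a function of `current_token` alone.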

[–] A_Very_Big_Fan@lemmy.world 2 points 8 months ago (2 children)

instead of an overhyped plagiarism machine first.

If I paint an Eiffel Tower from memory, am I plagiarizing?

If it's not plagiarism when humans do it, it's not plagiarism when a machine does it.

load more comments (2 replies)
[–] BilboBargains@lemmy.world 1 points 8 months ago

Feels like a bit of a straw man argument. I don't meet people who accuse the actual AI itself of malice. That would be like getting mad at an actor who portrays a villain. I know these people exist and they are stupid, but most people understand that the A in AI stands for artificial, and that means scientists and engineers make these things while capitalists provide the funding and own them.

The interesting thing about AI is that once it becomes self-aware, we can legitimately ascribe agency to its actions the way we do with people. We can criminalise it and punish it for decisions that it has made. We could use artificial lawyers to prosecute artificial doctors that perform botched surgeries on artificial warehouse workers.

[–] Mango@lemmy.world 1 points 8 months ago

Why not instead, the people responsible?
