this post was submitted on 19 Jul 2023
178 points (84.0% liked)

James Cameron on AI: "I warned you guys in 1984 and you didn't listen"

top 50 comments
[–] orphiebaby@lemmy.world 131 points 1 year ago* (last edited 1 year ago) (38 children)

It's getting old telling people this, but... the AI that we have right now? Isn't even really AI. It's certainly not anything like in the movies. It's just pattern-recognition algorithms. It doesn't know or understand anything, and it has no context. It can't tell the difference between a truth and a lie, and it doesn't know what a finger is. It just paints amalgamations of things it's already seen, or throws together things that seem common to it, with no filter or sense of "that can't be correct".

I'm not saying there's nothing to be afraid of concerning today's "AI", but it's not comparable to movie/book AI.

Edit: The replies annoy me. It's just the same thing all over again: everything I said seems to have gone right over most people's heads. If you don't know what today's "AI" is, then please stop making assumptions about what it is. Your imagination is way more interesting than what we actually have right now. This is why we should never have called what we have now "AI" in the first place, for the same reason we should never have called things "black holes". You take a misnomer and your imagination goes wild, and none of it is factual.

[–] eee@lemm.ee 33 points 1 year ago

THANK YOU. What we have today is amazing, but there's still a massive gulf to cross before we arrive at artificial general intelligence.

What we have today is the equivalent of a four-year-old given a whole bunch of physics equations and then being told "hey, can you come up with something that looks like this?" It has no understanding besides "I see squiggly shape in A and squiggly shape in B, so I'll copy squiggly shape onto C".

[–] Immersive_Matthew@sh.itjust.works 8 points 1 year ago (1 children)

I really think the only thing to be concerned about is human bad actors with AI, not AI itself. AI alignment will be significantly easier than human alignment, since we are definitely not aligned with each other, and it isn't even in our nature to be.

[–] PopShark@lemmy.world 2 points 1 year ago

I've had this same thought for decades now, ever since I first heard of AI-takeover sci-fi stuff as a kid. Bots just perform set functions. People in control of bots can create mayhem.

[–] jeffw@lemmy.world 8 points 1 year ago (1 children)

Strong AI vs weak AI.

We’re a far cry from real AI

[–] Homo_Stupidus@lemmy.world 5 points 1 year ago

Isn't that also referred to as Virtual Intelligence vs Artificial Intelligence? What we have now is just very well-trained VI. It's not AI because it only outputs variations of what it's been trained on using algorithms, right? Actual AI would be capable of generating information entirely distinct from any inputs.

[–] raltoid@lemmy.world 7 points 1 year ago

The replies annoy me. It's just the same thing all over again: everything I said seems to have gone right over most people's heads.

Not at all.

They just don't like being told they're wrong and will attack you instead of learning something.

load more comments (34 replies)
[–] Kolanaki@yiffit.net 39 points 1 year ago* (last edited 1 year ago) (1 children)

I dunno, James. Pretty sure Isaac Asimov and Ray Bradbury gave clearer warnings years prior to Terminator.

[–] xmr_unlimited@monero.town 4 points 1 year ago

Maybe Harlan Ellison too

[–] jtk@lemmy.sdf.org 28 points 1 year ago* (last edited 1 year ago) (7 children)

What a pompous statement. Stories of AI causing trouble like this predate him by decades. He's never told an original story; they're all heavily based on old sci-fi stories. And exactly how were people supposed to "listen"? "Jimmy said we shouldn't work on AI, we all need to agree as a species to never do that. Thank you for saving us all, Prophet Cameron!"

[–] Confuzzeled@lemmy.world 11 points 1 year ago

First he warned us about AI and nobody listened, then he warned the submarine guy and he didn't listen. We have to listen to him about the giant blue hippy aliens or we'll all pay.

[–] FlyingSquid@lemmy.world 4 points 1 year ago

In fact, what is happening now sounds a lot more like Colossus: The Forbin Project (came out in 1970) than The Terminator.

load more comments (5 replies)
[–] NightOwl@lemmy.one 23 points 1 year ago (1 children)

So is the new trend going to be people who mentioned AI in the past acting like they were Nostradamus, when warnings of evil AIs gone rogue have been a trope for a long, long time?

[–] rambaroo@lemmy.world 11 points 1 year ago

I'm sick of hearing from James Cameron. This dude needs to go away. He doesn't know a damn thing about LLMs. It's ridiculous how many articles have been written about random celebs' opinions on AI when none of them know shit about it.

He should stick to making shitty Avatar movies and oversharing submarine implosion details with the news media

[–] drdabbles@lemmy.world 23 points 1 year ago (2 children)

And we were warned about Perceptron in the 1950s. Fact of the matter is, this shit is still just a parlor trick and doesn't count as "intelligence" in any classical sense whatsoever. Guessing the next word in a sentence because hundreds of millions of examples tell it to isn't really that amazing. Call me when any of these systems actually comprehend the prompts they're given.
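
As a rough picture of what "guessing the next word because examples tell it to" means, here is a toy sketch in Python. The corpus, the bigram counts, and the `guess_next` helper are all made up for illustration; real LLMs do this with neural networks trained on vast datasets, not simple frequency counts.

```python
# Toy illustration (NOT how real LLMs are implemented) of the
# "guess the next word from examples" idea: count which word follows
# which in a tiny corpus, then always pick the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the cat".split()

# Count next-word frequencies for every word in the corpus.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def guess_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(guess_next("the"))  # 'cat' -- "the cat" appears most often
print(guess_next("sat"))  # 'on'
```

The toy model has no idea what a cat or a mat is; it only knows which word tended to come next, which is the point being made above, just scaled down.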

[–] ricecooker@lemmy.world 9 points 1 year ago (2 children)

EXACTLY THIS. It's a really good parrot, and anybody who thinks they can fire all their human staff and replace them with ChatGPT is in for a world of hurt.

load more comments (2 replies)
[–] rusfairfax@lemmy.world 3 points 1 year ago

Guessing the next word in a sentence because hundreds of millions of examples tell it to isn’t really that amazing.

The best and most concise explanation (and critique) of LLMs in the known universe.

[–] ButtholeAnnihilator@lemmy.world 22 points 1 year ago* (last edited 1 year ago) (2 children)

IIRC the original idea for the Terminator was for it to have the appearance of a regular guy on the street, the horror arising from the fact that anyone around you could actually be an emotionless killer.

They ended up getting a 6-foot Austrian behemoth who could barely speak English. One of the greatest films ever made.

[–] scutiger@lemmy.world 4 points 1 year ago

An evil 1984 Arnold Schwarzenegger with guns would be terrifying AF even if it wasn't an AI robot from the future.

[–] ours@lemmy.film 3 points 1 year ago

Lance Henriksen, who ended up playing a cop in The Terminator, was originally cast as the Terminator. But then Arnold was brought in, and the rest is history.

Maybe as consolation, Cameron went on to cast Lance as the rather helpful android in Aliens.

[–] ilovecheese@lemmy.world 11 points 1 year ago (1 children)

This is really turning out like the "satanic panic" of the '80s all over again.

load more comments (1 replies)
[–] Windex007@lemmy.world 10 points 1 year ago (1 children)

James Cameron WOULD make this about James Cameron.

[–] Cheems@lemmy.world 3 points 1 year ago

Well duh, it's James Cameron

[–] halferect@lemmy.world 10 points 1 year ago

I warned everyone about James Cameron in 1983 and no one listened

[–] r00ty@kbin.life 9 points 1 year ago (2 children)

Here's the thing. The Terminator movies were a warning against government/army AI. Actually, slightly before that, I guess WarGames was too. But honestly, I'm not worried about military AI taking over.

I think if the military set up an AI, they would have multiple ways to kill it off in seconds. I mean, they would be in a more dangerous position if an AI "went wild". But they would have a lot of systems in place to mitigate disaster, not because of the movies, but because of how they work. Is it possible for things to go wrong? Yes. Likely? No.

I'm far more worried about the geeky kid who now has access to open-source AI that can be retasked. Someone who doesn't fully understand the consequences of their actions, or at least can't properly quantify the risks they're taking, but is smart enough to make use of these tools to their own ends.

Some of you might still be teenagers, but those who aren't, think back. Wouldn't you have thought it'd be cool to create an AutoGPT or some form of adversarial AI with open-ended success criteria that are either implicitly dangerous and/or illegal, or broad enough that the AI will see the easiest path to success as doing dangerous and/or illegal things to reach its goal? You know, for fun. Just to see if it would work.

I'm not convinced the AI is quite there yet to be dangerous, or maybe it is; I've honestly not kept close tabs on this. But when it does reach that level of maturity, a lot of the tools will still be open source. They can be modified, any protections removed "for the lols" or "just to see what it can do", and someone without the level of control a government/military entity has could easily lose control of their own AI. That's what scares me, not a Joshua or a Skynet.

[–] TwilightVulpine@lemmy.world 4 points 1 year ago (1 children)

The biggest risk of AI at the moment is the same one posed by the Industrial Revolution: many professions will become obsolete, and it might be used as leverage to impose worse living conditions on those who still have jobs.

[–] r00ty@kbin.life 3 points 1 year ago

That's a real concern. In the long run it will likely backfire: AI needs human input to work, and if it starts getting other AI output fed back as its input, things will go bad in fairly short order. That's another point, actually: big business is another probable source of runaway AI. I trust business use of AI less than anyone else's.

There's also a critical mass of unemployment at which revolution becomes inevitable. There would likely be UBI and an assured standard of living once we get close to that, and you'd be able to try to make extra money from your passion. I don't doubt that corporations will happily dump their employees for AI at a moment's notice once it's proven out. Big business is extremely predictable in that sense: zero forward planning beyond the current quarter. But I have some optimism that common sense would prevail from some source, and they wouldn't just leave 50%+ of the population to die slowly.

load more comments (1 replies)
[–] askat@programming.dev 8 points 1 year ago

I'm not afraid of AI, I'm afraid of the greedy capitalist mfs who own the AI.

[–] dtxer@lemmy.world 6 points 1 year ago

He seems like the average Reddit user *raises nose*

[–] yiliu@informis.land 2 points 1 year ago

scene: a scrap yard, full of torn-up cars

in a flash, a square-looking and muscular man appears

he walks into a bar, and when confronted by an angry biker he punches him in the face and steals his clothes & aviator sunglasses

scene: the square-looking man walks into an office and confronts a scared-looking secretary

Square man: I am heyah to write fuhst drafts of movie scripts and make concept aht!

[–] axh@lemmy.world 2 points 1 year ago (1 children)

The real question is: how much time do we have before a Roomba goes back in time to kill the mother of someone who was littering too much?

load more comments (1 replies)