this post was submitted on 20 Mar 2024
54 points (85.5% liked)


I, for one...

[–] potatopotato@sh.itjust.works 29 points 3 months ago* (last edited 3 months ago) (3 children)

This...isn't how the current paradigm of AI works at all. We've built glorified auto-complete bots, not something that can make a physical robot behave at a human level. Best case, they build something that can carry on a conversation long enough to excite a tech journalist and aimlessly meander like the Boston Dynamics bots, but without the pre-programmed tasking (assuming they don't cheat and add canned routines).

So that leaves one option: it's a moonshot project to convince the tech-illiterate public to take them and their stock price to the moon long enough for a few people to make an obscene amount of money.

[–] circuitfarmer@lemmy.sdf.org 7 points 3 months ago

> So that leaves one option: it's a moonshot project to convince the tech-illiterate public to take them and their stock price to the moon

100% that. It's even in the name.

People vastly overestimate the capabilities of AI, but perhaps worse, they're simply unaware of its limitations. The hype took over, but it is (slowly) coming down to realistic levels.

We could also use more public awareness of the sheer amount of data and energy it takes to train these models, which still, by definition, end up with limited scope. It's actually incredibly wasteful.

[–] dan1101@lemm.ee 2 points 3 months ago

Neural networks have learned to play video games, so maybe a neural network in a robot body could learn to act human. Making sure it doesn't harm itself or others while it learns, that's the tricky part.

[–] LesserAbe@lemmy.world 1 points 3 months ago

I feel like people who shit on AI so much live in a different reality than I do.

I'll put the big caveats here: I hate venture capital, and I think people are overhyping the less likely risks (creating Skynet) while underplaying the more likely ones (taking people's jobs, flooding the Internet with shitty content/misinformation). All AI gets stuff wrong some of the time.

That said, I've been impressed with what it can do and use it more days than not. I don't see a fundamental reason why AI wouldn't be effective at controlling a robot body. Currently something like ChatGPT responds after a user types a prompt. But what if the prompt were just audio/video/sensory input every fraction of a second? I don't think this is far-fetched if you threw enough money at it.
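
To make that concrete, here's roughly what I'm picturing as a toy Python sketch. None of this is a real robotics or model API; `SensorRig` and `PolicyModel` are made-up placeholders, the point is just the loop that turns a stream of sensor readings into "prompts":

```python
import time

# Made-up stand-ins: SensorRig = "whatever reads the robot's sensors",
# PolicyModel = "a ChatGPT-style model that answers each snapshot".

class SensorRig:
    def read_frame(self):
        # Bundle the latest camera/audio/joint readings into one observation.
        return {"camera": b"", "audio": b"", "joints": [0.0] * 12}


class PolicyModel:
    def respond(self, observation, history):
        # Like a chat model replying to a prompt, except the "prompt" is a
        # sensor snapshot plus the running history of what came before.
        return {"action": "noop"}


def control_loop(hz=10, max_ticks=100):
    sensors = SensorRig()
    model = PolicyModel()
    history = []
    period = 1.0 / hz
    for _ in range(max_ticks):
        obs = sensors.read_frame()           # this tick's "prompt"
        reply = model.respond(obs, history)  # model output becomes an action
        history.append((obs, reply))         # keep context, like a chat log
        # here you'd send reply["action"] to the motors
        time.sleep(period)                   # every fraction of a second


if __name__ == "__main__":
    control_loop()
```

The chat-style history is the only real difference from an ordinary control loop; everything else is just throwing sensor frames at the model on a timer.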