Imagine an AGI (Artificial General Intelligence) that could perform any task a human can do on a computer, but at a much faster pace. This AGI could create an operating system, produce a movie better than anything you've ever seen, and much more, all while being limited to SFW (Safe For Work) content. What are the first things you would ask it to do?

[–] SirGolan@lemmy.sdf.org 2 points 1 year ago (1 children)

That last bit already happened. An AI (allegedly) told a guy to commit suicide, and he did. A big part of the problem is that while GPT-4, for instance, knows all about the things you just said and can probably do what you're suggesting, nobody can guarantee it won't get something horribly wrong at some point. It's sort of like how self-driving cars can handle maybe 95% of situations correctly, but the remaining 5% of unexpected stuff, the kind that takes extra context a human has and the car was never trained on, is very hard to get past.

[–] quotheraven404@lemmy.ca 2 points 1 year ago (1 children)

Thanks for the link, that sounds like exactly what I was asking for but gone way wrong!

What do you think is missing to prevent these kinds of outcomes? Is AI simply incapable of categorizing topics as 'harmful to humans' on its own without a human's explicit guidance? It seems like the philosophical nuances of things like consent or dependence or death would be difficult for a machine to learn if it isn't itself sensitive to them. How do you train empathy in something so inherently unlike us?

[–] SirGolan@lemmy.sdf.org 1 points 1 year ago

In the case I mentioned, it was just a poorly aligned LLM. The ones from OpenAI would almost definitely not do that, because they go through a process called RLHF (reinforcement learning from human feedback) where those sorts of negative responses get trained out of them for the most part. Some stuff still gets through, but unless you're really trying to get the model to say something bad, it's unlikely to do anything like what's in that article.

That's not to say they won't say something accidentally harmful. They're really good at telling you things that sound extremely plausible but are actually false, because by default they don't have any way of checking. I have to cross-check the output of my system all the time for accuracy, and I've spent a lot of time building in systems to make sure it's accurate; it generally is on the important stuff.

Tonight it did have an inaccuracy, though I sort of don't blame it because the average person could have made the same mistake. I had it looking up contractors for a bathroom remodel (a fake test task), and it googled for the phone number of the one I picked from its suggestions. Google gave back a phone number in a big box, with tiny text underneath naming a different company. Anyone not paying close attention (including my AI) would have called that number instead. It wasn't an ad or anything; somehow that other company's number just came up in the little info box any time you searched for the contractor I'd picked.
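A rough sketch of what that kind of cross-check can look like in Python. Everything here is hypothetical: the helper names, the placeholder phone numbers, and the verification rule are assumptions for illustration, not the system described above.

```python
import re
from typing import Optional

def llm_lookup(query: str) -> str:
    """Stand-in for the step that asks an LLM (or a search tool) for a fact."""
    # A real system would call a model or search API here; this just returns
    # a placeholder so the sketch runs on its own.
    return "555-0134"

def listed_on_company_page(company: str, phone: str) -> bool:
    """Stand-in cross-check: does an independent source list the same number?"""
    # A real system would fetch the company's own site and search its text.
    trusted_text = "Contact Acme Remodeling at 555-0199"
    return company in trusted_text and phone in trusted_text

def verified_phone_number(company: str) -> Optional[str]:
    answer = llm_lookup(f"phone number for {company}")
    # Cheap sanity check on the format before trusting the answer at all.
    if not re.fullmatch(r"[0-9()\- ]{7,}", answer):
        return None
    # Cross-check against an independent source; reject on any mismatch.
    # This is the failure mode in the snippet anecdote above: the number
    # looked fine but actually belonged to a different company.
    if not listed_on_company_page(company, answer):
        return None
    return answer

if __name__ == "__main__":
    number = verified_phone_number("Acme Remodeling")
    print(number or "Could not verify the number; flag it for a human to check.")
```

With the placeholder data the check fails on purpose: a number that merely looks plausible gets flagged for a human instead of being called.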

Anyway, as to your question, they're actually pretty good at knowing what's harmful when they are trained with RLHF. Figuring out what's missing to prevent them from saying false things is an open area of research right now, so in effect, nobody knows how to fix that yet.