this post was submitted on 16 May 2024
857 points (97.6% liked)

Funny

6813 readers
1677 users here now

General rules:

Exceptions may be made at the discretion of the mods.

founded 1 year ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] RedditWanderer@lemmy.world 11 points 6 months ago* (last edited 6 months ago) (1 children)

It's not weird because of that. The bot could have easily explained it can't answer legally, it didn't need to say: sorry gotta end this k bye

This is probably a trigger on preventing it from mixing in laws of AI or something, but people would expect it can discuss these things instead of shutting down so it doesn't get played. Saying the AI acted as a lawyer is a pretty weak argument to blame copilot.

Edit: no idea who is downvoting this but this isn't controversial. This is specifically why you can inject prompts into data fed into any GPT and why they are very careful with how they structure information in the model to make rules. Right now copilot will give technically legal advice with a disclaimer, there's no reason it wouldn't do that only on that question if it was about legal advice or laws.

[–] JusticeForPorygon@lemmy.world 10 points 6 months ago (1 children)

I noticed this back with Bing AI. Anytime you bring up anything related to nonliving sentience, it shuts down the conversation.

[–] samus12345@lemmy.world -2 points 6 months ago

It should say that you probably mean sapience, the ability to think, rather than sentience, the ability to sense things, then shut down the conversation.