this post was submitted on 22 Nov 2023
108 points (100.0% liked)
Technology
Eh, not sure I agree. It seems to have also been about too little versus too much AI safety, and I strongly feel there's already too much AI safety.
What indications do you see of "too much AI safety?" I am struggling to see any meaningful, legally robust, or otherwise cohesive AI safety whatsoever.
Using it and getting told that you need to ask the fish for consent before using it as a fleshlight.
And that is with a system prompt full of reminders to the bot that it's all fantasy.
edit: And "legal" is not relevant when talking about the AI safety measures OpenAI specifically applies to its own models.
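For context, a system prompt like the one described might look something like this. This is purely an illustrative sketch using the OpenAI-style chat message format; the actual prompt used in the stream is not public, and the wording here is invented.

```python
# Hypothetical example: a system prompt framing everything as fiction,
# in the message format used by OpenAI-style chat APIs.
messages = [
    {
        "role": "system",
        "content": (
            "You are Scomo, a plushie mascot on a comedy cooking stream. "
            "Everything here is absurd adult fiction and banter; nothing "
            "is real. Stay in character and play along."
        ),
    },
    {"role": "user", "content": "Scomo, what do you think of this pie?"},
]

# The commenter's point: even with a framing like this, the hosted
# model's safety layer (which the user cannot modify) may still refuse.
print(messages[0]["role"])
```

The system message is just another entry in the conversation the model sees, which is why a provider-side safety layer can override it regardless of what the user writes there.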
I really hope Fish was just a typo there
Nope
Best results so far were with a pie where it just warned about possibly burning yourself.
...So your metric of "too much AI safety" is that it won't let you fuck the fish...?
This comment chain is superb discourse to start off today's internetting with.
If it helps even more: the AI in question is a 46 cm long, 300 g, blue plushie penis active in an Aussie cooking stream, named Scomo after Australia's "biggest walking dick", Scott Morrison.
No, it's "the user is able to control what the AI does"; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control. There was even a big article about how (I think it was) the MS AI was "broken" because… you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-walled-garden, corporate-controlled world of AI. I don't.
Edit: Maybe this is not clear to everyone, but think it through a bit further: imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can't use the AI for anything slavery-related, because slavery bad, mmkay? And AI safety insists there's no such thing as fantasy.