this post was submitted on 14 Aug 2023
123 points (100.0% liked)

Technology

[–] nuke@yah.lol 31 points 1 year ago* (last edited 1 year ago) (2 children)

That's a bit of a dramatic take. The AI suggests recipes based on ingredients the user enters. These users deliberately entered bleach, glue, and other non-food items to generate non-food recipes.

[–] chameleon@kbin.social 26 points 1 year ago (1 children)

If you're building something to come up with recipes, "is this ingredient unsuitable for human consumption" should be near the top of your list of things to check.

Somehow, every time I see a generic LLM shoved into a product that doesn't really benefit from one, basic safety checks like that never seem to have occurred to the person building it.
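Even a crude denylist sketch (all names and terms here are hypothetical, just for illustration) would catch the obvious cases before anything reaches the model:

```python
# Hypothetical sketch of a pre-model safety check: reject inputs that
# contain known non-food terms. The term list is illustrative only.
NON_FOOD_TERMS = {"bleach", "glue", "ammonia", "detergent"}

def is_likely_edible(ingredient: str) -> bool:
    """Return False if the ingredient contains a known non-food term."""
    words = set(ingredient.lower().split())
    return not (words & NON_FOOD_TERMS)

def validate_ingredients(ingredients: list[str]) -> list[str]:
    """Raise if any ingredient looks like a non-food item."""
    rejected = [i for i in ingredients if not is_likely_edible(i)]
    if rejected:
        raise ValueError(f"Refusing non-food ingredients: {rejected}")
    return ingredients
```

A denylist obviously can't be exhaustive, but it's the kind of cheap guardrail that would have caught the stunts in the article.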

[–] nuke@yah.lol 2 points 1 year ago (1 children)

Fair point, I agree there should be such a check. For now, it seems the only people affected were those who intentionally tried to mess with it. Complete coverage will be hard to achieve, because what's fine and healthy for one person could trigger a deadly allergic reaction in another. There will always have to be some personal accountability: the person preparing a meal needs to understand that what they're making is safe.

[–] DeltaTangoLima@reddrefuge.com 7 points 1 year ago

They're a supermarket, and they own the data for the items they stock. There's no reason they couldn't have used their own product taxonomy to block non-food items from ever reaching their poorly implemented AI.

Love how they blame the people who tried it. As if it's their fault the AI was released for public use without anyone thinking through the consequences. Typical corporate blame-shifting.

[–] otter@lemmy.ca 2 points 1 year ago

Would it be better to have a massive list of food items to pick from?

That should take care of bad inputs, at least somewhat.
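The pick-from-a-list idea amounts to an allowlist built from the store's own catalogue. A minimal sketch, assuming a hypothetical `FOOD_CATALOGUE` set standing in for the supermarket's real product data:

```python
# Hypothetical sketch: restrict free-text input to items found in the
# store's food catalogue, so non-food items can't reach the model at all.
FOOD_CATALOGUE = {"flour", "eggs", "milk", "rice", "chicken"}  # stand-in data

def filter_to_catalogue(user_items: list[str]) -> tuple[list[str], list[str]]:
    """Split user input into catalogued food items and everything else."""
    accepted = [i for i in user_items if i.lower() in FOOD_CATALOGUE]
    dropped = [i for i in user_items if i.lower() not in FOOD_CATALOGUE]
    return accepted, dropped
```

An allowlist is stricter than a denylist: anything not positively known to be food is simply dropped, which is the safer default for a recipe generator.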