
In particular, know how to identify the common and deadly species (eg: much of the genus Amanita) yourself, and get multiple trustworthy field guides for your part of the world.

[–] polysics@lemmy.world 13 points 8 months ago (1 children)

The general public still doesn't seem to grasp the current capabilities of AI. It's still just mimicry. AI is a parrot that "learns" something and repeats it to the best of its ability, but it doesn't understand the thing it learned. You can teach a bird to say "Polly want a cracker," but it doesn't know what a cracker is, and while it does have wants like any other animal, it doesn't know what "want a cracker" actually means.

ML models get a billion images of mushrooms and then "learn" what "mushroom" looks like, but even if the images are properly labeled poisonous and not poisonous, the model doesn't really know that in the way humans do. And it gets even worse when the AI tries to make new things from the sets it's trained on, which all of these systems certainly do. If it's making new mushrooms that don't exist, how can it tell which of these fantasy mushrooms are poisonous and which aren't? It can't know, but it sure as hell can make something up.
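
To make that concrete, here's a minimal, hypothetical sketch of such a classifier (assuming torchvision's pretrained ResNet-18; the class labels, fine-tuned weights, and image path are invented for illustration). The point is in the last lines: the model always commits to one of the labels it was trained on, even for a mushroom that has never existed.

```python
# Hypothetical sketch of a binary "mushroom safety" classifier built on
# torchvision's pretrained ResNet-18. The class names, image path, and
# fine-tuned weights are made up for illustration.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

classes = ["edible", "poisonous"]  # the only labels the model knows

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(classes))
# In practice you'd load fine-tuned weights here; this sketch skips that.
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

img = preprocess(Image.open("fantasy_mushroom.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = F.softmax(model(img), dim=1)[0]

# The model *always* answers with one of its two labels, even for a
# generated mushroom that exists nowhere in nature.
print({c: f"{p.item():.1%}" for c, p in zip(classes, probs)})
```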

Hell, most AI can't even get text right.

Don't trust AI for anything that isn't hard-coded math, or for systems that reference and directly quote known-good sources without adding any kind of creative embellishment.

[–] JohnEdwa@sopuli.xyz 3 points 8 months ago (2 children)

You are lumping a whole lot of different things that work in completely different ways under the singular label of AI. I can't really blame you, since that's what the industry does as well, but image recognition, image generation, and large language models like ChatGPT all work entirely differently.
Image recognition especially can be trained to be extremely accurate with a properly restricted scope and a good dataset. Even so, it would never be enough for identifying mushrooms, because whether the identification is done by a perfect AI or an organic meatbag, mushrooms simply cannot be accurately identified from a single picture: different species can look literally identical to one another.

And parrots totally can learn what words mean. Just like how a dog can learn what "Sit", "Paw" or "Let's go for a walk" mean, parrots just also have the ability to "talk".


[–] spujb@lemmy.cafe 0 points 8 months ago (1 children)

what’s wrong with lumping a lot of things with different substrates together if, as you admit yourself, there’s still no evidence any of them work well?

[–] JohnEdwa@sopuli.xyz 1 points 8 months ago* (last edited 8 months ago) (2 children)

LLMs are the current big buzzword and the main ones that "don't work," because people assume and expect them to be intelligent, to actually know and understand things, which they simply do not. Their purpose is to generate text the way a human would, and at that they actually work very well: give a competent LLM and a human the same writing task, and you are very unlikely to spot which one is the machine unless you can catch it lying. And even then it might just be a clueless human talking about things he kinda understands but isn't an expert in. Like me.
But they are constantly being used for all kinds of purposes they really don't fit well yet, because you can't actually trust anything they say.
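
A toy demonstration of that last point (a hypothetical sketch using Hugging Face's transformers pipeline with GPT-2, a small, dated model chosen only because it runs locally; any prompt works):

```python
# Sketch: a language model just continues text plausibly; it has no
# mechanism for checking whether the continuation is true.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "The death cap mushroom is safe to eat because",
    max_new_tokens=30,
    do_sample=True,
)
# Whatever comes back will read fluently, and will be dangerously wrong.
print(out[0]["generated_text"])
```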

Image generation mainly has issues with hands and fingers, so it isn't bulletproof at making fake realistic imagery, but for many subjects and styles it can create images that are pretty much impossible to identify as generated. Civit.ai is full of examples. Most people think it doesn't work yet because they mostly see someone throwing simple prompts into Midjourney and taking the first thing it generates for an article thumbnail.

And image identification definitely works, but it's... quirky. I said it can't be used to identify mushrooms because nothing can distinguish two things that look exactly the same. But give a model enough photos of every single Hot Wheels car that exists, and you can get one that will reliably recognize which one you have. It will also tell you that a shoe or a tree is one of them, though, because it only knows about Hot Wheels cars.
Making one that tries to identify absolutely everything from a photo, like Google Lens, will still misidentify some things because the dataset is so enormous, but so would a human. The difference is that for an AI, "I don't know" is never an option: it always gives the most likely answer it can come up with.
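
That last point is a property of how classifiers are usually built: a softmax layer renormalizes whatever scores come out, so even weak evidence becomes a confident-looking probability distribution. A tiny sketch (the class names and logit values here are made up):

```python
# Toy illustration of the "no 'I don't know' option" point: softmax
# renormalizes whatever logits come out, so even a shoe photo fed to a
# Hot Wheels-only classifier gets a confident-looking top answer.
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability, then normalize to sum to 1.
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical classifier that only knows three Hot Wheels models.
classes = ["Twin Mill", "Bone Shaker", "Deora II"]

# Weak, near-uniform scores, as you might get for a photo of a shoe...
logits_for_a_shoe = np.array([1.2, 0.4, 0.1])

# ...still normalize to probabilities summing to 1, so the top class
# comes out as a confident-looking ~56% "Twin Mill".
for name, p in zip(classes, softmax(logits_for_a_shoe)):
    print(f"{name}: {p:.0%}")
```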

[–] spujb@lemmy.cafe 0 points 8 months ago

okay? so i am quite aware of all of this already; none of this info is new.

my question is still, “what’s wrong with lumping all of these technologies together as ‘AI’ when all of them are ineffective at identifying mushrooms (and certain other tasks)?”
