this post was submitted on 22 Feb 2024
806 points (98.1% liked)

[–] underisk@lemmy.ml 3 points 9 months ago* (last edited 9 months ago) (1 children)

The part you're missing is the metadata. AI models (neural networks, specifically) are trained on the data along with contextual metadata related to what they're being trained to do. For example, with reddit posts they would feed in things like "this post is popular", "this post was controversial", "this post has many views", etc. in addition to the post text, if they wanted an AI that could spit out posts likely to do well on reddit.
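As a rough sketch of what that pairing might look like (the field names, texts, and the popularity cutoff here are all invented for illustration):

```python
# Hypothetical training records pairing post text with engagement
# metadata; every field name and value below is illustrative.
training_examples = [
    {"text": "I built a tiny robot this weekend", "score": 4521, "controversial": False},
    {"text": "Hot take: tabs beat spaces", "score": 87, "controversial": True},
]

def make_label(example):
    # Collapse the metadata into a training target: did the post do well?
    return 1 if example["score"] > 1000 else 0

labels = [make_label(e) for e in training_examples]
# labels -> [1, 0]
```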

Quantity is a concern: you need a fairly large amount of data to have any hope of training an AI well, but there are diminishing returns past a certain point. And the more data you feed it, the more metadata you potentially have to add, metadata that can often only be provided by humans. For instance, with sentiment analysis you need a human being to sit down and tag samples of text with the emotional responses they evoke, since computers can't really do that automatically.
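The human-labeled samples described above might look something like this (all texts and labels invented for illustration):

```python
from collections import Counter

# Hand-labeled sentiment samples: the text comes from the dataset, the
# label from a human annotator. Every example here is made up.
labeled_samples = [
    ("I absolutely love this phone", "positive"),
    ("The battery died after a week", "negative"),
    ("It arrived on Tuesday", "neutral"),
    ("Best purchase I've made all year", "positive"),
]

# A quick sanity check before training: how balanced is the label set?
label_counts = Counter(label for _, label in labeled_samples)
# label_counts -> Counter({'positive': 2, 'negative': 1, 'neutral': 1})
```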

Quality is less of a concern. Bad-quality data, or data with poorly applied metadata, will result in an AI with less "accuracy", but a few outliers and mistakes here and there won't be too impactful. Quality here could be defined as how closely your training data matches the kind of input you'll expect the model to work with.

[–] madcaesar@lemmy.world 1 points 9 months ago (2 children)

The way I'm reading this, AI is just shitloads of if statements, not some intelligence. It's all garbage.

[–] aidan@lemmy.world 4 points 9 months ago

It's not if statements anymore; now it's just a random number generator plus a lot of multiplication put through a sigmoid function. But yeah, of course there's no intelligence to it. It's extreme calculus.
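That description maps almost directly onto a single artificial neuron: random initial weights, a weighted sum (the multiplication), and a sigmoid squashing the result. A minimal sketch, with made-up inputs:

```python
import math
import random

def sigmoid(x):
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)  # seeded only so the sketch is repeatable
# The "random number generator" part: weights start out random.
weights = [random.uniform(-1.0, 1.0) for _ in range(3)]

def neuron(inputs, weights):
    # The "lot of multiplication" part: a weighted sum of the inputs...
    total = sum(i * w for i, w in zip(inputs, weights))
    # ...put through a sigmoid.
    return sigmoid(total)

out = neuron([0.5, -0.2, 0.8], weights)
# out is some value strictly between 0 and 1
```

Training is then just the process of nudging those random weights until the outputs stop being random.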

[–] underisk@lemmy.ml 1 points 9 months ago

You're not entirely wrong. It's more like a series of multi-dimensional maps with hundreds or thousands of true/false pathways stacked on top of each other, then carved into by training until it takes on a shape that produces the 'correct' output from your inputs.
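Stacking those squashed weighted sums into layers gives the multi-dimensional shape described above; training then "carves" it by nudging the weights. A toy forward pass (weights hand-picked for illustration, not trained):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weight_rows):
    # One unit per row of weights; each unit is a squashed weighted sum.
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weight_rows]

# Two stacked layers: input -> hidden (2 units) -> output (1 unit).
hidden = layer([1.0, 0.0], [[2.0, -2.0], [-2.0, 2.0]])
output = layer(hidden, [[3.0, -3.0]])
# output[0] lands between 0 and 1; training would adjust the weights
# until it matched the 'correct' answer for each input.
```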