this post was submitted on 26 Mar 2024
371 points (96.5% liked)

Programmer Humor

top 13 comments
[–] marcos@lemmy.world 74 points 7 months ago* (last edited 7 months ago) (1 children)

Your neural network just learned to flawlessly answer any question you send it! Time to put it to good use!

Start asking the important questions!

[–] kernelle@0d.gs 75 points 7 months ago
[–] GissaMittJobb@lemmy.ml 40 points 7 months ago (1 children)
[–] marcos@lemmy.world 53 points 7 months ago (3 children)

No, this is because the testing set can be derived from the training set.

Overfitting alone can't get you to 1.
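A minimal sketch of what that leakage looks like (hypothetical scikit-learn example; the thread doesn't name a framework): if the "test" rows are just copied out of the training set, even a model that purely memorizes its inputs reports a perfect score.

```python
# Hypothetical illustration: the "test" set is derived from the training set.
# A 1-nearest-neighbour classifier memorizes every training point, so
# evaluating on rows it was trained on reports a meaningless 1.0.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)

leaked_X, leaked_y = X[::5], y[::5]     # "test" rows taken straight from the training data
print(model.score(leaked_X, leaked_y))  # 1.0, regardless of how well the model generalizes
```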

[–] victorz@lemmy.world 10 points 7 months ago (2 children)

So as an ELI5, that basically means you have to "ask" it stuff it has never heard before? AI came after my time in higher education.

[–] marcos@lemmy.world 20 points 7 months ago (2 children)

Yes.

You train it on some data, and ask it about different data. Otherwise it just hard-codes the answers.
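As a rough illustration of that "hard-coding" (hypothetical example, not from the thread): an unconstrained decision tree will memorize its training answers, which shows up as perfect training accuracy next to noticeably worse accuracy on data it hasn't seen.

```python
# Hypothetical sketch: a model that "hard-codes" (memorizes) its training answers.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy synthetic labels, so memorizing the training set cannot generalize.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # no depth limit
print("train accuracy:", tree.score(X_tr, y_tr))  # ~1.0: the answers are memorized
print("test accuracy: ", tree.score(X_te, y_te))  # clearly lower on unseen data
```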

[–] Morphit@feddit.uk 7 points 7 months ago

They're just like us.

[–] victorz@lemmy.world 1 points 7 months ago

Gotcha, thank you!

[–] ArtVandelay@lemmy.world 3 points 7 months ago

Yes, it's called a train-test split, and it's often 80/20 or thereabouts.
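For reference, that split is a one-liner in scikit-learn (hypothetical snippet; the 80/20 ratio matches the comment above, the dataset is just a placeholder):

```python
# Hypothetical sketch of an 80/20 train-test split with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # placeholder dataset

X_train, X_test, y_train, y_test = train_test_split(
    X, y,
    test_size=0.2,      # hold out 20% of the data for evaluation
    random_state=42,    # fixed seed so the split is reproducible
)
# Fit on (X_train, y_train) only; report metrics on (X_test, y_test).
```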

[–] sevenapples@lemmygrad.ml 3 points 7 months ago

It can if you don't do a train-test split.

But even if you consider the training set only, having zero loss is definitely a bad sign.
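A toy illustration of that second point (hypothetical NumPy example): a high-degree polynomial can push the training loss to essentially zero by threading through every noisy point, while its error on fresh samples from the same curve is far worse.

```python
# Hypothetical sketch: zero training loss as a warning sign of overfitting.
import numpy as np

rng = np.random.default_rng(0)

def noisy_samples(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + 0.1 * rng.normal(size=n)

x_train, y_train = noisy_samples(10)
x_test, y_test = noisy_samples(200)

# A degree-9 polynomial through 10 points interpolates the training data exactly.
coeffs = np.polyfit(x_train, y_train, deg=9)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print("train MSE:", train_mse)  # ~0: the noise itself has been fit
print("test MSE: ", test_mse)   # much larger: the fit doesn't generalize
```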

[–] GissaMittJobb@lemmy.ml 2 points 7 months ago
[–] gerryflap@feddit.nl 9 points 7 months ago

I like how specifically this relates to my experience with the discount factor gamma in Reinforcement Learning. Like, pretty close to the exact numbers (though missing 0.99 and 0.999)
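For anyone outside RL wondering why those particular values are standard (hypothetical snippet, not from the comment): the discount factor gamma weights future rewards geometrically, so its effective planning horizon is roughly 1/(1-gamma), and 0.9, 0.99 and 0.999 correspond to caring about roughly the next 10, 100 and 1000 steps.

```python
# Hypothetical sketch: how the discount factor gamma weights future rewards.
# With a constant reward of 1 per step, the discounted return converges to
# 1 / (1 - gamma), which is why gamma is often read as an "effective horizon".
for gamma in (0.9, 0.99, 0.999):
    discounted_return = sum(gamma ** t for t in range(100_000))
    print(f"gamma={gamma}: return ~ {discounted_return:.1f}, horizon ~ {1 / (1 - gamma):.0f} steps")
```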

[–] Ragdoll_X@lemmy.world 4 points 7 months ago

Have you tried some data augmentation?
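If the model in question is an image classifier, augmentation usually looks something like this (hypothetical torchvision example; the thread doesn't name a library or data type):

```python
# Hypothetical sketch: simple image data augmentation with torchvision.
# Random flips, crops and colour jitter produce label-preserving variations,
# so the network sees slightly different inputs each epoch instead of
# memorizing the exact training images.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(32, padding=4),  # assumes 32x32 inputs, e.g. CIFAR-10
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Evaluation stays deterministic so test metrics are comparable across runs.
test_transform = transforms.Compose([transforms.ToTensor()])
```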