this post was submitted on 26 Mar 2024
371 points (96.5% liked)
Programmer Humor
Is this overfitting?
No, this is because the test set can be derived from the training set.
Overfitting alone can't get you to an accuracy of 1.
So, as an ELI5: you basically have to "ask" it about stuff it has never seen before? AI came after my time in higher education.
Yes.
You train it on some data, then ask it about different data. Otherwise it just hard-codes the answers.
They're just like us.
Gotcha, thank you!
Yes, it's called a train-test split, and it's often 80/20 or thereabouts.
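For anyone curious what that looks like in practice, here's a minimal sketch of an 80/20 split in plain Python (libraries like scikit-learn provide a ready-made `train_test_split`, but the idea is just shuffle-and-cut):

```python
import random

def train_test_split(data, test_fraction=0.2, seed=0):
    """Shuffle the data, then hold out a fraction of it for testing."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = data[:]                 # copy so we don't mutate the caller's list
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]  # (train, test)

samples = list(range(100))
train, test = train_test_split(samples)
print(len(train), len(test))  # 80 20
```

The key property is that the two sets are disjoint: the model is evaluated only on samples it never saw during training.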
It can if you don't do a train-test split.
But even on the training set alone, zero loss is definitely a bad sign.
Gotcha!
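To make the "hard-codes the answers" / zero-training-loss point concrete, here's a toy example (the data and model are hypothetical): a "model" that is just a lookup table over its training data gets every training answer right, yet knows nothing about unseen inputs.

```python
# Toy training set: map a number to its parity label.
train_data = {1: "odd", 2: "even", 3: "odd", 4: "even"}

def memorizing_model(x):
    """A 'model' that just memorizes the training answers."""
    return train_data.get(x)  # pure lookup, no generalization

# Training error is zero: every training input is answered perfectly...
train_errors = sum(memorizing_model(x) != y for x, y in train_data.items())
print(train_errors)  # 0

# ...but the model has no answer at all for inputs it hasn't seen.
print(memorizing_model(5))  # None
```

That's why a perfect score measured on the training set tells you nothing about how the model will do on new data.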