this post was submitted on 19 Jul 2023
350 points (95.8% liked)
Technology
you are viewing a single comment's thread
Why do people keep asking language models to do math?
As a biological language model I'm not very proficient at math.
The fact that a souped-up autocomplete system can solve basic algebra is already impressive, IMO. But somehow people think it works like Skynet and keep asking it to solve their calculus homework.
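Purely to illustrate the "souped-up autocomplete" point (a toy sketch, not how any real LLM works): a character-level bigram model trained on a handful of arithmetic strings will "answer" by picking the most frequent continuation it has seen, not by computing anything.

```python
from collections import defaultdict

# Hypothetical toy "autocomplete": count which character follows which
# in a tiny training set of correct arithmetic strings.
training = ["2+2=4", "3+3=6", "4+4=8", "2+3=5"]

counts = defaultdict(lambda: defaultdict(int))
for s in training:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

def complete(prompt, length=1):
    out = prompt
    for _ in range(length):
        nxt = counts[out[-1]]
        if not nxt:
            break
        # Pick the most frequent next character -- pattern matching, not math.
        out += max(nxt, key=nxt.get)
    return out

# The model never saw "2+9", so it just emits whichever digit most often
# followed '=' in training -- a confident-looking, wrong answer.
print(complete("2+9="))  # -> 2+9=4
```

Scaled up by many orders of magnitude and with far better statistics, that's still closer to what a language model does than to what a calculator does.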
Looking for emergent intelligence. They're not designed to do maths, but if they become able to reason mathematically as a byproduct of learning to converse with a human, that's a sign they're developing more than just imitation abilities.
They think that's what "smart" means.
Because it's something completely new that they don't fully understand yet. Computers have been good at math from the start; everything else was built on top of that, and people are used to it.
Now, all of a sudden, the infinitely precise and accurate calculating machine is pulling answers out of its ass and presenting them as fact. That's not easy to grasp.
It's a rat race. We want to get to the point where someone can say "prove P != NP" and a coherent proof gets spat out.
After that, whoever first verifies that it's coherent will receive the money.