I think when people say it's "only predicting the next word," that's a bit of an oversimplification meant to convey that the AI is not actually intelligent. It's more or less stringing words together in a way that seems plausible.
Ask Science
Ask a science question, get a science answer.
They're very good at predicting the next word, so their choice of "a" or "an" is likely to make sense in context. But you can absolutely ask a GPT to continue a sentence that appears to use the wrong word.
For instance, I just tried giving a GPT this to start with:
My favorite fruit grows on trees, is red, and can be made into pies. It is a
And the GPT finished it with:
delicious and versatile fruit called apples!
So as you can see, language is malleable enough to make sense of most inputs. Though occasionally, a GPT will get caught up in a nonsensical phrase due to this behavior.
If it generates "I ate" and the next word can be "a" or "an", then it will just generate one or the other based on how often they appear after "I ate". It hasn't decided by this point what it has eaten. After it has generated the next token, for example "I ate an", then its next token is now limited to food items that fit the grammatical structure of this sentence so far. Now it can decide: did I eat an apple? An orange? An awesome steak? etc
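The order of decisions described above can be sketched with a toy, hand-built probability table (this is not a real GPT, and the words and probabilities are made up for illustration). The article "a" or "an" is sampled first, based only on how often each follows "I ate"; only afterwards is the food constrained to agree with whichever article came out:

```python
import random

# Toy next-token table keyed on the last two tokens of context.
# A real model learns these distributions over a huge vocabulary;
# the values here are invented purely to illustrate the mechanism.
NEXT = {
    ("I", "ate"):  {"a": 0.6, "an": 0.4},
    ("ate", "a"):  {"sandwich": 0.5, "burger": 0.5},
    ("ate", "an"): {"apple": 0.5, "orange": 0.5},
}

def sample(dist, rng):
    """Draw one token from a {token: probability} dict."""
    r = rng.random()
    total = 0.0
    for token, p in dist.items():
        total += p
        if r < total:
            return token
    return token  # guard against floating-point rounding

def continue_sentence(tokens, n_steps, rng):
    """Autoregressive generation: each new token conditions the next."""
    for _ in range(n_steps):
        context = tuple(tokens[-2:])
        if context not in NEXT:
            break
        tokens.append(sample(NEXT[context], rng))
    return tokens

rng = random.Random(0)
print(" ".join(continue_sentence(["I", "ate"], 2, rng)))
```

Note that when the article is sampled, nothing about the eventual food has been decided; but once "a" or "an" is on the page, the set of plausible continuations shrinks to match it.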
GPT creates plausible-looking sentences; it has no concept of truth or anything like that. Since, once it has generated an "an," the next word is overwhelmingly likely to begin with a vowel sound, it will choose one that plausibly fits the text that came before. Likewise for an "a."
There is no intent behind it. It doesn't have anything to "say." What it produces is more like nonsense poetry than speech.