I agree with your broad point, but absolutely not in this case. Large Language Models are 100% AI: they're fairly cutting edge in the field, their architecture is loosely inspired by how human brains work, and even a few of the computer scientists working on them have wondered whether this is genuine intelligence.
On the spectrum from scripted enemy behaviour in Doom up to sci-fi depictions of sentient silicon-based minds, I think we're past the halfway point.
Humans invent stuff (without realising it) too, so I don't think that's enough to disqualify something from being intelligent.
The interesting question is how much of this is down to the training goal effectively being "a response convincing enough to satisfy a person" (pretty much the same incentive as on social media), and how much is a fundamental flaw in the whole approach.