"Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?"
The problem has a light quiz style and arguably poses no challenge for most adult humans, and probably even for some children.
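For reference, the intended answer is M + 1: Alice's brother has all of Alice's M sisters as sisters, plus Alice herself, while N is a distractor. A minimal check in Python (the function name is mine, not the paper's):

    def sisters_of_alices_brother(n_brothers: int, m_sisters: int) -> int:
        # Alice's brother shares all of Alice's M sisters, and Alice herself
        # is also his sister; the brother count N is a distractor.
        return m_sisters + 1

    assert sisters_of_alices_brother(3, 6) == 7  # N=3, M=6 -> 7 sisters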
The researchers posed varied versions of this simple problem to various state-of-the-art LLMs that claim strong reasoning capabilities.
(GPT-3.5/4/4o, Claude 3 Opus, Gemini, Llama 2/3, Mistral and Mixtral, including the very recent DBRX and Command R+)
They observed a severe collapse of reasoning across most of the tested models, which were unable to answer the simple question as formulated above despite their claimed strong reasoning capabilities. Notable exceptions are Claude 3 Opus and GPT-4, which occasionally manage to provide correct responses.
The breakdown can be considered dramatic not only because it happens on such a seemingly simple problem, but also because the models tend to express strong overconfidence, reporting their wrong solutions as correct and often backing the final answer with confabulations: explanations that mimic a reasoning-like tone but consist of nonsensical arguments in support of equally nonsensical, wrong final answers.
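For anyone who wants to reproduce this informally, here is a rough sketch of the kind of probe described above, using the OpenAI Python client. The prompt wording follows the problem statement, but the answer-checking heuristic and the (N, M) pairs are my own assumptions, not the paper's actual harness:

    import re
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask_aiw(n: int, m: int, model: str = "gpt-4o") -> bool:
        prompt = (f"Alice has {n} brothers and she also has {m} sisters. "
                  "How many sisters does Alice's brother have?")
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        # Correct answer is M + 1; this is a crude check that the number
        # appears anywhere in the reply, so treat results as indicative only.
        return str(m + 1) in re.findall(r"\d+", reply or "")

    # Probe a few (N, M) variations, in the spirit of the paper.
    for n, m in [(3, 6), (4, 1), (2, 2)]:
        print(f"N={n}, M={m}: correct={ask_aiw(n, m)}")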
I don't know about "common", but you and I agree on a lot. LLMs are not a breakthrough in artificial cognition so much as a breakthrough in linguistics: the discovery that coherent English can be produced by unexpectedly small mathematical structures. It was hubris on our part to imagine that human language is more complex than it is, or that our ideas are more unique than they are.