LLMs can’t reason
By Michael A. Covington
The word is out: large language models, the systems behind chatbots like ChatGPT, can’t reason.
That’s a problem, because reasoning is what we normally expect computers to do. They’re not just copying machines; they’re supposed to compute things. We already knew that chatbots were prone to “hallucinations” and, more insidiously, to presenting wrong answers confidently as fact.
But now, researchers at Apple have shown that large language models (LLMs) often fail on mathematical word problems, with accuracy dropping when a problem is slightly reworded or padded with irrelevant details.
Read the full story in our Plus Newsletter (21.53.0, 2024-12-30).