AI Models Embrace Humanlike Reasoning
beSpacific 2025-05-12
IEEE Spectrum – “Researchers are pushing beyond chain-of-thought prompting to new cognitive techniques. Since OpenAI’s launch of ChatGPT in 2022, AI companies have been locked in a race to build ever-larger models, investing huge sums in data centers. But toward the end of last year, there were rumblings that the benefits of model scaling were hitting a wall, and the underwhelming performance of OpenAI’s largest-ever model, GPT-4.5, gave further weight to the idea.

This situation is prompting a shift in focus: rather than building larger models, researchers are giving them more time to think through problems, aiming to make machines “think” more like humans. In 2023, a team at Google introduced the chain-of-thought (CoT) technique, in which large language models (LLMs) work through a problem step by step. This approach underpins the impressive capabilities of a new generation of reasoning models such as OpenAI’s o3, Google’s Gemini 2.5, Anthropic’s Claude 3.7, and DeepSeek’s R1. AI papers are now awash with references to “thought,” “thinking,” and “reasoning” as cognitively inspired techniques proliferate.

“Since about the spring of last year, it has been clear to anybody who is serious about AI research that the next revolution will not be about scale,” says Igor Grossmann, a professor of psychology at the University of Waterloo, Canada. “It’s not about the size anymore; it’s more about how you operate with that knowledge base, how you optimize it to fit different contexts.”
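The chain-of-thought idea the article describes can be sketched in a few lines: the only change from ordinary prompting is the text of the prompt itself, which cues the model to emit intermediate reasoning steps before an answer. The sketch below is a minimal illustration, not any vendor's API; the function names and the example question are invented for this sketch, and the “Let's think step by step” cue is the commonly used zero-shot CoT phrasing. The actual model call is omitted.

```python
# Minimal sketch of chain-of-thought (CoT) prompting. A direct prompt asks
# for the answer outright; a CoT prompt appends a cue that elicits
# step-by-step reasoning. These helpers are hypothetical, not a real API.

def build_direct_prompt(question: str) -> str:
    """Baseline prompt: ask for the answer with no intermediate reasoning."""
    return f"Q: {question}\nA:"

def build_cot_prompt(question: str) -> str:
    """CoT prompt: append a cue that elicits step-by-step reasoning."""
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    q = "A train travels 60 km in 45 minutes. What is its speed in km/h?"
    print(build_direct_prompt(q))
    print(build_cot_prompt(q))
```

In practice, either prompt string would be passed to whatever LLM API is in use; the reasoning models the article names go further by training the model to generate such intermediate steps on its own, without the explicit cue.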