Very small language models (SLMs) can outperform leading large language models (LLMs) on reasoning tasks, according to a new study by Shanghai AI Laboratory. The authors show that with the right tools and test-time scaling techniques, an SLM with 1 billion parameters can outperform a 405B LLM on complicated math benchmarks.
The ability to deploy SLMs on complex reasoning tasks could be very useful as enterprises look for new ways to use these models in different environments and applications.
Test-time scaling explained
Test-time scaling (TTS) is the process of giving LLMs extra compute cycles during inference to improve their performance on various tasks. Leading reasoning models, such as OpenAI o1 and DeepSeek-R1, use "internal TTS," which means they are trained to "think" slowly by generating a long string of chain-of-thought (CoT) tokens.
An alternative approach is "external TTS," where model performance is enhanced with (as the name implies) outside help. External TTS is suitable for repurposing existing models for reasoning tasks without further fine-tuning them. An external TTS setup is usually composed of a "policy model," which is the main LLM generating the answer, and a process reward model (PRM) that evaluates the policy model's answers. These two components are coupled together through a sampling or search method.
The simplest setup is "best-of-N," where the policy model generates several answers and the PRM selects one or more of the best answers to compose the final response. More advanced external TTS methods use search. In "beam search," the model breaks the answer down into multiple steps.
For each step, it samples several answers and runs them through the PRM. It then chooses one or more suitable candidates and generates the next step of the answer. And in "diverse verifier tree search" (DVTS), the model generates several branches of answers to create a more diverse set of candidate responses before synthesizing them into a final answer.
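To make the policy-model-plus-PRM setup concrete, here is a minimal best-of-N sketch in Python. The `generate_answer` and `score_answer` callables stand in for whatever policy model and process reward model you plug in; they are illustrative placeholders, not APIs from the study.

```python
from typing import Callable, List, Tuple

def best_of_n(
    prompt: str,
    generate_answer: Callable[[str], str],      # policy model: prompt -> candidate answer
    score_answer: Callable[[str, str], float],  # PRM: (prompt, answer) -> quality score
    n: int = 8,
) -> Tuple[str, float]:
    """Sample n candidate answers from the policy model and return the one
    the process reward model scores highest."""
    candidates: List[Tuple[str, float]] = []
    for _ in range(n):
        answer = generate_answer(prompt)  # one sample from the policy model
        candidates.append((answer, score_answer(prompt, answer)))
    return max(candidates, key=lambda pair: pair[1])

# Toy stand-ins for the two models, just to show how the pieces connect:
if __name__ == "__main__":
    import random

    def toy_policy(prompt: str) -> str:
        return f"candidate-answer-{random.randint(0, 100)}"

    def toy_prm(prompt: str, answer: str) -> float:
        return random.random()

    best, score = best_of_n("What is 17 * 23?", toy_policy, toy_prm, n=4)
    print(best, score)
```

Beam search and DVTS follow the same pattern, except the PRM is called at each intermediate step rather than only on complete answers.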

What is the right scaling strategy?
Choosing the right TTS strategy depends on several factors. The study's authors conducted a systematic investigation of how different policy models and PRMs affect the efficiency of TTS methods.
Their findings show that efficiency largely depends on the policy and PRM models. For example, for small policy models, search-based methods outperform best-of-N. However, for large policy models, best-of-N is more effective because the models have better reasoning capabilities and don't need a reward model to verify every step of their reasoning.
Their findings also show that the right TTS strategy depends on the difficulty of the problem. For example, for small policy models with fewer than 7B parameters, best-of-N works better for easy problems, while beam search works better for harder problems. For policy models that have between 7B and 32B parameters, diverse verifier tree search performs well for easy and medium problems, and beam search works best for hard problems. But for large policy models (72B parameters and more), best-of-N is the optimal strategy for all difficulty levels.
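These rules of thumb can be expressed as a small lookup, shown below as a minimal sketch. The size thresholds and difficulty buckets mirror the findings summarized above; the function and parameter names are illustrative, not from the paper.

```python
def pick_tts_strategy(policy_params_billions: float, difficulty: str) -> str:
    """Pick a test-time scaling strategy from policy-model size and problem
    difficulty ('easy', 'medium' or 'hard'), following the study's rules of thumb."""
    if policy_params_billions < 7:
        # Small policy models: best-of-N for easy problems, beam search for harder ones.
        return "best-of-N" if difficulty == "easy" else "beam search"
    if policy_params_billions <= 32:
        # Mid-sized policy models (7B-32B): DVTS for easy/medium, beam search for hard.
        return "beam search" if difficulty == "hard" else "diverse verifier tree search"
    # Large policy models (72B and up): best-of-N at every difficulty level.
    # (The 32B-72B range isn't covered by the reported findings; treated as large here.)
    return "best-of-N"

print(pick_tts_strategy(3, "hard"))     # beam search
print(pick_tts_strategy(32, "medium"))  # diverse verifier tree search
print(pick_tts_strategy(405, "hard"))   # best-of-N
```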
Why small models can beat large models

Based on these findings, developers can create compute-optimal TTS strategies that take into account the policy model, PRM and problem difficulty to make the best use of their compute budget when solving reasoning problems.
For example, the researchers found that a Llama-3.2-3B model with the compute-optimal TTS strategy outperforms the Llama-3.1-405B on MATH-500 and AIME24, two complicated math benchmarks. This shows that an SLM can outperform a model that is 135X larger when using the compute-optimal TTS strategy.
In other experiments, they found that a Qwen2.5 model with 500 million parameters can outperform GPT-4o with the right compute-optimal TTS strategy. Using the same strategy, the 1.5B distilled version of DeepSeek-R1 outperformed o1-preview and o1-mini on MATH-500 and AIME24.
When accounting for both training and inference compute budgets, the findings show that with compute-optimal scaling strategies, SLMs can outperform larger models with 100-1,000X fewer FLOPS.
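To see why a much smaller model sampling many answers can still come out ahead on compute, here is a rough back-of-envelope calculation. It uses the common approximation of about 2 FLOPs per parameter per generated token for inference; the sample count and answer length are assumptions for illustration, not figures from the paper, and the study's 100-1,000X gap also factors in training compute, which this sketch ignores.

```python
def flops_per_token(params: float) -> float:
    # Standard rough approximation: ~2 FLOPs per parameter per generated token.
    return 2 * params

small_model_params = 1e9     # 1B-parameter policy model
large_model_params = 405e9   # 405B-parameter model
answer_tokens = 512          # assumed average answer length
samples = 64                 # assumed number of candidates the small model samples

small_total = flops_per_token(small_model_params) * answer_tokens * samples
large_total = flops_per_token(large_model_params) * answer_tokens  # single answer

print(f"1B model, 64 samples: {small_total:.2e} FLOPs")   # ~6.6e13
print(f"405B model, 1 answer: {large_total:.2e} FLOPs")   # ~4.1e14
print(f"ratio: {large_total / small_total:.1f}x")         # ~6.3x
```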
The researchers' results show that compute-optimal TTS significantly enhances the reasoning capabilities of language models. However, as the policy model grows larger, the improvement from TTS gradually decreases.
“This suggests that the effectiveness of TTS is directly related to the reasoning ability of the policy model,” the researchers write. “Specifically, for models with weak reasoning abilities, scaling test-time compute leads to a substantial improvement, whereas for models with strong reasoning abilities, the gain is limited.”
The study validates that SLMs can perform better than larger models when applying compute-optimal test-time scaling methods. While this study focuses on math benchmarks, the researchers plan to expand their work to other reasoning tasks such as coding and chemistry.