Reasoning reimagined: Introducing Phi-4-mini-flash-reasoning

Unlock faster, more efficient reasoning with Phi-4-mini-flash-reasoning, optimized for edge, mobile, and real-time applications.

State-of-the-art architecture redefines speed for reasoning models

Microsoft is excited to unveil a new addition to the Phi model family: Phi-4-mini-flash-reasoning. Purpose-built for scenarios where compute, memory, and latency are tightly constrained, this new model is engineered to bring advanced reasoning capabilities to edge devices, mobile applications, and other resource-constrained environments. It follows Phi-4-mini, but is built on a new hybrid architecture that achieves up to 10 times higher throughput and a 2 to 3 times average reduction in latency, enabling significantly faster inference without sacrificing reasoning performance. Ready to power real-world solutions that demand efficiency and flexibility, Phi-4-mini-flash-reasoning is available on Azure AI Foundry, NVIDIA API Catalog, and Hugging Face today.

Efficiency without compromise

Phi-4-mini-flash-reasoning balances math reasoning ability with efficiency, making it potentially suitable for educational applications, real-time logic-based applications, and more.

Just like its predecessor, Phi-4-mini-flash-reasoning is a 3.8 billion parameter open model optimized for advanced math reasoning. It supports a 64K token context length and is fine-tuned on high-quality synthetic data to deliver reliable, logic-intensive performance.
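To try the model, a minimal sketch using the standard Hugging Face transformers text-generation API is shown below. The model ID and loading details are assumptions; check the model card on Hugging Face for the exact, supported usage.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
# The model ID is assumed; a recent transformers version (and possibly
# trust_remote_code=True) may be required for this hybrid architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-flash-reasoning"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 3.8B parameters fits a single modern GPU
    device_map="auto",
)

prompt = "Solve: if 3x + 5 = 20, what is x? Show your reasoning."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```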

What’s new?

At the core of Phi-4-mini-flash-reasoning is the newly introduced decoder-hybrid-decoder architecture, SambaY, whose central innovation is the Gated Memory Unit (GMU), a simple yet effective mechanism for sharing representations between layers. The architecture includes a self-decoder that combines Mamba (a State Space Model) and Sliding Window Attention (SWA), along with a single layer of full attention. It also includes a cross-decoder that interleaves expensive cross-attention layers with the new, efficient GMUs. This new architecture with GMU modules dramatically improves decoding efficiency, boosts long-context retrieval performance, and enables the architecture to deliver exceptional performance across a wide range of tasks.

Key benefits of the SambaY architecture include:

  • Enhanced decoding efficiency.
  • Preserves linear prefill time complexity.
  • Increased scalability and enhanced long context performance.
  • Up to 10 times higher throughput.
Our decoder-hybrid-decoder architecture, taking Samba [RLL+25] as the self-decoder. Gated Memory Units (GMUs) are interleaved with the cross-attention layers in the cross-decoder to reduce decoding computation complexity. As in YOCO [SDZ+24], the full attention layer only computes the KV cache during prefilling with the self-decoder, leading to linear computation complexity for the prefill stage.
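To make the gating idea concrete, here is an illustrative PyTorch sketch of an elementwise gated memory unit in the spirit of the GMU described above. The exact formulation, dimensions, and placement within the cross-decoder are assumptions; the authoritative details are in the SambaY technical report.

```python
# Illustrative sketch only: an elementwise gating unit in the spirit of the
# GMU, which reuses a memory state from an earlier self-decoder layer instead
# of recomputing expensive cross-attention at every cross-decoder layer.
import torch
import torch.nn as nn

class GatedMemoryUnit(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_model, bias=False)
        self.out_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, hidden: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # Gate the shared memory with the current hidden state (elementwise);
        # this sharing of representations between layers is what replaces a
        # much costlier cross-attention computation.
        gate = torch.nn.functional.silu(self.gate_proj(hidden))
        return self.out_proj(memory * gate)
```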

Phi-4-mini-flash-reasoning benchmarks 

Like all models in the Phi family, Phi-4-mini-flash-reasoning is deployable on a single GPU, making it accessible for a broad range of use cases. What sets it apart, however, is its architectural advantage. The new model achieves significantly lower latency and higher throughput compared to Phi-4-mini-reasoning, particularly in long-context generation and latency-sensitive reasoning tasks.

This makes Phi-4-mini-flash-reasoning a compelling option for developers and enterprises looking to deploy intelligent systems that require fast, scalable, and efficient reasoning, whether on-premises or on-device.

The top plot shows inference latency as a function of generation length, while the bottom plot illustrates how inference latency varies with throughput. Both experiments were conducted using the vLLM inference framework on a single A100-80GB GPU with tensor parallelism (TP) set to 1.
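For reference, a simple latency measurement in the spirit of this setup might look like the sketch below, using vLLM on a single GPU with tensor parallelism set to 1. The model ID and sampling settings are assumptions, not the actual benchmark harness.

```python
# Sketch of a simple latency/throughput measurement with vLLM, echoing the
# setup above (single GPU, tensor parallelism = 1). Model ID is assumed;
# consult the model card for supported vLLM versions and flags.
import time
from vllm import LLM, SamplingParams

llm = LLM(
    model="microsoft/Phi-4-mini-flash-reasoning",  # assumed model ID
    tensor_parallel_size=1,
)
params = SamplingParams(temperature=0.6, max_tokens=2048)

start = time.perf_counter()
outputs = llm.generate(["Prove that the sum of two even numbers is even."], params)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated} tokens in {elapsed:.2f}s ({generated / elapsed:.1f} tok/s)")
```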
A more accurate evaluation was used, where Pass@1 accuracy is averaged over 64 samples for AIME24/25 and over 8 samples for Math500 and GPQA Diamond. In this graph, Phi-4-mini-flash-reasoning outperforms Phi-4-mini-reasoning and surpasses models twice its size.
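To make the averaging concrete: for each question, Pass@1 is estimated as the fraction of k independent samples that are correct, and these per-question estimates are then averaged over the benchmark. The sketch below uses placeholder data, not real benchmark results.

```python
# Placeholder sketch of the averaged Pass@1 metric described above:
# per question, Pass@1 is the fraction of k independent samples that are
# correct; the final score averages this across the benchmark.
def averaged_pass_at_1(correct_per_question: list[list[bool]]) -> float:
    per_question = [sum(samples) / len(samples) for samples in correct_per_question]
    return sum(per_question) / len(per_question)

# Hypothetical example: 3 questions, k = 4 samples each.
results = [
    [True, True, False, True],    # question 1: 0.75
    [False, False, False, True],  # question 2: 0.25
    [True, True, True, True],     # question 3: 1.00
]
print(averaged_pass_at_1(results))  # 0.666...
```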

What are the potential use cases?

Thanks to its reduced latency, improved throughput, and focus on math reasoning, the model is ideal for:

  • Adaptive learning platforms, where real-time feedback loops are essential.
  • On-device reasoning assistants, such as mobile study aids or edge-based logic agents.
  • Interactive tutoring systems that dynamically adjust content difficulty based on a learner’s performance.

Its strength in math and structured reasoning makes it especially valuable for education technology, lightweight simulations, and automated assessment tools that require reliable logic inference with fast response times.

Developers are encouraged to connect with peers and Microsoft engineers through the Microsoft Developer Discord community to ask questions, share feedback, and explore real-world use cases together.

Microsoft’s commitment to trustworthy AI

Organizations across industries are leveraging Azure AI and Microsoft 365 Copilot capabilities to drive growth, increase productivity, and create value-added experiences.

We’re committed to helping organizations use and build AI that is trustworthy, meaning it is secure, private, and safe. We bring best practices and learnings from decades of researching and building AI products at scale to provide industry-leading commitments and capabilities that span our three pillars of security, privacy, and safety. Trustworthy AI is only possible when you combine our commitments, such as our Secure Future Initiative and our responsible AI principles, with our product capabilities to unlock AI transformation with confidence.

Phi models are developed in accordance with Microsoft AI principles: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.

The Phi model family, including Phi-4-mini-flash-reasoning, employs a robust safety post-training strategy that integrates Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Reinforcement Learning from Human Feedback (RLHF). These techniques are applied using a combination of open-source and proprietary datasets, with a strong emphasis on ensuring helpfulness, minimizing harmful outputs, and addressing a broad range of safety categories. Developers are encouraged to apply responsible AI best practices tailored to their specific use cases and cultural contexts.

Read the model card to learn more about risks and mitigation strategies.

Learn more about the new model

Create with Azure AI Foundry
