Azure Scales 530B Parameter GPT-3 Model with NVIDIA NeMo Megatron | Azure Blog and Updates

This post was co-authored by Hugo Affaticati, Technical Program Manager, Microsoft Azure HPC + AI, and Jon Shelley, Principal TPM Manager, Microsoft Azure HPC + AI.

Natural language processing (NLP), automated speech recognition (ASR), and text-to-speech (TTS) applications are becoming increasingly common in today's world. Most companies have leveraged these technologies to create chatbots that manage customer questions and complaints, streamline operations, and remove some of the heavy cost burden that comes with headcount. But what you may not realize is that they are also being used internally to reduce risk and identify fraudulent behavior, reduce customer complaints, increase automation, and analyze customer sentiment. They are prevalent almost everywhere, but especially in industries such as healthcare, finance, retail, and telecommunications.

NVIDIA recently released the latest version of the NVIDIA NeMo Megatron framework, which is now in open beta. This framework can be used to build and deploy large language models (LLMs) with natural language understanding (NLU).

Combining NVIDIA NeMo Megatron with our Azure AI infrastructure offers a powerful platform that anyone can spin up in minutes, without incurring the costs and burden of managing their own on-premises infrastructure. And of course, we have taken our benchmarking of the new framework to a new level, to truly show the power of the Azure infrastructure.

Reaching new milestones with 530B parameters

We used Azure NDm A100 v4-series virtual machines to run the GPT-3 model on the new NVIDIA NeMo Megatron framework and test the limits of this series. NDm A100 v4 virtual machines are Azure's flagship GPU offerings for AI and deep learning, powered by NVIDIA A100 80GB Tensor Core GPUs. These instances have the most GPU memory capacity and bandwidth, backed by NVIDIA InfiniBand HDR connections to support scaling up and out. Ultimately, we ran a 530B-parameter benchmark on 175 virtual machines, resulting in a training time per step as low as 55.7 seconds (Figure 1). The benchmark measures compute efficiency and how it scales by timing each training step after the model reaches steady state, with a mini-batch size of 1. Such outstanding speed would not have been possible without InfiniBand HDR providing excellent communication between nodes without increased latency.
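The measurement approach described above, averaging the wall-clock time per step only after warmup steps have brought training to steady state, can be sketched roughly as follows. Here `train_step` is a hypothetical callable standing in for one NeMo Megatron training iteration; it is an illustrative placeholder, not part of the framework's API.

```python
import time

def time_steps(train_step, num_warmup: int = 10, num_measure: int = 20) -> float:
    """Average wall-clock seconds per training step, discarding warmup
    iterations so the measurement reflects steady state."""
    # Run warmup steps: compilation, caching, and communicator setup
    # make the first iterations unrepresentatively slow.
    for _ in range(num_warmup):
        train_step()
    # Time only the steady-state steps.
    start = time.perf_counter()
    for _ in range(num_measure):
        train_step()
    return (time.perf_counter() - start) / num_measure
```

For the real benchmark the per-step timing comes from the distributed training loop itself, but the principle of excluding pre-steady-state steps is the same.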

[Graph: Azure's performance results on the GPT-3 530 billion-parameter model with NVIDIA NeMo Megatron. Training time per step decreases almost linearly from 88.2 seconds to 55.8 seconds as the number of nodes increases from 105 to 175.]
Figure 1: Training time per step on the 530B-parameter benchmark from 105 to 175 virtual machines.

These results highlight an almost linear speedup, guaranteeing better performance at higher node counts, which is paramount for heavy or time-sensitive workloads. As shown by these runs with billions of parameters, customers can rest assured that Azure's infrastructure can handle even the most difficult and complex workloads, on demand.
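As a back-of-the-envelope check on "almost linear," the two endpoints reported in Figure 1 (88.2 s/step on 105 VMs, 55.8 s/step on 175 VMs) can be compared against perfect linear scaling. This is a reader's sketch, not a calculation from the article:

```python
def scaling_efficiency(t_base: float, n_base: int,
                       t_scaled: float, n_scaled: int) -> float:
    """Ratio of the ideal (perfectly linear) step time to the observed
    step time at the larger scale; 1.0 means perfect scaling."""
    ideal = t_base * n_base / n_scaled  # linear prediction at n_scaled nodes
    return ideal / t_scaled

eff = scaling_efficiency(t_base=88.2, n_base=105, t_scaled=55.8, n_scaled=175)
print(f"Scaling efficiency from 105 to 175 VMs: {eff:.1%}")
# → Scaling efficiency from 105 to 175 VMs: 94.8%
```

An efficiency near 95 percent over a 1.67x increase in node count is consistent with the near-linear behavior shown in the graph.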

“Speed and scale are both key to developing large language models, and the latest release of the NVIDIA NeMo Megatron framework introduces new techniques to deliver 30 percent faster training for LLMs,” said Paresh Kharya, senior director of accelerated computing at NVIDIA. “Microsoft’s testing with NeMo Megatron 530B also shows that Azure NDm A100 v4 instances powered by NVIDIA A100 Tensor Core GPUs and NVIDIA InfiniBand networking provide a compelling option for achieving linear training speedups at massive scale.”

Showcasing Azure AI capabilities now and in the future

Azure's commitment is to make AI and HPC accessible to everyone. That includes, but is not limited to, providing the best AI infrastructure that scales from the smallest use cases to the heaviest workloads. As we continue to innovate to build the best platform for your AI workloads, our promise to you is to use the latest benchmarks to test our AI capabilities. These results help drive our own innovation and show that there is no limit to what you can do. For all your AI computing needs, Azure has you covered.

Learn more

To learn more about the results or how to recreate them, please see the following links.
