Customers worldwide rely on Microsoft Azure to drive advances in our environment, public health, energy sustainability, weather modeling, economic development, and more. Finding solutions to these important challenges requires massive amounts of focused computing power. Customers are increasingly finding that the best way to access such high-performance computing (HPC) is through the agility, scale, security, and leading-edge performance of Azure's purpose-built HPC and AI cloud services.
Azure's market-leading vision for HPC and AI is built on a core of genuine, recognized HPC expertise, using proven HPC technology and design principles, enhanced with the best features of the cloud. The result is a capability that delivers performance, scale, and value unlike any other cloud. It means applications scaling 12 times higher than on other public clouds. It means higher application performance per node. It means powering AI workloads for one customer with a supercomputer fit to rank among the top five in the world. It also means delivering massive compute power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.
This year at NVIDIA GTC 21, we're spotlighting some of the most transformational applications powered by NVIDIA accelerated computing, highlighting our commitment to edge, on-premises, and cloud computing. Registration is free, so sign up to learn how Microsoft is powering transformation.
AI and supercomputing scale
The AI and machine learning space continues to be one of the most inspiring areas of technical evolution since the internet. The trend toward using massive AI models to power a broad range of tasks is changing how AI is built. Training models at this scale requires large clusters of hundreds of machines with specialized AI accelerators, interconnected by high-bandwidth networks within and across the machines. We have been building such clusters in Azure to enable new natural language generation and understanding capabilities across Microsoft products.
The work we have done on large-scale compute clusters, leading network design, and the software stack to manage it all, including Azure Machine Learning, ONNX Runtime, and other Azure AI services, is directly aligned with our AI at Scale strategy.
Machine learning at the edge
Microsoft provides various solutions in the intelligent edge portfolio to ensure that customers can run machine learning not only in the cloud but also at the edge. These solutions include Azure Stack Hub, Azure Stack Edge, and IoT Edge.
Whether you are capturing sensor data and inferencing at the edge, or performing end-to-end processing with model training in Azure and deploying the trained models to the edge for enhanced inferencing, Microsoft can support your needs however and wherever you need.
Visualization and GPU workstations
Azure enables a wide range of visualization workloads, which are essential for desktop virtualization as well as professional graphics such as computer-aided design, content creation, and interactive rendering. Visualization workloads on Azure are powered by NVIDIA's world-class graphics processing units (GPUs) and RTX technology, the world's preeminent visual computing platform.
With access to graphics workstations on the Azure cloud, artists, designers, and technical professionals can work remotely, from anywhere, and from any connected device. See our NV-series virtual machines (VMs) for Windows and Linux.
Recapping 2021 moments with Azure and NVIDIA technologies
From deforestation to wildfire management to protecting endangered animals, studying wildlife populations is critical to a sustainable future. Learn how Wildlife Protection Solutions works with Microsoft AI for Earth to provide the monitoring technology that conservation groups need to keep watch over wild places and protect wildlife, using an infrastructure of Azure High Performance Computing virtual machines with NVIDIA V100 GPUs.
With tens of thousands of Chinese visitors each year, the Van Gogh Museum wanted to create something unique for this audience. Enter WeChat, an app that could transform portrait photos into digital paintings reminiscent of Van Gogh's art. Users, able to see how the artist would have painted them, would ideally be drawn closer to his art through this unique, personal experience. Read about how the Van Gogh Museum accomplished this through the use of Azure High Performance Computing, Azure Machine Learning, and more.
FLSmidth has an ambitious goal of zero emissions by 2030, but the company was hampered by the latency and performance limitations of its on-premises infrastructure. By moving to Microsoft Azure in collaboration with partner UberCloud, FLSmidth found the right vehicle for optimizing the engineering simulation platforms that depend on high-performance computing. The switch eliminated latency, democratized their platform, and produced results 10 times faster than their previous infrastructure.
Earlier 2021 Azure HPC and AI product launches
Azure announces general availability of scale-out NVIDIA A100 GPU clusters: the fastest public cloud supercomputer, the Azure ND A100 v4 Virtual Machine, powered by NVIDIA A100 Tensor Core GPUs and designed to let our most demanding customers scale up and scale out without slowing down.
In the June 2021 TOP500 list, Microsoft Azure took public cloud services to a new level, with systems that claimed four consecutive spots, from No. 26 to No. 29, on the TOP500 list. They are parts of a global AI supercomputer known as the ND A100 v4 cluster, available on demand in four global regions today. These rankings were achieved on a fraction of our overall cluster size. Each of the systems delivered 16.59 petaflops on the HPL benchmark, also known as Linpack, a traditional measure of HPC performance on 64-bit floating-point math that is the basis for the TOP500 rankings.
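For context on where a figure like 16.59 petaflops comes from: HPL solves a dense n-by-n linear system, and its standard operation count is (2/3)n³ + 2n². A minimal sketch of converting problem size and runtime into petaflop/s (the problem size and runtime below are hypothetical, chosen only to illustrate the arithmetic; the actual Azure run parameters are not stated in this post):

```python
def hpl_flops(n: int) -> float:
    """Floating-point operations for an n x n HPL (Linpack) run:
    the standard operation count (2/3)*n^3 + 2*n^2."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2


def hpl_petaflops(n: int, seconds: float) -> float:
    """Achieved performance in petaflop/s for an HPL run of size n
    completed in the given wall-clock time."""
    return hpl_flops(n) / seconds / 1e15


# Hypothetical illustration: a run at n = 10,000,000 finishing in about
# 11 hours would land in the same mid-teens petaflop/s range reported here.
rate = hpl_petaflops(10_000_000, seconds=40_000)
print(f"{rate:.2f} petaflop/s")
```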
Azure announces the DeepSpeed- and Megatron-powered Megatron-Turing Natural Language Generation model (MT-NLG), the largest and most powerful monolithic transformer language model trained to date, with 530 billion parameters. It is the result of a research collaboration between Microsoft and NVIDIA to further parallelize and optimize the training of very large AI models.
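A back-of-the-envelope check on that 530-billion-parameter figure: transformer block parameters are commonly approximated as 12·L·h² (attention plus MLP weights), with V·h more for the token embeddings. The layer and hidden-size figures below are taken from the published MT-NLG description (105 layers, hidden size 20,480) and the vocabulary size is an assumption based on the standard GPT-2 tokenizer; treat this as an illustrative estimate, not the exact accounting:

```python
def transformer_params(layers: int, hidden: int, vocab: int) -> int:
    """Rough transformer parameter count: 12*L*h^2 for the blocks
    (attention ~4h^2 + MLP ~8h^2 per layer) plus vocab*h embeddings."""
    block_params = 12 * layers * hidden**2
    embed_params = vocab * hidden
    return block_params + embed_params


# MT-NLG's published depth and width; vocab size assumed (GPT-2 BPE).
mt_nlg = transformer_params(layers=105, hidden=20480, vocab=50257)
print(f"~{mt_nlg / 1e9:.0f}B parameters")  # prints ~530B
```

The estimate lands within a fraction of a percent of the headline 530B figure, which is why the 12·L·h² rule of thumb is a useful sanity check when reading about very large models.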
Join us at the NVIDIA GTC Fall 2021 conference
Microsoft Azure is sponsoring NVIDIA GTC 2021 conference workshops and training. The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI, accelerated computing, and accelerated data science to help developers, data scientists, and other professionals solve their most challenging problems. These in-depth workshops are taught by experts in their respective fields, delivering industry-leading technical knowledge to drive breakthrough results for individuals and organizations.
On-demand Microsoft sessions at GTC
Microsoft session recordings will be available on the GTC website starting April 12, 2021. You can find a list of the Microsoft virtual sessions, along with corresponding links, in the Microsoft Tech Community blog here.