Microsoft has built the world’s largest cloud-based AI supercomputer, one that is already many times larger than it was just six months ago, paving the way for a future of agentic systems.
For example, its AI infrastructure can train and run inference on the most sophisticated large language models at massive scale on Azure. In parallel, Microsoft is also developing some of the most compact small language models with Phi-3, which can run offline on your mobile phone.
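To make the small-model point concrete, here is a minimal sketch of running a Phi-3 model locally with the Hugging Face transformers library. It assumes the published microsoft/Phi-3-mini-4k-instruct checkpoint and enough local memory for the ~3.8B-parameter model; depending on your transformers version you may also need `trust_remote_code=True`. This is illustrative only, not the optimized on-device runtime Microsoft ships for phones.

```python
# Minimal local Phi-3 inference sketch (illustrative, not Microsoft's
# on-device stack). Once the weights are cached, generation runs offline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the memory footprint small
    device_map="auto",          # CPU or GPU, whatever is available
)

# Phi-3 is an instruct model, so format the prompt with its chat template.
messages = [{"role": "user", "content": "Summarize what a small language model is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```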
Watch Azure CTO and Microsoft Technical Fellow Mark Russinovich demonstrate this hands-on and walk through the mechanics of how Microsoft optimizes and delivers performance with its AI infrastructure to run AI workloads of any size efficiently at global scale.
This includes a look at:

- How Microsoft designs its AI systems to take a modular, scalable approach to running a diverse set of hardware, including the latest GPUs from industry leaders as well as Microsoft’s own silicon innovations.
- Its work to develop a common interoperability layer for GPUs and AI accelerators (see the kernel sketch after this list).
- Its work to develop its own state-of-the-art, AI-optimized hardware and software architecture to run its own commercial services, such as Microsoft Copilot.
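For a feel of what a hardware-portable kernel layer looks like in practice, below is a minimal sketch in Triton, the open-source Python-embedded kernel language that is one common choice for writing GPU code once and compiling it for different accelerator backends. It is an illustrative example of the general technique, not Microsoft's internal implementation.

```python
# Illustrative Triton kernel: a single source that the Triton compiler lowers
# per target, which is the kind of portability a common interoperability
# layer provides. Not Microsoft's internal stack.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    # Launch enough program instances to cover the whole vector.
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

if __name__ == "__main__":
    a = torch.rand(4096, device="cuda")
    b = torch.rand(4096, device="cuda")
    assert torch.allclose(add(a, b), a + b)
```

Because the kernel is expressed at this level of abstraction rather than in a vendor-specific dialect, the same source can be compiled by different backends, which is the essence of an interoperability layer across GPUs and AI accelerators.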