The system is used to train OpenAI and Microsoft artificial intelligence models. The main goal of the system is to drive a new era of human-like AI solutions. Hosted in Azure, the supercomputer is capable of at least 38,745 teraflops and as many as 100,000 teraflops. Even the lower figure would place the system in the top five of the TOP500 supercomputer list.

While the system itself is interesting, what happened afterwards is also worth discussing. As VentureBeat points out, Microsoft CTO Kevin Scott returned on Day 2 of Build 2020 to highlight why this is an important step. It's the speed of the system that Scott and technical advisor Luis Vargas wanted to emphasize. Microsoft and OpenAI's supercomputer could return answers rapidly, even throwing in some pop culture references (Star Wars, anyone?).

Later in the demonstration, OpenAI CEO Sam Altman joined Scott to show how the system can even write code from plain-English instructions. While it is worth noting that this was a demo with prepared scenarios, the results were nonetheless impressive.
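The demo itself has not been released, but as a hypothetical illustration of the English-to-code task shown on stage, a prompt and a plausible generated function might look like the sketch below. The instruction and the output are invented for illustration; they are not taken from the demo.

```python
# Hypothetical example of the English-to-code task shown in the demo.
# The instruction below is invented for illustration; the model's job
# is to turn it into working code.

instruction = "Write a function that returns the n largest numbers in a list."

# A plausible completion a model might generate for that instruction:
def n_largest(numbers, n):
    """Return the n largest values in `numbers`, in descending order."""
    return sorted(numbers, reverse=True)[:n]

print(n_largest([3, 41, 7, 19, 5], 2))  # [41, 19]
```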

The Supercomputer

Under the hood are 285,000 CPU cores, making the system hugely powerful while using fewer cores than any other supercomputer in the top-five ranking. That's the AI at scale Microsoft has been eager to talk about.

Elsewhere, the system packs 10,000 GPUs, with 400 gigabits per second of network bandwidth for each GPU server. As Scott put it: “This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now.”
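For a rough sense of scale, here is a back-of-envelope sketch using only the figures quoted above. The per-GPU numbers are simple divisions for illustration, not official per-component specifications.

```python
# Back-of-envelope arithmetic from the figures quoted in this article.
# The derived per-GPU values are illustrative, not official specs.

total_teraflops_min = 38_745   # lower bound quoted for the system
total_teraflops_max = 100_000  # upper bound quoted for the system
gpu_count = 10_000
cpu_cores = 285_000

petaflops_min = total_teraflops_min / 1_000  # ~38.7 petaflops
petaflops_max = total_teraflops_max / 1_000  # 100 petaflops

teraflops_per_gpu_min = total_teraflops_min / gpu_count  # ~3.9 teraflops per GPU
teraflops_per_gpu_max = total_teraflops_max / gpu_count  # 10 teraflops per GPU

print(f"System performance: {petaflops_min:.1f} to {petaflops_max:.0f} petaflops")
print(f"Implied per-GPU share: {teraflops_per_gpu_min:.1f} to {teraflops_per_gpu_max:.0f} teraflops")
```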
