NVIDIA CEO Jensen Huang made a string of announcements during his Computex keynote, including details about the company’s next DGX supercomputer. Given where the industry is clearly heading, it shouldn’t come as a surprise that the DGX GH200 is largely about helping companies develop generative AI models.
The supercomputer uses a new NVLink Switch System to enable 256 GH200 Grace Hopper superchips to act as a single GPU (each of the chips has an Arm-based Grace CPU and an H100 Tensor Core GPU). This, according to NVIDIA, allows the DGX GH200 to deliver 1 exaflop of performance and to have 144 terabytes of shared memory. The company says that's nearly 500 times as much memory as you'd find in a single DGX A100 system.
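To put those figures in context, here is a minimal back-of-the-envelope sketch in Python. The per-superchip memory numbers (480GB of LPDDR5X on the Grace CPU plus 96GB of HBM3 on the Hopper GPU) and the 320GB figure for an original DGX A100 are assumptions drawn from NVIDIA's published spec sheets, not from this article.

```python
# Illustrative arithmetic behind NVIDIA's DGX GH200 memory claims.
# Per-chip figures below are assumed from public GH200 / DGX A100 specs,
# not stated in the article itself.

NUM_SUPERCHIPS = 256
CPU_MEM_GB = 480        # Grace CPU LPDDR5X per superchip (assumed)
GPU_MEM_GB = 96         # Hopper GPU HBM3 per superchip (assumed)
DGX_A100_MEM_GB = 320   # GPU memory in an 8x A100 40GB DGX A100 (assumed)

shared_mem_tb = NUM_SUPERCHIPS * (CPU_MEM_GB + GPU_MEM_GB) / 1024
ratio = NUM_SUPERCHIPS * (CPU_MEM_GB + GPU_MEM_GB) / DGX_A100_MEM_GB

print(f"Shared memory: ~{shared_mem_tb:.0f} TB")   # ~144 TB
print(f"vs. DGX A100:  ~{ratio:.0f}x")             # ~460x, i.e. "nearly 500 times"
```

Under those assumptions, 256 superchips at 576GB each works out to roughly 144 terabytes of pooled memory, which lines up with NVIDIA's stated figure and its "nearly 500 times" comparison.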
For comparison, the latest ranking of the Top500 supercomputers lists Frontier at Oak Ridge National Laboratory in Tennessee as the only known exascale system, having reached a performance of nearly 1.2 exaflops on the Linpack benchmark. That's over twice the peak performance of the second-placed system, Japan's Fugaku.
In effect, NVIDIA claims to have developed a supercomputer that can stand alongside the most powerful known system on the planet (Meta is building one that it claims will be the fastest AI supercomputer in the world once it’s fully built out). NVIDIA says the architecture of the DGX GH200 offers 10 times more bandwidth than the previous generation, "delivering the power of a massive AI supercomputer with the simplicity of programming a single GPU."
Some big names are interested in the DGX GH200. Google Cloud, Meta and Microsoft should be among the first companies to gain access to the supercomputer to test how it can handle generative AI workloads. NVIDIA says DGX GH200 supercomputers should be available by the end of 2023.
The company is also building its own supercomputer, Helios, which combines four DGX GH200 systems. NVIDIA expects Helios to be online by the end of the year.
Huang discussed other generative AI developments during his keynote, including one on the gaming front. NVIDIA Avatar Cloud Engine (ACE) for Games is a service developers will be able to use to build custom AI models for speech, conversation and animation. NVIDIA says ACE for Games can "give non-playable characters conversational skills so they can respond to questions with lifelike personalities that evolve."
This article originally appeared on Engadget at https://ift.tt/g7X6P8e