SAN JOSE, Calif. - Supermicro, Inc. (NASDAQ: SMCI) has announced a new lineup of AI systems designed to utilize NVIDIA's (NASDAQ: NVDA) latest data center products, including the NVIDIA GB200 Grace Blackwell Superchip and the B200 and B100 Tensor Core GPUs. The company is updating its existing 8-GPU systems to support the new NVIDIA HGX B100 8-GPU and HGX B200 8-GPU configurations, promising quicker delivery times for customers.
The new offerings aim to meet the growing demand for large-scale generative AI, with Supermicro's building-block architecture and rack-scale Total IT solutions at the core of their design. The systems are engineered to accommodate NVIDIA Blackwell architecture-based GPUs, with options for both air-cooled and liquid-cooled setups. Supermicro's direct-to-chip liquid cooling technology is expected to effectively manage the increased thermal design power of the latest GPUs.
Supermicro's systems are optimized for the NVIDIA Blackwell GPUs and are intended to serve as foundational building blocks for future AI infrastructure, offering significant performance improvements for AI training and real-time AI inference. The company expects to be among the first to market with NVIDIA HGX B200 8-GPU and HGX B100 8-GPU systems, which are anticipated to deliver training results for large language models three times faster than the previous-generation NVIDIA Hopper architecture.
For AI inference workloads, Supermicro is introducing new MGX systems with the NVIDIA GB200, which is projected to achieve up to 30 times faster performance than the NVIDIA HGX H100. The new suite of products also includes the NVIDIA GB200 NVL72, a rack-scale solution connecting 72 Blackwell GPUs.
Additionally, Supermicro's systems will support the upcoming NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms, optimized for the Blackwell architecture to offer advanced networking performance for AI infrastructures.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.