Touted as the “world’s most powerful AI chip,” Nvidia’s newly unveiled Blackwell B200 GPU is the latest example of how the company is pushing the boundaries of AI computing. Designed to make trillion-parameter AI models accessible to more businesses, Nvidia’s new GPU is poised to change AI as we know it.
The Blackwell platform is remarkable for its ability to run large language models (LLMs) far more efficiently, with up to 25x lower cost and energy consumption. Its ground-breaking GPU architecture introduces six new technologies that accelerate computation across domains such as data processing, engineering simulation, and generative AI.
Much of the credit for this performance goes to the chip’s 208 billion transistors. For perspective, its predecessor, the H100, had only 80 billion. Compared to the H100, Nvidia’s new GPU is up to 25x more cost- and energy-efficient.
The Blackwell chip is built on a custom-developed TSMC 4NP process and offers twice the compute throughput and model size of its predecessor, thanks to enhanced 4-bit floating-point (FP4) AI inference capabilities.
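To see why 4-bit inference helps with trillion-parameter models, it is worth doing the back-of-the-envelope memory math. The sketch below is a simplification that counts only weight storage (one stored value per parameter) and ignores activations, KV caches, and runtime overheads; the function name and figures are illustrative, not from Nvidia.

```python
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate weight-storage footprint in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

ONE_TRILLION = 1e12

# Halving the bits per parameter halves the memory a model's weights occupy:
fp16 = weight_memory_gb(ONE_TRILLION, 16)  # 2000.0 GB
fp8  = weight_memory_gb(ONE_TRILLION, 8)   # 1000.0 GB
fp4  = weight_memory_gb(ONE_TRILLION, 4)   # 500.0 GB

print(fp16, fp8, fp4)
```

Under these assumptions, dropping from 8-bit to 4-bit weights doubles the model size that fits in a fixed memory budget, which is consistent with the "twice the model size" claim above.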
The new Blackwell B200 GPU boasts up to 20 petaflops of FP4 horsepower. The GB200 “superchip,” which pairs two B200 GPUs with a Grace CPU, promises a substantial improvement in energy efficiency and up to a 30x performance increase for LLM inference workloads.
Thanks to these upgrades and the next-gen NVLink switch, which lets up to 576 GPUs communicate with one another, Nvidia has achieved unprecedented levels of AI performance. Rack-scale systems built around the GB200, such as the GB200 NVL72, show how seriously Nvidia takes the goal of advancing AI capabilities.
A single rack housing 72 GPUs and 36 CPUs can deliver 1,440 petaflops of AI inference or 720 petaflops of AI training. Nvidia’s DGX SuperPod with DGX GB200 systems packs an astounding 11.5 exaflops (11,500 petaflops) of FP4 processing capability alongside 240 TB of memory, 288 CPUs, and 576 GPUs.
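The rack-level figures quoted above line up with simple multiplication, assuming throughput scales linearly with GPU count at 20 petaflops of FP4 per B200 (an idealized assumption; real systems lose some efficiency to communication). A quick sanity check:

```python
FP4_PETAFLOPS_PER_B200 = 20  # peak FP4 figure quoted for one B200

# 72-GPU rack (GB200 NVL72): 72 x 20 = 1,440 petaflops of FP4 inference.
nvl72_inference = 72 * FP4_PETAFLOPS_PER_B200

# 576-GPU DGX SuperPod: 576 x 20 = 11,520 petaflops,
# i.e. roughly the 11.5 exaflops Nvidia advertises.
superpod_petaflops = 576 * FP4_PETAFLOPS_PER_B200

print(nvl72_inference)            # 1440
print(superpod_petaflops / 1000)  # 11.52 (exaflops)
```

The 720-petaflop training figure is exactly half the inference number, consistent with training running at a wider precision (half the FP4 throughput per GPU).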
With the DGX SuperPod, which combines eight DGX GB200 systems, Nvidia also offers a one-stop solution for high-performance computing tasks, likely the best option for businesses that want to integrate AI into their regular operations.
Blackwell doesn’t compromise on security, either: it can keep AI models and customer data encrypted without sacrificing performance.
Partners will be able to buy Blackwell-based devices later this year, though Nvidia has not yet made clear which rack configurations will be available. We can also expect Nvidia to bring the Blackwell architecture to its gaming GPUs, potentially the forthcoming RTX 50 series, expected to launch in late 2024 or early 2025.