At its annual developers conference, NVIDIA made a raft of new announcements, from unveiling its Blackwell chips to launching Project GR00T and striking major collaborations with industry heavyweights.
Here is a rundown of the products unveiled and projects launched:
Blackwell
The Blackwell platform will enable organisations to build and run real-time generative AI on large language models at up to 25 times lower cost and energy consumption than its predecessor. The Blackwell architecture features what NVIDIA calls the world's most powerful chip, the GB200 Grace Blackwell Superchip, along with a second-generation Transformer Engine, fifth-generation NVLink, a RAS engine, secure AI capabilities, and a decompression engine. Together, these six technologies enable AI training and real-time inference for models scaling up to 10 trillion parameters. The GB200 is claimed to deliver a 30x performance increase over its predecessor.
AI Supercomputer: DGX SuperPOD
DGX SuperPOD is an AI supercomputer powered by GB200 Grace Blackwell Superchips. It is designed for processing trillion-parameter models, offers constant uptime for superscale generative AI training and inference workloads, and is claimed to be 4x faster than its previous version.
X800 Networking Switches
These are claimed to be the world's first networking platforms capable of end-to-end throughput of 800Gb/s. Throughput is the amount of data a network can move per unit of time, which means these switches are designed to handle the massive data flows of large AI workloads.
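To put that 800Gb/s figure in perspective, here is a minimal back-of-the-envelope sketch of how throughput translates into transfer time. The function and the checkpoint size are illustrative assumptions, not NVIDIA specifications:

```python
# Illustrative arithmetic only: how long moving a dataset takes at a
# given link throughput. The figures below are hypothetical examples.

def transfer_time_seconds(data_gigabytes: float, link_gbps: float) -> float:
    """Time to move `data_gigabytes` (GB) over a link of `link_gbps` (Gb/s)."""
    data_gigabits = data_gigabytes * 8  # 1 byte = 8 bits
    return data_gigabits / link_gbps

# e.g. a hypothetical 1 TB (1000 GB) model checkpoint:
print(transfer_time_seconds(1000, 800))  # 10.0 seconds at 800 Gb/s
print(transfer_time_seconds(1000, 400))  # 20.0 seconds at 400 Gb/s
```

In other words, doubling link throughput halves the ideal-case transfer time, which is why interconnect bandwidth matters so much for multi-node AI training.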
I am GR00T
The company launched Project GR00T, a foundation model for humanoid robots aimed at advancing robotics and embodied AI. Alongside this, it introduced Jetson Thor, a new computer for humanoid robots based on the NVIDIA Thor system-on-a-chip (SoC).
Project GR00T (Generalist Robot 00 Technology) aims to enable robots to understand natural language and mimic human movements, learning skills such as coordination and dexterity to navigate and interact with the real world.
6G Research Cloud Platform
NVIDIA announced its 6G Research Cloud platform, a comprehensive suite designed to advance AI for radio access network (RAN) technology. 6G networks are expected to launch commercially around 2030.
The platform includes the Aerial Omniverse Digital Twin for 6G, a reference application that enables physically accurate simulations of complete 6G systems, from a single tower to city scale. It incorporates software-defined RAN and user-equipment simulators along with realistic terrain and object properties, allowing researchers to simulate and build base-station algorithms on site-specific data and to train models in real time to improve transmission efficiency. The platform also provides a framework that integrates with popular machine learning libraries such as PyTorch and TensorFlow for generating and capturing data and for training AI and machine learning models at scale. It also includes Sionna, a leading link-level research tool for AI/ML-based wireless simulations.
Apart from product and project launches, the company extended existing collaborations and announced major new collaborations as well.
NVIDIA is expanding its collaborations with Chinese automakers, including BYD, Xpeng, and GAC Aion, to build self-driving vehicles and AI-augmented infotainment technology. BYD will use NVIDIA's DRIVE Thor chips for autonomous driving and other digital functions. The collaborations aim to help Chinese auto brands compete globally and expand sales outside China.
TSMC and Synopsys have integrated NVIDIA's computational lithography platform, cuLitho, with their software and manufacturing processes, and are going into production with it to accelerate the fabrication of advanced semiconductor chips, including future Blackwell GPUs. NVIDIA has also introduced new generative AI algorithms that enhance cuLitho: they help create a near-perfect inverse mask for optical proximity correction, doubling the speed of that step.
Japan’s new ABCI-Q supercomputer will be powered by NVIDIA. The supercomputer is designed for high-fidelity quantum simulations across various industries and is integrated with NVIDIA CUDA-Q, an open-source hybrid quantum computing platform, used by major quantum processing unit (QPU) deployers.
Microsoft Corp. and NVIDIA have also expanded their collaboration with new integrations of NVIDIA's generative AI and Omniverse technologies across various Microsoft platforms, including Azure, Azure AI services, Microsoft Fabric, and Microsoft 365. The collaboration includes bringing the Grace Blackwell processor to Azure and integrating DGX Cloud with Microsoft Fabric. NVIDIA GPUs and the Triton Inference Server will also power AI inference predictions in Microsoft Copilot for Microsoft 365.