From the Blackwell platform to a quantum-computer simulation microservice: Key announcements from NVIDIA's GTC

NVIDIA's GPU Technology Conference (GTC) is here! 

The company's founder and CEO Jensen Huang officially kicked off the annual conference, saying, "I hope you realise this is not a concert, this is a developers' conference," as he took the stage in a crowded arena usually reserved for concerts and ice hockey games.

During the keynote address, he unveiled various technologies spanning from the Blackwell platform to the foundation model for humanoid robots and much more. 

Here are some of the key announcements from GTC made by the company on the first day!

NVIDIA Blackwell platform is here! 

At GTC, NVIDIA announced the much-hyped NVIDIA Blackwell platform, which enables organisations to build and run real-time generative AI on trillion-parameter large language models at up to 25x lower cost and energy consumption than its predecessor.

It is built with a custom 4NP TSMC process and a chip-to-chip link that connects two GPU dies, packing 208 billion transistors into a single GPU.

The Blackwell GPU architecture features six technologies that will accelerate data processing, engineering simulation, electronic design automation, drug design, quantum computing, and generative AI.

Among the many organisations expected to adopt Blackwell are Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI.

Blackwell-Powered AI supercomputer 

NVIDIA has announced the NVIDIA DGX SuperPOD, a next-generation AI supercomputer powered by NVIDIA’s Grace Blackwell Superchips. 

The supercomputer is designed for processing trillion-parameter models with constant uptime for superscale generative AI training and inference workloads.

The new DGX SuperPOD features a liquid-cooled rack-scale architecture, providing 11.5 exaflops of supercomputing power at FP4 precision and 240 terabytes of fast memory. The system is built with NVIDIA DGX systems and can scale to even greater heights with additional racks.
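The exaflops figure above is quoted at FP4, a 4-bit floating-point format that trades precision for throughput. As a rough illustration, assuming the commonly described E2M1 layout (1 sign, 2 exponent, 1 mantissa bit, which yields only eight magnitudes), a minimal sketch of rounding values onto an FP4 grid might look like this:

```python
# Hedged sketch: the E2M1 grid below is the commonly cited set of FP4
# magnitudes, not an official NVIDIA specification of Blackwell's format.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 (E2M1) value."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 6.0)  # clamp to the largest FP4 magnitude
    nearest = min(FP4_GRID, key=lambda g: abs(g - mag))
    return sign * nearest

print([quantize_fp4(v) for v in [0.3, 0.8, 2.4, -5.0, 100.0]])
# → [0.5, 1.0, 2.0, -4.0, 6.0]
```

With so few representable values, FP4 sacrifices accuracy for memory and compute savings, which is why such low precision is aimed at inference rather than training.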

Each DGX GB200 system consists of 36 NVIDIA GB200 Grace Blackwell Superchips, which include 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs. These chips are interconnected into a single supercomputer via fifth-generation NVIDIA NVLink technology. 

The GB200 Superchips deliver up to a 30x performance boost over the NVIDIA H100 Tensor Core GPU for large language model inference workloads.

6G Research Cloud Platform 

NVIDIA unveiled a 6G research platform that enables researchers to develop the next phase of wireless technology.

The NVIDIA 6G Research Cloud platform is open, flexible, and interconnected, offering researchers a comprehensive suite to advance AI for radio access network (RAN) technology. 

The platform allows organisations to accelerate the development of 6G technologies that will connect trillions of devices to cloud infrastructure, laying the foundation for a hyper-intelligent world supported by autonomous vehicles, smart spaces, extended reality and immersive education experiences, and collaborative robots.

The NVIDIA 6G Research Cloud platform consists of three foundational elements:

  • NVIDIA Aerial Omniverse Digital Twin for 6G
  • NVIDIA Aerial CUDA-Accelerated RAN
  • NVIDIA Sionna Neural Radio Framework

Ansys, Arm, ETH Zurich, Fujitsu, Keysight, Nokia, Northeastern University, Rohde & Schwarz, Samsung, SoftBank Corp., and Viavi are among its first adopters and ecosystem partners.

Google x NVIDIA partnership

Google Cloud and NVIDIA have teamed up to offer the machine learning community access to technology that accelerates their ability to easily build, scale, and manage generative AI applications. 

Google announced its adoption of the new NVIDIA Grace Blackwell AI computing platform, as well as the NVIDIA DGX Cloud service on Google Cloud. 

Additionally, the NVIDIA H100-powered DGX Cloud platform is now generally available on Google Cloud. 

Building on their recent collaboration to optimize the Gemma family of open models, Google will also adopt NVIDIA NIM inference microservices to provide developers with an open, flexible platform to train and deploy using their preferred tools and frameworks. 

The companies also announced support for JAX on NVIDIA GPUs and Vertex AI instances powered by NVIDIA H100 and L4 Tensor Core GPUs. 

Key components of the partnership expansion include:   

  • Adoption of NVIDIA Grace Blackwell
  • Grace Blackwell-powered DGX Cloud coming to Google Cloud
  • Support for JAX on GPUs
  • NVIDIA NIM on Google Kubernetes Engine (GKE)
  • Support for NVIDIA NeMo
  • Vertex AI and Dataflow expand support for NVIDIA GPUs

Project GR00T Foundation Model

NVIDIA has announced Project GR00T, which is a general-purpose foundation model intended for humanoid robots. The project is aimed at driving breakthroughs in robotics and embodied AI. 

As part of this initiative, NVIDIA has also introduced a new computer, Jetson Thor, which is specifically designed for humanoid robots and is based on the NVIDIA Thor system-on-a-chip (SoC). 

Jetson Thor was created as a new computing platform capable of performing complex tasks and interacting safely and naturally with people and machines. 

The SoC includes a new GPU based on the NVIDIA Blackwell architecture. It comes with a transformer engine that delivers 800 teraflops of 8-bit floating-point AI performance and can handle multimodal generative AI models such as GR00T. 

It simplifies design and integration efforts with an integrated functional safety processor, a high-performance CPU cluster, and 100GB of Ethernet bandwidth.

NVIDIA is building a comprehensive AI platform for leading humanoid robot companies such as 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics and XPENG Robotics, among others.

Furthermore, the company has made significant upgrades to the NVIDIA Isaac robotics platform, which now includes generative AI foundation models and tools for simulation and AI workflow infrastructure.

Robots powered by GR00T, which stands for Generalist Robot 00 Technology, will be designed to understand natural language and emulate movements by observing human actions — quickly learning coordination, dexterity, and other skills to navigate, adapt, and interact with the real world.

NVIDIA DRIVE, powering the next generation of transportation

NVIDIA has announced that several top companies in the transportation industry have started using NVIDIA DRIVE Thor centralised car computers to power their upcoming consumer and commercial fleets. 

These fleets include new energy vehicles and trucks, as well as autonomous vehicles such as robotaxis, robobuses, and last-mile delivery vehicles.

DRIVE Thor is an in-vehicle computing platform architected for generative AI applications, which are becoming paramount within the automotive industry. This next-generation AV platform will integrate the new NVIDIA Blackwell architecture, designed for transformer, LLM, and generative AI workloads. 

Generative AI Microservices to Advance Drug Discovery, MedTech, and Digital Health

The company also launched over two dozen new microservices that allow healthcare enterprises worldwide to take advantage of the latest advances in generative AI from anywhere and on any cloud.

The new healthcare microservices include optimized NVIDIA NIM AI models and workflows, equipped with industry-standard APIs that can be used to create and deploy cloud-native applications. 

The NVIDIA healthcare microservices offer advanced features such as imaging, natural language and speech recognition, and digital biology generation, prediction, and simulation. These features can be used as building blocks to develop and deploy cloud-based applications for the healthcare industry.
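Because the microservices expose industry-standard (OpenAI-style) HTTP APIs, calling one is ordinary REST work. The model name and endpoint path below are illustrative placeholders, not documented values; this sketch only constructs the request body:

```python
import json

def build_chat_request(model: str, prompt: str) -> str:
    """Build an OpenAI-style chat-completion request body for a NIM-like endpoint."""
    payload = {
        "model": model,  # placeholder model identifier, not a real NIM model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return json.dumps(payload)

# The body would be POSTed to the service (e.g. a /v1/chat/completions route)
# with any HTTP client; response handling is omitted here.
body = build_chat_request("example/clinical-llm", "Summarise this discharge note.")
```

Keeping to a widely adopted request schema is what lets developers swap these microservices into existing tooling with minimal glue code.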

Additionally, NVIDIA accelerated software development kits and tools, including Parabricks, MONAI, NeMo, Riva, and Metropolis, can now be accessed as NVIDIA CUDA-X microservices to accelerate healthcare workflows for drug discovery, medical imaging, and genomics analysis.

In total, 25 microservices were launched to accelerate the transformation of healthcare companies through the use of generative AI. 

This new technology offers numerous opportunities for pharmaceutical companies, doctors, and hospitals, such as screening trillions of drug compounds for medical advancements, gathering better patient data for early disease detection, and implementing smarter digital assistants.

With the microservices, researchers, developers, and practitioners can easily integrate AI into new and existing applications, and run them anywhere, either in the cloud or on-premises, with copilot capabilities to enhance their life-saving work.

Earth Climate Digital Twin

NVIDIA has introduced its Earth-2 climate digital twin cloud platform, which aims to help counter the estimated $140 billion in economic losses caused by extreme weather linked to climate change. 

The Earth-2 cloud platform features new cloud APIs on NVIDIA DGX Cloud that enable users to create AI-powered emulations and deliver interactive, high-resolution simulations of weather and climate at unprecedented scale, ranging from the global atmosphere to local cloud cover. 

When combined with proprietary data owned by companies in the $20 billion climate tech industry, Earth-2 APIs can help users deliver warnings and updated forecasts in seconds, compared to the traditional CPU-driven modeling that may take minutes or hours.

Networking switches designed for massive-scale AI

NVIDIA also announced a new wave of networking switches — the X800 series.

The world's first networking platforms capable of end-to-end 800Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. 

They feature software that further accelerates AI, cloud, data processing, and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

Compared with the previous generation, this represents 5x higher bandwidth capacity and a 9x increase, to 14.4 Tflops, in in-network computing with NVIDIA's Scalable Hierarchical Aggregation and Reduction Protocol (SHARPv4).
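The idea behind SHARP is to perform reduction operations (such as summing gradients during an allreduce) inside the switch hierarchy itself, so each endpoint sends its data once and receives a single aggregated result. A simplified, hypothetical sketch of that pattern, not NVIDIA's implementation:

```python
# Conceptual sketch of SHARP-style in-network reduction: values are summed
# level by level up a switch tree, then the total is broadcast back down.
def tree_allreduce(node_values, fanout=2):
    """Reduce values up a tree of 'switches', then give every node the sum."""
    level = list(node_values)
    while len(level) > 1:
        # each switch aggregates up to `fanout` children into one partial sum
        level = [sum(level[i:i + fanout]) for i in range(0, len(level), fanout)]
    total = level[0]
    return [total] * len(node_values)  # broadcast the reduced result

print(tree_allreduce([1, 2, 3, 4]))  # → [10, 10, 10, 10]
```

Aggregating in the network means gradient traffic crosses each link once instead of bouncing between GPUs, which is where the in-network computing Tflops figure comes from.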

The Spectrum-X800 platform delivers optimized networking performance for AI cloud and enterprise infrastructure. Utilizing the Spectrum SN5600 800Gb/s switch and the NVIDIA BlueField-3 SuperNIC, the Spectrum-X800 platform provides advanced feature sets crucial for multi-tenant generative AI clouds and large enterprises.

Cloud Quantum-Computer Simulation Microservices

NVIDIA also launched a cloud service that allows researchers and developers to push the boundaries of quantum computing exploration in key scientific domains, including chemistry, biology, and materials science.

NVIDIA Quantum Cloud is based on the company’s open-source CUDA-Q quantum computing platform, which is used by three-quarters of the companies deploying quantum processing units, or QPUs. 

Delivered as a microservice, it lets users, for the first time, build and test new quantum algorithms and applications in the cloud, including with powerful simulators and tools for hybrid quantum-classical programming.
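To illustrate the kind of computation such a quantum simulator performs, here is a plain NumPy statevector simulation of a two-qubit Bell state. This is a generic sketch of statevector simulation, not the CUDA-Q API:

```python
# Statevector simulation of a 2-qubit Bell state with NumPy.
# Basis ordering is |q0 q1>: indices 0..3 correspond to 00, 01, 10, 11.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # control on qubit 0, target qubit 1

state = np.zeros(4)
state[0] = 1.0                   # start in |00>
state = np.kron(H, I) @ state    # apply H to qubit 0
state = CNOT @ state             # entangle the pair
probs = np.abs(state) ** 2
print(probs)                     # → [0.5 0.  0.  0.5]
```

Measuring this state yields 00 or 11 with equal probability, the hallmark of entanglement; GPU-accelerated simulators run the same linear algebra over exponentially larger statevectors.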

Vigneshwar Ravichandran

Vigneshwar has been a News Reporter at Silicon Canals since 2018. A seasoned technology journalist with almost a decade of experience, he covers the European startup ecosystem, from AI and Web3 to clean energy and health tech. Previously, he was a content producer and consumer product reviewer for leading Indian digital media, including NDTV, GizBot, and FoneArena. He graduated with a Bachelor's degree in Electronics and Instrumentation in Chennai and a Diploma in Broadcasting Journalism in New Delhi.
