GTC Keynote 2020: NVIDIA reveals A100 AI chip with 54 billion transistors and 5 petaflops of performance

NVIDIA recently revealed its vision for next-generation computing, one that shifts the focus of the global information economy from servers to a new class of powerful, flexible data centers.

In the keynote, NVIDIA founder and CEO Jensen Huang discussed the company's recent Mellanox acquisition, new products based on its much-awaited NVIDIA Ampere GPU architecture, and important new software technologies.

Huang said,

“The data center is the new computing unit.” 

NVIDIA A100 – Based on Ampere architecture!

First off, Huang unveiled the NVIDIA A100, the first GPU based on the NVIDIA Ampere architecture, which delivers the greatest generational performance leap across NVIDIA's eight generations of GPUs. Huang added that the chip is built for data analytics, scientific computing, and cloud graphics, and is already in full production and shipping to customers worldwide.

Notably, eighteen of the world's leading service providers and systems builders are incorporating the A100 into their offerings, including Alibaba Cloud, Amazon Web Services, Baidu Cloud, Cisco, Dell Technologies, Google Cloud, Hewlett Packard Enterprise, Microsoft Azure, and Oracle.

The A100, and the NVIDIA Ampere architecture it is built on, boost performance by up to 20x over the previous generation. With more than 54 billion transistors, the A100 is also the world's largest 7-nanometer processor.

Beyond that, it features third-generation Tensor Cores with the TF32 format (which accelerates single-precision AI training out of the box), structural sparsity acceleration, Multi-Instance GPU (MIG), and third-generation NVLink technology.
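
TF32 is notable because it requires no code changes: frameworks simply route standard FP32 math through the new Tensor Cores. As a rough illustration, here is a minimal sketch assuming PyTorch 1.7 or later on an Ampere-class GPU; the flags and tensor sizes are illustrative, not taken from the keynote.

```python
# Minimal sketch, assuming PyTorch >= 1.7 and an Ampere-class GPU such as the A100.
# TF32 keeps tensors and model code in FP32; the Tensor Cores execute FP32
# matmuls and convolutions with TF32 math, which is why training speeds up
# "out of the box" without switching to mixed precision.
import torch

# Route FP32 matrix multiplies and cuDNN convolutions through TF32 Tensor Cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)  # plain FP32 tensors, no code changes
b = torch.randn(4096, 4096, device=device)
c = a @ b                                   # executed with TF32 math on Ampere
print(c.dtype)                              # still torch.float32
```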

NVIDIA DGX A100 

NVIDIA also unveiled the third generation of its DGX AI system, the NVIDIA DGX A100, which is built on the A100 and is the world's first 5-petaflops server. Each DGX A100 can be partitioned into as many as 56 independent instances, all running their own applications.
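
The 56 figure follows from the Multi-Instance GPU feature mentioned above: a DGX A100 contains eight A100 GPUs, and MIG can split each of them into up to seven isolated GPU instances. A back-of-the-envelope sketch, with the constants taken from NVIDIA's public specifications rather than from the keynote text:

```python
# Back-of-the-envelope sketch of where the "56 independent instances" figure
# comes from. Assumes the publicly documented DGX A100 configuration:
# 8 A100 GPUs per system, up to 7 MIG instances per A100.
GPUS_PER_DGX_A100 = 8
MIG_INSTANCES_PER_A100 = 7

total_instances = GPUS_PER_DGX_A100 * MIG_INSTANCES_PER_A100
print(f"Max isolated GPU instances per DGX A100: {total_instances}")  # -> 56
```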

Initial recipients of the system include the U.S. Department of Energy's Argonne National Laboratory, which will use the cluster's AI and computing power to better understand and fight COVID-19; the University of Florida; and the German Research Center for Artificial Intelligence.

Huang explained:  

“A data center powered by five DGX A100 systems for AI training and inference running on just 28 kilowatts of power costing $1 million can do the work of a typical data center with 50 DGX-1 systems for AI training and 600 CPU systems consuming 630 kilowatts and costing over $11 million.”
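
Taking the figures in the quote at face value, the claimed savings work out to roughly 22x on power and 11x on cost. A quick sketch of that arithmetic; all numbers come from the quote itself and nothing is independently sourced:

```python
# Quick arithmetic check of the comparison in Huang's quote; all figures are
# taken at face value from the keynote, with no independent pricing/power data.
dgx_a100_cluster = {"power_kw": 28,  "cost_musd": 1.0}   # 5x DGX A100
legacy_cluster   = {"power_kw": 630, "cost_musd": 11.0}  # 50x DGX-1 + 600 CPU systems

power_ratio = legacy_cluster["power_kw"] / dgx_a100_cluster["power_kw"]
cost_ratio = legacy_cluster["cost_musd"] / dgx_a100_cluster["cost_musd"]
print(f"Power reduction: ~{power_ratio:.1f}x")  # ~22.5x
print(f"Cost reduction:  ~{cost_ratio:.1f}x")   # ~11.0x
```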

In addition, Huang announced the next-generation DGX SuperPOD. Powered by 140 DGX A100 systems and Mellanox networking technology, it offers 700 petaflops of AI performance.

NVIDIA is expanding its own data center with four DGX SuperPODs, adding 2.8 exaflops of AI computing power to its internal SATURNV supercomputer for a total capacity of 4.6 exaflops, which makes it the world's fastest AI supercomputer.

NVIDIA DRIVE

In an attempt to push autonomous vehicles forward, the company announced that its NVIDIA DRIVE platform will use the new Orin SoC with an embedded NVIDIA Ampere GPU to achieve the energy efficiency and performance needed to offer a 5-watt ADAS system for the front windshield as well as to scale up to a 2,000 TOPS, level-5 robotaxi system.

The NVIDIA DRIVE ecosystem now encompasses cars, trucks, tier-one automotive suppliers, next-generation mobility services, startups, mapping services, and simulation.

Huang said:

“It’s now possible for a carmaker to develop an entire fleet of cars with one architecture, leveraging the software development across their whole fleet.”

Main image credits: NVIDIA

Stay tuned to Silicon Canals for more European technology news
