Let’s take a look at the key announcements made during the keynote speech:
AI Acceleration With NVIDIA Ampere
“The powerful trends of cloud computing and AI are driving a tectonic shift in data centre designs with GPU-accelerated computing,” said Jensen Huang, CEO of NVIDIA.
With more than 54 billion transistors, the NVIDIA A100 is the world’s largest 7-nanometer processor. It is the first GPU based on the NVIDIA Ampere architecture, delivering the company’s greatest generational performance leap across its eight generations of GPUs.
Eighteen of the world’s leading cloud service providers and systems builders, including Alibaba Cloud, Amazon Web Services, Cisco, Dell Technologies, Google Cloud, Hewlett Packard Enterprise, Microsoft Azure and Oracle, are adopting it.
NVIDIA is also shipping the third generation of its DGX AI system. The DGX A100 is the world’s first 5-petaflops server, and each one can be partitioned into as many as 56 independent GPU instances, each running its own application.
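The 56-instance figure follows from simple arithmetic: a DGX A100 contains 8 A100 GPUs, and NVIDIA’s Multi-Instance GPU (MIG) feature can split each A100 into up to 7 isolated instances. A minimal sketch of that bound:

```python
# Illustrative arithmetic for DGX A100 partitioning.
# A DGX A100 packs 8 A100 GPUs; MIG splits each A100 into up to
# 7 isolated GPU instances, so one system offers 8 x 7 = 56 instances.
GPUS_PER_DGX_A100 = 8
MAX_MIG_INSTANCES_PER_A100 = 7

def max_instances(num_gpus: int = GPUS_PER_DGX_A100,
                  per_gpu: int = MAX_MIG_INSTANCES_PER_A100) -> int:
    """Upper bound on independent GPU instances in one system."""
    return num_gpus * per_gpu

print(max_instances())  # 56
```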
The latest generation of deep-learning powered recommender systems enables companies to better target users.
NVIDIA’s Merlin recommender application framework promises to make GPU-accelerated recommender systems more accessible with an end-to-end pipeline for deploying AI models.
NVIDIA GPUs have long been used to accelerate training time for neural networks — sparking the modern AI boom — since their parallel processing capabilities let them blast through data-intensive tasks.
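The principle behind that speed-up can be sketched on the CPU with NumPy (illustrative only; real GPU training runs CUDA kernels on dedicated hardware): applying one operation across many data elements at once, instead of looping element by element, is what makes parallel processors fast on data-intensive tasks.

```python
import time
import numpy as np

# One million elements to transform.
x = np.random.rand(1_000_000).astype(np.float32)

# Element-wise Python loop: one multiply at a time.
t0 = time.perf_counter()
loop_result = [v * 2.0 for v in x]
loop_time = time.perf_counter() - t0

# Vectorized: the whole array handled by one parallel-friendly kernel.
t0 = time.perf_counter()
vec_result = x * 2.0
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.4f}s, vectorized: {vec_time:.4f}s")
```

The two paths produce the same numbers; only the execution model differs, and the gap widens further on GPUs, which run thousands of such operations concurrently.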
These systems will be able to take advantage of the new NVIDIA A100 GPU, built on the NVIDIA Ampere architecture, so companies can build recommender systems more quickly and economically.
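To make the recommender passage above concrete, here is a minimal matrix-factorization sketch of the kind of model such a pipeline trains. This is a generic, from-scratch illustration in NumPy, not Merlin’s API; the ratings matrix, learning rate, and factor dimension are all made-up toy values.

```python
import numpy as np

# Toy user-item ratings; 0 marks an unobserved entry.
ratings = np.array([[5, 3, 0],
                    [4, 0, 0],
                    [0, 1, 5]], dtype=float)
mask = ratings > 0

rng = np.random.default_rng(0)
k = 2                                   # latent-factor dimension (assumed)
U = rng.normal(scale=0.1, size=(3, k))  # user factors
V = rng.normal(scale=0.1, size=(3, k))  # item factors

lr = 0.05
for _ in range(2000):                   # full-batch gradient descent
    err = (ratings - U @ V.T) * mask    # error only on observed entries
    U_grad = err @ V
    V_grad = err.T @ U
    U += lr * U_grad
    V += lr * V_grad

pred = U @ V.T                          # dense predictions, incl. blanks
print(np.round(pred, 1))
```

The zeros in the input get filled with predicted scores, which is exactly the signal a recommender uses to rank unseen items for a user; production frameworks do the same at vastly larger scale on GPUs.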
NVIDIA’s Jarvis, a GPU-accelerated application framework, allows companies to use video and speech data to build state-of-the-art, customised conversational AI services.
“Conversational AI is central to the future of many industries to understand and communicate with nuance and contextual awareness,” said Jensen Huang, founder and CEO of NVIDIA.
Jarvis also takes advantage of the new NVIDIA A100 Tensor Core GPU for AI computing and the latest optimisations in NVIDIA TensorRT for inference. Now, for the first time, it is possible to run an entire multimodal application faster than the 300-millisecond threshold for real-time interactions.
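The 300 ms threshold is a budget shared by every stage of the pipeline. The sketch below illustrates that budgeting; the stage names and millisecond figures are hypothetical assumptions for illustration, not measured Jarvis numbers, and only the 300 ms threshold comes from the article.

```python
# Hypothetical latency budget for a multimodal conversational pipeline.
# Stage names and timings are assumed, illustrative values.
REAL_TIME_THRESHOLD_MS = 300

stage_latency_ms = {
    "speech_recognition": 60,
    "language_understanding": 40,
    "dialog_management": 30,
    "response_generation": 50,
    "text_to_speech": 70,
}

total = sum(stage_latency_ms.values())
verdict = "within" if total <= REAL_TIME_THRESHOLD_MS else "over"
print(f"total: {total} ms ({verdict} the {REAL_TIME_THRESHOLD_MS} ms budget)")
```

Framing latency this way shows why per-stage inference optimisations (such as those in TensorRT) matter: shaving tens of milliseconds off any one stage frees budget for the others.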
The BMW Group has selected the NVIDIA Isaac robotics platform to revamp logistics in its automotive factories, using robots built on advanced AI computing and visualisation technologies.
“BMW Group’s use of NVIDIA’s Isaac robotics platform to reimagine their factory is revolutionary,” said Huang.
The BMW Group aims to make the flow of logistics through its factories more efficient. Once developed, NVIDIA’s system will be deployed at BMW Group factories worldwide.
BMW will also be using the following:
- DGX AI systems, along with Isaac simulation technology, to train and test the robots;
- NVIDIA Quadro GPUs to render synthetic machine parts to enhance the training of AI-enabled robots built on the Isaac SDK, powered by NVIDIA Jetson and EGX edge computers.
“Autonomous vehicles are one of the biggest computing challenges of our time, an area where NVIDIA continues to push forward with NVIDIA DRIVE,” said Huang.
NVIDIA DRIVE will use the new Orin System-on-Chip, which embeds an Ampere GPU, to offer everything from a 5-watt ADAS system up to a 2,000-TOPS, level-5 robotaxi system.
This simplifies automakers’ work, as they now need only a single computing architecture and software stack on which to build their AI.
“NVIDIA accelerated computing to save lives is the perfect example of our organisation’s purpose, and we build computers to solve problems normal computers cannot,” Huang said.