One of the most-awaited developer conferences, Nvidia GTC, is just around the corner. Scheduled for September 19-22, the event is expected to bring together thousands of innovators, researchers, thought leaders and decision-makers to showcase the latest innovations in AI, gaming, computer graphics, the metaverse and more. The thought leaders include Turing Award winners Yoshua Bengio, Geoff Hinton, Yann LeCun and others.
Nvidia GTC will feature a keynote by Nvidia chief Jensen Huang and host over 200 sessions with global business and technology leaders. Huang's keynote will be live-streamed on Tuesday, September 20, at 8:30 PM IST (8 AM PT).
Nvidia has long been a foundation of technology innovation, modern applications and computing platforms. Since its inception in 1993, the company has dedicated itself to the computing arena: enhancing general-purpose computing, revolutionising the gaming and entertainment industry, pioneering GPU-accelerated computing, and later branching out to scientific computing, artificial intelligence, data platforms and, most recently, the metaverse and quantum computing, among others.
From hardware products to software tools, gaming capabilities and architectures, the company looks to help people turn their ideas into reality faster.
One of the important milestones came around 2009 with the design of the next-generation CUDA GPU architecture, code-named Fermi, with which Nvidia essentially solved the GPU computing puzzle. CUDA, or Compute Unified Device Architecture, is designed to work with programming languages like C, C++ and Fortran, making it easier for developers to use GPU resources effectively. It also supports multiple programming frameworks, including OpenMP, OpenACC, OpenCL and others.
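To illustrate the programming model described above, here is a minimal, generic CUDA C++ sketch (not tied to any specific Nvidia announcement) that launches a kernel to add two vectors on the GPU; it assumes a machine with a CUDA-capable GPU and the `nvcc` compiler:

```cuda
#include <cstdio>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory is accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2.0f * i; }

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[10] = %f\n", c[10]);  // 10 + 20 = 30
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Because the kernel syntax extends standard C++, developers familiar with C or C++ can parallelise code like this without learning an entirely new language, which is the accessibility the paragraph above refers to.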
Cut to 2022; the company is replicating CUDA’s success with quantum computing, where it recently launched QODA (quantum optimised device architecture). Last month, the company open-sourced QODA to accelerate quantum research and development across various areas, including health, finance, HPC (high-performance computing), AI and others.
There is no stopping Nvidia.
Earlier this month, the company announced a wide range of Metaverse initiatives. The company plans to bridge the gap between AI and the digital world, creating a more realistic Metaverse.
Now, with all these advancements in the backdrop, Nvidia's GTC, held since 2009, provides a platform to understand GPU computing and the challenges in the field, alongside launches of the futuristic technologies Nvidia's in-house experts and researchers are experimenting with.
In an interview with Analytics India Magazine, Vishal Dhupar, managing director, Asia South at Nvidia, said Nvidia can synthesise virtual worlds with the physical world because it sits at the intersection of computer graphics, physics and intelligence. “That’s what people come to see. That’s what people imbibe. That’s what people practise. That’s why GTC,” he added.
What to expect at Nvidia GTC?
Nvidia’s Omniverse has been the talk of the town. The platform offers developers scalable, multi-GPU, real-time collaboration with true-to-reality simulation. The company believes it will revolutionise how people create and develop as individuals and work together as teams, bringing new creative possibilities and efficiency to 3D creators, developers and enterprises.
At GTC, the company will announce various updates, libraries and new tools and applications to create immersive AI chatbots, realistic avatars, and stunning 3D virtual worlds.
Realistic Avatars: Recently, Nvidia announced lifelike avatars that give an animated human face to the computers people interact with online. This might be similar to MyoSuite, a tool developed by Meta AI researchers that creates realistic musculoskeletal models more efficiently than existing ones. Given Nvidia’s rich history of revolutionising the gaming and entertainment industry, it is expected to help developers create more realistic, lifelike avatars.
Metaverse bots: Nvidia is likely to launch new capabilities and AI platforms for developing realistic avatars and characters that would help people navigate the digital world.
3D Rendering models: In March 2022, Nvidia announced Instant NeRF, touted as one of the fastest NeRF techniques to date, achieving more than 1,000x speedups in some cases. It is a neural rendering model that learns a high-resolution 3D scene in seconds and can render images of that scene in milliseconds.
Last year, Nvidia launched GANverse 3D, which can be imported as an extension in the Nvidia Omniverse to render 3D objects accurately in the virtual world. We can expect new updates and announcements around 3D rendering models at the upcoming GTC.
“Thanks to our network and computing effect, which is taking place because of our accelerated computing capabilities, we can go into our imaginations and make it real, and we can all create our own world and uniformly create many worlds,” said Dhupar, excitedly, pointing at the multiple possibilities on Omniverse.
He further said that AI has a huge role to play in creating such 3D worlds, where machines or bots can write their own software, which humans can drive; the bots can then learn on their own and, most importantly, make recommendations and predictions based on interactions in the metaverse.
Nvidia currently offers Omniverse Enterprise, where it looks to help enterprises build 3D design and digital twin workflows with real-time collaboration and true-to-reality simulation. At GTC, there might be announcements of new partnerships on how companies leverage its Omniverse Enterprise platform to create various use cases, including robotic process automation, fighting climate change, automobile design, and more.
Banking on the success of CUDA, which opened up a new type of hardware and programming paradigm, Nvidia is betting big on quantum computing to help develop an ecosystem of hybrid quantum applications running on top of QODA.
Citing the example of CUDA, Dhupar said QODA allows developers to run quantum simulations. “You can write a lot of test applications, and we can get ready when quantum hardware really comes into play,” he added, saying that it is far simpler to use than how one would typically program classical computers.
“It helps quantum computing scientists to write algorithms and test their applications and get to the next level using the GPU where instead of one or two bits, you can write into hundreds of bits, qubits and move forward onto it,” explained Dhupar.
At GTC, the company is expected to announce some of the latest use cases and updates of its platforms, alongside new partnerships and collaborations to accelerate quantum computing research across the globe.
Hardware, AI chips and more
Previously, Nvidia had said that it would launch BlueField-4 by 2023. The BlueField data processing unit supports the CUDA parallel programming platform and Nvidia AI, turbocharging in-network computer vision.
The company had also announced Nvidia Grace, its first data centre CPU, an Arm-based processor that will deliver 10x the performance of today’s fastest servers on the most complex AI and HPC workloads.
“This is the only company that talks about three processors, the CPU, GPU, and DPU; about accelerating applications across multiple domains; about a recent problem holding you back, and how you create that into a solution that becomes a mega-market,” said Dhupar, hinting that the company would announce major hardware and semiconductor chip updates.
At GTC, we can expect the company to launch new data centre hardware, including CPUs and GPUs, positioned against the x86 architecture and spanning every scale of computing.
In 2019, Nvidia introduced GauGAN, an AI tool that turns sketches into photorealistic landscapes. Of late, there has been a lot of buzz around image-generation tools such as Meta’s Make-A-Scene, OpenAI’s DALL·E 2 and Midjourney, among others. There is a high chance of Nvidia making similar announcements around the release of text-to-image models and platforms.
At last year’s GTC, Nvidia announced the Nvidia DRIVE, powered by Hyperion 8. It is an end-to-end modular development platform and reference architecture for designing autonomous vehicles (AVs). This includes the NVIDIA DRIVE AGX Orin™, DRIVE AGX Pegasus, and DRIVE Hyperion 8.1 Developer Kits, all built on the NVIDIA DRIVE Orin system-on-a-chip (SoC).
Nvidia’s Dhupar did not disclose much about NVIDIA DRIVE Hyperion. However, he said there are a lot of things, from both a computing and a software perspective, and there would be talks around all of them.
“Every field that we spoke of is going through the greatest technology shift – what people call Web 3.0, some call it metaverse, and everything that gets done between that aspects is something we should be looking forward to,” shared Dhupar.