At GTC 2022, Nvidia announced major updates and features for its metaverse-building platform, Omniverse. For the first time, the company is moving into SaaS with Nvidia Omniverse Cloud, a comprehensive suite of cloud services that lets artists, developers and enterprise teams design, publish, operate and experience metaverse applications anywhere.
"The metaverse is the evolution of the internet, connecting virtual 3D worlds using Universal Scene Description (USD) and viewed through a real-time virtual world simulation engine," explained Richard Kerris, VP of Omniverse, outlining its applications.
Kerris said that fashion designers, furniture and goods makers, and retailers are offering virtual 3D products that can be experienced in augmented reality. Further, he said that telcos are creating digital twins of their radio networks to optimise the deployment of radio towers.
He said that most companies today are creating digital twins of warehouses and factories to optimise their layouts and logistics. “We are building a digital twin of Earth to predict the climate decades into the future,” added Kerris.
Incidentally, most of the Nvidia GTC predictions made by Analytics India Magazine have come true to a large extent.
Fueling Omniverse Ecosystem in India
Nvidia said that its developer and customer ecosystem is growing. At present, the company has over 150 software partners and counting. Over the years, it has built hundreds of extensions, thanks to its internal team, developer community, partners and resellers across the globe.
Nvidia said that over 2,00,000 individual users have downloaded Omniverse, spread across industries including telecommunications, transportation, retail, energy, automobile and manufacturing.
Kerris told AIM that Nvidia Omniverse has hundreds of customers in India. "It could also be in the thousands," he added, stating that Nvidia has a developer relations team in the country that is constantly swamped with requests from customers who are using Omniverse or want access to training on it. "It is a growing marketplace for us," he added.
Nvidia Enters SaaS with Omniverse Cloud Services
At GTC, Nvidia announced the launch of Nvidia Omniverse Cloud, its first software- and infrastructure-as-a-service offering: a comprehensive suite of cloud services for artists, developers and enterprise teams to design, publish, operate and experience metaverse applications anywhere.
With this, individuals and teams can design and collaborate on 3D workflows without needing local computing power. For instance, roboticists can train, simulate, test and deploy AI-enabled intelligent machines with increased scalability and accessibility.
Some early supporters of Omniverse Cloud include RIMAC Group, WPP and Siemens.
Simply put, users can create and collaborate on any device with the Omniverse App Streaming feature, access and edit shared virtual worlds with Omniverse Nucleus Cloud, and scale 3D workloads across the cloud with Omniverse Farm.
Omniverse Cloud runs on the planetary-scale Omniverse Cloud Computer, a system comprising Nvidia OVX for graphics and simulation, Nvidia HGX for advanced AI workloads, and the Nvidia Graphics Delivery Network for low-latency delivery of interactive 3D experiences to edge devices.
Other Omniverse Updates
"Omniverse is like nothing that's ever been built," said Kerris.
Omniverse is a real-time 3D database, a platform developed by Nvidia to build and operate applications for the metaverse. The platform enables 3D designers and teams to connect better, extend existing 3D pipelines, and operate virtual world simulations. Companies can now write applications and services on Omniverse, such as replicators for generating synthetic data and real-life simulations for robotics (Isaac Sim) and autonomous vehicles (DRIVE Sim).
To ensure Omniverse runs smoothly, Nvidia announced at GTC the second generation of Nvidia OVX, powered by next-gen GPUs and enhanced networking technology to deliver groundbreaking graphics, AI and digital twin simulation capabilities.
Citing BMW Group and Jaguar Land Rover, Kerris said they are among the first customers to receive second-generation Nvidia OVX systems. Nvidia has partnered with Inspur, Lenovo and Supermicro to build the systems, which are expected to launch by early 2023. "We will be expanding the partner ecosystem, including GIGABYTE, H3C and QCT, in the future," he added.
At GTC, Nvidia also released several major updates to Omniverse. These include new Universal Scene Description (USD) resources: collections of free, online USD schema examples and tutorials. In addition, the company released USD++ extension examples with the latest Kit, alongside samples for web-based USD experiences.
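For readers unfamiliar with USD, a scene is described as a hierarchy of typed "prims" in a human-readable `.usda` file, and schemas define which attributes a prim type carries. A minimal sketch of the format (the prim names and values here are illustrative, not drawn from Nvidia's sample collections):

```usda
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    # A cube prim using the standard UsdGeom Cube schema,
    # translated one unit up via an xform op
    def Cube "Box"
    {
        double size = 2.0
        double3 xformOp:translate = (0, 1, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because the format is plain text and layered, tools from different vendors can read, reference and non-destructively override the same scene, which is what makes USD the interchange layer Omniverse is built around.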
Nvidia has also released new Kit-based reference apps, such as Create and View. In addition, it announced major improvements in real-time ray tracing, path tracing, large-scene performance, animation and behaviour, and neural graphics, including new experimental AI tools based on GAN and diffusion models, a new AI car explorer and a new animal explorer.
On XR (mixed reality), Nvidia has released major rendering and performance improvements, powered by its new GPUs, that drive real-time, fully ray-traced VR (virtual reality). The team said this delivers twice the performance of previous versions, meaning large scenes that were previously impossible to use with fully ray-traced VR are now smooth enough for extended viewing. "You will be able to have fully raytraced VR experiences with Omniverse," said Kerris.
Omniverse Replicator: Nvidia has made five containers available for AWS deployment.
Nvidia has also launched new 'SimReady' assets: thousands of free, simulation-ready assets for AI workflows such as digital twins, synthetic data generation and AI training workloads.
In addition, Nvidia has released new developer tools, such as a new content management system (CMS) for Omniverse developers.
Most importantly, Nvidia announced support for Siemens JT, a widely used 3D data format. It is employed throughout the product development lifecycle and across all major industries to communicate critical design information typically locked inside CAD files, explained Kerris.
Last month, at SIGGRAPH 2022, Nvidia announced the launch of its Avatar Cloud Engine (ACE), a collection of cloud-based AI models and services for developers to build, customise and deploy engaging, interactive avatars.
Continuing the momentum, at this year's GTC, Nvidia announced updates to its cloud-native avatar technology, Omniverse ACE, and unveiled Violet, a cloud-based avatar that represents the latest evolution in avatar development through ACE.
Nvidia said that to animate interactive avatars like Violet, developers need to ensure the 3D character can see, hear, understand and communicate with people. But, of course, this is easier said than done.
To fuel this, Nvidia launched a new limited early-access programme for Nvidia Tokkio, a domain-specific AI framework used to build and deploy fully autonomous, interactive customer service avatars in the cloud. "Avatars are the inhabitants of virtual worlds, and creating and deploying interactive avatars can be incredibly challenging. ACE essentially brings any avatar to life using AI at scale anywhere with a suite of cloud-native AI microservices," shared Kerris.
Nvidia Maxine Cloud-Native Microservices
At GTC 2022, Nvidia also announced updates to Maxine, its real-time communication AI application framework, which has been re-architected for cloud-native microservices. With the latest announcement, customers can now request early access to audio effects microservices for premium-quality audio in multi-cloud deployments. These microservices include noise removal, room echo removal, acoustic echo cancellation and audio super-resolution.
Nvidia said that the new SDK features would deliver innovative AI effects alongside enhanced SDK AI models for improved audio and video quality, including features like face expression estimation and eye contact.