A few decades ago, the Internet boomed and completely altered the world. “What we’re seeing is the start of a new era of the Internet, one that is generally being called the Metaverse,” says Rev Lebaredian, vice president of Omniverse and Simulation Technology at NVIDIA, in a press briefing.
The web, as we know it, is two-dimensional, but the power and potential of 3D technology are expected to drive this new era of the Internet.
The ‘Metaverse’ is still in an early phase of development. As it stands, it is rather unsophisticated; to be suitable for wider adoption, it needs to become more realistic and deliver a truly immersive experience.
At the 2022 edition of the SIGGRAPH conference, held in Vancouver, NVIDIA announced a wide range of Metaverse initiatives to help achieve this goal.
With these new tools, NVIDIA aims to bridge the gap between AI and the Metaverse.
“Having a design team behind recreating the real world as virtual space is time-consuming and not very efficient considering the pace at which the Metaverse is growing. We need AI to offload all the repetitive tasks that a designer is expected to do, so they can focus on other aspects of virtual world creation,” says Mukundan Govindaraj, Solutions Architect for Omniverse at NVIDIA, in conversation with Analytics India Magazine.
Neural graphics to make the Metaverse more realistic
According to Rev Lebaredian, NVIDIA is still a ‘tools company’ with graphics at its core. It is best known for its Graphics Processing Units (GPUs). In 2020, the US-based tech giant announced its A100 GPU, capable of boosting AI performance by up to 20x over the previous generation.
At present, NVIDIA aims to use the power of neural graphics to create realistic 3D objects and drive the development of the Metaverse. 3D content creation will play a crucial role if the Metaverse is to achieve wider adoption.

Neural graphics is a novel technology that combines AI and graphics to create an accelerated graphics pipeline that learns from data. Building 3D objects for the Metaverse involves meticulous processes such as product design and visual effects.
“Often, developers balance detail and photorealism against deadlines and budget constraints. Developing something that depicts the real world in the Metaverse is a very difficult and time-consuming task. What makes it even more challenging is that multiple objects and characters need to interact in a virtual world. Simulating physics becomes just as important as simulating light,” says Mukundan Govindaraj.
NVIDIA also recently announced tools and programmes, including NeuralVDB, Kaolin Wisp, and 3D MoMa, that enable quick and easy 3D content creation for millions of designers and creators:
- NeuralVDB: an update to the industry-standard OpenVDB format. By using machine learning, NeuralVDB drastically reduces the memory footprint needed to represent sparse volumetric data, allowing for higher-resolution 3D datasets.
- Kaolin Wisp: an addition to Kaolin, a PyTorch library for accelerating 3D deep-learning research. Focused on neural fields, it brings the time needed to test and implement new techniques down from weeks to days (see the sketch after this list).
- 3D MoMa: a new inverse rendering pipeline that reconstructs a realistic 3D object from 2D images and lets developers import it into a graphics engine.
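To make the Kaolin Wisp item concrete, below is a minimal sketch, in plain PyTorch, of the kind of coordinate-based neural field that Wisp is built to experiment with: a small MLP that maps 3D coordinates to an occupancy value. The architecture, dimensions, and toy training step are illustrative assumptions, not Wisp’s actual API.

```python
import torch
import torch.nn as nn

# Illustrative coordinate-based neural field: maps (x, y, z) points to an
# occupancy logit. This is a generic sketch of the technique Kaolin Wisp
# researches, not Wisp's own API; all names and sizes here are assumptions.
class NeuralField(nn.Module):
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # one occupancy logit per 3D point
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(xyz)

# Toy training step: fit the field to the occupancy of a small sphere.
field = NeuralField()
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)

points = torch.rand(1024, 3) * 2 - 1                         # random points in [-1, 1]^3
target = (points.norm(dim=-1, keepdim=True) < 0.5).float()   # 1 if inside the sphere

optimizer.zero_grad()
loss = nn.functional.binary_cross_entropy_with_logits(field(points), target)
loss.backward()
optimizer.step()
```

Once trained on real scene data, such a network can be queried at arbitrary resolution, which is what makes neural fields attractive for compact, detailed 3D representations.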
Life-like virtual assistants
Among the different tools announced by NVIDIA, the Omniverse Avatar Cloud Engine (ACE), a new AI-assisted 3D avatar builder, is perhaps the most intriguing.
NVIDIA claims that with the help of ACE, developers will be able to create autonomous virtual assistants and digital humans.
Users often interact with voice-assistant software such as Siri and Alexa. Now, with this new technology, both Siri and Alexa could possibly have a face.
“The Metaverse without human-like representations or AI inside it will be a very dull and sad place,” says Rev Lebaredian.
In concurrence, Mukundan Govindaraj explains: “It’s a collection of AI models and services with which developers can quickly build, customise, and deploy interactive avatars. Developers can leverage the Omniverse Avatar technology platform to build their own domain-specific avatar solutions.”
NVIDIA also announced Omniverse Audio2Face, an AI-powered technology that generates expressive facial animation from an audio source.
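Audio2Face’s own interfaces are not covered in the article; as a loose conceptual sketch of audio-driven facial animation, the snippet below maps a window of audio features to blendshape weights that a 3D face rig could consume. Every name and dimension here is a hypothetical illustration, not NVIDIA’s model.

```python
import torch
import torch.nn as nn

# Conceptual sketch of audio-driven facial animation (NOT Audio2Face's API):
# a network maps a window of audio features to blendshape weights for a
# 3D face rig. All dimensions below are hypothetical.
N_AUDIO_FEATURES = 29   # per-frame speech features (assumed)
WINDOW = 16             # frames of audio context per animation frame (assumed)
N_BLENDSHAPES = 52      # e.g., an ARKit-style blendshape set (assumed)

class AudioToBlendshapes(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),  # (B, WINDOW, FEAT) -> (B, WINDOW * FEAT)
            nn.Linear(WINDOW * N_AUDIO_FEATURES, 256),
            nn.ReLU(),
            nn.Linear(256, N_BLENDSHAPES),
        )

    def forward(self, audio_window: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps each blendshape weight in [0, 1].
        return torch.sigmoid(self.encoder(audio_window))

model = AudioToBlendshapes()
audio = torch.randn(1, WINDOW, N_AUDIO_FEATURES)  # one batch of audio context
weights = model(audio)                            # (1, 52) facial pose for this frame
```

The core idea is the same regardless of architecture: speech goes in, a per-frame facial pose comes out, and the renderer animates the avatar’s face from those weights.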
An expanded Omniverse
During the conference, NVIDIA also announced a new version of Omniverse.
Omniverse is a Universal Scene Description (USD) platform, an engine for building Metaverse applications. It has been downloaded more than 200,000 times so far. The new version of NVIDIA’s Omniverse will allow developers to create content for a significantly more immersive experience.
USD is emerging as the HTML of the Metaverse. “USD, developed and open-sourced by Pixar, combines the best parts of the previous file formats and runtime APIs. The ability to interoperate with many tools is going to be the driving factor for its popularity and adoption across all industries working with 3D file formats,” says Mukundan Govindaraj.
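To give a flavour of what working with USD looks like, here is the canonical “hello world” from Pixar’s open-source USD Python API (the pxr module): it authors a stage containing a transform and a sphere and saves it as a .usda file that any USD-aware tool can open. Treat it as a minimal sketch that assumes the usd-core Python package is installed.

```python
# Minimal USD authoring example using Pixar's open-source Python API
# (pip install usd-core). Creates a stage with a transform and a sphere.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("HelloWorld.usda")          # new USD layer on disk
xform = UsdGeom.Xform.Define(stage, "/hello")           # a transform prim
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")   # a sphere under it

stage.GetRootLayer().Save()                   # write HelloWorld.usda
print(stage.GetRootLayer().ExportToString())  # human-readable .usda text
```

Because the saved file is plain, structured scene description rather than an application-specific format, the same asset can move between Omniverse and other USD-aware tools, which is exactly the interoperability Govindaraj describes.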
According to NVIDIA, the new version of the Omniverse comes with several upgraded core technologies and more connections to popular tools.
“Connectors are now available in beta for PTC Creo, Visual Components and SideFX Houdini. These new developments join Siemens Xcelerator, now part of the Omniverse network, welcoming more industrial customers into the era of digital twins,” says NVIDIA in a blog post.