In June 2022, Ben Mildenhall, a researcher at Google based out of London, released a snippet of the team's newly developed 3D reconstruction model, RawNeRF. The new tool creates well-lit 3D scenes from 2D images and is built on Google's open-source project MultiNeRF.
The code for MultiNeRF is available on Google Research's GitHub.
Mildenhall teased a video of the team's latest development using NeRF, combining their mip-NeRF 360, RawNeRF and Ref-NeRF models. The combination was able to create a 3D space by synthesising and syncing 500 images, allowing a full 360-degree view with the camera moving across the space.
He also showcased HDR view synthesis, which allows editing the exposure, lighting and tones, along with the depth of field in the image. Since the 3D scenes and models are created from 2D raw images, the software can edit the images much like Adobe's Photoshop.
Interestingly, this new tool from Google Research recognises light and ray patterns and then cancels out noise from the images, generating 3D scenes from a set of individual images.
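For intuition, here is a rough Python sketch of why rendering in linear HDR space, as RawNeRF does, makes this kind of post-hoc exposure and tone editing possible. The tone curve, stop values and placeholder image data below are illustrative assumptions, not Google's actual pipeline.

```python
# Illustrative only: re-exposing a linear HDR rendering after the fact.
# The sRGB tone curve and exposure values are assumptions for this sketch.
import numpy as np

def tonemap_srgb(linear):
    """Approximate sRGB transfer curve applied to non-negative linear intensities."""
    linear = np.clip(linear, 0.0, None)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

def expose(hdr_linear, stops):
    """Brighten or darken an HDR rendering by a number of photographic stops."""
    exposed = hdr_linear * (2.0 ** stops)
    return np.clip(tonemap_srgb(exposed), 0.0, 1.0)

# In practice the linear image would come from the trained model's renderer;
# here random placeholder data stands in for a very dark capture.
hdr_linear = np.random.rand(480, 640, 3) * 0.05
brighter = expose(hdr_linear, stops=3)    # re-expose without recapturing the scene
darker = expose(hdr_linear, stops=-1)
```

Because the scene is stored in linear HDR space, the same rendering can be re-exposed or re-toned any number of times, which is what makes the Photoshop-like edits described above possible.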
Genesis of NeRF
Developed in 2020 by a team of researchers including Jon Barron, senior staff research scientist at Google Research, Neural Radiance Field (NeRF) is a neural network that can generate 3D scenes from 2D photos. Apart from recovering detail and colour from RAW images captured in a dark scene, the tool can process them to create a 3D space, allowing the user to view the scene from different camera positions and angles.
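To make the idea concrete, below is a minimal, hedged sketch of the core NeRF recipe in PyTorch: a small MLP maps a 3D point and a viewing direction to a colour and a density, and a simple volume renderer integrates those samples along each camera ray. The layer sizes, sampling scheme and names are illustrative assumptions, not Google's published architecture.

```python
# A toy NeRF-style model and renderer, for illustration only.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # (x, y, z) plus view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB colour plus volume density
        )

    def forward(self, points, view_dirs):
        out = self.mlp(torch.cat([points, view_dirs], dim=-1))
        rgb = torch.sigmoid(out[..., :3])          # colour in [0, 1]
        sigma = torch.relu(out[..., 3])            # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Volume-render one ray by sampling points between the near and far planes."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction                 # sample positions along the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)                   # opacity of each sample
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                    # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)                 # final pixel colour

# Usage: render one pixel for a camera at the origin looking along +z.
model = TinyNeRF()
colour = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```

A real NeRF adds positional encoding of the inputs, hierarchical sampling and training against the captured photos, but the ray-integration step above is what lets the trained scene be viewed from new camera positions and angles.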

The rise of 3D reconstruction models
Recently, Meta announced the release of Implicitron, an extension of PyTorch3D, a 3D computer vision research tool for rendering prototypes of real-life objects. Still in the early research phase, the new approach represents objects as continuous functions and is planned for use in real-world AR and VR applications.
#META has introduced Implicitron an extension for the PyTorch3D framework for working with NeRF neural networks and now it will be even easier to work with #3DML. This will make #GAUDI -like solutions available soon in #XR pic.twitter.com/YlHsnbMvTY
— Phygital+ (@phygitalplus) August 21, 2022
In March 2022, the Nvidia research team released Instant NeRF, which can reconstruct a 3D scene within seconds from 2D images taken at different angles. According to Nvidia, leveraging AI when processing the pictures speeds up rendering.
In 2021, Nvidia AI research also developed GANverse 3D, an extension to its Omniverse platform that renders 3D objects from 2D images using deep learning. From a single image, the model uses StyleGANs to produce multiple views.
Following Nvidia's technique and innovation, Google's research team led by Mildenhall was able to add the ability to remove noise from the scene created from 2D images and drastically enhance the lighting. Combined with the 3D scene, the noise-reduction method gives a high-resolution output that transitions seamlessly between angles and positions.
NeRF in Metaverse
Several key technologies are essential for building an immersive metaverse experience, including AI, IoT, AR, blockchain and 3D reconstruction. While developers are using frameworks like Unreal Engine, Unity and CryEngine to render 3D models into the metaverse, leveraging 3D reconstruction technology can enhance both the quality and the immersion.

Brad Quinton, founder of the Perceptus Platform, said that the metaverse depends heavily on the 3D recreation of scenes; the whole idea of the metaverse is to be able to see and interact with the content within it. The Perceptus Platform enables real-time tracking of physical objects in arbitrary 3D environments.
With the ability to create 3D objects and spaces by merely capturing multiple 2D images, the speed at which the metaverse is being constructed can be dramatically increased. Adding to that, AR and VR technologies like the Perceptus Platform can make the metaverse truly immersive.
There are many challenges in forming a perfect metaverse, such as capturing the physical properties of materials like weight and fold. One of these challenges, lighting a model so that it represents the real-life quality of the object, was recently resolved with the NeRF model, as developers were able to illuminate rendered objects under arbitrary lighting conditions.

NeRF-generated models can also be converted to a mesh using marching cubes. This allows models to be imported directly into the metaverse without having to go through 3D rendering software. Vendors, artists and other enterprises will now be able to create virtual representations of their products and accurately render them across the 3D world.
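As an illustration of that conversion step, the sketch below samples a density field on a regular grid, runs marching cubes from scikit-image, and writes a plain OBJ mesh that downstream 3D tools can import. The query_density function is a hypothetical stand-in for a trained NeRF's density output, so the example stays self-contained.

```python
# Hedged sketch: extracting a mesh from a NeRF-style density field with marching cubes.
import numpy as np
from skimage import measure

def query_density(points):
    """Placeholder for a trained NeRF density query: high density inside a sphere."""
    return np.maximum(0.0, 1.0 - np.linalg.norm(points, axis=-1))

# Sample densities on a regular grid covering the scene bounds.
n = 128
coords = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(coords, coords, coords, indexing="ij"), axis=-1)
density = query_density(grid.reshape(-1, 3)).reshape(n, n, n)

# Marching cubes turns the density field into a triangle mesh at a chosen threshold.
verts, faces, normals, _ = measure.marching_cubes(density, level=0.5)

# Write a simple OBJ file that a 3D engine or metaverse pipeline can import.
with open("nerf_mesh.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for face in faces:
        f.write(f"f {face[0] + 1} {face[1] + 1} {face[2] + 1}\n")
```

In practice the grid would cover the trained scene's bounds and the threshold would be tuned to the model's density scale before exporting the mesh.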