Can Neural Rendering Solve The Long-Standing Challenges In Computer Graphics?

Modern-day graphics in movies and gaming have improved tremendously, moving from hand-crafted photorealistic images to auto-generated components of the imagery. However, efficient rendering of photorealistic virtual worlds has been a long-standing challenge in computer graphics, so researchers have begun exploring deep neural networks to make rendering easier.

Recently, there has been a sudden rise in neural rendering publications where it has been applied to a variety of use-cases such as semantic image editing, free-viewpoint videos, relighting, face and body reenactment, digital avatars and more.


In a survey by a team of researchers from Adobe, Google and Stanford, the authors observe that neural rendering has enabled applications previously thought impossible.

Overview Of Rendering

Anyone who has ever captured a photo or video knows how many factors affect the final output: lighting, composition, camera settings and more. When one tries to recreate such photorealistic imagery using computer graphics, the problems become even more palpable.

The synthetic generation of virtual worlds depends on a clever interplay of physics and mathematics. One popular approach is ray tracing, where rays are cast backwards from the image pixels into a virtual scene, and reflections and refractions are simulated by recursively casting new rays from their intersections with the scene geometry.
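As a concrete (toy) illustration of backward ray casting, the sketch below traces rays against a single hard-coded sphere and recurses on mirror reflections. Every scene parameter here (the sphere, its albedo, the sky colour) is a made-up value for illustration, not anything from the survey:

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Distance t along a unit-direction ray to the nearest sphere hit, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c  # quadratic a = 1 since direction is unit length
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0, max_depth=3):
    """Cast a ray into a toy one-sphere scene, recursing on mirror reflections."""
    center, radius = np.zeros(3), 1.0
    albedo = np.array([0.8, 0.2, 0.2])    # the sphere's base colour
    t = intersect_sphere(origin, direction, center, radius)
    if t is None or depth >= max_depth:
        return np.array([0.5, 0.7, 1.0])  # background "sky" colour
    hit = origin + t * direction
    normal = (hit - center) / radius
    # Mirror-reflect the ray about the surface normal and recurse.
    reflected = direction - 2.0 * np.dot(direction, normal) * normal
    return 0.5 * albedo + 0.5 * trace(hit, reflected, depth + 1, max_depth)
```

A production ray tracer adds many bounces, materials and light sampling on top of this same recursion, which is exactly why it gets computationally expensive.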

Another technique for incorporating real-world data into animation is inverse rendering, which again has drawbacks in mathematical complexity and computational expense. This is where neural rendering comes into the picture: it can recreate some real-world effects more accurately than inverse rendering.

Current State Of Neural Rendering

The introduction of deep generative networks enabled creators to produce visually compelling imagery. However, these networks still lack fine-grained control over scene appearance and cannot always handle the complex, non-local 3D interactions between scene properties. Neural rendering addresses the shortcomings of both deep generative modelling and traditional rendering by combining machine learning with physical knowledge from computer graphics. Neural rendering techniques are diverse, differing in the inputs they require, the outputs they produce and the network structures they utilise.

A typical neural rendering approach, explained the authors, takes input images corresponding to certain scene conditions like lighting or layout and builds a “neural” scene representation from them. It then “renders” this representation under novel scene properties to create images. The learned scene representation is not restricted by simple scene modelling approximations and can be optimised for high-quality images.
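The loop the authors describe (fit a scene representation so that rendering it reproduces the observed images) can be caricatured in a few lines. The sketch below is a deliberately tiny, purely linear stand-in: a random frozen "decoder" plays the renderer, a 4-D vector plays the neural scene representation, and gradient descent fits it to one observed "image". None of this is a published architecture; it only illustrates the optimise-representation-by-rendering idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a neural scene representation: a 4-D latent code that a
# fixed linear "renderer" decodes into a 12-pixel image. All sizes and data
# here are made up for illustration.
W_dec = rng.normal(size=(12, 4))  # frozen decoder weights
true_z = rng.normal(size=4)       # the scene we pretend was observed
target = W_dec @ true_z           # one "observed" image of that scene

def render(latent):
    """Decode a latent scene representation into an image."""
    return W_dec @ latent

def fit_scene(steps=10000, lr=0.02):
    """Optimise the latent code by gradient descent on reconstruction error."""
    z = np.zeros(4)
    for _ in range(steps):
        residual = render(z) - target
        z -= lr * (W_dec.T @ residual)  # gradient of 0.5 * ||residual||^2
    return z
```

In a real system the decoder is itself a trained network and the representation may be a voxel grid, a latent code or an implicit function, but the supervision signal is the same: the rendered output is compared against captured images.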

Let’s take a look at a few of the recent approaches in this domain (h/t Prof. Frank Dellaert):

Neural Volumes

In this work, the authors present a learning-based approach to represent dynamic objects inspired by the integral projection model used in tomographic imaging. The approach is supervised directly from 2D images in a multi-view capture setting and does not require explicit reconstruction or tracking of the object. 


Neural Radiance Fields (NeRF)

This work achieved state-of-the-art results for synthesising novel views of complex scenes by optimising an underlying continuous volumetric scene function using a sparse set of input views. A team of researchers from UC Berkeley synthesised views by querying 5D coordinates along camera rays and used classic volume rendering techniques to project the output colours and densities into an image. They also describe how to effectively optimise neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis.
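The classic volume-rendering step mentioned above, accumulating colours and densities sampled along each ray, reduces to a short numerical quadrature. A minimal sketch (in a full system the per-sample densities and colours would come from the learned scene function; here they are just input arrays):

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Alpha-composite per-sample densities and colours along one ray.

    sigmas: (N,) volume densities at N samples along the ray
    colors: (N, 3) RGB values at those samples
    deltas: (N,) spacing between adjacent samples
    Returns the rendered RGB for the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each segment
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                 # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)
```

Because every step here is differentiable, the reconstruction loss on rendered pixels can be backpropagated into whatever network produced the densities and colours.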

According to Prof. Dellaert, NeRF “triggered” new interest in neural volume rendering. Neural rendering approaches can lower the barrier to entry and make manipulation technology accessible to non-experts with limited resources.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
