The ICCV 2021 Best Papers Have Been Announced

A look at the award-winning papers presented at ICCV 2021.

ICCV (IEEE International Conference on Computer Vision) 2021 has announced its Best Paper Awards, honourable mentions, and Best Student Paper. ICCV is one of the premier biennial international computer vision conferences, featuring a main conference track alongside many workshops and tutorials. This year's conference was held entirely online.

The following papers received awards at ICCV 2021; let's take each in turn and examine its significance.

Title: Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields


Jonathan T. Barron – Google

Ben Mildenhall – Google

Matthew Tancik – UC Berkeley

Peter Hedman – Google

Ricardo Martin-Brualla – Google

Pratul P. Srinivasan – Google


NeRF's rendering procedure samples a scene with a single ray per pixel, which can produce overly blurry or aliased renderings when the training or testing images observe scene content at different resolutions. The researchers introduced mip-NeRF, a multiscale NeRF-like model that addresses NeRF's inherent aliasing. NeRF operates by casting rays, encoding the positions of points along those rays, and training separate neural networks at different scales. By contrast, mip-NeRF represents the scene at multiple scales by casting cones, encoding the positions and sizes of conical frustums, and training a single neural network. Mip-NeRF can also match the accuracy of a brute-force supersampled NeRF variant while being 22 times faster. The researchers expect that the general strategies presented here will benefit others attempting to improve the performance of ray-tracing-based neural rendering models. (Read here)
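The key ingredient behind encoding "positions and sizes of conical frustums" is mip-NeRF's integrated positional encoding: each frustum is approximated by a Gaussian, and each sinusoidal feature is attenuated by its expected value under that Gaussian, so high frequencies fade out as the sampled region grows. A minimal NumPy sketch of that idea (function name and simplified diagonal-variance interface are our own, not the paper's code):

```python
import numpy as np

def integrated_pos_enc(mean, var, num_freqs=4):
    """Sketch of mip-NeRF-style integrated positional encoding.

    Instead of encoding a single 3D point, encode a Gaussian (mean, var)
    approximating a conical frustum. Each sinusoid is replaced by its
    expectation under the Gaussian, which damps high-frequency features
    for large regions and thereby reduces aliasing.
    """
    mean = np.asarray(mean, dtype=float)
    var = np.asarray(var, dtype=float)
    feats = []
    for l in range(num_freqs):
        scale = 2.0 ** l
        # E[sin(s*x)] under N(mean, var) = sin(s*mean) * exp(-s^2 * var / 2)
        damp = np.exp(-0.5 * (scale ** 2) * var)
        feats.append(np.sin(scale * mean) * damp)
        feats.append(np.cos(scale * mean) * damp)
    return np.concatenate(feats)
```

With zero variance this reduces to the standard NeRF positional encoding; as the variance grows, the highest-frequency features shrink toward zero, which is exactly the anti-aliasing behaviour described above.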

Title: OpenGAN: Open-Set Recognition via Open Data Generation


Shu Kong – Carnegie Mellon University

Deva Ramanan – Carnegie Mellon University, Argo AI


The researchers developed OpenGAN for open-set recognition by incorporating two technical insights: 

1) training a classifier on off-the-shelf (OTS) features rather than pixels, and 

2) adversarially synthesising fake open-set data to augment the pool of open training data.

With OpenGAN, the researchers demonstrate that a GAN discriminator can achieve state-of-the-art open-set discrimination, provided a validation set of real outlier examples is used to select the discriminator. OpenGAN remains effective even when the outlier validation examples are few in number or heavily skewed. OpenGAN substantially improves both open-set image recognition and semantic segmentation. (Read here)
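At inference time, the idea reduces to thresholding the discriminator's score on a feature vector: confident "closed-set-like" features get a class label, everything else is rejected as open-set. A minimal sketch of that decision rule (the function and its callable arguments are hypothetical stand-ins, not OpenGAN's actual API):

```python
import numpy as np

def open_set_predict(features, discriminator, closed_classifier, tau):
    """Sketch of OpenGAN-style open-set recognition.

    discriminator: maps a feature vector to a realness score
                   (higher = more closed-set-like).
    closed_classifier: maps a feature vector to a closed-set label.
    Features scoring below threshold tau are rejected as open-set (-1);
    the rest receive the closed-set classifier's label.
    """
    scores = np.array([discriminator(f) for f in features])
    labels = np.array([closed_classifier(f) for f in features])
    return np.where(scores >= tau, labels, -1)
```

The threshold tau would be chosen on the validation set of real outliers mentioned above, which is also how the paper selects among discriminator checkpoints.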

Title: Viewing Graph Solvability via Cycle Consistency


Federica Arrigoni – University of Trento

Andrea Fusiello – University of Udine

Elisa Ricci – University of Trento, Fondazione Bruno Kessler

Tomas Pajdla – CIIRC CTU in Prague


The researchers examined the solvability of viewing graphs, i.e., whether a graph uniquely determines a set of projective cameras, and produced several significant advances in the theory and practical use of viewing graphs. They also analysed graphs with up to 90 vertices, setting the bar for the uncalibrated case. The notion of solvability studied here depends entirely on the topology of the viewing graph; adding further information would yield a new notion of solvability that might be fascinating to investigate, and connecting this work to the calibrated case is another intriguing direction for future research. Beyond its theoretical significance, the solvability problem has a practical consequence: reconstruction methods benefit from knowing in advance whether the graph under consideration is solvable, since no method can produce a suitable solution to an ill-posed problem. In that scenario, finding a maximal solvable subgraph would be of significant relevance. (Read here)
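To make the topology-only flavour of the problem concrete, here is a hedged sketch of two simple necessary (but far from sufficient) sanity checks on a viewing graph: it must be connected, and every view must appear in at least two pairwise constraints, since a camera linked to the rest of the graph by a single edge retains residual projective ambiguity. The full solvability characterisation in the paper is much finer than this; the function below is purely illustrative:

```python
from collections import deque

def passes_basic_checks(n_views, edges):
    """Illustrative necessary conditions for viewing-graph solvability.

    n_views: number of cameras, labelled 0..n_views-1.
    edges:   list of (u, v) pairs, each an available two-view constraint.
    Returns False if the graph is disconnected or some view has
    degree < 2 -- either way, full solvability is impossible.
    Passing these checks does NOT imply the graph is solvable.
    """
    adj = {v: set() for v in range(n_views)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # every camera needs at least two pairwise constraints
    if any(len(nbrs) < 2 for nbrs in adj.values()):
        return False
    # connectivity via breadth-first search from view 0
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == n_views
```

A triangle of three views passes both checks, while a simple chain fails, because its endpoint views each have only one constraint.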

Title: Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction


Jeremy Reizenstein – Facebook AI Research

Roman Shapovalov – Facebook AI Research

Philipp Henzler – University College London

Luca Sbordone – Facebook AI Research

Patrick Labatut – Facebook AI Research

David Novotny – Facebook AI Research


The researchers released Common Objects in 3D (CO3D), a dataset of in-the-wild, object-centric videos spanning 50 object categories, annotated with camera poses and point clouds. They also proposed NerFormer, a hybrid of a Transformer and neural implicit rendering that reconstructs 3D object categories from CO3D more accurately than the 14 other baselines evaluated. CO3D collection continues at a steady clip of 500 videos per week, which the researchers intend to release shortly. (Read here)


The International Conference on Computer Vision brings together the international community focused on computer vision. This year's virtual platform served as the venue for engaging with the community's most recent research and ideas, and, with its main conference and many co-located workshops and tutorials, ICCV 2021 delivered excellent value for students, academics, and industry researchers thanks to its high quality and low cost.

Dr. Nivash Jeevanandam
Nivash holds a doctorate in information technology and has been a research associate at a university and a development engineer in the IT industry. Data science and machine learning excite him.
