Best Papers Announced At IEEE International Conference On Computer Vision

The IEEE International Conference on Computer Vision (ICCV) is a noted event comprising a main conference, workshops, and tutorials. Held every two years, it is one of the top conferences on emerging technologies; the 2019 edition took place in Seoul, South Korea from Oct 27, 2019 to Nov 3, 2019. This year, the conference received more than 4,000 paper submissions, of which the committee accepted only 1,075.

The event covered a number of topics in computer vision and pattern recognition:

  • 3D computer vision
  • Action recognition
  • Biometrics, face, and gesture
  • Big data and large-scale methods
  • Biomedical image analysis
  • Computational photography, photometry, shape from X
  • Deep learning
  • Low-level vision and image processing
  • Motion and tracking
  • Optimization methods
  • Recognition: detection, categorization, indexing, and matching
  • Robot vision
  • Segmentation, grouping, and shape representation
  • Statistical learning
  • Video: events, activities, and surveillance
  • Vision for X and others

During the main conference, the committee announced its Best Paper awards in three categories.

1| Best Paper Award (Marr Prize)

“SinGAN: Learning a Generative Model from a Single Natural Image” by Tamar Rott Shaham, Tali Dekel, Tomer Michaeli

Existing single-image GAN schemes are conditional and limited to texture images. To address this, researchers from the Technion and Google Research proposed SinGAN, an unconditional generative model that can be learned from a single natural image.

SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. It handles general natural images containing complex structures and textures, without relying on the existence of a database of images from the same class. Once trained, the model can produce diverse high-quality image samples (of arbitrary dimensions) which semantically resemble the training image and yet contain new object configurations and structures.
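The coarse-to-fine sampling loop described above can be sketched in a few lines. The sketch below is a toy illustration, not the paper's implementation: `toy_generator` is a hypothetical stand-in for a trained per-scale convolutional generator, and nearest-neighbour upsampling stands in for the bilinear resize a real pipeline would use.

```python
import numpy as np

def upsample(img, scale=4 / 3):
    """Nearest-neighbour upsampling (stand-in for a learned pipeline's resize)."""
    h, w = img.shape
    nh, nw = int(h * scale), int(w * scale)
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return img[np.ix_(rows, cols)]

def toy_generator(x):
    """Stand-in for one scale's fully convolutional generator.
    In SinGAN each scale's generator outputs a residual image."""
    return 0.1 * np.tanh(x)

def singan_sample(num_scales=4, base_size=8, rng=np.random.default_rng(0)):
    # Coarsest scale: generate purely from noise.
    img = toy_generator(rng.normal(size=(base_size, base_size)))
    # Finer scales: upsample the previous output, inject noise,
    # and add that scale's learned residual.
    for _ in range(num_scales - 1):
        up = upsample(img)
        noise = rng.normal(scale=0.1, size=up.shape)
        img = up + toy_generator(up + noise)
    return img

sample = singan_sample()
print(sample.shape)  # (17, 17): the output grows with each scale
```

Because only patch statistics are learned per scale, starting from a larger coarsest noise map yields samples of arbitrary dimensions, which is how the paper produces outputs larger than the training image.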

In this study, the authors explained how SinGAN can be used within a simple unified learning framework to solve a variety of image manipulation tasks, including paint-to-image, editing, harmonization, super-resolution, and animation from a single image. Furthermore, all these tasks are achieved with the same generative network, without any additional information or further training beyond the original training image.

Read the paper here.

2| Best Student Paper Award

“PLMP – Point-Line Minimal Problems in Complete Multi-View Visibility” by Timothy Duff, Kathlén Kohn, Anton Leykin, Tomas Pajdla

Minimal problems play an important role in 3D reconstruction, image matching, visual odometry, and visual localisation. In this study, the researchers from Georgia Tech, KTH, and CIIRC, CTU in Prague presented a step towards a complete characterization of all minimal problems for points, lines and incidences in calibrated multi-view geometry. 

They proposed a complete classification of minimal problems for generic arrangements of points and lines, including their incidences, completely observed by any number of calibrated perspective cameras.
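A necessary condition behind such a classification is a degree-of-freedom balance: a problem can only be minimal if its unknowns and constraints match exactly. The sketch below checks that balance for calibrated cameras with complete visibility; it is a simplified necessary condition only, not the paper's full classification (which also handles incidences and genericity).

```python
def balance(cameras, points, lines=0):
    """Degree-of-freedom balance for a point-line problem with complete visibility.

    Unknowns: 6 per calibrated camera pose, minus 7 for the global similarity
    gauge (rotation + translation + scale), plus 3 per 3D point and 4 per 3D line.
    Constraints: each camera's observation of a point or line gives 2 equations.
    A minimal problem must have balance == 0 (necessary, not sufficient).
    """
    unknowns = 6 * cameras - 7 + 3 * points + 4 * lines
    constraints = 2 * cameras * (points + lines)
    return unknowns - constraints

# The classic calibrated 5-point relative-pose problem is balanced:
print(balance(cameras=2, points=5))  # 0

# Four points in three calibrated views are over-constrained:
print(balance(cameras=3, points=4))  # -1
```

The paper enumerates all balanced generic point-line arrangements of this kind and proves which ones are genuinely minimal.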

Read the paper here.

3| Best Paper Honorable Mentions

“Asynchronous Single-Photon 3D Imaging” by Anant Gupta, Atul Ingle, Mohit Gupta

The researchers at the University of Wisconsin-Madison proposed asynchronous single-photon 3D imaging, a family of acquisition schemes that mitigates issues like pile-up during data acquisition. They developed a generalised image formation model and performed theoretical analysis to explore the space of asynchronous acquisition schemes and design high-performance ones. However, the model is limited to a pixel-wise depth estimator that uses the MLE of the photon flux waveform.
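To see what a pixel-wise flux-based depth estimate looks like in practice, the textbook sketch below applies the classical Coates pile-up correction to a single-photon histogram and takes the peak of the recovered flux as the depth estimate. This is a generic illustration of the estimation step, not the paper's acquisition schemes; the histogram and parameters are made up.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def mle_depth(hist, num_cycles, bin_size_s):
    """Pixel-wise depth from a single-photon timing histogram.

    Applies the classical Coates correction for pile-up to recover the
    photon flux waveform, then takes the argmax bin as the depth estimate.
    """
    hist = np.asarray(hist, dtype=float)
    # Cycles still "available" at each bin: those not terminated by an
    # earlier photon detection in the same cycle.
    denom = num_cycles - np.concatenate(([0.0], np.cumsum(hist)[:-1]))
    flux = -np.log(np.clip(1.0 - hist / np.maximum(denom, 1.0), 1e-12, 1.0))
    peak_bin = int(np.argmax(flux))
    return peak_bin * bin_size_s * C / 2.0  # round-trip time -> distance

# Toy histogram: the true return sits in bin 20 of 100, over background.
hist = np.zeros(100)
hist[20] = 400          # signal photons
hist += 5               # background photons in every bin
print(mle_depth(hist, num_cycles=1000, bin_size_s=100e-12))  # ~0.3 m
```

Without the correction, strong ambient light skews detections toward early bins (the pile-up distortion the paper's asynchronous schemes are designed to avoid at acquisition time rather than in post-processing).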

Read the paper here.

“Specifying Object Attributes and Relations in Interactive Scene Generation” by Oron Ashual, Lior Wolf

In this study, the researchers from Tel Aviv University and Facebook AI Research introduced an image generation tool in which the input consists of a scene graph with the potential addition of location information. 

According to the researchers, the method employs a dual encoding for each object in the image where the first part encodes the object’s placement and captures a relative position and other global image features, as they relate to the specific object. The second part encodes the appearance of the object and can be replaced, for instance, by importing it from the same object as it appears in another image, without directly changing the other objects in the image.
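The key property of this dual encoding is that the two parts are independent, so swapping an object's appearance leaves its placement (and every other object) untouched. The sketch below illustrates that separation with random vectors as hypothetical stand-ins for the paper's learned placement and appearance networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_object(location, appearance_code):
    """Dual per-object encoding: a placement part plus an appearance part.

    In the paper both parts come from learned networks; here `location`
    (x, y, scale) and `appearance_code` are simple stand-ins.
    """
    placement = np.asarray(location, dtype=float)
    return np.concatenate([placement, appearance_code])

# Two objects in a scene, each with its own appearance vector.
dog = encode_object((0.2, 0.6, 0.3), rng.normal(size=8))
car = encode_object((0.7, 0.5, 0.4), rng.normal(size=8))

# "Import" the dog's appearance from another image: only its appearance
# slice changes; its placement and the car's encoding are untouched.
new_dog = encode_object((0.2, 0.6, 0.3), rng.normal(size=8))
assert np.allclose(new_dog[:3], dog[:3])        # same placement
assert not np.allclose(new_dog[3:], dog[3:])    # new appearance
```

This per-object factorisation is what makes the tool interactive: a user can move, duplicate, or restyle one object without regenerating the rest of the scene from scratch.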

Read the paper here.

Ambika Choudhury

A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.