
Interesting AI Papers Submitted at ICLR 2023

In a recent announcement, ICLR 2023 confirmed its submission dates and marked January 20, 2023 as the final decision date.


The International Conference on Learning Representations (ICLR) is one of the largest AI conferences held annually, with 2023 marking its eleventh edition. In a recent announcement, ICLR 2023 confirmed its submission dates and marked January 20, 2023 as the final decision date.

Here are a few papers from the recent ICLR 2023 submission release: 

DreamFusion: Text-to-3D using 2D Diffusion

Diffusion models trained on billions of image-text pairs have driven recent breakthroughs in text-to-image synthesis. Extending this approach to 3D synthesis would normally require large-scale datasets of labelled 3D assets and efficient architectures for denoising 3D data, neither of which currently exists. This work sidesteps both requirements by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis. The proposed method, DreamFusion, lifts text-to-image models to 3D by optimising a Neural Radiance Field (NeRF) so that its renderings from random viewpoints score well under the frozen diffusion model for the given prompt, eliminating the need for datasets of 3D objects and labels. The approach requires no 3D training data and no changes to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.
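At its core, DreamFusion repeats a simple loop: render the NeRF from a random camera, add noise to the rendering, ask the frozen diffusion model to predict that noise, and nudge the NeRF parameters in the direction of the difference (score distillation). The sketch below is a minimal, self-contained illustration of that loop with toy stand-ins: the learnable "rendering" tensor and the small conv net are placeholders, not the paper's models, and text conditioning is omitted entirely.

import torch

# Stand-ins, not the real models: a learnable 64x64 RGB "rendering" in place of a
# differentiable NeRF renderer, and a frozen, randomly initialised conv net in place
# of a pretrained text-conditioned diffusion model's noise predictor.
rendering = torch.nn.Parameter(torch.rand(1, 3, 64, 64))
eps_model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.SiLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
).requires_grad_(False)

optimizer = torch.optim.Adam([rendering], lr=1e-2)
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)   # toy noise schedule

for step in range(200):
    t = torch.randint(20, 980, (1,))
    a_t = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(rendering)
    # Forward-diffuse the current rendering to timestep t.
    noisy = a_t.sqrt() * rendering.detach() + (1 - a_t).sqrt() * noise
    eps_pred = eps_model(noisy)                       # frozen model's guess at the noise
    # Score distillation: the update direction is w(t) * (eps_pred - noise), applied
    # to the scene parameters without backpropagating through the diffusion model.
    sds_grad = (1 - a_t) * (eps_pred - noise)
    optimizer.zero_grad()
    rendering.grad = sds_grad                         # dx/dtheta is the identity for this stand-in
    optimizer.step()

In the real method the gradient flows through the NeRF's rendering function rather than landing directly on an image tensor, but the structure of the update is the same.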

Read the full paper here.

Quantum Reinforcement Learning 

The paper presents a vision for intelligent quantum cloud computing in the financial system. It applies quantum reinforcement learning to fraud detection in financial services, combining learning methods with the aim of reducing the risk of fraud. The research also explores simulating financial trading systems and building financial forecasting models, offering promising prospects for managing portfolio risk and for deploying algorithms that analyse large-scale data effectively in real time.
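The paper describes quantum reinforcement learning for fraud detection at a high level and does not spell out a specific circuit here, so the toy sketch below only illustrates the general idea: a small variational quantum circuit, simulated classically with NumPy, acts as the policy in a contextual-bandit version of transaction flagging and is trained with a REINFORCE-style update. The circuit layout, feature encoding, and reward are illustrative assumptions, not the paper's method.

import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
Z0 = np.diag([1.0, 1.0, -1.0, -1.0])   # Pauli-Z on qubit 0 in the 2-qubit basis

def flag_probability(x, theta):
    """Probability of flagging a transaction, from a 2-qubit variational circuit.

    x: two features scaled to [0, pi]; theta: two trainable rotation angles.
    """
    state = np.zeros(4)
    state[0] = 1.0                                        # |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state           # angle-encode the features
    state = np.kron(ry(theta[0]), ry(theta[1])) @ state   # variational layer
    state = CNOT @ state                                  # entangle the qubits
    exp_z0 = state @ (Z0 @ state)                         # <Z> on qubit 0, in [-1, 1]
    return float(np.clip((1.0 - exp_z0) / 2.0, 1e-6, 1 - 1e-6))

def sample_transaction():
    """Toy data: large amount plus high velocity is labelled as fraud."""
    x = rng.uniform(0, np.pi, size=2)
    return x, int(x.sum() > np.pi)

theta, lr, eps = rng.uniform(0, np.pi, size=2), 0.1, 1e-4
for step in range(2000):
    x, label = sample_transaction()
    p = flag_probability(x, theta)
    action = int(rng.random() < p)                        # stochastic policy
    reward = 1.0 if action == label else -1.0
    # REINFORCE update: reward * grad log pi(action | x); the gradient of p is taken
    # by finite differences for simplicity (a parameter-shift rule would also work).
    grad_p = np.array([(flag_probability(x, theta + eps * e) - p) / eps
                       for e in np.eye(2)])
    grad_logpi = grad_p * (1.0 / p if action == 1 else -1.0 / (1.0 - p))
    theta += lr * reward * grad_logpi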

Read the full paper here.

Quantifying and Mitigating the Impact of Label Errors on Model Disparity Metrics

In this paper, the authors study the impact of label errors on model group-based disparity metrics. In particular, they empirically characterise how varying levels of label error in training and test data affect these metrics, focusing on group calibration. They also run empirical sensitivity tests that measure the corresponding change in the disparity metric. Results suggest that real-world label errors are less pernicious to model learning dynamics than synthetic label flipping. The authors further propose an approach for identifying the training inputs whose label correction most improves a model's disparity metric, and find it performs 10–40% better than alternative methods across a variety of datasets. Overall, this work shows how surfacing training inputs for correction can improve a model's group-based disparity metrics.
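As a rough picture of what such a sensitivity test looks like, the toy sketch below builds a synthetic test set with a roughly calibrated model, flips a growing fraction of test labels in one group, and tracks how the measured calibration disparity moves. It is an illustrative assumption of the setup, not the authors' experimental code.

import numpy as np

rng = np.random.default_rng(0)

def group_ece(probs, labels, n_bins=10):
    """Expected calibration error for one group: |mean prediction - accuracy| per bin."""
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece

# Synthetic test set: model scores, binary labels, and a binary group attribute.
n = 20_000
group = rng.integers(0, 2, size=n)
probs = rng.beta(2, 2, size=n)
labels = (rng.random(n) < probs).astype(float)        # a roughly calibrated model

base = abs(group_ece(probs[group == 0], labels[group == 0])
           - group_ece(probs[group == 1], labels[group == 1]))

# Sensitivity test: flip a growing fraction of test labels in group 1 and
# track how the measured calibration disparity responds.
for flip_rate in [0.0, 0.05, 0.10, 0.20]:
    noisy = labels.copy()
    idx = np.flatnonzero(group == 1)
    flips = rng.choice(idx, size=int(flip_rate * len(idx)), replace=False)
    noisy[flips] = 1.0 - noisy[flips]
    disparity = abs(group_ece(probs[group == 0], noisy[group == 0])
                    - group_ece(probs[group == 1], noisy[group == 1]))
    print(f"flip rate {flip_rate:.2f}: calibration disparity {disparity:.4f} "
          f"(clean baseline {base:.4f})")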

Read the full paper here.

Suppression Helps: Lateral Inhibition-inspired Convolutional Neural Network For Image Classification

This paper proposes a lateral inhibition-inspired design for convolutional neural networks applied to image classification. The design works with both plain convolutions and convolutional blocks with residual connections, and remains compatible with existing modules. The researchers explore inhibition along the filter dimension in the lateral direction, and the lateral inhibition-inspired (LI) design incorporates a low-pass filter to model inhibition decay. Their results on the ImageNet classification dataset show accuracy improvements with only a minor increase in parameters. These results demonstrate the advantage of the design and should encourage researchers to further consider the value of feature learning for image classification.
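The paper's exact formulation is not reproduced here, but the idea of suppressing each channel by the low-pass-filtered activity of its neighbours along the filter dimension can be sketched as a small PyTorch module. The triangular kernel and the subtraction-plus-ReLU form below are illustrative choices, not the authors' design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LateralInhibition(nn.Module):
    """Toy lateral inhibition block: each channel is suppressed by the
    low-pass-filtered activity of its neighbouring channels, with the fixed
    filter weights acting as the inhibition-decay profile."""

    def __init__(self, kernel_size: int = 5, strength: float = 0.1):
        super().__init__()
        half = kernel_size // 2
        # Fixed triangular low-pass kernel over the channel axis (decays with distance).
        weights = torch.tensor(
            [half + 1 - abs(i - half) for i in range(kernel_size)], dtype=torch.float32
        )
        self.register_buffer("kernel", (weights / weights.sum()).view(1, 1, kernel_size))
        self.strength = strength
        self.padding = half

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Treat the channel axis as a 1-D signal at every spatial location.
        flat = x.permute(0, 2, 3, 1).reshape(-1, 1, c)
        inhibition = F.conv1d(flat, self.kernel, padding=self.padding)
        inhibition = inhibition.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return F.relu(x - self.strength * inhibition)

# Dropping the block in after a plain convolution, as one might within a conv stage:
layer = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), LateralInhibition())
out = layer(torch.randn(2, 3, 32, 32))   # -> shape (2, 64, 32, 32)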

Read the full paper here. 

Towards Robust Online Dialogue Response Generation

This paper aims to improve online dialogue response generation by proposing hierarchical sampling-based methods that reduce the discrepancy between training and real-world testing. It targets the problem of chatbots generating uneven responses in real-world applications, mainly in multi-turn settings. To address this, the work adopts reinforcement learning and re-ranking methods to optimise dialogue coherence during training and inference. The researchers also run experiments showing the usefulness of the method for generating robust online responses in bot and self-talk conversations. Essentially, this research reduces the online dialogue discrepancy while implicitly enhancing dialogue coherence.
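To show where re-ranking sits at inference time, the sketch below takes several candidate replies sampled from a generator and keeps the one a coherence scorer prefers. The lexical-overlap scorer is a deliberately crude stand-in for the learned coherence model the paper trains, and the example dialogue is invented.

import numpy as np

def bow_vector(text: str, vocab: dict) -> np.ndarray:
    """Bag-of-words vector over a shared vocabulary (stand-in for a real coherence model)."""
    vec = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    return vec

def coherence_score(history: str, candidate: str) -> float:
    """Toy coherence score: cosine similarity between history and candidate."""
    vocab = {tok: i for i, tok in enumerate(set((history + " " + candidate).lower().split()))}
    h, c = bow_vector(history, vocab), bow_vector(candidate, vocab)
    denom = np.linalg.norm(h) * np.linalg.norm(c)
    return float(h @ c / denom) if denom > 0 else 0.0

def rerank(history: str, candidates: list) -> str:
    """Keep the sampled candidate the coherence scorer prefers."""
    return max(candidates, key=lambda cand: coherence_score(history, cand))

history = "I just moved to Bangalore for a new job. The traffic is intense."
candidates = [                       # e.g. several sampled decodes from a generator
    "That sounds exciting! How is the new job going so far?",
    "I like pizza.",
    "Traffic in a new city takes some getting used to. How is the job itself?",
]
print(rerank(history, candidates))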

Read the full paper here. 

FARE: Provably Fair Representation Learning

Fair representation learning (FRL) aims to produce fair classifiers via data preprocessing, but prior methods offer no provable guarantees and can land on worse accuracy-fairness tradeoffs. This work introduces FARE (Fairness with Restricted Encoders), the first FRL method with provable fairness guarantees. By restricting the encoder's output, FARE produces tight upper bounds on the unfairness of downstream classifiers across several datasets while simultaneously delivering practical fairness-accuracy tradeoffs.
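The sketch below illustrates why a restricted encoder makes fairness provable: if the encoder maps every input into one of a few discrete cells (here via an ordinary decision tree, whereas FARE uses fairness-aware splitting), then any downstream classifier is constant on each cell, and its demographic-parity gap is bounded by the total-variation distance between the two groups' cell distributions. This is an empirical illustration of the principle on synthetic data, not the paper's finite-sample certificate.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic tabular data: features x, label y, binary sensitive attribute s.
n = 10_000
s = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 5)) + 0.5 * s[:, None]
y = (x[:, 0] + x[:, 1] + 0.3 * rng.normal(size=n) > 0.5).astype(int)

# Restricted encoder: a shallow tree maps every input to one of a small,
# finite set of cells (leaf ids serve as the representation).
tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0).fit(x, y)
cells = tree.apply(x)

# Any downstream classifier over this representation is constant on each cell,
# so its demographic-parity gap is bounded by the total-variation distance
# between the two groups' cell distributions.
cell_ids = np.unique(cells)
p0 = np.array([(cells[s == 0] == c).mean() for c in cell_ids])
p1 = np.array([(cells[s == 1] == c).mean() for c in cell_ids])
fairness_upper_bound = 0.5 * np.abs(p0 - p1).sum()

# Accuracy of the simplest downstream classifier: majority label per cell.
cell_majority = {c: int(y[cells == c].mean() >= 0.5) for c in cell_ids}
accuracy = np.mean([cell_majority[c] == yi for c, yi in zip(cells, y)])
print(f"TV upper bound on demographic-parity gap: {fairness_upper_bound:.3f}, "
      f"cell-majority accuracy: {accuracy:.3f}")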

Read the full paper here. 

Towards a Complete Theory of Neural Networks with Few Neurons

This work studies the loss landscape of neural networks with few neurons. The authors examine the dynamics of overparameterised networks by proving that a student network with one neuron has only one critical point, its global minimum, when learning from a teacher network with several neurons. They further prove that a neuron-addition mechanism turns this minimum into a line of critical points whose stability transitions from saddles to local minima via non-strict saddles. The researchers discuss how the insights gleaned from their novel proof techniques are likely to shed further light on the dynamics of neural networks with few neurons.
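In the standard teacher-student setting that this line of work analyses, the one-neuron claim can be written roughly as follows; the Gaussian-input, fixed-teacher setup below is the conventional one and its details are an assumption rather than taken from the paper:

\[
L(a, w) \;=\; \mathbb{E}_{x \sim \mathcal{N}(0, I_d)}\Big[\Big(a\,\sigma(w^\top x) \;-\; \sum_{k=1}^{K} a_k^{*}\,\sigma\big(w_k^{*\top} x\big)\Big)^{2}\Big],
\]

where the teacher has $K$ fixed neurons $(a_k^{*}, w_k^{*})$ and the student a single trainable neuron $(a, w)$. The result discussed above says that, under suitable assumptions on the activation $\sigma$, this loss has a unique critical point, namely its global minimum, and that splitting the student neuron into two copies turns that minimum into a line of critical points.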

Read the full paper here. 


Nidhi Bhardwaj

Nidhi is a Technology Journalist at Analytics India Magazine who takes a keen interest in covering trending updates from the world of AI, machine learning, data science and more.