The last decade has been exciting for artificial intelligence. It has gone from esoteric to mainstream very quickly, thanks to ingenious work by researchers, the democratisation of technologies by top companies and, of course, enhanced hardware.
In this article, we outline a few potential research areas that we want, or at least hope, to see more of in the year 2020.
Explainable AI (XAI)

In the chart above, one can see how the number of publications with XAI as a keyword has risen over the past five years.
Explainable AI refers to methods and techniques in the application of artificial intelligence such that the results of the solution can be understood by human experts.
This growing research interest reflects an increasing need for transparency in machine learning systems.
- Loss Change Allocation by Uber
Uber's researchers call this method loss change allocation (LCA). LCA allocates changes in loss across individual parameters, thereby measuring how much each parameter learns.
- Questioning the AI by IBM
By interviewing 20 UX and design practitioners working on various AI products, IBM researchers tried to identify gaps between current XAI algorithmic work and the practices needed to create explainable AI products.
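To make the LCA idea above concrete, here is a minimal first-order sketch, not Uber's actual implementation: the loss change of one optimisation step is approximated as the gradient times the parameter update, giving each parameter a per-step "credit". All names and values below are illustrative.

```python
import numpy as np

def lca_step(grad, theta_before, theta_after):
    """First-order allocation of one step's loss change to each parameter."""
    return grad * (theta_after - theta_before)

# Toy example: quadratic loss L(theta) = 0.5 * ||theta||^2, one SGD step.
theta = np.array([1.0, -2.0, 0.5])
grad = theta.copy()                  # dL/dtheta for this particular loss
lr = 0.1
theta_new = theta - lr * grad

allocation = lca_step(grad, theta, theta_new)
print(allocation)        # negative entries: parameters that helped reduce loss
print(allocation.sum())  # approximates the total loss change of the step
```

Summing the per-parameter allocations recovers (to first order) the overall loss change, which is what lets LCA act as a per-parameter learning measure.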
Causal Inference

Causality is the degree to which one can rule out plausible alternative explanations. By defining causality in a system, one can ask, and even answer, why a model does or does not need a certain feature.
Researchers like Judea Pearl argue that machine learning has achieved remarkable success while paying little attention to fundamental theoretical impediments.
Here are a few interesting research efforts addressing causal inference in machine learning:
- Causality for Machine Learning by Bernhard Schölkopf explains how the field is beginning to engage with causal concepts.
- DeepMind’s Causal Bayesian Networks demonstrated the use of causal Bayesian networks (CBNs) to visualise and quantify the degree of unfairness in a dataset.
- Adversarial Learning of Causal Graphs aims at recovering full causal models from continuous observational data in a multivariate non-parametric setting.
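A toy simulation can illustrate why interventional and observational quantities differ, which is the kind of distinction causal Bayesian networks make precise. The structural model below (a confounder Z driving both X and Y) is entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structural model with a confounder Z -> X and Z -> Y:
# X follows Z most of the time, and Y is caused by Z alone, never by X.
z = rng.random(n) < 0.5
x = np.where(rng.random(n) < 0.9, z, ~z)   # X = Z with probability 0.9
y = np.where(rng.random(n) < 0.8, z, ~z)   # Y = Z with probability 0.8

# Observational quantity P(Y=1 | X=1): inflated by the confounder Z.
p_obs = y[x].mean()

# Interventional quantity P(Y=1 | do(X=1)): forcing X cuts the Z -> X edge,
# and since Y does not depend on X, the answer is simply P(Y=1).
p_do = y.mean()

print(round(p_obs, 2), round(p_do, 2))   # roughly 0.74 vs 0.50
```

A purely correlational model would report the inflated observational value; asking the interventional question is what reveals that X has no causal effect on Y here.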
Meta-Learning

The concept of meta-learning dates back at least three decades. It was popularised by AI pioneer Juergen Schmidhuber and formed part of his 1987 diploma thesis. Today, it is one of the most talked-about concepts in the machine learning community, and 2020 looks like a promising year for more research in this domain. In short, meta-learning, as Schmidhuber defines it, means learning the credit assignment method itself, for instance through self-modifying code.
A recent work on visual concept meta-learning by MIT is one such example: the researchers successfully categorised objects with multiple combinations of visual attributes using limited training data, and the model could even predict relations between unseen pairs of concepts. The work presented a systematic evaluation on both synthetic and real-world images, with a focus on learning efficiency and strong generalisation.
Meta-learning is one of the hottest areas to watch in 2020, as researchers race to endow machines with intelligence for tasks that are trivial for humans.
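As a concrete, if simplified, illustration of meta-learning, here is a sketch of the Reptile update rule (a first-order meta-learning algorithm, not Schmidhuber's self-referential scheme) on a hypothetical family of one-dimensional regression tasks. Meta-parameters are repeatedly nudged toward the weights found after a few task-specific SGD steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Hypothetical task family: regress y = a * x for a random slope a."""
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def inner_sgd(w, x, y, lr=0.1, steps=5):
    """A few task-specific SGD steps on mean squared error."""
    for _ in range(steps):
        w -= lr * 2 * np.mean((w * x - y) * x)
    return w

w_meta = 0.0
for _ in range(200):                       # meta-training loop
    x, y = sample_task()
    w_task = inner_sgd(w_meta, x, y)       # adapt to one task
    w_meta += 0.1 * (w_task - w_meta)      # Reptile: move toward adapted weights

# After meta-training, a handful of inner steps adapts quickly to a new task.
x, y = sample_task()
w_new = inner_sgd(w_meta, x, y)
print(np.mean((w_new * x - y) ** 2) < np.mean((w_meta * x - y) ** 2))
```

The same inner-loop/outer-loop structure, with neural networks in place of the single weight, underlies MAML-style few-shot learning.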
Federated Learning

Decentralising the learning process can make machine learning algorithms more robust and faster. Federated learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud.
For instance, Google uses a similar approach for digital zooming on its flagship phones: state-of-the-art algorithms reconstruct detail in the image, and federated learning pushes the boundary further by sharing what is learned across devices while preserving anonymity.
A few exciting recent works on federated learning:
- Efficient Federated Learning on Edge Devices
A challenge in federated learning is that edge devices usually have much lower computational power and communication bandwidth than server machines in data centres. To overcome this challenge, this paper proposes a method that integrates model pruning with federated learning.
- Exploiting Unlabeled Data in Smart Cities using Federated Learning
This work introduces a semi-supervised federated learning method called FedSem that exploits unlabeled data.
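The core federated averaging loop can be sketched in a few lines. This is a minimal toy, assuming a hypothetical linear-regression task and simulated "devices"; real systems add secure aggregation, client sampling, compression and much more.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, x, y, lr=0.05, epochs=5):
    """A few local SGD steps of linear regression on one device's data."""
    for _ in range(epochs):
        grad = 2 * (x.T @ (x @ w - y)) / len(y)
        w = w - lr * grad
    return w

# Hypothetical setup: four devices, each holding private samples of y = 3 * x.
devices = []
for _ in range(4):
    x = rng.normal(size=(32, 1))
    devices.append((x, 3.0 * x[:, 0]))

w_global = np.zeros(1)
for _ in range(30):                                   # communication rounds
    local = [local_train(w_global, x, y) for x, y in devices]
    w_global = np.mean(local, axis=0)                 # server averages weights

print(w_global)   # approaches [3.], yet no raw data ever left a "device"
```

Only model weights cross the network in each round; the per-device datasets stay local, which is the privacy property federated learning is built around.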
Reinforcement Learning

Reinforcement learning occupies a vast section of AI. Much of the most prominent news from the AI community has centred on research into agents, reward systems and ways of making machines teach themselves.
Here are a few interesting works that indicate there is more to come in the near future:
- Automating Reward Design
In an attempt to automate reward design, the robotics team at Google introduced AutoRL, which automates RL reward design by applying evolutionary optimisation over a given objective.
- Reward Tampering
Reward tampering is any agent behaviour that, instead of changing reality to match the objective, changes the objective to match reality.
In an attempt to acknowledge the consequences of reward tampering and provide a solution to the same, researchers at DeepMind released a report discussing various aspects of reinforcement learning algorithms.
- Reinforcement Learning Without Rewards
This work shows that agents could use counterfactuals to develop a form of ‘empathy’ for other agents.
- Reinforcement Learning for Recommender Systems
In an attempt to make better decisions and recommendations, ML developers at Google merged reinforcement learning with recommender systems in this work.
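To ground the reward-design discussion above, here is a minimal tabular Q-learning sketch on a hypothetical five-state chain. The only design decision is the sparse terminal reward, exactly the kind of signal that systems like AutoRL try to tune automatically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical five-state chain: the agent starts on the left and is paid a
# single sparse reward of 1.0 for reaching the rightmost (terminal) state.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for _ in range(500):                # episodes
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < eps else q[s].argmax()
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0   # the entire reward design
        q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
        s = s2

print(q.argmax(axis=1))             # learned policy: go right in states 0-3
```

Change the reward line and the learned policy changes with it, which is why reward design, and its failure mode, reward tampering, attracts so much research attention.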
Neural Network Compression

The larger a neural network, the higher its computational cost, which becomes prohibitive for real-time applications such as online learning and incremental learning. Model compression aims to shrink networks without sacrificing much accuracy.
Here are a few exciting works related to compression approaches:
- Deep Neural Network Compression with Single and Multiple Level Quantisation
In this paper, the authors propose two novel network quantisation approaches: single-level network quantisation (SLQ) for high-bit quantisation and multi-level network quantisation (MLQ) for extremely low-bit quantisation.
- Efficient Neural Network Compression
In this paper, the authors propose an efficient method for obtaining the rank configuration of the whole network.
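As a simplified illustration of the quantisation idea behind such compression work (not the SLQ/MLQ algorithms themselves), here is a sketch of uniform 8-bit weight quantisation: float32 weights become int8 codes plus a single scale factor.

```python
import numpy as np

def quantise(w, bits=8):
    """Map float weights to signed integer codes plus one scale factor."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    codes = np.round(w / scale).astype(np.int8)   # assumes bits <= 8
    return codes, scale

def dequantise(codes, scale):
    return codes.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)

codes, scale = quantise(w)
w_hat = dequantise(codes, scale)

print(codes.nbytes / w.nbytes)            # 0.25: int8 codes vs float32 weights
print(np.abs(w - w_hat).max() <= scale)   # reconstruction error is bounded
```

The 4x storage saving comes at the cost of a reconstruction error of at most half a quantisation step; methods like SLQ/MLQ work to keep that error from hurting accuracy even at much lower bit widths.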
AI Ethics, Regulations And Privacy Preservation
The year 2019 witnessed the dark side of algorithms when GANs and OpenAI’s GPT-2 were used to generate fake images and fake text respectively. These applications left people wondering about the ethics of AI and opened up research into how organisations can fight its ill effects. Facebook has also launched a million-dollar deepfake detection challenge on Kaggle to thwart deepfakes.
Beyond deepfakes, algorithmic exploitation can also occur on e-commerce sites, where privacy is at stake. The race to enhance algorithms by acquiring vast amounts of data can compromise the privacy of individuals.
The European Union’s General Data Protection Regulation (GDPR), which went into effect in 2018, insists on having high-level data protection for consumers and harmonises data security regulations within the European Union.
So, this year, there is a high chance of organisations moving from voicing opinions to actually building tools that would ensure privacy without affecting the efficiency of the algorithms.
The domains listed above cover the most talked-about topics at flagship conferences and forums. However, in AI, there is always a transfer of techniques across domains. For instance, there is great potential at the convergence of causality and reinforcement learning. AutoML is another exciting avenue, which already picked up pace last year. Many tools have also been released to ease the deployment of machine learning models.
The year 2020 could set new benchmarks for AI approaches that are smarter, safer and more trustworthy.