When every element in the technological space is changing rapidly, it is inevitable that some of them become obsolete. Not just the technology, but the literature around it also loses its relevance with time. A few years ago, machine learning (ML) and artificial intelligence (AI) were terms used to fascinate people. In 2020, these words still turn heads, but people have grown used to them; they are part of everyday dinner-table conversations for many. However, there is no shortage of new terms gaining prominence in the landscape. Are you familiar with these ten new pieces of AI jargon?
In this article, we walk you through new terms of the year from the domain of AI.
AI has been a part of our daily lives for quite some time now. From recognising faces in smartphones to detecting obstacles on the highway, it makes decisions in seconds and delivers the required result. But this brings us to an important question: how does it reach a particular conclusion?
Explainable AI (XAI)

Explainable AI (XAI) is a new AI term for techniques that try to explain the black-box decisions taken by an AI system. It focuses on the steps and models an AI system uses to make decisions, and XAI solutions pinpoint the factors that significantly influence the decisions made by ML models. XAI is important because it brings transparency to how black-box decisions are taken, which lets humans trust the AI system.
Read More: Explainable AI Initiatives
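To make this concrete, here is a minimal sketch of one common XAI technique, permutation importance: shuffle one feature at a time and see how much the model's error grows. The toy data and the "black box" linear model below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# The black box truly depends on feature 0 heavily, feature 1 a little,
# and ignores feature 2 entirely.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

def model(X):
    # Pretend this is an opaque trained model we cannot inspect.
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def mse(a, b):
    return float(np.mean((a - b) ** 2))

baseline = mse(model(X), y)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
    importance.append(mse(model(Xp), y) - baseline)

print(importance)  # feature 0 dominates; feature 2 contributes nothing
```

The ranking of the three importance scores mirrors how strongly each feature drives the model's output, which is exactly the kind of transparency XAI aims for.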
Reward Tampering

In reinforcement learning, a model receives a positive or a negative reward for completing a task correctly or incorrectly, respectively. Reward tampering occurs when the model finds a way to collect a positive reward while completing the task in the wrong way. This can have an adverse impact on the pursuit of true AI and will decrease trust among users.
Read More: Reward tampering problem and solutions
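A toy illustration of the problem, with entirely hypothetical actions and reward values: an agent that simply maximises the reward it observes will prefer rewriting its own reward signal over doing the task it was actually given.

```python
def observed_reward(action):
    # Illustrative reward channel as the agent sees it.
    rewards = {
        "do_task": 1.0,   # intended reward for correct behaviour
        "tamper": 10.0,   # agent overwrites its own reward signal
        "idle": 0.0,
    }
    return rewards[action]

# A naive reward-maximiser picks whichever action scores highest.
chosen = max(["do_task", "tamper", "idle"], key=observed_reward)
print(chosen)
```

Because "tamper" yields the largest observed reward, the naive maximiser chooses it, even though that defeats the purpose of the task.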
Adversarial Attacks

Adversarial attacks are deliberately crafted inputs to a machine learning model, often created by researchers to evaluate ML models. It has been noticed that AI models often fail to deliver the desired results if the inputs are changed slightly, in a carefully chosen way. Adversarial attacks have become a go-to approach for benchmarking the robustness of AI models, and they can be crafted for different types of data such as images, graphs and text.
Read More: Monitoring adversarial attacks on the model
Federated Learning

Federated Learning is a new framework for AI model development. It allows models to be trained across many devices, such as mobile phones, each learning from data stored locally at a different site. This helps various organisations work together on models without the need to directly share each other's data.
Read More: Federated Learning
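The core loop can be sketched as federated averaging: each client takes a few gradient steps on its own data, and a server averages only the resulting weights, never the raw data. The linear-regression task and client datasets below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0])

def make_client(n):
    # Each client holds its own private dataset; it never leaves the device.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=n)
    return X, y

clients = [make_client(n) for n in (40, 60, 100)]
w = np.zeros(2)                              # global model

for _ in range(50):                          # communication rounds
    local_ws, sizes = [], []
    for X, y in clients:
        lw = w.copy()
        for _ in range(5):                   # local gradient steps
            grad = 2 * X.T @ (X @ lw - y) / len(y)
            lw -= 0.05 * grad
        local_ws.append(lw)
        sizes.append(len(y))
    # The server averages weights, weighted by local dataset size.
    w = np.average(local_ws, axis=0, weights=sizes)

print(np.round(w, 2))  # close to the true weights
```

Only model weights cross the network in each round, which is what lets organisations collaborate without pooling their data banks.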
Meta-Learning

In a nutshell, meta-learning is the use of machine learning to improve machine learning itself, which allows a user to find model-agnostic solutions. It refers to designing models that can learn new skills or automatically adapt to new and different environments within a finite number of training sessions. A meta-learner can also learn a model's hyperparameters, such as the decay rate or the number of hidden neurons.
Read More: Make Meta-Learning more effective
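One crude way to see the "learning to learn" idea is to learn a hyperparameter across a whole family of tasks instead of tuning it per task. The sketch below picks the learning rate that works best over several small regression tasks; it is a simplified illustration of the spirit of meta-learning, not a full algorithm such as MAML, and every task and value in it is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_task():
    # A tiny regression task drawn from a family of related tasks.
    w = rng.normal(size=2)
    X = rng.normal(size=(30, 2))
    return X, X @ w

def train_loss(X, y, lr, steps=20):
    # Train a fresh model on one task with the given learning rate.
    w = np.zeros(2)
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return float(np.mean((X @ w - y) ** 2))

tasks = [make_task() for _ in range(10)]
candidate_lrs = [0.001, 0.01, 0.1, 1.0]
# "Meta-level" choice: which learning rate generalises across all tasks?
avg_loss = {lr: float(np.mean([train_loss(X, y, lr) for X, y in tasks]))
            for lr in candidate_lrs}
best_lr = min(avg_loss, key=avg_loss.get)
print(best_lr)
```

The outer loop learns something about the learning process itself (a good learning rate for this task family), rather than about any single task.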
Dark Patterns

Dark patterns are a typical case of algorithmic exploitation, where malicious players use tricks on websites to push a user into doing something they never intended, for example, buying dog food even though they have no pets. Dark patterns are carefully designed and deployed with a solid understanding of human psychology, while setting aside the interests of the user.
Read More: Dark Patterns
Reproducibility

Reproducibility means that someone who follows a documented procedure can achieve the same result as the original work. Reproducibility in science is essential, as it establishes the reliability of methods and techniques for wider usage. Independent replication leads to more dependable solutions, and in AI, with hundreds of papers released every day, it is crucial to keep track of their efficacy.
Read More: Reproducibility in AI
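A small prerequisite for reproducible ML experiments is fixing every source of randomness, so that the listed procedure yields the same numbers on every run. A minimal sketch, with an illustrative experiment:

```python
import random

import numpy as np

def run_experiment(seed=42):
    random.seed(seed)                  # Python's built-in RNG
    rng = np.random.default_rng(seed)  # NumPy's RNG
    # (Real code would also seed the ML framework, e.g. torch or tf,
    # and pin library versions and hardware settings.)
    data = rng.normal(size=100)
    return float(np.mean(data))

first, second = run_experiment(), run_experiment()
print(first == second)  # identical results on repeated runs
```

Seeding alone does not guarantee reproducibility across machines or library versions, but without it even a single machine cannot replicate its own runs.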
Temporal Cycle Consistency
Temporal Cycle Consistency is a self-supervised learning method used to identify the similarities between two videos when labelled data is non-existent. It was introduced by Google to understand similar sequential processes, since using supervised learning to understand the individual frames of a video is expensive: it requires annotators to apply a fine-grained label to each frame, which is time-consuming.
Read More: Temporal Cycle Consistency
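The cycle-consistency check at the heart of the method can be sketched as follows: frame i of video A is matched to its nearest-neighbour frame in video B, which is matched back to A; the cycle is consistent if it returns to frame i. The per-frame embeddings below are tiny illustrative vectors, not real learned embeddings.

```python
import numpy as np

def nearest(query, frames):
    # Index of the frame embedding closest to the query.
    return int(np.argmin(np.linalg.norm(frames - query, axis=1)))

# Two "videos" of the same action, embedded frame by frame.
video_a = np.array([[0.0], [1.0], [2.0], [3.0]])
video_b = np.array([[0.1], [0.9], [2.1], [2.9]])

consistent = 0
for i, frame in enumerate(video_a):
    j = nearest(frame, video_b)            # A -> B
    i_back = nearest(video_b[j], video_a)  # B -> A
    consistent += (i_back == i)

print(consistent, "of", len(video_a), "frames are cycle-consistent")
```

Training then adjusts the embeddings so that as many frames as possible complete this cycle, which aligns the two videos without any labels.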
Causal Inference

The role of causality in machines has gained a lot of attention in recent years. With pioneers like Judea Pearl pressing for causality-based systems, there has been a newfound interest in establishing learning based on causal inferences and influences. In short, it means making a machine understand the why, what and how of every activity it performs, learn from them and then improve.
Read More: Causal Inference
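A quick sketch of why this matters, using synthetic variables: a hidden confounder Z drives both X and Y, so X and Y look strongly correlated, yet intervening on X (setting it independently of Z, in the spirit of Pearl's do-operator) leaves Y unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

Z = rng.normal(size=n)              # hidden common cause
X = Z + 0.1 * rng.normal(size=n)    # X tracks Z
Y = Z + 0.1 * rng.normal(size=n)    # Y is driven by Z, not by X

corr = float(np.corrcoef(X, Y)[0, 1])

# Intervention do(X := x): choose X independently of Z.
X_do = rng.normal(size=n)
Y_do = Z + 0.1 * rng.normal(size=n)  # Y's mechanism never looked at X

corr_do = float(np.corrcoef(X_do, Y_do)[0, 1])
print(round(corr, 2), round(corr_do, 2))
```

The observational correlation is high while the interventional one is near zero: a purely pattern-matching model would wrongly conclude that changing X changes Y, which is the gap causal inference tries to close.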
Neural Network Compression
Neural networks require a lot of memory to store, which makes it crucial to compress them. Compression is often done by altering the weight matrices, and typically goes through three stages: pruning, quantization and Huffman encoding. There are now a decent number of compression techniques that make deploying ML models easier than before.
Read More: Neural Network Compression
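Two of the three stages mentioned above, pruning and quantization, can be sketched on a single illustrative weight matrix (Huffman encoding of the quantized values is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(64, 64)).astype(np.float32)  # a toy weight matrix

# Pruning: zero out the weights with the smallest magnitudes
# (here, the bottom 70%).
threshold = np.quantile(np.abs(W), 0.7)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Quantization: map the surviving float32 weights to 8-bit integers.
scale = np.abs(W_pruned).max() / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)

sparsity = float(np.mean(W_q == 0))
print(f"sparsity: {sparsity:.2f}, dtype: {W_q.dtype}")
```

The result is a mostly zero int8 matrix: the zeros compress extremely well (which is where an entropy coder such as Huffman encoding comes in), and each remaining weight needs a quarter of the original 32 bits.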