
5 Key Attributes To Make ML Work In The Wild


The biggest concern for any machine learning developer is figuring out whether their models work outside the lab, in the real world and in the wild. The emergence of pragmatic ML as a domain coincides with the increasing adoption of AI. But when can a system be called pragmatic?

To build more robust learning systems, it is essential to design benchmarks for the attributes of these sub-tasks. Even before that, we have to define what these sub-tasks and their attributes are. According to researchers at the University of Washington, there are five key attributes that, when accounted for, can make machine learning work in the real world.

Here are the five desired attributes of a pragmatic machine learning system:

Ability To Learn In A Sequential Way

The researchers state that machine learning systems that learn in the wild must be capable of processing data as it appears, sequentially. The system must be able to produce inferences from the data it comes across while also updating itself sequentially. Learning in a sequential manner, the researchers write, is the core objective of continual learning and a longstanding challenge. The central obstacle is catastrophic forgetting: models drop their accuracy on old tasks when they are updated or trained to deal with new tasks.
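To make this failure mode concrete, here is a minimal, self-contained sketch of catastrophic forgetting; the two synthetic "tasks" and the scikit-learn SGDClassifier are illustrative assumptions, not the researchers' setup. A model trained on task A, then updated sequentially on a distribution-shifted task B, loses accuracy on task A:

```python
# Illustrative sketch of catastrophic forgetting (synthetic data).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(shift):
    # Two Gaussian blobs; `shift` moves both blobs, so each task
    # has a different input distribution and decision boundary.
    X0 = rng.normal(loc=-1 + shift, scale=0.5, size=(200, 2))
    X1 = rng.normal(loc=+1 + shift, scale=0.5, size=(200, 2))
    return np.vstack([X0, X1]), np.array([0] * 200 + [1] * 200)

task_a = make_task(0.0)
task_b = make_task(3.0)  # shifted distribution = the "new task"

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(*task_a, classes=[0, 1])   # learn task A first
acc_before = model.score(*task_a)

for _ in range(20):                          # sequential updates on task B only
    model.partial_fit(*task_b)
acc_after = model.score(*task_a)

print(f"Task A accuracy: {acc_before:.2f} before, {acc_after:.2f} after task B")
```

Because the sequential updates see only task B, the decision boundary drifts toward B's distribution and performance on the old task collapses, which is exactly what continual learning tries to prevent.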

In this regard, techniques such as few-shot learning and open-world learning have emerged, which the researchers call natural consequences of learning in a sequential manner. The idea is that when a system encounters something new, it should identify it as new and learn from only a handful of examples.

Being Flexible 

It goes without saying that intelligence is often associated with adaptation. In the case of machines, the closest analogy can be drawn from reinforcement learning systems, where rewards nudge the whole system into reaching the target through adaptation. Even this is not enough, as most use cases are hardcoded into the system, with the exception of successes like AlphaGo.

The researchers state that effective systems in the wild must be flexible enough to make decisions over the course of their lives about what data to train on and what to ignore. Learners should also be free to choose their learning strategies and decide when to update model parameters, in contrast to paradigms such as supervised, few-shot, and continual learning, which typically impose fixed, preset restrictions on learners.
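One way such flexibility can look in code is a learner that itself decides which incoming examples are worth training on. The confidence-threshold policy below is a hedged illustration of the idea, not the researchers' method:

```python
# Illustrative "flexible" learner: it updates only on inputs where its
# current prediction is uncertain, ignoring the rest. The threshold
# value and the policy itself are assumptions for illustration.
from sklearn.linear_model import SGDClassifier

class SelectiveLearner:
    def __init__(self, threshold=0.75):
        self.model = SGDClassifier(loss="log_loss")
        self.threshold = threshold
        self.initialized = False

    def observe(self, x, y):
        """See one (x, y) pair; return True if the learner chose to train on it."""
        x = x.reshape(1, -1)
        if not self.initialized:
            self.model.partial_fit(x, [y], classes=[0, 1])
            self.initialized = True
            return True
        confidence = self.model.predict_proba(x).max()
        if confidence < self.threshold:      # uncertain -> worth learning from
            self.model.partial_fit(x, [y])
            return True
        return False                         # confident -> skip, save compute
```

The point is that the update rule is a decision the system makes per example, rather than a fixed schedule imposed from outside.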

Accounting For The Costs

Pragmatic systems, the authors write, must also capture the efficiency of their updating strategies. Incorporating updates does not by itself make a system more efficient. They recommend measuring not just the inference cost but also the update cost. The trade-off between accuracy and lifetime compute is key to designing appropriate systems.
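A minimal sketch of what such accounting could look like is a wrapper that times every inference and every update, so accuracy can be compared against lifetime compute; the wrapped fit/predict interface is an assumption, not a prescribed API:

```python
# Illustrative lifetime-compute accounting for a model that exposes
# predict() and partial_fit() (an assumed, scikit-learn-style interface).
import time

class CostAccountant:
    def __init__(self, model):
        self.model = model
        self.inference_seconds = 0.0
        self.update_seconds = 0.0

    def predict(self, X):
        start = time.perf_counter()
        out = self.model.predict(X)
        self.inference_seconds += time.perf_counter() - start
        return out

    def update(self, X, y, **kwargs):
        start = time.perf_counter()
        self.model.partial_fit(X, y, **kwargs)
        self.update_seconds += time.perf_counter() - start

    @property
    def lifetime_compute(self):
        # Total cost of the system's life so far, not inference alone.
        return self.inference_seconds + self.update_seconds
```

Tracking both columns separately makes the trade-off explicit: a strategy that updates rarely may sacrifice accuracy, while one that updates constantly may blow the compute budget.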

Open World Learning

Every time a system encounters data, it must be capable of determining whether the data belongs to an existing category or to a new one; recognising a new category as something new is what enables pragmatism in the wild. Out-of-distribution detection is one approach that enables open-world learning, though it deals with unseen-class detection where the distribution is static. Systems in the wild must be capable of learning in an open-world setting, where the classes, and even the number of classes, are not known to the learner.
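A simple way to picture open-world classification is a prototype-based learner: an input far from every known class prototype is declared a new class. This is a hedged sketch of the general idea, with an illustrative distance threshold, not the method from the paper:

```python
# Illustrative open-world classifier: nearest-prototype matching with a
# distance threshold for declaring new classes. Threshold is an assumption.
import numpy as np

class OpenWorldClassifier:
    def __init__(self, threshold=2.0):
        self.prototypes = {}   # class id -> running mean feature vector
        self.counts = {}
        self.threshold = threshold
        self.next_class = 0

    def classify(self, x):
        if self.prototypes:
            dists = {c: np.linalg.norm(x - p) for c, p in self.prototypes.items()}
            nearest = min(dists, key=dists.get)
            if dists[nearest] < self.threshold:
                self._update(nearest, x)       # close enough: existing category
                return nearest
        new_class = self.next_class            # far from everything:
        self.next_class += 1                   # open a new category
        self.prototypes[new_class] = x.astype(float)
        self.counts[new_class] = 1
        return new_class

    def _update(self, c, x):
        # Running mean keeps the prototype current as data arrives.
        self.counts[c] += 1
        self.prototypes[c] += (x - self.prototypes[c]) / self.counts[c]
```

Note that the number of classes is never fixed in advance; the learner grows its label space as the world reveals new categories.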

Few- Or Many-Shot Learning?

An efficient system captures the essence of a whole dataset after seeing only a few examples, much as humans can recognise objects just by looking at their shadows. Few-shot learning has become quite popular as datasets have grown ever vaster. The experimental setup for few-shot learning typically consists of models that are trained on base classes during "meta-training" and then tested on novel classes in "meta-testing." The researchers argue that the n-shot way of evaluation is too restrictive, as it assumes that data distributions during meta-testing are uniform, an unrealistic assumption in practice (in the wild).

In contrast, benchmarks should evaluate methods across a spectrum of shots, as in the sketch below.
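The following sketch scores the same method at several shot counts instead of a single fixed n; the synthetic episodes and the nearest-prototype "method" are placeholders for illustration:

```python
# Illustrative spectrum-of-shots evaluation on synthetic episodes.
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(k, n_classes=5, dim=16, n_query=50):
    # Synthetic episode: each class is a Gaussian around a random mean;
    # the support set holds k labelled examples per class.
    means = rng.normal(size=(n_classes, dim))
    support = np.stack([rng.normal(m, 1.0, size=(k, dim)) for m in means])
    labels = rng.integers(n_classes, size=n_query)
    queries = rng.normal(means[labels], 1.0)
    return support, queries, labels

def prototype_accuracy(support, queries, labels):
    protos = support.mean(axis=1)                         # (n_classes, dim)
    d = np.linalg.norm(queries[:, None, :] - protos[None], axis=-1)
    return (d.argmin(axis=1) == labels).mean()

for k in [1, 5, 20, 100]:                                 # spectrum of shots
    accs = [prototype_accuracy(*sample_episode(k)) for _ in range(20)]
    print(f"{k:>3}-shot accuracy: {np.mean(accs):.2f}")
```

Reporting a curve over k, rather than one number at a fixed k, reveals whether a method that shines at 5 shots degrades gracefully, or not at all, when more or fewer examples are available.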

Now that we have listed the key attributes that can enable pragmatic ML, it is also essential to ask whether there is a unifying framework that can cater to the demands of these systems. The same researchers from the University of Washington, in collaboration with the Allen Institute for AI, introduced NED, a framework for the evaluation of ML systems. This new framework is designed to loosen the restrictive design decisions of past settings and impose fewer restrictions on learning algorithms.

Know more about NED and pragmatic ML here.


Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.