How Is Online Machine Learning (OML) Unique From Traditional Machine Learning?

When data arrives as a continuous stream, online learning is essential for real-time analysis.

Online machine learning (OML) is a type of machine learning (ML) in which data arrives sequentially and is used to update the best predictor for future data at each step. This contrasts with batch learning techniques, which generate the best predictor by learning on the full training data set at once. Compared with “conventional” machine learning solutions, online machine learning takes a fundamentally different approach, one that recognises that learning environments can (and frequently do) change from second to second. It is employed when the algorithm must adapt dynamically to new patterns in the data, or when the data is generated as a function of time.

OML is widely used in areas of machine learning where training over the complete dataset is computationally impractical, necessitating out-of-core algorithms. In its simplest form, OML ingests real-time data one observation at a time. It applies to problems in which samples arrive over time and their probability distributions are also expected to change over time; the model is therefore expected to evolve to capture and respond to such changes at a similar rate. This can be a decisive advantage in industries where real-time personalisation is critical.
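
As a concrete illustration of ingesting one observation at a time, the following stdlib-only Python sketch maintains a running mean and variance with Welford's online algorithm, updating constant-size state per sample instead of revisiting the full dataset (the `RunningStats` class is illustrative, not taken from any particular library):

```python
class RunningStats:
    """Welford's online algorithm: one observation at a time, O(1) memory."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the current mean

    def update(self, x):
        """Fold a single new observation into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Population variance of everything seen so far."""
        return self.m2 / self.n if self.n > 0 else 0.0


stats = RunningStats()
for x in [2.0, 4.0, 6.0, 8.0]:  # stand-in for a data stream
    stats.update(x)

print(stats.mean)      # 5.0
print(stats.variance)  # 5.0
```

The same pattern — small state, one `update` per arriving sample — is what full OML models generalise to learned parameters.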


Training and Complexity

In an offline ML model, the weights and parameters are updated during training in an attempt to minimise a global cost function over the training data. The model is trained and refined until it is robust enough to deploy, whether for big data processing or any other use case.

In an OML process, however, the weight change at a given step depends on the current example being presented and possibly on the model’s current state. As a result, the model is continuously exposed to fresh data and keeps learning.

Time taken

In general, training an offline ML model is faster because the dataset is passed through the model in batches to adjust the weights and parameters. However, given the magnitude of modern big data streams, feeding all of that data into an offline model can be rather time-consuming, and it may be preferable to update the model incrementally.

In OML, by contrast, the model must obtain and tune its parameters in real time as new data becomes available. This can incur a higher cost and require considerably more compute resources (such as a cluster) to train the model continuously.

| Features | ML | OML |
| --- | --- | --- |
| Complexity | Reduced, because the model is constant. | Dynamic, due to the model’s continuous evolution. |
| Computational power | Fewer computations; batch-based training at a single point in time. | Continuous data ingestion drives ongoing model-refinement computations. |
| Applications | Image classification and similar tasks where data patterns are consistent and there are no rapid concept shifts. | Finance, health, and economics, where new data patterns emerge on a regular basis. |
| Tools | scikit-learn, Spark MLlib, TensorFlow, Keras, PyTorch. | Active research: MOA, SAMOA, scikit-multiflow, streamDM. |

OML Libraries

River is a Python library for OML, created by merging creme with scikit-multiflow. River’s goal is to become the standard library for machine learning on streaming data. It delivers state-of-the-art learning algorithms, data-processing methodologies, and performance metrics for a range of online learning tasks.
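
River models consume one example at a time, with features passed as a dict, through a learn-one/predict-one calling style. The stdlib-only sketch below imitates that pattern with a tiny online logistic regression; `OnlineLogReg` is a hypothetical stand-in written for this article, not River’s actual implementation:

```python
import math


class OnlineLogReg:
    """Toy online logistic regression in a River-like learn_one/predict style."""

    def __init__(self, lr=0.1):
        self.lr = lr
        self.weights = {}  # feature name -> weight, grown lazily per example
        self.bias = 0.0

    def predict_proba_one(self, x):
        """Probability of the positive class for one feature dict."""
        z = self.bias + sum(self.weights.get(f, 0.0) * v for f, v in x.items())
        return 1.0 / (1.0 + math.exp(-z))

    def learn_one(self, x, y):
        """Update the model from a single (features, label) pair."""
        error = self.predict_proba_one(x) - y  # log-loss gradient w.r.t. the logit
        self.bias -= self.lr * error
        for f, v in x.items():
            self.weights[f] = self.weights.get(f, 0.0) - self.lr * error * v


model = OnlineLogReg()
# Simulated labelled stream: small "size" -> class 0, large "size" -> class 1.
stream = [({"size": 1.0}, 0), ({"size": 5.0}, 1)] * 300
for x, y in stream:
    model.learn_one(x, y)

print(model.predict_proba_one({"size": 1.0}) < 0.5,
      model.predict_proba_one({"size": 5.0}) > 0.5)  # both True once trained
```

The dict-of-features interface is what lets such models grow their weight vector lazily as previously unseen features appear in the stream.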

Several other libraries are available for OML:

  • scikit-learn or the Orange module in Python. For online learning, scikit-learn includes an SGD classifier and regressor that can partially fit the data.
  • The caret package in R.
  • Jubatus in C++, with client libraries for C++, Python, Ruby, and Java.
  • The Tornado framework in Python.
  • LIBOL in C++ (and Matlab).
  • The LibTopoART library in C#.


Dr. Nivash Jeevanandam
Nivash holds a doctorate in information technology and has been a research associate at a university and a development engineer in the IT industry. Data science and machine learning excite him.
