Online machine learning (OML) is a type of machine learning (ML) in which data arrives sequentially and is used to update the best predictor for future data at each step, in contrast to batch learning techniques, which generate the best predictor by training on the full data set at once. Compared with “conventional” machine learning, OML takes a fundamentally different approach: it recognises that learning environments can (and frequently do) change from one moment to the next. It is employed in cases where the algorithm must adapt dynamically to new patterns in the data, or where the data itself is generated as a function of time.
OML is widely used in areas of machine learning where training over the complete dataset is computationally impractical, necessitating out-of-core algorithms. In its simplest form, OML ingests real-time data one observation at a time. It applies to problems in which samples arrive over time and their probability distributions are also expected to drift, so the model must evolve at a similar rate to capture and respond to such changes. This is a clear benefit in industries where real-time personalisation is critical.
Training and Complexity
In an offline ML model, the weights and parameters are updated during the training process so as to minimise a global cost function over the training data. The model is trained and refined until it is robust enough for deployment, whether for big data processing or any other use case.
In an OML process, by contrast, the weight update at a given step depends on the current example being shown and, possibly, on the model’s current state. As a result, the model is continually exposed to fresh data and keeps learning.
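This per-example update can be sketched with plain stochastic gradient descent on a simple linear model y ≈ w·x + b, where every incoming observation immediately nudges the weights. The simulated stream and learning rate below are illustrative choices, not taken from any particular library:

```python
import random

random.seed(0)
w, b = 0.0, 0.0   # model state, updated one observation at a time
lr = 0.1          # learning rate (illustrative choice)

# Simulated stream: noiseless observations of y = 2x + 1.
for _ in range(5000):
    x = random.random()
    y = 2 * x + 1
    err = (w * x + b) - y   # error on the current example only
    w -= lr * err * x       # gradient step for the squared loss
    b -= lr * err           # the model improves with every observation

# After consuming the stream, w approaches 2 and b approaches 1.
```

Note that the model never sees the whole dataset at once: each observation is used for a single update and can then be discarded, which is what makes this approach viable for unbounded streams.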
Time Taken
In general, offline ML model training is much faster than online model training, because the entire dataset is fed through the model to adjust the weights and parameters. However, given the magnitude of modern big data streams, feeding all the data into an offline model can be rather time-consuming, and it may be preferable to update the model incrementally.
Thus, in OML, the model must acquire and tune its parameters in real time as new data becomes available. This can incur higher costs and may require considerably more resources (such as a cluster) to train the model continuously.
| Feature | Offline ML | OML |
| --- | --- | --- |
| Complexity | Lower, because the model is fixed once trained. | Dynamic, because the model evolves continuously. |
| Computational power | Fewer computations: training happens in batches at a single point in time. | Continuous data ingestion drives ongoing model-refinement computations. |
| Applications | Image classification and other tasks where data patterns are consistent and there are no rapid concept shifts. | Finance, health, economics, and other fields where new data patterns emerge regularly. |
| Tools | scikit-learn, Spark MLlib, TensorFlow, Keras, PyTorch. | Active research: MOA, SAMOA, scikit-multiflow, streamDM. |
OML Libraries
River is a Python library for OML, created by merging creme and scikit-multiflow. River’s goal is to become the standard library for machine learning on streaming data. It provides state-of-the-art learning algorithms, data-processing methods, and performance metrics for a wide range of online learning tasks.
Several other libraries are also available for OML:
- The scikit-learn and Orange modules in Python. For online learning, scikit-learn includes an SGD classifier and regressor that can perform a partial fit of the data.
- The caret package in R.
- Jubatus in C++, with clients for C++, Python, Ruby, and Java.
- The Tornado framework in Python.
- LIBOL in C++ (and MATLAB).
- The LibTopoART library in C#.
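The scikit-learn partial-fit interface mentioned above can be sketched as follows, feeding synthetic mini-batches to an SGDClassifier as if they arrived over time. The data, batch size, and number of batches are illustrative:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # all classes must be declared on the first partial_fit call

# Simulate a stream of mini-batches; label is 1 when the two features sum above 0.
for _ in range(50):
    X = rng.normal(size=(32, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)  # incremental update, batch by batch

# Evaluate on fresh data drawn from the same distribution.
X_test = rng.normal(size=(200, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
acc = clf.score(X_test, y_test)
```

Because partial_fit never requires the full dataset in memory, the same loop works whether the batches come from a file read in chunks, a message queue, or a live data stream.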