Underrated But Interesting ML Concepts #6: LOF, MKL, RIPPER, t-SNE

  • Here, we explore some fascinating yet underrated concepts in machine learning.

Several fascinating machine learning concepts are rarely explored. This article discusses four of them: the local outlier factor (LOF), multiple kernel learning (MKL), repeated incremental pruning to produce error reduction (RIPPER), and t-distributed stochastic neighbour embedding (t-SNE).

Local outlier factor

LOF is based on the concept of local density, which it estimates from the distances to an object’s k nearest neighbours. By comparing an object’s local density to the local densities of its neighbours, it identifies regions of similar density as well as points whose density is substantially lower than that of their neighbours; such points are considered outliers. The local density is estimated from the typical distance at which a point can be “reached” from its neighbours, and LOF’s notion of “reachability distance” is an additional refinement that yields more stable results within clusters.
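
As a minimal sketch, scikit-learn’s LocalOutlierFactor implements this idea; the synthetic data, the n_neighbors value, and the single planted outlier below are illustrative choices, not part of the original article:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),  # a dense 2-D cluster
               [[8.0, 8.0]]])                    # one isolated point

lof = LocalOutlierFactor(n_neighbors=20)  # k nearest neighbours used for density
labels = lof.fit_predict(X)               # -1 = outlier, 1 = inlier
print(labels[-1])                         # -1: the isolated point is flagged
print(-lof.negative_outlier_factor_[-1])  # its LOF score, well above 1

Scores close to 1 indicate a density comparable to the neighbours; substantially larger scores indicate outliers.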

Because of this local approach, LOF can identify outliers in a data set that would not be outliers in another region of the same data set. For example, a point at a small distance from a very dense cluster is an outlier, while a point within a sparse cluster may exhibit similar distances to its neighbours. Although the geometric intuition behind LOF is limited to low-dimensional vector spaces, the algorithm can be applied in any setting where a dissimilarity function can be defined. It has been shown to work well in a variety of settings, often outperforming competitors, for example in network intrusion detection and on processed classification benchmark data.

Multiple kernel learning

MKL refers to a family of machine learning methods that use a predefined set of kernels and learn an optimal linear or non-linear combination of them. Existing MKL algorithms use a variety of learning methods to determine the kernel combination function, and five main categories have been identified.

  • Fixed rules are combination functions that require no training and have no parameters, such as an unweighted sum of the base kernels (a minimal sketch appears after this list). 
  • Heuristic techniques set the parameters of a parameterised combination function using measures computed from each kernel function separately. 
  • Optimisation methods also employ a parameterised combination function, with the parameters learned by solving an optimisation problem. 
  • Bayesian techniques treat the kernel combination parameters as random variables, place priors on them, and learn them together with the base-learner parameters by inference. 
  • Boosting approaches, modelled on ensemble and boosting procedures, add a new kernel iteratively until performance plateaus.
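
As a hedged sketch of the fixed-rule case, the snippet below averages two standard kernels and trains an SVM on the combined Gram matrix via scikit-learn’s precomputed-kernel interface; the dataset, the gamma value, and the equal 0.5 weights are illustrative assumptions:

from sklearn.datasets import load_breast_cancer
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Fixed rule: an unweighted average of a linear and an RBF kernel
K = 0.5 * linear_kernel(X, X) + 0.5 * rbf_kernel(X, X, gamma=0.01)

clf = SVC(kernel="precomputed").fit(K, y)  # SVM trained on the combined kernel
print(clf.score(K, y))                     # training accuracy as a sanity check

Heuristic, optimisation-based, and Bayesian MKL methods would instead learn those 0.5/0.5 weights from the data.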

Repeated incremental pruning to produce error reduction

RIPPER is a rule-based classification algorithm: it induces a set of if-then rules from the training set and is one of the most widely used rule-induction algorithms. It is particularly useful for datasets with imbalanced class distributions, that is, datasets in which most records belong to one class and the rest to the other classes. Because it uses a validation set to prevent overfitting, it also performs well on noisy datasets.
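
For a hands-on sketch, the third-party wittgenstein package provides a Python implementation of RIPPER; the dataset and the class_feat/pos_class arguments below follow that package’s documented interface, but treat the exact API as an assumption and check its docs:

import wittgenstein as lw                       # pip install wittgenstein
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

df = load_breast_cancer(as_frame=True).frame    # features plus a 'target' column
train, test = train_test_split(df, test_size=0.3, random_state=0)

clf = lw.RIPPER()                               # grow, prune, then optimise rules
clf.fit(train, class_feat="target", pos_class=0)  # 0 = malignant, the minority class

print(clf.ruleset_)                             # the learned if-then ruleset
print(clf.score(test.drop(columns="target"), test["target"]))  # accuracy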

Rule Growing in the RIPPER Algorithm: 

  • RIPPER grows rules from general to specific: it starts with an empty rule and greedily adds the best conjunct to the rule antecedent. 
  • The metric used to evaluate candidate conjuncts is FOIL’s information gain; the conjunct that maximises it is chosen (a small worked computation follows this list). 
  • Conjuncts are added until the rule no longer covers any negative (-ve) examples in the growing set.
  • The new rule is then pruned based on its performance on the validation set.
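
As a sketch of that metric: extending a rule that covers p0 positive and n0 negative examples into one covering p1 and n1 gives FOIL gain p1 * (log2(p1 / (p1 + n1)) - log2(p0 / (p0 + n0))). The helper function and the coverage numbers below are illustrative:

import math

def foil_gain(p0, n0, p1, n1):
    # p0, n0: positives/negatives covered before adding the conjunct
    # p1, n1: positives/negatives covered after adding it
    if p1 == 0:
        return float("-inf")  # a conjunct covering no positives is useless
    return p1 * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

# Example: a conjunct narrowing coverage from (100+, 90-) to (60+, 5-)
print(foil_gain(100, 90, 60, 5))  # ~48.6, a strongly positive gain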

t-distributed stochastic neighbour embedding

t-SNE is a statistical method for visualising high-dimensional data by giving each data point a location in a two- or three-dimensional map. It is a non-linear dimensionality reduction technique well suited to embedding high-dimensional data in a low-dimensional space for visualisation. It models each high-dimensional object by a two- or three-dimensional point in such a way that, with high probability, similar objects are modelled by nearby points and dissimilar objects by distant points.
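
As a minimal sketch, scikit-learn’s TSNE embeds the 64-dimensional digits dataset into two dimensions; the dataset and the perplexity value are illustrative choices:

from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)  # 1,797 digit images, 64 features each
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(emb.shape)                     # (1797, 2): one 2-D map point per image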

While t-SNE plots often appear to show clusters, these visual clusters can be strongly influenced by the chosen parameterisation (notably the perplexity), so a thorough grasp of the t-SNE parameters is necessary. Such “clusters” have been shown to appear even in non-clustered data, meaning they can be spurious findings. Interactive exploration may therefore be required to choose parameters and validate the results.
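
One simple hedge, sketched below, is to re-run the embedding at several perplexities and trust only the structure that persists; the perplexity values swept here are illustrative:

from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)
for perplexity in (5, 30, 50):            # illustrative sweep of perplexities
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=0).fit_transform(X)
    print(perplexity, emb.min(axis=0), emb.max(axis=0))  # inspect (or plot) each map

Clusters that appear at only one perplexity deserve suspicion.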

Genomic research, computer security research, natural language processing, music analysis, cancer research, bioinformatics, geological domain interpretation, and biomedical signal processing have all employed t-SNE for visualisation.
