Does Machine Learning Share The Same Fate Of Unsolvability As Mathematics?

In the late nineteenth century, Georg Cantor, the founder of set theory, demonstrated that not all infinite sets are created equal. In particular, the set of integers is ‘smaller’ than the set of all real numbers, also known as the continuum. Cantor also conjectured that there can be no set of intermediate size, that is, larger than the integers but smaller than the continuum.
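Cantor established this with his diagonal argument: given any attempted enumeration of the reals, one can build a number that disagrees with the n-th entry in its n-th digit, so no list can be complete. A minimal sketch (the listing and helper are illustrative, operating on finite digit strings):

```python
def diagonal(listing):
    """Given (a finite prefix of) an enumeration of reals in [0, 1),
    each as a string of decimal digits, build a new real that differs
    from the n-th number in its n-th digit -- Cantor's diagonal step."""
    digits = []
    for n, number in enumerate(listing):
        d = int(number[n])
        digits.append(str((d + 1) % 10))  # differ at position n
    return "0." + "".join(digits)

listing = ["1415", "7182", "4142", "5772"]  # digit strings for four reals
print(diagonal(listing))  # prints "0.2253", which differs from every entry
```

The constructed number cannot appear anywhere in the listing, because it differs from each candidate in at least one digit position.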

According to the continuum hypothesis, no set of distinct objects has a size larger than that of the integers but smaller than that of the real numbers. The statement can be neither proved nor refuted using the standard axioms of mathematics.
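In standard set-theoretic notation (a textbook formulation, not taken from the article), the hypothesis denies the existence of any intermediate cardinality:

```latex
% Continuum hypothesis: no set S has cardinality strictly between
% that of the integers (\aleph_0) and that of the reals (2^{\aleph_0}).
\neg \exists S : \aleph_0 < |S| < 2^{\aleph_0}
```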

What Is The Continuum Hypothesis

In the 1960s, US mathematician Paul Cohen showed that the continuum hypothesis cannot be proved either true or false starting from the standard axioms (the statements taken to be true) of the theory of sets, which are commonly taken as the foundation for all of mathematics.


In this paper explaining the undecidability surrounding learnability, Ben-David and his colleagues demonstrate that machine learning shares the fate that has befallen mathematics. They describe simple scenarios in which learnability can be neither proved nor refuted using the standard axioms of mathematics. The proof rests on the fact that the continuum hypothesis itself can be neither proved nor refuted. The main idea is to prove an equivalence between learnability and compression.

Can ML Models Learn Everything?

A good machine learning model makes predictions from a database of random examples. The basic goal is to perform as well, or nearly as well, as the best predictor in a family of functions, such as neural networks or decision trees. For a given model and function family, if this goal can be achieved under some reasonable constraints, the family is said to be learnable in the model.
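As a rough sketch of this goal (the function family and data here are my own illustration, not from the paper), "performing nearly as well as the best predictor in a family" can be phrased as empirical risk minimisation: pick the function in the family with the fewest mistakes on the sample.

```python
import random

def erm(function_family, sample):
    """Empirical risk minimisation: return the function in the family
    with the fewest mistakes on the labelled sample."""
    def empirical_error(f):
        return sum(f(x) != y for x, y in sample)
    return min(function_family, key=empirical_error)

# Illustrative family: threshold classifiers on the line, h_t(x) = x >= t
family = [lambda x, t=t: x >= t for t in [k / 10 for k in range(11)]]

# Labelled sample drawn from a target labelled by the threshold 0.5
random.seed(0)
sample = [(x, x >= 0.5) for x in (random.random() for _ in range(50))]

best = erm(family, sample)
print(sum(best(x) != y for x, y in sample))  # mistakes of the chosen hypothesis
```

Because the true labelling threshold 0.5 happens to lie in the family, the learner here achieves zero training error; learnability theory asks when such a learner is also guaranteed to do well on unseen data.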

Machine-learning theorists are typically able to transform questions about the learnability of a particular function family into problems that involve analysing various notions of dimension that measure some aspect of the family’s complexity. For example, the appropriate notion for analysing PAC learning is known as the Vapnik–Chervonenkis (VC) dimension, and, in general, results relating learnability to complexity are sometimes referred to as Occam’s-razor theorems.
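The VC dimension can be made concrete with a brute-force check (the class and helper below are my own illustration): a class "shatters" a point set if it can realise every labelling of those points, and the VC dimension is the size of the largest shattered set. Thresholds on the line shatter one point but never two:

```python
def shatters(family, points):
    """True if the family realises every possible labelling of the points."""
    labellings = {tuple(f(x) for x in points) for f in family}
    return len(labellings) == 2 ** len(points)

# Threshold classifiers h_t(x) = x >= t, over a small grid of thresholds
family = [lambda x, t=t: x >= t for t in range(-1, 4)]

print(shatters(family, [1]))     # True: a single point gets both labels
print(shatters(family, [1, 2]))  # False: (True, False) is unrealisable
```

No threshold can label a left point positive and a right point negative, so the VC dimension of this class is 1.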

Starting With An EMX

In this paper, the researchers introduce a learning model called estimating the maximum (EMX), and go on to discover a family of functions whose learnability in EMX is unprovable in standard mathematics.

An example EMX problem: targeting advertisements at the most frequent visitors to a website when it is not known in advance which visitors will visit the site. The authors formalize EMX as a question about a learner’s ability to find a function, from a given family, whose expected value over a target distribution is as large as possible. EMX is actually quite similar to the PAC model, but the slightly different learning criterion surprisingly connects it to the continuum hypothesis and brings unprovability into the picture.
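A toy rendition of the advertising example (the setup and numbers are my own illustration, not the paper's formalism): let the family be indicators of small sets of visitors, and let the learner pick the set whose expected value, i.e. its probability mass under the unknown visit distribution, looks largest on the sample.

```python
import random
from collections import Counter

def emx_learner(sample, k):
    """Pick the indicator of the k most frequent items in the sample --
    a natural guess at the size-k set with the largest expected value."""
    top = {x for x, _ in Counter(sample).most_common(k)}
    return lambda x: x in top

# An unknown visit distribution over site visitors (illustrative numbers)
dist = {"alice": 0.5, "bob": 0.3, "carol": 0.15, "dave": 0.05}

random.seed(1)
visitors, weights = zip(*dist.items())
sample = random.choices(visitors, weights=weights, k=1000)

f = emx_learner(sample, k=2)
# Probability mass captured by the learned indicator
print(sum(p for v, p in dist.items() if f(v)))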

The rationale behind the proof is that, if a training sample labelled by a function from some family can always be compressed, the family must in some sense have low complexity, and therefore be learnable. The authors introduce monotone compression a variant of compression that they show to be appropriate for characterizing the learnability of particular function families in EMX.


Ben-David and colleagues prove that the ability to carry out a weak form of monotone compression is related to the size of certain infinite sets. The set that the authors ultimately use in their work is the unit interval, which is the set of real numbers between 0 and 1.

Since the EMX model is still in its nascent stage, the implications of results on a large scale are yet to be known. Machine learning stands tall on the foundations of mathematics. So, the authors make an effort to explore whether the problems concerning mathematics have followed into the realms of machine learning as well.

The model presented in this paper should be seen as a starting point for other such investigation into the efficacy of artificial intelligence. And, this is important if we were to build our world around these systems.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.

Download our Mobile App

MachineHack | AI Hackathons, Coding & Learning

Host Hackathons & Recruit Great Data Talent!

AIMResearch Pioneering advanced AI market research

With a decade of experience under our belt, we are transforming how businesses use AI & data-driven insights to succeed.

The Gold Standard for Recognizing Excellence in Data Science and Tech Workplaces

With Best Firm Certification, you can effortlessly delve into the minds of your employees, unveil invaluable perspectives, and gain distinguished acclaim for fostering an exceptional company culture.

AIM Leaders Council

World’s Biggest Community Exclusively For Senior Executives In Data Science And Analytics.

3 Ways to Join our Community

Telegram group

Discover special offers, top stories, upcoming events, and more.

Discord Server

Stay Connected with a larger ecosystem of data science and ML Professionals

Subscribe to our Daily newsletter

Get our daily awesome stories & videos in your inbox