“Entities should not be multiplied unnecessarily” – Occam’s razor
To incentivize the scientific community to focus on AGI, Marcus Hutter, one of the most prominent researchers of our generation, has increased his decade-old prize tenfold to half a million euros (500,000 €). The Hutter Prize, named after its founder, is awarded to those who set new records for the lossless compression of a dataset based on Wikipedia.
Marcus Hutter, who now works at DeepMind as a senior research scientist, is famous for his work on reinforcement learning along with Juergen Schmidhuber. In 2000, Dr Hutter proposed AIXI, a reinforcement learning agent that combines Occam’s razor with sequential decision theory.
For beginners, Dr Hutter recommends starting with Matt Mahoney’s Data Compression Explained. In this book, Mahoney covers a wide range of topics, beginning with information theory and drawing parallels between Occam’s razor and intelligence in machines.
About The Contest
The contest is about who can compress the given data best. It is motivated by the fact that compression ratios can be regarded as measures of intelligence.
In particular, the goal is to create a small self-extracting archive that encodes enwik9, a 1GB text snapshot of part of the English Wikipedia.
The winner’s compressor needs to compress the 1GB enwik9 file better than the current record, held by Alexander Rhatushnyak. As per the rules of the competition, lossless data compression programs are ranked by the compressed size, including the size of the decompression program, of the first 10^9 bytes of the XML dump of the English Wikipedia.
Participants are expected to have a fundamental understanding of data compression techniques, basic algorithms, and state-of-the-art compressors. Since most modern compression algorithms rely on arithmetic coding driven by estimated probabilistic predictions, Dr Hutter advises participants to have some background in information theory, machine learning, probability and statistics.
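Before tackling the benchmark itself, it helps to see what general-purpose compressors already achieve. The sketch below (my own illustration, using Python’s standard-library codecs on a small repetitive stand-in for enwik9, not the actual contest data) compares three off-the-shelf lossless compressors:

```python
import bz2
import lzma
import zlib

# A repetitive toy text stands in for the much larger enwik9 file.
text = ("Wikipedia is a free online encyclopedia. " * 200).encode("utf-8")

for name, compress in [("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    size = len(compress(text))
    print(f"{name:5s}: {len(text)} -> {size} bytes "
          f"(ratio {size / len(text):.1%})")
```

Serious contest entries go far beyond such general-purpose tools, typically with context-mixing models tuned to the structure of Wikipedia text, but comparing against these baselines is a natural first step.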
What Compression Has To Do With Intelligence
One might still wonder how compressing a Wikipedia file would lead us to artificial general intelligence. Dr Hutter has extensively written about his theories related to compression on his website. He posits that better compression requires understanding and vice versa. The intuition here is that finding more compact representations of some data can lead to a better understanding.
Not only that, but Dr Hutter also emphasizes how vital compression is for prediction.
The better you can compress, the better you can predict
Natural Language Processing models, for example, explains Dr Hutter, rely heavily on compression and measure their performance in terms of it (log perplexity).
Here is an excerpt from Dr Hutter’s website relating compression to superintelligence:
Consider a probabilistic model M of the data D; then the data can be compressed to a length log(1/P(D|M)) via arithmetic coding, where P(D|M) is the probability of D under M. The decompressor must know M, hence has length L(M). One can show that the model M that minimizes the total length L(M) + log(1/P(D|M)) leads to the best predictions of future data.
For instance, the quality of natural language models is typically judged by their perplexity, which is essentially an exponentiated compression ratio.
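The excerpt above can be made concrete with a small sketch (the data and the two models are my own toy choices, not part of the prize). A model that assigns higher probability to the data yields a shorter ideal code length log2(1/P(D|M)), and the per-symbol perplexity is just two raised to the per-symbol code length:

```python
import math
from collections import Counter

def ideal_code_length_bits(data, prob):
    """Arithmetic coding compresses data D to about log2(1/P(D|M))
    bits, i.e. the sum of -log2 p(c) over the symbols c in D."""
    return sum(-math.log2(prob(c)) for c in data)

data = "abracadabra" * 100

# Model 1: uniform over the distinct characters (no understanding).
alphabet = set(data)
p_uniform = lambda c: 1.0 / len(alphabet)

# Model 2: empirical character frequencies (a crude "understanding").
counts = Counter(data)
p_freq = lambda c: counts[c] / len(data)

bits_uniform = ideal_code_length_bits(data, p_uniform)
bits_freq = ideal_code_length_bits(data, p_freq)
print(f"uniform model  : {bits_uniform:7.0f} bits")
print(f"frequency model: {bits_freq:7.0f} bits")

# Perplexity is the exponentiated per-symbol code length.
perplexity = 2 ** (bits_freq / len(data))
print(f"perplexity of frequency model: {perplexity:.2f}")
```

The frequency model predicts the data better, so its code length is shorter; a full MDL comparison would also add each model’s own description length L(M), which this toy sketch leaves out.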
Sequential decision theory deals with how to exploit such models M for optimal rational actions. Integrating compression (=prediction), explains Dr Hutter, into sequential decision theory (=stochastic planning) can serve as the theoretical foundations of superintelligence.
Intelligence is not just pattern recognition and text classification. It is the product of millions of years of evolution combined with continuous feedback from our surroundings. Ideas and innovations emerge in this process of learning, ideas that can give a new direction to the process itself.
AI is one such phenomenon to emerge out of our intelligence. However, replicating the cognitive capabilities of humans in AI (AGI) is still a distant dream. A lot of research is actively being done on causal inference, representation learning, meta-learning and many other forms of reinforcement learning. The Hutter Prize is one such effort, a much-needed impetus to draw in more people to solve hard fundamental problems that can lead us to AGI.