
Indexing By Learning: A Revolutionary Idea That Can Shake The Core Of Computer Science


Indexing is one of the fundamental aspects of computer architecture and computer science. For many decades, indexes have been designed by hand by computer scientists, using techniques such as B-Trees and HashMaps. But researchers at Google and MIT envision a future where indexes are learnt rather than hand-designed. This research follows a recent industry pattern of building domain-specific, task-specific hardware and fundamental computing tools targeted at specialised workloads. One ground-breaking example is Google’s TPU, a special-purpose processor for neural networks. The work on learned indexes is an attempt to get core computing infrastructure ready for the world of neural networks.

A team led by Tim Kraska of MIT and Jeff Dean of Google contends that indexes are models. The researchers argue that a B-Tree index, one of the most popular indexing techniques, can be viewed as a model that maps a key to the position of a record within a sorted data structure such as an array. In the same way, a Hash index, another popular technique, can be seen as a model that maps a key to the position of a record in an unsorted array.
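To make this framing concrete, here is a minimal, illustrative Python sketch (not from the paper) of the two mappings: a range index over a sorted key array, which a B-Tree approximates, and a point index over an unsorted record array, which a hash table approximates.

```python
import bisect

# A "range index": maps a key to the position of its record in a sorted array.
# A B-Tree can be viewed as one hand-built model of exactly this mapping.
sorted_keys = [3, 8, 15, 23, 42, 57, 91]

def range_index(key):
    """Return the position of `key` within the sorted key array."""
    return bisect.bisect_left(sorted_keys, key)

# A "point index": maps a key to the position of its record in an unsorted array.
# A Hash index can be viewed as a model of this second mapping.
records = [("k42", "payload-a"), ("k8", "payload-b"), ("k91", "payload-c")]
point_index = {key: pos for pos, (key, _) in enumerate(records)}

print(range_index(23))       # 3, the position of key 23 in the sorted array
print(point_index["k91"])    # 2, the position of record "k91" in the unsorted array
```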

The researchers propose replacing traditional index structures with learned indexes built using deep learning. The research is an endeavour to understand under which conditions learned indexes outperform traditional ones, and what the main challenges are in designing and implementing them.

Data Access And Management — Essential Part Of Computer Architecture

Computers and computer architectures work best when data management and data access strategies are efficient and robust. There are numerous data access techniques, such as B-Trees and HashMaps, each suited to a particular use case. For example, a popular structure for checking whether a record exists at all is the Bloom filter. The researchers point out that these are very general-purpose structures that make no assumptions about the data, and so are unable to take advantage of patterns in the data they encounter.
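For readers unfamiliar with Bloom filters, below is a minimal, illustrative Python implementation (not from the paper): it answers "possibly present" or "definitely absent" using a fixed bit array and a handful of hash functions, regardless of how the keys are actually distributed.

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: answers "possibly present" or "definitely absent"."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive several bit positions from independent hashes of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True means present or a false positive.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("record-123")
print(bf.might_contain("record-123"))  # True
print(bf.might_contain("record-999"))  # almost certainly False
```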

From Google’s side there has been a very strong push for task-specific custom hardware design, a case in point being the tensor processing unit (TPU). TPUs are designed for numerically heavy AI and ML workloads, an approach Google calls “domain-specific architecture”. The idea behind the TPU was to replace the general-purpose CPU with a matrix processor built only for neural network workloads. In contrast to today’s CPUs, TPUs cannot run arbitrary applications. They do one thing well: run billions of multiplications and additions for neural network operations, with an emphasis on power efficiency.

The researchers observe that today’s CPUs are good at SIMD (Single Instruction, Multiple Data) processing, but GPUs and TPUs are increasingly taking over numerically intensive workloads. They also cite reports suggesting GPUs will improve a thousand-fold by 2025, making neural networks cheap and easy to build. The researchers want to ride this trend in hardware by bringing the successes of neural networks to databases. At the same time, they are careful to underline that learned indexes will not replace traditional structures; the main aim is to complement the basic structured indexes.

B-Trees Vs. Learned Indexes

Some rough calculations by the researchers suggest that a learned model can only be faster than a B-Tree if it achieves a better precision gain than 1/100 per 50∗8=400 arithmetic operations: traversing a single B-Tree node costs roughly 50 cycles while narrowing the search range by a factor of its page size (about 100), and a modern CPU can execute around 8 SIMD arithmetic operations per cycle. The calculation assumes the B-Tree sits in cache. At the same time, the researchers believe that modern ML tooling and hardware make such arithmetic increasingly cheap, shifting this trade-off in favour of learned models.

To begin with a simple experiment, the researchers frame range index models as CDF models: a model that predicts the position of a key inside a sorted data structure, such as an array, is effectively approximating the cumulative distribution function (CDF) of the keys. The researchers point out something interesting and exciting: a B-Tree, in effect, tries to “learn” the data distribution by building a regression tree. They model the predicted position p as:

p = f(Key) * N

where f(Key) estimates the CDF, i.e. the fraction of keys less than or equal to Key, and N is the total number of keys.
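Below is a minimal, illustrative Python sketch of this idea (not the paper’s implementation): a simple linear fit stands in for the learned CDF model, and a bounded local search corrects the model’s prediction error, falling back to a full binary search if the key lies outside the error window.

```python
import numpy as np

# Sorted keys with positions 0..N-1. The empirical CDF of the keys maps a key
# to the fraction of keys <= key, so position ≈ f(key) * N.
keys = np.sort(np.random.exponential(scale=100.0, size=100_000))
N = len(keys)

# A toy "learned" CDF: a linear fit on (key, position/N) pairs.
# (The paper uses neural networks; a linear fit is just the simplest stand-in.)
positions = np.arange(N) / N
slope, intercept = np.polyfit(keys, positions, deg=1)

def predict_position(key):
    """Predicted array position p = f(key) * N, clipped to a valid index."""
    return int(np.clip((slope * key + intercept) * N, 0, N - 1))

def lookup(key, max_error=5000):
    """Predict, then correct with a bounded local search around the guess."""
    guess = predict_position(key)
    lo, hi = max(0, guess - max_error), min(N, guess + max_error)
    i = lo + np.searchsorted(keys[lo:hi], key)
    if i < hi and keys[i] == key:          # key found inside the error window
        return int(i)
    return int(np.searchsorted(keys, key))  # fall back to a full binary search

query = keys[12_345]
print(lookup(query), 12_345)  # the corrected position matches the true one
```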

The researchers used a database of 200M log records and built a secondary index over the timestamps using a neural network. They trained a two-layer fully connected neural network with 32 neurons in each layer and the ReLU activation function, taking the timestamps as inputs and the positions of the timestamps as labels. Using this architecture they constructed a learned index and measured its lookup performance.
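A minimal sketch of such a network is shown below, using Keras on synthetic timestamps. The layer sizes and activation come from the description above; the data, normalisation, optimiser and training schedule are assumptions for illustration, not the paper’s setup.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for the timestamp index: sorted timestamps as inputs,
# their positions in sorted order as labels (scaled to [0, 1] for stable training).
timestamps = np.sort(np.random.uniform(1.5e9, 1.6e9, size=100_000)).astype("float32")
N = len(timestamps)
labels = (np.arange(N) / N).astype("float32").reshape(-1, 1)

t_min, t_max = timestamps.min(), timestamps.max()
x = ((timestamps - t_min) / (t_max - t_min)).reshape(-1, 1)

# Two fully connected hidden layers with 32 ReLU units each, as described above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, labels, batch_size=1024, epochs=10, verbose=0)

# Predicted position for a query timestamp, scaled back to an array index.
query = timestamps[42_000]
q = np.array([[(query - t_min) / (t_max - t_min)]], dtype="float32")
pred = float(model.predict(q, verbose=0)[0, 0]) * N
print(int(pred))  # lands near 42000; a local search then corrects the residual error
```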

The researchers report achieving ≈1250 predictions per second, with the model taking around 80,000 nanoseconds to execute, not even counting the search time needed to correct the prediction. Compare that with a B-Tree, which takes only ≈300ns to traverse the same data: roughly two orders of magnitude faster than the neural network, and 2-3× faster at searching over the key space. The researchers attribute this failure mainly to the fact that a B-Tree, viewed as a regression tree, is extremely good at “over-fitting” the data down to individual keys with very few operations, whereas the neural network captures the general shape of the distribution but struggles with that last-mile precision. Another reason is that B-Trees are very cache-efficient.

Conclusion

The researchers carried out many other experiments and found that the high cost of training and executing models is a significant problem for learned indexes, but they still believe learned indexes can be useful in many scenarios.

The researchers also conducted experiments comparing learned indexes with HashMaps. They found that a model-based hash function delivers broadly similar performance while utilising memory far better than a traditional HashMap. They also state that the main future improvements from bringing learning and neural networks to indexing will come in the form of higher-dimensional indexes and further speed-ups. In conclusion the researchers say, “In summary, we have demonstrated that machine-learned models have the potential to provide significant benefits over state-of-the-art database indexes, and we believe this is a fruitful direction for future research.”
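The model-based hash function scales an estimated CDF of the keys by the number of buckets, so keys spread out according to their actual distribution rather than at random. Below is a minimal, illustrative Python sketch of this idea; the empirical CDF stands in for the learned model, and the dataset and bucket count are assumptions for illustration.

```python
import numpy as np

# Keys drawn from a skewed distribution, sorted once so we can evaluate a CDF.
keys = np.sort(np.random.lognormal(mean=3.0, sigma=1.0, size=50_000))
num_buckets = 50_000

def learned_hash(key):
    # f(key) ≈ fraction of keys <= key (empirical CDF), scaled by the bucket count.
    cdf = np.searchsorted(keys, key) / len(keys)
    return int(cdf * (num_buckets - 1))

# Compare bucket collisions: distribution-aware hashing vs. a conventional hash.
learned_buckets = {learned_hash(k) for k in keys}
random_buckets = {hash(float(k)) % num_buckets for k in keys}
print(len(keys) - len(learned_buckets), "collisions (learned hash)")
print(len(keys) - len(random_buckets), "collisions (conventional hash)")
```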

Out of the many ideas and products ML has given to the world of technology, this one promises to deliver some of the greatest fruits, even as it shakes the foundations of computer science.

Abhijeet Katte

As a thorough data geek, most of Abhijeet's day is spent in building and writing about intelligent systems. He also has deep interests in philosophy, economics and literature.