
From Hazy Vision to Clear Direction: AGI’s Path Revealed

Benchmarks like François Chollet’s ARC are instrumental for AGI.

OpenAI’s Sam Altman recently authored a blog post describing the company’s plans for AGI and beyond. While this has catapulted the concept of AGI into the mainstream, many experts agree that there will be no single point at which AI magically becomes AGI. Instead, AGI will be built on steady improvements over a long period of time. 

Benchmarks are a reliable way of tracking improvements in AI, but existing ones mostly measure narrow, task-specific performance. The field needs benchmarks for more fluid kinds of intelligence, and that’s where the Abstraction and Reasoning Corpus (ARC) comes in.

Created by François Chollet, a software engineer and AI researcher at Google, the ARC benchmark aims to be a test for AI systems that attempt to emulate human-like intelligence. 

What is ARC?

ARC is a series of visual tests that gauge an AI system’s ability to solve logical problems. Each task consists of patterns on a simple grid; the goal is to identify the logic in a handful of example input–output pairs (as shown in the picture below) and extend it to a new test input. 
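Concretely, each public ARC task is a small JSON file holding a few training input/output grid pairs plus a test input, where every grid is a 2D array of integers 0–9 denoting colours. The sketch below shows that structure in Python using an invented toy task (not a real ARC task) whose hidden rule happens to be a 180-degree rotation:

```python
# A toy ARC-style task. Real ARC tasks use the same train/test structure,
# serialised as JSON, with grids of colour codes 0-9.
task = {
    "train": [
        {"input": [[1, 0], [0, 0]], "output": [[0, 0], [0, 1]]},
        {"input": [[0, 2], [0, 0]], "output": [[0, 0], [2, 0]]},
    ],
    "test": [
        {"input": [[0, 0], [3, 0]]},  # hidden rule: rotate 180 degrees
    ],
}

def rotate_180(grid):
    """Rotate a grid half a turn: reverse the rows, then each row."""
    return [row[::-1] for row in grid[::-1]]

# The solver's job: infer the rule from the training pairs...
assert all(rotate_180(p["input"]) == p["output"] for p in task["train"])
# ...and extend it to the unseen test input.
print(rotate_180(task["test"][0]["input"]))  # -> [[0, 3], [0, 0]]
```

The deliberately tiny number of training pairs per task is the point: a system must abstract the rule, not memorise examples.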

The test contains 800 tasks: the first 400 are used to ‘educate’ the algorithm with ARC-relevant priors, where ‘priors’ are the ‘beliefs’ a model holds before it is fed any training data. The remaining 400 are used to test the algorithm’s capabilities.

As any AI researcher knows, more data generally leads to better training. The catch is that ARC deliberately withholds the volume of data that traditional algorithms need to replicate its patterns: a measly set of 400 tasks is not enough to train an algorithm to recognise the complex patterns in the test. The algorithm therefore has to arrive at the right answer through reasoning, which it cannot do using currently available machine learning methods.

According to a paper published in 2010, any benchmark for AGI must satisfy seven criteria: fitness, breadth, specificity, low cost, simplicity, range, and task focus. At first glance, ARC appears to satisfy most, if not all, of these criteria. Solving ARC’s set of problems requires reasoning and concept abstraction—something modern AI has yet to achieve.

ARC was released as a competition on Kaggle, but even the best algorithm could not solve more than 20% of the assigned tasks correctly. Moreover, the tasks that were solved were not solved through logical reasoning: the algorithms brute-forced their way to the solutions instead. Since this does not qualify as a valid way of solving the problems, such algorithms effectively fail the test. 
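The brute-force approach can be caricatured in a few lines: enumerate a fixed menu of candidate grid transformations, keep any that reproduce every training pair, and apply it to the test input. (The primitives below are illustrative only; the actual Kaggle entries searched vastly larger spaces of composed operations.)

```python
def transpose(grid):
    return [list(row) for row in zip(*grid)]

def flip_h(grid):   # mirror left-right
    return [row[::-1] for row in grid]

def flip_v(grid):   # mirror top-bottom
    return grid[::-1]

def rot180(grid):   # half-turn rotation
    return flip_h(flip_v(grid))

CANDIDATES = [transpose, flip_h, flip_v, rot180]

def brute_force_solve(task):
    """Return the first candidate consistent with all train pairs, else None."""
    for fn in CANDIDATES:
        if all(fn(p["input"]) == p["output"] for p in task["train"]):
            return fn
    return None

# Usage on a toy task whose rule is a left-right mirror:
task = {"train": [{"input": [[1, 2], [3, 4]], "output": [[2, 1], [4, 3]]}],
        "test": [{"input": [[5, 0], [0, 6]]}]}
rule = brute_force_solve(task)
print(rule(task["test"][0]["input"]))  # -> [[0, 5], [6, 0]]
```

The search finds *a* transformation consistent with the examples without understanding anything about it, which is exactly why Chollet does not consider such solutions a demonstration of reasoning.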

By creating more benchmarks that satisfy the seven criteria of a good AGI test, AI researchers will finally be able to gauge how close we are to achieving AGI. 

How far away is AGI?

To understand how one must approach the concept of AGI, we must first look at what AI can do today. The discourse is dominated by LLMs, neural networks and diffusion models, which represent the cutting edge of the field. However, these models are only the tip of the iceberg on the journey towards AGI, as we are just now achieving human parity in discrete, narrow applications.

Take ChatGPT, for instance. Built on GPT-3.5, this LLM-powered chatbot can handle much of what a human writer can, albeit with a tendency to hallucinate information. A notable drawback is that it can only generate text. Its natural language capabilities cannot be extended to other domains, such as describing the contents of an image, because the algorithm has no image recognition capability and is siloed in the domain of NLP.

Today’s algorithms function in a similar fashion, each restricted to its own domain. They cannot extend what they have learnt to other dimensions of intelligence. This is a classic example of crystallised intelligence. 

Human intelligence is generally categorised into two types—crystallised intelligence and fluid intelligence. Crystallised intelligence denotes ‘stored knowledge’ or intelligence gained from learning. Fluid intelligence generally includes problem-solving capabilities, the ability to process new information, and the rate of learning. 

The path towards AGI involves moving AI algorithms from crystallised intelligence to fluid intelligence. To do so, algorithms must first learn the basics of human reasoning—something that tests like ARC can help with. These tests will gauge an algorithm’s capacity for human-like traits such as concept abstraction. ARC and similar benchmarks will be instrumental to the research process of creating an AGI.

Regardless of OpenAI’s goal of reaching an AGI-like algorithm in the coming years, and Sam Altman’s doomsday predictions, the path to AGI looks long and arduous, flanked by benchmarks like ARC. 



Anirudh VK

