Unboxing LLMs

“Understanding the theory requires a sophisticated understanding of physics”

Drawing a parallel with how animals and humans produce ‘amazingly complex’ and ‘beautiful stuff’, OpenAI CEO Sam Altman boasted about what LLMs are capable of. “Language models just being programmed to try to predict the next word is true, but it’s not the dunk some people think it is,” he tweeted.

However, many researchers disagree. Gary Marcus, a leading voice in AI, countered Altman’s view of LLMs, saying animals are built innately with the capacity to represent models of the world, but language models are not. “And that, my friend, is the dunk that someday you will come to appreciate,” added Marcus.

Either way, most researchers will attest that AI is a black box, albeit with some dissenting views. Still, with advancements in LLMs such as GPT-3 and LLaMA, the veil is slowly lifting. At their core, these models take an input and repeatedly predict the next word to generate text.

These models can be fine-tuned by giving them a small amount of data related to a specific task, like writing an essay about cats. Researchers can also provide feedback on the model’s outputs, and the model adjusts its weights to perform better on the task. It does this by learning patterns: that the word “cat” often follows “a” or “the”, or that “yellow” is an adjective describing a colour.
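To make “learning patterns” concrete, here is a deliberately tiny sketch, nothing like a real LLM’s architecture, that simply counts which word follows which in a made-up corpus and uses those counts to guess the next word:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "model" that counts which word follows
# which, then predicts the most frequent follower. Real LLMs learn billions
# of weights, but the core objective -- predict the next token -- is the same.
corpus = "the cat sat on the mat and the cat saw a yellow bird".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the toy corpus."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat": it has "learned" that "cat" often follows "the"
```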

An LLM is built from units called neurons, connected into a neural network. Each neuron receives inputs and produces an output, and that output depends on “weights”: numbers that measure the importance of each input.
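A toy example of what a single neuron computes; the input values, weights and bias below are made-up numbers, not taken from any real model:

```python
import math

# One artificial neuron: weigh each input by its importance, add a bias,
# then squash the result with an activation function (here, a sigmoid).
def neuron(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

inputs = [0.5, 0.2]     # two incoming signals
weights = [0.9, 0.1]    # the first input matters far more than the second
bias = -0.3
print(neuron(inputs, weights, bias))  # an output between 0 and 1
```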

Talking about ChatGPT, the latest LLM on the block, Wolfram Research CEO Stephen Wolfram said in one of his recent AMAs, “There are millions of neurons—with a total of 175 billion connections and therefore 175 billion weights. And one thing to realise is that every time ChatGPT generates a new token, it has to do a calculation involving every single one of these weights.”
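As a rough back-of-the-envelope reading of the scale Wolfram describes, assuming the common rule of thumb of about two floating-point operations (one multiply, one add) per weight for each generated token:

```python
# Back-of-the-envelope only; ignores attention overhead and other details.
weights = 175_000_000_000            # the GPT-3-scale weight count cited above
flops_per_token = 2 * weights        # ~1 multiply + 1 add per weight
print(f"{flops_per_token:.1e} floating-point operations per token")  # ~3.5e+11
```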

Late last year, the paper ‘Talking About Large Language Models’ sparked a discussion about how these language models are not simply “predicting the next statistically likely word”. Many argued that though we know how to train them, we are in the dark about how the resulting models do what they do.

For instance, we know how humans evolved, but we don’t have perfect models of how humans work; psychology and neuroscience are far from solved. A relatively simple and specifiable process produced complex human beings. Likewise, LLMs are produced by a well-defined training procedure on a large dataset, yet the resulting billion-parameter models remain enigmatic. This is why the field of “AI interpretability” exists: to probe large models like LLMs and understand how they produce their results.

Experts believe the black box arises from the decisions made by intermediate neurons on the way to the network’s final decision. The difficulty is not just the complex, high-dimensional, non-linear mathematics; it is that those intermediate decisions are non-intuitive.
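A small illustration of the point, using a randomly initialised toy network rather than a real LLM: the final output is a single number, but the intermediate activations that produced it have no obvious human-readable meaning.

```python
import numpy as np

# A tiny two-layer network with random (purely illustrative) weights.
rng = np.random.default_rng(1)
x = rng.standard_normal(4)          # some input
W1 = rng.standard_normal((8, 4))    # first-layer weights
W2 = rng.standard_normal((1, 8))    # second-layer weights

hidden = np.maximum(0, W1 @ x)      # the intermediate neurons' "decisions" (ReLU)
output = W2 @ hidden                # the network's final decision

print("hidden activations:", np.round(hidden, 2))  # hard to interpret directly
print("final output:", np.round(output, 2))
```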

The 200-year-old answer

However, recent research suggests that 200-year-old maths could help explain how neural networks perform complex tasks, and could improve both their accuracy and their learning speed, the researchers say. To analyse a deep neural network trained on physics problems, senior author Pedram Hassanzadeh, a fluid dynamicist, and his colleagues turned to Fourier analysis, a mathematical technique often employed in physics.

The researchers analysed the deep neural network’s governing equations. Each model has roughly one million parameters, the weights connecting neurons that are adjusted during training. These parameters were assembled into around 40,000 five-by-five matrices, and the analysis revealed that they behaved like a combination of low-pass, high-pass, and Gabor filters.
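The paper’s actual analysis is more involved, but the sketch below, which uses a random stand-in for one learned five-by-five kernel, shows the kind of Fourier inspection being described: look at the kernel’s 2-D frequency spectrum and ask whether its energy sits at low or high frequencies.

```python
import numpy as np

# Illustrative sketch, not the researchers' code: inspect the 2-D Fourier
# spectrum of a 5x5 weight matrix to judge whether it acts more like a
# low-pass or a high-pass filter.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((5, 5))   # stand-in for one learned 5x5 kernel

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(kernel)))

# Energy near the centre of the shifted spectrum = low frequencies;
# the rest = high frequencies.
low_freq_energy = spectrum[1:4, 1:4].sum()
high_freq_energy = spectrum.sum() - low_freq_energy

if low_freq_energy > high_freq_energy:
    print("behaves more like a low-pass filter")
else:
    print("behaves more like a high-pass filter")
```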

All said and done, “as much as neural networks are called black boxes, we can take them apart and try to understand what they do, and connect their inner workings with the physics and maths we know about physical systems,” Hassanzadeh says. “There is a major need for this in scientific machine learning.”

Trial, Error, and Trial

AI researchers at Meta AI, Princeton University, and the Massachusetts Institute of Technology jointly attempted to provide a theoretical framework, ‘The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks’, to answer the question of why neural networks are a black box.

In a blog post, Meta AI research scientist Sho Yaida noted that AI is at the same juncture as steam engines at the beginning of the Industrial Revolution. Though the steam engine changed manufacturing forever, he said, scientists could not fully explain how and why it worked until the laws of thermodynamics and the principles of statistical mechanics were developed over the following century; many of the improvements to the steam engine came from trial and error.

Understanding the theory requires a sophisticated understanding of physics. The important thing is that it will enable AI theorists to push for a deeper and more complete understanding of neural networks, said Yaida, who collaborated with Dan Roberts of MIT and Boris Hanin of Princeton on the research.

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
