
Former OpenAI Researcher Andrej Karpathy Unveils Tokenisation Tutorial, Decodes Google’s Gemma 

'In this lecture, we build from scratch the Tokenizer used in the GPT series from OpenAI'



Former OpenAI researcher Andrej Karpathy has released a new tutorial on LLM tokenisation. Introducing the course, Karpathy says, “In this lecture, we build from scratch the Tokenizer used in the GPT series from OpenAI.”

Tokenizers are a completely separate stage of the LLM pipeline: they have their own training set and training algorithm (Byte Pair Encoding), and, once trained, they implement two functions: encode(), from strings to tokens, and decode(), back from tokens to strings.
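To make those two functions concrete, here is a minimal byte-level BPE sketch in Python. It is a generic toy implementation for illustration, not Karpathy’s code; see the lecture and his repository for the real thing.

```python
from collections import Counter

def get_stats(ids):
    """Count frequencies of adjacent token-id pairs."""
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train(text, vocab_size):
    """Learn BPE merges on raw UTF-8 bytes until vocab_size is reached."""
    ids = list(text.encode("utf-8"))   # start from raw bytes (ids 0..255)
    merges = {}                        # (id, id) -> merged id, in learned order
    for new_id in range(256, vocab_size):
        stats = get_stats(ids)
        if not stats:
            break
        pair = max(stats, key=stats.get)   # most frequent adjacent pair
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return merges

def encode(text, merges):
    """Strings -> tokens: greedily apply the earliest-learned merge first."""
    ids = list(text.encode("utf-8"))
    while len(ids) > 1:
        stats = get_stats(ids)
        pair = min(stats, key=lambda p: merges.get(p, float("inf")))
        if pair not in merges:
            break
        ids = merge(ids, pair, merges[pair])
    return ids

def decode(ids, merges):
    """Tokens -> strings: expand merged ids back to bytes, then to text."""
    vocab = {i: bytes([i]) for i in range(256)}
    for (a, b), new_id in merges.items():   # dicts preserve insertion order
        vocab[new_id] = vocab[a] + vocab[b]
    return b"".join(vocab[i] for i in ids).decode("utf-8", errors="replace")

merges = train("aaabdaaabac", 256 + 3)
ids = encode("aaabdaaabac", merges)
assert decode(ids, merges) == "aaabdaaabac"
```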

“We will see that a lot of weird behaviors and problems of LLMs actually trace back to tokenization. We’ll go through a number of these issues, discuss why tokenization is at fault, and why someone out there ideally finds a way to delete this stage entirely,” said Karpathy.

Furthermore, he has released a new repository on GitHub named ‘minbpe’, containing minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenisation. The repository is available at https://github.com/karpathy/minbpe.
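As a quick illustration, a usage sketch along the lines of the repository’s README at the time of release (the API may have changed since; the printed ids are indicative):

```python
from minbpe import BasicTokenizer

tokenizer = BasicTokenizer()
text = "aaabdaaabac"
tokenizer.train(text, 256 + 3)  # 256 raw byte tokens plus 3 learned merges
ids = tokenizer.encode(text)
print(ids)                      # e.g. [258, 100, 258, 97, 99]
assert tokenizer.decode(ids) == text
```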

Karpathy recently departed from OpenAI. He confirmed his departure in a post on X, saying it was solely so he could focus on personal projects. During his time at OpenAI, he contributed to the development of an AI assistant, collaborating closely with the company’s research head, Bob McGrew.

Decodes Google’s Gemma

Karpathy also analysed the tokenizer of Gemma, Google’s recently released open-source model. To understand it, he decoded the model protobuf in Python and presented a detailed comparison with the Llama 2 tokenizer.
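For readers who want to reproduce the inspection, a minimal sketch, assuming the sentencepiece package is installed and using a placeholder path for Gemma’s tokenizer file:

```python
from sentencepiece import sentencepiece_model_pb2 as model_pb2

# Parse the SentencePiece model protobuf directly.
m = model_pb2.ModelProto()
with open("tokenizer.model", "rb") as f:  # placeholder path to the Gemma file
    m.ParseFromString(f.read())

print(len(m.pieces))      # vocabulary size: ~256K for Gemma vs 32K for Llama 2
print(m.trainer_spec)     # training settings, including model_prefix
print(m.normalizer_spec)  # normalization settings, including add_dummy_prefix
```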

Key observations from the comparison include a substantial increase in vocabulary size, from 32K to 256K tokens. Gemma also departs from the Llama 2 tokenizer by setting “add_dummy_prefix” to False, aligning with GPT practice and keeping preprocessing minimal.
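To see what that flag changes in practice, a sketch under stated assumptions (paths are placeholders, and the exact pieces depend on the actual model files): with add_dummy_prefix=True, SentencePiece prepends a space to the input before encoding, which surfaces as the word-boundary marker “▁” on the first token.

```python
import sentencepiece as spm

# Placeholder paths; substitute real tokenizer.model files.
llama = spm.SentencePieceProcessor(model_file="llama2/tokenizer.model")
gemma = spm.SentencePieceProcessor(model_file="gemma/tokenizer.model")

# Llama 2 (add_dummy_prefix=True): a space is prepended first, so the
# leading piece typically carries the "▁" boundary marker.
print(llama.encode("world", out_type=str))  # e.g. ['▁world']

# Gemma (add_dummy_prefix=False): the raw string is tokenized as-is,
# closer to GPT-style behaviour.
print(gemma.encode("world", out_type=str))  # e.g. ['world']
```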

Noteworthy aspects of the Gemma tokenizer include its model_prefix field, which records the path of the training dataset and hints at a training corpus of approximately 51GB. The presence of numerous user-defined symbols, including special tokens for newline sequences and HTML elements, adds complexity to Gemma’s tokenisation process, as the sketch below illustrates.
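Continuing the protobuf sketch above, those user-defined symbols can be listed by filtering the pieces on their type field (again assuming a local copy of the model file at a placeholder path):

```python
from sentencepiece import sentencepiece_model_pb2 as model_pb2

m = model_pb2.ModelProto()
with open("tokenizer.model", "rb") as f:  # placeholder path to the Gemma file
    m.ParseFromString(f.read())

# USER_DEFINED pieces are matched verbatim before BPE runs; in Gemma
# these include runs of newlines and HTML tags.
user_defined = [p.piece for p in m.pieces
                if p.type == model_pb2.ModelProto.SentencePiece.USER_DEFINED]
print(len(user_defined), user_defined[:10])
```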

In summary, Gemma’s tokenizer shares its foundations with the Llama 2 tokenizer but distinguishes itself through a larger vocabulary, more special tokens, and a different choice for the “add_dummy_prefix” setting. Karpathy’s exploration sheds light on the nuances of Gemma’s tokenisation methodology.
