Former OpenAI researcher Andrej Karpathy has released a new tutorial on LLM tokenisation. Introducing the course, Karpathy said, “In this lecture, we build from scratch the Tokenizer used in the GPT series from OpenAI. Tokenizers are a completely separate stage of the LLM pipeline: they have their own training set, training algorithm (Byte Pair Encoding), and after training implement two functions: encode() from strings to tokens, and decode() back from tokens to strings.”
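The train/encode/decode round-trip Karpathy describes can be sketched in a few lines of pure Python. This is an illustrative toy working on raw UTF-8 bytes, not the minbpe implementation itself:

```python
from collections import Counter

def merge(ids, pair, new_id):
    # Replace every occurrence of `pair` in the sequence with `new_id`
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train(text, num_merges):
    # Start from raw UTF-8 bytes (ids 0..255), then greedily merge
    # the most frequent adjacent pair into a new token id
    ids = list(text.encode("utf-8"))
    merges = {}  # (id, id) -> new id, in the order they were learned
    for k in range(num_merges):
        pairs = Counter(zip(ids, ids[1:]))
        if not pairs:
            break
        pair = pairs.most_common(1)[0][0]
        new_id = 256 + k
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return merges

def encode(text, merges):
    # Re-apply the learned merges, in training order
    ids = list(text.encode("utf-8"))
    for pair, new_id in merges.items():
        ids = merge(ids, pair, new_id)
    return ids

def decode(ids, merges):
    # Expand each token id back into the bytes it was built from
    vocab = {i: bytes([i]) for i in range(256)}
    for (a, b), new_id in merges.items():
        vocab[new_id] = vocab[a] + vocab[b]
    return b"".join(vocab[i] for i in ids).decode("utf-8")
```

Training on a string and then encoding it yields a shorter token sequence that decodes back to the original text, which is the invariant the two functions must preserve.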
“We will see that a lot of weird behaviors and problems of LLMs actually trace back to tokenization. We’ll go through a number of these issues, discuss why tokenization is at fault, and why someone out there ideally finds a way to delete this stage entirely,” said Karpathy.
Furthermore, he has released a new repository on GitHub named ‘minbpe.’ It contains minimal, clean code for the Byte Pair Encoding (BPE) algorithm, commonly used in LLM tokenization. The repository can be found at: https://github.com/karpathy/minbpe.
Karpathy recently departed from OpenAI. He confirmed his departure in a post on X, saying the move was solely so he could focus on personal projects. During his time at OpenAI, he actively contributed to the development of an AI assistant, collaborating closely with the company’s research head, Bob McGrew.
Decodes Google’s Gemma
Moreover, Karpathy also analysed the tokenizer of Gemma, Google’s recently released open-source model. To understand it, he decoded the model protobuf in Python and presented a detailed comparison with the Llama 2 tokenizer.
Key observations from the comparison include a substantial increase in vocabulary size, from 32K to 256K tokens. Gemma also departs from the Llama 2 tokenizer by setting “add_dummy_prefix” to False, aligning with GPT practice and keeping preprocessing minimal.
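The “add_dummy_prefix” option comes from SentencePiece: when enabled (as in Llama 2), a space is prepended to the input before tokenization so that a word at the start of the text maps to the same “▁word” piece as one in mid-sentence. A rough sketch of the preprocessing difference (the function is illustrative, not Gemma’s or SentencePiece’s actual code):

```python
def preprocess(text, add_dummy_prefix=True):
    # SentencePiece-style pre-tokenization sketch: spaces are rewritten
    # as the visible U+2581 "lower one eighth block" marker, and the
    # dummy prefix adds one such marker before the first word
    if add_dummy_prefix and not text.startswith(" "):
        text = " " + text
    return text.replace(" ", "\u2581")
```

With the prefix enabled, `preprocess("hello world")` yields “▁hello▁world”; with it disabled, as in Gemma and GPT-style tokenizers, the same call yields “hello▁world”, so the raw text is tokenized as-is.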
Noteworthy aspects of the Gemma tokenizer include its model_prefix field, which records the path of the training dataset and hints at a substantial training corpus of approximately 51GB. The large set of user-defined symbols, including special tokens for newline sequences and HTML elements, adds complexity to Gemma’s tokenization process.
In summary, Gemma’s tokenizer shares a foundational similarity with the Llama 2 tokenizer but distinguishes itself through its larger vocabulary, its additional special tokens, and its different handling of the “add_dummy_prefix” setting. Karpathy’s exploration sheds light on the nuances of Gemma’s tokenization methodology.