NVIDIA researchers developed RULER, a synthetic benchmark for evaluating long-context large language models (LLMs) across four task categories: retrieval, multi-hop tracing, aggregation, and question answering.
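RULER generates its test inputs programmatically, with configurable sequence length and complexity. As a rough, hypothetical sketch of the idea (not RULER's actual generator; the filler text and prompt template here are illustrative), a single-needle retrieval example can be built by hiding a key-value "needle" at a random depth in filler text:

```python
import random
import string

def make_retrieval_example(n_filler_words: int) -> tuple[str, str]:
    """Hide a key-value 'needle' at a random depth in filler text and
    ask the model to retrieve the value (illustrative sketch only)."""
    key = "".join(random.choices(string.ascii_lowercase, k=8))
    value = "".join(random.choices(string.digits, k=6))
    needle = f"The special magic number for {key} is {value}."

    # Repeat a filler sentence until we reach the target word count.
    filler = ("The grass is green. The sky is blue. " * n_filler_words).split()[:n_filler_words]
    filler.insert(random.randint(0, len(filler)), needle)  # random insertion depth

    prompt = " ".join(filler) + f"\nWhat is the special magic number for {key}?"
    return prompt, value  # a correct model output should contain `value`
```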
The study benchmarked ten long-context models with RULER on 13 tasks of varying complexity, at context sizes ranging from 4K to 128K tokens.
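Concretely, a run of this shape is a grid over models, tasks, and context lengths. A minimal harness might look like the sketch below, where `model.generate` and the task generators are stand-ins rather than RULER's actual API, and exact substring matching stands in for RULER's per-task metrics:

```python
CONTEXT_LENGTHS = [4_096, 8_192, 16_384, 32_768, 65_536, 131_072]

def evaluate(model, tasks: dict, n_examples: int = 500) -> dict:
    """Score one model on every (task, context length) cell.

    `tasks` maps a task name to a generator returning
    (prompt, expected_answer) pairs for a target length.
    """
    scores = {}
    for task_name, make_example in tasks.items():
        for length in CONTEXT_LENGTHS:
            correct = 0
            for _ in range(n_examples):
                prompt, answer = make_example(length)
                output = model.generate(prompt)  # hypothetical inference call
                correct += answer in output
            scores[(task_name, length)] = correct / n_examples
    return scores
```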
The benchmark code is available on GitHub.
The evaluation revealed that, despite achieving nearly perfect results on the needle-in-a-haystack test, all models experienced significant performance drops as input length increased. The top performers, including GPT-4, Command-R, Yi-34B, and Mixtral, maintained satisfactory performance at a length of 32K, while the others struggled at larger context sizes.
The researchers also examined the impact of training context length, model size, and architecture on performance. Models trained with larger context sizes generally performed better on RULER, although performance rankings varied with longer sequences.
Larger models, such as Yi-34B-200K, outperformed their smaller counterparts, demonstrating the benefit of scaling model size.
Non-Transformer architectures like RWKV-v5 and Mamba-2.8B-slimpj faced significant degradation when extending context size to 8K and underperformed compared to the Transformer baseline Llama2-7B.
The main results showed that although every model claims a context size of 32K tokens or more, none maintained performance above the Llama2-7B baseline at its claimed length, with the exception of Mixtral, which kept moderate performance even at double its claimed 32K context size.
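This baseline comparison effectively defines an "effective" context length: the largest tested length at which a model still scores above Llama2-7B's score at 4K. A minimal sketch of that check, assuming per-length scores have already been collected (all numbers below are made up):

```python
def effective_context_length(model_scores: dict[int, float],
                             baseline_4k_score: float) -> int:
    """Largest tested length at which the model still beats the
    Llama2-7B score measured at 4K; 0 if it never does."""
    effective = 0
    for length in sorted(model_scores):
        if model_scores[length] > baseline_4k_score:
            effective = length
        else:
            break  # stop at the first length that falls below the threshold
    return effective

# Made-up numbers: a model claiming 32K but dipping below the
# baseline after 16K has an effective context length of 16K.
scores = {4_096: 92.1, 8_192: 88.0, 16_384: 86.5, 32_768: 80.0}
print(effective_context_length(scores, baseline_4k_score=85.6))  # 16384
```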
Among all models tested, GPT-4 performed best, posting the highest score at 4K length and the least degradation when the context was extended to 128K.
Additionally, the study found that the top three open-source models (Command-R, Yi-34B, and Mixtral) all use a large base frequency in RoPE and have larger parameter counts. Although LWM, trained with a context size of 1M, performed worse than Llama2-7B at 4K, it degraded less as context size increased, ranking above Mistral-7B under weighted-average evaluation.
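The RoPE base frequency sets how quickly rotation angles grow with token position: a larger base yields smaller per-dimension frequencies, so distant positions rotate less and remain distinguishable at long range. The snippet below illustrates the standard RoPE frequency formula, theta_i = base^(-2i/d), with two example bases (the specific values are illustrative, not the models' actual settings):

```python
import numpy as np

def rope_inv_freq(base: float, head_dim: int = 128) -> np.ndarray:
    """Standard RoPE inverse frequencies: theta_i = base^(-2i/d)."""
    i = np.arange(0, head_dim, 2)
    return base ** (-i / head_dim)

# A larger base shrinks every frequency, so the rotation angle
# (position * theta_i) stays small even at very long positions.
for base in (10_000.0, 1_000_000.0):
    theta = rope_inv_freq(base)
    angle_at_128k = 128_000 * theta[-1]  # slowest-rotating dimension
    print(f"base={base:>9.0f}  min theta={theta[-1]:.2e}  angle@128k={angle_at_128k:.2f} rad")
```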
By open-sourcing RULER, the researchers aim to encourage comprehensive evaluation and further research on long-context modeling, an area they note still has significant room for improvement.