
GPT-4 Turbo is ‘Lost in the Middle’

OpenAI's GPT-4 Turbo, despite its extensive 128k context window, has failed to revolutionise things because the 'Lost in the Middle' phenomenon undermines its information recall accuracy.


Last week, OpenAI held its first-ever developers' conference, where it announced GPT-4 Turbo, an enhanced iteration of GPT-4. It features an expansive 128k context window, enabling it to process the equivalent of over 300 pages of text in a single prompt.

The upgrade also extends the model's knowledge cutoff to April 2023. Several other announcements, spanning open-source models and developer tools, addressed areas where OpenAI had previously lagged behind the competition.

These announcements drew the attention of the AI world, as they seemingly sounded the death knell for many AI startups.

A week later, however, the hype has subsided and not much has changed. Perhaps GPT-4 Turbo was not as revolutionary as it sounded.

The Context Length Extension Issue

In a late-July study, researchers from Stanford University, UC Berkeley, and Samaya AI described a phenomenon in large language models termed 'Lost in the Middle': retrieval accuracy is high for information at a document's start and end but declines for information in the middle, and the decline worsens as the input grows longer.

Building on this, Greg Kamradt, Shawn Wang, and Jerry Liu tested whether GPT-4 Turbo exhibited the same effect. Using YC founder Paul Graham's essays as filler text, they inserted a random statement at different points in the document and evaluated GPT-4 Turbo's ability to recall it.
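To make the methodology concrete, here is a minimal sketch of such a 'needle in a haystack' test. The needle text, corpus file, depth grid, and string-match scoring are illustrative assumptions, not the researchers' exact setup.

```python
# A minimal sketch of the 'needle in a haystack' recall test described above.
# The needle text, corpus file, depth grid, and string-match scoring are
# illustrative assumptions, not the researchers' exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"

def build_haystack(corpus: str, depth: float, n_chars: int) -> str:
    """Trim the corpus to n_chars and insert the needle at a relative depth (0.0-1.0)."""
    context = corpus[:n_chars]
    pos = int(len(context) * depth)
    return context[:pos] + "\n" + NEEDLE + "\n" + context[pos:]

def needle_recalled(corpus: str, depth: float, n_chars: int) -> bool:
    prompt = build_haystack(corpus, depth, n_chars)
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": prompt + "\n\n" + QUESTION},
        ],
    )
    answer = response.choices[0].message.content or ""
    return "Dolores Park" in answer  # crude exact-phrase check

# Hypothetical local file holding the concatenated essays.
essays = open("paul_graham_essays.txt").read()
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    # ~400k characters is roughly 100k tokens.
    print(depth, needle_recalled(essays, depth, n_chars=400_000))
```

Sweeping the needle depth and the context size in this way produces the recall-versus-position grid the tests are known for.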

The findings showed decreased recall above 73,000 tokens, particularly for statements placed mid-document, underscoring the impact of context length on accuracy. In practice, accuracy typically starts to drop once the input reaches 60-70% of the context length an LLM supports.

Opting for smaller context-length inputs is recommended for accuracy, even with the advent of long-context LLMs. Notably, facts at the input’s beginning and end are better retained than those in the middle.

That said, for the same input, a 128K context-length LLM performs better than a 32K one, which suggests pairing large context-length LLMs with relatively small documents. The 'forgetting problem' remains a challenge, requiring ongoing work on multi-component LLM applications and prompt engineering.

While larger context windows, such as those offered by advanced language models like GPT-4, allow for more extensive data processing in a single prompt, embedded search functions or vector databases remain superior in terms of accuracy and cost-effectiveness, particularly for specific information retrieval tasks.

Vector databases specialise in organising and retrieving information based on semantic similarities, offering a more targeted and efficient approach. These systems are designed to excel in precision, ensuring that the retrieved information aligns closely with the user’s query.

Additionally, the focused nature of embedded search functions often results in reduced computational costs, making them an optimal choice for specific and precise data retrieval needs.
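As a rough illustration of this approach, the sketch below performs a semantic search with OpenAI embeddings and cosine similarity over an in-memory document list. The sample documents and the embedding model choice are assumptions; a production vector database adds indexing, persistence, and approximate nearest-neighbour search on top of this core idea.

```python
# A rough sketch of embedding-based semantic search with cosine similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts; the model choice here is an assumption."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

documents = [
    "GPT-4 Turbo supports a 128k-token context window.",
    "Vector databases retrieve documents by semantic similarity.",
    "Paul Graham co-founded Y Combinator.",
]
doc_vectors = embed(documents)

def search(query: str, k: int = 2) -> list[str]:
    q = embed([query])[0]
    # Cosine similarity between the query and every document vector.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-sims)[:k]]

print(search("How long is GPT-4 Turbo's context?"))
```

Because only the top-scoring chunks are passed on to the model, the prompt stays short regardless of how large the underlying corpus grows.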

OpenAI’s Retrieval APIs Not the Ultimate Solution

While OpenAI's introduction of retrieval APIs is noteworthy, they work exclusively with GPT-4, and despite a price reduction, scaling usage remains a significant challenge because of the model's high cost.

There are open-source retrieval APIs that are revolutionising enterprise LLM adoption. 

These APIs come equipped with open-source LLMs tailored for enterprise applications, featuring expansive 32K contexts and specialisation in specific enterprise use cases such as Q&A and summarisation.

The cost-effectiveness of these open-source APIs is noteworthy: they are roughly 20 times more economical than GPT-4. Developers also retain the flexibility to switch to closed-source LLMs from OpenAI, Anthropic, or Google if that better suits their needs.

Furthermore, if a customised fine-tuned LLM is essential and a developer possesses labelled data, these providers offer to fine-tune the LLM to meet those specific requirements. In many instances, combining Retrieval-Augmented Generation (RAG) with fine-tuning yields the best results.
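For reference, the RAG pattern itself is straightforward: retrieve relevant snippets, then ask the model to answer using only those snippets. Below is a minimal sketch; the toy in-memory retriever and the model name are placeholder assumptions, and a real pipeline would swap in a vector store like the one sketched earlier.

```python
# A minimal sketch of the RAG pattern: retrieve, then generate an answer
# grounded in the retrieved context.
from openai import OpenAI

client = OpenAI()

def retrieve(query: str, k: int = 3) -> list[str]:
    """Toy keyword retriever; in practice this would be a vector-store
    similarity search like the one sketched earlier."""
    knowledge_base = {
        "context window": "GPT-4 Turbo accepts up to 128k tokens per prompt.",
        "recall": "Recall degrades for facts placed mid-way through long inputs.",
    }
    return [text for key, text in knowledge_base.items() if key in query.lower()][:k]

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": "Answer strictly from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content or ""

print(rag_answer("How large is the context window?"))
```

Fine-tuning then shapes how the model uses the retrieved context, while retrieval keeps the facts themselves fresh.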

Since an enterprise's internal knowledge base is constantly evolving, the challenge is to avoid repeatedly re-uploading data each time it changes. Enterprise clients typically store their data in cloud repositories such as Azure, GCP, and S3.

The open-source retrieval APIs facilitate a seamless connection to these cloud buckets, ensuring regular updates without manual intervention. Moreover, this functionality extends to pulling in data from various sources, including Confluence or any cloud database like Snowflake, Databricks, and others, enhancing versatility and adaptability.

While the intricacies are abstracted away for a seamless experience, the open-source retrieval APIs let users delve into the details and fine-tune parameters as needed. Although these APIs make sensible default decisions on chunking and embedding strategies based on the dataset and the API request, users retain the ability to adjust them manually.
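As an example of the kind of parameter a user might tune by hand, the sketch below shows a simple fixed-size chunking strategy with overlap, applied before embedding. The function, its defaults, and the file path are hypothetical illustrations, not any specific API's interface.

```python
# A simple fixed-size chunking strategy with overlap, the kind of knob a
# retrieval API might expose for manual tuning.
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping character chunks so that facts falling on
    a chunk boundary still appear whole in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i : i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

document = open("handbook.txt").read()  # hypothetical enterprise document
chunks = chunk_text(document, chunk_size=1024, overlap=128)
print(f"{len(chunks)} chunks ready for embedding")
```

Smaller chunks give more precise retrieval hits; larger chunks preserve more surrounding context per hit. The right trade-off depends on the documents and queries involved.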

In the realm of enterprise operations, establishing pipelines and robust monitoring systems is indispensable. Connecting to diverse data sources, ensuring regular updates to vector stores, and meticulous indexing are vital components.

The Retrieval API fundamentally streamlines the development of LLM applications on your data, offering a quick start within a few hours. It emerges as the optimal choice, especially for those emphasising cost-effectiveness and scalability in Retrieval/RAG processes.


Tausif Alam

Tausif Alam has been covering technology for almost a decade now. He is keen about connecting dots and bringing a wholesome picture of the tech world.