
Challenging Bigtech’s Generative AI Narrative

“Although large foundational models certainly have their place, companies are trying to shove 20 use cases into one model,” says the co-founder of Arcee.ai


Almost a year ago, people were handed a human-mimicking technology they hadn’t asked for. Since the launch of OpenAI’s chatbot, no one has been spared from learning how tools akin to ChatGPT can make their lives better, personally and professionally.

At the foundation of these tools are large language models (LLMs), built on millions of data points and billions of dollars. However, these big tech chatbots have yet to reap results.

“Essentially, we’re against the narrative of OpenAI, Anthropic and Cohere. We’re much more aligned with the open source side, which tends to lean towards smaller, specialised models as opposed to one model to rule them all,” says Mark McQuade, co-founder of Arcee.ai, an AI startup that develops domain-specific LLMs.

“Although large foundational models certainly have their place, companies are trying to shove 20 use cases into one model. Each use case should have its own small language model in order for that to be scalable and efficient,” he added.

For example, a customer support language model doesn’t have to be good at poetry. Using a giant generalist model is like owning a thousand-piece toolset when all you need is a single screwdriver. Smaller, specialised models can also be trained far more efficiently.

Besides, the larger the model, the greater the possibility of hallucinations, because a mass of unnecessary data dilutes the importance of the core data.

“But that’s just the foundation, and not how you gather the best return on investment out of an LLM,” noted McQuade, who previously served as the ML success and business development lead at Hugging Face, the open source platform.

Seeing the opportunity, McQuade and his team built an end-to-end RAG system that sits on top of the main LLM. The way to get the most ROI, he said, is to pair an in-domain specialised model with that system.

A report published by The Wall Street Journal two weeks ago brought to light how big tech companies have yet to generate profits from their generative AI products. Citing a person with knowledge of the figures, it stated that Microsoft has lost money on the first of its generative AI products.

Microsoft and Google are now launching AI-backed upgrades to their software with higher price tags. Zoom Video Communications has also tried to mitigate costs by sometimes using a simpler AI it developed in-house.

McQuade elaborated on the team’s own form of retrieval-augmented generation (RAG).

“The RAG that you see today is really glorified prompt engineering. The most common standard RAG flow is completely unaware of the context of your data: it simply performs a lookup in your data and sends the retrieved results plus the original query to GPT,” he added.
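For context, the flow he is criticising looks roughly like the sketch below: embed the query, fetch the nearest chunks, paste them into a prompt, and call GPT. This is a generic illustration, not Arcee’s code; the `embed` function and `vector_index` object are hypothetical placeholders.

```python
# Minimal sketch of the "standard" RAG flow: a context-blind vector lookup
# whose results are stuffed into the prompt. `embed` and `vector_index` are
# hypothetical stand-ins for an embedding model and a vector store.
from openai import OpenAI

client = OpenAI()

def naive_rag(query: str, vector_index, embed, k: int = 3) -> str:
    # 1. The retriever is a pure lookup, with no awareness of the task
    #    or of how the generator will use the results.
    chunks = vector_index.search(embed(query), top_k=k)

    # 2. "Glorified prompt engineering": concatenate the lookup results
    #    with the original query and hand everything to GPT.
    prompt = (
        "Answer using only this context:\n"
        + "\n".join(chunks)
        + f"\n\nQuestion: {query}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```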

His team instead trains the entire RAG architecture end to end on the data provided. The retriever and generator models are trained simultaneously, as one system, so they feed off each other and become much more contextually aware of the data. After the system is tuned, users can query it for inference and keep adding data to their knowledge base, as they would in a typical RAG system.
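For a rough sense of what joint training involves, the sketch below uses the reference RAG classes in Hugging Face’s transformers library, where a single loss backpropagates through the question encoder and the generator at once. This is a generic illustration of the mechanism, not Arcee’s system; the end-to-end variant in the 2021 paper mentioned below also updates the passage encoder and re-indexes the knowledge base during training.

```python
# Sketch of jointly training a RAG retriever and generator with the
# reference classes in Hugging Face transformers. Illustrative only.
import torch
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One (question, answer) pair from the domain corpus.
question = tokenizer.question_encoder(
    "What does Arcee build?", return_tensors="pt"
)
labels = tokenizer.generator(
    "Small, domain-specific language models.", return_tensors="pt"
).input_ids

# A single loss spans retrieval and generation, so gradients flow into the
# question encoder and the generator together: they "feed off each other".
outputs = model(input_ids=question["input_ids"], labels=labels)
outputs.loss.backward()
optimizer.step()
```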

“The smaller system greatly reduces hallucinations,” he stated. Apart from fewer hallucinations, there is also a drastic difference in cost. “Two billion tokens hitting GPT-4 costs about $360,000. But two billion tokens hitting our system, if it runs inside a virtual private cloud (VPC), is about $30,000 for the cost of compute,” McQuade said.
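A back-of-the-envelope calculation shows where figures of that order can come from. The per-1,000-token rates below are assumptions chosen only to reproduce the numbers quoted; they are not published prices.

```python
# Hypothetical cost comparison; both rates are illustrative assumptions.
TOKENS = 2_000_000_000  # two billion tokens

# Assumed blended GPT-4 API rate, USD per 1K tokens (actual pricing varies
# by model variant and input/output split).
gpt4_cost = TOKENS / 1_000 * 0.18   # -> $360,000

# Assumed effective compute rate for a small model self-hosted in a VPC.
vpc_cost = TOKENS / 1_000 * 0.015   # -> $30,000

print(f"GPT-4: ${gpt4_cost:,.0f} | VPC: ${vpc_cost:,.0f} | "
      f"ratio: {gpt4_cost / vpc_cost:.0f}x")
```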

Interestingly, the team’s lead NLP researcher authored a 2021 paper titled ‘Fine-tune the Entire RAG Architecture for Question-Answering’, and spent four years of his PhD thesis attacking the domain adaptation of LLMs.

The Experimental Phase

When cloud computing entered the market in 2006, people started playing with it. The adoption curve was slow, but everyone is on the cloud today. McQuade believes generative AI will follow the same path.

“People need to test it, and that’s what they’re doing. We firmly believe in a world of millions, if not billions, of models: essentially, a model per task. On the closed source side, you’re going to get bigger multimodal models, and on the open source side, they’re going to get smaller and more efficient. That will be a great battle,” he said.

This explains why Microsoft, AWS, and Google are all backing Meta’s LLaMA or integrating it into their offerings.

He is betting that the bigger model will not win. From a technology standpoint, he sees the next really big things as multimodality, agents, and synthetic dataset generation. Agents will allow users not only to get responses from LLMs but to complete tasks. “We are focusing on synthetic dataset generation, and language models are only as good as their data,” McQuade shared.

Data is the hardest piece of any model, whether in training or fine-tuning, and there has been a big push recently towards generating high-quality synthetic data. That will be one of the biggest waves over the next three to six months, as teams will no longer need to rely solely on messy, unstructured data, McQuade concluded.
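As a rough illustration of the idea, the sketch below asks a general-purpose LLM to turn a raw passage into structured question-answer pairs that could later feed the fine-tuning of a small, specialised model. The prompt, model choice, and function are illustrative assumptions, not a description of Arcee’s pipeline.

```python
# Illustrative synthetic-data sketch: distil messy, unstructured text into
# structured Q&A pairs with an LLM. Not Arcee's actual pipeline.
import json
from openai import OpenAI

client = OpenAI()

def synthesize_qa(passage: str, n: int = 3) -> list[dict]:
    prompt = (
        f"Write {n} question-answer pairs grounded strictly in the passage "
        "below. Respond with only a JSON list of objects that have "
        "'question' and 'answer' keys.\n\n"
        f"Passage:\n{passage}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    # A production pipeline would validate and filter the output; this
    # sketch assumes the model returns well-formed JSON.
    return json.loads(response.choices[0].message.content)
```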


Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.