
Can LangChain Survive the Multi-Agent Invasion?

Multi-agents like AutoGen are growing, raising questions on LangChain's future.



The past few weeks have been quite exciting in the LLM landscape, with the significant rise of multi-agent frameworks like XAgent, AutoGen, MetaGPT, and BabyAGI, among others. Many developers are aggressively experimenting with them to solve maths problems, run dynamic group chats, do multi-agent coding, build retrieval-augmented generation (RAG) chat, create AI chatbots in simulated environments, and play conversational chess, among other tasks.

All of these developments raise questions about the relevance of LangChain in the new era of multi-agents. Ironically, in an AMA on Reddit, Harrison Chase, co-founder of LangChain, said, “No one really knows where LangChain will go.”

When asked if they intend to move from building single agents to multi-agent frameworks like AutoGen, he responded: “Yes we are considering it. The main thing blocking us from investing more is the concrete use cases where multi agent frameworks are actually helpful.”

For now, LangChain’s focus has been on refining smaller, specialised components, recognising the inherent difficulty of that task. One example is LangSmith, which is designed to expedite debugging and the transition from prototype to production for such applications. The primary emphasis has been on chains and a single base AgentExecutor.

At the same time, a lot of developers are experimenting with combining AutoGen and LangChain. Interestingly, it works. AutoGen currently doesn’t support connections to external data sources in its native framework, and LangChain steps in to fill that gap.
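The shape of that integration can be sketched in plain Python. This is a conceptual stand-in, not real AutoGen or LangChain API calls (both require model credentials): LangChain plays the role of a retriever over external data, exposed as a tool that a conversational agent can invoke.

```python
# Conceptual sketch of the integration pattern: a retrieval step (the role
# LangChain plays) exposed as a tool that a conversational agent (the role
# AutoGen plays) can call. All names and the keyword-matching "retriever"
# are illustrative stand-ins, not real library APIs.

def langchain_style_retriever(query: str, docs: list[str]) -> list[str]:
    """Stand-in for a LangChain retriever over an external data source."""
    return [d for d in docs if any(w in d.lower() for w in query.lower().split())]

class ToolUsingAgent:
    """Stand-in for an AutoGen agent that can invoke registered tools."""
    def __init__(self):
        self.tools = {}

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def ask(self, query, docs):
        # The agent delegates data access to the registered retriever tool.
        hits = self.tools["retrieve"](query, docs)
        return f"Found {len(hits)} relevant document(s) for '{query}'."

docs = ["LangChain connects LLMs to data sources.",
        "AutoGen orchestrates multi-agent conversations."]
agent = ToolUsingAgent()
agent.register_tool("retrieve", langchain_style_retriever)
print(agent.ask("data sources", docs))  # → Found 1 relevant document(s) for 'data sources'.
```

The key idea is the division of labour: the agent framework owns the conversation loop, while the retrieval library owns data access, connected through a tool interface.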

So far, so good

LangChain marked its one-year anniversary this month. During this time, it has gained popularity and, at the same time, received criticism for its inefficiencies. As of today, it has 65.8k stars on GitHub and has been built from over 5,000 contributions by more than 1,500 contributors.

The community-led open-source platform has developers who use it extensively, while the frustrated lot have gone on to build their own alternatives. In contrast, the mushrooming AI multi-agent frameworks are still in their early days, and many developers are still experimenting with them, unlike LangChain, which has real industrial and business use cases.

LangChain vs AutoGen

The primary difference between them is that LangChain is a framework for building agents: it provides the tools and infrastructure needed to create and deploy them. AutoGen, on the other hand, is a framework in which multiple agents converse with one another to complete tasks.

Within the LangChain framework, there exists a subpackage called LangChain Agents, specifically designed for harnessing LLMs to make decisions and take actions.

LangChain Agents encompasses a variety of agent types, one of which is the ReAct agent. The ReAct agent is particularly noteworthy as it interleaves reasoning and acting when utilising LLMs. It was primarily designed for LLMs that predate the capabilities of ChatGPT.
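The ReAct pattern the agent implements can be sketched as a loop: the model alternates between a reasoning step (“Thought”), a tool call (“Action”), and reading the result (“Observation”) until it produces a final answer. This is a minimal, self-contained illustration with a scripted stand-in for the LLM, not LangChain’s actual implementation.

```python
# Minimal ReAct-style loop. scripted_llm stands in for a real LLM call;
# a real agent would send the growing transcript to the model each turn.

def calculator(expr: str) -> str:
    return str(eval(expr))  # toy tool; never eval untrusted input in practice

TOOLS = {"calculator": calculator}

def scripted_llm(transcript: str) -> str:
    # Stand-in: emits one reasoning/action step, then a final answer
    # once it has seen a tool observation.
    if "Observation" not in transcript:
        return "Thought: I should compute this.\nAction: calculator[2 + 3]"
    return "Final Answer: 5"

def react_agent(question: str) -> str:
    transcript = f"Question: {question}"
    for _ in range(5):  # cap the loop to avoid runaway agents
        step = scripted_llm(transcript)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[argument]" and run the tool.
        tool, arg = step.split("Action: ")[1].split("[")
        result = TOOLS[tool](arg.rstrip("]"))
        transcript += f"\n{step}\nObservation: {result}"
    return "gave up"

print(react_agent("What is 2 + 3?"))  # → 5
```

The transcript-append-and-resend structure is what lets a plain completion model “act”: every tool result is fed back as text for the next reasoning step.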

It’s important to note that all agents included in LangChain Agents adhere to a single-agent paradigm. In other words, they are designed to function individually and aren’t inherently geared toward inter-agent communication or collaboration.

Due to these identified limitations, the multi-agent systems present in LangChain, such as the re-implementation of CAMEL, are constructed from the ground up and do not rely on LangChain Agents. However, they maintain a connection to LangChain by utilising fundamental orchestration modules provided by LangChain, including AI models wrapped by LangChain and their corresponding interfaces.

AutoGen, by contrast, is focused on building conversational AI applications with multiple agents. It provides a number of features designed specifically for this, such as support for multi-agent conversations and context management.

Another key difference between LangChain and AutoGen is their approach to integrating LLMs with other components. 

LangChain uses a chain-based approach, where each chain consists of a number of components that are executed in sequence. AutoGen, on the other hand, uses a graph-based approach, where components can be connected in different ways to create complex conversational flows.
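The contrast can be made concrete with a toy sketch (pure Python, not either library’s real API): a chain runs every component in a fixed order, while a graph lets each component route to different successors, which is what enables branching conversational flows.

```python
# Conceptual contrast between chain-based and graph-based composition.

def run_chain(steps, data):
    # Chain-style: every step runs, in a fixed sequence.
    for step in steps:
        data = step(data)
    return data

def run_graph(nodes, edges, start, data):
    # Graph-style: after each node runs, a router inspects the output
    # and picks which node (if any) runs next.
    node = start
    while node is not None:
        data = nodes[node](data)
        node = edges[node](data)
    return data

# Chain: lowercase, then strip whitespace — always both, always in order.
print(run_chain([str.lower, str.strip], "  Hello  "))  # → hello

# Graph: route short and long inputs to different handlers.
nodes = {"classify": lambda s: s,
         "short": lambda s: s.upper(),
         "long": lambda s: s[:5] + "..."}
edges = {"classify": lambda s: "short" if len(s) <= 5 else "long",
         "short": lambda s: None,
         "long": lambda s: None}
print(run_graph(nodes, edges, "classify", "hi"))         # → HI
print(run_graph(nodes, edges, "classify", "greetings"))  # → greet...
```

The chain is simpler to reason about; the graph pays for its extra routing machinery with the ability to branch mid-conversation.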

In conclusion

LangChain is essentially a framework that makes it easier to build applications on top of large language models. Applications are typically built as a sequence of steps, or ‘chains’, composed of components such as data sources, API calling, code generation, and data analysis.

This has complicated the framework, and many users feel it is too verbose. The documentation is lacking, the codebase is filled with bugs, and many users have noted that LangChain introduces unnecessary abstractions and indirection, making simple LLM tasks more complex.

Chase, the co-founder of the framework, admitted to some of these flaws on Hacker News and explained that work is being done to fix the issues.

“In the past three weeks, we’ve revamped our documentation structure, changed the reference guide style, and worked on improving docstrings to some of our more popular chains. However, there is still a lot of ground to cover, and we’ll keep on pushing,” he said.

AutoGen takes what LangChain agents can do a step further. Instead of working with one agent at a time, it enables multiple agents to collaborate on task completion through adaptable, conversational, and flexible functions in various modes.

These AutoGen agents seamlessly integrate with LLMs, human inputs, and a range of tools to suit the task’s specific requirements. It’s, in fact, only a matter of time until LangChain introduces multi-agent capabilities. 
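That collaborative loop can be illustrated schematically, in the spirit of AutoGen’s conversable agents: two agents alternate messages, each replying to the other’s last message, until one signals termination. The class, function names, and scripted replies below are stand-ins for LLM-backed agents, not AutoGen’s real API.

```python
# Schematic two-agent exchange: agents alternate replies until one
# emits a termination signal. reply_fn is a scripted stand-in for an
# LLM call in a real framework.

class Agent:
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def reply(self, message):
        return self.reply_fn(message)

def initiate_chat(initiator, responder, message, max_turns=6):
    """Alternate messages between two agents until 'TERMINATE' appears."""
    transcript = [(initiator.name, message)]
    speaker, other = responder, initiator
    while len(transcript) < max_turns:
        message = speaker.reply(message)
        transcript.append((speaker.name, message))
        if "TERMINATE" in message:
            break
        speaker, other = other, speaker  # hand the turn to the other agent
    return transcript

coder = Agent("coder", lambda m: "def add(a, b): return a + b")
reviewer = Agent("reviewer", lambda m: "Looks good. TERMINATE")
log = initiate_chat(reviewer, coder, "Please write an add function.")
for name, msg in log:
    print(f"{name}: {msg}")
```

The loop plus termination check is the essential mechanism: task completion emerges from the exchange itself rather than from a single agent’s plan.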


K L Krithika

K L Krithika is a tech journalist at AIM. Apart from writing tech news, she enjoys reading sci-fi and pondering impossible technologies, trying not to confuse them with reality.