The past few weeks have been quite exciting in the LLM landscape, with the significant rise of multi-agent frameworks like XAgent, AutoGen, MetaGPT, and BabyAGI, among others. Many developers are aggressively experimenting with them for solving maths problems, dynamic group chat, multi-agent coding, retrieval-augmented generation (RAG) chat, building AI chatbots in simulated environments, and conversational chess, among other use cases.
All of these developments bring us to question the relevance of LangChain in the new era of multi-agents. Ironically, in an AMA on Reddit, Harrison Chase, co-founder of LangChain, said, “No one really knows where LangChain will go.”
When asked if they intend to move from building a single agent to multi-agent systems like AutoGen, he responded: “Yes we are considering it. The main thing blocking us from investing more is the concrete use cases where multi agent frameworks are actually helpful.”
LangChain’s current focus has been on refining smaller, specialised components, recognising the inherent difficulty of the task. An example is LangSmith, which is designed to expedite debugging and the transition from prototype to production for such applications. The primary emphasis has been on chains and a single base AgentExecutor.
At the same time, a lot of developers are experimenting with combining AutoGen and LangChain. Interestingly, it works. Currently, AutoGen doesn’t support connections to external data sources in its native framework, and LangChain steps in to fill that gap.
So far, so good
LangChain marked its one-year anniversary this month. During this time, it has gained popularity while also drawing criticism for its inefficiencies. As of today, it has 65.8k stars on GitHub and has been built through 5,000+ contributions from 1,500+ contributors.
The community-led open-source platform has developers who use it extensively, while the frustrated lot have gone on to build their own alternatives. In contrast, the mushrooming AI multi-agent frameworks are still in their early days, and developers are still experimenting with them, unlike LangChain, which has real industrial and business use cases.
LangChain vs AutoGen
The primary difference between them is that LangChain is a framework for building agents, which means it provides the tools and infrastructure needed to create and deploy them. AutoGen, on the other hand, is built around agents that converse with multiple other agents.
Within the LangChain framework, there exists a subpackage called LangChain Agents, specifically designed for harnessing LLMs to make decisions and take actions.
LangChain Agents encompasses a variety of agent types, one of which is the ReAct agent. The ReAct agent is particularly noteworthy as it integrates both reasoning and acting processes when utilising LLMs. It was primarily designed for the completion-style LLMs that preceded ChatGPT.
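To make the ReAct pattern concrete, here is a minimal conceptual sketch of the loop such an agent runs: the model alternates between a “thought”, an “action” (a tool call), and reading the resulting “observation” until it produces a final answer. This is not LangChain’s actual implementation; the LLM is replaced by a scripted stub and the tool registry is invented purely for illustration.

```python
def stub_llm(history):
    """Stand-in for a real LLM: returns the next thought/action given the transcript."""
    if "Observation: 42" in history:
        return "Final Answer: 42"
    return "Thought: I need to look this up.\nAction: lookup[answer to everything]"

# Toy tool registry; a real agent would expose search, calculators, APIs, etc.
TOOLS = {"lookup": lambda query: "42"}

def react_loop(question, max_steps=5):
    history = f"Question: {question}"
    for _ in range(max_steps):
        step = stub_llm(history)
        history += "\n" + step
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]", run the tool, feed back the observation
        action = step.split("Action:")[1].strip()
        tool_name, arg = action.split("[", 1)
        observation = TOOLS[tool_name](arg.rstrip("]"))
        history += f"\nObservation: {observation}"
    return None

print(react_loop("What is the answer to everything?"))  # → 42
```

The key property is that reasoning and tool use are interleaved in a single loop driven by one model, which is exactly why these agents are single-agent by construction.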
It’s important to note that all agents included in LangChain Agents adhere to a single-agent paradigm. In other words, they are designed to function individually and aren’t inherently geared toward communication or collaboration between agents.
Due to these limitations, the multi-agent systems present in LangChain, such as its re-implementation of CAMEL, are constructed from the ground up rather than on top of LangChain Agents. However, they maintain a connection to LangChain by utilising its fundamental orchestration modules, including LangChain-wrapped AI models and their corresponding interfaces.
AutoGen, meanwhile, is more focused on building conversational AI applications with multiple agents. It also provides a number of features specifically designed for such applications, including support for multi-agent conversations and context management.
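The multi-agent conversation pattern can be sketched in a few lines of plain Python: an assistant agent proposes answers while a user-proxy agent replies, the two taking turns until a termination marker appears. Both agents below are stubbed with simple rules; in AutoGen itself the assistant would be LLM-backed and the names here are invented for illustration.

```python
class Agent:
    """A named participant whose replies come from a pluggable function."""
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def reply(self, message):
        return self.reply_fn(message)

def assistant_logic(message):
    # Pretend to solve the task; terminate once asked to revise
    if "revise" in message:
        return "Here is the final solution. TERMINATE"
    return "Here is a draft solution."

def proxy_logic(message):
    return "Please revise and finalise."

def initiate_chat(sender, receiver, message, max_turns=6):
    transcript = [(sender.name, message)]
    current, other = receiver, sender
    for _ in range(max_turns):
        message = current.reply(message)
        transcript.append((current.name, message))
        if "TERMINATE" in message:
            break
        current, other = other, current  # hand the turn to the other agent
    return transcript

assistant = Agent("assistant", assistant_logic)
user_proxy = Agent("user_proxy", proxy_logic)
log = initiate_chat(user_proxy, assistant, "Solve task X.")
```

The turn-taking loop plus a termination condition is the essence of the pattern; context management in a real framework amounts to deciding how much of that transcript each agent sees.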
Another key difference between LangChain and AutoGen is their approach to integrating LLMs with other components.
LangChain uses a chain-based approach, where each chain consists of a number of components that are executed in sequence. AutoGen, on the other hand, uses a graph-based approach, where components can be connected in different ways to create complex conversational flows.
In conclusion
LangChain is essentially a framework that makes it easier to build applications on top of large language models. Applications are typically built as a sequence of steps, or ‘chains’, composed of components such as data sources, API calling, code generation, and data analysis.
This complicated the framework, and many users felt it was too verbose to use. The documentation is lacking, the framework is riddled with bugs, and many users have noted that LangChain introduces unnecessary abstractions and indirection, making simple LLM tasks more complex.
Chase, the co-founder of the framework, admitted to some flaws on HackerNews and explained that work is being done to fix the issues.
“In the past three weeks, we’ve revamped our documentation structure, changed the reference guide style, and worked on improving docstrings to some of our more popular chains. However, there is still a lot of ground to cover, and we’ll keep on pushing,” he said.
AutoGen takes what LangChain agents can do a step further. Instead of working with one agent at a time, AutoGen enables multiple agents to collaborate on task completion, supporting adaptable, conversational, and flexible workflows in various modes.
These AutoGen agents seamlessly integrate with LLMs, human inputs, and a range of tools to suit a task’s specific requirements. It is, in fact, only a matter of time until LangChain introduces multi-agent capabilities.