For some time now, conversational AI has been at the forefront of innovation in AI, and with the advent of powerful tools like ChatGPT, the pace of progress has only accelerated. Recent developments in this sphere have raised fresh questions about how conversational AI will evolve post-ChatGPT.
Analytics India Magazine caught up with Ankush Chopra, the Director of the AI Centre of Excellence at Tredence, to explore the answers to this and more about what it takes to build a good conversational AI chatbot.
AIM: Can you explain the current state of conversational AI and its use cases in various industries?
Ankush: Conversational AI is an umbrella term which includes chatbots, virtual assistants and voice assistants like Alexa or Google Home. Historically, the more a kind of technology is used, the faster that branch advances. And this is exactly what has happened with NLP in the past few years. More people started using voice assistants, and bots came up on the customer service side. It started with simple FAQ bots but has now evolved to ones with more and better functions.
In customer service, whether at banks, telecom companies or airlines, chatbots have been integrated as the first point of contact for clients, lowering the volume of queries that human customer service associates have to handle. In e-commerce, too, conversational AI can help with product recommendations.
In healthcare, the adoption of conversational AI can accelerate with functions like appointment scheduling and diagnosis. But it is advisable to be cautious while using chatbots if the downside of a wrong outcome is huge. For example, we have seen cases where AI systems were used for psychiatric counselling, which did not go well. So, while we are far from using it for critical functions, it can certainly be used for basic triaging of patients, helping with the initial diagnosis, and setting up appointments.
It’s similar in the finance industry, where conversational AI systems are used to give stock quotes or even account balances. But you wouldn’t want them to give real investment advice unless there’s human intervention. In hospitality, some systems can also show you room availability and help with bookings.
In online education, testing modules can be integrated into these systems, which can then help students identify the topics they need to focus more on.
AIM: What are the challenges in building a good conversational AI system?
Ankush: It’s quite a complex task. We need to focus on three main things while building a conversational AI system: One, it should have good conversational capabilities; two, the system should be production-ready, so once it’s designed, it can be used by thousands, if not millions. And finally, it should also take care of the principles of responsible AI.
First of all, to have good conversational capabilities, it should have natural language understanding and generation ability while handling multiple languages, dialects, accents, and various age groups. Secondly, it should have a good dialogue management system to understand and maintain the context of the conversation. Finally, having an efficient knowledge representation system is critical because it improves not only the accuracy of conversational systems but also their speed. Knowledge graphs are a popular choice among the many approaches used for this.
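To make the knowledge-representation point concrete, here is a minimal illustrative sketch (not any specific product's implementation): a knowledge graph stored as (subject, relation, object) triples, which a dialogue system could query to ground its answers. The entities and relations shown are hypothetical examples.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy knowledge graph: facts stored as (subject, relation, object) triples."""

    def __init__(self):
        # subject -> list of (relation, object) pairs
        self.triples = defaultdict(list)

    def add(self, subject, relation, obj):
        self.triples[subject].append((relation, obj))

    def query(self, subject, relation):
        """Return all objects linked to `subject` by `relation`."""
        return [o for r, o in self.triples[subject] if r == relation]

# Hypothetical banking-domain facts a customer-service bot might hold
kg = KnowledgeGraph()
kg.add("savings_account", "has_feature", "zero balance")
kg.add("savings_account", "interest_rate", "3.5%")

print(kg.query("savings_account", "interest_rate"))  # ['3.5%']
```

Because lookups are direct structured queries rather than free-text search, a design like this can make a bot's answers both faster and more consistent, which is the benefit described above.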
Then, the system has to be made production-ready, so scalability and maintainability are very important in this context. For instance, with ChatGPT, we see the platform go down at times because of the sheer number of users. Obviously, they didn’t expect this kind of response, so while designing a system, you need to be able to gauge your user base and be prepared for it.
AIM: How do you think the conversational AI space will evolve in the future, and what will its impact be?
Ankush: We have seen technologies behind conversational AI systems progress a lot lately. Models like ChatGPT have shown the result of this progress. The output generated by ChatGPT has been nothing less than phenomenal (more often than not). We know that Google is working on LaMDA, which is the underlying model for their chatbot (Bard). These success stories will motivate more and more organisations to work on similar models resulting in further improvements.
There are two main challenges with the LLMs that conversational AI systems rely on: firstly, hardware requirements, and secondly, hallucinations. LLMs have billions, or in some cases more than a trillion, parameters, and have high GPU and other system requirements. This makes training, fine-tuning and inference power-hungry, which not only results in high cost but also has a negative impact on the environment. Hallucinations are when conversational AI systems confidently provide incorrect answers; this is currently one of the biggest bottlenecks to using these systems reliably.
I would expect more research to be focused on solving these two problems. We should see a lot more focus on developing smaller language models that perform at the same level as their larger counterparts and identifying efficient knowledge distillation methods that can be applied on LLMs. I also foresee that more research will be happening on detection, correction, and prevention of hallucinations.
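The knowledge distillation mentioned above can be sketched in miniature: a small "student" model is trained to match the softened output distribution of a large "teacher", with the KL divergence between the two distributions as the loss. The logits below are toy values for illustration; real distillation trains full networks.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]  # toy logits from a large model
student = [3.5, 1.2, 0.5]  # toy logits from a small model
print(distillation_loss(teacher, student))  # small positive value; 0 only if they match
```

Minimising this loss pushes the student toward the teacher's behaviour, which is how a smaller model can approach the performance of its larger counterpart.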
With improvements in the aforementioned areas over time, conversational systems will become even more commonplace and will be adopted across various industries for numerous tasks.
AIM: There’s a lot of, maybe, unfounded fear about how ChatGPT might take jobs away. Do you think this is true?
Ankush: I think a very small part of this will end up being true. There are certain functions we spoke about, like in customer service, where customers seek basic information like room availability or account balance. These repetitive, mundane tasks can be done by ChatGPT, and we might see some negative impact there. But I see these models as something that can help people become more efficient now – with LLM-powered conversational AI systems, a customer representative can probably find answers to customer queries faster. Even with coding, conversational AI systems are good for creating small bits of code, but you can’t really expect them to design an entire software system. Moreover, these models will not do well in any new or emerging subject area that has not been around for long, as they need (a lot of) relevant data to meaningfully learn the subject.