Generative AI, such as ChatGPT, is still in its early stages, and its impact is likely to grow as the technology evolves and improves, Union Minister Rajeev Chandrasekhar said in Parliament. “The government is cognizant of the emergence of these technologies and their rapid proliferation in sectors like education, manufacturing, healthcare, finance, and others,” Chandrasekhar said.
The rising popularity of ChatGPT has prompted governments across the world to explore the role of generative AI in delivering government services. While ChatGPT has plenty of use cases, can it be leveraged for governance?
Why shouldn’t the government use ChatGPT?
Government departments can use ChatGPT to debug code or solve IT-related issues, saving them the trouble of approaching external vendors.
“ChatGPT can also be employed in human resources, citizen outreach, productivity stories, email construction, and document translation. However, ChatGPT has limitations since it is not trained on very specific data,” Anurag Sahay, CTO and MD – AI and Data Science, Nagarro, told AIM.
So, a problem may arise if a government employee using ChatGPT accidentally shares sensitive data with the chatbot. OpenAI also states that it might share users’ personal information with ‘unspecified third parties’ to meet their business objectives. In such a scenario, sensitive government data could end up in the hands of these unspecified third parties.
Further, “There may be legal and ethical considerations surrounding the use of AI for government services, such as privacy concerns or the potential for AI to replace human jobs. It is important for the government to carefully consider these risks before implementing ChatGPT or any other AI technology, and to take steps to mitigate them as much as possible,” Anil Kaul, chief AI officer & CEO – AbsolutData at Infogain, told AIM.
Sahay is of the opinion that ChatGPT can’t be used to deliver government services because it would need access to very specific data.
“The government has access to very specific data, for instance, a citizen’s tax status. That information is not available to ChatGPT in any way. ChatGPT has been trained on publicly available data, such as Wikipedia, blogs, and other similar websites,” he said.
LLMs, on the other hand…
While ChatGPT comes with its limitations, the government could leverage the technology behind the popular chatbot. ChatGPT is built on OpenAI’s GPT-3.5 architecture, which is a series of large language models (LLMs). “Models such as InstructGPT are also available and can be trained on specific data, such as government data, allowing for specific questions to be asked,” Sahay said.
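The idea Sahay describes — grounding a general-purpose LLM in specific data it was never trained on — can be sketched in a few lines. Everything below is illustrative: the record store, the citizen ID, and the `call_llm` stub are hypothetical stand-ins for a real fine-tuned model or API call.

```python
# Illustrative sketch: grounding an LLM answer in government-held records
# that public models like ChatGPT have never seen. All names and data here
# are hypothetical; a real system would call a fine-tuned or hosted LLM.

RECORDS = {
    "tax_status": {"citizen_id": "C-1001", "status": "filed", "year": 2022},
    "licence": {"citizen_id": "C-1001", "status": "expires 2024-06-30"},
}

def retrieve(topic: str) -> dict:
    """Look up the department-specific record relevant to the query topic."""
    return RECORDS.get(topic, {})

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; here it simply echoes the grounded prompt."""
    return f"Answer based on records: {prompt}"

def answer(topic: str, question: str) -> str:
    context = retrieve(topic)  # the specific data ChatGPT lacks
    prompt = f"Context: {context}\nQuestion: {question}"
    return call_llm(prompt)

print(answer("tax_status", "Has citizen C-1001 filed taxes for 2022?"))
```

The point of the sketch is the `retrieve` step: the model answers from data the government controls, rather than from its public training corpus.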
Meanwhile, the Ministry of Electronics and Information Technology (MeitY) is building a chatbot, using the GPT-3.5 architecture, with WhatsApp to deliver key government schemes. The chatbot is being developed to help Indian rural farmers access information in Indic languages. Initially, the chatbot will be available in Hindi, English, Tamil, Telugu, Bengali, Marathi, Kannada, Assamese, and Odia.
Improving government chatbots
The same technology can also be used by the government to improve its existing chatbots. Today, there are multiple AI-powered chatbots such as MyGov Helpdesk, Umang Chatbot, DigitBot, CoWin Chatbot, AskDISHA and PM Modi Chatbot, among others. The purpose of these chatbots is to assist users in finding information related to various government services and schemes. However, ChatGPT is far superior to them.
In recent times, the adoption of chatbots and voice assistants has been rapid; however, these tools come with several limitations, such as the inability to deliver personalised responses, handle complex requests, or understand human emotions. “The Indian government can use large language model chatbots to make existing communication channels – chatbots and voice assistants – more efficient and personalised to citizens’ needs and queries,” Garima Saxena, research associate at The Dialogue, a non-profit policy think tank, told AIM.
Kaul also believes the technology powering ChatGPT can serve as the foundation for any chatbot the government already uses, provided it is trained on additional, context-specific data. As chatbots and conversational AI become more powerful, governments will need to enhance their existing chatbots or potentially replace them with more capable LLM-powered chatbots like ChatGPT.
Sahay, on the other hand, believes specific chatbots like ‘MyGov Helpdesk’ have an advantage in accessing very specific information that ChatGPT cannot access.
With the continuous advancement of technology, there is potential for the creation of a single chatbot that can serve as the primary point of contact, rather than having multiple disparate chatbots. For instance, the LLM-powered chatbot can inform citizens of any pending services they have, such as renewing their driver’s licence, and give them the required information and steps or provide a step-by-step guide in the language of their choice about seeking redressal.
Real-time RTI responses
Another potential use case of LLMs is providing real-time RTI (Right to Information) responses. Submitting a query through the RTI input form in India currently requires the applicant to provide personal information and specify the relevant department. This manual process can be streamlined and automated using large language models.
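One step that could be automated is routing a free-text query to the relevant department, sparing the applicant from specifying it manually. The sketch below uses a simple keyword match purely for illustration; the department names and keywords are hypothetical, and a real system would use an LLM classifier instead.

```python
# Hypothetical sketch of auto-routing an RTI query to a department.
# Department names and keyword lists are illustrative, not the real
# RTI taxonomy; an LLM classifier would replace the keyword match.
import re

DEPARTMENT_KEYWORDS = {
    "Ministry of Railways": ["train", "railway", "irctc"],
    "Income Tax Department": ["tax", "refund", "pan"],
    "Ministry of Road Transport": ["licence", "vehicle", "driving"],
}

def route_query(query: str) -> str:
    """Pick the department whose keywords best match the query text."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    scores = {
        dept: len(words & set(keywords))
        for dept, keywords in DEPARTMENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unclassified"

print(route_query("Why was my income tax refund delayed?"))
# → Income Tax Department
```

Queries that match no department fall through to "Unclassified" for manual handling, mirroring the human fallback the experts describe.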
“However, the bigger challenge is the curation of information from different departments, getting approvals from concerned authorities and public information officers (PIOs) in many departments of government, etc., before furnishing the non-classified information to the applicant,” Kaul pointed out.
Besides, human supervision may still be necessary due to the nature of Section 8 of the RTI Act, which gives the government the power to withhold certain information from being made public. The model can be trained to determine if Section 8 exemptions apply and provide real-time information when they don’t. “However, continually updating data and ensuring the LLM is up to date will require systemic changes in the manner with which RTIs are currently filed, answered and stored,” Bhavya Birla, research associate at The Dialogue, told AIM.
It is important to recognise that chatbots may not always comprehend the context of a query and could produce incomplete or erroneous responses. Moreover, there may be apprehensions regarding the precision and uniformity of answers generated, especially in response to intricate or specialised inquiries.
Nonetheless, “The potential for LLM-powered chatbots to generate RTI responses in real-time exists, provided RTI data is available for us to train them on,” Sahay concluded.