In 2018, Indian Prime Minister Narendra Modi made a bold prediction during his speech at the World Economic Forum’s annual meeting in Davos. “Whoever acquires and controls the data will have hegemony in the future,” he said. Five years later, it seems that Modi’s prophecy has come true, at least in India, where the government is working to control the internet and the data it generates.
The Indian government has enacted several bills in recent years that give it near-total control over internet data, including a data localisation bill that requires companies to store the data of Indian users within the country.
In addition, there is speculation that the government is partnering with Microsoft to integrate ChatGPT with WhatsApp to address issues faced by rural Indian communities. However, as India tightens its grip on data, questions arise about the safety and privacy of that data.
ChatGPT, privacy nightmare!
ChatGPT is an AI language model developed by OpenAI that has garnered significant attention in recent months for its ability to generate human-like responses to natural language queries. The Indian government hopes to leverage the technology to create a ChatGPT integration with WhatsApp that can respond to queries from rural communities in their local languages. The chatbot would also have a ‘voice note’ feature, allowing users to ask questions aloud in their preferred languages.
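To make the reported integration concrete, here is a minimal sketch, purely illustrative and not the government’s or Microsoft’s actual system, of how a WhatsApp webhook could relay text queries and transcribed voice notes to OpenAI’s public APIs. The Flask route, the simplified message payload, and the send_whatsapp_reply helper are assumptions for illustration; only the two OpenAI endpoints and the whisper-1 and gpt-3.5-turbo model names come from OpenAI’s public documentation.

```python
# Illustrative sketch only: a webhook that forwards WhatsApp queries to OpenAI.
import os

import requests
from flask import Flask, request

app = Flask(__name__)
OPENAI_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set in the environment


def transcribe(audio_path: str) -> str:
    """Turn a downloaded voice note into text; Whisper auto-detects the language."""
    with open(audio_path, "rb") as f:
        r = requests.post(
            "https://api.openai.com/v1/audio/transcriptions",
            headers={"Authorization": f"Bearer {OPENAI_KEY}"},
            files={"file": f},
            data={"model": "whisper-1"},
        )
    return r.json()["text"]


def ask_chatgpt(question: str) -> str:
    """Send the user's question to the chat completions endpoint."""
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "Answer in the same language as the question."},
                {"role": "user", "content": question},
            ],
        },
    )
    return r.json()["choices"][0]["message"]["content"]


@app.route("/webhook", methods=["POST"])
def webhook():
    # Real WhatsApp Business API payloads are nested; this is simplified.
    question = request.get_json().get("text", "")
    answer = ask_chatgpt(question)
    # send_whatsapp_reply(answer)  # hypothetical helper calling Meta's API
    return {"status": "ok"}
```

Whatever the real implementation looks like, the data flow is the point: every question a user asks, by text or by voice, transits OpenAI’s servers, which is precisely the flow at issue in the localisation debate.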
While the Indian government is busy working on data localisation, where the data generated by the ChatGPT chatbot will be stored remains unclear. Although the Indian government claims that it adheres to laws and policies related to privacy, data protection, intellectual property, and cyber security, the data that Indian citizens will feed into ChatGPT still poses considerable risks.
Considering the vast amounts of data that OpenAI has amassed without permission, likely including blog posts, product reviews, articles and more scraped from the open web, its privacy policy raises legitimate concerns.
According to its privacy policy, OpenAI collects visitors’ IP addresses, their browsers’ types and settings, and information about how visitors interact with its websites—such as the kind of content they engage with, the features they use, and the actions they take.
Additionally, it compiles information on users’ browsing patterns over time and across websites. OpenAI also states that it may share users’ personal information with unspecified third parties, without informing them, to meet its business objectives.
The lack of clear definitions for terms such as ‘business operation needs’ and ‘certain services and functions’ in the company’s policies creates ambiguity about the extent of, and rationale for, this data sharing.
To add to the concerns, OpenAI’s privacy policy also states that the user’s personal information may be used for internal or third-party research and could potentially be published or made publicly available. Given the company’s broad approach to information sharing and the inclusion of the phrase ‘publishing or making any information generally available’, it would be prudent for the government to provide an extra layer of protection for citizens who use the application.
Big corporations such as JPMorgan Chase, Amazon, Verizon, and Accenture have already banned the use of ChatGPT for their employees, citing concerns that sensitive data could end up on the servers of OpenAI.
ChatGPT needs access to vast amounts of data to function, including data that can be categorised as ‘sensitive’ or ‘confidential’, which is precisely the kind of data that the Indian government wants to localise with its ambitious data localisation bill.
Other Challenges
Additionally, like many other countries, India has struggled to combat misinformation since the widespread adoption of internet services. AI language models like ChatGPT pose a significant risk of spreading false information. While they have the potential to revolutionise the way we interact with technology, such models are likely to threaten data security in equal measure.
Among the most significant challenges is the risk of perpetuating biases and discrimination present in the data used to train the model. In the Indian context—where misinformation has led to violence and even deaths in some cases—it’s essential to ensure that AI language models are not used to spread false information.
Another significant data security challenge posed by AI language models like ChatGPT is the risk of cyber attacks. Cyber attacks come in various forms, including data breaches, malware, and denial-of-service (DoS) attacks, and can compromise the integrity of the data used to train the model. This, in turn, can affect the accuracy and reliability of the model, leading to incorrect predictions and decisions.
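As a toy illustration of that integrity risk, using a stand-in dataset and classifier rather than ChatGPT itself, flipping even a modest fraction of training labels, as a data-poisoning attacker might, measurably erodes a model’s accuracy:

```python
# Toy demonstration of training-data poisoning on a stand-in classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)


def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels to simulate a poisoning attack."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the selected labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)  # accuracy on untouched test data


for frac in (0.0, 0.2, 0.4):
    print(f"{int(frac * 100):>2}% of labels flipped -> "
          f"test accuracy {accuracy_after_poisoning(frac):.2f}")
```

The exact numbers depend on the data, but the trend holds: the more an attacker corrupts the training set, the less trustworthy the model’s outputs become.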
A breach of ChatGPT’s data could have significant consequences, with sensitive information falling into the wrong hands; identity theft, financial fraud, and other forms of cybercrime could follow, with dire consequences for the individuals and businesses involved.