Pretty soon, AI is going to be everywhere, and the launch of OpenAI’s GPT-4 has only accelerated the process. The new AI technology is fascinating but has also raised some real concerns. Researchers and activists who have been closely observing the technology’s development say it is high time to bring in regulations to curb the negative impacts of AI.
Presently, governments in multiple jurisdictions from the US to the EU are working on laws to regulate AI. The European Union (EU) has already drafted the AI Act, which will be put to vote in the European Parliament this month. Europe’s attempt to regulate the use of AI by enterprises is, in fact, the first of its kind.
In the US, the White House released a blueprint for an AI Bill of Rights, which provides frameworks for the responsible use of the technology. However, nothing much has moved in India. Recently, Boris Power, a member of the technical staff at OpenAI, said that India will be the country which innovates the fastest and where GPT-4 has the most substantial impact. This makes it imperative for India to have similar laws in place to regulate AI.
In 2021, IT minister Rajeev Chandrasekhar said that his government is working on a framework that will allow the protection of privacy at the consumer level and at the same time develop a vibrant ecosystem to nurture and develop AI applications. However, two years later, the Narendra Modi-led administration is yet to release a comprehensive framework to regulate AI.
A Democratic Approach Towards Regulation
While governments plan to bring AI under regulation, experts have urged a balanced approach. “Definitely, there should be some framework in place. However, regulators today do not understand the technology. Currently, there exists a huge gap in the understanding of the technology between the regulators and the innovators,” Utpal Chakraborty, chief digital officer at Allied Digital, told AIM.
He stressed that the government needs to sit down with the creators of the technology. “The approach, which the Indian government or the regulators should take, should be democratic, not autocratic. The regulators must ensure that the framework they come up with does not hamper the growth of the technology,” said Chakraborty.
The EU’s draft AI Act, for instance, could hamper the growth of startups in Europe. According to a joint study by several European AI associations, 73% of the venture capitalists surveyed anticipate that the AI Act will diminish the competitiveness of AI startups in the region, either moderately or substantially.
The Indian government must ensure something similar does not happen in India. “Hence, bridging the gap between the innovators and the regulators is crucial,” Chakraborty said.
What’s Causing the Fear?
With GPT-4, OpenAI overcame many of the limitations of GPT-3.5, the model that powers ChatGPT. However, the technical paper accompanying GPT-4 reveals much more. It highlights how the model can exhibit ‘agentic’ behaviour: not sentience, but the ability to develop and pursue goals that were not defined for it during training, including long-term planning and power-seeking actions.
Even though these technologies have a myriad of use cases, they can also be misused. Within a few days of the launch, scammers began sending phishing emails and tweeting phishing links to cryptocurrency enthusiasts promoting an OpenAI crypto token (which does not exist), Tenable research found.
It can also be used to create malware. “Generating sophisticated malware code is by far the biggest risk today from these tools. Take the example of the ‘BlackMamba’ keylogger, which was successful in bypassing a sophisticated cyber threat detection functionality called Endpoint Detection and Response (EDR). It was built using ChatGPT with relative ease,” Praveen Yeleswarapu, head of product marketing and engagements at BluSapphire, told AIM.
Further, ChatGPT or GPT-4 could potentially become a great disinformation tool, according to Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation. “Crafting a false narrative can now be done at dramatic scale, and much more frequently—it’s like having AI agents contributing to disinformation,” he said.
Chakraborty points out that generative AI could also be used to make better deepfakes, which are already a challenge. “Today it’s very difficult to distinguish a real video from a fake one. In certain situations, a fabricated video created by AI and circulated on social media may be perceived as authentic, resulting in severe repercussions.”
Meanwhile, big tech companies such as Microsoft and Meta have disbanded their responsible AI teams. This heightens the need for AI regulation, since these companies no longer appear committed to building responsible safeguards into their AI products.
Addressing Biases is Crucial
More than 100 million users across the globe have used ChatGPT so far, including 13 million daily active users – many of them based in India. As the usage of these technologies continues to grow, there is a pressing need for regulations to protect users from the biases inherent in them.
GPT-4 is going to automate various functions in organisations. Khan Academy, a non-profit organisation offering free education, has launched an AI tutor called Khanmigo, powered by GPT-4. What if the AI tutor is biased towards a particular set of students? There are no frameworks in place to deal with such issues.
“The impact of these biases could be severe. In the Indian context, these biases could be in the form of gender, caste, creed and religion etc,” Chakraborty said.
Interestingly, within a few weeks of its launch, ChatGPT was found to have a substantial left-leaning and libertarian political bias. Political biases in ChatGPT or GPT-4 could lead to discrimination against people with a different political ideology. “In most cases, the biases come from the data that these models have been trained on. It can also come from the creators, or in the training process, if not done right,” Chakraborty said.
While regulations are needed, it is also imperative that the government puts some checks and balances in place for the ethical use of AI. “If we have ethical practices in place along with regulation, it will have a positive impact on the entire AI landscape,” Chakraborty concluded.