Elon Musk recently called on Twitter for the regulation of advanced AI, including at his own company, Tesla. But what could the implications of artificial intelligence regulation be for innovation?
Elon Musk recently called on Twitter for the regulation of the development of advanced AI, including at Tesla. In fact, Musk has repeatedly warned of the risks of building advanced AI, even calling it a “fundamental threat to the existence of human civilisation.”
So why is Musk so concerned about the development of advanced AI? So far, the speed of AI innovation has outpaced regulation, and many experts say regulation now has to catch up. Private and government agencies are increasingly dealing with AI-based tools, they argue, and therefore have to evaluate those tools and work out how to regulate them.
That suggests an ongoing problem for AI, one that has already played out in other tech sectors, where the rush to innovate with little regulation can lead to negative consequences for people. The concerns stem from issues such as privacy, cognitive biases in AI models, surveillance, and the use of AI in warfare. Many critics say that artificial intelligence can replicate, and even amplify, human biases, and that new AI-based tools also raise concerns about privacy and surveillance.
Why Elon Musk May Be Concerned About AI
While encouraging innovation is certainly important, critics like Elon Musk have stated that regulators must step in as we approach the cusp of the AI age. Musk has also said that companies like OpenAI need to be more open.

The argument is that an open and unfettered research process is likely to accelerate the progress of AI innovation, benefiting all of us rather than merely the company that developed a given AI system. Anonymous employee accounts suggest that the San Francisco-based startup is “obsessed with keeping secrecy, safeguarding its image, and retaining the loyalty of its employees.”
While Musk’s concerns with OpenAI are understandable, if you compare AI startups like OpenAI to their Chinese counterparts, such as SenseTime and Megvii, you will find that the US-based startups are still far more open. In fact, in just four years, OpenAI has grown into one of the world’s leading firms in democratising AI research.

The Benefits Of AI Could Be Hampered With Regulation
AI represents a drastic change from traditional programming. Models like neural networks work by finding patterns in training data and applying those patterns to new data. This type of programming, based on feeding data to neural nets rather than hand-writing rules, is beginning to replace traditional programming in a variety of domains, and startups like OpenAI and DeepMind are playing a pivotal role here. According to Jeff Dean, head of Google Brain, neural networks can be applied to an enormous range of tasks.
“Neural nets are the best solution for an awful lot of problems and a growing set of problems where we either previously did not know how to solve the problem, or we could solve it, but now we can solve it better with neural nets.”
This is enabling researchers to solve problems they previously did not know how to solve. Much of that innovation might not have been possible under heavy-handed regulation of AI technology; researchers need creative and scientific freedom (within ethical boundaries, of course) to build new and better AI models.
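To make that contrast concrete, here is a minimal, illustrative sketch of the two approaches: a hand-written rule versus a tiny neural network that infers its own “rule” from labelled examples. The spam-filter framing, the feature names, and all of the numbers below are hypothetical illustrations, not something drawn from the article or from any particular company’s system.

# A toy comparison of traditional programming (a hand-written rule)
# with a small neural network that learns a pattern from training data
# and applies it to new data. All data here is made up for illustration.

from sklearn.neural_network import MLPClassifier

# Traditional programming: the rule is written by a human.
def rule_based_spam_filter(num_links, mentions_free):
    return num_links > 3 and mentions_free == 1

# Machine learning: the "rule" is inferred from labelled examples.
# Each row is [num_links, mentions_free]; labels: 1 = spam, 0 = not spam.
X_train = [[0, 0], [1, 0], [4, 1], [6, 1], [2, 0], [5, 1]]
y_train = [0, 0, 1, 1, 0, 1]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)               # find patterns in the training data

# Apply the learned pattern to new, unseen messages.
print(model.predict([[7, 1], [1, 0]]))    # e.g. [1 0]: spam, then not spam

The point is that the model’s behaviour comes from the data rather than from explicitly coded rules, which is why restrictions on how data may be used, such as those discussed below, bear directly on how these systems get built.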
Beyond regulation aimed specifically at advanced AI, even privacy laws have been found to hamper innovation. One example is how GDPR provisions that address AI in the context of protecting consumer interests may slow down AI research and innovation. Europe’s comprehensive privacy law gives data subjects the right not to be subject to decisions based solely on automated processing. “By both indirectly limiting how the personal data of Europeans get used and raising the legal risks for companies active in AI, the GDPR will negatively impact the development and use of AI by European companies,” said one report.

Regulation In The Context Of Global AI Race
One of the reasons AI may not be regulated the way critics like Musk want is the AI race between China and the US. While the US currently leads the race, China is not far behind if you look at the number of patents registered. This means the US administration is unlikely to hold back innovation by interfering in how private groups like OpenAI go about their business.
The White House’s proposed AI guidance addressed many of the most widely discussed concerns in consultation with technologists, AI ethicists, and government officials. The regulatory guidelines focused largely on fostering innovation in artificial intelligence and ensuring that regulation does not hamper the work already being done.