
Alchemists And Altruists: Who Should Hold The Key To The Powers Of AI


“Malicious uses of models can be difficult to anticipate because they can be repurposed in a very different environment or for a different purpose than what the researchers intended.”

When Albert Einstein signed Leo Szilard’s fateful letter requesting the US president to be proactive with the experiments on atomic energy, little did he know that he had sealed the fate of Hiroshima and Nagasaki. After learning about the destruction of the two Japanese cities, Einstein said he regretted signing the letter. “Had I known that the Germans would not succeed in developing an atomic bomb, I would have done nothing,” he said. The world has come a long way since then. The war has shifted to different frontiers. The strategies are now more software-oriented. Artificial Intelligence has become a crucial part of defence budgets of countries like the US and China. 

High-speed internet, advances in semiconductor chips and the race to AI supremacy have contributed to the rise in AI research. If OpenAI releases a billion-parameter large language model, Google dials up its model’s parameters to the trillion mark. Language model GPT-3 had the whole world hooked. Scores of articles were written announcing AI’s ascent. Then came the dissenters. They objected to many things, including the environmental impact of running large models, replicability and ownership.

Researchers from the Center on Terrorism, Extremism, and Counterterrorism (CTEC), for instance, warned of GPT-3’s many dangers, such as its potential to radicalise individuals. While OpenAI’s preventative measures are strong, they noted, the possibility of unregulated copycat technology represents a significant risk of large-scale online radicalisation. Governments, they argued, should begin investing as soon as possible in shaping social norms, public policy, and educational initiatives to preempt an influx of machine-generated disinformation and propaganda.

“GPT-3 has the potential to advance both the beneficial and harmful applications of language models.”

OpenAI

The team that developed GPT-3 explained the model’s harmful effects in their paper. According to the OpenAI researchers, GPT-3’s high-quality text generation can make it challenging to distinguish synthetic text from human-generated text, and the malicious uses of language models are difficult to anticipate. The team listed the following misuses:

  • Spam & phishing 
  • Fraudulent academic essay writing 
  • Abuse of legal and governmental processes
  • Social engineering

Before GPT-3 hogged the limelight, the world was both in awe of and terrified by the possibilities of deepfakes. Deepfakes can be used to generate paintings, morph images and videos, and can even pose a threat to national security. The disadvantages outweigh the advantages.

Should those who decide to release something as powerful as GPT-3, or other AI breakthroughs, be held accountable for how the technology is used?

We got in touch with Sahar Mor to learn more about the repercussions of experiments like GPT-3. Sahar was one of the first engineers in the AI community to get access to OpenAI’s GPT-3 model, which he used to build AirPaper, an automated document extraction API.
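For a rough sense of how a GPT-3-based extraction service works, the sketch below prompts the model to pull structured fields out of raw document text and parse the answer as JSON. It is a minimal illustration assuming the legacy openai-python Completion API; the field names, prompt wording and engine choice are placeholder assumptions, not AirPaper’s actual implementation.

```python
# Minimal sketch of GPT-3-based document field extraction (not AirPaper's code).
# Assumes the legacy openai-python Completion API (pre-1.0) and the original
# "davinci" GPT-3 engine; field names and prompt wording are illustrative.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the caller


def extract_invoice_fields(document_text: str) -> dict:
    # Ask the model to return only a JSON object with the fields we want.
    prompt = (
        "Extract the invoice number, invoice date and total amount from the "
        "document below and answer with a JSON object using the keys "
        '"invoice_number", "invoice_date" and "total_amount".\n\n'
        f"Document:\n{document_text}\n\nJSON:"
    )
    response = openai.Completion.create(
        engine="davinci",   # original GPT-3 base engine
        prompt=prompt,
        max_tokens=128,
        temperature=0,      # deterministic output suits extraction
        stop=["\n\n"],
    )
    return json.loads(response.choices[0].text.strip())


# Example usage:
# fields = extract_invoice_fields("Invoice #4211, dated 3 March 2021, total due: $540.00")
# print(fields["total_amount"])
```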

Though Sahar has benefitted from technologies like GPT-3, he is also wary of their flip side. “The same components that are being used for noble causes, such as finding target molecules for an undruggable disease, can then be used to generate harmful text at scale,” he warns. According to Sahar, AI is just a tool. It is an overlap of:

  • ML technology, such as Reinforcement Learning or Deep Learning
  • An architecture, such as Transformers
  • A technical application, such as text generation
  • A business application, such as self-driving cars

AI Application Model. (Source: Sahar Mor)

“Allowing commercial companies to decide who gets access to powerful AI technologies is tricky.” 

For decades, continued Sahar, cutting-edge research was the prerogative of academic institutions, which had both the budget and the hardware to push the boundaries of innovation. Industry’s involvement was mainly in cooperation with giants such as IBM and HP. This kind of research has significantly migrated to industry in recent years, with 65% of graduating North American PhDs leaving academia for industry and a 25% growth in corporate representation at AI research conferences (AI Index Report). All of a sudden, cutting-edge research is being done not in universities but in commercial companies, which have a whole different set of values and incentives. Allowing commercial companies to decide who gets access to powerful AI technologies is tricky. “The recent decade has shown that when companies such as Facebook and Google face a tradeoff between bottom-line revenues and ethics – they would prioritise the former. Those are revenue-driven companies with investors looking at revenue-driven metrics such as YoY growth, unit economics, etc. They are not incentivised to avoid harm except when it comes to retaining employees,” he said.

“Regulation should be mandatory when the underlying research can lead to harmful implications at scale.” 

Sahar Mor

A million years ago, continued Sahar, we discovered fire, which is still being regulated today. The invention of the printing press faced strong opposition from the church around 600 years ago. “For every significant technological revolution, there will be those who will oppose it,” he added. But regulation is a double-edged sword. If a nation stifles research, it runs the risk of falling behind other nations. The US didn’t want Germany to have the atomic advantage, so research at its nuclear facilities was hardly regulated. The initial proponents of atomic energy ended up regretting their decisions.

According to Thilo Hagendorff, machine learning research is not much different from other scientific fields in embracing the idea of open access: research findings are publicly available and can be widely shared among other researchers. In his paper, Dr Hagendorff warned that inventions and scientific breakthroughs in machine learning could lead to dual-use applications that pose massive threats to individual or public security. The idea of forbidden knowledge in machine learning, wrote Dr Hagendorff, should not put limits or constraints on the pursuit of legitimate research questions. “These limits should be established not because machine learning science itself is dangerous. Rather, it is the current political and cultural climate in many parts of the world that brings forth risks of misusing software as a tool to harm or suppress other people.” For safer use of ML, he recommended the following:

  • Monitoring measures of forbidden knowledge 
  • Expert risk evaluation
  • Education in responsible research processes
  • Pre-publication risk assessments
  • Responsible information sharing as well as disclosure rules 
  • Technical restrictions and post-disclosure measures, among others

Likening AI to fields like nuclear physics infuriates some ML practitioners. Some deep learning experts feel the hype around AI is blown out of proportion. One researcher who works on self-driving cars brushed off GPT-3’s hype by calling it a regular math and coding (AI) advancement rather than a new fundamental breakthrough. “Models like GPT-3 are putting pieces together from various fields. Anyone with money can do it. The current state of AI is the same as it was 20 years ago. The only difference is we have a more powerful GPU. Does it fundamentally change anything in society other than making a conversational platform (speech or text or any other mode for that matter)? The answer would be no. So regulation doesn’t add any value to anyone,” added the researcher. “The fundamental danger of AI overcoming human intelligence is not only overhyped but misplaced. There is no such thing as forbidden knowledge. The deeper you dig into pure and applied mathematics and combine it with programming, the more you can create software that seems threatening.”

Despite benefitting from advancements like GPT-3, Sahar believes regulation should be mandatory when the underlying research can lead to harmful implications at scale. A precedent is the Declaration of Helsinki, a set of ethical principles for medical research and clinical trials involving humans, first adopted in 1964. For Sahar, the recent mass adoption of AI and the lack of regulation are just another example of governments not keeping pace with industry innovation. According to him, an advanced society that has codified a constitution should be cautious when limiting the freedom to explore one’s intellectual limits. Regulation is also known for slowing down innovation, potentially leading to a new AI arms race between countries that impose regulation and those that don’t. It is hard to deny that, in the long run, countries such as Germany might lag behind those that prioritise innovation over privacy, such as China. And falling behind can have implications beyond purely economic ones; see the recent Russian interference in the US elections.

People like Sahar Mor, who operate in more practical settings, believe drafts and guidelines won’t cut it in the real world, because commercial companies with the brightest minds will always find a way around them. The best way forward, he said, is to build an institution similar to the US Food and Drug Administration (FDA). This institution would research AI safety and bias and develop standards for releasing AI applications at scale. At its core, it would review and address the fourth layer of the AI Application Model (see above): the business application. Regarding enforcement, it could start with the big organisations that have a larger impact on society.


Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.