
The Pitfalls of Fear-Mongering in AI

Anthropomorphism has helped shift the responsibility from the humans behind the technologies to the technology itself

Through the ages, technological disruptions have left an indelible mark on human history, moulding the course of our collective journey. In the current era, as we stand witness to AI opening the floodgates to new advancements, we contemplate the profound influence these technologies could have in shaping human history – often focussing on the negatives.

Renowned American historian Melvin Kranzberg formulated six laws of technology to shed light on the intricate interplay between technology and society. These laws serve as a framework for understanding this multifaceted relationship and its implications for our future. His first law states, “Technology is neither good nor bad; nor is it neutral.” By this he meant that technology does not possess any inherent moral qualities; rather, its impact and implications are shaped by human choices, values, and societal contexts.

In today’s rapidly evolving world, the advancements in AI are unparalleled. OpenAI, one of the startups to propel this unprecedented AI development, has made artificial general intelligence (AGI) its mission.

While the growth may be exciting, it also leaves researchers split. Some experts argue that AI technology is not as advanced as perceived, while others express concerns about the potential threats it could pose to society.

Giada Pistilli, principal ethicist at Hugging Face, believes new technology always comes with force. “They kind of impose themselves and then we just have to stick with them,” she told AIM.

The fear narrative and anthropomorphism

Referring to Kranzberg’s first law, Pistilli said that technology always comes with political and ideological tensions, as well as social implications. If AI tools are utilised as personal tools, they have the potential to become highly effective. However, Pistilli believes the impact of these tools ultimately depends on the individuals operating them.

Moreover, there is a fear surrounding AI, with frequent discussions about AI replacing jobs and posing threats to humanity, and the narrative is only getting stronger with each passing day.

“This fear narrative is not new and existed way before ChatGPT and often focuses on themes of AI becoming more intelligent, replacing humans and posing threats to society. I think it’s kind of irresponsible to fuel the fear narrative, because it is creating a kind of stressful and anxious sentiment in society,” Pistilli said.

Recently, Geoffrey Hinton, the godfather of AI, left Google to warn people about the dangers the technology poses. Pistilli believes this only adds to the fear narrative, because a certain section of society is bound to fall prey to it. What is imperative, she says, is responsible reporting and a contextual understanding of AI’s capabilities and benefits, rather than solely feeding the fear narrative.

Then, there is the problem of anthropomorphism, where non-human entities are attributed human traits such as emotions or intent. Anthropomorphism can obscure the true nature of the non-human entity, making it difficult to understand and use effectively. Pistilli believes it also helps shift the responsibility from the humans behind the technologies to the technology itself, which is a problem.

Society can fight back

Today, we are in the age of generative AI, and fears of AI replacing human workers have skyrocketed in recent months. Most recently, IBM CEO Arvind Krishna, in an interview with Bloomberg, said that AI could potentially replace around 7,800 jobs at the company. Now imagine most companies replacing 10% of their workforce with AI; the cumulative number could be huge.

“It’s always really challenging for humans to adapt to new technologies, especially when interaction comes into play. I think it’s unfortunate that we just have to deal with it and nobody kind of gave us instructions on how to deal with them,” Pistilli said.

However, she believes humans have an important weapon in hand: the ability to say no. As a society, we also have the ability to put pressure on governments and international institutions to start regulating AI. “Of course, it needs to be counterbalanced with all the potential harms that could undermine democracy for example, especially with all the flood of misinformation.”

“If society as a whole, like even 1% of the world population, says no, it’s going to make a difference. And I think we’re already seeing that, among artists, for example, when it comes to generative AI.”

To put that into perspective, Stability AI, DeviantArt, and Midjourney are currently facing a lawsuit that claims their utilisation of AI technology infringes upon the rights of countless artists. “And I think in the coming weeks or months we’re going to see similar protests from script writers, for example, and especially from people who are starting to feel threatened by those technologies,” Pistilli concluded.

PS: The story was written using a keyboard.

Pritam Bordoloi

I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.