Through the ages, technological disruptions have left an indelible mark on human history, moulding the course of our collective journey. In the current era, as we stand witness to AI opening the floodgates to advancement, we contemplate the profound influence these technologies could have in shaping human history, with a particular focus on the negatives.
Renowned American historian Melvin Kranzberg formulated six laws of technology to shed light on the intricate interplay between technology and society. These laws serve as a framework for understanding this multifaceted relationship and its implications for our future. His first law states, “Technology is neither good nor bad; nor is it neutral.” He meant that technology does not possess any inherent moral qualities; rather, its impact and implications are shaped by human choices, values, and societal contexts.
In today’s rapidly evolving world, the advancements in AI are unparalleled. OpenAI, one of the startups to propel this unprecedented AI development, has made artificial general intelligence (AGI) its mission.
While the growth may be exciting, it also leaves researchers split. Some experts argue that AI technology is not as advanced as perceived, while others express concerns about the potential threats it could pose to society.
Giada Pistilli, principal ethicist at Hugging Face, believes new technology always arrives with force. “They kind of impose themselves and then we just have to stick with them,” she told AIM.
The fear narrative and anthropomorphism
Referring to Kranzberg’s first law, Pistilli said that technology always comes with political and ideological tensions, as well as social implications. If AI tools are used as personal tools, they have the potential to become highly effective. However, Pistilli believes that the impact of these tools ultimately depends on the individuals operating them.
Moreover, there is a fear surrounding AI, with frequent discussions about AI replacing jobs and posing threats to humanity, and the narrative is only getting stronger with each passing day.
“This fear narrative is not new and existed way before ChatGPT and often focuses on themes of AI becoming more intelligent, replacing humans and posing threats to society. I think it’s kind of irresponsible to fuel the fear narrative, because it is creating a kind of stressful and anxious sentiment in society,” Pistilli said.
Recently, Geoffrey Hinton, the godfather of AI, left Google to warn people about the dangers the technology poses. Pistilli believes this only adds to the fear narrative, because a certain section of society is going to fall prey to it. What is imperative, she believes, is responsible reporting and a contextual understanding of AI’s capabilities and benefits, rather than solely feeding the fear narrative.
Then there is the problem of anthropomorphism, where non-human entities are attributed human traits such as emotions or intent. Anthropomorphism can obscure the true nature of the non-human entity, making it difficult to understand and use effectively. Pistilli believes it also shifts responsibility from the humans behind the technologies to the technology itself, which is a problem.
Society can fight back
Today, we are in the age of generative AI, and fears of AI replacing human workers have skyrocketed in recent months. Most recently, IBM CEO Arvind Krishna, in an interview with Bloomberg, said that AI could potentially replace around 7,800 jobs. Now imagine most companies replacing 10% of their workforce with AI; the cumulative number could be huge.
“It’s always really challenging for humans to adapt to new technologies, especially when interaction comes into play. I think it’s unfortunate that we just have to deal with it and nobody kind of gave us instructions on how to deal with them,” Pistilli said.
However, she believes humans have an important weapon in hand: the ability to say no. As a society, we also have the ability to put pressure on governments and international institutions to start regulating AI. “Of course, it needs to be counterbalanced with all the potential harms that could undermine democracy, for example, especially with all the flood of misinformation.”
“If society as a whole, like even 1% of the world population, says no, it’s going to make a difference. And I think we’re already seeing that, among artists, for example, when it comes to generative AI.”
To put that into perspective, Stability AI, DeviantArt, and Midjourney are currently facing a lawsuit claiming that their use of AI technology infringes upon the rights of countless artists. “And I think in the coming weeks or months we’re going to see similar protests from scriptwriters, for example, and especially from people who are starting to feel threatened by those technologies,” Pistilli concluded.