Who Bears the Burden of AI Mishaps?

Creators, obviously

In an unprecedented scenario, developers of the technology and policymakers are jointly considering regulation even before the technology’s substantial implications have emerged. This is a positive move, given the emerging instances of AI misfires. Recently, a New Zealand supermarket’s generative AI app mistakenly proposed a recipe for chlorine gas, labelling it ‘aromatic water mix’ and promoting it as an ideal non-alcoholic drink. While this gives a glimpse of the technology going wrong, if we are to believe the likes of Geoffrey Hinton, the godfather of AI, the technology could go wrong on a significantly larger scale and could potentially pose an existential risk to humans. Sam Altman, who heads OpenAI, has also said that should the technology falter, the consequences could be severe.

Given the intricate nature of AI, regulating the technology is a challenging task. AI technologies are evolving rapidly, making it difficult to create static regulations that remain relevant, and AI applications are diverse, ranging from healthcare to finance, each with unique risks and benefits. Nonetheless, as various jurisdictions contemplate AI regulations, policymakers worldwide must recognise that the foremost principle in AI regulation is holding the developers or creators of the technology accountable.

AI companies seek regulation on their terms

Earlier this year, Altman advocated for AI regulation before the US Senate and suggested the creation of an international agency, similar to the United Nations’ nuclear watchdog, to police and regulate AI technology. Not just Altman, major tech players like Microsoft and Google, at the forefront of the generative AI revolution, have also voiced support for AI regulation. It is encouraging to see major corporations recognising AI’s threats, yet their preference for regulation on their own terms is apparent. During his world tour, Altman said in London that he aims to adhere to EU regulations, but should challenges arise, his company might halt operations on the continent.

Moreover, a Time investigation found that OpenAI secretly lobbied the EU to avoid harsher AI regulation. The lobbying effort was successful: the final draft of the AI Act approved by EU lawmakers did not include certain provisions that were initially proposed. Interestingly, Microsoft (which has invested nearly USD 10 billion in OpenAI), Meta, and Google have previously lobbied to water down AI regulation in the region. While OpenAI is new to the scene, the others have lobbied numerous governments over the years on matters such as privacy regulations, copyright laws, antitrust, and internet freedom. Given their past behaviour, it is prudent to acknowledge that these companies may lobby for AI regulations that align with their own interests rather than prioritising the welfare of the broader population.

Policymakers bound to involve AI companies

Nonetheless, policymakers are bound to involve them in any discussions around AI regulation. At the forefront of the AI revolution, wielding their generative AI prowess, these organisations have already secured a prominent role in deliberations concerning AI regulation. Moreover, policymakers, often among the least acquainted with the technology, may struggle to grasp AI’s nuances. Hence, from a regulation perspective, it becomes important for them to involve the creators or developers of the technology.

Giada Pistilli, principal ethicist at Hugging Face, believes big tech lobbying around AI regulation is a major concern. Involving key players in these discussions is inevitable, she notes, since they are often the first to be directly affected by the outcomes of the regulations, and their insights can be invaluable. “The power dynamics at play can sometimes blur the lines between honest advice and vested interests. We must critically assess the motivations behind their involvement,” she told AIM.

“Are they present merely to offer their expertise and perspective, or do they intend to exert a disproportionate influence on policy and institutional decisions that will have long-term implications?” Balancing their input with the broader public interest is crucial to ensure that the future is shaped in a way that benefits the many, not just the few. At the same time, policymakers must consider that while regulations are important for responsible AI deployment, overly restrictive rules could stifle innovation and hinder competitiveness. Hence, striking the right balance between regulation and innovation is crucial, according to Gaurav Singh, founder and chief executive officer at Verloop.io.

Additionally, AI’s inherent intricacy adds layers of complexity to regulation. Pistilli believes the challenge is compounded by the legislative process itself, which is inherently slow owing to its democratic nature; advocating for its acceleration to match technological progress implicitly means sidelining democratic principles, which is perilous.

“In our attempt to foresee every potential risk, we sometimes create regulations that are so broad they may not be relevant to specific situations, highlighting the limitations of a purely risk-averse strategy. This underscores the point that there isn’t a one-size-fits-all solution. It’s crucial to continually revise strategies, engage with experts, and most importantly, consult those directly affected by the technologies to determine the best course of action,” she said.

Blame the creators 

As major tech companies actively participate in AI regulation dialogues, it remains imperative for policymakers to recognise that these corporations are the ones introducing the technology to the world. Hence, it’s equally imperative that they are held accountable if the technology goes wrong. Pistilli believes that even though responsibility in the realm of AI should be a shared endeavour, the lion’s share of both moral and legal accountability should rest on the shoulders of AI system developers.

“It’s an oversimplification and, frankly, unjust to reprimand users with statements like ‘you’re using it wrong’ when they have not been provided with comprehensive guidelines or an understanding of its proper application. As I’ve consistently pointed out, distributing a ‘magic box’ or a complex, opaque system to a wide audience is fraught with risks,” she said.

The unpredictability of human behaviour, combined with the vast potential of AI, makes it nearly impossible to foresee every possible misuse. Therefore, it’s imperative for developers to not only create responsible AI but also ensure that its users are well-equipped with the knowledge and tools to use it responsibly. Pointing to the fast and somewhat hasty deployment of AI models, Annette Vee, associate professor at the University of Pittsburgh, noted that the race to release generative AI means models will probably be less tested when they come out: they’ll be ‘deployed’ publicly, and the companies will measure the blast radius and clean up afterward.

AI critic Gary Marcus has also previously stated in a blog post that tech companies have failed to fully anticipate and prepare for the potential consequences of rapidly deploying next-generation AI technology. Hence, holding the developers of these technologies accountable becomes crucial. Doing so will make these companies more careful and wary before releasing a model that hasn’t undergone thorough testing and scrutiny.

Singh concurs with Pistilli to some extent. He believes addressing biases within AI systems necessitates a multifaceted approach. “While holding creators responsible is indeed an important aspect, it’s not a standalone solution. Complex AI algorithms can be difficult to understand, making it challenging to explain their decisions. Regulations could mandate transparency and explainability standards to enable a better understanding of how AI arrives at its conclusions,” he told AIM.

Not in favour of transparency

However, will AI companies be in favour of transparency? Unlikely. OpenAI has not disclosed crucial details of GPT-4, such as its architecture, model size, hardware, training compute, dataset construction, or even its training method. OpenAI might be trying to protect trade secrets, or withholding information for security or ethical reasons, but the opacity only adds to the risk.

Often, the biases found in AI models creep in from the dataset or during training. The choice of training data can perpetuate historical biases and result in diverse forms of harm. To counter such adverse effects and make well-informed decisions about where a model shouldn’t be deployed, understanding the inherent biases within the data is of paramount importance. Google, for its part, has consistently opposed regulations that call for auditing of its algorithms. The tech giant has traditionally treated its search algorithm as a trade secret and has been reluctant to disclose specific details about its inner workings.

Don’t blame the machines 

If we are to believe Altman, there’s a possibility that AGI could materialise within the next decade. While we are only in mid-2023 and superintelligence is still some way off, there is a prevailing narrative casting AI systems as autonomous entities with the potential for harm. This discourse, in Pistilli’s view, subtly pushes the notion that our primary concern should be the AI systems themselves, as if they possess their own agency, rather than the developers behind them.

“I see this as a tactic that not only amplifies a fear-driven narrative but also cleverly diverts attention from the human actors to the technology. By doing so, it places the onus entirely on the technological creation, conveniently absolving the humans who designed and control it. It’s essential to recognise this shift in accountability and ensure that the true architects behind these systems remain in the spotlight of responsibility,” she said. While how close we are to AGI is a different debate, it’s crucial to prevent such narratives from gaining traction. If a future iteration of the GPT model, displaying AGI traits, were to encounter issues, the responsibility should fall on OpenAI, not on the superintelligent model itself.


Pritam Bordoloi

I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.