Does The World Need Another Ethical AI Watchdog? Google Says Yes, But Is There A Vested Interest At Heart?

With artificial intelligence becoming the new normal, AI researchers are weathering the AI storm on two fronts – helping AI-savvy academicians make deep advances, and helping society brace for the impact of AI by ensuring it works to the benefit of all. Google-owned, London-based DeepMind recently announced its new Ethics & Society committee, which will conduct research across six “key themes” — including ‘privacy, transparency and fairness’ and ‘economic impact: inclusion and equality’.

One glaring omission was the names of the industry leaders and academics sitting on the board. What’s even more intriguing is that the web search giant had set up an ethics board once before, in 2014, right after the acquisition of DeepMind. According to news reports, DeepMind co-founder Demis Hassabis had persuaded the tech giant to look into the ethics of the technology and set up an ethics board at the time of the acquisition. The downside: just as now, Google never revealed the key governing members of that ethics board.

This development also raises another question – is there a need for another ethics committee, even one managed by DeepMind, and how would it change anything in a world of AI riddled with algorithmic bias and privacy concerns? In this article, Analytics India Magazine explores whether another ethics body can spark a broader and more significant dialogue among industry leaders, AI academia and policymakers. Does the body have enough time and money to seriously tackle the issues AI raises? Would a concerted effort be better, leading to real progress in improving AI and preventing it from falling into the wrong hands?

So, what are the key roles of an AI ethics committee?

  • Preventing corporate powers from getting their hands on AI and misusing it for their own ends
  • Ensuring tech giants shoulder social responsibility besides making money
  • Addressing job risk in the age of automation, and the reskilling of blue-collar and white-collar workers, with reports of AI-driven job displacement growing by the day
  • Seeding more accountability and transparency in AI systems

According to one research firm, a new class of Expert Automation & Augmentation Software (EaaS) is emerging that could phase out white-collar jobs in areas like law (automatic document analysis and auditing), media (AI-based news curation and summaries), software development (early development phases and debugging), and even consulting.

Since the dawn of AI, cultural bias and racism have been embedded in these systems, leading to sociological problems we couldn’t have imagined. Case in point – race-based screening of resumes, fake news, the racist viral app FaceApp, Google’s photo-tagging feature identifying black subjects as gorillas, and even the tampering of election data.

Let’s take a look at how bias is baked into the algorithms at the training data stage:

  • Engineers don’t fully understand how their own algorithms work
  • An irresponsibly designed algorithm trained on data that doesn’t represent the whole population produces skewed results (see the sketch after this list)
  • Black-box algorithms whose inputs and outputs are visible, but whose decision-making process can’t be inspected
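
To make the second point above concrete, here is a minimal, self-contained sketch of how unrepresentative training data skews a model’s results. It is purely illustrative and not drawn from any system discussed in this article: the groups, score scales and sample sizes are all hypothetical, and it assumes only numpy and scikit-learn are installed.

```python
# Illustrative sketch (hypothetical data): a screening model is trained on
# data dominated by group A, whose "qualified" scores sit on a different
# scale than those of the under-represented group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample_group(n, qualified_mean, unqualified_mean):
    """Return (scores, labels) for n applicants; ~60% are truly qualified."""
    y = (rng.random(n) < 0.6).astype(int)
    scores = np.where(
        y == 1,
        rng.normal(qualified_mean, 0.1, n),
        rng.normal(unqualified_mean, 0.1, n),
    )
    return scores.reshape(-1, 1), y

# Group A dominates the training set (1000 vs 20 examples); group B's
# scores come from a shifted scale, e.g. a different test or region.
Xa, ya = sample_group(1000, qualified_mean=0.8, unqualified_mean=0.4)
Xb, yb = sample_group(20, qualified_mean=0.6, unqualified_mean=0.2)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# On large, balanced test sets, measure how often the truly qualified
# members of each group are approved by the trained model.
for name, means in [("A", (0.8, 0.4)), ("B", (0.6, 0.2))]:
    X, y = sample_group(5000, *means)
    approval = model.predict(X)[y == 1].mean()
    print(f"group {name}: approval rate among the truly qualified = {approval:.2f}")
```

On a typical run, the decision boundary the model learns sits near group A’s score scale, so equally qualified members of group B are rejected far more often. The model is not malicious – the training data simply never represented group B well enough for it to learn that group’s scale.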

Algorithmic Fairness is necessary

So how does one ensure algorithmic fairness in the larger data science community? By setting up ethics committees? Since 2014, the Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) workshop has brought together a pool of researchers, academics and practitioners concerned with these issues every year. The convention has become the stomping ground for leading minds in AI, who discuss how to characterize and address these issues with computationally rigorous methods.

Then there is the Partnership on AI, established in September 2016 to benefit people and society, which brings digital natives like Google, Amazon, Microsoft, Facebook, IBM, DeepMind, Apple, Cogitai and other leading bodies onto one platform. The shared goal is to address key issues such as a) accountability in AI; b) AI, labour and the impact on the economy; c) social and societal influences of AI; and d) AI and social good, among other goals. By and large, the underlying aim of the group, headed by industry leaders, is to break down the barriers in AI and introduce more accountability and transparency in AI systems so that products can operate safely and fairly.

Today, there is no dearth of AI think tanks and ethics bodies at the government level either. There is the Data & Society Research Institute, focused on the social and cultural issues arising from technological innovation, while the Asian non-profit think tank Digital Asia Hub provides a platform for research and capacity building. Earlier this year, the European Parliament voted for a regulatory framework for the research and usage of AI.

How many ethical watchdogs do we need?  

AI no longer sits in the realm of science fiction; its advances are real and getting more sophisticated by the day. Autonomous cars may soon become a reality on our roads, and there are complex problems to tackle. But can another industry-backed ethical watchdog tackle these issues on its own? Here’s our point of view – firstly, there could be vested interests at heart. Until DeepMind makes public the details of the money and resources behind the initiative, and how it plans to take it forward, the body could be seen as another vested interest. How can one judge that the research outcomes of the ethics body will not be driven by the profit motives of DeepMind and Google? Secondly, no ethics body can tackle the issue alone – it’s too broad a topic, and platforms like the Partnership on AI can make some advances in this field. Lastly, DeepMind conveniently left out the names of the people sitting on the ethics board. So much for transparency?

Even though DeepMind’s ethical research body will publish papers on the societal risks of AI, will there be an underlying commercial motive of influencing how AI gets regulated? DeepMind’s parent company Google is not far from controversy either. According to a report by the Campaign for Accountability, Google’s academic funding has come under the scanner; the body alleged that Google uses its immense wealth and power to attempt to influence policymakers at every level. According to the report, published in July, Google extended support to 329 research papers published between 2005 and 2017 on public policy matters that were of considerable interest to Google. Another interesting point outlined by the report was how Google’s paid policy research has broad reach and may have inadvertently influenced policymakers.

Here’s what we believe – tech giants will do anything to lobby for power, and want to play a role in regulating AI so that it can be tailored to their interests. Little wonder that millions are parked in AI research, which can then be peddled to policymakers who will base their decisions on it.


Richa Bhatia

Richa Bhatia is a seasoned journalist with six years’ experience in reportage and news coverage, and has had stints at Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old, and loves writing about the next-gen technology that is shaping our world.
