Bridging The Gap Between AI Policymakers and AI Developers

  • The current gap in policymakers' tech knowledge and technologists' ethics knowledge needs to be bridged to ensure AI's sustainable development.

When Sundar Pichai appeared before the US Congress in 2018, he was asked why searching for the word ‘idiot’ returned images of Donald Trump on Google. Pichai tried to explain how page indexing works and that there is no manual intervention in ranking the results, but the Congresswoman didn’t seem convinced.

The whole hearing was a study in the technological knowledge gap between policymakers and practitioners. Governments across the world, not just in the US, face the same challenge. The gap is a matter of grave concern because of the direct impact AI algorithms have on individuals and society at large.


On the other hand, AI firms have been accused of playing fast and loose with decision-making algorithms while paying little attention to their social and cultural implications. The ethical record of these firms leaves something to be desired.

The current gap in policymakers’ tech knowledge and technologists’ ethics knowledge needs to be bridged to ensure AI’s sustainable development.

Informed Policymakers

Knowledge building is critical to setting up an ethical framework in the AI domain. Only a well-informed group of policymakers can develop an appropriate policy framework and regulatory oversight. As things stand, politicians and policymakers are not yet there. This needs to change: it is a politician’s responsibility to safeguard constituents’ interests from the threats posed by algorithmic bias.

This does not mean politicians need to become experts in AI. But policymakers should take a proactive interest in understanding AI’s impact by bringing in ‘public-interest technologists’.

The term public-interest technologist is relatively new, but the concept is old. These are professionals who act as the interface between policymakers and technology providers, with an educational background in both the social sciences and computer science.

Professionals working at the intersection of AI and the social sciences are rare. To address the supply side of the issue, governments need to make changes in their education systems.

Though AI and data science courses are plentiful, most of them lack modules on ethical AI. To encourage more AI professionals to participate in public policy, governments should push or fund universities to introduce subjects on ethics, policy, and the social sciences in AI and data science courses.

Only by familiarising themselves with AI through qualified technologists can policymakers draft sensible regulation that strikes the right balance between developing ethical AI and maximising its potential.

Responsible AI Providers

While policymakers lack technical knowledge, AI developers and technologists lag in awareness of AI’s ethical implications. Through indiscriminate use of AI, tech corporations risk amplifying existing biases in society to the point of no return. Hence, producing ethical and trustworthy AI should top the list of corporate social responsibilities.

More companies need to start educating their employees on ethics and the implications of AI products for society. The process should start right at induction and continue throughout the employee lifecycle, with a training syllabus that keeps pace with the latest advancements in AI.

The training warrants quite a bit of investment from companies, since understanding the social and cultural context in which AI technologies are deployed takes patience and time. There are several ways to undertake such initiatives. Appointing a Chief AI Ethics Manager can help with overseeing the ethical side of AI and designing curricula for educating or upskilling staff. A top-down approach, from leadership to employees, can speed up the process of instilling ethical values in the firm. Further, many toolkits are available to help companies set up a training process.

Moreover, AI companies should stay abreast of government regulations and work with regulators to ensure compliance. This will help companies understand the government’s point of view, and compliance will keep firms out of the government’s crosshairs.

Collaborations

After the recent firing of AI ethicists at Google and the controversy that followed, Big Tech companies are racing to hire AI ethicists. The Biden administration has hired a public-interest technologist, Alondra Nelson, as deputy director of the White House Office of Science and Technology Policy. A network of 36 top higher education institutions, the Public Interest Technology University Network, has been formed to train engineers and social scientists on the social impact of their work.

Policymakers, universities, and private firms must work in tandem and keep communication lines open to get all stakeholders on the same page in terms of compliance and accountability. Without constant dialogue, AI governance initiatives will wither on the vine. The US has taken the right step by creating a new position to work with technology developers. Other governments and companies should follow suit.
