The Japanese government is considering the use of artificial intelligence (AI) to speed up policy decision-making, and is in talks with Palantir Inc about the possible use of the company’s big data analysis system, according to recent news reports.
To facilitate this, it has begun basic research and planning to base policy decisions on AI for national issues such as defence, national security and trade management, and to help control the spread of the novel coronavirus.
This latest move complements the plans of Japanese Prime Minister Yoshihide Suga, who has asked the government to speed up preparations for his flagship new digital agency, which will accelerate digitalisation in Japan.
Governments Use Algorithmic Decision Making
Similar to the Japanese government, a number of public organisations in several countries across sectors are now using data, AI and algorithms for decision-making processes.
For instance, one in three councils in the UK is using computer algorithms to help make decisions about benefit claims and other welfare issues. In the US, several public organisations, including the Army Research Laboratory, the Food and Drug Administration, and the Centers for Disease Control and Prevention, have also collaborated with Palantir, a Silicon Valley-based data science company.
India has not fallen behind either. The state governments of Delhi and Uttar Pradesh, in collaboration with the Indian Space Research Organisation, are using AI-based applications to locate crime ‘hotspots’ that help inform policing decisions. The state of Andhra Pradesh is also using AI in education, monitoring children to identify those at risk of dropping out so that schools can intervene with student-focused attention and curb school drop-outs.
“All the advantages of automation like reliability, efficiency, and low human dependency apply to the use of AI across organisations,” said Sahil Deo, co-founder and CEO of CPC Analytics, a data-driven policy consultancy firm.
“Also, in terms of decision-making, while a domain expert in a particular field could make quick and accurate decisions based on their experience, AI can assist people with less experience to do the same,” said Deo.
Risk of a Systemic Bias
While decisions based on AI can improve accuracy, an important concern is that algorithms can reinforce existing societal biases.
This calls for greater transparency about algorithmic or machine learning decision processes, and for ways to understand and audit how an AI agent arrives at its decisions or classifications.
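As a concrete illustration of what such an audit might look like, one common starting point is checking whether a system’s positive decisions are distributed evenly across demographic groups (a criterion known as demographic parity). The sketch below is illustrative only: the `demographic_parity_gap` helper and the benefit-claim data are hypothetical, not drawn from any system mentioned in this article.

```python
# A minimal sketch of one way to audit an automated decision system for
# group-level bias: compare positive-decision rates across a protected
# attribute (demographic parity). All data here is made up.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates
    between any two groups (0.0 = perfectly balanced)."""
    counts = {}
    for decision, group in zip(decisions, groups):
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + decision, total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical benefit-claim outcomes (1 = approved) for two applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # prints "Demographic parity gap: 0.20"
```

A real audit would go further, for example testing statistical significance and examining other fairness criteria such as equalised odds, but even a simple gap metric like this gives an auditor something concrete to review.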
“There isn’t a well-defined, or as we say, a gold standard in terms of what the best practice is to address this issue,” said Deo, whose research focuses on the FATE (Fairness, Accountability, Transparency and Explainability) of algorithms. “It’s still an evolving field, and it will be some years before there is a consensus.”
Dr Daan Kolkman, a senior researcher in decision-making at the Jheronimus Academy of Data Science, wrote in a recent article, “I found that many of the advantages attributed to the use of algorithms often failed to materialise in policymaking contexts and that the transparency of algorithms to non-experts is at best problematic and at worst unattainable.”
Dr Kolkman makes a case for establishing a ‘public watchdog’ for algorithmic policymaking, arguing that it is currently unclear what ‘transparency’ entails in practice, and that ‘explainability’ is unlikely to have the desired effect.
He said, “Rather than seeking out transparency for its own sake, efforts towards algorithmic accountability would be better served by exploring ways to institutionalise the review and scrutiny of algorithms.”
The use of algorithms, data, and AI can bring greater accuracy and efficiency to processes across sectors. However, public organisations adopting such technologies must ensure fairness, especially when policy decisions with a direct socio-political impact on people will depend on them.
Establishing proper frameworks, institutions, and legal infrastructure to hold these algorithms accountable is therefore important.