A recently released report, Automating Society 2020, provides policy recommendations aimed at ensuring that Algorithmic Decision-Making (ADM) systems currently deployed, and those about to be implemented, throughout Europe are consistent with human rights and democracy.
Implementing ADM systems in the public sector raises several issues, including the black-box nature of the algorithms, the risk of normalising mass surveillance, and the growing challenges resulting from the lack of a robust legal framework. Despite this, the report states, these systems are still being deployed without any meaningful democratic debate and already affect all sorts of public activities and judgements.
The report, published by Algorithm Watch in collaboration with Bertelsmann Stiftung, provides three broad policy recommendations: increasing the transparency of ADM systems, creating a meaningful accountability framework, and enhancing algorithmic literacy among the public.
Policy Recommendations By The Report
The report stated that without the ability to know precisely how, why, and to what end ADM systems are deployed, all other efforts for the reconciliation of fundamental rights and ADM systems are doomed to fail.
Not only does the report ask for algorithms used by public administrations to be made public, but it also calls for a legal obligation to disclose and document each system, provide an explanation of the model, and make public the information about who developed it. Private-sector organisations should be legally obligated to do the same if their ADM system has a significant impact on an individual, a specific group, or society at large.
Apart from making all these documents public, the report also called for the data used by these systems to be made available for research through the introduction of robust, legally binding data-access frameworks. This is focused explicitly on supporting and enabling public-interest research, in full respect of data protection and privacy law.
As the term ‘auditing’ is not yet clearly defined, the policy document calls for audit criteria and an appropriate process to be developed that take into consideration the effect ADM systems might have on vulnerable groups and solicit their participation.
The report advocated treating civil society organisations as public ‘watchdogs’ of ADM systems, as its authors firmly believe that the role these organisations play in effectively challenging opaque ADM systems is crucial.
The report further called for a decisive ban on ‘high risk’ ADM systems, especially those that use biometric technologies, including face recognition systems that can infringe on human rights.
Transparency of such ADM systems is only truly useful if those with comparable expertise can challenge them. Hence, the document calls for the establishment of independent centres of expertise on ADM at the national level to monitor, assess, conduct research on, report on, and provide advice to government and industry.
However, this body should not have regulatory powers and should work in coordination with regulators, civil society, and academia to provide essential expertise on how to protect individual human rights.
Finally, it is also vital to improve public literacy about systems that will affect people directly. Hence, the document highlights the urgent need to include the public in decision-making about ADM systems.
How Ready Is India
India has already implemented several ADM systems in the public domain. For instance, predictive policing algorithms to identify ‘probable targets’ or locate ‘hotspots’ are being deployed by several state governments.
Aadhaar, which collects biometric information, has been used as a system to identify and classify citizens in ways that experts have questioned, highlighting the resulting risk of mass surveillance.
Unlike the EU, India currently has neither a data protection law nor a legal framework addressing biases resulting from algorithms. The extant legal framework extends only to the Information Technology Act, 2000 (IT Act 2000) and the Sensitive Personal Data or Information Rules (SPDI Rules) framed under that act, which define security standards and protocols for disclosing information.
However, these were not designed to address issues of transparency, explainability or accountability of ADM systems, which are defined in the policy recommendations of Automating Society 2020.
The Data Protection Bill, 2018, drafted by the Srikrishna committee, addresses some of these issues by putting more onus on organisations processing people’s information. However, it also has limitations. While it requires consent from the data owner before data is processed, this principle is diluted by the language of the clause on processing limitations. Moreover, under another clause, the bill authorises the state to process personal data for “the exercise of any function of the state” without the individual’s consent.
Society today is witnessing several ADM systems that make decisions for us without our knowing whether those decisions are fair.
A robust legal infrastructure needs to be developed to ensure the transparency, accountability and explainability of these systems before further damage is done to individual rights and democracy.