This is the 10th article in our weekly Expert's Opinion series, where we talk to academics who study AI and other emerging technologies and their impact on society and the world.
This week, we spoke to Vidushi Marda, Senior Programme Officer at ARTICLE 19, who investigates the consequences of integrating artificial intelligence systems in societies. Her research focuses on technology regulation, asymmetric power relations and fundamental rights.
Marda is also an affiliate researcher at Carnegie India and a member of the Expert Group on Governance of Data and AI at United Nations Global Pulse. In the past, she has collaborated with DATACTIVE at the University of Amsterdam and Privacy International, among others.
Analytics India Magazine caught up with Marda to understand her recent research on New Delhi’s predictive policing system and the threats of using data-driven systems to control crime in India.
AIM: What’s a predictive policing system? Tell us about the threats such systems pose.
Marda: Predictive policing involves analysing real-time or historical data to predict when or where future crime is most likely to occur, or which individuals are more likely to engage in criminal activity. The assumption here is that such analysis will allow for more efficient allocation of limited law enforcement resources and, in some cases, allow law enforcement to “stop crime before it occurs”.
Let’s consider for a moment the underlying rationale for predictive policing – it posits that the future will look like the past, that past patterns of criminality are a reliable indicator of future occurrences. As research from India and across the world has shown, however, data and practices within the criminal justice system are not ground truth and are more often than not inaccurate and skewed representations of ground realities. The fundamental building blocks of predictive policing systems, thus, draw from an unfair and discriminatory legacy, which brings up important questions of how efficient, appropriate or legal their use can truly be.
Much of the enthusiasm around predictive policing is predicated on the idea that the use of these systems can lend efficiency and fairness to policing practices, but there is mounting evidence of the many ways in which predictive policing reinforces, exacerbates and obscures existing power asymmetries and social inequality. In addition to our piece on New Delhi’s predictive policing system, I suggest that readers who are interested in learning more about this read work from William Isaac, Kristian Lum, Sarah Brayne and Fieke Jansen.
AIM: How do current practices in Delhi Police influence technical systems like CMAPS? How do we overcome these technical issues?
Marda: In joint work with Shivangi Narayan, PhD Candidate at Jawaharlal Nehru University, we have distilled some of the key institutional practices and assumptions within Delhi Police’s Digital Mapping Division (DMD). The DMD focuses on manual hotspot mapping of crime, and forms the basis on which systems like CMAPS will be built.
I don’t know if I’d characterise these issues as purely technical – in our research, we found that these are questions and challenges that need to be understood through a socio-technical lens. Institutional and individual actors make a number of subjective calls on an hourly basis – which calls are legitimate and which are not, which offences are worth plotting onto the final maps and which can be forgotten or negotiated away, etc. All of these decidedly non-technical choices are then transferred onto systems like the DMD’s hotspot map and treated as a reliable indicator that undergirds systems like CMAPS.
It’s incredibly important to unearth the assumptions that form part of institutional culture. For instance, at the beginning of our study, we were hoping to have access to datasets, models and outcomes. After spending time within the institution, however, we soon learnt that it was crucial for us to start a few steps earlier — what does the process of data creation and collection look like, what drives the individuals who carry out these functions, and what do they take for granted? It may come as no surprise at this point, but I’m a big advocate for constantly going back to the basics to solve these issues – fix the ways in which we attribute criminality to individuals and communities, recognise the problematic historical context from which data has emerged, etc.
AIM: What kinds of biases are present in police data, and how are they introduced? What are the consequences of data biases in predictive policing?
Marda: As we outline in our paper, we found that a number of sweeping assumptions pervade the process of call taking. For instance, calls from weaker socio-economic sections of society are taken less seriously than calls from more affluent areas. The “legitimacy” of crimes against women often hinges on whether the woman was at ‘fault’ or not (i.e., was she with a male friend or wearing ‘skimpy’ clothes).
It is incredibly worrying when these subjective decisions are decontextualised, reduced to a data point and fed into a system like CMAPS. Stripping data of its context means actively rejecting the nuanced and often unfair ways in which that data came to be. Without truly embedding this understanding into practice, we risk over-policing vulnerable communities and repeating past patterns of discrimination.
There is a strong presumption of the validity and value of emerging technologies in law enforcement institutions, but what we really need to do is create space to question the existence of these data-driven systems in the first place. Machine learning, in general, does not lend itself readily to complex societal problems and this fundamental inconsistency in using it for something as nuanced as crime is something we need to emphasise more.
AIM: Why are historically marginalised and vulnerable groups disproportionately impacted due to such systems and how can this be avoided?
Marda: The legacy of discrimination, inequality and power asymmetries that pervades societies in general, and the criminal justice system in particular, means that historically marginalised and vulnerable groups are overrepresented in datasets. It is also well understood that the relationship between vulnerable groups and law enforcement actors is often contentious and strained. When using predictive policing systems, institutions are wilfully blind to these human complexities and simply treat crime statistics and recorded criminality as a reliable indicator of future crime – such systems are thus primed to exacerbate inequalities that have plagued societies for generations.
AIM: Should predictive policing be banned?
Marda: Predictive policing is based on a number of problematic assumptions and practices, as (I hope) I have outlined above. While I do think the power of technology should be harnessed to enhance processes, the pitfalls of predictive policing lead one to believe that the optimisation game is not worth pursuing in this context, i.e. we do not need to try to fix the fundamental problems with the idea of predictive policing, but rather abandon the idea and replace it with more thoughtful and deliberate policing practices.