Over the years, artificial intelligence has been making inroads into the area of law and order. To deal with crime, AI has taken on a new form called "predictive policing AI". Yet since these systems came into use, they have not proven to be an effective way to fight crime. They have been criticised as biased, and some have gone as far as calling them a "scam".
What Is Predictive Policing?
Predictive policing is a black-box AI technology that applies mathematical, predictive analytics to historical policing data in order to turn it into actionable insights and flag potential criminal activity. These methods are used not only to predict crimes but also to predict likely offenders, likely victims, and the geographic areas with an increased chance of criminal activity.
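The models behind commercial systems are proprietary, but the basic "history in, hotspots out" pattern can be illustrated with a toy sketch. Everything below (the grid cells and the incident list) is invented for illustration; real systems use far richer data and statistical models.

```python
# A toy hotspot predictor: rank grid cells by historical incident counts.
# This only illustrates the general pattern; it is not how any real
# commercial system (PredPol, CrimeScan, etc.) actually works.
from collections import Counter

# Hypothetical historical incidents, each tagged with a (grid_x, grid_y) cell.
historical_incidents = [
    (2, 3), (2, 3), (2, 3), (5, 1), (5, 1), (0, 0), (2, 3), (5, 1), (7, 7),
]

def predict_hotspots(incidents, top_k=2):
    """Return the top_k grid cells with the most recorded past incidents."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

print(predict_hotspots(historical_incidents))  # [(2, 3), (5, 1)]
```

Note that a model like this can only echo its input: whatever distortions exist in the recorded incidents are passed straight through to the "predictions".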
Reports suggest that in the past few years a significant number of police departments around the world have adopted predictive policing. Some studies even claim that the US crime rate decreased after the implementation of these systems.
CrimeScan and PredPol are two well-known predictive policing platforms. CrimeScan, developed by two computer scientists at Carnegie Mellon University, predicts crime using a wide range of data, including crime reports and 911 calls. PredPol was created by researchers at UCLA more than eight years ago to explore how scientific analysis of crime data could help spot patterns of criminal behaviour. Today, both platforms are used by a significant number of law enforcement bodies.
Does Predictive Policing Work?
Despite its adoption across several departments and the claims of reduced crime rates, predictive policing is not even close to being a trustworthy system for fighting crime.
The AI Now Institute examined 13 police jurisdictions in the US and found troubling results: in nine of them, the predictive systems were built on data generated during periods when the police departments were engaged in various forms of unlawful and biased practices.
This was a heavy blow to supporters of predictive policing AI. Once the report came out, the belief that AI is inherently unbiased was shaken. In a data-driven era where a single piece of misinformation or bad data can alter the results, how can anyone expect an AI to predict criminal activity reliably when the data itself reflects unlawful practices?
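One way to see why this matters is a small, purely hypothetical simulation of the feedback loop critics describe. The area names, crime rates, and patrol splits below are all invented: both areas have the same true crime rate, but a biased initial deployment means one area's incidents get recorded more often, and a model that allocates patrols based on those records locks the bias in.

```python
# Hypothetical numbers: two areas with identical underlying crime rates,
# but a biased 80/20 initial patrol split.
true_crime_rate = {"area_a": 0.10, "area_b": 0.10}
patrol_share = {"area_a": 0.80, "area_b": 0.20}

recorded = {area: 0.0 for area in true_crime_rate}
for _ in range(5):  # five rounds of "patrol, record, re-predict"
    for area in recorded:
        # Recorded incidents scale with patrol presence, not true crime alone.
        recorded[area] += true_crime_rate[area] * patrol_share[area]
    total = sum(recorded.values())
    # The next round's patrols follow the recorded data.
    patrol_share = {area: recorded[area] / total for area in recorded}

# The 80/20 split persists indefinitely, despite identical true crime rates.
print(patrol_share)
```

The model never corrects the initial bias; it simply reproduces it, round after round, because the only evidence it ever sees is evidence its own deployments generated.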
While some might feel this is an exaggeration, there is a report stating that several police personnel concluded the predictive policing AI did not work in fighting crime, and their departments had to discontinue the system.
That is not all: these predictive policing systems often retain sensitive crime data on servers owned by third parties, which significantly raises the risk of a leak. It is nothing less than irony that companies building software to fight crime are failing to secure data whose exposure could itself enable criminal activity.
Outlook
AI undoubtedly has huge potential to make things better for humans. However, amid all the hype, we tend to forget that this sought-after technology delivers good results only when we feed it relevant, high-quality data. If the historical data is itself misleading or biased, the predictions will only be worse. And if you look closely, even the bad data is a human product, generated by corrupt and biased officers.