Trevor Kim, Grade 11
The use of artificial intelligence (AI) has become increasingly critical to modern society. AI algorithms are frequently used to automate industrial processes, streamline supply chains and even set our oven timers. Intrigued by the efficacy of this technology, law enforcement agencies have invested heavily in developing AI to assist with policing. One emerging use has been in the field of place-based predictive policing, a prophylactic measure designed to “enhance existing approaches to policing” by predicting when and where crimes will occur (1). Proponents argue that knowing where crimes are likely to occur will allow police to strategize better and increase public safety. But these purported benefits disguise the darker side of this technology and its potential to generate biased results that disproportionately affect minorities and lower-income communities. If automated policing is to have any legitimacy, safeguards must be implemented to address the pitfalls that may arise from using this technology.
The most prominent form of predictive policing is place-based predictive policing, which operates on two basic principles: using algorithms to forecast “future crime risk in narrowly prescribed geographic areas” and delivering “police resources to those . . . locations” in order to deter crime (2). These programs are powered by algorithms that analyze a specific set of data. While each proprietary program studies different data sets, in general, algorithms review crime type, the locations and times of past crimes, demographic information about those arrested and population density (3). Once the data is analyzed, the program forecasts the occurrence of crime in a specific geographic area. For example, PredPol, software used by more than 60 police departments throughout the country, generates heat maps that highlight areas of interest with boxes shaded in different tones of red to indicate the danger or severity of the crimes predicted there. Law enforcement uses these forecasts to shape policies such as increasing patrols in the flagged areas, conducting more traffic stops or even setting up checkpoints.
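To make those mechanics concrete, the toy sketch below (in Python) bins past incidents into grid cells and scores each cell by a recency-weighted count, a deliberately simplified stand-in for this kind of hot-spot forecast. The cell size, the weighting scheme and the sample coordinates are all assumptions made for illustration, not the workings of PredPol or any other vendor’s software.

```python
# Illustrative sketch only: a toy grid-based "hot spot" forecast, not any
# vendor's proprietary model. Cell size, recency weighting and the sample
# incidents are assumptions chosen for demonstration.
from collections import defaultdict

CELL_SIZE = 0.005  # roughly a few city blocks per grid cell (assumed)

def cell_of(lat, lon):
    """Snap a reported incident to a grid cell (a 'narrowly prescribed area')."""
    return (int(lat / CELL_SIZE), int(lon / CELL_SIZE))

def forecast_heat(incidents, top_k=3):
    """Score each cell by recency-weighted incident counts and return the
    top_k 'hottest' cells -- the boxes a heat map would shade darkest.

    incidents: list of (lat, lon, days_ago) tuples for past recorded crimes.
    """
    heat = defaultdict(float)
    for lat, lon, days_ago in incidents:
        # Recent crimes count more than older ones -- a common modeling choice,
        # assumed here; real systems use far more elaborate statistics.
        heat[cell_of(lat, lon)] += 1.0 / (1 + days_ago)
    return sorted(heat.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    history = [(29.951, -90.071, 2), (29.951, -90.072, 10), (29.960, -90.080, 1)]
    for cell, score in forecast_heat(history):
        print(f"cell {cell}: risk score {score:.2f}")
```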
These algorithms, however, are not foolproof. There is troubling evidence that the forecasts generated by predictive policing software have led to disparate treatment of minorities and lower-income communities. One explanation is that these algorithms are too simplistic. For example, PredPol uses a statistical modeling method similar to one used to predict earthquakes (4). While it might make sense to predict the location of future earthquakes from data on where past ones have occurred, applying the same principle to crime, where the variables driving causation are far more numerous, is problematic. Others point to the self-perpetuating nature of predictive policing software. According to Suresh Venkatasubramanian, a professor at the University of Utah, “[w]hen a tool like PredPol tells police where to go, crime data starts to be affected by PredPol itself, creating a self-reinforcing feedback loop” (4). Predictive policing software, as designed, can have the unintended effect of redirecting police to the same areas over and over again.
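The feedback loop Venkatasubramanian describes can be illustrated with a short, hypothetical simulation. In the sketch below, every area has the same underlying crime rate, but the historical records start out skewed toward one over-policed area; because patrols follow the recorded data and more crime gets recorded wherever patrols go, that area keeps looking like the hottest spot. All of the numbers are invented for demonstration and do not model any real department.

```python
# Hypothetical simulation of a self-reinforcing feedback loop, not data from
# any real deployment. The number of areas, detection rates and initial skew
# are assumptions chosen to make the dynamic visible.
import random

random.seed(0)
NUM_AREAS = 5
TRUE_CRIME_RATE = 10          # every area has the same underlying crime per week
BASE_DETECTION = 0.2          # chance a crime is recorded without extra patrols
PATROLLED_DETECTION = 0.8     # chance a crime is recorded in the patrolled area

# Historical records start out skewed: area 0 was over-policed in the past.
recorded = [30, 10, 10, 10, 10]

for week in range(20):
    # The "forecast": send patrols wherever the most crime has been recorded.
    patrolled_area = max(range(NUM_AREAS), key=lambda a: recorded[a])
    for area in range(NUM_AREAS):
        rate = PATROLLED_DETECTION if area == patrolled_area else BASE_DETECTION
        # Same true crime everywhere, but more of it is recorded where police go.
        recorded[area] += sum(random.random() < rate for _ in range(TRUE_CRIME_RATE))

print("Recorded crime after 20 weeks:", recorded)
# Area 0 ends up with far more recorded crime than the others, even though the
# underlying crime rate was identical -- the forecast reinforced its own input.
```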
The above flaws point to a deeper issue, one that pre-dates predictive policing technology by many years. The root of the problem is not the computers, the algorithmic code or the technology, but the people behind these programs. Ample research has demonstrated that racial bias exists and has existed for decades, if not hundreds of years, in the business of policing (5). An algorithm can only assess data as it is instructed to, and if that data is inherently flawed, the resulting forecasts and policies will be flawed as well. This very concern was recently brought to light during the US Department of Justice’s (DOJ) investigation of the New Orleans Police Department (NOPD). In 2010, the DOJ investigated allegations that the NOPD had used excessive force and disproportionately targeted minority groups. The DOJ’s report concluded that the NOPD had partnered with Palantir, a predictive policing software company, whose program data-mined historically biased records and used that information to make forecasts. Use of this flawed data was in part responsible for the biased and disproportionate targeting of minority communities in New Orleans (6).
The very real pitfalls of predictive policing software call for robust safeguards to ensure that biased treatment does not persist. Erik Bakke argues that while there are a number of different institutions that could be tasked with this type of accountability, “public oversight is the best option” (7). This can be achieved in a number of ways. Police departments could be transparent about what data inputs are used and make the forecasts of predictive policing software public record. Furthermore, police departments could partner with community organizations and incorporate their input before implementing policies. Police departments could also invite third-party auditors to evaluate their software, something the New York Civil Liberties Union has suggested the New York Police Department do (8). Collaboration with the public will go a long way toward providing legitimacy to these programs, preventing misconduct and building community trust (7).
Not only is public oversight important, but departments should also invest in internal committees that monitor and self-assess the results of their own programs. The National Institute of Justice insists that those supervising predictive policing must be diligent in overseeing information-gathering processes (9). A review board could scrutinize the algorithms, the data that is being used and the software forecasts in order to expose flaws and fix them before discriminatory policies are implemented.
Ultimately, eliminating bias in predictive policing programs requires addressing the root of the problem. Algorithms do not design themselves. They are merely a series of symbols and numbers, meant to function in a narrowly prescribed way, as intended by their designer. If the data is biased or skewed, so too will be the resulting forecasts. Better data, then, requires better policing. Departments should take a long, hard look at their personnel and practices and actively root out prejudice and bias. Only when the people behind the technology have taken steps to address their prejudices and departments have reformed their practices can society truly begin to embrace predictive policing.
Works Cited
[1] Overview of Predictive Policing. National Institute of Justice, (2014).
[2] K. J. Bowers, et al., Prospective Hot-Spotting: The Future of Crime Mapping? British Journal of Criminology 44, 641-658 (2004). doi: 10.1093/bjc/azh036
[3] W. L. Perry, et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation, (2013). doi: 10.7249/RR233
[4] C. Haskins, Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed. Vice, (2019).
[5] J. Legewie, Racial Profiling and Use of Force in Police Stops: How Local Events Trigger Periods of Increased Discrimination. American Journal of Sociology 122, 379-424 (2016). doi: 10.1086/687518
[6] K. Hao, Police Across the US are Training Crime-Predicting AIs on Falsified Data. MIT Technology Review, (2019).
[7] E. Bakke, Predictive Policing: The Argument for Public Transparency. New York University Annual Survey of American Law 74(1), 131-172 (2018).
[8] M. R. Sisak, Modern Policing: Algorithm Helps NYPD Spot Crime Patterns. AP News, (2019).
[9] L. Gordon, A Byte Out of Crime: Predictive Policing May Help Bag Burglars – But it May Also be a Constitutional Problem. ABA Journal 99(9), 18-19 (2013).