Michigan Law Professor Nicholson Price is teaching an interesting seminar this semester merging science fiction and legal analysis. We agreed that his students should write blog posts and that I would publish the most worthy on Patently-O. The first post comes from Lauren Kimmel and is focused on stopping future crimes. – DC
Guest Post by Lauren Kimmel
Steven Spielberg’s Minority Report (starring Tom Cruise) was released over a decade and a half ago, and yet, in many ways, the film has withstood the test of time. The film takes place in Washington, D.C., in the year 2054—nearly a hundred years after American writer Philip K. Dick published his original short story with the same name and general storyline. In the film, the District’s Precrime Division uses the futuristic and fatalistic visions of three “precogs” to detect and apprehend would-be “heat-of-passion” murderers before they can carry out their respective homicides.
Sixteen years later, Minority Report offers some interesting insight about where we are in the narrative of our own law and society. For example, the science fiction of the film bears remarkable, if not alarming, similarity to the technology behind predictive policing, a term used to encompass a range of real-world precrime detection systems currently in use around the country and even across the globe. The National Institute of Justice defines predictive policing as “taking data from disparate sources, analyzing them and then using the results to anticipate, prevent and respond more effectively to future crime.” We do not know exactly what predictive policing looks like from the inside out—but these precrime detection systems likely combine crime-mapping software, statistical data, police reports, and complex algorithms to help police better anticipate the next steps of would-be criminals.
On the one hand, predictive policing presents clear benefits to the communities that make use of it; it may help, for example, to halt everything from drug deals to domestic terrorism to mass shootings to violent or gang-related crime. But predictive policing also raises serious questions about our own “progress” toward a science-based society. Is science-based progress always a good thing? And even if it is not, is this our path, for better or worse?
Predictive policing helps us write a story about when and where crime will happen, as well as who will commit it. But in the context of our imperfect society, we must ask: Is this story the right one? Importantly, the “black box” of predictive policing technology obscures from public scrutiny its process for arriving at particular crime predictions, raising key constitutional and public policy concerns. Where does the data come from? What factors are entered into the algorithms? Are certain factors weighted more heavily than others? Does the technology behind predictive policing change over time to incorporate new patterns and findings—and if so, how? Does it learn (à la artificial intelligence), or does human instruction (and, along with it, human error) play a role? Do developers return to the black box of algorithms to clarify when a crime “occurred” but no one was ever charged (i.e., arrests vs. convictions)?
And most importantly, what purpose do we, as a society, want predictive policing to serve—and does the technology ultimately serve that purpose? What’s more, even though computers themselves are not biased, the statistics feeding them might be; moreover, predictive policing is only as good as the officers and analysts who handle the data. Even if human error, at least in the sense of data input, is not a primary concern, context may be just as important as the data itself. A lack of understanding of that context could (and likely does) exacerbate existing racial and socioeconomic tensions. For example, if police are already patrolling a poorer, primarily black neighborhood and, as a natural result, detecting more crime there than in the more affluent, perhaps primarily white neighborhoods they are not monitoring (but where crime may nevertheless be occurring), the crime maps compiled from these statistics may be skewed to reflect a measured bias against the former group. Or, if poorer communities are more likely to see theft—because many individuals lack and cannot afford basic necessities—the data may similarly imply that these communities are simply more “crime-prone” than others, when there is really more to the discussion.
One last consideration is that these predictive policing technologies are developed and operated by private companies, such as PredPol, HunchLab, and Upturn. We should ask whether it is wise to privatize these services and to trust private companies with developing and interpreting predictive policing technology. Consider the following quote from Rashad Robinson, Executive Director of Color of Change: “Sending corporate power and corporate interest into the criminal justice system will end in bad results. It will end in profits over people and profits over safety and justice and none of us can afford that.” Especially when we consider what privatization has done to our prison systems, it may be worth thinking twice about privatizing other aspects of the criminal justice system.