Lesson 10: Does predictive policing make us all safer?
In this Wireless Philosophy video, Ryan Jenkins (professor of Philosophy at Cal Poly) examines law enforcement’s increased use of artificial intelligence for predictive policing. How should we balance the efficiency and safety benefits of this technology with concerns about its tendency to perpetuate historical biases and place unfair burdens on historically marginalized populations? Created by Gaurav Vazirani.
Video transcript
Hi, I’m Ryan Jenkins, a philosophy professor at
Cal Poly in San Luis Obispo, writing with the help of Tara Dixit. Law enforcement organizations are often
among the first to adopt new technologies to make themselves more
effective in the fight against crime. For example, police have been quick
to adopt facial recognition technologies to more quickly identify suspects
or those with outstanding warrants. But new technologies also
raise important ethical questions. Let’s see how. Few people could argue against
the goal of these new technologies: of course we want to
keep our communities safe, and of course we want to catch
dangerous criminals as quickly as we can. But what if we could
take this a step further, and try to predict where crime is
going to occur before it even happens? For example, police
have known for a while that crimes tend to be
concentrated in certain areas in a city. For instance, you can see there are
more murders or burglaries in Brooklyn than on the island of Manhattan. It’s reasonable to expect that
more crimes would happen, say, when a crowd of revelers files
out of a sports arena after a game. Or that crimes might happen
on a street lined with bars right around 2 or 3 AM when the bars close. The technology of
so-called “predictive policing” uses these kinds of insights,
harnessing artificial intelligence, to forecast not just the probability of a
crime being committed in the near future, but also the time, location, maybe
even the perpetrator of this expected crime. Some types of this AI
are based on location data. They might, for example, divide a city
into a grid of 500-by-500-foot squares and identify “hot spots” where
crimes are expected to occur, based on data like historical crimes,
weather, or other urban features, like abandoned buildings, liquor
stores, and large parking lots, which can provide attractive
opportunities for criminals.
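To make the location-based approach concrete, here is a minimal sketch in Python of how a system like this might score grid cells and flag hot spots. The feature names, weights, and numbers are hypothetical illustrations, not the actual model of any vendor or police department.

```python
# Hypothetical sketch of location-based "hot spot" scoring.
# Feature names, weights, and data are illustrative only.
from dataclasses import dataclass

@dataclass
class Cell:
    row: int                  # position in the 500-by-500-foot grid
    col: int
    past_crimes: int          # historical crime reports in this cell
    abandoned_buildings: int  # "attractive opportunity" features
    liquor_stores: int
    large_parking_lots: int

def risk_score(cell: Cell) -> float:
    """Combine historical crime counts and urban features into one score.
    Real systems learn these weights from data; here they are made up."""
    return (1.0 * cell.past_crimes
            + 0.5 * cell.abandoned_buildings
            + 0.3 * cell.liquor_stores
            + 0.2 * cell.large_parking_lots)

def hot_spots(cells: list[Cell], top_k: int = 3) -> list[Cell]:
    """Return the top_k highest-scoring cells as suggested patrol targets."""
    return sorted(cells, key=risk_score, reverse=True)[:top_k]

# Example: a tiny 2x2 "city".
city = [
    Cell(0, 0, past_crimes=12, abandoned_buildings=3, liquor_stores=2, large_parking_lots=1),
    Cell(0, 1, past_crimes=2,  abandoned_buildings=0, liquor_stores=1, large_parking_lots=0),
    Cell(1, 0, past_crimes=7,  abandoned_buildings=1, liquor_stores=0, large_parking_lots=2),
    Cell(1, 1, past_crimes=1,  abandoned_buildings=0, liquor_stores=0, large_parking_lots=0),
]
for cell in hot_spots(city, top_k=2):
    print(f"Patrol cell ({cell.row}, {cell.col}): score {risk_score(cell):.1f}")
```

Whatever the exact model, the output is the same kind of thing: a ranked list of places where officers are told crime is most likely. Police departments,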
including the LAPD and NYPD, have used these technologies
to encourage police officers to patrol near predicted hot spots. The thought is, and data the show,
that police presence in these areas can tamp down crime,
at least for a few hours. According to LAPD officers, having a law enforcement
presence in hot spots is proactive and prevents crime from occurring. Other cities around the country
are also noticing a difference. After adopting a version of the technology,
Santa Cruz saw a significant decline in burglaries,
and Atlantic City saw a drastic reduction in crime. In addition, predictive policing allows
police departments to save money and time and optimize patrol resources, with some departments saving tens of millions
of dollars in the course of a few years. So: predictive policing
makes sense as a strategy, there is evidence that it reduces crime, and it’s likely to save police departments,
and ultimately taxpayers, a lot of money. What could be wrong with this technology? The major worry is that predictive
policing could exacerbate the already unfair policing of minorities, help to justify wrongful arrests, and disproportionately target
low-income communities. For one thing, think about the historical
data that these systems are trained on. We have good reason to think
that a lot of this data reflects the biases of past police behavior. While the law is supposed
to treat us all equally, we know that it’s not
always enforced equally, and that minorities can be scrutinized
or punished more severely than whites. For example, we know that blacks
are arrested more often than whites for crimes like drug possession, even though whites and blacks
use drugs at about the same rates. If you feed skewed arrest
data like this into an AI system, it’s going to “learn” that blacks tend to
be more dangerous, which is not true. The predictions generated would
recommend that officers spend disproportionate time policing
minority neighborhoods, where they’re liable to encounter
people committing crimes, generating more disproportionate
arrest data, and so on. This creates a harmful “feedback loop.” The result is that police presence
in these areas is “ratcheted up,” or increased more and more, until the effects are disproportionately
concentrated in minority communities.
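To see how this ratcheting could play out, here is a small, purely illustrative Python simulation; the neighborhoods, crime rates, and patrol rule are invented. Two areas have identical underlying crime rates, but the one that starts out over-represented in the arrest data keeps getting flagged as the hot spot.

```python
# Purely illustrative simulation of the arrest-data "feedback loop."
# Neighborhoods, rates, and the patrol rule are invented for this example.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}   # the underlying rates are identical
recorded_arrests = {"A": 60, "B": 40}       # but A is over-represented in past data

for year in range(1, 6):
    # The system flags the neighborhood with more recorded arrests as the "hot spot,"
    # and the department sends most of its patrols there.
    hot = max(recorded_arrests, key=recorded_arrests.get)
    patrols = {n: (70 if n == hot else 30) for n in recorded_arrests}
    # More patrols in an area mean more of the same crime gets seen and recorded there.
    for n in recorded_arrests:
        recorded_arrests[n] += round(patrols[n] * TRUE_CRIME_RATE[n] * 10)
    share = 100 * recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"Year {year}: hot spot = {hot}, A's share of arrest data = {share:.1f}%")
```

In this toy model, neighborhood A's share of the recorded arrests climbs year after year, even though nothing about the underlying behavior differs between the two areas. Of course, these AI systems are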
never fed data about race directly. That would likely be illegal,
and it would surely be unfair. But they are fed other data
on the location of crimes or the income level of the
surrounding neighborhood, and these data in turn
correlate very closely with race. This could make the
systems “biased by proxy.” While we think that data
about crime might be objective, that’s not really clear either. In fact, a lot of debatable human decisions
go into creating data about crime. For example, what should
we count as a crime? What crimes should police pursue
and investigate most intensely? If police discover someone in the act, do they end up arresting the suspect
or letting them off with a warning? Whether a crime “took place” depends
on many subjective human priorities, choices and interpretations of events. Imagine, for example, the difference between training an
AI system on data about police arrests versus only on data about arrests
that actually resulted in jury convictions. Given that the overwhelming majority
of cases are settled by plea deals rather than by jury trials, you’d undoubtedly get two very different
pictures of the “crime” in a community. So, here is the situation. On the one hand, we have a new technology
which is effective at lowering crime and reducing costs,
according to its proponents. On the other hand, this technology tends
to burden some people more than others. In particular, it more
significantly impacts minorities, low-income people, and others who are already facing
disproportionate disadvantages in society. When is it fair to disproportionately
burden one part of society if this benefits the rest,
say, by deterring crime? Does fairness require
abandoning a technology, even if the consequences are
that the public is less safe overall? Is that simply the cost of fairness? Or could there be some
way to use this technology that avoids these alleged
disproportionate impacts on minority or low-income communities? Perhaps in the way that data
is gathered and examined? Or the way police choose
to use the technology? And what space should we reserve for human oversight when using AI
could impact people’s rights and freedoms? What do <i>you</i> think?