Suppose that a plane has just landed and a flu pandemic may be emerging. You have the time and resources to check some but not all of the arriving passengers for signs of influenza. A small fraction of the passengers are arriving from Mexico where the pandemic originated and the others have not been to Mexico. How do you allocate your searches?
Efficient screening means that the probability of finding an infected passenger should be equalized across the groups that you screen: if searches of one group yield a higher infection rate than searches of another, you should reallocate searches toward the first group. Since the passengers arriving from Mexico are much more likely to be infected, you will probably use all of your searches on them.
Even though the passengers from Mexico are being searched disproportionally more often than the others, this is not because you are discriminating against them. Your motive is simply to use your limited resources most effectively to stop the spread of the virus.
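The allocation logic in this example can be sketched in a few lines. This is a toy model: the function name, group sizes, and infection rates are all invented for illustration.

```python
# Toy sketch of efficient screening: each search goes to whichever
# unexhausted group currently offers the higher probability that the
# next person screened is infected. All numbers here are made up.

def allocate_searches(groups, budget):
    """groups: dict name -> (group_size, infection_rate).
    Returns dict name -> number of searches allocated."""
    remaining = {name: size for name, (size, _) in groups.items()}
    rates = {name: rate for name, (_, rate) in groups.items()}
    alloc = {name: 0 for name in groups}
    for _ in range(budget):
        candidates = [n for n in groups if remaining[n] > 0]
        if not candidates:
            break
        best = max(candidates, key=lambda n: rates[n])
        alloc[best] += 1
        remaining[best] -= 1
    return alloc

# 30 searches; 20 passengers from Mexico (10% infected), 180 others (0.5%):
# all 20 Mexico passengers are screened, the last 10 searches spill over.
alloc = allocate_searches({"mexico": (20, 0.10), "other": (180, 0.005)}, 30)
```

Because within-group rates are constant here, the greedy rule reduces to exhausting the higher-rate group first, which is exactly the post's point; with rates that fall as a group is searched more, the same rule equalizes marginal yields instead.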
These ideas should be kept in mind when you read articles like this one (via The Browser), which claim that the disproportionate number of searches of black motorists on the highways shows that the police are racially biased. The police probably are racists; that would not surprise anybody. But the fact that they stop and search black motorists more often than whites is not evidence of racism, unless it can be shown that the proportion of stopped black motorists who are found to be committing a crime is smaller than the proportion of stopped white motorists.
In fact, this 2001 paper by Knowles, Persico, and Todd tests for this using one particular data set and finds no evidence of bias. I don’t know where the literature has gone since then; probably there have been other studies with other findings, but it’s important to know what the right test is.
14 comments
May 6, 2009 at 10:25 pm
Nathan Fiala
I have two problems with this idea.
First, in order to obtain testable conclusions, the paper you cite “assumes that motorists respond to the probability of being searched.” Isn’t this a very strong assumption, given what we know about the irrationality of crime and deterrence? And even if it weren’t a strong assumption, isn’t it endogenous, since that response is itself a variable of interest to the test?
Second, the paper, and you, assume that we care about the likelihood of guilt conditional on race. Shouldn’t we care about just the likelihood of guilt? In that case, whites are much more likely to commit a crime. This approach also fits your airplane story: of all the people on the plane, we target people from Mexico since they are so much more likely to have the flu, no matter how many people from Mexico are on the plane.
May 7, 2009 at 9:04 am
jeff
The model does assume that we care about crime unconditionally. It is a conclusion, not an assumption, that to minimize crime unconditionally the searches should be targeted, and that if two groups are searched, the allocation of searches across groups should equalize the effectiveness of the marginal search. This implies that groups with a higher per-capita crime rate will be searched more often.
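Jeff's equalization condition can be illustrated with a toy closed form. This is my own parametrization, not the Knowles–Persico–Todd model: suppose group g's offense rate falls linearly in its search probability, c_g(p) = b_g(1 − p). Equalizing the hit rate across groups, subject to the search budget, then pins down the allocation.

```python
def equalizing_allocation(groups, budget):
    """groups: dict name -> (population N_g, base offense rate b_g).
    Offense rate when searched with probability p is b_g * (1 - p).
    Returns (common hit rate c, dict name -> search probability p_g).
    Assumes the budget is large enough that every p_g lands in (0, 1);
    smaller budgets give a corner solution this sketch ignores."""
    total_pop = sum(n for n, _ in groups.values())
    inv = sum(n / b for n, b in groups.values())
    # Solve sum_g N_g * (1 - c / b_g) = budget for the common hit rate c.
    c = (total_pop - budget) / inv
    return c, {name: 1 - c / b for name, (n, b) in groups.items()}

# 200 people with base rate 3%, 800 people with base rate 1%, 300 searches.
c, probs = equalizing_allocation({"A": (200, 0.03), "B": (800, 0.01)}, 300)
# The higher-base-rate group is searched far more often (about 0.73 vs 0.19),
# yet the marginal search is equally productive in both groups.
```

The per-capita search rates come out very unequal even though the hit rate of the last search is identical across groups, which is the distinction the post turns on.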
May 7, 2009 at 1:08 pm
Nathan Fiala
True, that is an outcome of the model, but the model makes, to my mind, some pretty strong assumptions to get there. It requires that individuals know and care about their chance of being caught conditional on race, and that a specific equilibrium condition holds.
I am wary of complex models that give unintuitive results using strong assumptions. I think that is exactly what this model does. I presented a simpler model, one where we simply care about likelihood of crime/disease of the population, unconditional on ethnicity.
May 6, 2009 at 10:55 pm
Brian Burke
Excellent post. But I have a problem with “The police probably are racists…” Actually, according to the paper you cite, they probably aren’t; otherwise the minorities pulled over would have a lower likelihood of committing a crime.
A sweeping statement like that might make someone think the writer is “probably just heading off the predictable accusation that he himself is a racist for defending profiling.” But that would be unfair, although no less fair than your original statement about police.
Being labeled a racist is one of the worst things in our society. But there’s an easy way to prevent that…just call someone else a racist.
May 7, 2009 at 9:02 am
jeff
Guilty as charged. And thanks for calling me on it. My belief that the police are probably racists is based on personal experience, as well as the observation that the profession of policing selects for people of certain types. That includes racists as well as virtuous people, just as professors are more likely than average citizens to be smart-asses.
But I am not citing any data here, just expressing a subjective opinion. Both of these are wild generalizations, and individuals are unique.
May 6, 2009 at 11:15 pm
Anonymous
There is a fantastic paper on racial profiling which uses a novel approach (a kind of natural experiment) and finds little, though some, evidence of racial profiling in Oakland, California:
Jeffrey Grogger and Greg Ridgeway, “Testing for Racial Profiling in Traffic Stops From Behind a Veil of Darkness,” Journal of the American Statistical Association, Vol. 101, No. 475 (September 2006), pp. 878–887.
Part of the abstract: “our approach makes use of what we call the “veil of darkness” hypothesis, which asserts that police are less likely to know the race of a motorist before making a stop after dark than they are during daylight. If we assume that racial differences in traffic patterns, driving behavior, and exposure to law enforcement do not vary between daylight and darkness, then we can test for racial profiling by comparing the race distribution of stops made during daylight to the race distribution of stops made after dark. We propose a means of weakening this assumption by restricting the sample to stops made during the evening hours and controlling for clock time while estimating daylight/darkness contrasts in the race distribution of stopped drivers. We provide conditions under which our estimates are robust to a substantial nonreporting problem present in our data and in many other studies of racial profiling. We propose an approach to assess the sensitivity of our results to departures from our maintained assumptions. Finally, we apply our method to data from Oakland, California and find that in this example the data yield little evidence of racial profiling in traffic stops.”
http://www.rand.org/news/press.04/08.24.html
The paper is here:
http://www.rand.org/pubs/reprints/RP1253/
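The paper's core comparison can be sketched with invented counts. These numbers are illustrative only, not the Oakland data, and this skips the paper's clock-time controls and robustness machinery.

```python
# Toy stop counts within the evening inter-twilight window, split by
# whether it was light or dark at the time of the stop (invented numbers).
stops = {
    "daylight": {"black": 120, "other": 180},
    "darkness": {"black": 100, "other": 200},
}

def black_share(counts):
    """Share of stops in which the stopped driver was black."""
    return counts["black"] / (counts["black"] + counts["other"])

p_day = black_share(stops["daylight"])   # 0.40
p_dark = black_share(stops["darkness"])  # about 0.33
# Under the veil-of-darkness hypothesis, a materially higher black share
# of stops in daylight (when officers can see race before stopping) than
# in darkness is consistent with profiling; roughly equal shares are not.
```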
May 7, 2009 at 1:15 am
Barbar
It is not at all obvious that the primary goal of law enforcement is to simply maximize the efficiency of their searches.
1. More important than search efficiency is the actual crime rate. If minorities are more likely to be offenders than whites, then targeting minorities instead of searching randomly will increase search efficiency while introducing new incentives (we should expect more crime from non-minorities, less from minorities). Under very plausible assumptions, the overall crime rate will increase (since there are more non-minorities than minorities, and since the response of minorities is likely to be less elastic).
2. Under random search, the racial distribution of people caught by law enforcement mirrors the racial distribution of offenders in the general population. When search efficiency is increased, the distribution becomes additionally skewed towards minorities. The resulting statistics produced by law enforcement will then suggest that searches can be made even more efficient by increasing the amount of profiling. Rinse, wash, and repeat. The final outcome can be dramatically unfair from a racial point of view.
Ideas gleefully stolen from UChicago law prof Bernard Harcourt’s book “Against Prediction.”
May 7, 2009 at 9:39 am
michael webster
At first, I thought you were right and this was just some base rate error.
Now, I am not so sure.
We can easily get the data for those who were stopped, correlate race and crime, and get evidence for your conclusion:
” But the fact that they stop and search black motorists more often than whites is not evidence of racism, unless it can be shown that the proportion of stopped black motorists who are found to be committing a crime is smaller than the proportion of stopped white motorists.”
But the problem is that we cannot get this same ratio for the people who were not stopped. Only if we can conclude that our original sample was random can we make the latter inference.
The authors seem to foreclose that assumption by stating:
“While it is conceivable that African-American motorists are more likely to commit the types of traffic offenses that police use as pretexts for vehicle checks, traffic studies and police testimony suggest that blacks and whites are not distinguishable by their driving habits. An alternative explanation for the racial disparity in traffic searches is that race is one of the criteria police officers use in deciding whether to search cars. This explanation, known as “racial profiling,” is the basis of several recent lawsuits against state governments.”
So, what have I missed or misunderstood?
May 7, 2009 at 11:33 am
Sean
Barbar, in (1) you describe a dynamic response to a static optimal (race-)conditional search rate. In a static model, the reaction of white drivers to lower search rates is already factored into their (lower) equilibrium rate of criminality. If we are out of equilibrium or in a dynamic model, you must allow for the police to update their conditional search rates; it does not seem reasonable to assume white drivers can update but police cannot. (2) is just completely wrong. Given a fixed number of searches, if police increase their search rate of minorities and increase the number of arrests, the increase was warranted. If this leads to a further increase in minority searches and the overall number of arrests declines, then this second increase was inefficient unless the cops are racist. The type of self-fulfilling prophecy you envision does not wash.
There is a field experiment by John List that studies statistical vs. nefarious discrimination in the baseball card trading market; I can’t remember the citation. If I remember correctly, the punchline is that card traders do profile customers and offer discriminatory prices (e.g., women are asked to pay higher prices than men), but the discrimination is statistical, not nefarious (e.g., women are asked to pay higher prices because they will pay higher prices, not because card traders have something against women).
May 7, 2009 at 1:54 pm
Barbar
Sean, (2) is “completely wrong”? Wow.
Very simple model: 3% of black motorists are guilty, 1% of white motorists are guilty, these rates are completely unresponsive to search strategy, and there are no other cues other than race. The police can make a fixed number of searches. If they are interested in maximizing search efficiency, they will only search black people. The result is that 100% of people who get caught are black, while 0% of guilty white people get caught.
Yes, this model is simplified, but I think the point should be clear — the pursuit of justice cannot be entirely reduced to maximizing law enforcement “efficiency.” (Your response was essentially, “Yes it can, there is nothing else for law enforcement to consider.”) This is not entirely an academic issue either. The fact remains, the only search strategy that produces a caught criminal population that mirrors the actual criminal population is random search; anything else will skew the caught-criminal distribution away from reality.
As for (1), my point stands even when you incorporate the police response to changing driver patterns. Once again, a very simple model: under conditions of random search, 3% of blacks and 1% of whites are guilty, blacks are completely unresponsive and whites are somewhat responsive to search strategy. A shift to racial profiling will clearly increase the overall crime rate. Incorporating more realistic assumptions will not change this overall result.
To repeat, my point is simply that the goals of law enforcement cannot be reduced to maximizing arrest rates. Two other compelling goals are (a) minimizing the real crime rate, and (b) fairness (so that each criminal has an equal chance of getting caught). Maximizing arrest rates through efficient racial profiling is quite likely to conflict with both of these goals.
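Barbar's first toy model is easy to check numerically. The offense rates are from the comment above; the population size and search budget are my own filler.

```python
# Offense rates fixed at 3% (black) and 1% (white) regardless of search
# strategy; blacks are 20% of a population of 1,000; 100 searches total.
pop = {"black": 200, "white": 800}
rate = {"black": 0.03, "white": 0.01}
searches = 100

def expected_catches(share_black):
    """Expected catches when share_black of all searches go to black drivers."""
    alloc = {"black": searches * share_black,
             "white": searches * (1 - share_black)}
    return {g: alloc[g] * rate[g] for g in pop}

random_search = expected_catches(pop["black"] / sum(pop.values()))
profiled = expected_catches(1.0)

# Random search catches 1.4 offenders in expectation, and its racial mix
# (0.6 black vs 0.8 white) mirrors the offender population (about 43% black).
# Pure profiling catches 3.0, but 100% of those caught are black and no
# guilty white driver is ever caught.
```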
May 7, 2009 at 2:11 pm
Barbar
I should add that you may decide at the end of the day that the overall crime rate and fairness are less relevant considerations, trumped by more efficient policing. However, it would be nice if this conclusion were the result of some sort of argument, rather than taken as an axiom of the analysis (“OF COURSE the police should be trying to maximize arrest rates, I can’t imagine what else they would be doing”).
May 7, 2009 at 4:33 pm
Sean
I certainly concede your suggestion that maximizing (legitimate) arrests given resource constraints is not the only objective function you could assign to police, although many people would be amenable to it. Where we apparently differ is over what we consider to be “fair”; I was not being fair (!) to call your (2) wrong, because we were using different definitions. In your example with unresponsive criminals, it is certainly true that searching only minorities (if the number of minorities exceeds the number of stops) is the optimal outcome if what police are trying to do is maximize arrests. This is statistical rather than nefarious discrimination, so it is fair if what police are supposed to do is maximize arrests, and unfair if what they are supposed to do is make each non-criminal equally likely to be subjected to a search.
In a more plausible example, individuals assign an expected value to criminal activity, which is continuous and strictly decreasing in the likelihood of getting caught. Suppose people are identifiable as only two types (black and white), and further suppose that if the likelihood of getting caught is the same for each, blacks have a higher expected value from criminal activity on average (the distribution of value functions matters, but the analysis will go through for many distributions). If police want to maximize the number of arrests, they will increase black searches. This makes blacks less likely to commit a crime and whites more likely to commit a crime. Police keep increasing the proportion of blacks searched (thus decreasing white searches) until the conditional probability of discovering a black criminal is equal to the conditional probability of discovering a white criminal. This maximizes the number of arrests for a fixed number of searches (greater targeting of minorities does not increase the crime rate as you suggest in (1), it decreases it). I think what the literature suggests is that these conditional probabilities are equal when estimated from police data, and so discrimination is statistical rather than nefarious.
If police are supposed to minimize crime (by maximizing arrests), statistical discrimination is eminently fair. However, for the 97% of law-abiding minorities who are more likely to be pulled over because they are black, I can certainly understand why profiling would feel grossly unfair. And I can see how this feeling of unfairness can have social costs, so one might quite reasonably modify the police objective function with a cost that is increasing in the disproportion of minorities searched. If minorities commit more crimes under purely random searches you will still want to search them disproportionately, but less so than would optimize the number of arrests.
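Sean's adjustment story can be sketched as a simple tatonnement. The linear response functions and every parameter below are my own, chosen so that an interior equilibrium exists; nothing here comes from the cited literature.

```python
def offense_rate(base, sensitivity, p):
    # Offending falls linearly in the per-person probability of being
    # searched, floored at zero (an assumed functional form).
    return base * max(0.0, 1.0 - sensitivity * p)

def equilibrate(pops, bases, sens, searches, steps=500, lr=1.0):
    """Police repeatedly shift their search share toward whichever of two
    groups currently yields the higher hit rate; offenders respond each
    round. Returns (share of searches on group 0, final hit rates)."""
    share = pops[0] / sum(pops)  # start from random (population-share) search
    hits = [0.0, 0.0]
    for _ in range(steps):
        for i, s in enumerate((share, 1.0 - share)):
            p = searches * s / pops[i]        # per-person search probability
            hits[i] = offense_rate(bases[i], sens[i], p)
        share += lr * (hits[0] - hits[1])     # chase the higher hit rate
        share = min(max(share, 0.0), 1.0)
    return share, hits

# Group 0: 200 people, base rate 3%; group 1: 800 people, base rate 1%.
share, hits = equilibrate((200, 800), (0.03, 0.01), (3.0, 4.0), 100)
# The process settles where the two hit rates are equal: the smaller,
# higher-base-rate group ends up with about half of all searches.
```

This is the equalized-conditional-probability condition from the comment above: in the resting point, a search of either group is equally likely to find a criminal, even though per-capita search rates differ sharply.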
May 7, 2009 at 6:11 pm
Barbar
Sean wrote:
greater targeting of minorities does not increase the crime rate as you suggest in (1), it decreases it
The impact on the overall crime rate depends on (a) how responsive the two groups are to policing and (b) the relative size of the two groups. It is simply not true that greater targeting of minorities necessarily decreases the overall crime rate; as I said above, under very plausible assumptions, it increases it. (Minorities commit less crime but whites commit more.)
A slightly different example may help. Under random search, the white offense rate is 1% and the black offense rate is 3%; blacks are 20% of the population, so the overall offense rate is 1.4%. Under profiling, the offense rates of the two groups converge to the same value; if that value is under 1.4%, we have a decrease in crime, and if it is over 1.4%, then profiling has increased crime. If blacks are less responsive to police efforts than whites (say, because they have fewer outside employment opportunities), then we will have an increase in overall crime.
However, for the 97% of law-abiding minorities who are more likely to be pulled over because they are black, I can certainly understand why profiling would feel grossly unfair.
In addition to this, there is also the cost of the greater-than-warranted societal association between criminality and blackness. Back to the simple model again: 3% of blacks and 1% of whites are offenders, blacks are 20% of the population. Under this model, blacks are 43% of the offending population. If the police conduct random searches (80% white, 20% black), 43% of the people who get caught are black. If the police decide to increase their efficiency by targeting blacks, then this percentage will initially increase. If criminals are completely unresponsive to policing, then the percentage will go all the way up to 100%, and every known criminal will be black. If criminals are more responsive to policing and the group offense rates converge to, say, X% when 60% of searches are performed on blacks, then 60% of people who get caught will be black, even though 80% of offenders are white at this point!
60 can be replaced by any number greater than 20, to reflect whatever the equilibrium point is; and note that if X is higher than 1.4, overall crime has increased. Even if you decide not to maximize arrest rates, any policy of racial profiling still leads to blacks being over-represented among the people who get caught. You may end up with a prison population that looks nothing like the actual criminal population; and this may have costs that go beyond the irritation faced by innocent black drivers.
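The arithmetic in the last paragraph is worth writing out. The common offense rate X and the total number of searches below are arbitrary placeholders; the 20% and 60% figures are from the comment above.

```python
black_pop_share = 0.20     # blacks are 20% of the population
black_search_share = 0.60  # and, in this equilibrium, receive 60% of searches
X = 0.02                   # common offense rate after convergence (any X > 0)
total_searches = 1000

# Expected catches are proportional to searches times the offense rate,
# so once rates are equal the racial mix of catches mirrors the mix of
# searches, not the mix of offenders.
catches = {
    "black": black_search_share * total_searches * X,
    "white": (1 - black_search_share) * total_searches * X,
}
black_catch_share = catches["black"] / sum(catches.values())  # 0.60
# Offenders, by contrast, mirror the population once rates are equal:
black_offender_share = black_pop_share                        # 0.20
```

Note that X cancels out of the catch share entirely, which is why "60 can be replaced by any number greater than 20": the skew in who gets caught is driven purely by the search allocation.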
May 9, 2009 at 3:22 pm
bob
While it is often the case in mathematics that we say some theorem or technique has been ‘discovered’, one must remember that we do this out of humility, not righteousness. When one isolates a technique in order to pursue a purpose one is in fact designing and not simply discovering.
I lead with that because we should understand that probability theory was designed to facilitate making judgments when faced with limited information. In order to do this, the design of probability theory has been shaped by techniques such as generalization and objectification, which involve a design trade-off of efficiency vs. reliability, among other things (for example, an alternative technique would be so-called due diligence to get better information, which is more expensive but also more reliable).
When applied to people then, such techniques naturally objectify and generalize them in order to remove the biases that come from their individuality so that information can be compared and quantified in a standardized way; all of this is done so that we can then project properties upon those who we do not have information on to facilitate our judgments about them.
What this means when race is used as the property is left to the reader. I just hope that we do not confuse the utility of our intellectual tools with their moral neutrality.