Dispatch Priming and the Police Decision to Use Deadly Force

Paul L. Taylor | Police Quarterly

Abstract

Police shootings have become one of the most “visible and controversial” aspects of the criminal justice system. Yet, very little empirical effort has been devoted to understanding the underlying systemic vulnerabilities that likely contribute to these tragic outcomes. Using a randomized controlled experiment that incorporated a police firearms simulator and 306 active law enforcement officers, this study examined the effects of dispatch priming on an officer’s decision to use deadly force. The findings suggest that officers rely heavily on dispatched information in making the decision to pull the trigger when confronted with an ambiguously armed subject in a simulated environment. When the dispatched information was erroneous, it contributed to a significant increase in shooting errors. The results contribute to a broader understanding of officer decision-making within the context of police shootings and introduce the theoretical concepts of cognitive heuristics and human error to the research on police use of deadly force.

In Los Angeles County we have 5 to 15 [police] shootings in a year due to what we call perception issues. These have become a bigger problem in the last five or six years. These are also called “cell phone shootings.” Typically what happens is that a deputy has contact with an individual, and a short foot pursuit occurs. During that foot pursuit, the individual either makes an affirmative movement, such as a tossing motion, or produces something from their clothing that the officer mistakes for a weapon. The officer responds to this perceived threat by firing his weapon. After the shooting occurs, we discover a cell phone lying nearby or on the person’s body. The circumstances in which the shooting occurs (such as a “shots fired” call, armed robbery call, or “man with a gun” call) may provide context for the officer’s state of mind. Unfortunately, these shootings have been common for us over the last few years. (Police Executive Research Forum, 2012, p. 8)

- Los Angeles County Assistant Sheriff Cecil Rhambo

Introduction

The situation described by Sheriff Rhambo is not unique to Los Angeles County. On April 30, 2015, San Diego, California police officers were dispatched with information about a man brandishing a knife. Moments after the first officer arrived, he shot and killed Fridoon Rawshan Nehad, who approached him with a pen in his hand (Selby, Singleton, & Flosi, 2016). On December 12, 2016, Bakersfield, California police officers were dispatched with information about a man brandishing a revolver. In the end, 73-year-old Francisco Serna was shot and killed when he pulled a wooden crucifix from his pocket (Phippen, 2016). From these tragedies, an empirical question emerges: Does dispatching erroneous information significantly increase the likelihood for false-positive errors during potential police shootings?

According to Los Angeles Police Department (2018) self-report data, of the 211 shooting incidents between 2013 and 2017, 46% (n = 98) were initiated by a dispatched call for service, and 14% (n = 30) were classified as “perception only” shootings (p. 173). Fachner and Carter (2015) found similar numbers for the Philadelphia Police Department. Between 2007 and 2013, approximately 52% of the incidents that ended in a shooting were initiated by a dispatched call for service. Ten percent (n = 35) of the shooting cases examined (N = 385) were classified as “mistake of fact” shootings (p. 30).

Paraphrasing Bittner (1970), Peter Manning (1992) wrote, “[T]he core technology of the police is situated decision making with the potential for the application of violence” (p. 354). He later postulated that this core technology may be altered, in both intended and unintended ways, through the introduction of new technology (Manning, 2008). This is consistent with the literature on human error, which has found that the integration of technology into the work environment, even when it is implemented with the best of intentions, fundamentally changes the complexity of human decision-making and can impact outcomes in unanticipated ways (e.g., Rasmussen, 1986). While police dispatch technology is certainly not new, little to no research has looked at how this complex communication technology, with multiple human interface points, has modified the core proficiencies of the officers who must adapt it to their decision-making framework in the field.

Policing scholars have routinely called for research on police decision-making within the context of deadly force (e.g., Alpert & Smith, 1994; Klinger & Brunson, 2009; Reiss, 1980; Shane & Swenson, 2018; Zimring, 2017). Yet, with a few exceptions (e.g., Correll, Hudson, Guillermo, & Ma, 2014; Fridell & Binder, 1992; James, James, & Vila, 2016; Klinger, 2004; Pickering, 2016), very little work has gone into unpacking how officers, situated in the moment, decide whether or not to pull the trigger. This is somewhat surprising given the focus and controversy surrounding police shootings (Klinger, Rosenfeld, Isom, & Deckard, 2015; Zimring, 2017) and the relatively advanced state of decision-making and error research in other high-risk endeavors. In fact, the systematic study of error has been used as a vehicle for understanding workplace decision-making, professional reform, and improved outcomes in a number of other risk-laden occupational fields including medicine (Institute of Medicine, 2000), commercial aviation (Wiegmann et al., 2005), transportation (Green, 2017), and the military (Snook, 2002). James Doyle (2010) and a growing group of others (e.g., Hollway, 2014; Shane, 2013) have called for a similar lens to be applied to various aspects of the U.S. criminal justice system including police use of deadly force (e.g., Pickering & Klinger, 2016; Sherman, 2018).

Inspired by work in behavioral economics and cognitive psychology (e.g., Tversky & Kahneman, 1973), criminologists have begun to explore the role of heuristics and cognitive biases in criminal justice processes and crime causation (e.g., Bushway & Owens, 2013; Dhami & Ayton, 2001; Pogarsky, Roche, & Pickett, 2017; Wichard & Felson, 2016). This growing body of research has consistently demonstrated the relevance of this theoretical lens for explaining justice system outcomes and offender decision-making. Heuristics and cognitive biases may also be helpful for explaining and understanding the kind of decision-making required of officers who face rapidly unfolding and potentially life-threatening situations. Unfortunately, the possible influences of heuristics and biases, other than implicit racial bias (e.g., Chaires, 2015; Correll et al., 2014), have not been systematically applied to research on the police decision to use deadly force. The research described in this article is intended to help fill this gap. Using a randomized controlled experiment, which incorporated an interactive firearms training simulator and 306 active law enforcement officers from 18 agencies, this study explores the effects of dispatched information on an officer’s decision to shoot.

Heuristics and Situated Decision-Making

Experience in any practical domain allows practitioners to develop patterns of key information within their realm of expertise. These patterns or mental models permit them to quickly evaluate situations and to act with less than perfect information by systematically focusing on what is important while ignoring, often at a subconscious level, what is not (Klein, 2011). The police are no different (e.g., Stalans & Finn, 1995), and the classic observational studies of the police are replete with references to this kind of rapid pattern assessment followed by decisive and often consequential action. James Q. Wilson (1968) wrote, “[O]fficers are routinely called upon to ‘prejudge’ persons by making quick decisions about what their behavior has been in the past or is likely to be in the future” (pp. 38–39). He noted that officers seemed to be particularly attuned to two types of cues: “those that signal danger and those that signal impropriety.” In a similar vein, Jerome Skolnick (1966) postulated:

The policeman, because his work requires him to be occupied continually with potential violence, develops a perceptual shorthand to identify certain kinds of people as symbolic assailants, that is, as persons who use gesture, language, and attire that the policeman has come to recognize as a prelude to violence. (p. 45)

William Muir (1977) called this process “pigeonholing” and wrote:

To anticipate what was going to happen, policemen developed a sense for the patterns in human affairs. They formed concepts, or classifications, which helped them to assimilate and distinguish between discrete persons and events. Concepts were attended by visual procedures by which policemen processed the details of the moment into these abstractions. (p. 153)

While the policing literature lacks a cohesive term for the type of decision-making described earlier, the psychological and behavioral economics literatures would call these mental shortcuts heuristics. Daniel Kahneman (2011) defines heuristics as “simple [mental] procedures that help find adequate, though often imperfect, answers to difficult questions” (p. 98).

A number of heuristic concepts are particularly relevant to this study. The first is the concept of priming. Priming is the notion that exposure to an earlier stimulus can influence the response to a later stimulus (Eitam & Higgins, 2010). To meet the definition of a prime, this influence must occur outside either the person’s awareness of the prime itself or any intention to use the prime to inform later judgment or action (Loersch & Payne, 2011). Molden (2014) writes, “[I]t is now virtually axiomatic among social psychologists that the mere exposure to socially relevant stimuli can facilitate, or prime, a host of impressions, judgments, goals, and actions, often even outside of people’s intentions or awareness” (p. 3). In the present context, we might hypothesize that dispatched information about the presence of a weapon before an officer arrives on scene may influence an officer’s decision-making and subsequent actions in the field. If the dispatched information is incorrect, this may significantly increase the likelihood for an error without the officer realizing the increased risk.

Dispatch priming is likely mediated through the availability heuristic and may be strengthened through confirmation bias. Research on the availability heuristic has shown that people faced with a difficult decision tend to favor the first thought that comes to mind. By giving greater credence to available information, as opposed to that which is not already known, people will overestimate the accuracy of the information at hand (Tversky & Kahneman, 1973). This is particularly true during novel situations in which people have not already developed a more effective patterned response. Given the relative rarity of police shootings (Alpert & Fridell, 1992; Geller & Scott, 1992), the inherent complexity of deadly force decisions (Artwohl & Christensen, 1997; Klinger & Brunson, 2009), and the time constraints under which they can occur (Blair et al., 2011), even very experienced officers may be understandably prone to reliance on the most readily available information. In the case of false-positive shooting errors, the availability heuristic is likely compounded by confirmation bias. The concept of confirmation bias holds that, in the face of uncertainty, people tend to cling to their initial interpretation of an unfolding event, even when presented with better data, and may selectively pick from the emerging information only that which confirms their initial understanding (Greenwald, Pratkanis, Leippe, & Baumgardner, 1986). When dispatched to a distal call, an officer’s initial understanding of the incident will be formed almost entirely by the information received from dispatch.
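
To make the interplay of priming, availability, and confirmation concrete, the following sketch treats dispatched information as a Bayesian prior over whether a subject is armed, and an ambiguous gesture as a weakly diagnostic cue. The probabilities are illustrative assumptions chosen for exposition, not estimates from this study.

```python
# Minimal sketch: dispatch information as a prior over "subject is armed".
# All probabilities below are illustrative assumptions, not study estimates.

def posterior_armed(prior_armed: float, p_cue_given_armed: float,
                    p_cue_given_unarmed: float) -> float:
    """Posterior P(armed | ambiguous cue) via Bayes' rule."""
    joint_armed = prior_armed * p_cue_given_armed
    joint_unarmed = (1 - prior_armed) * p_cue_given_unarmed
    return joint_armed / (joint_armed + joint_unarmed)

# An ambiguous cue (rapidly producing an object from a pocket) is only
# weakly diagnostic: armed and unarmed subjects produce objects at
# similar rates.
P_CUE_ARMED, P_CUE_UNARMED = 0.70, 0.50

for label, prior in [("no dispatch information", 0.10),
                     ("dispatched 'man with a gun'", 0.80)]:
    print(f"{label}: P(armed | cue) = "
          f"{posterior_armed(prior, P_CUE_ARMED, P_CUE_UNARMED):.2f}")
# Output: roughly 0.13 without the prime versus 0.85 with it. The weak
# cue barely moves the estimate; the dispatched prior dominates.
```

On this account, the same gesture that would read as innocuous during a routine contact can read as an imminent threat after a “man with a gun” dispatch.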

Radio dispatch and computer-aided dispatch systems provide modern law enforcement officers with a substantial amount of information before they ever reach the physical environment, people, or emergency they are responding to. Conceptually, this information should decrease response times and increase an officer’s ability to coordinate a safe outcome (Rubinstein, 1973). Indeed, Fyfe (1989) argued, the police should use preevent information, like what they receive from dispatch, to help them “diagnose” and prepare for an evolving situation they are responding to and thereby avoid the need for the “split-second” decision-making so often associated with potential police deadly force encounters. This assumes, however, that preevent information is correct, and there are a number of studies that call this assumption into question (e.g., Gilsinan, 1989; Manning, 2008; Scharf & Binder, 1983). Regardless, the integration of technology, even when it is implemented with the best of intentions, fundamentally changes the complexity of the human decision space and can impact outcomes in unanticipated ways (Rasmussen, 1986). If an officer, either implicitly or explicitly, continues to rely on dispatched information in the face of more salient contradictory cues or, even worse, affirmative miscues—for example, someone rapidly producing an object from their pocket—it could create a situation in which an officer is prone to make an error he or she would not otherwise make.

Human Error

Heuristics are cognitive shortcuts that allow people to quickly make decisions without gathering all of the relevant facts. They rely on a few key pieces of information to produce a rapid response rather than waiting for slower, more deliberative cognitive processes to weigh all possible options. On the upside, expertise can be defined as the optimization of heuristic decision-making (e.g., Schmidt & Lee, 2014). On the downside, because they shortcut much of the available information to reach a suitable answer quickly, heuristics can and regularly do result in error. If a person attends to the wrong information and ignores or mistakenly interprets the right information, heuristics can lead to systematic and predictable error (Reason, 1990). Similarly situated people process information in similar ways, and circumstances that produce human error tend to produce repeated errors over time and across people (Reason, 1990; Simon, 1969).

Indeed, Woods, Dekker, Cook, Johannesen, and Sarter (2010) argue that expertise and error are two sides of the same coin. The same decision rule, heuristic or otherwise, may, under one set of circumstances, result in satisfactory or even expert performance, but, under another set of circumstances, it can result in error. Woods would call this a “brittle” decision rule. If an officer were to employ an implicit heuristic decision rule that relied on dispatched information about the presence of a weapon and then subsequently encountered an ambiguously armed person who rapidly produced a weapon, the ability to anticipate the attack and respond accordingly could easily be mistaken for expertise. The brittleness of using dispatched information as a salient cue in the decision to use deadly force only becomes visible if and when an officer erroneously shoots a person who produces something other than a weapon.
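
A short simulation, offered as a sketch with assumed parameters rather than values from this study, illustrates how such a rule can masquerade as expertise. The rule below shoots whenever dispatch reported a weapon and the subject rapidly produces an object; its error rate is driven entirely by the accuracy of the dispatched information.

```python
import random

# Minimal sketch of a "brittle" decision rule (Woods et al., 2010):
# shoot when dispatch reported a weapon AND the subject rapidly
# produces an object. The 50% base rate and the accuracy levels below
# are assumptions made for illustration.

def error_rate(dispatch_accuracy: float, trials: int = 100_000,
               seed: int = 1) -> float:
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        armed = rng.random() < 0.5                  # ground truth
        # Dispatch reports the truth with probability dispatch_accuracy.
        report_armed = armed if rng.random() < dispatch_accuracy else not armed
        produces_object = True                      # ambiguous cue, always present
        shoot = report_armed and produces_object    # the brittle rule
        errors += shoot != armed                    # false positive or negative
    return errors / trials

for acc in (1.0, 0.9, 0.7):
    print(f"dispatch accuracy {acc:.0%} -> shooting error rate {error_rate(acc):.1%}")
# With perfect dispatch the rule looks like expert anticipation (0% error);
# its brittleness only surfaces as dispatch accuracy degrades.
```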

James Reason (1990) argues that human error is relatively rare given its possible manifestations. Where it does appear, it is often accompanied by a number of common situational factors (Woods et al., 2010). Two of these factors, goal conflict and the observability of a problem, deserve some discussion given their relevance to police shooting errors.

Goal conflict refers to situations in which a person must decide between competing goals to navigate a successful outcome (Woods et al., 2010). Officers face this situation when they are dispatched to a call with information that a person may be armed or dangerous and then subsequently encounter an ambiguously armed subject—for example, a person with their hands in their pockets. Officers must weigh the possibility of making a false-positive error—shooting someone who is neither armed nor dangerous—if they act too quickly against the possibility of making a false-negative error—not shooting someone who is armed and dangerous—by failing to act or acting too slowly (Scharf & Binder, 1983). In this double-bind situation, an officer who attempts to avoid one type of error may be more likely to make the other. Muir (1977) argues that officers will almost always err in the direction of personal safety, though this has not been empirically tested. Regardless, where goal conflicts exist, human error is more likely (Woods et al., 2010).
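
The double bind can also be expressed as a simple expected-cost calculation: when a false negative is judged costlier than a false positive, the belief threshold at which shooting minimizes expected cost drops. The sketch below uses assumed, illustrative costs; it is not a model estimated in this study.

```python
# Decision-theoretic sketch of the double bind. Shooting errs only if the
# subject is unarmed (false positive); holding fire errs only if the
# subject is armed (false negative). Costs are illustrative assumptions.

def shoot_threshold(cost_fp: float, cost_fn: float) -> float:
    """Belief P(armed) above which shooting minimizes expected cost.
    Shoot iff (1 - p) * cost_fp < p * cost_fn, i.e.
    p > cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

print(shoot_threshold(1.0, 1.0))  # symmetric costs         -> 0.50
print(shoot_threshold(1.0, 4.0))  # false negative 4x worse -> 0.20
# If officers weight personal safety heavily (Muir, 1977), the threshold
# falls, and a dispatch-primed belief that a weapon is present is more
# likely to cross it, producing false-positive errors.
```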

The observability of the problem refers to the clarity with which a person can see and understand the nature of the problem they are dealing with. A scenario-based study conducted at the Federal Law Enforcement Training Center found that police officers did very well at handling situations in which it was obvious no force would be necessary. They also did very well at handling situations in which it was obvious a particular form of force would be necessary—for example, someone actively harming others with a deadly weapon. Officers tended to struggle with scenarios in which the level of force needed to resolve the situation was not immediately obvious or changed during the course of the scenario (Norris & Wollert, 2011). When people can see a problem, they are fairly adept at solving the problem and avoiding error. An opaque problem space can significantly increase the risk of error (Reason, 1990). Ambiguously armed subjects on high-risk calls—for example, an armed robbery—and subjects who are verifiably armed but do not present an immediate threat to an officer—for example, armed suicidal subjects—present opaque problems for responding officers. When information about a subject’s intent or capability to cause harm is lacking or slow to emerge, an officer may be more likely to rely on the immediately available information dispatch has already provided. While this may reduce the cognitive strain of an opaque decision environment, it also increases the likelihood for error.

Thus, we arrive at the theoretical underpinnings for the present study. Potential police shootings are highly complex events in which an involved officer must contend with goal conflict, an opaque problem space, time constraints, and practical inexperience along with a whole host of other situational and psychological factors inherent in a real-world life or death encounter. Dispatched information increases the complexity of the decision by providing officers with additional information they must sift through to successfully navigate the incident. Police officers, much like the rest of us, use heuristics to simplify complex decisions by attending to key pieces of information and ignoring the rest. This satisficing behavior can optimize performance in complex decision environments, but it can also result in systematic error if a person attends to the wrong information and ignores the right information as it emerges. Given the complexity of potential deadly force encounters and the nature of heuristic decision-making, officers who are primed with salient information from dispatch are likely to construct an interpretation of the evolving event based on the most readily available information (i.e., the availability heuristic). Once an officer has constructed an interpretation of the event, they may be slow to relinquish it and may selectively interpret emerging information in light of their previously constructed understanding (i.e., confirmation bias).

If an officer does base a shoot/no-shoot decision on dispatched information, it will prove to be a “brittle” decision rule (Woods et al., 2010). That is, it will allow the officer to optimize his or her response when the information is correct, but it will also significantly increase the risk for error when the information is incorrect. Depending on the nature of the relationship between the dispatched information and the reality of the situation, that error could take the form of a “cell phone shooting” (i.e., a false-positive error) or an inadequate response to a legitimate threat (i.e., a false-negative error). To test this theory, two hypotheses were developed:

H1: Dispatching incorrect information about what a subject is holding will significantly increase the likelihood for a shooting error during a shoot/no-shoot simulation.

H2: Dispatching correct information about what a subject is holding will significantly decrease the likelihood for a shooting error during a shoot/no-shoot simulation.
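
One conventional way to evaluate hypotheses of this form is a chi-square test of independence on a dispatch-condition-by-outcome table. The sketch below uses hypothetical counts invented for illustration; it does not reproduce this study’s data, design, or analysis.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table for H1: rows are dispatch conditions, columns
# are scenario outcomes. These counts are placeholders, not study data.
#                      error  no error
incorrect_info_row = [   40,     60]
no_info_row        = [   10,     90]

chi2, p_value, dof, expected = chi2_contingency([incorrect_info_row,
                                                 no_info_row])
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# A small p-value would indicate that erroneous dispatch information is
# associated with a higher rate of shooting errors, consistent with H1;
# H2 could be examined the same way with a correct-information condition.
```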
