

Effective Advocacy—Taking It to the Next Level

Edward P. Schwartz, Ph.D.

DecisionQuest

800 South St Ste 190 Waltham, MA 02453 (781) 891-8300 [email protected]


Edward P. Schwartz, PhD, MSL, is a consultant in DecisionQuest’s Boston office. Dr. Schwartz provides quantitative and qualitative analysis of pretrial jury behavior—from interviews and focus group studies to mock trials and large-scale statistical analyses—giving clients feedback on case themes, strategies, evidence, witnesses, and presentation style. He also consults on case evaluation, advises on trial strategy, and assists with jury selection as well as post-verdict juror surveys and interviews. A nationally recognized jury consultant with strong analytical and market research skills, Dr. Schwartz is noted for his ability to blend the strategic focus of game theory and decision theory with the real-world insights of social psychology to gain a complete picture of how people absorb, analyze, and process information. He has conducted jury behavior research, aided with witness preparation, consulted on trial strategy, and assisted with jury selection in dozens of cases, including several high-profile criminal trials. Dr. Schwartz is regularly asked to provide insight on jury trials in the news, having been interviewed by CNN, the Associated Press, the New York Times, and countless regional media outlets.


Effective Advocacy—Taking It to the Next Level ■ Schwartz ■ 3

Effective Advocacy—Taking It to the Next Level

Table of Contents

I. “He Said What?”: Deception Detection and Employment Litigation—Part I: The Gullible Juror .......... 5
II. “He Said What?”: Deception Detection and Employment Litigation—Part II: The Fallible Juror .......... 14
III. Less Is More? Detecting Lies in Veiled Witnesses .......... 21



I. “He Said What?”: Deception Detection and Employment Litigation—Part I: The Gullible Juror


Sometimes people lie. And yes, sometimes people lie under oath in a court of law. Despite having taken an oath to “tell the truth and nothing but the truth,” witnesses aren’t always honest and forthcoming – and jurors understand this. The court does too, as evidenced by jury instructions regarding judging the credibility of witness testimony. To wit, consider this language from the Illinois Model Jury Instructions regarding witness credibility:

“You are the only judges of the credibility of the witnesses. You will decide the weight to be given to the testimony of each of them. In evaluating the credibility of a witness, you may consider that witness' ability and opportunity to observe, memory, manner, interest, bias, qualifications, experience, and any previous inconsistent statement or act by the witness concerning an issue important to the case.”1

1 A copy of these model instructions can be found at http://www.state.il.us/court/circuitcourt/civiljuryinstructions/1.00.pdf

The temptation to lie in court in order to protect oneself has always been a source of concern and the subject of judicial notice. Historically, the courts were very worried about placing a criminal defendant’s soul in peril by asking him to swear to God to tell the truth in court. Since most crimes were capital in the Middle Ages and well into the Enlightenment, and most convictions were foregone conclusions, the court anticipated that the criminal defendant would soon lose his life. That fact alone didn’t seem to particularly bother anyone; however, the authorities did not want the defendant to die having just sinned against God by lying under oath. As such, until the 19th century, criminal defendants in the British common law system were not permitted to testify under oath at trial. As Akhil Amar succinctly states in his recent book, America’s Unwritten Constitution: The Precedents and Principles We Live By,

“If he were allowed to testify, a guilty defendant might of course perjure himself in an effort to avoid conviction. At the Founding, many believed that lying under oath was an especially grievous offense against man and God…. Alas, a liar might lose his soul even if he saved his skin.”2

2 From Chapter 3, as excerpted in Amar, Akhil Reed. “America’s Lived Constitution,” 120 Yale Law Journal 1745 (2011).

6 ■ Women in the Law ■ February 2017

The prohibition against self-interested testimony under oath was not limited to criminal defendants. Until the mid-19th century, interested parties to civil litigation were similarly precluded from testifying under oath.3 As Blackstone noted,

“All witness, of whatever religion or country, that have the use of their reason, are to be received and examined, except such as are infamous or such as are interested in the event of the cause.”4

So, while the adversarial system that characterizes the Common Law tradition is touted as an effective method of truth revelation, everyone has always understood that interested parties would sometimes – if not often – lie in court. This is not to say that all cases are created equal and that all litigants have equal incentive or opportunity to effectively misrepresent facts in court. Many types of disputes revolve around long, ongoing relationships with extensive “paper trails.” As such, it is difficult for a witness to testify to certain facts that are simply inconsistent with too much evidence to the contrary. It is hard to argue against accounting statements, signed contracts or letters written in one’s own hand.

Other kinds of disputes involve more interpretive questions than factual ones. Many torts cases require the jury to evaluate what constitutes “reasonable care,” even when there is little disagreement among the parties about who did what when. While an expert might lie about his opinion regarding what qualifies as reasonable care, this presents the jury with a different type of credibility dilemma than the possibility of a litigant lying on the stand. Most experts build their reputations and their practices on being reliable, credible witnesses. So, attorneys more often shop for an expert who honestly believes the attorney’s client is in the right than for an expert who is willing to testify contrary to her own honest opinion.

There are cases, however, in which dispositive issues revolve around whose version of a key event most resonates with the jury. That is, the plaintiff says one thing happened and the defense counters with a different story. The jury is asked to resolve a “he said – she said” dispute.5 Employment litigation, especially that involving an allegation of discrimination, tends to involve this kind of dispute. On the one hand, the dissatisfied employee (or former employee) claims that a supervisor or fellow employee made a particular statement or behaved in a particular way. On the other, the supervisor or human resources manager counters that either the statement/behavior never took place or was completely misinterpreted by the complaining party. The jury is faced with the unenviable task of determining who is telling the truth – and who is lying under oath.

This article (first in a two-part series)6 is devoted to reviewing what we know about people’s tendencies to lie – both generally and about issues related to the workplace – and about our own abilities to correctly identify who is lying to us and who is telling the truth. Using these results as a foundation, I will discuss along the way how issues of deception detection should be handled in employment litigation. There are several key lessons here for human resources professionals and attorneys working in the employment litigation arena.

First and foremost, human beings are supremely bad at differentiating true statements from lies. Over hundreds of studies, conducted in a variety of settings over decades of research, the primary, and remarkably robust, result is that we, as a species, do little better than flipping a coin when it comes to detecting deception. The average success rate at telling truth from fiction is about 54%.7

The second key finding from the experimental research is that we all think we are much better at discriminating honesty from lies than we really are.8 This result has profound implications on at least three fronts. First, those in human resources, or who manage others in the workplace, need to be cognizant of the very real possibility that they make regular mistakes when evaluating the veracity of claims made by employees under their purview. Second, attorneys should not presume that they can correctly ascertain whether a friendly witness is being completely forthcoming and honest with them in preparation for trial. Finally, litigators should never rely on jurors’ abilities to correctly sort out a “he said – she said” dispute to win a case. There are ways to increase the likelihood that the jury will correctly detect truth and deception, but the case must have a strong foundation in other areas.

The final empirical result from the literature of interest to us here – and the focus of the second article in the series – is that people regularly look to exactly the wrong indices of deception. We have all heard someone ask that another person “look me in the eye and say that.” It turns out that liars typically do a pretty good job of controlling their facial expressions, while truth-tellers pay less attention to the emotional expressions associated with their speech.9 As such, focus on a speaker’s face during message transmission actually reduces a listener’s ability to tell whether she is being told the truth or lied to.10

Consider a typical witness stand – a low chair behind a tall screen, so that little more than the witness’s face is visible. This is a terrible recipe for accurate deception detection by jurors. There are strategies, however, that can be used to increase the likelihood that jurors will recognize the veracity of an honest witness, and others that can improve the detection of deceptive testimony. While there is some mention of such concerns below, this well-studied and nuanced topic will primarily be addressed in the second article.

3 Ibid at 1746. Maine became the first American state to permit criminal defendant sworn testimony in 1864. Such testimony under oath was not permitted in Federal Court until 1878. Georgia was the final state to permit the practice, sometime shortly after the turn of the 20th century.
4 3 Blackstone, Commentaries on the Laws of England, 369 (1769).
5 The genders of the pronouns here are purely arbitrary. The point is only that two interested fact witnesses will testify to different versions of events.
6 The companion piece will appear in an upcoming issue of HR Advisor.
7 Bond, C. F., & DePaulo, B. M. “Accuracy of deception judgments.” 10 Personality and Social Psychology Review, 214-234 (2006).
8 See Vrij, A., Granhag, P. A., & Porter, S. “Pitfalls and Opportunities in Nonverbal and Verbal Lie Detection.” 11 Psychological Science in the Public Interest, 89–121 (2010).
9 See Vrij, A., Granhag, P. A., & Porter, S. (2010).

I. The Randomness of Deception Detection

Over the years, psychologists and sociologists have conducted hundreds of studies aimed at exploring the ability of humans to distinguish honest messages from deceptive ones. It is important at this point to emphasize that, for our purposes here, a deceptive message is one that the bearer of the message (be it a job applicant, significant other, business associate or witness) believes to be false, but is delivered in a way intended to convince the recipient of the message that it is true. A truthful message, by contrast, is believed to be true by the speaker, who similarly attempts to convince the recipient that it is true. As such, this review does not cover mistakes, mis-remembrances, uncertainty on the speaker’s part, exaggerations or expressions of overconfidence. All of these other forms of “falsehoods” are, of course, very important, both in everyday business dealings and litigation; they are, however, beyond the scope of this article.

10 Warren, G, Schertler, E and Bull, P., “Detecting Deception from Emotional and Unemotional Cues,” 33 J Nonverbal Behav 59–69 (2009) 11 Bond, C. F., & DePaulo, B. M. (2006). 12 It should be noted that the standard research protocol for deception detection studies is to expose message recipients to ½ lies and ½ true statements. For reasons outlined below having to do with truth bias, accuracy would vary in predictable ways when the ratio of lies to truths is moved away from this 50-50 split.
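Footnote 12’s caveat – that measured accuracy would move in predictable ways if the lie/truth ratio departed from the standard 50-50 design – can be made concrete with a little arithmetic. The sketch below is purely illustrative: it simply holds the per-statement hit rates at the meta-analytic averages discussed in this article and varies the share of lies.

```python
# Hypothetical sketch: how overall accuracy would shift with the share of
# lies, if judges kept the truth-biased hit rates reported in the
# meta-analyses cited in this article.

HIT_TRUTH = 0.67   # true statements correctly judged true
HIT_LIE = 0.41     # lies correctly judged false

for p_lie in (0.1, 0.3, 0.5, 0.7, 0.9):
    # Overall accuracy is the mix-weighted average of the two hit rates.
    acc = (1 - p_lie) * HIT_TRUTH + p_lie * HIT_LIE
    print(f"share of lies {p_lie:.0%} -> overall accuracy {acc:.1%}")
```

A truth-biased audience looks more accurate the more honest its environment is, and worse than a coin flip once lies predominate; the familiar 54% baseline describes only the balanced laboratory setting.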

The most common method of determining a baseline accuracy for human deception detection is to aggregate results across as many comparable studies as possible in what is referred to as a meta-analysis. It is, in short, an analysis of pre-existing analyses. Over the years, Bella DePaulo, of the University of Virginia, has contributed as much as anyone in the field to this question. A 2006 study11 aggregates results over more than 200 studies and finds that humans correctly identify the veracity of a message approximately 54% of the time.12 That is, if a person chose to completely ignore everything about the content of the message and the circumstances surrounding its delivery, and instead flip a coin to distinguish truths from lies, that person would perform just about as well as the average participant in any of the included studies, all of whom were trying their best to correctly ascertain whether they were being told the truth or being lied to. This is a rather bleak finding, and one that surprises many people.

These meta results are quite robust to variations in how the data are aggregated. In the same year, Aamodt and Custer conducted their own meta-analysis of 108 studies, paying special attention to who constituted the respondent pool. They separated studies that used college students (a very popular source of cheap subjects for psychology studies) from those that used the more general adult population. A third category of studies involved “professional lie detectors,” defined as people trained to root out deception as part of their jobs. This group included police officers, detectives, judges and psychologists. This separation did not matter for success at deception detection. All three groups performed about the same, suggesting that the professionals were so in name only.13

13 Aamodt, M. G., & Custer, H. “Who can best catch a liar? A meta-analysis of individual differences in detecting deception.” 15 The Forensic Examiner, 6-11 (2006). The professional lie detectors actually out-performed the laypersons by one percentage point, 55% to 54%, a difference that was not statistically significant. As a point of reference, the Aamodt and Custer study aggregated over 14,000 observations.

A. Trusting Souls: Truth Bias in Veracity Evaluation

One of the major contributors to poor performance in these deception detection studies is the propensity of humans to believe what they are told. That is, we have a truth bias.14 As mentioned above, these studies typically involved half true statements and half falsehoods. Respondents, however, identified statements as true approximately two-thirds of the time.15 This is an interesting finding, in and of itself, for what it might tell us about the human condition. One hypothesis that has been put forth for truth bias is that, in our everyday lives, we mostly encounter people who are honest with us. As such, a person is inclined to believe most statements (without tangible reason to do otherwise), even if primed to be suspicious by the revelation that he is participating in a deception detection study. Another hypothesis behind the truth bias is that we are socialized to be trusting of one another, through parenting, schooling, religious education and social norms. As such, regardless of whether it is rational to believe that most statements are true, an average person does not wish to be perceived as overly cynical and suspicious.16

Truth bias is a double-edged sword when it comes to litigation. On the one hand, it is comforting to know that jurors generally enter the courtroom inclined to believe what they are about to hear, especially when one is convinced that truth is on one’s side. On the other hand, if one anticipates that either counsel or witnesses for the other side will attempt to mislead jurors with false statements, it is somewhat disconcerting to know how undiscerning we humans can be. The literature is fairly silent regarding what people do when confronted with contradictory statements, such that the listener must conclude that at least one party is lying. Given the robustness of truth bias, one might imagine that many jurors attempt to interpret seemingly incompatible statements in ways so as to maximize the inherent truth between them. Examples of this are often seen in mock trials conducted in actual cases. Some mock jurors are so averse to calling someone a liar that they twist witness statements quite dramatically so as to be able to believe that both parties are essentially telling the truth – or at least trying to do so. A common juror reaction is to interpret a false statement as an error, rather than a lie. As a result of this natural tendency of jurors, trial lawyers who are overly aggressive in naming a lie or labeling a witness a liar often experience a hostile reaction from jurors.

A litigator in an employment dispute would do well to put forth a theory of liability that does not rely on jurors concluding that the other party is an out-and-out liar. When possible, employ language that makes it clear that what matters is the falsity of the witness’s statement, not the motivation for it being false. For example: “Regardless of whether Mr. Smith believes that he was excluded from the meeting on July 8th, the evidence and testimony of coworkers strongly suggests that he received an email inviting him to the meeting, followed by a reminder two days later…”

14 This term was first coined in McCornack, S. A., & Parks, M. R. “Deception detection and relationship development: The other side of trust.” In M. L. McLaughlin (Ed.), 9 Communication Yearbook, 377-389 (Beverly Hills, CA: Sage, 1986).
15 See the various meta-analyses, all of which report similar percentages of truth identification: Aamodt, M. G., & Custer, H. (2006); Levine, T. R., Park, H. S., & McCornack, S. A. “Accuracy in detecting truths and lies: Documenting the ‘veracity effect.’” 66 Communication Monographs, 125-144 (1999); and Bond, C. F., & DePaulo, B. M. (2006).
16 Levine, T. R., Park, H. S., & McCornack, S. A. (1999) at 129.

B. Deny, Deny, Deny: Lie Bias Exists as Well

While truth bias is a very robust result in meta-analyses of deception detection, its prevalence does depend on the type of message being received. This proves particularly relevant in the context of witness testimony at trial. Vrij and Baxter examined whether truth bias showed up as often in response to denials of alleged behavior as it did in response to assertions of facts.17 The authors discovered that while subjects exhibited truth bias as expected in response to elaborations of facts and behaviors, they actually exhibited lie bias in response to denials. That is, people tend to assume that assertions are true and denials are false.

Unlike the results that we have reviewed so far, this result would seem to have asymmetric implications for employment litigation. Typically, the plaintiff in such litigation is asserting a positive proposition: “Here is what my boss did to me.” By contrast, the representatives of the defendant company are put in the position of issuing denials: “No, that is not what happened.” Or, “I never said that.” The Vrij and Baxter study would suggest that jurors are more likely to believe plaintiffs in this scenario than they should and are more likely to be suspicious of company representatives than they should. That is, this particular combination of truth bias and lie bias works to the systematic advantage of plaintiffs in employment lawsuits, especially those alleging discriminatory behavior.

Such suits are rarely one-sided, however. Most plaintiffs in such suits have their own set of issues. There is usually a history of difficult interactions with coworkers and/or clients. When the employer asserts that the employee’s behavior on the job contributed to his negative performance evaluation, the plaintiff may be forced to deny this at trial. The resulting denials fall on a skeptical audience. When these tables are turned, the lie biases of jurors work to the advantage of the defense and the disadvantage of the plaintiff. These results suggest that defense counsel in such suits should take every opportunity to force the plaintiff to issue denials on the witness stand.

17 Vrij, A., and M. Baxter, “Accuracy and Confidence in Detecting Truths and Lies in Elaborations and Denials: Truth Bias, Lie Bias and Individual Differences,” 7 Expert Evidence 25-36 (1999).

C. Truth Bias and the Incentive to Lie

The fact that humans seem to be socialized to exhibit truth bias means that their accuracy of veracity detection depends very heavily on whether they are being told the truth or being lied to. Suppose that a person were completely undiscerning with respect to messages he received and chose to believe everything he heard. He would correctly identify every truth that was told to him. He would be considered statistically to be an excellent truth detector. On the other hand, he would never correctly identify deception. He would be the world’s worst lie detector.18 Regular folk aren’t quite as gullible as our hypothetical example, but we do tend to identify statements as true about two-thirds of the time in studies where the actual population of messages only contains one-half true statements. The result is that subjects in these studies correctly identify about 67% of true statements as being true. That is, they call an honest speaker a liar only about one-third of the time. On the other hand, these same subjects spot a lie only about 41% of the time.19 This means that a liar successfully convinces his audience that he is telling the truth well over half the time. We would all do substantially better at detecting lies if we ignored what we saw and heard and flipped coins instead!20

It is not uncommon for an attorney to express the conviction that a key witness for the other side is a big fat liar. The attorney seems incredulous that anyone buys what this person is selling. Many attorneys seem confident that jurors will see right through the liar’s deception – how could they not? Well, the cautionary tale from the hundreds of studies conducted so far about deception detection is that many, many jurors will not be able to tell when they are being lied to. Keep in mind, however, that most of these studies involve statements made by strangers to strangers, with no context or other cues about the veracity of the statements being made. One job of the trial team is to provide the context and supporting materials to facilitate the ability of jurors to distinguish fact from fiction. The lesson here should not be that jurors can’t tell who is lying to them, but rather that they can’t be relied upon to get there without help. One goal of these articles is to provide guidance about how to give jurors the help they need to believe the truth-tellers and doubt the liars.

When an employment relationship sours, it creates an unpleasant circumstance for employer and employee alike. It is only natural to speculate on what could have been handled differently along the way to avoid the situation that ultimately led to litigation. It is not uncommon for a human resource professional or longtime manager to express dismay at having trusted a colleague or employee who turned out later to be unreliable, unscrupulous or simply dishonest.

18 For those of you who are familiar with probability theory, these can be thought of as Type I and Type II errors. The hypothetical believer discussed here would reveal no false positives but would be very prone to false negatives.
19 Levine, T. R., Park, H. S., & McCornack, S. A. (1999).
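The arithmetic behind these figures is worth making explicit. A minimal sketch in Python, purely illustrative: the 67% and 41% hit rates and the 50-50 truth/lie design are taken from the studies cited above.

```python
# Illustrative arithmetic only: the hit rates are the meta-analytic
# averages cited in the text, and the 50/50 truth/lie mix is the
# standard experimental design.

P_TRUE = 0.5        # half of the statements shown to subjects are true
HIT_TRUTH = 0.67    # true statements correctly judged true
HIT_LIE = 0.41      # lies correctly judged false

# Overall accuracy is the mix-weighted average of the two hit rates.
overall = P_TRUE * HIT_TRUTH + (1 - P_TRUE) * HIT_LIE
print(f"overall accuracy: {overall:.0%}")        # 54%

# A lie goes undetected whenever it is not spotted.
liar_success = 1 - HIT_LIE
print(f"liar believed: {liar_success:.0%}")      # 59% -- "well over half the time"

# The hypothetical judge who believes everything (the Type I / Type II
# point in footnote 18): perfect on truths, hopeless on lies.
believer = P_TRUE * 1.0 + (1 - P_TRUE) * 0.0
print(f"credulous judge: {believer:.0%}")        # 50%
```

Note that the credulous judge scores the same 50% overall as a coin flip, while being a perfect truth detector and the worst possible lie detector – exactly the asymmetry described in the passage above.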

“How could I not have seen this coming?” is the common lament. One lesson from the extensive study of deception detection is that most people are inexpert at knowing when they are being lied to. As such, a committed liar would surely have succeeded just as well under someone else’s watch. Another thing to keep in mind is that lying during job-hunting appears to be rampant and widely accepted. According to recent studies, up to three-quarters of job applicants admitted to engaging in deceptive communications (including false statements on resumes, exaggerating grades and salaries during interviews, and identifying competing offers that did not exist).21 According to the Prater and Kiser article, roughly 90% of undergraduate job applicants admitted to using some form of deception during job-hunting.

In light of these two factors, managers and human resource professionals should not beat themselves up for failing to identify a particular dishonest employee. The odds are frankly stacked against them, given the prevalence of lying during the job-hunting process, the limited resources available to verify submitted information and the inherent difficulty humans have knowing when they are being lied to. The real challenge appears not to be so much to weed out the liars (for no company would seem capable of functioning without them) but to identify those whose lies are of a nature and frequency to suggest that they cannot be trusted to do their jobs consistent with the company’s needs and trust.

20 Albeit, only in a world where we got lied to half the time but did not know in advance that the proportion of lies was set at that level.
21 Levashina, J., & Campion, M. “Measuring faking in the employment interview: Development and validation of an interview faking behavior scale.” 92 Journal of Applied Psychology, 1638–1656 (2007); Prater, T., & Kiser, S. B. “Lies, lies, and more lies.” 67 SAM Advanced Management Journal, 9–36 (2002).

D. Some Lies Are All in the Telling

In our social and business worlds, we communicate with each other in many different forms. Face-to-face speech has been around the longest, but many methods are available to communicate over distances, from written and typed letters, to telephone, to email and text message. Many scholars have examined both our propensities to lie via various media, as well as our ability to discern true statements from false ones depending on how a message is delivered.22

In some ways, immediacy has much to recommend it, in terms of the ability to deftly deliver a lie. Face-to-face communication permits the speaker to add nuance and inflection in order to be maximally convincing. On the other hand, such intimate communication forces the speaker to confront the human consequences of his deception. In much the same way that soldiers find it easier to shoot each other from a distance, a liar is likely to suffer a lower psychic cost from lying at a distance. By contrast, one can lie via a letter or email and perhaps never be forced to confront any of the harm suffered by the recipient. In some circumstances, it might be possible to practice deception anonymously.

Perhaps, then, people lie most often in email or letters? The problem with practicing deceit via the written word is that the recipient is exposed to no indices of sincerity. The written lie must prevail exclusively on the quality of its content. A statement that rings hollow on paper cannot be salvaged by tone or emotion. Lying in writing also brings with it special dangers. It is very hard to back away from something recorded in writing or explain it away later as a misunderstanding. As such, while lying via email or letter might be tempting in terms of its low psychic cost, it is very risky in terms of being exposed as a liar.

22 See generally, Hancock, J., Thom-Santelli, J., & Ritchie, T. “Deception and design: The impact of communication technology on lying behavior,” in E. Dykstra-Erickson & M. Tscheligi (Eds.), Proceedings of the 2004 Conference on Human Factors in Computing Systems, 129-134 (New York: Association for Computing Machinery, 2004), as well as DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. “Lying in everyday life,” 70 Journal of Personality and Social Psychology, 979-995 (1996).

Page 10: Effective Advocacy-Taking It to the Next Leveliframe.dri.org/DRI/course-materials/2017-WITL/pdfs/05b_Schwartz.pdf2 From Chapter 3, as excerpted in Amar, Akhil Reed. “America's Lived

10 ■ Women in the Law ■ February 2017

6

Empirical studies have actually discovered that we lie most often by telephone.23 This method of deception seems to be more a compromise than an ideal choice. A telephone call would seem to be neither the most effective nor the safest way to tell a lie. It does, however, mitigate fairly well the shortcomings of other options. One can control the tone and tenor of the message through one's voice (but not through one's expressions), and doing so does not leave a "paper trail." Somewhat ironically, empirical studies of deception detection generally show that humans are relatively good at distinguishing truth from fiction when listening to vocal messages.24 The reason is that people do a much better job of employing verbal cues of deception than non-verbal ones. While this intrudes a bit on the subject of the next article, it is worth taking a moment to explain such cues. When a person attempts to evaluate the veracity of a statement, he naturally evaluates many aspects of the message he is receiving. Does the content make sense? Are the word choices strange? Is there hesitation or quavering in the voice? Does the speaker make eye contact? Does he fidget? Do his facial expressions and tone of voice match what he is saying? Some of these cues are verbal and others are not. The studies are consistent in finding that we humans are easily duped by the non-verbal cues on which we tend to focus most.25 Absent access to these non-verbal cues, we are forced to actually listen to the message being delivered by the speaker. We do much better at ignoring deceptive verbal cues than deceptive non-verbal ones.26

We are left, then, with a rather strange set of results. People lie most often during voice-only communications, like phone calls, but doing so maximizes the likelihood of being (accurately) disbelieved. It is important to keep in mind, however, that the initial convincingness of a message is only one aspect of a successful lie. The liar needs to be able to sustain the ruse and also to avoid getting trapped in the lie by subsequent events or statements. With these considerations in mind, it might still be rational for a would-be liar to employ the telephone as his instrument of deception. We will discuss at length the implications of the "cues to deception" literature in the next article, but I would like to point out here just a couple of strategies to consider. When a manager or HR professional is reviewing his notes about an employee as part of preparation for trial, it would be worthwhile to focus on telephone conversations with the employee that struck the manager as odd, incongruous, or misleading. Given our relative prowess at detecting deceit through verbal cues, digging deeper into the subjects of those conversations could reveal useful evidence. The second strategy involves getting jurors to focus on verbal cues when a witness is testifying. Especially if one is concerned that a witness is a "good liar," in that he controls his facial expressions well, it might prove beneficial to give the jury something else to look at during the witness's testimony. Rather than just handing a document to the witness when asking if he has ever seen it, put a copy up on a big screen in front of the jury. Encourage the jury to listen only with their ears. We will spend some time on this "distraction theory" in the next article.

23 Where "most often" is defined by the percentage of any given kind of message that contains intentional mistruths. Studies of lie frequency are typically conducted by having subjects keep very detailed diaries of their daily communications and indicate, in each instance, whether they told a lie. While the diaries are subsequently reviewed and coded anonymously, such studies are fraught with data reliability concerns due to self-reporting by subjects. See both Hancock, J., Thom-Santelli, J., & Ritchie, T. (2004) and DePaulo, et al. (1996).
24 For an excellent treatment of this subject, see DePaulo, B. M., Kirkendol, S. E., Tang, J., & O'Brien, T. P. "The motivational impairment effect in the communication of deception: Replications and extensions." 12 Journal of Nonverbal Behavior 177-202 (1988).
25 See Vrij, A., Granhag, P. A., & Porter, S. (2010) for an excellent treatment of verbal and non-verbal cues to deception.
26 As will be discussed in the companion article, not all non-verbal cues are created equal. Some forms of body language, as well as emotional "leakage" in facial expressions, can be useful tracers of deception, provided listeners can be trained to focus on them.

E. Boys will be Boys … Unfortunately

Men and women approach the prospect of lying – and being lied to – quite differently. Li Li wrote a doctoral thesis at the University of Miami dedicated to exploring gender differences in lying and deception detection.27 Among the studies cited by Li in an extensive literature review is a 1992 exploration of how men and women generally feel about lying.28 According to Li, "Their study indicates that regardless of the lies' content, targets, and relationship with the liars, women rate deception as more significant, more unacceptable, and reported significantly more negative emotional reactions toward discovered lies." This visceral aversion to lying is important to keep in mind when considering how women differ from men in terms of deception detection.

Relatively few studies paid much attention to gender differences in deception detection prior to Li's thesis. One study that looked at deceptive statements about art appreciation (so that there was no gender-based content) found that women were significantly more likely than men to believe deceitful statements.29 One economics experiment involved the division of a fixed monetary reward.30 One subject would tell his or her partner which portion (A or B) contained the larger share. The partner would then decide whether or not to believe the message (and pick a portion accordingly). While subjects of both genders were equally good at distinguishing true messages from lies, the male subjects lied to their partners substantially more often (55% versus 38%). Consistent with other studies, there was a substantial truth bias, in that men and women believed their partners well over 75% of the time despite being lied to almost half the time.

Motivated by these intriguing results, Li sought to explore whether men and women performed differently as deception detectors, especially with respect to the gender of the messenger. The results of Li's study are quite dramatic. As with earlier studies, Li confronted subjects with half true messages and half lies. Half of the messages were delivered by men and half by women. The overall accuracy rate was 54%, in line with previous studies. Women showed greater truth bias, identifying 63% of statements as true, while their male counterparts thought only 59% of statements were true. Almost all of the added truth bias for the female subjects came in the form of correctly identifying truthful statements. That is, both groups spotted liars about 41% of the time, but women correctly identified truth-tellers at a 70% clip, as opposed to 60% for their more suspicious male counterparts. The most interesting gender effect involves the identity of the speaker. Women were substantially easier to read, with respondents correctly identifying the veracity of their messages almost two-thirds of the time.

27 Li, Li. "Sex Differences in Deception Detection." Open Access Thesis, Paper 261 (2011).
28 Levine, T. R., McCornack, S. A., & Avery, P. B. "Sex differences in emotional reactions to discovered deception." 40 Communication Quarterly 289-296 (1992).
29 DePaulo, B. M., Epstein, J. A., & Wyer, M. M. "Sex differences in lying: How women and men deal with the dilemma of deceit." In M. Lewis & C. Saarni (Eds.), Lying and Deception in Everyday Life, 126-147 (New York: Guilford Press, 1993).
30 Dreber, A., & Johannesson, M. "Gender differences in deception." 99 Economics Letters 197-199 (2008).

The men, on the other hand, seemed to confound their audience, with fewer than 40% of respondents correctly distinguishing truth from fiction. Why were the women so much more transparent as messengers? The short answer is that people generally believe women are honest and men are liars. Female subjects correctly identified truthful women speakers over 83% of the time. Men even picked up on honest women over 75% of the time. So, as a general rule, honest women are transparently so. Both women and men correctly identified lying women just about 55% of the time (with men getting it right slightly more often). Honest men had great difficulty conveying their sincerity, convincing subjects to believe them slightly more than half the time (men actually believed honest men slightly less than half the time). The truly astonishing result is that lying men succeeded in fooling their audience three-quarters of the time. To put that in perspective, a subject was almost twice as likely to identify an honest man as a liar as to identify a deceitful man as one. Whatever cues of deception these subjects are looking to, they are clearly the wrong ones, and their observations are leading them in exactly the wrong direction. In the next section of this article, we will explore what empirical research can teach us about cues for honesty and deception.

Before moving on, this would be a good place to reflect on how the results discussed so far might be relevant for jury selection in employment cases. The discovered gender differences suggest that truth identification by jurors can depend very much on the gender of the witness under scrutiny, as well as on whether that witness is likely to be telling the truth on the stand. While all jurors are likely to detect the sincerity of a truthful female witness, women would seem to do so more easily than men. Male jurors will be skeptical and will particularly fail to recognize honesty from another man on the stand. Men seem to do a slightly better job of recognizing dishonesty among female speakers, but nobody seems particularly well-equipped to spot a male witness who is lying to them. We will return to this question later, after we have discussed cues to deception and the role of context in evaluating message veracity.

II. Unwelcome Bravado: Overconfidence in deception detection

In jury deliberations one often encounters the phenomenon of the "false expert": the person who is confident in his own knowledge of a subject or prowess at some analytical task, but whose actual knowledge base or talent is just as suspect as everyone else's, if not more so.31 False experts are a particular problem for jury deliberations for a number of reasons. First, the subject matter is usually quite foreign to the participants, as is the decision task. As such, jurors are generally relieved at the prospect of someone in the room having some clue as to what they are doing. Second, since the jurors are all strangers to each other, they cannot rely on past experience to evaluate the likelihood that any professed expert in their midst actually has any expertise. Finally, the jurors are limited to considering as evidence only what has been presented to them at trial. They are not permitted to conduct any independent research to determine whether the opinions professed by the "expert" have any merit. This is a dangerous recipe for a verdict to be "hijacked" by someone who is overconfident in his knowledge and evaluation of issues relevant to the case at hand. One of the most common subjects about which we see false expertise is how the human resources process works at various kinds of companies.

31 Such false experts are not limited to the jury room, of course. We have all encountered them in board rooms, classrooms, study groups, PTA meetings, and around the kitchen table.


Jurors claim to understand what the law requires an HR professional to do in various situations; usually, they are wrong. Jurors express unwarranted confidence that a company would have a particular human resources policy in place, notwithstanding the absence of any evidence regarding such a policy. Jurors also estimate with unjustified certainty what a compensation or severance or pension package would look like for a given employee. All of these kinds of false expertise can pervert jury verdicts in employment cases.

But what of confidence in deception detection? Are some people just better at it than others? The answer is, "Yes, some people are, in fact, better than others at discerning truth from lies." The problem is that there is essentially no relationship at all between a person's internal belief about his ability to detect deception and his actual ability to do so. As I mentioned above, "professional" lie detectors do not fare any better than laypersons at telling truth from lies, notwithstanding their sometimes extensive training in doing so.32 These professionals are nonetheless quite confident that they are better deception detectors than their lay counterparts.33 The false expert problem is not limited to professionals. In a meta-analysis of 18 different studies that compared confidence in deception detection with actual performance, the correlation found was just 0.04, which proved not to be significantly different from zero.34 That is, people who think they are good at telling truth from lies generally aren't. In fact, people who think they are good at deception detection are dangerous jurors. While they might not mistake a truth for a lie any more often than their more circumspect fellow jurors, the overconfident ones are likely to rely more heavily on such an assessment than they should. If a juror decides early on that a witness is a liar, he might reach a conclusion about the correct verdict in the case then and there, foreclosing his mind to reliable evidence. An overconfident veracity detector will weigh competing testimony inappropriately and will be resistant to arguments contrary to his initial assessment. As such, during voir dire, one would be wise to challenge prospective jurors who believe themselves to be very skilled at detecting deception. Such jurors introduce variance into the process without any increase in accuracy.

III. All Hope is not Lost: Strategies for improving juror performance as deception detectors

The lessons from the research on deception detection enumerated above would seem to paint a rather bleak picture of the ability of jurors to know who is telling the truth and who is lying on the stand. It is not encouraging to know that the average person performs little better than a coin flip in distinguishing truth from fiction. Perhaps even more disturbing, we are a gullible people – motivated liars can fool us six times out of ten.35 Does this mean that an employment case that revolves around competing versions of events is really just a coin flip? A spin of the roulette wheel? A throw of the dice? The answer, of course, is "no," for there is much missing from the story as we move from the hypothetical statements used in academic studies to the nuts and bolts of an employment dispute.

I leave you here with three important factors to consider in anticipation of the companion article to this one. First, cases do not typically revolve exclusively around whether the jury believes one side of a "he said – she said" scenario or the other. There is typically lots of evidence presented about work history, performance evaluations, prior complaints by the plaintiff or others, financial statements, and more. The disputed statements or events comprise just one piece of a much bigger puzzle to be solved by the jury. In general, juries get most of the other stuff "right." Second, a disputed statement is not made in a vacuum. Unlike in most of the studies on deception detection, when a juror is confronted with a statement by a witness, that juror generally is exposed to information about the situation, the facts underlying the questionable testimony, and the character of the witness. That is, the statement can be placed in a context that can be used to help evaluate its veracity. Much of the evidence and testimony presented at trial is an effort by the parties to put the underlying dispute into context. To quote the title of a recently popular book in social psychology, "situations matter."36 Researchers have begun to explore the important role that context plays in deception detection and how that context can be shaped to improve performance. This will be a major topic of discussion for the next article. Finally, scholars have been examining the cues we focus on when trying to discern whether someone is telling us the truth or lying to us. As mentioned above, we generally turn to all the wrong cues, thwarting our own efforts to learn the truth; but it doesn't have to be that way. There are cues we can look for that do signal some kind of cognitive dissonance – a disconnect between the speaker's words and his beliefs. Such a disconnect provides us with a red flag that something is not copacetic about what we are hearing. It might just be a lie. In the next article, I will review the literature on effective use of verbal and non-verbal cues and discuss how these results can be leveraged into effective trial strategies in employment litigation.

32 See Vrij, A., Granhag, P. A., & Porter, S. (2010) for an excellent treatment of verbal and non-verbal cues to deception.
33 See Kassin, S. M., Meissner, C. A., & Norwick, R. J. "'I'd know a false confession if I saw one': A comparative study of college students and police investigators." 29 Law and Human Behavior 211-227 (2005). It is also worth noting that there are several studies in which training in deception detection actually degrades performance in distinguishing truths from lies.
34 DePaulo, B. M., Charlton, K., Cooper, H., Lindsay, J. L., & Muhlenbruck, L. "The accuracy–confidence correlation in the detection of deception." 1 Personality and Social Psychology Review 346-357 (1997).
35 Even more often if they are men.
36 Sommers, Samuel. Situations Matter: Understanding How Context Transforms Your World (Riverhead Press, 2011).

Edward P. Schwartz is a Consultant in DecisionQuest's Boston office. Dr. Schwartz provides quantitative and qualitative analysis of pretrial jury behavior – from interviews and focus group studies to mock trials and large-scale statistical analyses – to provide clients with feedback on case themes, strategies, evidence, witnesses, and presentation style. Further, he consults on case evaluation, provides advice on trial strategies, and assists with jury selection as well as post-verdict juror surveys and interviews. Dr. Schwartz is a nationally recognized jury consultant with excellent analytical acumen and strong market research skills who is noted for his ability to blend the strategic focus of game theory and decision theory with the real-world insights of social psychology to gain a complete picture of how people absorb, analyze, and process information.

© 2014 DecisionQuest. All Rights Reserved.


II. “He Said What?”: Deception Detection and Employment Litigation—Part II: The Fallible Juror


How often have you heard the phrase, "Look me in the eye and say that"? It is deeply rooted in our collective psychology that the best way to tell whether someone is telling the truth is to look them in the eyes while they are speaking. Suppose I were to tell you that you'd be better off closing your eyes while that person was talking to you? Well, that advice is among the surprising lessons learned from research on deception detection conducted by psychologists, sociologists, and neuroscientists over the past few decades.

Here are a number of commonly relied upon indicators of deceptive behavior:

Avoidance of eye contact
Stuttering
Perspiration
Fidgeting
Hesitation
Blinking
Shifting position
Nervous laughter.

Whether or not a person can articulate exactly what he is looking for when trying to determine whether he is being lied to, he likely makes note of these various visual and auditory cues in conducting his evaluation. The problem is that we, as human beings, tend to focus on the wrong things and also make the wrong inferences from what we observe. The "tells" on the list above have been shown experimentally to be very poor indicators of lying.1

The major reason that they serve so poorly as proxies for deception is that speakers who want to be believed regularly exhibit such behaviors even when they are telling the truth, especially in high-stress situations.

Consider the HR professional who takes the stand in an employment discrimination lawsuit. Her own handling of the underlying workplace situation might be in question. She is often speaking as the company's representative, a role with a lot of responsibility and pressure. Her testimony might be central to the jury's evaluation of the case—or she might just believe that it is. She is in an unfamiliar environment, being questioned by a hostile stranger, while a stern judge in a black robe stares at her from on high. In short, this is an extremely stressful environment. The witness is highly motivated to be believed by the jury. Would anyone blame her if she fidgets, or stammers, or perspires, or occasionally loses focus? Of course not; but jurors are likely to interpret these symptoms of stress as red flags for deceitfulness.

This article (the second in a two-part series)2 is devoted to reviewing what we know about the strategies people use to evaluate whether a speaker is being truthful or deceptive, the actual effectiveness of those strategies, and what has been learned over the years about how to help people improve their performance in deception detection. This discussion follows upon what was covered in the first article, and so a brief review is in order.

First and foremost, human beings are supremely bad at differentiating true statements from lies. Over hundreds of studies, conducted in a variety of settings over decades of research, the primary, and remarkably robust, result is that we, as a species, do little better than flipping a coin when it comes to detecting deception. The average success rate at telling truth from fiction is about 54%.3

In the first article, I reviewed several variants of this basic result. "Professional" lie detectors, such as law enforcement agents, judges, and mental health professionals, do not perform significantly better than the rest of us at discerning truth from fiction. We are all generally more suspicious of messages from men, and yet men are more effective than women at duping us with their lies. From a gender perspective, the most accurate kind of evaluation is a woman correctly discerning when another woman is being truthful. This result needs to be taken with a grain of salt, however, due to something known as the "truth bias." Humans in general, and women somewhat more than men, tend to believe they are being told the truth more often than they are. In studies where participants are exposed to equal ratios of lies and truths, they still identify almost two-thirds of statements as true. As such, just by sampling, participants will correctly identify a higher percentage of truths than lies.

An interesting thing can happen, however, when the statements being evaluated are "denials" instead of "assertions." Researchers have detected a "lie bias" with respect to denials, which calls into question our collective commitment to the idea of "innocent until proven guilty." Forcing someone to deny something on the stand—even truthfully—can cause jurors to increase their internal estimates of the likelihood that the person is a liar.

The second key finding from the experimental research is that we all think we are much better at discriminating honesty from lies than we really are.4 Therefore, jurors will often rely heavily on their evaluations of a witness's credibility, notwithstanding the fact that those evaluations are likely to be faulty. A key corollary to this result is that there is virtually no statistical relationship between a person's confidence in his ability to tell whether someone is lying and his actual ability to do so. The juror who proclaims in the jury room that a particular witness was clearly lying is no more likely to have correctly assessed the witness's veracity than anyone else on the jury. This result has profound implications for litigators in the courtroom, in that it is a very dangerous strategy to rely on a jury's ability to identify which witnesses are being truthful in order to win a case.

People are terrible at figuring out when they are being told the truth and when they are being told lies. These same people are convinced that they are actually quite competent at discerning truth from lies; and there is no reason for the self-professed lie detectors to exhibit such confidence. This raises the question, "What are we doing wrong?" The answer, and the focus of this article, is that people regularly look to exactly the wrong indices of deception, such as those enumerated above.

With respect to the "look me in the eye" test, it turns out that liars typically do a pretty good job of controlling their facial expressions, while truth-tellers pay less attention to the emotional expressions associated with their speech.5 As such, focusing on a speaker's face during message transmission actually reduces a listener's ability to tell whether she is being told the truth or lied to.6 Consider a typical witness stand—a low chair behind a tall screen, so that little more than the witness's face is visible. This is a terrible recipe for accurate deception detection by jurors. There are strategies, however, that can be used to increase the likelihood that jurors will recognize the veracity of an honest witness, and others that can improve the detection of deceptive testimony. These strategies comprise the bulk of the discussion below.

I. TRANSPARENCY AND TRUTH DETECTION

Over the years, psychologists and sociologists have conducted hundreds of studies aimed at exploring the ability of humans to distinguish honest messages from deceptive ones. It is important at this point to emphasize that, for our purposes here, a deceptive message is one that the bearer of the message (be it a job applicant, significant other, business associate or witness) believes to be false, but is delivered in a way intended to convince the recipient of the message that it is true. A truthful message, by contrast, is believed to be true by the speaker, who similarly attempts to convince the recipient that it is true. As such, this review does not cover mistakes, misremembrances, uncertainty on the speaker's part, exaggerations or expressions of overconfidence. All of these other forms of “falsehoods” are, of course, very important, both in everyday business dealings and litigation; they are, however, beyond the scope of this article.

Each message importantly involves two people: the sender and the receiver. The well-known philosophical question asks: if a tree falls in the forest and no one is around to hear it, does it make a sound?7 The question can be restated, for our purposes, in terms of whether a sound—or a message—that does not reach any listener can still have meaning. A great majority of the early studies of deception detection focused on the ability of the listener to discern truth from fiction. While there was some focus on the nature of the message—its subject matter, length, syntax, etc.—and how such things could affect deception detection performance, relatively little attention was paid to the role played by the speaker.

A. Demeanor vs. Transparency

Relatively recently, researchers began to ask whether it might be the case that certain speakers are just easier to read than others.8 Might it be the ability of the sender to convey trustworthiness, rather than any innate ability of the listener, that contributed to effective deception detection? This line of inquiry required further elaboration of what could be meant by terms like "credible" and "trustworthy." Surely, a speaker who always came across as honest, even when she was lying, would not improve listener accuracy. Listeners would correctly identify her true statements but would always be fooled by her lies. By the same token, a speaker who seems to everyone to be "the lying sort" would have his deceptions detected (a good outcome), but would have a great deal of difficulty getting anyone to believe his honest statements (a bad outcome).

It became necessary, then, to distinguish between “demeanor” and “transparency.” Demeanor describes the level of trustworthiness a speaker conveys, regardless of the truth of his message. So, some people just come across as honest; others seem shifty. This is a description of demeanor. By contrast, a person is transparent if he seems trustworthy when he tells the truth and deceptive when he lies.9

Transparency can be thought of as a characteristic of a person, in that he is generally easy to read, or of a person in a particular context. For instance, a person might be very transparent in his communications with his spouse but nontransparent at the poker table.

B. Mr. Cellophane—It's all in your mind

From the perspective of the listener, we know that people are not very good at all at reading the cues to deception. Most people look to unhelpful cues and ultimately perform little better than chance when attempting to differentiate lies from truths. What's more, we have completely unrealistic expectations about our own abilities to perform this task. What, then, of the speakers who are sending the messages? Do they typically appreciate how convincing they are? Do witnesses know how effectively they can lie? Or how convincingly they can report the truth?

Recent research has revealed that we are not appreciably better at recognizing our own transparency than at recognizing it in others. Gilovich, Savitsky and Medvec (1998) conducted a series of experiments aimed at exploring whether people have a realistic sense of their own transparency.10 In one experiment, students were placed in groups of five, and all were asked to answer the same question about their own personal experiences. In each round, one student was assigned to be the liar. After all five students had spoken their answers, the four truth-tellers were asked to identify the liar in the group, and the liar was asked to estimate the percentage of other group members who would correctly identify him (or her) as the liar. By pure chance, the accuracy rate should have been 25%, and the experimental result was 25.6%. Therefore, we can conclude that, at least under these conditions, the student subjects were not very transparent at all. By contrast, the mean estimate of transparency (the percentage of group members the liar believed would identify him as the liar) was 48.8%. That is, subjects thought they were twice as transparent as they actually were.

The researchers were concerned that the overestimate of transparency might just stem from the general perception that lies are easier to spot than they really are. The team then ran a replication of the study with an important twist. Each subject was yoked to an "observer." The observer knew whether his partner was a truth-teller or liar for each round but knew nothing more about any of the other players. Once again, participants overestimated their transparency (44.3% estimate of being pegged as the liar); however, the observers did not overestimate their partners' transparency (25.3% estimate of being caught lying). As a further check on this result, the researchers had each participant and his observer rate the "obviousness" of the participant's lie on a 7-point scale, ranging from "not at all obvious" (1) to "very obvious" (7). Participants rated their own lies as a 3.0 on the obviousness scale, on average, while the average response from an observer partner was only 2.0.

These results have important implications for litigators concerned about witnesses lying on the stand during trial. While the general inability of jurors to correctly discern who is being honest and who is lying to them is extremely troubling, the concern would be somewhat mitigated by a belief that witnesses generally tell the truth under oath. In addition to deeply ingrained social norms against lying and a strong commitment to the integrity of the judicial system, we can add another reason to suspect that witnesses will be loath to lie on the stand: they think they'll get caught. The illusion of transparency can act as a deterrent (albeit a false one) to perjury.

C. In Search of Transparent Testimony

Only a small portion of the population might be fairly transparent throughout their lives, in the sense that their lies are easily distinguishable from their truths across a wide range of circumstances. That is not to say, however, that most of us cannot be transparent under some circumstances. The question facing a trial attorney is how to increase transparency for witnesses on the stand. Is there some method of questioning that will increase the ability of jurors to identify truthful witnesses as credible and dishonest ones as liars?

Levine, Shaw and Shulman sought to explore this very question.11 They hypothesized that asking subjects directly whether they were lying could produce greater transparency than simply asking the questions to which they might offer lies. It was not obvious that such an approach would increase transparency. Asking a liar about his lying puts him on notice that his veracity is being scrutinized, thereby increasing his incentive to mask his insincerity. By contrast, the honest person could have a whole host of emotional responses to being accused of lying, some of which might be misinterpreted by listeners as indications of dishonesty.

The researchers had subjects participate in a trivia game with a partner, ostensibly to explore issues of teamwork. The partner was a confederate who would encourage the subject to cheat on the trivia game. Following the game, each subject was interviewed briefly with three varieties of questions. Only two types of questions are relevant for our purposes here. Each subject was first asked rather innocuous questions about experience with trivia games and team exercises. At the end of the interview, each subject was asked pointedly whether anyone had cheated at the game. Each person who denied cheating was then asked if he was telling the truth. Finally, he was asked what his partner from the game would say about whether any cheating had gone on. All cheaters who confessed were excluded from the sample, leaving 22 subjects who cheated and then completely denied doing so during the subsequent interview.

The researchers matched these lying cheaters with 22 demographically similar subjects who had played the game honestly and subsequently denied cheating. (Happily, there were no false confessions.) These 44 interviews were then edited into two distinct sets. The first set included only the background questions about trivia and teamwork. The second set included only the direct and strategic questioning about cheating and lying. The researchers then showed the 44 background interviews to 64 subjects, asking each one to classify each interviewee as either a cheater-liar or an honest non-cheater. They showed the 44 direct and strategic interviews to a different group of 64 subjects and gave them the same classification task.

Overall accuracy was 55.9%, very much in keeping with what has been discovered in other studies: slightly better than chance. There was the expected truth bias, with listeners identifying 60% of interviewees as honest (although only half of them were). The key result, however, was the difference in success rate depending on interrogation method. The subjects who saw only background interviews correctly distinguished cheaters from non-cheaters 44% of the time. The ones who watched the direct and strategic interviews, on the other hand, had a 68% accuracy rate. Cross-examination substantially increased witness transparency.
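A useful sanity check on these numbers: when the pool is half honest, a truth bias by itself cannot lift accuracy above chance, so the 55.9% figure reflects modest genuine discrimination rather than the 60% rate of "honest" labels. A hypothetical zero-skill judge (an illustration, not data from the study) makes the point:

```python
import random

# Hypothetical zero-skill judge: labels 60% of interviewees "honest"
# at random, ignoring the speaker entirely. Against a pool that is
# half honest, expected accuracy is 0.5*0.6 + 0.5*0.4 = 0.50, no
# matter how strong the truth bias is.
TRIALS = 200_000
TRUTH_BIAS = 0.60

correct = 0
for _ in range(TRIALS):
    is_honest = random.random() < 0.5           # half the pool is honest
    says_honest = random.random() < TRUTH_BIAS  # judgment ignores the speaker
    if says_honest == is_honest:
        correct += 1

print(f"zero-skill accuracy: {correct / TRIALS:.3f}")  # close to 0.50
```

Any accuracy meaningfully above 50% in a balanced design therefore indicates that listeners extracted at least some information from the interviews.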

The results further supported the hypothesis that the transparency of the message sender contributes more to successful deception detection than does the judging skill of the listener. Among judges who watched the background interviews, 85% had a success rate between 30% and 50%; that is, almost everyone was mediocre. The group who watched the direct interviews performed better but was similarly homogeneous, with 85% correctly distinguishing cheaters from non-cheaters between 60% and 80% of the time. Some of the speakers, however, proved to be extremely transparent, while others were completely confounding. When only background interviews were observed, 9% of the speakers were misclassified as cheaters or non-cheaters more than 80% of the time, while another 9% were so transparent that they were correctly classified more than 80% of the time. This variation in transparency carried over to the direct/strategic interviews as well. One quarter of the speakers were misclassified more than half the time even when facing direct interrogation, while more than a third were correctly identified as cheaters or non-cheaters by more than 80% of listeners after their direct interrogations were watched.

These results carry two very important lessons for litigators. The first is that witnesses vary a great deal in their inherent transparency. The best way to ascertain whether a witness is transparent is to subject a simulation of her testimony (or a clip from a deposition) to a focus group for reaction. In addition to asking for general ratings of credibility and honesty, it is wise to ask respondents to evaluate the truthfulness of specific key statements related to the central themes of the case. When it really counts, will jurors believe your witness? For an adverse witness, will jurors detect false statements? Will they dismiss even true ones? Assuming that your trial team knows which statements are true and which are false, it will be possible to differentiate between a witness's demeanor and her transparency. One caveat is worth emphasizing: just as we are very poor evaluators of veracity who believe we are much better than we are, attorneys should not assume that they can easily discern for themselves whether a witness will be transparent. Don't take a chance that your coin flip turned out wrong; test reactions to any key witness to confirm that jurors will respond to the testimony as you expect.

The second important lesson is that direct and strategic interrogation can increase witness transparency. Note that this was true even for subjects who did not cheat and were honestly denying the accusation of having done so. Therefore, while a tough cross-examination will increase the likelihood that jurors will detect deceit from an adverse witness (no real surprise), asking your own witness to confirm that she is telling the truth can actually increase the likelihood that jurors correctly identify her as honest. There is an important caveat here. Because transparency varies across speakers, direct and strategic questioning will not uniformly improve juror judgments of veracity. For some witnesses, it will be very effective, while for others, it might not make much of a difference.

18 ■ Women in the Law ■ February 2017

II. IMPROVING LISTENER PERFORMANCE

Given the very poor performance of experimental subjects in deception detection studies, researchers have turned their attention to identifying what, if anything, observers do wrong that causes them to distinguish truth from fiction so poorly. If we better understand the errors we make, it might be possible to develop strategies for improving accuracy in distinguishing truths from lies.

A. The Wrong Cues

As noted above, many common beliefs about how liars behave differently from truth-tellers turn out to be counterproductive. That is, they are misconceptions. Many of these misconceptions are related to facial expressions.

A liar will not look the listener in the eyes. Liars always look up and to the left. A liar will suffer from a dry mouth and lips, and will therefore lick his lips more. Liars blink a lot. Someone who is lying will have a facial tic. All of these are regularly listed by people as things to look for when trying to figure out who is lying.12

A person who is motivated to lie, however, will do his very best to control the kinds of behaviors that are likely to be interpreted by his audience as signs of dishonesty. In fact, a person who is trying to deceive might control such mannerisms much more closely than someone who knows he has nothing to hide. As such, focusing on the face of the speaker might just be counterproductive for detecting deception.

Zuckerman, et al. addressed this concern in a meta-analysis of many deception detection studies, focusing on listener performance as a function of what the listener could see and hear.13 The results were quite intriguing. Being able to hear the speaker's words definitely improved lie detection performance over watching silent footage of the speaker. The ability to see the speaker's body was also somewhat helpful. By contrast, being able to watch the speaker's face during the delivery of the message actually reduced the listener's ability to discern whether he was being lied to. The speaker's face proved to be an unhelpful distraction: listeners tend to focus on the one part of the speaker that liars control better than truth-tellers do.

There are two straightforward strategies to overcome this failure in the way people try to distinguish truth from fiction. The first is to get listeners to focus on more productive cues. The second is to disrupt the ability of liars to control their facial reactions while testifying.

B. Leakage: Break the Dam

Paul Ekman and his collaborators pioneered an approach to deception detection focused on exploiting the psychological effort required to control one's emotions while lying.14 The underlying theory suggests that it is more taxing for a person to represent a statement as true if it is, in fact, a lie than if it is actually true. The speaker has to keep the lie straight and consistent, control his tone, and maintain his composure. All of the studies cited to this point suggest that liars actually achieve these ends quite easily. Ekman and his team, however, suggested that there must be ways to detect the stress created by having to maintain a lie.

These researchers discovered that people who are trying to remain stoic and unreadable (non-transparent) tend, nonetheless, to reveal their true emotional states through their facial expressions, but only for fractions of a second at a time.15 That is, we tend to leak our true emotions. The problem from a deception detection standpoint, however, is that these leakages, referred to as micro-expressions, are so brief that most people miss them completely.

Ekman and some of his colleagues have developed training techniques to teach people to detect micro-expressions during interrogations, interviews and testimony.16 There is great variation in the effectiveness of this training, from person to person, but it can improve micro-expression detection accuracy quite profoundly for some people.17

While these results are theoretically fascinating, they do not provide much that can be used in litigation from a practical standpoint. Litigators are not permitted to subject jurors to micro-expression detection training, even were the lawyer or his client willing to pay for it. The court will not allow counsel to record testimony and play it back in super-slow motion so that each micro-expression can be pointed out to the jury. In addition, the researchers who have pioneered this work are careful to point out that emotional leakage is not a direct indicator of lying. Such leakage indicates only that there is a disconnect between the speaker's true emotional state and his spoken message. There could be many reasons other than dishonesty for such an incongruity, such as fear, nervousness, embarrassment or uncertainty.

A crafty litigator then needs to find strategies for converting micro-expressions into more easily detectable facial cues. That is, the litigator wants to increase the emotional transparency of the witness. This brings us full circle to the topic covered above. According to the research of Ekman and others, the reason that strategic questioning increases transparency, as reported in the Levine, et al. article, is that such direct and accusatory questioning maximizes the strain on the witness attempting to keep his true emotions in check. These tactics increase leakage to the point where at least some observers detect the speaker's true emotional state.

Another method of increasing emotional leakage is to force the speaker to multitask. The theory underlying this tactic is that increasing the cognitive load on the speaker will make it more difficult for him to focus his attention on maintaining his composure.18 Vrij and his collaborators discovered that forcing a speaker to engage simultaneously in another task while speaking increased the ability of listeners to distinguish true statements from falsehoods. It is also possible to increase the cognitive load by interrupting the speaker's message and forcing him to stop and restart.

This result is encouraging for litigators who are concerned about witnesses lying on the stand. In court, it is wise to make the witness examine a document or other exhibit during testimony. Even better, have the witness come down from the witness stand to demonstrate something to the jury. All of these tactics serve to disrupt the flow of the witness's narrative, which will make it more difficult to control emotional leakage.

C. Redirect Isn't Only for Witnesses

While distracting the witness can increase his transparency, distraction can also help improve the perceptiveness of jurors. When left to their own devices, jurors naturally focus on the face of the witness on the stand, a strategy that we have learned works to the detriment of deception detection. How then is an attorney to get jurors to focus on the elements of testimony that more reliably reveal the truth of the matter? One of the results discussed in the first article related to the medium used to convey a message. Research revealed that people more accurately detected truth in phone calls than in face-to-face interactions.19 This is consistent with what we have learned about the deceptiveness of facial expressions. The key then for the trial attorney is to get jurors to listen to testimony, rather than watch it.20

This provides an opportunity for the strategic use of demonstrative exhibits at trial. If a litigator is concerned that a witness will attempt to mislead the jury, that litigator should give the jurors something to look at other than the witness's face. Put a chart on the screen and leave it there while questioning the witness on the crucial detail about which veracity is in doubt. Hand out copies of a document to the jurors while asking the witness the key question. While the jurors are busy passing paper and reading the document title, they will still be listening to the witness. These distraction techniques, aimed at the listener rather than the speaker, have been shown to improve deception detection performance.21

III. LESSONS FOR EMPLOYMENT LITIGATION

The lawyer engaged in an employment case faces a troubling starting point. These cases very often involve conflicting accounts of events and conversations without much supporting documentation for either position. Only very late in the game does an employee (or former employee) reflect back on his experience with a company and decide that he was treated in a way that he contends is unlawful. While events were transpiring, it never occurs to him to keep records or share his stories with others. By the same token, his supervisor hears no complaints and feels no need to document any interactions with the employee. It all feels like business as usual. So, when things turn sour and litigation is in the picture, jurors find themselves confronted with a classic he said/she said scenario. One party (or both) might choose to lie during testimony about certain facts. The litigator would like to be able to rely on the common sense of jurors to separate the truth from the lies. Alas, research has shown time and again that we humans are uniformly terrible at evaluating whether we are being lied to.

Employment cases do not take place in a vacuum, however. They often raise emotional issues, increasing the likelihood that witness testimony will reflect honest witness attitudes. Fear of being “found out” is sufficient to deter most witnesses from lying on the stand, especially in the presence of so much supporting documentation and testimony from other witnesses. Witnesses are likely to feel substantially more vulnerable to lie detection than is warranted, but in this context that is a good thing. Attorneys also have an opportunity during voir dire to ferret out those potential jurors who seem overly confident in their own evaluative capabilities.

The research discussed in these two articles serves two primary purposes. The first is to tell a cautionary tale about relying on deception detection among jurors to win your case. As a general rule, don't do it. The second purpose is to share some of the strategies that have been uncovered through subsequent research that can increase the likelihood that jurors will correctly spot the liars on the witness stand—and equally importantly, spot the honest witnesses.

Through strategic forms of questioning, well-timed presentation of exhibits and a focus on message content, it is possible to increase the likelihood that a jury's verdict will result from deliberations over the true facts of the case.

NOTES:

1 DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. “Cues to deception,” Psychological Bulletin, 129(1), 74–118 (2003).

2 The companion piece, “‘He Said What?’: Deception Detection and Employment Litigation—Part I: The Gullible Juror,” appeared in the February/March 2014 issue of HR Advisor.

3 Bond, C. F., & DePaulo, B. M. “Accuracy of deception judgments.” 10 Personality and Social Psychology Review, 214–234. (2006)

4 See Vrij, A., Granhag, P. A., & Porter, S. “Pitfalls and Opportunities in Nonverbal and Verbal Lie Detection.” 11 Psychological Science in the Public Interest, 89–121. (2010).

5 See Vrij, A., Granhag, P. A., & Porter, S. (2010).

6 Warren, G, Schertler, E and Bull, P., “Detecting Deception from Emotional and Unemotional Cues,” 33 J Nonverbal Behav 59–69 (2009).


7 George Berkeley, “A Treatise Concerning the Principles of Human Knowledge,” 1734. section 23. Berkeley's formulation was slightly different. The first appearance of the common phrasing of the riddle appeared in a physics textbook in the early twentieth century: Mann, Charles Riborg and George Ransom Twiss. Physics. Scott, Foresman and Co., 1910, p. 235.

8 See Levine, T.R., “A Few Transparent Liars,” Communication Yearbook, 34 (2010) for a review and articulation of the theory that any performance above simple chance in truth detection is the function of having a small number of “easy-to-read” liars in the population.

9 See Levine, T.R., Shaw, A. and Shulman, H.C., “Increasing Deception Detection Accuracy with Strategic Questioning,” 36 Human Communication Research 216–231 (2010) for an excellent treatment of the distinction between demeanor and transparency.

10 Gilovich, T, Savitsky, K, and Medvec, V.H., “The Illusion of Transparency: Biased Assessments of Others' Ability to Read One's Emotional States,” 75 Journal of Personality and Social Psychology, 332–346 (1998).

11 Levine, T.R., Shaw, A. and Shulman, HC, “Increasing Deception Detection Accuracy with Strategic Questioning,” 36 Human Communication Research, 216–231 (2010).

12 Vrij, A. (2008). Detecting deceit and lies: Pitfalls and opportunities. West Sussex: Wiley. See also Vrij, A., Granhag, P. A., & Porter, S. (2010) for an excellent treatment of verbal and non-verbal cues to deception.

13 Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology. New York: Academic Press.

14 Ekman, P., O'Sullivan, M., & Frank, M. G. (1999). A few can catch a liar. Psychological Science, 10, 263–266.

15 There are seven universal facial expressions that are easily identified by virtually all humans, even babies. They are: fear, joy, anger, surprise, sadness, contempt and disgust. If we could watch speakers in super-slow motion, we would all be able to detect their true emotional states without much problem.

16 David Matsumoto, Ph.D., Hyi Sung Hwang, Ph.D., Lisa Skinner, J.D., and Mark Frank, Ph.D., “Evaluating Truthfulness and Detecting Deception,” in FBI Law Enforcement Bulletin (June, 2011).

17 On a personal note, I attended a two-hour training program conducted by David Matsumoto and discovered that I was remarkably proficient at identifying micro-expressions by the end of the session. I had no idea, however, how I was so successful. It felt very much like I was guessing, despite my 83% accuracy rate. This is interesting in light of the result, reported in the first article, that confidence in deception detection is not at all correlated with actual acumen. It turns out I was talented (at least at this part of it) but had no confidence in my own ability.

18 Vrij, A., Fisher, R., Mann, S. and Leal, S., “Detecting deception by manipulating cognitive load,” Trends in Cognitive Sciences, 10(4), 141–142 (2006).

19 See generally, Hancock, J., Thom-Santelli, J., & Ritchie, T. “Deception and design: The impact of communication technology on lying behavior,” In E. Dykstra-Erickson, & M. Tscheligi (Eds.), Proceedings of the 2004 conference on human factors in computing systems, 129–134. (New York: Association for Computing Machinery, 2004), as well as, DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. “Lying in everyday life,” 70 Journal of Personality and Social Psychology, 979–995 (1996).

20 See Vrij, A., Granhag, P. A., & Porter, S. “Pitfalls and Opportunities in Nonverbal and Verbal Lie Detection.” 11 Psychological Science in the Public Interest, 89–121. (2010)

21 It is important, of course, not to distract the listener so much that he does not hear the words being spoken by the witness.

Edward P. Schwartz is a Consultant in DecisionQuest’s Boston office. Dr. Schwartz provides quantitative and qualitative analysis of pretrial jury behavior – from interviews and focus group studies to mock trials and large-scale statistical analyses – to provide clients with feedback on case themes, strategies, evidence, witnesses, and presentation style. Further, he consults on case evaluation, provides advice on trial strategies, and assists with jury selection as well as post-verdict juror surveys and interviews. Dr. Schwartz is a nationally recognized jury consultant with excellent analytical acumen and strong market research skills who is noted for his ability to blend the strategic focus of game theory and decision theory with the real-world insights of social psychology to gain a complete picture of how people absorb, analyze, and process information.

Reproduced with permission: HR Adviser: Legal and Practical Guidance (Thomson Reuters), Vol. 2 (2), pp. 7-18, March/April 2014


Less Is More? Detecting Lies in Veiled Witnesses

Amy-May Leach, Nawal Ammar, D. Nicole England, and Laura M. Remigio
University of Ontario Institute of Technology

Bennett Kleinberg and Bruno J. Verschuere
University of Amsterdam

Judges in the United States, the United Kingdom, and Canada have ruled that witnesses may not wear the niqab—a type of face veil—when testifying, in part because they believed that it was necessary to see a person's face to detect deception (Muhammad v. Enterprise Rent-A-Car, 2006; R. v. N. S., 2010; The Queen v. D(R), 2013). In two studies, we used conventional research methods and safeguards to empirically examine the assumption that niqabs interfere with lie detection. Female witnesses were randomly assigned to lie or tell the truth while remaining unveiled or while wearing a hijab (i.e., a head veil) or a niqab (i.e., a face veil). In Study 1, laypersons in Canada (N = 232) were more accurate at detecting deception in witnesses who wore niqabs or hijabs than in those who did not wear veils. Concealing portions of witnesses' faces led laypersons to change their decision-making strategies without eliciting negative biases. Lie detection results were partially replicated in Study 2, with laypersons in Canada, the United Kingdom, and the Netherlands (N = 291): observers' performance was better when witnesses wore either niqabs or hijabs than when witnesses did not wear veils. These findings suggest that, contrary to judicial opinion, niqabs do not interfere with—and may, in fact, improve—the ability to detect deception.

Keywords: lie detection, Muslims, witnesses, veiling, minimal information

Wearing a niqab—a veil that covers the wearer's face, except for her eyes—is increasingly prevalent, but contentious. In the 1970s, 1% of the Muslim population wore face veils; currently, approximately one third of female Muslims engage in the practice (see Figure 1 for sample niqab; al-Ghazali, 2008). There has been considerable debate about the appropriateness of wearing a niqab (e.g., Khiabany & Williamson, 2008; Mistry, Bhugra, Chaleby, Khan, & Sauer, 2009; Vakulenko, 2007). In fact, the wearing of a niqab has been officially banned from all public places in several countries, such as Belgium, Egypt, France, and Turkey (Loi n° 2010–1192 du 11 octobre 2010 interdisant la dissimulation du visage dans l'espace public, 2010; Loi visant à interdire le port de tout vêtement cachant totalement ou de manière principale le visage, 2011; Syed, 2010).

The permissibility of the niqab has also been called into question by the courts. For example, a judge in an American small claims court dismissed a plaintiff's complaint when she refused to remove her veil in order to testify (Muhammad v. Enterprise Rent-A-Car, 2006). Similarly, in Canada, an alleged victim of childhood sexual assault was ordered to testify at a preliminary inquiry without her niqab (R. v. N. S., 2010). Most recently, a defendant who had been charged with witness intimidation was directed to remove her niqab while presenting evidence in the United Kingdom (The Queen v. D(R), 2013). In banning the wearing of a niqab while testifying, the various courts attempted to balance the need to establish a witness's identity, the strength of the women's religious beliefs, and the right to freedom of religion. Ultimately, however, the right to a fair trial—and the threat to that right posed by allowing a witness to wear the niqab while testifying—appeared to override the witnesses' right to veil. The presiding judges opined that, in an adversarial trial, a judge must be able to see a witness's face to assess her truthfulness.

Considerable psychology–law research has been devoted to testing assumptions underlying legal decisions and laws. For example, Wells and Quinlivan (2009) found that beliefs about human cognition, which formed the basis of the U.S. Supreme Court's decision on how to evaluate claims of suggestiveness of police lineups in Manson v. Braithwaite (1977), were inconsistent with contemporary research findings in the eyewitness identification literature. In the current studies, we examined the notion embodied in important court decisions in the United States, the United Kingdom, and Canada (e.g., N. S. v. Her Majesty the Queen et al., 2012): that a fact-finder's ability to detect deception among witnesses is compromised by the niqab.

This article was published Online First June 27, 2016. Amy-May Leach, Nawal Ammar, D. Nicole England, and Laura M. Remigio, Faculty of Social Science and Humanities, University of Ontario Institute of Technology; Bennett Kleinberg and Bruno J. Verschuere, Clinical Psychology, University of Amsterdam. This research was supported by Social Sciences and Humanities Research Council of Canada Grant 430-2011-0407 to Amy-May Leach and Nawal Ammar. We also thank Brian Cutler for his comments on the manuscript. Correspondence concerning this article should be addressed to Amy-May Leach, University of Ontario Institute of Technology, Faculty of Social Science and Humanities, 2000 Simcoe Street North, Oshawa, Ontario, Canada. E-mail: [email protected]

Law and Human Behavior, 2016, Vol. 40, No. 4, 401–410. © 2016 American Psychological Association. 0147-7307/16/$12.00 http://dx.doi.org/10.1037/lhb0000189

Typically, observers' lie detection performance is poor. Average accuracy for laypersons and justice officials is very close to 50%, or chance levels (Aamodt & Custer, 2006; Bond & DePaulo, 2006). In the majority of lie detection studies, however, lie-tellers' and truth-tellers' faces were visible. Our literature review uncovered no previously published research on the effects of religious garments on lie detection.

A few studies were indirectly relevant to people's abilities to detect deception among veiled witnesses. Research on cross-cultural lie detection was informative, for example. It is highly unlikely that every single defendant, plaintiff, witness, and decision-maker (e.g., juror, judge) involved in a case would wear a niqab. Although viewing veiled witnesses would not constitute cross-cultural lie detection, observers' inexperience with veiling might be analogous to a lack of familiarity between cultures. In two studies, laypersons were slightly better at detecting deceit within their own cultures than across cultures (Bond & Atoum, 2000; Bond, Omar, Mahmoud, & Bonser, 1990). Yet, Vrij and Winkel (1991) failed to find significant differences between Dutch and Surinamese observers' cross-cultural lie detection. Given the limited number of studies and mixed findings (see Taylor, Larner, Conchie, & van der Zee, 2014, for a full discussion), it remains unclear from this literature whether there is a meaningful disadvantage to detecting deception in witnesses in niqabs.

Instead, it might be important to focus on the more limited information that is afforded by niqabs as compared to bare faces. People are able to make inferences based on minimal information. For example, point-light displays of biological motion and static images of facial features can be used to discern others' attributes (Baron-Cohen, Wheelwright, & Jolliffe, 1997; Blais, Roy, Fiset, Arguin, & Gosselin, 2012; Troje, 2002). Similarly, another form of minimization, thin slices (i.e., exposure to less than 5 minutes of behavior), reveals performance, interpersonal relationships, and individual differences (e.g., Ambady, Bernieri, & Richeson, 2000; Ambady & Rosenthal, 1992). Focusing on general impressions might discourage the use of irrelevant details and increase efficient, intuitive processing without taxing cognitive resources (Ambady, 2010; Murphy & Balzer, 1986).

Minimization-of-information principles have been applied to lie detection. In one direct test, observers who were afforded brief glimpses of behavior were more accurate than observers who viewed lie- and truth-tellers' entire accounts (Albrechtsen, Meissner, & Susa, 2009). It is not necessarily that this approach encourages unconscious decision-making, but rather that it focuses observers on a limited number of diagnostic cues (Street & Richardson, 2014). These minimization effects have also been examined in terms of the medium of presentation (i.e., audio vs. visual vs. audiovisual; Burgoon, Blair, & Strom, 2008; Zuckerman, Koestner, & Colella, 1985). This research might be the most relevant to the study of face veiling because it involves restricting the nonverbal and verbal cues that are available for decision-making. A meta-analysis of 50 studies (Bond & DePaulo, 2006) revealed that overall lie detection accuracy was similar whether observers received audio (i.e., more restricted) or audiovisual (i.e., less restricted) information. To date, there has been no research on how exposure to the full range of verbal cues, but only a subset of nonverbal cues, affects performance.

We found no empirical evidence in the lie detection literature suggesting that a niqab should impair lie detection because it conceals portions of the wearer's face; rather, existing research suggests that the opposite could occur. Niqabs should minimize the amount of information that is available to observers and prevent them from basing their lie detection decisions on misleading facial cues (e.g., smiling; DePaulo et al., 2003). In turn, the veiling of the witness might force observers to attend to sources of information that are more diagnostic of deception, such as verbal content (Vrij, 2008). Niqabs also explicitly highlight a specific subset of nonverbal cues (i.e., the witnesses' eyes). It is widely believed that a person's eyes reveal deception (The Global Deception Research Team, 2006), and the eye region can be used to identify complex mental states (e.g., guilt; Baron-Cohen et al., 1997). In particular, blinking and pupil dilation are effective cues to deceit in certain contexts (e.g., DePaulo et al., 2003; Leal & Vrij, 2008). By encouraging the use of verbal cues and/or eye region cues, niqabs could actually facilitate the detection of deception.

Although niqabs may lead to improved lie detection performance, they might also elicit response biases. Veiled Muslim women report being stared at, insulted, and assaulted (Clarke, 2013). Being Muslim is associated with an aggressive stereotype (Fischer, Greitemeyer, & Kastenmuller, 2007), and niqabs are perceived as particularly threatening (Behiery, 2013). It is possible that people attribute a range of other negative behaviors, such as lying, to women who wear veils (see Hoodfar, 1997, for a similar argument about head veiling). The typical dark color of niqabs could also invoke the black clothing stereotype, in which individuals who wear dark (vs. light) colors are less likely to be judged as credible (Akehurst, Kohnken, Vrij, & Bull, 1996). In turn, there might be a tendency to label women who wear niqabs as lie-tellers, regardless of the underlying veracity of those witnesses' accounts. Maeder, Dempsey, and Pozzulo (2012) examined whether an alleged sexual assault victim's veiling influenced the perceived culpability of the defendant. In their study, the victim was described as wearing a burqa (i.e., a garment in a solid color that, in addition to covering the hair and face, conceals the entire body), a hijab, or no veil. Mock jurors, who read transcripts of the victim's testimony, were more confident in the defendant's guilt when the victim was described as wearing a burqa or hijab than when she did not veil. In sum, various sources directly and indirectly associated with veiling led us to examine the possibility that niqabs produced a response bias during lie detection. No research, to date, has examined the effects of actively minimizing only a subset of nonverbal cues—while highlighting others—on response bias.

Figure 1. One type of veil—the niqab—that is the focus of this research project.

402 LEACH ET AL.

Study 1

In Study 1, we examined participants' lie detection accuracy, response biases, and decision strategies when evaluating the testimony of eyewitnesses in three veiling conditions: niqab, hijab, and no veil. We hypothesized that lie detection accuracy would be higher in the niqab condition than in the hijab or no-veil conditions because it would minimize the availability of misleading cues to deception and facilitate the use of more effective strategies (e.g., Albrechtsen et al., 2009). In addition, we predicted that veils would activate stereotypically negative views of Muslim women (e.g., Fischer et al., 2007); therefore, we expected a lie bias (i.e., a tendency to indicate that witnesses were lying) in the niqab and hijab conditions but not in the no-veil condition. Given that niqabs are portrayed less positively than hijabs (Behiery, 2013), we hypothesized that the lie bias would be stronger in the niqab condition than in the hijab condition. Finally, we conducted exploratory analyses to determine whether expected lie detection effects could be accounted for by participants in the niqab condition attending to witnesses' eyes and the content of their accounts (i.e., verbal cues) to a greater extent than participants who were able to see witnesses' entire faces (i.e., the hijab and no-veil conditions), and/or by the witnesses' actual nonverbal and verbal behaviors.

Method

Participants. Two hundred and thirty-two students at a Canadian university (138 females, 94 males; M age = 20.09 years, SD = 3.83) completed the study in exchange for course credit. Participants self-identified as belonging to the following ethnic groups: Arab/West Asian (n = 22), Black (n = 25), Chinese (n = 8), White (n = 74), Hispanic (n = 1), Korean (n = 1), Latin American (n = 3), South Asian (n = 79), South East Asian (n = 10), other (n = 9). The majority of participants (n = 223) did not wear hijabs or niqabs, and the largest religious group self-identified as Christian (n = 95).

Study design. We employed a 2 (veracity: lie-tellers vs. truth-tellers) × 3 (veiling condition: niqab vs. hijab vs. no veil) mixed-factors design. We manipulated veiling condition between participants to decrease the potential impact of demand characteristics.

Materials.

Video footage. In individual sessions, female witnesses (N = 80, M age = 20.23 years, SD = 5.74) were shown a video of a woman who was watching a stranger's bag. As determined by random assignment, half of the women also observed her stealing items from the bag. Then, all of the witnesses were informed that the woman had been accused of theft and they were being called to testify on her behalf (i.e., they were to state that they did not see her steal anything). Thus, half of the witnesses were lying and half of the witnesses were telling the truth. Witnesses were given 2 minutes to prepare their testimony and, as in real trials, they were provided with the questions that would be asked by the defense lawyer. Once they were prepared, witnesses were randomly assigned to don a black niqab, a black hijab, or remain unveiled. In addition, they were asked to wear an opaque black shawl to conceal and control for clothing. Veils and shawls were placed on the witnesses by a trained research assistant.

Witnesses were interviewed by two female experimenters. To simulate courtroom procedures, one experimenter played the role of the sympathetic defense lawyer and asked 16 information-gathering questions (e.g., "Please describe everything that you saw the woman do."). The other experimenter conducted a challenging cross-examination as the prosecutor and asked seven unanticipated questions (e.g., "The police found the man's laptop. The defendant's fingerprints were on it. How do you explain that?"). Role assignment was counterbalanced, and both experimenters were blind to the veracity of the witness's testimony. To increase the stakes associated with deception, witnesses were told that they might receive $50 if they convinced both experimenters that they were telling the truth. In fact, all of the witnesses were given the opportunity to win the incentive in a draw. At the end of each session, witnesses rated their perceptions of their interviews. These data are available from the corresponding author.

The interview was the only portion of the session that was videotaped. We excluded data from 19 witnesses because the quality of the video footage was poor or their garments (i.e., veils or shawls) were askew. In addition, one witness in the hijab condition confessed to having seen the woman steal items (i.e., she did not follow our instruction to lie about the theft). Clips were not selected based on the cues revealed by witnesses. The full range of lie-telling and truth-telling proficiency is present in the justice system (i.e., there is a continuum from poor to proficient truth-telling, and poor to proficient lie-telling). Generalizability was ensured by randomly assigning witnesses to condition and presenting all of their responses to observers, regardless of their quality. In total, we compiled clips of 10 lie-tellers and 10 truth-tellers in each veiling condition (M length per interview = 2.06 min, SD = 0.37). Demographic characteristics of the witnesses were similar across conditions. Clip order was randomly assigned and counterbalanced within each condition, producing two versions of each set of 20 videos.

Coding. All of the videos were coded for the onsets and offsets of nonverbal and verbal cues using Datavyu (i.e., a video coding and data visualization tool; Datavyu Team, 2014). In addition, research assistants coded the degree to which certain cues occurred (see Appendix). One research assistant coded all of the footage, whereas another research assistant coded 25% of each video, as recommended by Datavyu. Interrater reliability, calculated using ICCs, was high (M = .86, SD = .16).
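A sketch of one standard way to compute such interrater reliability, assuming a two-way random-effects ICC(2,1) per Shrout and Fleiss; the article does not specify which ICC form was used, and the ratings matrix below is invented for illustration:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): rows = coded segments, columns = raters (two-way random effects)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-segment variance
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater variance
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Two coders in perfect agreement yield an ICC of 1.
print(icc_2_1(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])))  # → 1.0
```

Disagreement between the coders drives the estimate below 1, which is why the reported M = .86 across cues indicates high but imperfect agreement.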

Lie detection measure. Participants were asked to indicate whether the witness in the video clip was lying or telling the truth about having seen the woman steal the items. Participants were awarded a "1" for each correct response and a "0" for each incorrect response. Then, all scores were averaged to determine overall accuracy, resulting in a score between 0 (no lie detection ability) and 1 (perfect lie detection ability).

Confidence measure. Using a scale from 0% (not at all confident) to 100% (extremely confident), participants indicated how confident they were in each lie detection judgment. Ratings were averaged across witnesses to yield overall confidence scores. Because confidence analyses were exploratory in nature, and they did not reveal significant effects, we will not report them here. Interested readers can obtain the data from the corresponding author.

Cue use measure. Participants were asked to indicate which verbal cues (e.g., amount of detail) and nonverbal cues (e.g., eye contact) they used to make their decisions, from the same list of empirically verified actual and perceived cues to deception that was coded by research assistants. For each cue, participants were given a "1" if they indicated that they had used that cue to make their decisions, and a "0" if they had not used the cue. Each variable was classified as a nonverbal or verbal cue. Then, we calculated a difference score (i.e., subtracted overall verbal from nonverbal cue use).
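A minimal sketch of the self-report scoring just described, assuming "overall" use of each cue class is the mean of its binary indicators (the cue lists and values here are illustrative, not the study's items):

```python
# 1 = participant reported using the cue, 0 = did not (hypothetical responses).
nonverbal = {"eye contact": 1, "blinking": 0, "fidgeting": 1}
verbal = {"amount of detail": 1, "inconsistencies": 0}

# Proportion of cues endorsed within each class.
nonverbal_use = sum(nonverbal.values()) / len(nonverbal)  # 2/3
verbal_use = sum(verbal.values()) / len(verbal)           # 1/2

# Positive scores indicate heavier reliance on nonverbal cues.
difference = nonverbal_use - verbal_use
print(round(difference, 2))  # → 0.17
```

Under this scoring, the negative means reported later in the Results (e.g., M = −.17 in the niqab condition) indicate heavier reliance on verbal cues.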

Experience measure. We asked participants to report their experience with lie detection on a scale from 1 (not at all) to 5 (extremely) and to describe any relevant additional experience in the area. Participants also indicated whether they had ever worked in law enforcement and had taken any courses related to lie detection.

Procedure. Given that the two variables were manipulated in the videotaped stimuli, the procedure was identical for all participants. Following random assignment to one of the three veiling conditions, we individually showed each participant 20 video clips of lying and truth-telling witnesses using a computer program (Jarvis, 2008). After each clip, the participant rendered a lie detection judgment and a confidence rating. At the end of the session, the participant indicated the cues that he or she had used to make decisions and provided demographic information. Each session lasted approximately one hour.

Results

Preliminary analyses revealed nonsignificant effects of participant gender, race, veiling, religious affiliation, and lie detection experience. All reported analyses are collapsed across those factors.

Participants' accuracy. We conducted a Veracity × Veiling Condition analysis of variance (ANOVA) on accuracy scores. The cell means are shown in Table 1. There was a significant main effect of veiling condition, F(2, 229) = 9.07, p < .001, ηp² = .07. Post hoc tests, using Tukey's honest significant difference, revealed that participants were more accurate when viewing witnesses who wore hijabs or niqabs than those who did not wear veils, p < .001, d = 0.63, 95% confidence interval (CI) [0.30, 0.96], and p = .038, d = 0.33, 95% CI [0.01, 0.65], respectively. There was no significant difference, in terms of overall accuracy, between participants in the hijab and niqab conditions, p = .175, d = 0.32, 95% CI [−0.00, 0.63]. Regardless of veiling condition, participants were more accurate when judging truth-tellers (M = .72, SD = .20) than lie-tellers (M = .38, SD = .21), F(1, 229) = 210.24, p < .001, ηp² = .48, d = 1.66, 95% CI [1.45, 1.87]. However, there was no significant interaction between veracity and veiling condition, F(2, 229) = 1.19, p = .306, ηp² = .01.

Participants' signal detection. As noted by Meissner and Kassin (2002), focusing solely on accuracy can obscure the distinction between discrimination (i.e., the ability to identify lie- and truth-tellers) and response bias (i.e., the tendency to choose a particular response). Given our interest in both of these factors, we followed Meissner and Kassin's example and conducted a signal detection analysis. All calculations were based on Wixted and Lee's (2014) formulas. Specifically, we calculated "hits" (i.e., the percentage of correct classifications of lie-tellers) and "false alarms" (i.e., the percentage of truth-tellers incorrectly classified as lie-tellers).
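The hit/false-alarm computations just described can be sketched in a few lines using the standard signal-detection formulas; this is not a reproduction of Wixted and Lee's (2014) exact calculations, and the example rates are invented:

```python
from math import exp
from scipy.stats import norm  # norm.ppf is the inverse normal CDF (z-transform)

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity: separation between lie-teller and truth-teller distributions."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def beta(hit_rate: float, fa_rate: float) -> float:
    """Likelihood-ratio response bias; values below 1 indicate a 'truth' bias."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return exp((z_fa ** 2 - z_h ** 2) / 2.0)

# e.g., an observer with 60% hits and 20% false alarms:
print(round(d_prime(0.6, 0.2), 2), round(beta(0.6, 0.2), 2))  # → 1.09 1.38
```

A d′ of 0 means the observer cannot tell lie- from truth-tellers at all, which is the benchmark the one-sample tests below use; β is compared to 1 (no bias), matching the note to Table 1 that means below 1 indicate a truth bias.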

Discrimination. We conducted a one-way ANOVA, with veiling condition as the independent variable, on discrimination (i.e., d′). Echoing the overall accuracy analysis, there was a significant effect of veiling, F(2, 229) = 8.18, p < .001, ηp² = .07 (see Table 1). Post hoc tests revealed that participants were better able to discriminate between lie-tellers and truth-tellers in hijabs than those who did not wear veils, p < .001, d = 0.63, 95% CI [0.30, 0.95]. The difference between participants in the niqab and no-veil conditions approached significance, p = .056, d = 0.38, 95% CI [0.06, 0.70]. Performance was similar when participants viewed witnesses wearing hijabs or niqabs, p = .196, d = 0.26, 95% CI [−0.05, 0.58].
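As an illustration, a one-way ANOVA of d′ scores by veiling condition can be run as follows (the d′ values below are synthetic stand-ins, not the study's data):

```python
from scipy.stats import f_oneway

# Hypothetical d' scores for a handful of observers per condition.
d_niqab = [0.6, 0.1, 0.4, 0.2, 0.3]
d_hijab = [0.5, 0.3, 0.7, 0.2, 0.4]
d_noveil = [0.0, 0.1, -0.1, 0.2, 0.0]

# Omnibus test: do the three condition means differ?
f_stat, p_value = f_oneway(d_niqab, d_hijab, d_noveil)
print(f"F(2, 12) = {f_stat:.2f}, p = {p_value:.3f}")
```

A significant omnibus F, as in the analysis above, licenses the pairwise post hoc comparisons (here, Tukey's honest significant difference) that localize which conditions differ.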

We also compared participants' d′ scores to zero (i.e., no sensitivity). Participants could discriminate between lie- and truth-telling witnesses who wore niqabs, t(77) = 5.18, p < .001, d = 0.59, 95% CI [0.34, 0.83], or hijabs, t(76) = 6.84, p < .001, d = 0.78, 95% CI [0.39, 1.17]; however, they were unable to do so in the no-veil condition, t(76) = 1.70, p = .094, d = 0.19, 95% CI [−0.97, 0.29].

Table 1
Lie Detection Performance

Veiling        Accuracy     Discrimination (d′)       Response bias (β)
condition      M (SD)       M (SD)        vs. 0       M (SD)        vs. 1

Study 1
  Niqab        .55a (.09)   .23ab (.39)     *         .94a (.31)      —
  Hijab        .58a (.10)   .34a (.44)      *         .86a (.37)      *
  No veil      .52b (.09)   .08b (.39)      —         .99a (.38)      —

Study 2
  Niqab        .57a (.12)   .30a (.52)      *         .93ab (.31)     *
  Hijab        .59a (.11)   .43a (.47)      *         .86a (.26)      *
  No veil      .51b (.13)   .03b (.39)      —         1.01b (.41)     —

Note. Means sharing a common subscript are not statistically different at α = .05 according to Tukey's honest significant difference tests. Asterisks indicate a significant difference from 0 (no sensitivity) in the d′ column and from 1 (no bias) in the β column; dashes indicate no significant difference. Response bias means significantly less than 1 indicate a truth bias.

Response bias. A one-way ANOVA revealed that participants' response biases (i.e., β) were not affected by veiling condition, F(2, 229) = 2.10, p = .126, ηp² = .02 (see Table 1). By comparing β scores to one (i.e., no response bias), we could examine participants' tendencies to label witnesses as lie-tellers or truth-tellers. Participants exhibited a truth bias toward witnesses who were wearing hijabs, t(76) = −3.23, p = .002, d = −0.37, 95% CI [−0.60, −0.14]. There was no evidence of bias in the niqab condition, t(77) = −1.66, p = .101, d = −0.19, 95% CI [−0.28, −0.09], or in the no-veil condition, t(76) = −0.30, p = .767, d = −0.03, 95% CI [−0.05, −0.02].
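The comparison of β scores to 1 can be sketched with a one-sample t test; the β values below are invented, and `popmean=1.0` encodes the no-bias benchmark:

```python
from scipy.stats import ttest_1samp

# Hypothetical per-participant beta scores in one condition (all below 1,
# i.e., a truth-leaning sample).
condition_betas = [0.81, 0.92, 0.74, 0.88, 0.95, 0.79, 0.85, 0.90]

# Test whether the mean beta differs from 1 (no response bias).
t_stat, p_value = ttest_1samp(condition_betas, popmean=1.0)
print(f"t({len(condition_betas) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```

A significantly negative t here corresponds to the truth bias reported for the hijab condition: the mean β sits reliably below 1.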

Participants' cue use. Although not a primary research question, we were interested in whether participants' lie detection performance could be explained by the cues that they reported using to render decisions. We conducted a multivariate analysis of variance (MANOVA) on participants' self-reported reliance on the eye region (i.e., blinking, eye contact, and pupil dilation) to detect deception. There was only a significant effect of veiling condition on the combined dependent variables, F(6, 456) = 2.56, p = .019; Pillai's trace = .07, ηp² = .03. Examining the univariate effects revealed that veiling did not affect the use of eye contact, F(2, 229) = 0.17, p = .848, ηp² = .00, or pupil dilation, F(2, 229) = 0.99, p = .373, ηp² = .01, as cues to deceit. In fact, eye contact was frequently cited as a cue to deception across all conditions (M = .92, SD = .27). However, blinking use did vary with veiling condition, F(2, 229) = 1.44, p = .003, ηp² = .05. Post hoc tests indicated that participants were equally likely to report that they used blinking to detect deception when the witnesses wore niqabs (M = .60, SD = .49) or did not veil (M = .60, SD = .49), p = .998, d = 0.00, 95% CI [−0.32, 0.32]. Participants stated that they relied less on witnesses' blinking in the hijab condition (M = .36, SD = .48) than in the niqab, p = .008, d = −0.49, 95% CI [−0.82, −0.17], or no-veil conditions, p = .010, d = −0.49, 95% CI [−0.82, −0.17].

Veiling also affected overall reported cue use, F(2, 229) = 14.75, p < .001, ηp² = .11. Participants were more likely to state that they based their decisions on verbal cues than nonverbal cues when witnesses wore niqabs (M = −.17, SD = .10) than when they wore hijabs (M = −.10, SD = .08), d = −0.77, 95% CI [−1.10, −0.44], or did not veil (M = −.10, SD = .08), d = −0.77, 95% CI [−1.10, −0.44], all ps < .001. There were no differences, in terms of overall cue use, when participants viewed witnesses who wore hijabs or did not veil, p = 1.000, d = 0.00, 95% CI [−0.31, 0.32].

Coded cues. We performed the same analyses on the eye region as above to allow for a comparison between participants' self-reports and the actual presence of cues to deception. Only the effect of veracity was statistically significant, F(2, 53) = 3.84, p = .028; Pillai's trace = .13, ηp² = .67. Veracity did not affect blinking, F(1, 54) = 0.06, p = .812, ηp² = .06. However, lie-tellers (M = 26.83, SD = 10.81) made eye contact less frequently than truth-tellers (M = 34.20, SD = 10.72), F(1, 54) = 7.67, p = .008, ηp² = .78, d = −0.68, 95% CI [−1.21, −0.15].

To determine whether one type of cue was more likely to occur, the data were transformed into z scores and each variable was classified as a nonverbal or verbal cue. Then, we calculated a difference score (i.e., subtracted overall verbal from nonverbal cue use). A Veracity × Veiling Condition ANOVA on the difference score data revealed a significant main effect of veiling, F(1, 54) = 8.15, p < .001, ηp² = .23. Witnesses who wore niqabs (M = .32, SD = .36) were more likely to reveal verbal (vs. nonverbal) information than witnesses who wore hijabs (M = −.26, SD = .60), p < .001, d = 1.18, 95% CI [0.49, 1.87], or did not veil (M = −.06, SD = .36), p = .031, d = 1.06, 95% CI [0.38, 1.75]. There were no differences between witnesses who wore hijabs or did not veil, p = 1.000, d = 0.00, 95% CI [−0.31, 0.32]. There were no other significant effects.
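Our reconstruction of the coded-cue difference score, assuming each cue is z-scored across witnesses and then averaged within its class (the counts and cue names below are illustrative, not the study's codings):

```python
import numpy as np

def zscore(x):
    """Standardize a cue's counts across witnesses (mean 0, SD 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=0)

# rows = four hypothetical witnesses; two nonverbal and two verbal cue counts.
fidgeting = zscore([3, 1, 0, 2])
eye_contact = zscore([30, 25, 34, 28])
detail = zscore([2, 4, 3, 3])
repetitions = zscore([1, 0, 2, 1])

# Average within each cue class, then subtract verbal from nonverbal.
nonverbal_z = np.mean([fidgeting, eye_contact], axis=0)
verbal_z = np.mean([detail, repetitions], axis=0)
diff_score = nonverbal_z - verbal_z  # one score per witness
```

Because every z-scored cue has mean zero across witnesses, the difference scores necessarily average to zero; what matters is which witnesses sit above or below that average, mirroring the positive niqab mean versus the negative hijab and no-veil means reported above.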

We also conducted an exploratory Veracity × Veiling Condition MANOVA on the overall presence of empirically verified cues to deception. Participants rated both diagnostic and nondiagnostic cues to reduce any effects of demand characteristics; only the former were analyzed here. Including known nondiagnostic cues in the analysis would have unnecessarily impeded the likelihood of uncovering significant effects. Interested readers can obtain these analyses from the corresponding author, however. In total, we analyzed 15 cues (i.e., fidgeting, inconsistencies, admitted lack of memory, length of response, negative statements, spontaneous corrections, unfriendly facial expressions, word/phrase repetitions, vocal tension, coherence/plausibility, vagueness, cooperativeness, nervousness, amount of detail, and pitch). There was no significant interaction between veracity and veiling condition, F(28, 84) = 0.88, p = .639; Pillai's trace = .45, ηp² = .23, nor a significant main effect of veiling on the combined dependent variables, F(28, 84) = 1.02, p = .449; Pillai's trace = .51, ηp² = .25. However, there was a statistically significant difference between lie-tellers and truth-tellers, F(14, 41) = 2.46, p = .013; Pillai's trace = .46, ηp² = .46. A closer examination of the univariate effects revealed that lie-tellers spoke in a higher pitch, were less cooperative, and provided accounts that were less coherent than truth-tellers (see Table 2).

Discussion

As predicted, participants were more accurate when witnesses wore niqabs than when witnesses did not wear veils. They did not, however, exhibit different response biases toward the former group. Our sample consisted of participants from an ethnically and religiously diverse student population at a Canadian university. Perhaps the characteristics of the student body, exposure to a cross-cultural curriculum, and/or social desirability concerns could account for the lack of response bias toward witnesses wearing veils. Indeed, the demographic composition and diversity of religious life in Canada, as well as the country's historical endorsement of a cultural mosaic approach to multiculturalism, might have made biased decision-making unlikely. Bias may be more prevalent in geographical regions where the niqab has been more publicly opposed and expressing a negative response bias would be more socially acceptable.

The lack of bias in the current study could also be attributed to the context in which data collection took place. R. v. N. S. (2010), the Canadian case in which a witness was asked to remove her veil in court, was tried within the university's catchment area. There were numerous appeals and a Supreme Court trial that took place in the midst of data collection (see N. S. v. Her Majesty the Queen et al., 2012). Simultaneously, a neighboring provincial government drafted Bill 60 (2013), which limited State employees' abilities to wear overt religious symbols and conceal their faces; in essence, it would have severely restricted wearing of the niqab. In response, the university's local hospital mounted a recruitment campaign—including ads and signs on city streets—depicting a medical professional wearing a hijab next to the slogan, "We don't care what's on your head. We care what's in it" (Mok, 2013). Thus, participants were exposed to explicit messages from local authorities that veiling should not bias decision-making, in addition to significant public debate surrounding the permissibility of wearing niqabs in court. That exposure might have affected any preexisting response biases.

Study 2

Study 2 served as both a direct replication (in Canada) and an extension to two other countries (i.e., the Netherlands and the United Kingdom). We chose the Netherlands because its government recently came very close to banning the niqab (Government of the Netherlands, 2012). We also sought to replicate the findings in the United Kingdom because a ruling on the permissibility of wearing a niqab in British courts was imminent. Indeed, shortly after data collection began, a judge ruled that a woman must unveil in court (The Queen v. D(R), 2013). A comparison between these locations and Canada would further test the generalizability of the results.

As in Study 1, we expected that participants would be better able to detect deception in witnesses who wore niqabs than in witnesses who did not wear veils. We hypothesized that response bias would vary by region. Specifically, participants in Canada were expected to exhibit similar response tendencies regardless of veiling condition, replicating the findings from the first study. We posited that participants in the Netherlands would exhibit the originally hypothesized pattern of response due to the government's stance on veiling. Dutch participants were expected to be more likely to indicate that women were lying when they were wearing niqabs than when they did not wear veils. We did not have set hypotheses about the nature of response bias in the U.K. sample.

Method

Participants. Two hundred and ninety-one students at universities in Canada, the United Kingdom, and the Netherlands (201 females, 90 males; M age = 21.11 years, SD = 4.33) completed the study in exchange for extra credit or a small honorarium. Participants self-identified as belonging to the following ethnic groups: Arab/West Asian (n = 19), Black (n = 20), Chinese (n = 7), White (n = 194), Hispanic (n = 1), Korean (n = 1), Latin American (n = 1), South Asian (n = 26), South East Asian (n = 13), other (n = 9). The majority of participants did not report wearing any type of veil (n = 286) or having a religious affiliation (n = 171).

Study design. We employed a 2 (veracity: lie-tellers vs. truth-tellers) × 3 (country: Canada vs. the Netherlands vs. the United Kingdom) × 3 (veiling condition: niqab vs. hijab vs. no veil) mixed-factors design. As in Study 1, veiling condition was a between-participants factor, whereas veracity was a within-participants factor.

Materials and procedure. The procedure was similar to Study 1, with a few key differences. Participants in the United Kingdom and the Netherlands were not asked to provide the cues that they used to render their lie detection decisions. In addition, we assessed the English proficiency of participants in the Netherlands, using the criteria established by the Centre for Canadian Language Benchmarks (2010), to ensure that they could understand the witnesses' accounts. Dutch participants were asked to self-report their overall English proficiency on a 12-point scale (Basic = 1–3; Intermediate = 4–8; Advanced = 9–12). Average proficiency was on the boundary between Intermediate and Advanced (M = 8.87, SD = 1.88). At the conclusion of each session, participants listened to two messages that were read aloud in English. After each message, they were asked three multiple-choice questions about its content. Each correct answer was awarded a "1," whereas each incorrect answer was awarded a "0." Thus, the highest possible score was 6 (out of 6 questions). Participants' objective language comprehension was extremely high (M = 5.02, SD = 0.95) and would be considered "Advanced" according to the Canadian Language Benchmarks. Our ANOVA revealed that there was a similar distribution of English comprehension scores across veiling conditions, F(1, 96) = 1.94, p = .150, ηp² = .04.

Table 2
Mean Nonverbal and Verbal Behaviors by Veracity

Behavior                        Lie-tellers    Truth-tellers   d [CI]               p
                                M (SD)         M (SD)

Unfriendly facial expressions   0.90 (1.24)    0.70 (1.75)     .13 [−.38, .65]      .600
Fidgeting                       1.47 (2.64)    0.60 (1.25)     .42 [.07, .52]       .111
Overall nervousness             3.87 (1.01)    3.73 (0.83)     .15 [.06, .52]       .557
Word or phrase repetitions      0.67 (1.12)    0.53 (1.33)     .11 [.06, .52]       .680
Pitch                           3.13 (0.35)    2.77 (0.54)     .79 [.25, 1.33]      .001
Vocal tension                   2.07 (1.02)    1.93 (0.87)     .15 [−.37, .67]      .559
Length of responses             23.60 (0.77)   23.43 (0.94)    .20 [−.32, .71]      .467
Coherence                       4.73 (0.52)    5.00 (0.00)     −.73 [−1.27, −.20]   .008
Amount of detail                2.60 (1.10)    2.70 (0.92)     −.09 [−.61, .41]     .706
Spontaneous corrections         1.40 (1.38)    1.23 (1.33)     .13 [−.39, .64]      .633
Admitted lack of memory         0.27 (0.83)    0.07 (0.37)     .31 [−.21, .83]      .236
Inconsistencies                 0.07 (0.25)    0.00 (0.00)     .40 [−.12, .92]      .139
Vagueness                       2.63 (1.27)    2.90 (1.35)     −.21 [−.72, .31]     .437
Negative statements             0.00 (0.00)    0.00 (0.00)
Cooperativeness                 4.87 (0.35)    5.00 (0.00)     −.53 [−1.05, −.00]   .044

Note. CI = confidence interval.

Results

There were nonsignificant effects of race, gender, veiling, religious affiliation, and lie detection experience. Thus, we collapsed across those variables when conducting the following analyses.

Participants' accuracy. A Veracity × Country × Veiling Condition ANOVA indicated that there was a significant main effect of veiling condition, F(2, 281) = 13.28, p < .001, ηp² = .09 (see Table 1). Post hoc tests revealed that participants were better able to detect the deception of women who wore niqabs or hijabs than of those who did not veil, p < .001, d = 0.48, 95% CI [0.19, 0.77], and p < .001, d = 0.66, 95% CI [0.37, 0.95], respectively. Performance in the niqab and hijab conditions was similar, p = .392, d = −0.17, 95% CI [−0.46, 0.11]. In addition, participants were more accurate when judging truth-tellers (M = .71, SD = .22) than lie-tellers (M = .39, SD = .20), F(1, 281) = 225.14, p < .001, ηp² = .45, d = 1.52, 95% CI [0.18, 1.34]. There was no significant main effect of country, F(2, 281) = 1.44, p = .240, ηp² = .01. Interactions between veracity and veiling condition, F(2, 281) = 0.13, p = .878, ηp² = .00, veracity and country, F(2, 281) = 2.70, p = .069, ηp² = .02, country and veiling condition, F(2, 281) = 0.88, p = .475, ηp² = .01, and all three variables, F(4, 281) = 1.06, p = .376, ηp² = .02, were also nonsignificant.

Participants' signal detection. As in Study 1, we used a signal detection analysis to examine the independent contributions of discrimination and bias.

Discrimination. We performed a Country × Veiling Condition ANOVA on discrimination (i.e., d′). Again, there was a significant effect of veiling condition, F(2, 281) = 14.37, p < .001, ηp² = .09 (see Table 1). Post hoc tests indicated that participants were better able to discriminate between lie-tellers and truth-tellers in niqabs and hijabs than those who did not wear veils, p < .001, d = 0.59, 95% CI [0.29, 0.88] and p < .001, d = 0.96, 95% CI [0.63, 1.22], respectively. Participants performed similarly when viewing witnesses who were wearing hijabs or niqabs, p = .232, d = −0.26, 95% CI [−0.54, −0.02]. There was no significant main effect of country, F(2, 281) = 0.86, p = .424, ηp² = .01, or interaction between the variables, F(4, 281) = 0.93, p = .444, ηp² = .01.

One-sample t tests, comparing discrimination scores to zero (i.e., no sensitivity), revealed that participants could discriminate between lie- and truth-telling witnesses who wore niqabs, t(95) = 5.69, p < .001, d = 0.58, 95% CI [0.36, 0.80], or hijabs, t(96) = 8.98, p < .001, d = 0.91, 95% CI [0.46, 1.37]. Participants could not discriminate between lie- and truth-tellers who did not wear veils beyond chance levels, however, t(96) = 0.45, p = .652, d = 0.46, 95% CI [0.02, 0.07].
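The chance-level comparison above is a one-sample t test of per-participant d′ scores against zero. The sketch below shows the shape of that test; the d′ scores are simulated (one group with mean sensitivity 0.6, one at chance), so none of the numbers correspond to the study's data.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)

# Simulated per-participant d' scores for two hypothetical conditions.
d_prime_veiled = rng.normal(loc=0.6, scale=1.0, size=96)    # discriminates
d_prime_unveiled = rng.normal(loc=0.0, scale=1.0, size=97)  # at chance

# Test each group's mean sensitivity against zero (no discrimination).
t_veiled, p_veiled = ttest_1samp(d_prime_veiled, popmean=0.0)
t_unveiled, p_unveiled = ttest_1samp(d_prime_unveiled, popmean=0.0)
```

A significant positive result for a condition indicates that observers in that condition, on average, discriminated lies from truths better than guessing.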

Response bias. According to a Country × Veiling Condition ANOVA, participants’ biases (i.e., β) were affected by veiling condition, F(2, 281) = 5.03, p = .007, ηp² = .04 (see Table 1). Post hoc tests indicated that participants who viewed witnesses in hijabs displayed a different pattern of response bias than those who saw witnesses who did not veil, p = .005, d = −0.44, 95% CI [−0.72, −0.15]. Participants who viewed witnesses in niqabs did not differ from those who saw witnesses in hijabs, p = .335, d = 0.24, 95% CI [−0.04, 0.53], or without veils, p = .198, d = −0.22, 95% CI [−0.50, 0.06]. There was no significant main effect of country, F(2, 281) = 0.97, p = .382, ηp² = .01, or interaction between the variables, F(4, 281) = 0.44, p = .778, ηp² = .01.

We compared participants’ β scores to one (i.e., no bias) to examine their tendencies to label witnesses as lie-tellers or truth-tellers within each veiling condition. Participants exhibited a truth bias toward witnesses in niqabs, t(95) = −2.27, p = .025, d = −0.23, 95% CI [−0.43, −0.02], and hijabs, t(97) = −5.21, p < .001, d = −0.53, 95% CI [−0.79, −0.26]. They did not exhibit response biases when witnesses did not wear veils, t(96) = 0.29, p = .775, d = 0.03, 95% CI [0.01, 0.04].

Discussion

We partially replicated Study 1’s primary findings. Participants were more accurate at detecting the deception of witnesses who wore niqabs or hijabs than that of witnesses who did not wear veils. There was no evidence of a negative response bias toward women who veiled in any country. Rather, participants exhibited a tendency to indicate that women who wore niqabs or hijabs were telling the truth.

General Discussion

Contrary to the assumptions underlying the court decisions cited earlier, lie detection was not hampered by veiling across two studies. In fact, observers were more accurate at detecting deception in witnesses who wore niqabs or hijabs than in those who did not veil. Discrimination between lie- and truth-tellers was no better than guessing in the latter group, replicating previous findings (Bond & DePaulo, 2006). It was only when witnesses wore veils (i.e., hijabs or niqabs) that observers performed above chance levels. Thus, veiling actually improved lie detection (see Table 1).

It is unlikely that these findings were simply false positives. Simmons, Nelson, and Simonsohn (2011) have identified four researcher degrees of freedom that can increase Type I error: disclosing only certain subsets of conditions or dependent variables, employing covariates, and altering the sample size. We did not engage in any of those practices. All conditions and dependent variables were reported, and covariates were not used. The sample sizes differed between the two studies, but the difference was not due to an attempt to manipulate significance. Rather, because this work was the first of its kind, we had no basis upon which to predict effect sizes for use in an a priori power analysis for Study 1. We set a healthy sample size (i.e., 75 participants per veiling condition) and ceased data collection when our target was reached. Due to the nature of our university’s participant pool (i.e., testing sessions were posted online at least one week in advance and participants could modify appointments up until the beginning of each session), our final sample size was slightly above what was specified. A post hoc power analysis of the discrimination findings, using G*Power (Faul, Erdfelder, Lang, & Buchner, 2007), revealed that the study was adequately powered (power = .97). By using the effect size from the discrimination findings, we were able to estimate the required sample to produce statistical power at the same level in Study 2 (i.e., N = 290); we terminated data collection when it was reached. Thus, there is no reason to believe that “p-hacking” was responsible for our significant lie detection results.
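The post hoc power computation described above was done with G*Power; it can be approximated directly from the noncentral F distribution. This is a rough sketch, assuming a one-way layout with k = 3 veiling conditions and the standard conversion from partial eta-squared to Cohen's f; it approximates, rather than reproduces, the G*Power calculation.

```python
from scipy.stats import f as f_dist, ncf

def anova_power(eta_p_sq, n_total, k_groups, alpha=0.05):
    """Approximate post hoc power for a one-way ANOVA effect.

    Converts partial eta-squared to Cohen's f-squared, builds the
    noncentrality parameter, and evaluates the noncentral F
    distribution at the critical value of the central F.
    """
    f2 = eta_p_sq / (1.0 - eta_p_sq)   # Cohen's f squared
    df1 = k_groups - 1
    df2 = n_total - k_groups
    nc = f2 * n_total                  # noncentrality parameter
    crit = f_dist.ppf(1.0 - alpha, df1, df2)
    return float(1.0 - ncf.cdf(crit, df1, df2, nc))

# Example: the Study 2 veiling effect size (eta_p^2 = .09) with N = 290.
power = anova_power(eta_p_sq=0.09, n_total=290, k_groups=3)
```

With an effect of this size and roughly 290 participants across three conditions, the achieved power comes out well above the conventional .80 threshold, consistent with the authors' claim of adequate power.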

Increases in lie detection accuracy associated with veiling might be attributed to the added emphasis on witnesses’ eyes. Participants reported that they were no more likely to use the eye region to detect deceit when witnesses wore niqabs than when they did not veil. Eye-tracking data suggest that, when forming social impressions, people spend more time looking at the eyes than any other feature (Janik, Wellens, Goldberg, & Dell’Osso, 1978). People’s eyes, and their perceived link to deception, might be so salient that highlighting them with a niqab was superfluous. Indeed, over 90% of the participants in our study reported using eye contact as a cue to deceit whether the witnesses veiled or not. However, self-report should be treated with a degree of caution (e.g., Nisbett & Wilson, 1977). In our study, lie-tellers were more likely to avert their gaze than truth-tellers; veils should have highlighted this difference. Improvements in lie detection performance suggest that participants might have attended to, or interpreted, eye gaze information more accurately in the veiling conditions.

Deception detection strategies were also affected by the amount of visual information that was available. Compared to the other conditions, witnesses in niqabs revealed significantly more verbal than nonverbal cues. Appropriately, participants were more likely to base their decisions on verbal cues than nonverbal cues when viewing witnesses from this group. During several testing sessions, participants did not watch all of the videos (i.e., they turned away from the screens and listened to the testimony). However, this practice only seemed to occur when witnesses wore niqabs. Future research should examine the frequency of self-selected minimization of information (e.g., using eye tracking). Establishing that observers watched the witnesses would then allow researchers to explore the specific mechanisms underlying decision-making (e.g., correlate deception cues with deception judgments using a lens model analysis; see Hartwig & Bond, 2011).

Despite not being explicitly discussed by the courts in the aforementioned cases, we considered whether response bias affected decisions related to veiled witnesses. The decisions of judges and other members of the justice system are typically guided by principles related to fair treatment, such as those laid out in the Equal Treatment Bench Book in the United Kingdom (Judicial College, 2013). The same might not be true of jurors. Tending to (dis)believe a veiled witness due to preexisting stereotypes would severely undermine court proceedings. It was, thus, encouraging that participants were not negatively biased against witnesses who wore niqabs, even in the absence of explicit instruction. These findings replicated previous work, in which mock jurors were similarly unaffected when a witness was described as having worn a burqa (Maeder et al., 2012).

We cannot completely discount the possibility that findings were due to social desirability, however, because participants were not blind to veiling condition. Of course, if participants altered their responses systematically, that could not explain the above-chance discrimination between lie- and truth-tellers in the veiling conditions (i.e., response bias and discrimination are independent; Green & Swets, 1966). Only response biases should have been affected. Yet, were our findings merely a reflection of socially acceptable norms, then we might have expected differences in response biases between the countries; participants in the Netherlands (a country that had considered banning veils; e.g., Government of the Netherlands, 2012) might have been less positive toward witnesses who wore niqabs, for example. Instead, participants in the Netherlands, Canada, and the United Kingdom viewed veiled witnesses similarly. Judges and jurors always know whether a witness is wearing a niqab while testifying and, presumably, would exhibit the same tendencies as participants in our study. Indeed, meta-analyses have failed to find consistent differences in lie detection performance between students, community members, and justice officials (Aamodt & Custer, 2006; Bond & DePaulo, 2006).

An additional limitation of this work is that we randomly assigned our witnesses to lie and/or wear a veil. This practice was important from a scientific standpoint because it helped to ensure initial equivalence between the groups. Experimentally manipulating lying (vs. inducing volitional, naturalistic lies) should not have significantly affected the results and is in keeping with previous research on lie detection (see Vrij, 2008, for a review). Witnesses thought that the study was involving, and they were motivated to be believed: the deception paradigm invoked experimental realism. However, because we randomly assigned witnesses to veiling condition, we might also have obscured natural differences between the groups (Ammar & Leach, 2013). For example, in Ammar and Leach’s (2013) study, the women who wore niqabs were less likely to be native English speakers than women who did not veil. Emerging work suggests that laypersons and police officers are not only less able to discriminate between lie- and truth-tellers who are speaking in a non-native language, but also view them less positively than native speakers (Leach & Da Silva, 2013). It is unknown how natural variations in veiled witnesses’ language proficiencies would have affected our findings. In the future, researchers might wish to examine people’s assessments of actual niqab-wearers to address this issue.

The two studies reported here provide unique tests of the behavioral assumptions underlying important court decisions in the United States, United Kingdom, and Canada. The essence of these decisions is that women must remove their niqabs while testifying to ensure the fairness of court proceedings (e.g., The Queen v. D(R), 2013). Although preliminary, in the sense that we have reported only two empirical studies addressing these assumptions, the data consistently suggested that minimizing visual information actually improved participants’ lie detection performance. It is noteworthy that witnesses themselves believed that they would be more accurately judged when wearing niqabs. Thus, seeing a person’s entire face does not appear to be necessary for lie detection; banning the niqab because it interferes with one’s ability to determine whether the speaker is lying or telling the truth is not supported by scientific evidence. In addition to the potential policy implications concerning the wearing of a niqab or hijab on the stand, the studies reinforce the value that behavioral science data have for informing judiciaries.

References

Aamodt, M. G., & Custer, H. (2006). Who can best catch a liar? A meta-analysis of individual differences in detecting deception. Forensic Examiner, 15, 6–11.

Akehurst, L., Kohnken, G., Vrij, A., & Bull, R. (1996). Lay persons’ and police officers’ beliefs regarding deceptive behavior. Applied Cognitive Psychology, 10, 461–471. http://dx.doi.org/10.1002/(SICI)1099-0720(199612)10:6<461::AID-ACP413>3.0.CO;2-2

Albrechtsen, J. S., Meissner, C. A., & Susa, K. J. (2009). Can intuition improve deception detection performance? Journal of Experimental Social Psychology, 45, 1052–1055. http://dx.doi.org/10.1016/j.jesp.2009.05.017

al-Ghazali, M. (2008). Lisat min al-islam [Not from Islam]. Cairo, Egypt: Dar al-shorouq.

Ambady, N. (2010). The perils of pondering: Intuition and thin slice judgments. Psychological Inquiry, 21, 271–278. http://dx.doi.org/10.1080/1047840X.2010.524882

Ambady, N., Bernieri, F. J., & Richeson, J. A. (2000). Toward a histology of social behavior: Judgmental accuracy from thin slices of the behavioral stream. In M. Zanna (Ed.), Advances in experimental social psychology (Vol. 32, pp. 201–251). San Diego, CA: Academic Press.

Ambady, N., & Rosenthal, R. (1992). Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. Psychological Bulletin, 111, 256–274. http://dx.doi.org/10.1037/0033-2909.111.2.256

Ammar, N., & Leach, A.-M. (2013, November). Veiling in the courtroom: Muslim women’s credibility and perceptions. Paper presented at the Qatar Foundation Annual Research Conference, Doha, Qatar.

Baron-Cohen, S., Wheelwright, S., & Jolliffe, T. (1997). Is there a “language of the eyes”? Evidence from normal adults, adults with autism or Asperger syndrome. Visual Cognition, 4, 311–331. http://dx.doi.org/10.1080/713756761

Behiery, V. (2013). Bans on Muslim facial veiling in Europe and Canada: A cultural history of vision perspective. Social Identities: Journal for the Study of Race, Nation and Culture, 19, 775–793.

Bill 60: Charter Affirming the Values of State Secularism and Religious Neutrality and of Equality between Women and Men, and Providing a Framework for Accommodation Requests, Quebec National Assembly, 40th Legislature. (2013).

Blais, C., Roy, C., Fiset, D., Arguin, M., & Gosselin, F. (2012). The eyes are not the window to basic emotions. Neuropsychologia, 50, 2830–2838. http://dx.doi.org/10.1016/j.neuropsychologia.2012.08.010

Bond, C. F., Jr., & Atoum, A. O. (2000). International deception. Personality and Social Psychology Bulletin, 26, 385–395. http://dx.doi.org/10.1177/0146167200265010

Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10, 214–234. http://dx.doi.org/10.1207/s15327957pspr1003_2

Bond, C. F., Jr., Omar, A., Mahmoud, A., & Bonser, R. N. (1990). Lie detection across cultures. Journal of Nonverbal Behavior, 14, 189–204. http://dx.doi.org/10.1007/BF00996226

Burgoon, J. K., Blair, J. P., & Strom, R. E. (2008). Cognitive biases and nonverbal cue availability in detecting deception. Human Communication Research, 34, 572–599. http://dx.doi.org/10.1111/j.1468-2958.2008.00333.x

Centre for Canadian Language Benchmarks. (2010). Canadian language benchmarks: English as a second language for adults. Retrieved August 25, 2011, from http://www.cic.gc.ca/english/pdf/pub/language-benchmarks.pdf

Clarke, L. (2013). Women in niqab speak: A study of niqab in Canada. Toronto, Ontario, Canada: Canadian Council of Muslim Women.

Datavyu Team. (2014). Datavyu: A video coding tool. New York, NY: Databrary Project, New York University. Retrieved from http://datavyu.org

DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129, 74–118. http://dx.doi.org/10.1037/0033-2909.129.1.74

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191. http://dx.doi.org/10.3758/BF03193146

Fischer, P., Greitemeyer, T., & Kastenmuller, A. (2007). What do we think about Muslims? The validity of Westerners’ implicit theories about the associations between Muslims’ religiosity, religious identity, aggression potential, and attitudes toward terrorism. Group Processes & Intergroup Relations, 10, 373–382. http://dx.doi.org/10.1177/1368430207078697

The Global Deception Research Team. (2006). A world of lies. Journal of Cross-Cultural Psychology, 37, 60–74. http://dx.doi.org/10.1177/0022022105282295

Government of the Netherlands. (2012, January 27). Government approves ban on clothing that covers the face. Retrieved from http://www.government.nl/news/2012/01/27/government-approves-ban-on-clothing-that-covers-the-face.html

Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York, NY: Wiley.

Hartwig, M., & Bond, C. F., Jr. (2011). Why do lie-catchers fail? A lens model meta-analysis of human lie judgments. Psychological Bulletin, 137, 643–659. http://dx.doi.org/10.1037/a0023589

Hoodfar, H. (1997). The veil in their minds and on our heads: Veiling practices and Muslim women. In L. Lowe & D. Lloyd (Eds.), The politics of culture in the shadow of capital (pp. 248–279). Durham, NC: Duke University Press.

Janik, S. W., Wellens, A. R., Goldberg, M. L., & Dell’Osso, L. F. (1978). Eyes as the center of focus in the visual examination of human faces. Perceptual and Motor Skills, 47, 857–858. http://dx.doi.org/10.2466/pms.1978.47.3.857

Jarvis, B. G. (2008). MediaLab (Version 2008.1.33e) [Computer software and manual]. New York, NY: Empirisoft Corporation.

Judicial College. (2013). Equal treatment bench book. London, England: Judicial College.

Khiabany, G., & Williamson, M. (2008). Veiled bodies—Naked racism: Culture, politics and race in the Sun. Race & Class, 50, 69–88. http://dx.doi.org/10.1177/0306396808096394

Leach, A. M., & Da Silva, C. S. (2013). Language proficiency and police officers’ lie detection performance. Journal of Police and Criminal Psychology, 28, 48–53. http://dx.doi.org/10.1007/s11896-012-9109-3

Leal, S., & Vrij, A. (2008). Blinking during and after lying. Journal of Nonverbal Behavior, 21, 87–102.

Loi n° 2010-1192 du 11 octobre 2010 interdisant la dissimulation du visage dans l’espace public [Act No. 2010-1192 of 11 October 2010 prohibiting the concealing of the face in public], J. O. 0237, October 12, 18344 (2010).

Loi visant à interdire le port de tout vêtement cachant totalement ou de manière principale le visage [Act prohibiting the wearing of any clothing that totally or principally hides the face], Moniteur, July 13 (2011).

Maeder, E. M., Dempsey, J., & Pozzulo, J. (2012). Behind the veil of juror decision-making: Testing the effect of Muslim veils and defendant race in the courtroom. Criminal Justice and Behavior, 39, 666–678. http://dx.doi.org/10.1177/0093854812436478

Manson v. Brathwaite, 432 U.S. 98 (1977).

Meissner, C. A., & Kassin, S. M. (2002). “He’s guilty!”: Investigator bias in judgments of truth and deception. Law and Human Behavior, 26, 469–480. http://dx.doi.org/10.1023/A:1020278620751

Mistry, H., Bhugra, D., Chaleby, K., Khan, F., & Sauer, J. (2009). Veiled communication: Is uncovering necessary for psychiatric assessment? Transcultural Psychiatry, 46, 642–650. http://dx.doi.org/10.1177/1363461509351366

Mok, T. (2013, September 12). “We don’t care what’s on your head”: Ontario hospital launches ad aimed at Quebec medical students, values charter. The National Post. Retrieved from http://news.nationalpost.com/2013/09/12/we-dont-care-whats-on-your-head-toronto-area-hospital-ad-aims-at-quebec-medical-students-values-charter/

Muhammad v. Enterprise Rent-A-Car, No. 06-41896-GC (31st D. Mich, Oct. 11, 2006).

Murphy, K. R., & Balzer, W. K. (1986). Systematic distortions in memory-based behavior ratings and performance evaluations: Consequences for rating accuracy. Journal of Applied Psychology, 71, 39–44. http://dx.doi.org/10.1037/0021-9010.71.1.39

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259. http://dx.doi.org/10.1037/0033-295X.84.3.231

N. S. v. Her Majesty the Queen, et al., No. 33989 (SCC 2012).

R. v. N. S., 670 (ONCA 2010).

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. http://dx.doi.org/10.1177/0956797611417632

Street, C. N. H., & Richardson, D. C. (2015, December). The focal account: Indirect lie detection need not access unconscious, implicit knowledge. Journal of Experimental Psychology: Applied, 21, 342–355. http://dx.doi.org/10.1037/xap0000058

Syed, Z. (2010, June 21). Banning the niqab. Islamic Insights. Retrieved from http://www.islamicinsights.com

Taylor, P., Larner, S., Conchie, S., & Van der Zee, S. (2014). Cross-cultural deception detection. In P. A. Granhag, A. Vrij, & B. Verschuere (Eds.), Deception detection: Current challenges and cognitive approaches (pp. 175–202). Chichester, England: Wiley Blackwell.

The Queen v. D(R) [2013] EW Misc 13 (CC).

Troje, N. F. (2002). Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of Vision, 2, 371–387. http://dx.doi.org/10.1167/2.5.2

Vakulenko, A. (2007). “Islamic headscarves” and the European Convention on Human Rights: An intersectional perspective. Social & Legal Studies, 16, 183–199. http://dx.doi.org/10.1177/0964663907076527

Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities (2nd ed.). New York, NY: Wiley.

Vrij, A., & Winkel, F. W. (1991). Cultural patterns in Dutch and Surinam nonverbal behavior: An analysis of simulated police/citizen encounters. Journal of Nonverbal Behavior, 15, 169–184. http://dx.doi.org/10.1007/BF01672219

Wells, G. L., & Quinlivan, D. S. (2009). Suggestive eyewitness identification procedures and the Supreme Court’s reliability test in light of eyewitness science: 30 years later. Law and Human Behavior, 33, 1–24. http://dx.doi.org/10.1007/s10979-008-9130-3

Wixted, J., & Lee, K. (2014). Signal detection theory. http://dx.doi.org/10.1002/9781118445112.stat06743

Zuckerman, M., Koestner, R., & Colella, M. J. (1985). Learning to detect deception from three communication channels. Journal of Nonverbal Behavior, 9, 188–194. http://dx.doi.org/10.1007/BF01000739

Appendix

Nonverbal and Verbal Cues

The nonverbal cues were eye contact, blinking, pupil dilation, smiling, covering mouth and eyes, facial expressiveness, unfriendly facial expressions, shifts in posture, self-manipulations (e.g., self-touching or scratching), leg and foot movements, fidgeting, and use of hand gestures to illustrate speech. Vocal cues included stuttering, grammatical errors, repetitions of words or phrases, voice pitch, vocal tension, rate of speech, speech hesitations, number of pauses, length of pauses, coherence of account, length of answers, amount of detail, inclusion of unusual details, spontaneous corrections, admitting lack of memory, inconsistent information, generalizations, vagueness, complaints, cooperativeness, and overall nervousness.

When coding cues, research assistants counted the frequency of the majority of the nonverbal and verbal behaviors listed above. Cues that were more difficult to quantify in that manner (vocal tension, coherence, vagueness, cooperativeness, nervousness, facial expressiveness, generalizations, rate of speech, and amount of detail) were rated on a scale from 1 to 5. In addition, pupil dilation was not coded by research assistants because it was not sufficiently visible in all videos.

All of the nonverbal and verbal cues were presented to participants as part of the Cue Use Measure. Participants indicated that they had used the cue by selecting the box next to the word or phrase.

Received March 16, 2015
Revision received January 12, 2016
Accepted February 22, 2016
