People are generally poor at detecting lies. Research has shown that even experienced, trained police officers, when they base their judgement on verbal and non-verbal behaviour, correctly identify a liar only about one time in two, a rate almost no better than chance1.
Nevertheless, witness analysis plays a very important role in the criminal justice system. To help detect deception concealed in a suspect's or eyewitness's narrative, several methods have been developed to assess the reliability of verbal indicators.
These methods aim to discriminate between true and false statements by focusing not on the source of the narrative (i.e. the person making the statement) but on the amount of detail and the quality of the account that person provides. One such method is Scientific Content Analysis (SCAN)2. This methodology was developed by the Israeli former polygraph examiner Avinoam Sapir, who argued that truthful statements differ from those provided by a liar in the specific type of language used. Based on these differences, Sapir drew up a list of criteria intended to help distinguish true from false statements. However, most of the criteria were designed to 'unearth' phrases typically found in deceptive statements rather than to assess the credibility of the verbal account objectively. Four studies have examined SCAN and found no solid evidence of its discriminative value3.
In addition, SCAN has been shown to have low inter-observer reliability: when evaluating the same statement, users differ both in how they apply the tool and in how they interpret, and misapply, its criteria4.
These characteristics of SCAN raised the suspicion that it might be sensitive to 'contextual bias' (extra-domain information) or to the expectations of the observer using it5. By 'contextual bias' and 'observer expectation' we mean a set of phenomena related to observers' suggestibility, whether to personal biases or to extra information received from third parties, that undermine the objectivity of their assessment6. A simple example of this phenomenon is self-confirmation bias, the tendency to seek (deliberately or unconsciously) only evidence that confirms our initial belief while ignoring objective evidence that might disconfirm it7. Beyond acting as a filter on judgement, confirmation bias also includes the tendency to treat information that supports one's beliefs as more important than information that contradicts what one wants to prove8. In short, it is an implicit selectivity in the acquisition and use of evidence9. In the forensic context this plays a key role: Findley and Scott, for instance, provide an extensive overview of how such biases contribute to miscarriages of justice10.
Hill, Memon and McGeorge examined how providing extra-domain information can influence observers' evaluation of testimony. In their study, participants were asked to formulate interview questions to establish whether an individual was credible, after the experimenters had led them to believe that the suspect was probably guilty or probably innocent (either by planting the doubt that the suspect was a habitual liar, or by vouching for the witness's sincerity and accuracy). Participants influenced by the experimenters' incriminating comments about the suspect gave higher guilt ratings than those to whom the subject had been described as trustworthy and sincere11. According to Hill et al. (2008), the experiment demonstrated the power of confirmation bias: when they went on to question the subject, participants sought information that supported the expectation the experimenters had planted in them12.
These studies show how relevant contextual bias is within the criminal justice system and how, once it has crept in, it can negatively influence experts' judgement and inhibit their ability to consider alternative scenarios13.
On the basis of this reasoning, some authors14 conducted two experiments to test three verbal credibility assessment tools and establish whether they are immune to bias and prejudice during use: the aforementioned Scientific Content Analysis (SCAN), Criteria-Based Content Analysis (CBCA) and Reality Monitoring (RM).
The CBCA was originally developed in Germany to analyse the reliability of testimony from alleged child victims of sexual abuse. Undeutsch, the originator of the method, argued that the statements of genuinely abused children differ in the quantity and quality of detail from false statements, whether invented on the spot by 'lying' children or drawn from fantasy or confabulation15. Based on these differences, he developed 24 criteria for assessing the credibility of children's testimony. Steller and Köhnken refined these criteria, and their revised version is the one currently in use16. The CBCA is one step in a broader method for assessing the credibility of narrative accounts, Statement Validity Analysis (SVA). Although the CBCA was developed to assess children's testimony, numerous studies have demonstrated its usefulness with adult victims and eyewitnesses of criminal acts as well17. A qualitative review by Vrij found that the accuracy of the CBCA ranged between 55% and 90%, with an average accuracy rate of 70%18.
Another verbal report analysis tool is Sporer's Reality Monitoring (RM)19. RM refers to the cognitive operations a person relies on when deciding whether a recollection derives from a genuinely experienced event or from imagination. The rationale behind the RM method is that memories of real events differ in quality and content from fabricated, insincere accounts20. The instrument consists of eight criteria, which examine content details concerning the realism of the narrative, details of space and time, the sensory information reported, and the clarity and vividness of the account.
The experiments with SCAN, CBCA and RM tested the extent to which these methods are sensitive to contextual bias. Four witness narratives were presented, and the context was manipulated to create a priori suspicions of guilt or innocence in the observers through contextual information: some were told that the person giving the statement was a convicted felon who had previously committed perjury; others were told that the testimony had already been confirmed by other eyewitnesses. The purpose of these experiments was to assess whether observers trained in the use of the techniques could be influenced by this information when judging the narrative account as credible or untrue.
The first experiment involved 32 police officers trained in the use of SCAN, who were compared with a control group.
In the second experiment, the observers were 128 students who analysed the narratives using SCAN, CBCA or RM.
The results showed that all three methods were equally vulnerable to contextual bias.
This research highlights an important methodological shortcoming of these instruments, one that is often underreported in the scientific literature: when applied to victim or eyewitness statements, a verbal credibility assessment method should be used by professionals with no prior information about the case, so that their judgement is not distorted and the objectivity and rigour of the method are preserved.

The results also reveal how misleading it is to analyse only one channel of communication when assessing a person's credibility. It is important to use a method that allows every channel of expression to be assessed simultaneously, a model aimed at the complete analysis of the interlocutor's verbal and non-verbal communication. Consider how much information our face conveys through emotional expressions and micro-expressions, and how much our body language says about our attitudes and intentions. Those who work with emotions know how essential it is also to detect variations in the voice, which are closely connected with the individual's central and autonomic nervous system. Other aspects, more closely tied to the person's memory and cognition, can be investigated through careful analysis of verbal style and verbal content. An approach characterised by scientific rigour makes it possible to control every variable at play and to complete and refine one's professional competence in credibility analysis. Specialising in only one element of communication has obvious limitations; with proper training in how and what to observe, our contribution can become reliable, precise, and capable of making a difference.
Diego Ingrassia
——
1 Research and publications
2 Sapir, A. (2005). The LSI course on scientific content analysis (SCAN). Phoenix, AZ: Laboratory for Scientific Interrogation.
3 SCAN validation studies
4 Smith, N. (2001). Reading between the lines: an evaluation of the scientific content analysis technique (SCAN). Police Research Series Paper 135, 1-42.
5 Risinger, D. M., Saks, M. J., Thompson, W. C., & Rosenthal, R. (2002). The Daubert/Kumho implications of observer effects in forensic science: Hidden problems of expectation and suggestion. California Law Review, 90, 1-56.
6
7 Self-confirmation bias
8 Findley, K. A., & Scott, M. S. (2006). The multiple dimensions of tunnel vision in criminal cases. Wisconsin Law Review, 2, 291-397.
9 Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175-220.
10 Findley, K. A., & Scott, M. S. (2006). The multiple dimensions of tunnel vision in criminal cases. Wisconsin Law Review, 2, 291-397.
11 Hill, C., Memon, A., & McGeorge, P. (2008). The role of confirmation bias in suspect interviews: A systematic evaluation. Legal and Criminological Psychology, 13, 357-371.
12 Hill, C., Memon, A., & McGeorge, P. (2008). The role of confirmation bias in suspect interviews: A systematic evaluation. Legal and Criminological Psychology, 13, 357-371.
13 Rassin, E., Eerland, A., & Kuijpers, I. (2010). Let’s find the evidence: An analogue study of confirmation bias in criminal investigations. Journal of Investigative Psychology and Offender Profiling, 7, 231-246.
14 Bogaard, G., Meijer, E. H., Vrij, A., Broers, N. J., & Merckelbach, H. (2014). Contextual Bias in Verbal Credibility Assessment: Criteria-Based Content Analysis, Reality Monitoring and Scientific Content Analysis. Applied Cognitive Psychology, 28(1), 79-90.
15 Undeutsch, U. (1967). Beurteilung der glaubhaftigkeit von Aussagen. In U. Undeutsch (Ed.), Handbuch der psychologie Vol 11: Forensische Psychologie. Göttingen, Germany: Hogrefe.
16 Steller, M., & Köhnken, G. (1989). Criteria Based Statement Analysis. In D. C. Raskin (Ed.), Psychological methods in criminal investigation and evidence (pp. 217-245). New York: Springer.
17 CBCA used in adult forensics
18 Vrij, A. (2005). Criteria Based Content Analysis: A qualitative review of the first 37 studies. Psychology, Public Policy, and Law, 11, 3-41.
19 Sporer, S. L. (1997). The less travelled road to truth: Verbal cues in deception detection in accounts of fabricated and self-experienced events. Applied Cognitive Psychology, 11, 373-397.
20 Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological Review, 88, 67-85.