By Kristy A Martire and Stephanie Summersby
Expert evidence is an important part of civil and criminal trial processes in Australia and around the world.[1]
The admissibility of expert opinion evidence in most Australian jurisdictions is determined by s79 of the Uniform Evidence Law (UEL).
‘79 Exception: opinions based on specialised knowledge
(1) If a person has specialised knowledge based on the person’s training, study or experience, the opinion rule does not apply to evidence of an opinion of that person that is wholly or substantially based on that knowledge.’
According to this exception, if an opinion is based wholly or substantially on knowledge, and that knowledge is specialised knowledge based on training, study and/or experience, then the opinion is considered to be of an expert nature and may be admitted. On the surface, these criteria appear to be a sensible means of permitting qualified witnesses to explain relevant specialist technical and scientific knowledge to fact finders. However, closer inspection reveals some important distinctions between the legal and scientific definitions of expertise. These distinctions suggest that the criteria specified in s79 of the UEL are insufficient for determining whether a potential witness has acquired expertise according to scientific standards.
Specifically, scientific conceptualisations of expertise generally require more than relevant training, study and/or experience in a technical or scientific domain. Instead, expertise requires a demonstration of performance – measured by a clearly defined standard (for example, accuracy) – that is superior to that of laypeople or novices.[2] This means that an individual claiming to have expertise should be personally proficient in the task they are opining about, with a known low error rate.[3] Training, study and/or experience does not automatically guarantee this superior performance, unless a particular level or type of performance is required in order to obtain qualification, accreditation or appointment. For example, forensic odontologists may obtain their qualifications, certifications and years of experience without ever having to demonstrate that they can determine the source of a bite impression with high accuracy, and better than novices or laypeople. Yet courts may admit their testimony, assuming that superior performance is a professional requirement.
From a scientific perspective, training, study and/or experience is not enough to establish expertise. These are imperfect proxies that may accompany expertise, but do not necessarily guarantee it.[4] Rather, analysts should be asked what testing they have completed to demonstrate that they can perform the tasks in their domain better than laypeople and with a high degree of accuracy.
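What such testing might demonstrate can be made concrete with a small worked example. The following sketch (in Python) is illustrative only: the proficiency-test figures and the novice baseline are invented assumptions, not results from any actual study.

```python
# Illustration only: estimating demonstrable ability from a hypothetical
# proficiency test. All figures are invented, not drawn from any study.
import math

def wilson_interval(correct: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a proportion."""
    p = correct / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

# Hypothetical validation data: an examiner and a novice group each attempt
# the same 100 ground-truth comparison trials.
results = {"examiner": 96, "novice baseline": 78}
trials = 100

for label, correct in results.items():
    lo, hi = wilson_interval(correct, trials)
    print(f"{label}: accuracy {correct / trials:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

# Expertise, in the scientific sense, requires the examiner's interval to sit
# clearly above the novice baseline -- a claim of training or experience
# alone provides none of these numbers.
```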
Unfortunately, s79 of the UEL does not directly seek or require a demonstration of superior ability before expert status is conferred on an opining witness. Consequently, Australian courts have been criticised for their inability to regulate the quality of opinions admitted in trials.[5] Faulty and flawed expert opinions have contributed to the wrongful conviction of innocent people.[6] Courts also admit, and routinely rely on, opinions of low or unknown quality.[7] Ultimately, the partial and imperfect assessments of expert status created by s79 of the UEL threaten the integrity of trial outcomes and undermine justice processes.
In order to improve the administration of justice, we need to improve our evaluations of expert opinions and their quality. While it is perhaps most important that assessments of expertise consider the demonstrable ability (personal proficiency) of the analyst or practitioner, other aspects of a witness and their opinion are also relevant to whether that opinion is truly expert and therefore worthy of belief. These include:
• the foundational validity of the techniques/discipline;
• the transparency and methodological rigour of the expert’s approach;
• whether the opinion has been independently verified;
• the scope and relevance of the expert’s experience; and
• the modesty of the expert’s claims/opinion.
A brief description of each of these attributes follows.[8]
FOUNDATIONAL VALIDITY
First, for an opinion to merit belief, it must be based upon a technique that has foundational validity. The President’s Council of Advisors on Science and Technology (PCAST) defines foundational validity as an empirical demonstration that a technique produces repeatable, reproducible and accurate results.[9] The PCAST regards this empirical demonstration as the sine qua non (the essential precondition) of an expert scientific opinion.[10]
In 2016, the PCAST reviewed seven forensic science feature-comparison disciplines regularly admitted to court:
• DNA analysis of single-source and simple-mixture samples;
• DNA analysis of complex-mixture samples;
• bitemark analysis;
• latent fingerprint analysis;
• firearms analysis;
• footwear analysis; and
• hair analysis.
It concluded that, based on the available empirical evidence, only two of the disciplines (single-source and simple-mixture DNA analysis, and latent fingerprint analysis) have been shown to produce repeatable, reproducible and accurate results. Yet expert testimony from all seven disciplines is routinely admitted in courts in Australia and abroad. Foundational validity cannot be assumed; it must be demonstrated using appropriately designed empirical studies, and the results reported to courts. Where foundational validity is low or unknown, an opinion should not be relied upon.
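To see what these three criteria mean in practice, the brief sketch below (in Python, with figures invented purely for illustration) shows how repeatability, reproducibility and accuracy might each be estimated from a validation study.

```python
# Illustration only: invented results from a hypothetical validation study of
# 200 feature comparisons, showing how PCAST's three criteria can be measured.
def proportion(agreements: int, trials: int) -> float:
    return agreements / trials

repeatability   = proportion(188, 200)  # same examiner blindly re-tests the same items
reproducibility = proportion(176, 200)  # a different examiner tests the same items
accuracy        = proportion(190, 200)  # conclusions checked against known ground truth

print(f"repeatable:   {repeatability:.2f}")
print(f"reproducible: {reproducibility:.2f}")
print(f"accurate:     {accuracy:.2f}")
# Foundational validity requires studies like this to exist and to be
# reported to the court; it cannot simply be assumed.
```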
TRANSPARENCY AND RIGOUR
In addition to being foundationally valid, the method or technique used by a witness should be transparent and explicitly described. Transparency facilitates critique and can reveal inferential and/or methodological weaknesses. Furthermore, steps should be taken to ensure that a foundationally valid technique is applied in a manner that appropriately addresses the potentially damaging effects of cognitive bias. Under conditions of uncertainty (for example, where evidence is complex, partial or distorted), human decision-making is particularly vulnerable to biases and heuristics (‘rules of thumb’). These cognitive reflexes are generally unintentional and automatic but can, nevertheless, significantly affect opinions.[11]
In a classic study, five fingerprint examiners were unknowingly presented with a pair of fingerprints they had previously declared a match.[12] The examiners were told that the prints had been erroneously matched by other examiners and were asked whether they would agree. Under these conditions, three of the five experts changed their opinion about the fingerprints, and one decided that there was now insufficient information to make a decision. Only one of the five examiners maintained their original opinion. Even though fingerprint comparison is considered foundationally valid,[13] the absence of bias-reduction strategies like examiner blinding, evidence lineups or case management can undermine the quality of an opinion.[14] Any form of opinion based on human judgment is vulnerable to these types of effects, which need to be actively managed and appropriately explored at trial.
VERIFICATION
Although, as the fingerprint study just described demonstrates, witnesses can be biased by the opinions of others, agreement among experts can be a helpful cue to opinion quality – under the right circumstances. Many of the opinions offered to courts include some form of peer review. For example, the technique used for fingerprints, footwear impressions and other types of feature-comparison analyses is called ‘ACE-V’, where the ‘V’ stands for verification.[15] If this verification is completed in circumstances where the verifying analyst is blind to the conclusions of the original analyst, and involves a complete re-analysis of the evidence, then it adds to the credibility of the original opinion.[16] In contrast, if the verification is completed knowing the outcome of the first analysis, does not involve independent re-analysis, or does not aim to test the accuracy of the conclusion (that is, it is focused on technical or administrative considerations), then it does little to strengthen the original opinion. Questions need to be asked to establish what verification and review procedures were used, their purpose, and the nature of any disagreement that may have arisen. Without this information, the value of any agreement or verification among analysts may be low.
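A toy probability model can illustrate why blind verification matters. In the sketch below (Python), the false-positive rate and the ‘deference’ parameter are assumptions invented for illustration; the point is only that independent errors must coincide by chance, whereas a verifier who already knows the first conclusion adds little.

```python
# Toy model, not data from any discipline: both the error rate and the
# 'deference' parameter below are invented assumptions for illustration.
original_fp = 0.05  # hypothetical false-positive rate of the first analyst

# Blind, independent re-analysis: a false match survives review only if both
# analysts err, and independent errors must coincide by chance.
blind_fp = original_fp * original_fp  # 0.0025

# Non-blind 'verification': the verifier knows the first conclusion, so their
# error is correlated with it. Crudely, assume they simply defer 90% of the time.
deference = 0.9
nonblind_fp = original_fp * (deference + (1 - deference) * original_fp)

print(f"blind verification:     joint false-positive rate {blind_fp:.4f}")
print(f"non-blind verification: joint false-positive rate {nonblind_fp:.4f}")
# Under these assumptions, blind review cuts the error rate twenty-fold,
# while non-blind review leaves it almost unchanged (~0.045).
```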
SCOPE AND RELEVANCE OF EXPERTISE
As already discussed, training, study and/or experience is explicitly considered in s79 of the UEL. Even so, it is important to carefully consider the relevance of the training, study and/or experience claimed by the witness. Studies of expertise show that expert performance is narrow in scope and does not automatically generalise to tasks across domains[17] or even between tasks within the same domain.[18] So opinions that are offered outside an area of expertise are not necessarily expert at all.
While it is intuitively sensible that an expert in chess cannot be assumed to be an expert in Monopoly, it is not always clear which opinions fall inside a domain and which fall outside it. This issue was addressed in Honeysett v The Queen.[19] In that case, an anatomist was precluded from giving an opinion about the similarities between the appellant and photographs/CCTV footage of an armed robber because the anatomist did not have relevant specialised knowledge for interpreting images. However, slippage still regularly occurs. If information about demonstrated ability is not available,[20] careful consideration of the specific training, study and/or experience of the witness is required to ensure that it matches closely with the opinion being provided.
MODESTY OF THE EXPERT’S CLAIMS/OPINION
Finally, it is also important to consider the uncertainty of the witness’s opinions. While courts seek and value opinions that are expressed in definite terms,[21] the opinions of scientists and genuine experts are rarely absolute. Research shows that genuine expertise is generally accompanied by a conservatism that comes from an appreciation of the complexities of the field and the evidence.[22]
Furthermore, many opinion witnesses are also engaged in a process of inductive reasoning – trying to derive general conclusions about an incident from specific pieces of evidence.[23] This type of inference does not permit certain conclusions. Consequently, when these limits are appropriately acknowledged, opinions will be indefinite and conservative. Thus, if a witness expresses complete certainty in their opinion, or is not prepared to entertain the possibility of alternatives, it is likely that they are overstating what the evidence can support, or underappreciating the complexity of the scenario. In general, unequivocal opinions signal inexperience rather than expertise and require critical consideration.
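A simple Bayesian calculation illustrates the point. The sketch below (Python) uses assumed validation figures rather than data from any real discipline; it shows that even a technique with a 1 per cent false-positive rate cannot, by itself, support a categorical conclusion.

```python
# Illustrative Bayes calculation with assumed figures; no real validation
# study is represented.
def posterior(prior: float, sensitivity: float, false_positive: float) -> float:
    """P(same source | reported match), by Bayes' theorem."""
    num = sensitivity * prior
    return num / (num + false_positive * (1 - prior))

sensitivity, false_positive = 0.99, 0.01  # assumed validation-study results

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>4}: posterior {posterior(prior, sensitivity, false_positive):.3f}")

# prior 0.5 -> ~0.990; prior 0.1 -> ~0.917; prior 0.01 -> 0.500.
# Even a 1% false-positive rate cannot, on its own, justify a categorical
# conclusion: the answer depends on the prior odds, which are for the fact
# finder, not the witness, to weigh.
```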
CONCLUSION
The foregoing considerations are all meaningfully relevant to the evaluation of a scientific expert opinion, and to determining the weight that it should be given in legal proceedings. Yet s79 of the UEL considers only training, study and/or experience at the admissibility stage. And although the other attributes can be considered by the court, this appears to happen inconsistently. Foundational validity and demonstrable ability are often ignored or not sought,[24] uncertainty in conclusions tends to be absent or discouraged,[25] and reported methodologies and verification procedures are often vague.[26] This leaves considerable scope for attentive lawyers to uncover information relevant to the quality of an expert opinion during pre-trial preparations and/or during trial proceedings. By doing so, lawyers can meaningfully improve evaluations of expert opinions and the integrity of trial outcomes, potentially restoring or increasing faith in our civil and criminal justice systems.
Want to learn more?
Our research team is conducting research exploring ways to improve the evaluation of opinion evidence. If you would like to participate, learn more about our research, or request a resource pack to help you evaluate expert opinion evidence, please contact Stephanie Summersby at s.summersby@unsw.edu.au.
Kristy A Martire is a Forensic Psychologist and Associate Professor in the School of Psychology at the University of New South Wales, Sydney. Kristy’s research examines the development of expertise and the impact of expert opinions on decision-making. EMAIL k.martire@unsw.edu.au.
Stephanie Summersby is a PhD Candidate in the School of Psychology at the University of New South Wales, Sydney. Stephanie’s research areas include forensic psychology and expert evidence. EMAIL s.summersby@unsw.edu.au.
[1] I Freckelton et al, Expert evidence and criminal jury trials, Oxford University Press, UK, 2016; AW Jurs, ‘Expert prevalence, persuasion, and price: What trial participants really think about experts’, Indiana Law Journal, Vol. 91, 2015, 353; National Academies of Science (NAS), Strengthening forensic science in the United States: A path forward (2009) <http://www.nap.edu/catalog/12589.html>.
[2] KA Martire, G Edmond, ‘Rethinking expert opinion evidence’, Melbourne University Law Review, Vol. 40(3), 2017, 967–98.
[3] Ibid.
[4] For examples where training, study and/or experience have not been correlated with ability see SL Avon et al, ‘Error rates in bite mark analysis in an in vivo animal model’, Forensic Science International, Vol. 201(1–3), 2010, 45–55; D White et al, ‘Passport officers’ errors in face matching’, PLoS ONE, Vol. 9(8), 2014, e103510.
[5] For example, G Edmond et al, ‘Forensic science evidence and the limits of cross-examination’, Melbourne University Law Review, Vol. 42(3), 2019, 1–62; G Edmond, K Martire, M San Roque, ‘Expert reports and the forensic sciences’, UNSW Law Journal, Vol. 40(2), 2017, 590–637; Martire and Edmond, above note 2.
[6] For example, Inquest into the death of Azaria Chantel Loren Chamberlain [2012] NTMC 020; FHR Vincent, Report: Inquiry into the Circumstances that Led to the Conviction of Mr Farah Abulkadir Jama, Victorian Government Printer, Melbourne, 2010; Wood v R [2012] NSWCCA 21.
[7] President’s Council of Advisors on Science and Technology (PCAST), Forensic science in criminal proceedings: Ensuring scientific validity of feature comparison methods (2016) <https://www.innocenceproject.org/wp-content/uploads/2017/03/PCAST-2017-update.pdf>.
[8] For more details see G Edmond et al, ‘How to cross-examine forensic scientists: A guide for lawyers’, Australian Bar Review, Vol. 39, 2014, 174–97; G Edmond et al, ‘Model forensic science’, Australian Journal of Forensic Sciences, Vol. 48(5), 2016, 496–537; G Edmond, KA Martire, ‘Antipodean forensics: A comment on ANZFSS’s response to PCAST’, Australian Journal of Forensic Sciences, Vol. 50(2), 2018, 140–51; Martire and Edmond, above note 2.
[9] PCAST, above note 7, 47.
[10] Ibid, 66.
[11] G Edmond, KA Martire, ‘Just cognition: Scientific research on bias and some implications for legal procedure and decision-making’, Modern Law Review, Vol. 82(4), 2019, 633–64.
[12] IE Dror, D Charlton, AE Peron, ‘Contextual information renders experts vulnerable to making erroneous identifications’, Forensic Science International, Vol. 156(1), 2006, 74–8.
[13] PCAST, above note 7.
[14] IE Dror et al, ‘Letter to the Editor: Context management toolbox: A linear sequential unmasking (LSU) approach for minimizing cognitive bias in forensic decision making’, Journal of Forensic Sciences, Vol. 60(4), 2015: 1111–2.
[15] National Academies of Science (NAS), Strengthening forensic science in the United States: A path forward (2009) <https://www.ncjrs.gov/pdffiles1/nij/grants/228091.pdf>.
[16] KN Ballantyne, G Edmond, B Found, ‘Peer review in forensic science’, Forensic Science International, Vol. 277, 2017, 66–76.
[17] J Bédard, MTH Chi, ‘Expertise’, Current Directions in Psychological Science, Vol. 1(4), 1992, 135–9; WG Chase, HA Simon, ‘Perception in chess’, Cognitive Psychology, Vol. 4(1), 1973, 55–81.
[18] KA Martire, B Growns, DJ Navarro, ‘What do the experts know? Calibration, precision, and the wisdom of the crowd among forensic handwriting experts’, Psychonomic Bulletin and Review, Vol. 25(6), 2018, 2346–55; VL Patel, DA Evans, GJ Groen, ‘Biomedical knowledge and clinical reasoning’ in D Evans, V Patel (eds), Cognitive science in medicine: Biomedical modelling, MIT Press, 1989.
[19] Honeysett v The Queen [2014] HCA 29; (2014) 253 CLR 122.
[20] G Edmond, ‘A closer look at Honeysett: Enhancing our forensic science and medicine jurisprudence’, Flinders Law Journal, Vol. 17(2), 2015, 287–329.
[21] A Ligertwood, G Edmond, ‘Expressing evaluative forensic science opinions in a court of law’, Law Probability and Risk, Vol. 11, 2012, 289–302.
[22] JM Tangen, MB Thompson, DJ McCarthy, ‘Identifying fingerprint expertise’, Psychological Science, Vol. 22(8), 2011, 995–7.
[23] CEH Berger, ‘Criminalistics is reasoning backwards’, Nederlands Juristenblad, Vol. 85, 2010, 784–9.
[24] JJ Koehler, ‘Intuitive error rate estimates for the forensic sciences’, Jurimetrics, Vol. 57, 2017, 153–68.
[25] A Bali et al, ‘Communicating forensic science opinion: An examination of expert reporting practices’, in submission.
[26] Ballantyne et al, above note 16.