How To Cross-Examine Forensic Scientists: A Guide For Lawyers
/journals/journal/abr/vol39pt2/part_2
1 Introduction
This guide is intended as a resource for lawyers confronted with forensic
science evidence.1 It is, in effect, a guide to exploring the validity and
reliability of forensic science evidence.2 We have endeavoured to address issues
that are important in any attempt to understand the probative value of expert
evidence, particularly the identification (or comparison) sciences.3 Factors
relating to experimental validation, measures of reliability and proficiency are
key because they, rather than conventional legal admissibility heuristics (eg,
4 National Research Council (of the National Academy of Sciences), Strengthening
Forensic Science in the United States: A Path Forward, Washington DC, The National
Academies Press, 2009 (NAS Report). See Section 6 and G Edmond, ‘What lawyers should
know about the forensic “sciences”’ (2014) 35 Adel L Rev (forthcoming) for an overview.
5 Several of our suggested questions incorporate multiple issues. They are presented in forms
that are not always conducive to actual cross-examination. We do not recommend adopting
any particular question or line of questioning. Rather, they are propaedeutics. They provide
an indication of the kinds of issues that ought to be considered in many cases; especially
where the lawyer is attempting to explore or challenge the value of a technique or derivative
opinion.
6 It is important to recognise that those able to offer advice and support will not always be
from the domain (or ‘field’) in which the original expert operates. It may be that medical
researchers, mainstream scientists, cognitive scientists or statisticians will be of much greater
utility in developing appropriate lines of inquiry than, for example, a second fingerprint
analyst or ballistics expert.
7 In several places in this guide we have used the term ‘expert’. We caution those approaching
‘expert evidence’ against simply assuming that the individual proffering, and indeed allowed
by courts to proffer, their opinions actually possesses expertise. Legal indifference to
validation and reliability means that in too many cases we do not know if those permitted to
proffer incriminating opinions are actually able to do the things they claim. There are
important differences between ‘training, study and experience’ (Uniform Evidence Law
s 79) and the possession of an actual ability (ie, genuine expertise) that distinguishes an
individual from those without the ability. See G Edmond, ‘The admissibility of forensic
science and medicine evidence under the Uniform Evidence Law’ (2014) 38 Crim LJ 136.
8 Even though rebuttal evidence might be admitted, resource constraints and concerns with
finality together constrain the scope for proceeding beyond the answers provided by expert
witnesses in many cases.
possess expertise doing the specific task on which their opinion is based.14
They should be conversant with relevant specialist literatures, including
criticism. Those questioning expert witnesses should focus their attention on
the specific task or claim to expertise and not allow a witness with formal
training or experience (in apparently cognate fields, however extensive) to
claim expert status and simply assert their ‘considered opinion’. There should
be demonstrable evidence of actual expertise in the specific domain (ie, doing
specific tasks) rather than appeals to general ‘training, study or experience’.15
According to s 79(1) of the Uniform Evidence Law (UEL), the witness must
possess ‘specialised knowledge’ and the opinion must be based on ‘specialised
knowledge’.16 ‘Training, study or experience’ does not constitute ‘specialised
knowledge’.
Our sample questions (in italics, below) are intended to focus attention on
issues that will ordinarily be significant in any attempt to determine relevance,
admissibility, probative value and credibility.17 Our questions are often
complex, sometimes with multiple issues embedded within them. They are
heuristics, better suited to this educative exercise than a purely forensic one.
They are intended to draw the reader’s attention to important issues that
demand, and in many cases will reward, sustained scrutiny during contested
proceedings involving forensic science and medicine evidence. Some of these
questions, and questions informed by them, will be better suited to
admissibility challenges on the voir dire than cross-examination before a jury.
Equally, some of our questions may highlight the need to undertake research
or seek pre-trial advice in order to adequately address these and other issues
at trial.
I accept that you are highly qualified and have extensive experience, but how
do we know that your level of performance regarding . . . [the task at hand —
eg, voice comparison] is actually better than that of a lay person (or the jury)?
Given that you undertake blind proficiency exercises, are these exercises also
given to lay persons to determine if there are significant differences in results,
such that your asserted expertise can be supported?
B Validation
Validation provides experimental evidence that enables the determination of
whether a technique does what it purports to, and how well — see App A. In
the absence of formal validation studies, undertaken in circumstances where
the correct answer (ie, ground truth) is known, the value of techniques and
derivative opinions becomes uncertain and questionable.20 Importantly, the
experimental testing associated with validation studies helps to generate
standards (and protocols) to guide the application of techniques.
Can you direct us to specific studies that have validated the technique that you
used?
What precisely did these studies assess (and is the technique being used in the
same way in this case)?
Have you ever had your ability formally tested in conditions where the correct
answer was known? (ie, not a previous investigation or trial)
Might different analysts using your technique produce different answers? Has
there been any variation in the result on any of the validation or proficiency
tests you know of or participated in?
Can you direct us to the written standard or protocol used in your analysis?
Was it followed?
20 Criminal cases do not provide a credible basis for validation even if the accused is found
guilty at trial and the conviction is upheld on appeal. See App A.
situations we do not know if they can do what they claim. Qualifications and
experience (and previous legal admission) are not substitutes for scientific
validation and, if substituted for it, can be highly misleading.21
Lawyers (and judges) should be cautious about claims for validity (or
ability) based on appeals to longevity of the field, previous involvement in
investigations, previous admission in criminal proceedings, resilience against
cross-examination, previous convictions, an otherwise compelling case,22
analogous but different activities, references to books and articles on related
but different topics, claims about personal validation or private studies that
have not been published and are not disclosed, and claims that (un)specified
others agreed with the result whether as peer review or some other verification
process.23 Individually and in combination, none of these provide evidence of
ability and accuracy. Validation studies should apply to the circumstances and
inform analysis in the instant case. Where analysts move away from the
conditions in which the validation testing was originally performed they start
to enter terrain where the validation described in publications may no longer
apply.
Validation is vitally important because superficially persuasive abilities
might not in reality exist or might be less impressive than they seem to
analysts and lay observers.24 Recent studies have revealed that forensic
odontologists, for example, have very limited abilities when it comes to
comparing bite marks in order to identify a biter. They generally cannot
identify people, although in some instances they might be able to exclude a
person from the pool of potential biters.25 Another example concerns the
ability of anatomists and physical anthropologists to identify strangers in
images. It does not follow that a person trained in anthropology or anatomy
will be better (or significantly better) than a lay person when it comes to
interpreting features and persons in images (even if they possess a more
21 In terms of the Uniform Evidence Law (UEL), validation studies should be considered part
of ‘specialised knowledge’ required by s 79. ‘Training, study or experience’ do not overcome
the need for ‘specialised knowledge’ and they do not constitute ‘specialised knowledge’;
otherwise s 79 does not make sense. See Edmond, above n 7.
22 When considering the admissibility of expert opinion evidence, according to ss 79(1), 135
and 137, in the vast majority of cases the evidence should stand on its own. That is, there
should be independent evidence (ie, not case related) that supports the validity and reliability
of both the technique and the analyst’s ability. It does not matter if the case is otherwise
strong or even compelling. This does not tell us whether the technique works or whether the
analyst has actual expertise. Indeed, in many cases the analyst(s) will have been exposed to
the other ‘compelling’ evidence when undertaking their analysis. This, as Sections 2.G
‘Cognitive bias and contextual effects’ and 2.H ‘Cross-contamination of evidence’ explain,
tends to be highly undesirable and threatens the value of incriminating opinion evidence.
23 The fact that one or more analysts agree, especially where a technique has not been
validated, may not be particularly meaningful. What does agreement using a technique that
may not work or may have a high (or unknown) level of error mean? Moreover, on many
occasions agreement is reached in conditions where the other analysts knew the original
conclusion. Again, such circumstances are conducive to neither accuracy nor independence.
See Sections 2.G and 2.H.
24 It is not only lay persons who may be impressed, but the analysts themselves may well
believe they possess special abilities even when they do not.
25 See, eg, E Beecher-Monas, ‘Reality Bites: The Illusion of Science in Bite-mark Evidence’
(2008) 30 Cardozo L Rev 1369.
26 See eg, Honeysett v R [2014] HCA 29; BC201406345 at [45]. Preliminary studies suggest
that anatomical training does not make a significant difference to the ability to interpret
images for identification/comparison purposes. See, eg, A Towler, Evaluating training for
facial image comparison, PhD research, UNSW, 2014.
27 Studies suggest that experience and training may have limited value in improving abilities.
For example, White et al report that the ability of passport officers to determine whether two
portrait photographs are of the same unfamiliar person is unrelated to the duration of
employment, with some passport officers who have been in the post for less than a year
outperforming others who have held the position for more than 20 years. See D White,
R Kemp, R Jenkins, M Matheson and M Burton, ‘Passport Officers’ errors in face
matching’ (2014) 9 PLoS ONE e103510.
28 Latent fingerprint comparison, for example, was only validated in recent years: J M Tangen,
M B Thompson and D J McCarthy, ‘Identifying fingerprint expertise’ (2011) 22
Psychological Science 995; B T Ulery, R A Hicklin, J Buscaglia and M A Roberts,
‘Accuracy and reliability of forensic latent fingerprint decisions’ (2011) 108 Proceedings of
the National Academy of Sciences of the United States of America 7733. There have,
however, been many criticisms of the assumptions and practices maintained by examiners in
the United States, Scotland and, by implication, Australia. See NAS Report, above n 4,
pp 136–45; Expert Working Group on Human Factors in Latent Print Analysis, Latent Print
Examination and Human Factors: Improving the Practice through a Systems Approach, US
Department of Commerce, National Institute of Standards and Technology, National
Institute of Justice, 2012 (NIST/NIJ Report); A Campbell, The Fingerprint Inquiry Report,
APS Group Scotland, 2011 (FI Report).
29 NAS Report, above n 4, p 184:
All results for every forensic science method should indicate the uncertainty in the
measurements that are made, and studies must be conducted that enable the estimation
of those values. . . . the accuracy of forensic methods . . . needs to be evaluated in
well-designed and rigorously conducted studies. The level of accuracy of an analysis is
likely to be a key determinant of its ultimate probative value.
30 ‘Domain irrelevant information’ is information that is not relevant to the analyst’s task. For
Can you tell us about the error rate or potential sources of error associated
with this technique?
Can you point to specific studies that provide an error rate or an estimation
of an error rate for your technique?
Were there any differences observed when making your comparison . . . [eg,
between two fingerprints], but which you ultimately discounted? On what
basis were these discounted?
Could there be differences between the samples that you are unable to
observe?
Did any of your colleagues disagree with you? Did any express concerns
about the quality of the sample, the results, or your interpretation?
Would some analysts be unwilling to analyse this sample (or produce such a
confident opinion)?
All techniques have limitations and all techniques and processes involving
humans are error prone.31 Limitations and risks, and their reality, should be
disclosed. Also, institutional strategies for managing and reducing the
ubiquitous threat of error should be publicly available.
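The kind of error-rate evidence these questions seek is, at bottom, a statistical estimate with an accompanying measure of uncertainty. As a minimal illustration — the counts below are hypothetical, not drawn from any real validation study — a Wilson score interval turns a study’s raw error count into an estimated error rate together with a 95% confidence interval:

```python
import math

def wilson_interval(errors, trials, z=1.96):
    """Wilson score interval for an error rate observed in a validation
    or proficiency study (z=1.96 gives a 95% confidence interval)."""
    p = errors / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return centre - half, centre + half

# Hypothetical study: 7 erroneous conclusions in 1000 ground-truth trials.
low, high = wilson_interval(7, 1000)
# Point estimate 0.7%; the interval spans roughly 0.3% to 1.4% — a range a
# court might weigh rather differently from the bare figure alone.
```

The point is the one made in the NAS Report: a bare error rate, without a measure of its uncertainty and without knowing the design of the study it came from, tells a court very little.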
D Personal proficiency
Formal evaluation (eg, validation) of techniques provides empirical evidence
that they are valid — that is, they produce stable and consistent results on
different occasions and between analysts.32 In any given case, however, the
example, telling a latent fingerprint examiner that the main suspect has previously been
convicted for a similar offence is not necessary for the examiner to compare two fingerprints.
Generally, analysts should not be exposed to domain irrelevant information about the case,
investigation or the suspect because it has a demonstrated potential to mislead. See
Sections 2.G and 2.H.
31 See, eg, National Academy of Sciences, Institute of Medicine, Committee on Quality of
Health Care in America, To Err Is Human: Building A Safer Health System, McGraw-Hill
Companies, Washington DC, 1999.
32 There may be utility in ascertaining whether the same analyst will produce the same
interpretation on different occasions. Studies of fingerprint examiners found that they tend to
identify different points of similarity when comparing the same prints on different occasions.
See I Dror, C Champod, G Langenburg, D Charlton, H Hunt and R Rosenthal, ‘Cognitive
issues in fingerprint analysis: Inter-and intra-expert consistency and the effect of a “target”
comparison’ (2011) 208 Forensic Science International 10.
analyst may not be proficient with the use of the technique, may not have used
the technique appropriately, or the validity of the technique may be
compromised by factors such as the unnecessary exposure of the analyst to
domain irrelevant information (see Sections 2.G ‘Cognitive bias and contextual
effects’ and 2.H ‘Cross-contamination of evidence’). Where techniques have not
been validated, claims to personal proficiency are questionable. Apparent
proficiency in the use of a technique that has not been formally evaluated does
not enable the court to assess the probative value of the evidence.33 For it does
not address the primary issue of whether the technique does what it is
purported to do, whether it does so consistently, or how consistently it does
so. Failure to validate a technique means that there are few appropriate
measures with which to evaluate the derivative opinion evidence.34
Have you ever had your own ability... [doing the specific task/using the
technique] tested in conditions where the correct answer was known?
If not, how can we be confident that you are proficient?
If so, can you provide independent empirical evidence of your performance?
Internal (or in-house) proficiency tests and many commercial proficiency
tests available to forensic scientists and their institutions are reported to be
notoriously easy.35 In most cases, the proficiency tests are only used to
compare results between forensic practitioners, and since they are not given to
lay persons, the validity of the tests themselves (like the expertise of the
analysts) cannot be evaluated.36 There has, in addition, been a tendency to
design proficiency tests in ways that may reflect casework processes but are
incapable of assessing actual expertise. This can lead to flaws in the way
results are understood and represented — see App A.37
Once again, appeals to formal study and training, like long experience using
a technique, do not address the question of whether the technique works, in
what conditions, how well, and how often. Where the analyst cannot show that
they are proficient with a technique, where the proficiency instrument is
flawed, or there is no independent evidence of proficiency, serious challenge
might be made to both admissibility (around relevance and expertise) as well
as the probative value of the analyst’s opinion.
E Expressions of opinion
The expression of results, really the expression of the analyst’s interpretation
or opinion (based on the trace, data or results), should be developed using a
33 See, eg, J J Koehler, ‘Fingerprint error rates and proficiency tests: What they are and why
they matter’ (2008) 59 Hastings LJ 1077; J J Koehler, ‘Proficiency tests to estimate error
rates in the forensic sciences’ (2012) 12 Law, Probability & Risk 89.
34 Failure to validate tends to shift the focus to heuristics with more limited value, such as the
longevity of the ‘field’, the analyst’s qualifications and experience, what other courts have
done and so on.
35 See Koehler, above n 33; D M Risinger, ‘Cases Involving the Reliability of Handwriting
Identification Expertise Since the Decision in Daubert’ (2007) 43 Tulsa L Rev 477.
36 See, eg, Tangen, Thompson and McCarthy, above n 14.
37 Problems seem to be pervasive in both in-house and commercially provided proficiency
testing for forensic analysts.
validated technique. The expression should be consistent with the limits of the
technique. Where a particular form of words is used (eg, ‘match’ or ‘one and
the same’) whether free-standing or drawn from a scale (eg, involving a range
of evidence strengths such as ‘probable’, ‘very probable’, ‘strong support’
etc), the reason for the selection of the specific expression should be
explained.38
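Where the expression is drawn from a scale, the mapping from the underlying measure of evidential strength to the verbal label is itself a choice that should be documented and defensible. As a purely illustrative sketch — the band boundaries and labels below are hypothetical, not taken from any published standard — such a mapping might look like:

```python
def verbal_expression(likelihood_ratio):
    """Map a likelihood ratio to a verbal label. The bands here are
    hypothetical examples; real scales differ between institutions."""
    bands = [
        (1, "no support"),
        (10, "weak support"),
        (100, "moderate support"),
        (10_000, "strong support"),
    ]
    for upper_bound, label in bands:
        if likelihood_ratio <= upper_bound:
            return label
    return "very strong support"
```

A cross-examiner armed with the institution’s actual scale can then ask why a particular boundary sits where it does, and whether the underlying number was ever calculated at all.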
Can you explain how you selected the terminology used to express your
opinion?
Would others analysing the same material produce similar conclusions, and a
similar strength of opinion? How do you know?
You would accept that forensic science results should generally be expressed
in non-absolute terms?
Is the review process documented and are the results included in the report?
Is the person undertaking the review of the result blinded to the original
decision?
How often does a reviewer... [in your institution] disagree with the original
conclusion? What happens when there are disagreements or inconsistencies?
Are these reported? Are these errors or limitations?
G Cognitive bias and contextual effects
The perception and interpretation of evidence is a subjective process that can
be influenced by a range of cognitive, contextual and experiential factors. This
is particularly so where the evidence to be evaluated is of low quality or ambiguous.
You accept that cognitive bias and other contextual effects represent a threat
to forensic science evidence?
You accept that even a sincere analyst may be influenced by cognitive and
contextual effects and not know it?
Can you explain the processes employed to avoid exposure to information that
is not relevant to your analysis? Can you tell us about them?
Can you tell us what you knew about the accused and circumstances of this
case before you were asked to analyse the evidence, and before you produced
your conclusion?
Were you told anything about the suspect when asked to undertake your
analysis?
Can you explain why medical researchers use double blind clinical trials
when attempting to evaluate the efficacy and safety of pharmaceuticals?
Recent research has demonstrated that exposure to information about the
case or the accused has the potential to influence, and sometimes reverse, an
analyst’s conclusion. Exposure to gratuitous information can influence
interpretations and produce mistaken decisions even where the underlying
techniques are otherwise valid and reliable. Studies have shown that
experienced latent fingerprint examiners can change their minds about
whether two fingerprints ‘match’.46 Similarly, exposure to information about
the suspect or the case that is not required for their interpretation can influence
decisions about whether the profile of a suspect appears in a mixed DNA
sample.47 Influences can operate unconsciously. Importantly, once the analyst
has been exposed to domain irrelevant information (or contexts that encourage
particular types of approaches and orientations) there is usually no way of
decontaminating the resulting opinion.48 The appropriate response is for
another analyst to undertake an independent analysis using a validated
technique in conditions where they are not exposed (ie, remain ‘blind’) to
domain irrelevant information or suggestive processes.
H Cross-contamination of evidence
Very often prosecutors (and judges) present forensic science evidence as
independent corroboration of other evidence (or the case) against the accused.
In many proceedings, this is not appropriate because the analyst was
unnecessarily exposed to suggestive information or may have revealed their
opinions (or had them revealed by investigators) to other witnesses — whether
forensic scientists or lay witnesses. In consequence, many opinions do not
constitute genuinely independent corroboration.49 They are not independent of
other inculpatory (or suggestive) evidence.
Were other witnesses, whether forensic scientists or lay witnesses (eg, those
proffering eyewitness identification evidence), told about the results of your
analysis?50
Were you told about other evidence or the opinions of other investigators or
forensic analysts?
These questions (and others from Section 2.G ‘Cognitive bias and
contextual effects’) are relevant where, for example, eyewitnesses are told that
a fingerprint analyst confirmed their tentative identification. Studies suggest
that such witnesses are likely to be more confident in future versions of their
identification evidence if they have reason to believe they are correct.51
Similarly, forensic scientists (eg, a forensic odontologist reporting on bite
marks or an anatomist interpreting an image) are vulnerable to suggestion and
confirmation bias where, for example, they are told about a DNA result or the
suspect’s criminal record. Another common example is where the analyst was
asked to confirm a police hypothesis (eg, that the police suspect is the
perpetrator from the crime scene images) rather than determine whether the
perpetrator is one of the persons in a ‘lineup’ of potential suspects.52
I Codes of conduct and rules about the content of reports
Almost all expert witnesses are now required to agree to be bound by
court-imposed codes of conduct, and to formally acknowledge that
commitment when preparing reports and testifying.53 A remarkably small
proportion of the reports produced by forensic scientists are compliant with
the terms of these formal codes.54 While non-compliance will not necessarily
lead to exclusion, flagrant non-compliance by forensic science institutions
ought to generate judicial opprobrium.55 Regardless, formal rules should be
invoked to secure compliance in order to obtain information that enables the
lawyer (and others) to determine whether techniques have been validated.
Adherence to the formal rules will help lawyers (and others) to understand and
rationally evaluate the evidence.56 Significantly, failure to comply with formal
codes frequently reflects an inability to comply. In many cases there is no
empirically derived information about limitations, uncertainties and error
because the underlying research has not been done. It may, in consequence, be
useful to go through the requirements in the codes step by step in order to
elicit what the analyst has done in relation to each section and to generate a
record that will facilitate more meaningful engagement with the opinion.
Now, could you show me where in your report you have appropriately
addressed . . . [each of the elements specified in the code]?
Could you indicate where you made reference to alternative approaches and
assumptions, or criticisms of your techniques and expressions?
56 See HG v R (1999) 197 CLR 414; 160 ALR 554; [1999] HCA 2; BC9900188; Ocean Marine
Mutual Insurance Association (Europe) OV v Jetopay Pty Ltd (2000) 120 FCR 146; [2000]
FCA 1463; BC200007242; Dasreef v Hawchar (2011) 243 CLR 588; 277 ALR 611; [2011]
HCA 21; BC201104304.
57 Practice Note: Expert Evidence in Criminal Trials (Victoria), para 4.2.
Have there been any recent criticisms of this kind of evidence . . . [eg, latent
fingerprints, ballistics, image comparison and so on]?
You are no doubt familiar with the National Academy of Sciences report?
Could you tell the court what the report says about . . . [eg, latent fingerprint
evidence]?
Also, I note that you reported a ‘match’ and equated that with the
identification of my client. Is that correct?
I would like to refer you to the following recommendations and invite you to
comment. First, Recommendation 3.7 from the US National Institute of
Standards and Technology’s review of latent fingerprint evidence in 2012. The
National Institute concluded:
Because empirical evidence and statistical reasoning do not support a source
attribution to the exclusion of all other individuals in the world, latent print
examiners should not report or testify, directly or by implication, to a source
attribution to the exclusion of all others in the world.60
Secondly, I’d like to refer you to Recommendation 3 from the 2011 report of
the Fingerprint Inquiry in Scotland, conducted by Lord Campbell. Lord
Campbell recommended that:
Examiners should discontinue reporting conclusions on identification or exclusion
with a claim to 100% certainty or on any other basis suggesting that fingerprint
evidence is infallible.61
You did not qualify your interpretation or conclusion on the basis of this very
authoritative criticism and advice, did you?
58 ACE-V is the dominant ‘method’ of latent print comparison. The acronym stands for
Analysis, Comparison, Evaluation, and Verification.
59 NAS Report, above n 4, pp 142–5. See also NIST/NIJ Report, above n 28, pp 9, 39, 123–4.
60 See also NIST/NIJ Report, above n 28, p 77: ‘examiners should qualify their conclusions
instead of stating an exclusion or identification in absolute terms.’
61 FI Report, above n 28, p 740.
But you have not referred to them in your report, have you?
Most forensic analysts are aware of the NAS and other recent reports. Not
all have credible responses to the numerous criticisms and recommendations.
Many forensic analysts do not have training in statistics, research methods or
cognitive science and so are not well positioned to respond to the wide range
of criticisms and recommendations. Some forensic analysts are curiously
hostile. These reports provide useful resources to identify some of the
persistent problems with different types of forensic science evidence.
Questions derived from the NAS Report, particularly if it is clear the report is
being invoked, might be quite confronting for many forensic analysts. See
Section 6 ‘Further Reading’.
4 Ad hoc experts
Most ad hoc experts are police officers or interpreters who have listened to
covert voice recordings, or police officers and anatomists who have repeatedly
watched images relevant to criminal acts. Because they have repeatedly
listened to a voice, or watched a video, courts sometimes allow them to
express their impressions about the identity of the speaker or persons in
images, including persons speaking different languages and those wearing
disguises.62 ‘Ad hoc experts’ rarely write reports and are not always
challenged about the limits of their abilities and the character of their
‘expertise’.
Have you read any of the scientific literature . . . [eg, on voice comparison or
image comparison]?
You are not familiar with any of the studies of voice comparison of strangers,
of cross-lingual comparisons, of the effects of comparing voices speaking on
phones as opposed to live speech, and so on?
Are you aware of how common it is for those comparing voices to make
mistakes? Would you like to make a guess about the frequency of such
mistakes in favourable conditions? How do you think the quality of the
recording, accents, foreign languages, etc influence the accuracy of voice
comparison?
62 See, eg, R v Leung and Wong (1999) 47 NSWLR 405; [1999] NSWCCA 287; BC9905924
and R v Riscutta & Niga [2003] NSWCCA 170; BC200303629. Contrast G Edmond and
M San Roque, ‘Honeysett v The Queen: Forensic science, “specialised knowledge” and the
Uniform Evidence Law’ (2014) 36 Syd LR 323.
If I were to suggest to you that published scientific studies indicate that even
experienced individuals make mistakes identifying a voice speaking in a
familiar language, in favourable conditions, about one third of the time, what
would you say?63
You have not produced a report in relation to your impression . . . [of the
voices], have you?
You have not written or published any papers on voice identification, have
you?
You accept that there are experts in voice analysis and comparison? And, you
accept that you are not a voice comparison expert? Do you know why a
witness with voice comparison expertise was not called in this case?
Even though you are confident, you accept that you cannot be certain? And,
you accept that the scientific evidence — with which you are not familiar —
suggests that voice comparison is an error-prone task?
Were you aware of who the police believed the voice [or gait or image]
belonged to when you undertook your comparison?
Were you involved in the investigation (and was the accused a suspect when
you made your comparison)? If not, how did you come to ‘identify’ the
accused?
For police officers and interpreters, if they are allowed to testify, it may be
useful to make clear that they are not relevant experts and that their
impressions might well be mistaken. Most ‘ad hoc experts’ are not conversant
with relevant methods, literatures or limitations, and do not comply with codes
of conduct and practice directions. Significantly, there is no evidence that
experience as a police officer and police training improves the interpretive
abilities of police relative to others.65 For ‘ad hoc experts’ with formal
qualifications it will usually be useful to refer to the need for validation and
63 Relevant research is discussed in G Edmond, K Martire and M San Roque, ‘Unsound law:
Issues with (“expert”) voice comparison evidence’ (2011) 35 MULR 52 at 84–91.
64 See, eg, International Association for Forensic Phonetics and Acoustics, Code of Practice, at
<http://www.iafpa.net/code.htm> (accessed 15 September 2014).
65 See, eg, S Smart, M Berry and D Rodriguez, ‘Skilled observation and change blindness:
A comparison of law enforcement and student samples’ (2014) 28 Applied Cognitive
5 Conclusion
The cross-examination of forensic analysts on the substance of their evidence
is difficult. It requires careful and protracted preparation and meticulous
execution. In many, perhaps most, cases it will require research and expert
assistance or advice.
There are many ways to cross-examine forensic scientists. It may be that
highly creative and surprising questions will be informative, perhaps
revelatory. It may be that the witness has overcharged, expressed inconsistent
opinions in previous trials, not used appropriate methods and protocols, not
cleaned equipment and so on. On occasion, serious problems or conflicts
might be conceded or exposed, perhaps unwittingly. That said, in order to
explore the probative value of forensic science evidence at the trial, in most
cases it would seem paramount to expose and convey problems with methods,
the lack of validation, other significant limitations, as well as the speculative
nature of many opinions. This can only be done through carefully planned
questioning.
Notwithstanding its great potential as a trial safeguard, cross-examination is
rarely used to explore the problems with the forensic sciences in
detail.68 Most of those problems are yet
to be ventilated in Australian courts. There have, for example, been few
attempts to challenge the way latent fingerprint examiners equate a ‘match’
with identification even though the three most recent reviews (by the National
Academy of Sciences (US), the National Institute of Standards and
Technology (US) and the Scottish Fingerprint Inquiry) all recommend against
this practice. Such reports provide fertile grounds for contesting the historical
status and claims made by forensic analysts, including those predicated upon
longstanding and legally accepted techniques. The now notorious problems
with many forensic sciences mean that there may be little need to adopt
highly rhetorical strategies or spend time endeavouring to impugn the
credibility of individual witnesses. Carefully exploring limitations and
oversights might be much more confronting for analysts than crude attempts
to challenge credibility or vague insinuations about interests or partisan bias.69
There is no universally correct position on whether to challenge evidence
on the voir dire and/or during the trial. Where courts maintain liberal
admissibility standards it may be advantageous to leave the most serious
questions and criticisms to the trial — to prevent analysts adjusting their
testimony or preparing in advance. We would caution that trial safeguards do
not seem to have been particularly effective at identifying, exposing and
conveying problems.70 On this note, we would caution defence lawyers to
think very carefully about calling rebuttal witnesses, especially if the witness
uses the same problematic (ie, non-validated) technique as the prosecution
witness or will reinforce the existence of a disputed ‘field’ (eg, face mapping
or forensic gait comparison). Calling such a ‘critic’ might inadvertently
legitimate an enterprise that is entirely without empirical foundations.71
Unfortunately, the lack of judicial interest in excluding the unreliable,
speculative and weak opinions of those characterised by prosecutors as
experts means that decisions about responding to these forms of ‘evidence’
become tactical. Defence counsel should think very carefully about the best
stage to challenge, the best means of challenging, and how best to expose the
limitations, frailties and weaknesses in the forensic science evidence called by
the prosecutor. Defence counsel need to think about ways of contesting and,
where necessary, discrediting forensic science and medicine evidence that are
appropriate to the audience — whether a judge on the voir dire or a judge or
jury at trial. In doing so, they may need to attend to the significance of the
evidence to the overall case.72 Also, concerns about the relevance of the
evidence and the mandatory and discretionary exclusions (ss 135 and 137)
should not be too readily abandoned. Defence counsel should direct attention
to the possibility of admissibility and sufficiency challenges on appeal.73
Perhaps the most important thing for lawyers and judges to know is that a
good deal of forensic science and medicine evidence seems to lack scientific
foundations. A surprisingly large proportion of techniques, standards,
protocols and expressions have never been independently evaluated. We do
not know if they work. In consequence, it is not necessarily helpful to
approach plea and charge negotiations, admissibility challenges or
cross-examination before a jury on the assumption that the analyst proffering
an opinion possesses actual expertise. For far too long fact-finders and judges
have been deprived of this information and its serious and destabilising
implications for legal practice. The worthy goal of doing justice in the pursuit
of truth is threatened by weak, speculative and unreliable opinions, especially
where the opinions are presented by prosecutors as ‘expert’ and that
imprimatur is reinforced by admission.
6 Further reading
This guide draws on many scientific and technical works. We recommend that
lawyers working with the forensic sciences, particularly those
contemplating cross-examining forensic analysts, be conversant with the
following:
Validity
Is the forensic analyst able to do what they claim they can do?
There are many kinds of validity, but in the context of the forensic sciences we
are most often thinking about the validity of the conclusions derived from an
analyst’s method (or technique); whether they result from a method for
visually comparing two fibres, a method for comparing two substances
chemically, or anything in between. The validity of the conclusions reached is
determined by the extent to which the analyst is actually able to offer the best
approximation of the truth in their conclusion.74
For example, if it can be shown that an analyst is able to compare two fibres
and reach an accurate determination regarding whether they came from the
same source or a different source, their conclusion can be deemed valid
because it provides the best available approximation of the truth regarding the
origin of the fibres. If it is shown that the analyst is not able to accurately
attribute the fibre sources, the conclusions derived from their method must be
considered invalid as they do not, to the best of our knowledge, truthfully
speak to the origins of the fibres.
Importantly, in order to establish the validity of the analyst’s conclusions,
we must also know about the accuracy of their methods where the objective
truth of the situation is known. That is, we need to establish whether they can
correctly differentiate between fibres originating from different sources and
fibres originating from the same source where the correct answer is derived
independently from the analyst’s evaluation. Without this information the
validity of the conclusions derived from the method cannot be estimated or
assessed.
Reliability
origins 30% of the time, and the worst that will happen is that you will
accidentally buy the wrong curtains for your living room, you might still be
prepared to consult the fibre analyst before trying to match the fabric for your
curtains to the cushions on your couch. But if you are trying to establish
whether the fibres from the crime scene and the fibres from the jumper of the
accused share a common origin, and the comparison leads to inaccurate
conclusions 30% of the time, you might not wish to lead the evidence of the
fibre analyst because the likelihood of an error and the consequences of
making a mistake (either exculpatory or inculpatory) are too high given the
context.
Ultimately, irrespective of the specific level of reliability, the determination
regarding whether something is sufficiently reliable for the purpose at hand
can only be made in light of evidence regarding reliability and after
considering the consequences of possible errors. It cannot be assumed or
inferred in the absence of data. What is critical is that the person applying the
technique knows the reliability of their procedure. For example, a pathologist
may choose to use a test to detect cancer even though it is not 100% reliable.
Yet it may still be safe to do so because they know what the reliability of the
test is, and in particular how often it will result in sick people being
misclassified as healthy, and vice versa. In light of this knowledge the doctor can
interpret the result of the test appropriately and decide on a proper course of
action.
Proficiency