Breaking NEWz you can UzE...
compiled by Jon Stimac
Man Suspected of 1979 Rape Out on Bail
INDEPENDENT ONLINE, So. AFRICA - Dec 2, 2005
...the suspect's fingerprints "popped
up" as a match for the 1979 case...
Home to New Fingerprint Machine
– ATHENS DAILY REVIEW, TX
- Nov 26, 2005 ...Police just
call it the “fumin’ machine”...
Escapes Noose Over Doubtful Fingerprint Evidence
BORNEO BULLETIN, Brunei Darussalam -
Nov 28, 2005 ...the court said the absence
of prints from one or both thumbs was not supported by expert
Admits Expert Dispute
SUNDAY HERALD, UK - Nov 27, 2005 ...the crisis in Scottish
Recent CLPEX Posting Activity
containing new posts
[ Poll ] (Webmaster) - Recent Board Spam
clpexco 04 Dec 2005 08:41 pm
Fingerprint Dogma Final Exam
Dogma (formerly Guest) 04 Dec 2005 04:44
Guest 03 Dec 2005 07:05 pm
guest 03 Dec 2005 01:56 am
Charles Parker 02 Dec 2005 09:29 pm
Guest 02 Dec 2005 12:54 am
Latent prints and bombs
Kathy Saviers 01 Dec 2005 11:10 pm
This week's Detail
Guest 01 Dec 2005 04:55 pm
Diabetics and Ridge Counts!!
Dave C 30 Nov 2005 11:02 pm
1000 ppi Issue
Guest 30 Nov 2005 04:37 pm
2006 IBMBS Conference in Shanghai in March
Hnadreadered 29 Nov 2005 08:34 pm
POLICY FOR DIGITAL PHOTOGRAPHY / AND LATENT PROCESSING
HPDDB 29 Nov 2005 04:14 pm
INTERNATIONAL SYMPOSIUM ON ADVANCES IN FINGERPRINT TECHNOLOGY
Dave 28 Nov 2005 10:49 pm
Trace metal detection test
RAE 28 Nov 2005 03:42 pm
UPDATES ON CLPEX.com
Updated the Smiley Files with a new Smiley
Last week we looked at part of a recent article by Simon Cole regarding latent print
error, detailing misattribution and analysis of cases. This week
we look at the rest of this article, detailing "The
Rhetoric of Error", analysis of the Mayfield case, and conclusions. As
last week, the portion presented below is not complete, but rather represents
portions of the 95-page article that would be most interesting to latent print
examiners. The full article can be found on the "Reference" page of CLPEX.com.
Excerpts from "More than Zero: Accounting for Error in Latent Fingerprint Identification"
The Journal of Criminal Law & Criminology,
Vol. 95, No. 3
Author: Simon Cole
III. THE RHETORIC OF ERROR
A. THE ZERO ERROR RATE
As discussed above, latent print examiners continue to claim
that the error rate of latent print identification is "zero." How can the claim
that the error rate of forensic fingerprint identification is zero be sustained?
The claim is sustained by two types of parsing of errors, which I will call
typological and temporal parsing.
1. Typological Parsing
Typological parsing is achieved by assigning errors to two
distinct categories: "methodological" (sometimes called "scientific") and
"practitioner" (sometimes called "human"). It may be illustrated most clearly by
Agent Meagher's testimony at the Mitchell Daubert hearing:
Q: Now—Your Honor, if I could just have a moment here. Let's move on into error
rate, if we can, please, sir? I want to address error rate as we have—you've
heard testimony about ACE-V, about the comparative process, all right? Have you
had an opportunity to discuss and read about error rate?
Q: Are you familiar with that concept when you talk about methodologies?
Q: And where does that familiarity come from, what kind of experience?
A: Well, when you're dealing with a scientific methodology such as we have for
ever since I've been trained, there are distinctions—there's two parts of errors
that can occur. One is the methodological error, and the other one is a
practitioner error. If the scientific method is followed, adhered to in your
process, that the error in the analysis and comparative process will be zero. It
only becomes the subjective opinion of the examiner involved at the evaluation
phase. And that would become the error rate of the practitioner.
Q: And when you're talking about this, you're referring to friction ridge
A: That is correct. It's my understanding of that regardless of friction ridge
The analysis comparative evaluation and verification process is pretty much the
standard scientific methodology and a lot of other disciplines besides—
Q: And that may be so. Are you an expert or familiar with other scientific areas
A: No, I'm not an expert, but I do know that some of those do adhere to the same
methodology as we do.
Q: Are you an expert on their error rate?
Q: Based on the uniqueness of fingerprints, friction ridge, etcetera, do you
have an opinion as to what the error rate is for the work that you do, latent
A: As applied to the scientific methodology, it's zero.
Meagher's invocation of the "zero methodological error rate"
generated an approving response within the fingerprint community. In another
case, Meagher testified as follows:
"With regards to discussing the error rates in terms of methodology which from
my understanding is the real focus of attention for the hearing here. The
methodology has an error rate of zero where practitioner error rate is whatever
practitioner error rates for that individual or group of individuals."
Since the Mitchell Daubert hearing, the claim that the error
rate of fingerprint "methodology" is zero has become enshrined as dogma within
the fingerprint community. Latent print examiners are coached to recite this
position when cross-examined. For example, Wertheim fils [French for "son"]
advises latent print examiners to answer the question "What is the error rate of
fingerprint identification?" as follows:
"In order to fully address this issue, you must decide which
error rate you are going to
address. Two types of error are involved: PRACTITIONER error and the error of
the SCIENCE of fingerprints. The fact is, nobody knows exactly how many
comparisons have been done and how many people have made mistakes, so you can't
answer that issue. Of course the error rate for the SCIENCE itself is zero. The
way to answer this question on the stand might sound something like: If by error
you mean HUMAN error, then I would answer that there is no way for me to know,
since I do not have detailed knowledge of casework results from departments
throughout the country. However, if by error you mean the error of the science
itself, then my answer is definitely zero. If follow up questions are asked, you
can explain: There are only three conclusions a latent print examiner can come
to when comparing two prints: Identification, Elimination, or Insufficient
detail to determine. (Explain each of these) Insufficient doesn't apply, because
you are asking about the error rate involving identification. The fact is, any
two prints "of value" either A: were made by the same source, or B: they were
not. There is no probability associated with that fact. Therefore, the science
allows for only one correct answer, and unless the examiner makes a mistake, it
WILL be the correct answer. That is what I mean when I say the error rate for
the science of fingerprints IS zero, (the little emphasis on "is", as you nod
your head once to the jury, doesn't show up in the transcript, but it sure helps
get the jury to nod back in agreement!!)"
It should be noted that, in their sworn testimony, latent
print examiners appear to follow Wertheim's second piece of advice, but not his
first. That is, judging from court opinions (infra Part III.B), latent print
examiners do testify that the "methodological error rate" is zero, but they do
not testify that the "practitioner error rate" is unknown. Rather, they testify
that the practitioner rate is "essentially zero" or "negligible" - statements
that have no basis in any attempt to actually measure the "practitioner error
rate" but are nevertheless taken by courts as gospel.
2. Temporal Parsing
An alternative stratagem rests upon a temporal parsing of
error. In this formulation, all documented errors are consigned to a
conceptually distant past that is no longer relevant to the present. The
reasoning is that errors provoke changes in procedure that then render past
procedures obsolete. Since new procedures are now in place, it is unfair to
brand the state-of-the-art practice with past errors. Temporal parsing may be
illustrated by the testimony of Dr. Budowle at the Mitchell Daubert hearing:
Q: "Tell us how [error rate] applies to scientific methods, methodology."
A: "Well, this transcends all kinds of forensic, it transcends all disciplines
in that[, but] in the forensic area particularly, this has been an issue
discussed repeatedly in lots of disciplines, whether it is DNA chemistry and
latent fingerprints. We have to understand that error rate is a difficult thing
to calculate. I mean[,] people are trying to do this, it shouldn't be done, it
can't be done. I'll give you an example as an analogy. When people spell words,
they make mistakes. Some make consistent mistakes like separate, some people
I'll say that I do this. I spell it S-E-P-E-R-A-T-E. That's a mistake. It is not
a mistake of consequence, but it is a mistake. It should be A-R-A-T-E at the end.
That would be an error. But now with the computer and Spell
Check, if I set up a protocol, there is always Spell Check. I can't make that
error anymore. You can see, although I made an error one time in my life, if I
have something in place that demonstrates the error has been corrected, it is no
longer a valid thing to add [as] a cumulative event to calculate what a error
rate is. An error rate is a wispy thing like smoke, it changes over time because
the real issue is, did you make a mistake, did you make a mistake in this case?
If you made a mistake in the past, certainly that's valid information that
someone can cross-examine or define or describe whatever that was, but to say
there's an error rate that's definable would be a misrepresentation. So we have
to be careful not to go down the wrong path without understanding what it is we
are trying to quantify.
Now, error rate deals with people, you should have a method
that is defined and
stays within its limits, so it doesn't have error at all. So the method is one
issue, people making mistakes is another issue."
Whatever the merits, in principle, of Budowle's argument, if
taken seriously, it places an immovable obstacle in the path of any court
seeking to estimate an error rate for anything. There are, of course, inherent
problems in estimating any sort of error rate. But these are problems that
practitioners in diverse areas of science and industry have managed to live
with, and courts, according to the Supreme Court, are now duty-bound to struggle
with them as well. Even if we accept Budowle's argument that it is difficult to
calculate error rates prospectively, that does not mean that we should not try
to estimate error rates, nor does it mean that past performance is not still
probably the best guide to estimating future performance. In Budowle's schema, no error rate could
ever be calculated, as all exposed errors recede immediately into the supposedly
"irrelevant" past. The error rate does indeed become "a wispy thing like smoke."
3. What is "Methodological Error Rate"?
The concept of "methodological error rate" is not one that
the government adapted for fingerprinting from some other area of scientific or
technical endeavor. Typing the term "methodological error rate" into an Internet
search engine (for example, Google) yields results pertaining almost exclusively to
forensic fingerprint evidence, not to any other area of scientific or technical
endeavor. In none of its briefs in Mitchell supporting this concept did the
government cite any other area of scientific or technological endeavor where it
is thought appropriate to split the concept of error rate in this fashion. Nor
does the government cite any other cases in which the Daubert error rate
criterion is interpreted in this fashion. Since the concept exists only in the
field of latent print identification, a field that is not populated by
credentialed scientists, it merits especially strict scrutiny.
The problem is that the practitioner is integral to the
method of latent print identification. In other words, the "methodology"
consists entirely and solely of having a practitioner analyze the prints. There
is no methodology without a practitioner, any more than there is an automobile
without a driver, and claiming to have an error rate without the practitioner is
akin to calculating the crash rate of an automobile, provided it is not driven.
Even if one were to accept the distinction between
"methodological" and "practitioner" error, these categories would be useful only
for a scientific or policy-driven assessment of latent print identification. For
legal purposes, the only relevant matter is the overall error rate—that is, the
sum of the "methodological" and "practitioner" error rates. If one is boarding
an airplane, one is interested in the total error rate, the sum of all error
rates, if error is parsed. Although there may be some utility to parsing error
in the case of airplane crashes into, say, pilot and mechanical errors—provided,
of course, that attributions can be made consistently and coherently—no one
would wish for them to substitute for, or obscure, the overall error rate. If
one is deciding whether to board an airplane, the relevant information is the
overall error rate. If one is deciding whether scarce resources should be
allocated to pilot training or mechanical inspections, then the relevant
information may be to parse crashes into "human" and "mechanical" causes. A
legal fact finder is in the position of the passenger boarding the plane, not
the policymaker allocating resources. Therefore, judges, who are responsible for
ensuring that relevant and reliable information is put before the fact finder,
should be concerned with the rate at which the process or technique in question
provides accurate conclusions to the fact finder, which is given by the overall
error rate. Even if one were to grant the legitimacy of parsing of error into
categories, the categorical error rates are irrelevant to the court's inquiry.
The overall error rate is the only relevant piece of information to put before a fact finder.
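Cole's point here is arithmetical and can be made explicit. A minimal sketch, using hypothetical figures (no real rates exist in the record): setting the "methodological" component to zero by definition simply makes the overall rate equal to the "practitioner" rate, so the parsing adds no information for the fact finder.

```python
# Hypothetical figures only, for illustration; nothing in the record
# supplies real measurements. The point: however error is parsed into
# categories, the figure relevant to a fact finder is the sum.
methodological_rate = 0.0   # the component claimed, by definition, to be zero
practitioner_rate = 0.008   # invented: 8 errors per 1,000 attributions

overall_rate = methodological_rate + practitioner_rate
print(f"overall error rate = {overall_rate:.3f}")  # prints 0.008
# With the methodological component fixed at zero, the "overall" and
# "practitioner" rates are identical; the split carries no information.
```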
Moreover, unlike the broad categories posited in airplane
crashes, the assignment of error in fingerprint identification is asymmetric. In
aviation risk assessment, neither the pilot nor the mechanical error rate is
zero. In fingerprint identification, one type of error is said to be zero. How
can this be? The answer is that all known cases of error are automatically
assigned to only one of the two categories: practitioner error. By attributing
all documented errors to practitioners, the methodological error rate
remains—eternally—zero. The "methodological error rate," by definition, could
not be anything other than zero. This, of course, takes away the force of any
claim of an empirical finding that the "methodological error rate" has been
found to be zero. Fingerprint evidence could be the shoddiest evidence ever
promulgated in a court of law and, defined as it has been, the "methodological
error rate" would still remain zero!
What this means, of course, is that even if in some areas a meaningful
distinction can be drawn between "methodological" and "practitioner" error, in
fingerprint practice the concept is vacuous.
The most generous interpretation of what latent print
examiners mean when they claim the "methodological error rate" is zero is that
they are saying that no latent print misidentifications are caused by nature. In
other words, no misattributions are caused by completely identical areas of
friction ridge detail existing on two different fingers. As one prominent latent
print examiner, William Leo, testified: "And we profess as fingerprint examiners
that the rate of error is zero. And the reason we make that bold statement is
because we know based on 100 years of research that everybody's fingerprint are
unique, and in nature it is never going to repeat itself again."
As Wertheim pere [French for "father"] puts it, "So when we
testify that the error rate is 'zero,' what we mean is that no two people ever
have had or ever will have the same fingerprint." This argument fails to
understand that the issue in the original Mitchell Daubert hearing—and the issue
more generally—was never about errors caused by individuals possessing duplicate friction ridge detail.
An even more simplistic formulation of this generous version
of the "methodological error rate" is the argument that because there is only
one source for the latent print, the "methodological error rate" is zero. As
Agent Meagher put it in his testimony in a pre-trial hearing in People v. Hood:
"Because fingerprints are unique and they are permanent there can
only be one source attributed to an impression that's left so there have—there
can only be one conclusion. It's ident or non ident."
One might just as well argue that since there is only one
person that an eyewitness actually saw, one could claim that the "methodological
error rate" of eyewitness identification is zero. Or, that, because each test
subject is either pregnant or not pregnant, the "methodological error rate" of
any pregnancy test—no matter how shoddy—is zero.
It is apparent that, when pressed, latent print examiners can
water down the claim of a "zero methodological error rate" to propositions that
are, in and of themselves, so banal as to be unobjectionable. Who can doubt that
only one individual is, in fact, the source of a particular latent print? Or even
that there are not individuals walking around with exact duplicate ridge detail
on their fingertips? The danger lies in not fully communicating these retreats
to the jury. Latent print examiners can clarify what they mean by
"methodological error rate" in their professional literature and in pretrial
admissibility hearings and neglect to do so in their trial testimony. A juror
who hears "the methodological error rate is zero, and the practitioner error
rate is negligible" would be forgiven for assuming that "methodological error
rate," in this context, refers to something significant, rather than a banality,
like "only one person could have left the latent print." This potential for
using the aura of science to inflate the fact-finder's credence in expert
testimony is precisely the sort of thing that an admissibility standard, like
Daubert/Kumho, is designed to mitigate. The "methodological error rate" is so
potentially misleading that courts must rein it in.
4. The "Roomful of Mathematicians"
The fallacy of the "methodological error rate" is well
illustrated by an example that fingerprint examiners are fond of using: the
roomful of mathematicians. Consider the following analogy drawn by Agent
"The analogy that I like to use to help better understand the
situation is the science of math. I think everyone agrees that probably the most
exact science there is, is mathematics. And let's take the methodology of
addition. If you add 2 plus 2, it equals 4. So if you take a roomful of
mathematics experts and you ask them to perform a rather complex mathematical
problem, and just by chance one of those experts makes a [sic] addition error -
adds 2 plus 2 and gets 5 - does that constitute that the science of math and the
methodology of addition is invalid? No. It simply says is that that practitioner
had an error for that particular day on that problem."
Fingerprint examiners are particularly fond of using the
mathematics analogy. Wertheim pere writes, "[j]ust as errors in mathematics
result from mistakes made by mathematicians, errors in fingerprint
identification result from the mistakes of fingerprint examiners. The science is
valid even when the scientist errs." Special Agent German argues, "[t]he latent
print examination community continues to prove the reliability of the science in
spite of the existence of practitioner error. Math is not bad science despite
practitioner error. Moreover, air travel should not be banned despite occasional
crashes due to pilot error." In response to the Mayfield case, Wertheim pere
commented, "Just because someone fails to balance his checkbook, that should not
shake the foundations of mathematics." The analogy between the practice of
forensic fingerprint analysis and the abstract truth of addition seems rather
strained. But, even if we accept the analogy on its own terms, we can readily
apprehend that the only relevant information for assessing the reliability of a
forensic technique is precisely that which Agent Meagher deems irrelevant: the
rate at which the roomful of mathematicians reaches correct results. In other
words, it is the roomful of mathematicians that constitutes forensic practice,
not the conceptual notion of the addition of abstract quantities. If defendants
were implicated in crimes by mathematicians adding numbers, a court would want
to know the accuracy of the practice of addition, not the abstract truth of the
principles of addition.
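The distinction Cole presses can be sketched in a few lines of simulation. This is purely an illustration, not anything from the article: `p_slip` and the addition task are invented stand-ins for practitioner error and the supposedly infallible method.

```python
import random

def room_accuracy(p_slip: float, trials: int, seed: int = 42) -> float:
    """Simulate the 'roomful of mathematicians': the abstract method
    (addition) is infallible, but each practitioner slips with
    probability p_slip. Returns the observed rate of correct answers,
    which is the figure Cole argues is the only one relevant in court."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        reported = a + b if rng.random() >= p_slip else a + b + 1  # a slip
        correct += (reported == a + b)
    return correct / trials

# A "zero methodological error rate" coexists with imperfect practice:
print(room_accuracy(p_slip=0.0, trials=10_000))   # 1.0 (no slips ever)
print(room_accuracy(p_slip=0.02, trials=10_000))  # roughly 0.98
```

The design point matches the text: the method's abstract correctness never changes, yet the room's observed accuracy, the only number a defendant would care about, tracks the practitioners.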
B. THE COURTS' VIEW OF ERROR RATE
Courts have been generally credulous of the parsing of error
into categories. In the first written ruling issued in response to an
admissibility challenge to fingerprint evidence under Daubert, the court
wrote: "The government claims the error rate for the method is zero. The claim
is breathtaking, but it is qualified by the reasonable concession that an
individual examiner can of course make an error in a particular case . . . Even
allowing for the possibility of individual error, the error rate with latent
print identification is vanishingly small when it is subject to fair adversarial
testing and challenge."
On appeal, the Seventh Circuit credited Agent Meagher's
testimony "that the error rate for fingerprint comparison is essentially zero.
Though conceding that a small margin of error exists because of differences in
individual examiners, he opined that this risk is minimized because print
identifications are typically confirmed through peer review." In United States
v. Crisp, the court similarly accepted at face value the testimony of a latent
print examiner "to a negligible error rate in fingerprint identifications."
In United States v. Sullivan, the court did share "the
defendant's skepticism that" latent print identification "enjoys a 0% error
rate." However, the court concluded that there was no evidence that latent print
identification "as performed by the FBI suffers from any significant error
rate," noting "FBI examiners have demonstrated impressive accuracy on
certification-related examinations." These are, of course, the examinations
characterized as laughable by Mr. Bayle and the Llera Plaza court. The court
allowed the government's unsupported claim of a "minimal error rate" to stand.
In the first decision in United States v. Llera Plaza (hereinafter
Llera Plaza I), the court allowed the claim of zero "methodological error rate"
to stand, although it dismissed it as largely irrelevant to the reliability
determination before the court.
In its second decision (hereinafter Llera Plaza II), however,
the court credited the testimony of FBI examiners that they were not themselves
aware of having committed any errors:
"But Mr. Meagher knew of no erroneous identifications
attributable to FBI examiners. Defense counsel contended that such non-knowledge
does not constitute proof that there have been no FBI examiner errors. That is
true, but nothing in the record suggests that the obverse is true. It has been
open to defense counsel to present examples of erroneous identifications
attributable to FBI examiners, and no such examples have been forthcoming. I
conclude, therefore, on the basis of the limited information in the record as
expanded, that there is no evidence that the error rate of certified FBI
fingerprint examiners is unacceptably high."
The court appears to have understood full well the point made
here (supra Part II.A.4.c) that because of the weakness of exposure mechanisms
it would be foolhardy to assume that known errors are any more than a small
subset of actual errors. Nonetheless, the court chose to use this argument to
uphold fingerprint evidence on the "error rate" prong of Kumho Tire. As I have
argued elsewhere, this was poor enough reasoning at the time, but it is even
more embarrassing now that two short years later we do have definitive proof
that the FBI has committed at least one exposed false positive: the Mayfield
case. The court's embarrassment should be even more acute since the Mayfield
case has brought to light that: one of the examiners implicated in the Mayfield
misattribution, John Massey, did, in fact, make errors that were exposed within
the organization itself prior to being presented in court; that Massey continued
to analyze fingerprints for the FBI and presumably testify in court with the
usual "infallible" aura; and that Massey was still hired to "verify" an
identification in an extremely high-profile case. Further, Massey's history of
false attributions was exposed during a trial in 1998, four years prior to the
Llera Plaza hearing.
The court would have been better educated by asking Agent
Meagher about exposed errors within the laboratory than focusing solely on the
highly unlikely exposure of errors after they are presented in court as
purportedly error-free. In my critique of Llera Plaza II, I argued that
Meagher's testimony was better evidence of the weakness of the FBI's
error-detection mechanisms than it was that the FBI had not committed any
errors. Interestingly, in a presentation about the Mayfield case to the IAI,
Meagher reportedly said the following: Question: "Has the FBI made erroneous
identifications before?" Steve: "The FBI identification unit started in 1933 and
we have had 6 or 7 in total, about 1 every 11 years. Some of these were reported
and some were not."
Given Meagher's sworn testimony in Llera Plaza I, we must
assume that he was referring here to errors that were caught within the
laboratory before being testified to in court. Where and how some of these
errors were "reported" is not clear.
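Meagher's reported figure is at least internally consistent: the span from 1933 to the 2005 presentation is 72 years, and six or seven errors over that span does work out to roughly one every eleven years.

```python
# Checking the arithmetic in Meagher's reported remark.
years = 2005 - 1933          # 72 years of the FBI identification unit
for errors in (6, 7):
    print(f"{errors} errors -> one every {years / errors:.1f} years")
# 72/6 = 12.0 and 72/7 ≈ 10.3, bracketing the quoted "about 1 every 11 years"
```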
The Third Circuit ruling on the appeal of Mitchell (the first
challenge to fingerprint evidence under Daubert) rejected Mitchell's argument
that there is no methodological error rate distinct from the practitioner's. But
the court's reasoning made clear that by "methodological error rate" it
understood something like an "industry-wide" error rate, as contrasted with an
individual practitioner error rate, not a theoretical error rate that is set by
fiat at zero. The court acknowledged the argument made in this article (infra
Part III.C) that it is problematic to automatically assign all known errors to
practitioners rather than "the method." But, like other courts, the Mitchell
court then went on to make the unsupported assertion that "even if every false
positive identification signified a problem with the identification method
itself (i.e., independent of the examiner), the overall error rate still appears
to be microscopic." From this, the court concluded that "the error rate has not
been precisely quantified, but the various methods of estimating the error rate
all suggest that it is very low." In short, the court completely neglected the
exposure problem indicated by the fortuity of the known false positives. Instead,
the court noted "the absence of significant numbers of false positives in
practice (despite the enormous incentive to discover them)."
Of course, a technique with a "very low" measured error rate
may be admissible, but it ought not be permitted to tell the fact-finder that
its error rate is "zero." Interestingly, the court acknowledges this, noting,
"the existence of any error rate at all seems strongly disputed by some latent
fingerprint examiners." The court looks dimly on this. In one of its "three
important applications" of "[t]he principle that cross-examination and
counter-experts play a central role in the Rule 702 regime," the court notes that
"district courts will generally act within their discretion in excluding
testimony of recalcitrant expert witnesses—those who will not discuss on
cross-examination things like error rates or the relative subjectivity or
objectivity of their methods. Testimony at the Daubert hearing indicated that
some latent fingerprint examiners insist that there is no error rate associated
with their activities.... This would be out-of-place under Rule 702. But we do
not detect this sort of stonewalling on the record before us."
Here, then, is a welcome and long overdue judicial
repudiation of latent print examiners' claim of a "zero methodological error
rate." The only baffling part is the court's patently false assertion that such
claims were not made "on the record before us," when, as we have seen, the claim
originated and was most fully developed in the very record before the court.
There is no record in which the claim of a zero error rate was made earlier, nor
any record in which it was made more forcefully.
In sum, not only do courts gullibly accept the claim of the
zero "methodological error rate," they also parrot totally unsupported
assertions from latent print examiners that the so-called "practitioner error
rate" is "vanishingly small," "essentially zero," "negligible," "minimal," or
"microscopic." These assertions are based on no attempt to responsibly estimate
the "practitioner error rate"; they are based solely on latent print examiners'
confidence in their own practice. Confidence, as we know from the world of
eyewitness identification, does not necessarily equate with accuracy. A sign of
hope, however, recently emerged from a concurring opinion in the Court of
Appeals of Utah, which suggested that "we should instruct our juries that
although there may be scientific basis to believe that fingerprints are unique,
there is no similar basis to believe that examiners are infallible."
"Methodological error rate" might be viewed not merely as a
product of latent print examiners' and prosecutors' misunderstanding of the
notion of error rate, but, worse, as a deliberate attempt to mislead finders of
fact. The concern over the potential for a finder of fact to give inflated
credence to evidence clad in the mantle of science is embedded in the very
notion of having an admissibility barrier. The potential to mislead a
fact-finder by saying, "My methodological error rate is zero, and my
practitioner error rate is negligible," is extremely high. The "methodological
error rate" is a bankrupt notion that should have been immediately rejected when
it was first proposed. Indeed, it probably would have been, had it not been
advanced in defense of something with such high presumed accuracy as latent
print identification. Since it was not rejected, courts should do as the Third
Circuit said (if not as it did) and exclude any testimony claiming that the
error rate of latent print identification (or, for that matter, anything) is
zero because of the extreme danger that fact-finders will give it credence. If
they do not, then all sorts of expert and non-expert witnesses will be able to
invoke this notion as well. Why should the manufacturer of litmus paper not be
able to claim that her litmus paper has a zero "methodological error rate"
because substances are either acid, base, or neutral? Why not allow the
eyewitness to claim a zero "methodological error rate" because only one person
was seen? Why not allow a medium to claim a zero "methodological error rate"
because the defendant is either guilty or innocent? Why not allow all pregnancy
tests to claim a zero "methodological error rate" because all women either are
pregnant or are not? The scientific and forensic scientific communities should
also explicitly disavow the notion of "methodological error rate" as it is
framed by latent print examiners.
C. ACCOUNTING FOR ERROR
In one sense, the claim of a zero "methodological error rate"
is merely a rhetorical ploy to preserve fingerprinting's claim to infallibility.
But, at the same time, it has a more insidious effect. The insistence upon
"methodological" infallibility serves to deter inquiry into how the process of
fingerprint analysis can produce errors. This, in turn, hampers efforts to
improve the process of fingerprint analysis and, possibly, reduce the rate of
error. Only by confronting and studying errors can we learn more about how to
prevent them.
The mechanism for assigning all errors to the category of
"human error" is attributing them to "incompetence." Elsewhere I have explored
the sociological dimensions of the fingerprint profession's mechanisms for
sacrificing practitioners who have committed exposed false positives on the
altar of incompetence, in order to preserve the credibility of the technique
itself. In fingerprint identification, incompetence is said to be the cause of
all known cases of error—at least all of those that are not assigned to outright
fraud or malfeasance. These attributions of incompetence, as we shall see, are
made in a retrospective fashion and without evidence. In short, the only
evidence adduced in favor of the claim that the examiner was incompetent is the
same thing incompetence is supposed to explain: the exposed misattribution.
Incompetence then supports a variant on the "zero methodological error rate"
argument; the claim that "the technique" is infallible as long as "the
methodology" is applied correctly. Again, attributions of incorrect application
of the methodology are made in a retrospective fashion without evidence. It is
the exposed error that tells us that correct procedures were not followed.
Fingerprint examiners steadfastly maintain that the process
is error-free in competent hands. Ashbaugh states, "When an examiner is properly
trained a false identification is virtually impossible." Wertheim père asserts,
"Erroneous identifications among cautious, competent examiners, thankfully, are
exceedingly rare; some might say 'impossible.'" Wertheim fils flatly declares,
"a competent examiner correctly following the ACE-V methodology won't make
errors." And, elsewhere: "When coupled with a competent examiner following the
Analysis, Comparison, Evaluation process and having their work verified,
fingerprint identification is a science, the error rate of the science is zero."
Beeton states, "As long as properly trained and competent friction ridge
identification specialists apply the scientific methodology, the errors will be
minimal, if any." These arguments can be sustained, even in the face of exposed
cases of misidentification committed by IAI-certified, or otherwise highly
qualified, examiners only by retrospectively deeming individuals who have
committed exposed false positives incompetent.
Thus, Sedlacek, Cook, and Welbaum, the three examiners implicated in the
Caldwell case, were deemed incompetent, despite being IAI-certified. In the
Cowans case, Massachusetts Attorney General Thomas Reilly, after failing to
secure a criminal indictment for perjury against LeBlanc, stated, "Science is
not an issue in this case. What we know is that there is a right way to do this
and the right way was not followed." LeBlanc himself said, curiously, "The
system failed me. And the system failed Cowans."
Regarding the Jackson case, Agent Meagher stated, "I think
this was a—a case where you need to really look at the qualifications of the
examiners. Having looked at the prints, I would certainly say that these
individuals were lacking the necessary training and experience needed to take on
that level of a comparison examination that they did."
Again, one of the three examiners implicated in the disputed
attribution (Creighton) was IAI-certified, and, therefore, should be difficult
to deem incompetent. On paper, the IAI-certified expert Creighton, who was
"wrong," was no less qualified than the IAI-certified experts Wynn and McCloud,
who were "right." It is only because we now agree with Wynn and McCloud that we
deem Creighton incompetent. Notice the circularity of Meagher's argument:
"Having looked at the prints, I would certainly say that these individuals were
lacking..." It is by looking at the evidence that we are able to judge the
expert's competence. Yet, in all routine fingerprint cases it is only by looking
at the competence of the expert that we are able to judge the evidence!
This approach to error raises the problem of the
unreliability of mechanisms to expose incompetence. Imagine, for instance, that
Jackson had fewer resources to marshal in his defense and had either been unable
to procure defense experts or had procured less able defense experts who had
corroborated the misidentification. The examiners who made the misidentification
would now be presumed competent. Indeed, according to the logic put forward by
proponents of fingerprint identification, the jury would be justified in
believing—or being told—that forensic fingerprint identification, when in the
hands of "competent" experts such as these, is error-free. Alternatively,
consider the case of the identifications made by these experts just before they
took on the Jackson case. Should the experts be deemed competent in these
judgments or incompetent?
Finally, it should be noted that all of these attributions of
incompetence are simply postulated. No evidence was advanced to show that
Sedlacek, Cook, Welbaum, or Creighton were incompetent. Instead, the presumed
misattributions serve as the sole evidence of incompetence.
1. Incompetence as a Hypothesis
At root, incompetence as an explanation for error is a
hypothesis. Proponents of forensic identification attribute all exposed errors
to incompetence. This may or may not be correct, but the answer cannot be known
simply by assuming the conclusion.
Consider once again the analogy with airplane crashes, an
area where the adjudication of the attribution of an accident to a category of
error (pilot or mechanical) is often hotly contested and highly consequential.
In this case, there are actors with an interest in both attributions of error
(the manufacturer and its insurer favor pilot error; the pilots' union—and
perhaps the victims, seeing the manufacturer as having the deeper pockets—favor
mechanical error). Clearly, reasons must be given to attribute the error to one
cause or the other. Although the attribution may be contested, both sides must
adduce evidence in favor of their hypothesized cause of error.
In the cases discussed above, the attribution of incompetence
is circular. No evidence is offered that the examiner is incompetent other than
the fact that he or she participated in an error. The fingerprint establishment
"knows" that the examiner is incompetent only because it disagrees with that
examiner's conclusion in a particular case. Thus, the fingerprint
establishment's judgment of the examiner's competence is based, not on any
objective measure of competence, but solely on whether it agrees with the
examiner's conclusions in one case.
The effect of this is the creation of what might be called "a
self-contained, self-validating system." Observe:
1. The proposition is urged that: Forensic fingerprint identification is 100%
accurate (error-free) when performed by a competent examiner.
2. This proposition can only be falsified (refuted) by the demonstration of a
case in which a "competent" examiner makes an error.
3. When cases of error are exposed, the examiners implicated are immediately,
automatically, and retrospectively deemed "incompetent."
4. No exposed error—and no number of exposed errors—can refute the proposition.
5. The proposition cannot be refuted.
Note also another effect of this: all criminal defendants are
forced into the position of assuming that examiners in their cases are
competent. Since incompetence is only exposed in a retrospective fashion (i.e.,
by making a misidentification) and such examiners are almost always
excommunicated from the profession, all criminal defendants are subject to the
"infallible" competent examiner.
The remarkable thing is that we can easily imagine a state of
affairs in which the proposition urged in (1) above can be tested. All we need
is some measure of competence that is not circular, that does not depend on
exposed misidentifications. For instance, one might reasonably treat the IAI's
certification examination as a measure of competence. In that case, we would
reason as follows:
1. The proposition is urged that: Forensic fingerprint identification is 100%
accurate (error-free) when performed by a competent examiner.
2. Passage of the IAI certification test is a measure of competence.
3. The proposition may now be falsified by the exposure of a case in which an
IAI-certified examiner is implicated in a misidentification. (Of course, in true
falsificationist fashion, even if no such case is exposed, we still do not know
that the proposition is true.)
4. IAI-certified examiners have been implicated in misidentifications (supra).
5. The proposition is false.
Note that this way of reasoning about error does not,
contrary to what some might suggest, cause the sky to fall upon forensic
fingerprint identification. All we have arrived at is the rather reasonable
position that forensic fingerprint identification is not error-free. Fingerprint
examiners admit this. But they attempt to have their cake and eat it too, by
insisting on some mythical error-free zone that is unsullied by exposed cases of
error.
The real danger of attributing error to incompetence is that it works just as
well, regardless of the actual accuracy of the technique. In fact, the tragic
irony of forensic fingerprint identification is that, even though it may be
highly accurate, it adopts modes of reasoning and argumentation so obscurantist
that they would work as well even if it were highly inaccurate.
2. Alternate Theoretical Approaches
As a hypothesis, the assignment of all exposed errors
to incompetence is unpersuasive. The range of circumstances, even in the very
small data set of exposed cases, is extremely broad. Errors have been committed
in obscure local law enforcement agencies by unheralded practitioners (Trogden)
and by the elite of the profession in the highest profile cases imaginable
(Mayfield). These examples suggest that error does not necessarily require an
explanation; it is part of normal practice and is hardly surprising. All areas
of scientific and technical practice are infused with error and have to confront
and try to understand their own sources of error. Indeed, in some areas of
science, like astronomy, as Professor Alder has recently eloquently described,
the understanding of error is, in some ways, the core of the scientific work.
Thus, one consequence of insisting upon incompetence as
the explanation for all errors is that it prevents us from understanding
anything about fingerprint errors. In place of the fingerprint community's
unhelpful and unsupportable insistence upon assigning all errors to
incompetence, I will suggest two sociological frameworks for thinking in a
realistic way about forensic errors.
a. The Sociology of Error
One way of understanding the fingerprint community's
insistence on the incompetence hypothesis draws from a sociology of science
notion called "the sociology of error." This refers to the tendency, in
commenting on science, to invoke "external causes," such as sociological or
psychological phenomena, asymmetrically, to explain only incorrect results, not
correct ones. Correct results are attributed solely to "nature," whereas false
results are attributed to bias, ambition, financial pressure, and other such
causes. For example, it has become commonplace to attribute Martin Fleischmann
and Stanley Pons's premature announcement of having achieved cold fusion to
"psychological" and "sociological" explanations: greed, ego, ambition, and the
excessive pressure to publish first that pervades contemporary science. However,
such explanations cannot explain incorrect results unless it is implausibly
assumed that these psychological and sociological forces are not operative when
science yields purportedly "correct" results. As Bloor puts it: "This approach
may be summed up by the claim that nothing makes people do things that are
correct but something does make, or cause them to go wrong. The general
structure of these explanations stands out clearly. They all divide behaviour or
belief into two types, right or wrong, true or false, rational or irrational.
They then invoke sociological or psychological causes to explain the negative
side of the division. Such causes explain error, limitation and deviation. The
positive side of the evaluative divide is quite different. Here logic,
rationality and truth appear to be their own explanation. Here psycho-social
causes do not need to be invoked... The central point is that, once chosen, the
rational aspects of science are held to be self-moving and self-explanatory.
Empirical or sociological explanations are confined to the irrational . . . .
Causes can only be located for error. Thus the sociology of knowledge is
confined to the sociology of error."
We can see the operation of this logic in latent print
examiners' self-analysis. Incompetence, prosecutorial pressure, over-haste, a
"bad day," vigilantism, and so on are invoked to explain errors. But presumably,
if these factors were in force when errors were produced, they were also in
force when supposedly "correct" results were produced as well.
As an antidote to the sociology of error, Bloor proposed the principles of
"impartiality" and "symmetry." Bloor proposed that sociological explanations of
the production of scientific knowledge would have to be capable of explaining
the production of both "false" and "correct" beliefs (impartiality). And, the
same causes would have to explain both "false" and "correct" beliefs (symmetry).
We might begin to apply an impartial, symmetric analysis to
fingerprint misattributions. The fingerprint community's inquiries into its own
errors tend to fall exactly into the sociology of error. Once it is determined
that the conclusion was in error, retrospective explanations are sought as
causes of the erroneous conclusions. But there is absolutely no evidence that
fingerprint misattributions are caused by "the process" gone awry. (Indeed,
because latent print examiners do not record bench notes—document what leads to
their conclusions—there would be no way of demonstrating this even if it were
true.) It is more likely that whatever process it is that produces correct
results also sometimes produces incorrect results.
If it were true that fingerprint errors had different
superficial attributes from correct conclusions, detecting errors would not be
difficult. We could simply devise ways of detecting incompetent examiners, bad
days, high-pressure laboratories, and so on. But the insidious thing about
fingerprint misattributions is that they look just like correct attributions, until
we identify them as misattributions.
In short, retrospective explanations of fingerprint
misattributions will not help us learn to identify them prospectively. This is
the intended meaning of my epigraph—not, as the reader may have initially
assumed, to liken latent print examiners to charlatans. The epigraph highlights,
with absurd precision, the obvious point that the insurance scam only works
because the mark cannot prospectively tell the difference between an honest
insurance salesman and an imposter. The same is true of a fingerprint
identification. The criminal justice system has no way of prospectively
distinguishing between correct latent print attributions and misattributions.
But, more importantly, it is true of the latent print examiner as well. A
falsely matching known print (an imposter) presumably looks much the same as a
truly matching one. What this leaves us with is an empirical question about
latent print examiners' ability to detect imposters. All the rest of it—good
intentions, the fact that there is only one finger that left the print—is beside
the point. Latent print examiners are not the phony insurance salesmen of my
epigraph; they are the victims, the unwitting consumers.
For instance, in the wake of the Mayfield case, some latent
print examiners have declared that they "do not agree with the identification."
But whether a latent print examiner agrees with an identification that is posted
on the Internet as a misattribution is of little interest to us. As my epigraph
suggests, we want to know whether latent print examiners can distinguish the
fake insurance salesmen from the real ones before they know they're phony, not
after.
b. Normal Accidents Theory
Another way of looking at this problem is drawn from "normal
accidents theory" (NAT). Professor Perrow suggests that many catastrophic
failures of technical systems are caused, not by deviation from proper
procedure, but from the normal functioning of highly complex and "tightly
coupled" systems (hence the term "normal accidents"). Fingerprint analysis is
not highly complex, but it is tightly coupled. Similarly, Professor Vaughan
suggests that error and misjudgment can be part of normal behavior, not
necessarily caused by deviance. NAT would suggest that fingerprint errors are
not pathological deviations from normal procedure, but simply consequences of
the system's normal operation.
Perrow's analysis of marine accidents is suggestive of the
type of "normal accident" that a latent print misattribution might be. These are
accidents that are to some extent caused by creating an erroneous image of the
world and interpreting all new, potentially disconfirming, information in light
of that "expected world." As Perrow puts it: "[W]e construct an expected world
because we can't handle the complexity of the present one, and then process the
information that fits the expected world, and find reasons to exclude the
information that might contradict it. Unexpected or unlikely interactions are
ignored when we make our construction."
Now consider Wertheim père's description of latent print
identification: "[T]he examiner would proceed with experimentation (finding
features in the latent print, then examining the inked print for the same
features) until the instant that the thought first crystallizes that this is, in
fact, an identification. . . . The examiner continues to search for new features
until it is reliably proven that each time a new feature is found in the latent
print, a corresponding feature will exist in the inked print."
While Wertheim thinks he has described "science," he has in
fact described a process of gradually biasing his analysis of new information
based on previously analyzed information. Could this be what happens in a
fingerprint misattribution? Could it be that an examiner, having formed a
hypothesis that two prints come from a common source, interprets potentially
disconfirming information in a manner consistent with this hypothesis? Could
this explain why latent print examiners make misattributions that in retrospect
seem clearly erroneous?
3. Alternate Causes of Error
I have argued that we need to confront and understand the
nature of fingerprint error, rather than minimizing it, dismissing it, or
retrospectively blaming it on incompetence. I will now suggest two possible
causal mechanisms for fingerprint errors. While I cannot demonstrate a causal
relationship between these factors and fingerprint errors, or arbitrate between
these two mechanisms, I would suggest that they are at least as likely to be
causal mechanisms as incompetence.
a. Natural Confounding
The first alternate hypothesis is that disputed attributions
are caused by the existence of areas of friction ridge skin on different
persons' fingertips that, while not identical, are in fact quite similar. This
possibility has been totally dismissed by the fingerprint community in its
insistence upon the absolute uniqueness of all areas of friction ridge skin, no
matter how small. This fact is supposed to rest upon a "law" that nature will
never repeat the same pattern exactly. Even accepting, for the moment, this
flawed argument, it does not follow that nature might not produce confounding
patterns. In other words, nature might produce areas of friction ridge skin
that, though not identical, are quite similar, similar enough to be confounded
when using the current tools of analysis (i.e., "ACE-V").
In some sense this would be analogous to what, in forensic
DNA typing, is called an "adventitious" or "coincidental" match. This refers to
the fact that, given a certain DNA profile, a certain number of individuals may
be expected to match the profile, even though they did not, in fact, leave the
crime-scene sample. This expectation is phrased as the "random match
probability." There is an important difference between an adventitious match in
DNA and the analogous phenomenon in fingerprinting. In a DNA adventitious match,
the samples do in fact match. In other words, there is no way of knowing that
the match is "adventitious" rather than "true," other than, perhaps, external
factors that make the hypothesis that the identified individual is the true
source of the match implausible (such as, that the individual was incarcerated
at the time).
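The scale of the adventitious-match problem can be made concrete with a back-of-the-envelope calculation. Both figures below (the random match probability and the database sizes) are hypothetical, chosen only to illustrate how the expected number of coincidental matches scales:

```python
# Illustrative sketch: expected number of adventitious ("coincidental")
# matches when a profile with a given random match probability (RMP) is
# searched against a database. All numbers here are hypothetical.

def expected_adventitious_matches(rmp: float, database_size: int) -> float:
    """Expected count of coincidental matches: RMP times database size."""
    return rmp * database_size

rmp = 1e-6  # hypothetical random match probability for some profile
for n in (10_000, 1_000_000, 10_000_000):
    matches = expected_adventitious_matches(rmp, n)
    print(f"database of {n:>10,} profiles: {matches:,.2f} expected coincidental matches")
```

On these assumed figures, a small local database would be unlikely to contain any coincidental match, while a multi-million-profile database would be expected to contain several, which is why the "random match probability" matters more as databases grow.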
Because a fingerprint match is a subjective determination,
there is a sense in which an adventitious fingerprint match does not "really"
match. That is, once the match has been deemed erroneous one can find
differences between the two prints. There are always differences between print
pairs. In an identification, however, these differences are deemed explainable
by the examiner.
b. Bias
A second alternative hypothesis is bias. Bias can come in
many forms. The tendency of forensic scientists to suffer from "pro-prosecution
bias," which may be more or less conscious, as a consequence either of being law
enforcement agents, identifying closely with them, or of simply working closely
with them, has been well noted. This certainly might be problematic in
fingerprint identification where a high proportion of practitioners are law
enforcement officers and virtually all practitioners acquired their expertise
through work in law enforcement. However, by "bias" I also mean to refer to a
politically neutral form of psychological bias that has nothing to do with the
analyst's conscious or unconscious feelings about the parties in criminal cases.
In a groundbreaking article on "observer effects" in forensic science,
Professors Risinger, Saks, Thompson, and Rosenthal draw on psychological
literature to support the argument that forensic technicians engaged in tasks of
determining whether two objects derive from a common source, like latent print
identification, are subject to "expectation bias." In other words, the very
process of what, in the so-called "ACE-V methodology," is called
"Comparison"—going from the unknown to the known print—to see if the ridge
detail is "in agreement" may create an expectation bias. Features seen in the
unknown print may be more likely to be "seen" in the known print or, even more
insidiously, vice versa. These effects will tend to cause observers to expect to
see similarities, instead of differences, and may contribute to erroneous source
attributions. Risinger et al. note that observer effects are well known in areas
of science that are highly dependent on human observation, like astronomy, and
these disciplines have devised mechanisms for mitigating, correcting, and
accounting for them. Forensic science, however, has remained stubbornly
resistant to even recognizing that observer effects may be in force.
Latent print identification—in which no measurements are
taken, but a determination is simply made by an examiner as to whether she
believes two objects derive from a common source—is a prime candidate for the
operation of observer effects. Several factors support the plausibility of the
observer effects hypothesis. First, many of the disputed identifications
discussed above were confirmed by second, third, and even fourth examiners.
Since there is no policy in place for blinding verifiers from the conclusion
reached by the original examiner, these examiners almost surely knew that their
colleagues had reached conclusions of identification. This suggests that
examiners are indeed subject to expectation and suggestibility and that these
forces can cause them to corroborate misattributions. If expectation bias causes
latent print examiners to corroborate misattributions, could it cause them to
generate them as well?
Even more suggestive are the cases in which examiners
employed by the defense corroborated disputed attributions. That defense
examiners sometimes corroborate disputed attributions would suggest that
expectation and suggestion are so powerful they can overcome the defense
expert's presumed pro-defendant bias. If anything, we would expect defense
examiners to be biased in favor of exclusion because they are working for
clients with an interest in being excluded. The work of a defense examiner
likely consists mainly of confirming that the state's examiners did in fact
reach the right conclusion. This may create a situation in which the defense
examiner expects virtually all print pairs put before her to match. The fact
that defense examiners have corroborated disputed identifications indicates that
expectation bias may be even more powerful than the expert's bias toward the
party retaining her.
4. The Mayfield Case
It will be useful to explore the possible roles of natural
confounding and observer effects by returning to what is perhaps the richest and
most theoretically interesting (as well as the most recent and sensational)
misattribution case: the false arrest of Brandon Mayfield. In the wake of the
uproar, the FBI promised a review by "an international panel of fingerprint
experts." That review is now complete, and the FBI has published a "synopsis" of
the International Review Committee's findings.
The report adopts the rhetorical distinction between "human"
and "methodological" error, claiming, "The error was a human error and not a
methodology or technology failure." The claim that the Mayfield error somehow
did not involve "the methodology" as properly practiced is particularly
difficult to sustain given the impeccable credentials of the laboratory, the
individual examiners, and the independent expert.
The most easily dismissed hypothesis was that the error was
caused by the digital format in which the Madrid print was transmitted to the
FBI. A second hypothesis posed by an anonymous FBI official in the press was
"that the real issue was the quality of the latent print that the Spaniards
originally took from the blue bag." But, this explanation can also be dismissed
because the Spanish were apparently able to effect an identification to Daoud
from the latent print, so the latent print was presumably of adequate quality.
As mentioned above, the report singles out the high-profile
nature of the case as an explanation for the error. This conclusion is
interesting—and quite damaging to latent print identification's claims to
objectivity. If latent print identification is less reliable in high profile
cases, then how objective can the analysis be? But, pending further evidence,
the conclusion is unpersuasive. The report offers no evidence, such as
statements by the examiners, as to how the high-profile nature of the case might
have influenced them. Instead, because an error occurred in a high-profile case,
the report simply assumes that a causal relationship exists.
There is no reason for us to accept this hypothesis as more
persuasive than the NAT hypothesis: that the error was a product of normal
operating procedure and that, if anything, the high-profile nature of the case
is an explanation for the error's exposure, not its occurrence.
At bottom, blaming the error on the nature of the case is
merely a continuation of the rhetorical strategy of seeking to dismiss all
errors as exceptional cases. The latent print community's characterization of
the error as "the perfect storm" illustrates the effort to portray the case as
so exceptional that it remains irrelevant to all other cases.
Ultimately, the report itself identifies the reason that we
are unlikely ever to find a persuasive explanation for the error. Because latent
print examiners do not keep bench notes (they do not document their findings),
it is nearly impossible to retrospectively reconstruct a misattribution.
Given the impossibility of reconstructing the examiner's
subjective process, let us explore the possibilities of natural confounding and
bias. Could the Mayfield error be due to natural confounding? The FBI press
release refers to "the remarkable number of points of similarity between Mr.
Mayfield's prints and the print details in the images submitted to the FBI." The
possibility that the Mayfield case represents the first exposed "adventitious
cold hit" in a latent print database is intriguing.
As I have noted elsewhere, the idea of searching latent
prints in some sort of seamless global database has been an unfulfilled dream
throughout the twentieth century. Only today is computer technology beginning to
make such a "global database" possible, although there are still formidable
problems with making national and regional databases technically compatible.
Latent print examiners' flawed argument that, in the course of filing and
searching fingerprint records, they had never come across two identical prints,
was always based on searches of local databases. Since a truly global search was
impractical, fingerprint examiners extrapolated from the absence of duplicates
in local databases the much broader principle that, were one to search all the
world's databases, one would not find duplicates either. Today, functionally
global searches are becoming practicable in high profile cases (such as an
alleged Al Qaeda terrorist attack on European soil). Given the nature of the
case and the rapidly advancing technology, the Madrid print may have been one of
the most extensively searched latent prints of all time. It may be that the
Mayfield case demonstrates what may happen when one actually does a global
search: one finds very similar, though not completely identical, areas of
friction ridge skin. This is analogous to a phenomenon long observed by DNA
analysts: as the size of the databases increases, the likelihood of an
adventitious cold hit increases as well.
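That observation can be sketched numerically. Under the simplifying (and purely hypothetical) assumptions that each non-source print in a database has some small, fixed probability of being confusably similar to a given latent, and that comparisons are independent, the chance of at least one confounding candidate rises sharply with the number of prints searched:

```python
# Illustrative sketch: probability of at least one confusably similar
# print in a database search of n prints, assuming an independent,
# hypothetical per-comparison confusion probability p.

def prob_at_least_one_hit(p: float, n: int) -> float:
    """P(at least one confounding candidate) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

p = 1e-7  # hypothetical chance that any one non-source print is confusable
for n in (100_000, 10_000_000, 500_000_000):  # roughly: local, national, "global" search
    print(f"n = {n:>11,}: P(at least one near-duplicate) = {prob_at_least_one_hit(p, n):.3f}")
```

Under these assumed figures the risk is negligible for a local search but approaches certainty for a global one, which is the DNA analysts' point transposed to fingerprints: the absence of near-duplicates in local files says little about what an exhaustive search would turn up.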
Oddly enough, what makes the natural confounding hypothesis
seem less plausible are precisely the misleading "suspicious facts" about
Mayfield: his conversion to Islam, his Egyptian spouse, his military service,
and his connection to the "Portland Seven." If the Mayfield error were purely an
adventitious cold hit—a case of a computer searching for the closest possible
match to a latent print created by Daoud and then gulling the examiner into
making a misattribution—what is the likelihood that the victim of the
adventitious cold hit would be an individual with such seemingly plausible
connections to an Al Qaeda operation, as opposed to, say, an octogenarian
evangelical Christian with a criminal record? This suggests that the "suspicious
facts" about Mayfield may have been introduced into the latent print
identification process at some point, at least "firming up," if not actually
generating, the misattribution. If facts about Mayfield did influence latent
print examiners, then it was a highly improper introduction of
"domain-irrelevant information" into what should have been a technical analysis.
While it would be proper for an investigator to use the "suspicious facts" about
Mayfield to evaluate the plausibility of the latent print match, it is highly
dangerous for a forensic technician to do so. But an anonymous FBI source has
strenuously denied that the latent print analysts knew anything about Mayfield
before they made the attribution.
Even without domain-irrelevant information, the possibility
of unconscious bias ("observer effects") remains strong. The initial analyst,
Green, may have been induced to seek the best possible match among those
produced by the database search. The "verifiers," Wieners and Massey, may have
been unconsciously influenced by the fact that Green had made an attribution.
Then Moses, who did know the domain-irrelevant information, but whose bias ought
to have pointed away from attribution, also corroborated the false attribution.
Even in the face of the Mayfield case, the fingerprint community continues to
seek to minimize the significance of error. Wertheim père, for example, advised
his colleagues to give the following testimony when asked about the Mayfield case:
A: "(turning to the jury) The FBI fingerprint section was
formed in 1925. Over the last 79 years, the FBI has made tens of
thousands, hundreds of thousands, probably millions of correct identifications.
So now they finally made a mistake that led to the arrest of an innocent man,
and that is truly a tragic thing. But figure the "error rate" here. Are
fingerprints reliable? Of course they are. Can mistakes be made? Yes, if proper
procedures are not strictly followed. But I cannot think of any other field of
human endeavor with a track record of only one mistake in 79 years of practice."
To interpret Mayfield as showing that the FBI has made only
one error in seventy-nine years, as opposed to only having had one error exposed
in seventy-nine years, exhibits a complete denial of the exposure problem
detailed above (supra Part II.A.4.c).
As I have argued elsewhere, the myth of the infallibility of
fingerprint identification is in many ways a historical accident. I suggest,
with Iain McKie, that it is a burden that fingerprint examiners never ought to
have been asked to shoulder and never ought to have assumed. Unable to resist
the offer of infallible expert witness status, fingerprint examiners have now
painted themselves into a corner in which they must resort to rhetorical
gymnastics in order to maintain the claim of infallibility in the face of
mounting evidence of error. We can help them out of this corner and give finders
of fact a more realistic way of assessing the trustworthiness of latent print
attribution, but the examiners will have to leave the "zero methodological error rate" claim behind.
We need to acknowledge that latent print identification is
susceptible to error, like any other method of source attribution, and begin to
confront and seek to understand its sources of error. I have drawn some
tentative conclusions in this paper based on what is probably a very inadequate
data set of exposed errors in the public record. Some of these conclusions may
not be sustained once a more complete data set is obtained. One way to begin the
process of studying error would be for law enforcement agencies and the
professional forensic science community to begin assembling a more complete data
set of latent print errors. In addition, the IAI should put in place a regular
mechanism for reviewing cases of disputed identifications, as was done in
Jackson. This mechanism should be known and publicized, without hesitation over
the fact that it will expose latent print attributions as sometimes erroneous.
Then latent print error will no longer be "a wispy thing like smoke."
Complete article with citations:
The Journal of Criminal Law & Criminology, Vol. 95, No. 3,
Feel free to pass The Detail along to other
examiners. This is a free newsletter FOR
latent print examiners, BY latent print examiners. There are no copyrights on
The Detail (except in unique cases such as this week's article), and the website
is open for all to visit. FAIR USE NOTICE: This
newsletter contains copyrighted material, the use of which has not always been specifically authorized by
the copyright owner. www.clpex.com makes such material available in an
effort to advance scientific understanding in the field of latent
prints, thus constituting a 'fair use' of copyrighted material as provided for in section 107 of the US Copyright
Law. In accordance with Title 17 U.S.C. Section 107, the material
on this site is displayed without profit to those who wish to view this
information for research and/or educational purposes. If you wish to
use copyrighted material from this site for purposes of your own that go
beyond 'fair use', you must obtain permission from the copyright owner.
If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox,
go ahead and join the list now
so you don't miss out! (To join this free e-mail newsletter, enter your
name and e-mail address on the following page:
You will be sent a
Confirmation e-mail... just click on the link in that e-mail, or paste it
into an Internet Explorer address bar, and you are
signed up!) If you have
problems receiving the Detail from a work e-mail address, there have been past
issues with department e-mail filters considering the Detail as potential
unsolicited e-mail. Try subscribing from a home e-mail address or contact
your IT department to allow e-mails from Topica. Members may
unsubscribe at any time. If you have difficulties with the sign-up process
or have been inadvertently removed from the list, e-mail me personally at
firstname.lastname@example.org and I will try
to work things out.
Until next Monday morning, don't work too hard or too little.
Have a GREAT week!