Breaking NEWz you can UzE...
compiled by Jon Stimac
Super Glue Gives Clue for Nanofibers
- Jan 27, 2006 ...engineers say fingerprint
technology gave them the crucial clue needed to discover an easy,
versatile method of making nanofibers...
Fingerprints Key to 18-year-old Rape
CITIZEN, SOUTH AFRICA
- Jan 24, 2006
...prints were taken at the scene but at the time no
suspects could be linked to the crime...
Police to Unveil CSI-style Unit for Evidence
BOSTON GLOBE, MA - Jan 22, 2006 ...a new Boston police
crime scene unit has completed intensive training in state-of-the-art
evidence collection techniques...
Serial killer arrested in Mexico
Sydney Morning Herald, AU - Jan 25, 2006 ...fingerprints
of Mexico's most-wanted serial killer, the "Little Old Lady Killer",
match in 10 other murder cases as well...
Recent CLPEX Posting Activity
Threads containing new posts:
Moderated by Steve Everist
nator9692 30 Sat Jan 28, 2006 8:54 pm
Latent Print Certification Experience Requirements
Vicki Farnham 734 Fri Jan 27, 2006 7:52 pm
Funniest Item Requested to be Processed
Steve Everist 3813 Tue Jan 24, 2006 11:20 pm
RAM Dye Stain
Andrew Schriever 364 Tue Jan 24, 2006 7:51 pm
Lisa 378 Tue Jan 24, 2006 2:42 pm
Deuby 487 Tue Jan 24, 2006 12:16 pm
The V in ACE-V?
Mark Mills 1384 Mon Jan 23, 2006 8:14 am
UPDATES ON CLPEX.com
No major updates on the website this week. Last week we looked at
news of a third
erroneous identification in the McKie/Asbury case.
This week we look at a new article from Simon Cole
regarding his critical position on the validity of fingerprint
identification. Next week we will take a look at the basics of a
recent study on fingerprint examiner accuracy that renders a large portion
of his comments obsolete, but until then, here it is:
Is Fingerprint Identification Valid?
Rhetorics of Reliability in Fingerprint Proponents' Discourse
By Simon A. Cole
Beginning around 1999, a growing number of scholars have claimed that
validation studies for forensic fingerprint identification do not exist.
This article revisits that claim by reviewing literature produced by
proponents of fingerprint identification in response to that charge. It
shows that fingerprint proponents employ rhetorical tricks in which they
claim to address the validity question, but then subtly shift the question
to ones that are easier to address. The article explores
several different rhetorical strategies fingerprint proponents use to appear
to be demonstrating validity, while in fact demonstrating other things.
These include “the fingerprint examiner’s fallacy” and “the casework
fallacy.” The inability of fingerprint proponents to refute the charge that
validity studies are lacking is further evidence that the charge is, in
fact, true.
Since at least 1999, when a landmark five-day admissibility hearing was held
in a burglary case (United States v Mitchell 1999), a debate has raged
inside and outside U.S. courtrooms over forensic fingerprint identification.
This debate has been framed in a number of different ways. Some of these
include: Is forensic fingerprint identification a science? Is it infallible?
Is it better or worse than forensic DNA analysis? Is it true that no two
fingerprints are alike? These are all interesting questions that deserve to
be addressed in great depth. In this article, however, I want to focus on
the one issue that I think is most fundamental. That question is: Is latent
print identification valid? The very meaning of the term “validity” is
ambiguous and contested in the law. It may, therefore, be helpful to begin
by trying to bring some clarity to this highly charged buzzword.
The question of the validity of forensic fingerprint identification is of
enormous legal import. The Supreme Court has mandated that “Proposed
testimony must be supported by appropriate validation” (Daubert v Merrell
Dow Pharmaceuticals 1993 at 590). In addition, Daubert imposed on federal
trial judges (as well as the trial judges of the many states that have
adopted Daubert wholly or in part) an obligation to ensure the “reliability”
of expert evidence put before a jury (ibid. at 589).
As Giannelli (1980: 1201) has pointed out, when courts say “reliable,” they
generally mean “accurate” or “valid.” Courts, Giannelli noted, have an
unfortunate tendency to use these three terms “interchangeably.” In fact:
"the terms have distinct meanings in scientific jargon. “Validity” refers to
the ability of a test procedure to measure what it is supposed to measure—
its accuracy. “Reliability” refers to whether the same results are obtained
in each instance in which the test is performed—its consistency. Validity
includes reliability, but the converse is not necessarily true." (Ibid.:
1201 n. 20)
A stuck clock, for example, is reliable but not valid: it is accurate only
twice a day. Although courts use the term “reliability” far more often than
they use “validity,” in the vast majority of cases it is validity that they
are interested in. Only rarely would courts be expected to care about the
reliability of a technique offered into evidence; what they almost always
want to know is its validity.
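Giannelli's distinction can be made concrete with a toy numerical sketch (all values below are invented for illustration, not drawn from the article): a "stuck clock" estimator is perfectly reliable, because it always returns the same answer, yet badly invalid, while a noisy but unbiased instrument is less consistent and far more accurate.

```python
import statistics

true_value = 14.5  # ground truth, e.g. the actual time in hours

# A "stuck clock": perfectly consistent, but usually wrong.
stuck_readings = [6.0] * 10

# A noisy but unbiased instrument: less consistent, far more accurate.
noisy_readings = [14.3, 14.6, 14.4, 14.7, 14.5, 14.2, 14.8, 14.5, 14.6, 14.4]

def reliability(readings):
    """Consistency: the spread across repeated measurements (lower is better)."""
    return statistics.pstdev(readings)

def validity_error(readings):
    """Accuracy: average distance from the true value (lower is better)."""
    return sum(abs(r - true_value) for r in readings) / len(readings)

print(reliability(stuck_readings))     # 0.0 -> perfectly reliable
print(validity_error(stuck_readings))  # 8.5 -> badly invalid
print(reliability(noisy_readings))     # small spread: reasonably reliable
print(validity_error(noisy_readings))  # close to truth: far more valid
```

The stuck clock shows why reliability does not entail validity; the converse containment Giannelli describes (validity includes reliability) holds because an instrument that is wildly inconsistent cannot be consistently close to the truth.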
The term “accuracy,” however, is almost interchangeable with “validity”
(Thornton & Peterson 2002: 19), although “accuracy” is perhaps preferable in
that it implies a continuous measurement, whereas “validity” seems to imply
an either-or judgment. Accuracy measures how often a technique or process
reaches a correct conclusion. Given some specified level of accuracy, we
might anoint a technique as “valid.” It is thus critical to be precise about
what these terms mean. In this article, I will use the technical meanings of
these terms, except when quoting the courts.
Validation is usually achieved through a “validity study,” which
Imwinkelried (1995: 1254) defined as follows:
A validity study is designed to measure the accuracy of a scientific
technique. The study attempts to identify and quantify the inherent margin
of error in the technique; the researcher inquires how often the technique
will yield inaccurate results even when the analyst strictly follows proper
test procedure. By way of example, suppose that the hypothesis is that a new
type of breathalyzer validly measures a person’s blood alcohol concentration
(“BAC”). The researcher could test that hypothesis by using the instrument
to gauge the BAC of a number of persons who had consumed alcohol and
comparing the instrument’s readouts with direct blood tests of the same
persons. Since direct blood testing has already been established as a valid
method of measuring BAC, a coincidence between the readings and the direct
blood test results would tend to verify the hypothesis. However, the
readings might disagree with the direct blood alcohol test results to an
extent. The degree of disagreement would indicate how frequently the new
technique yields inaccurate results even when the analyst follows correct
test procedure.
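Imwinkelried's breathalyzer example boils down to comparing the candidate technique's output against an already-validated reference method and quantifying the disagreement. A minimal sketch of that logic, with entirely hypothetical paired measurements:

```python
# Hypothetical paired measurements: a new breathalyzer vs. direct blood
# testing (the established, valid reference method). All values invented.
breathalyzer = [0.08, 0.11, 0.05, 0.15, 0.09]
blood_test   = [0.06, 0.11, 0.05, 0.12, 0.09]

tolerance = 0.01  # disagreement beyond this is treated as an inaccurate result

errors = [abs(b, r) if False else abs(b - r) for b, r in zip(breathalyzer, blood_test)]
inaccurate = sum(1 for e in errors if e > tolerance)

# The disagreement rate estimates how often the technique errs even
# when the analyst strictly follows proper test procedure.
error_rate = inaccurate / len(errors)
print(error_rate)  # 0.4 -> two of the five readings exceed the tolerance
```

The key design feature is the known ground truth: because the reference method is already validated, the disagreement rate can be read as the new technique's inherent margin of error.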
In short, the “validity” of, say, a technique or a process hinges on its
ability to in fact do what it claims to be able to do. What does the process
of latent print identification claim to be able to do? According to the
government in Mitchell, latent print examiners claim to be able to do
something called “individualization.” The claim is that latent print
examiners can attribute prints of unknown origin (“latent” prints) to their
correct source finger to the exclusion of all other possible fingers in the
world.
Notice that I have not said that latent print examiners’ claim is that all
human fingerprints are unique. This is a knowledge claim that is somewhat
relevant to latent print identification—latent print identification would be
far less useful if it were not true—but it is not fundamental to latent
print examiners’ claims to expertise. In other words, the claim
could be true, and
yet latent print examiners might still be very poor at making correct source
attributions. For example, it is probably true that all human faces—even
of “identical” twins—are unique, but eyewitnesses are notoriously poor at
making correct attributions (Loftus 1996). The notion that uniqueness is
latent print examiners’ fundamental knowledge claim is something I have
elsewhere labeled “the fingerprint examiner’s fallacy” (Cole 2004).
In order to know whether latent print identification is valid, we need to
measure how accurate it is. A properly conducted validation study would not
yield a binary response: latent print identification “is” or “is not” valid.
Rather, it would yield an accuracy rate: latent print identification is
accurate approximately such-and-such percent of the time. Moreover, as
Denbeaux and Risinger (2003) have pointed out, latent print identification is
not a single task; rather, it is a continuum of tasks. At the most basic
level, “individualizing” using ten complete rolled, inked prints is a far
different task than individualizing using a single latent print. Similarly,
individualizing using a good-quality latent print is a different task than
individualizing using a poor-quality one. (The Supreme Court, they argue,
has recognized the importance of distinguishing tasks by using the term
“task at hand” in Daubert.) A properly conducted validation study,
therefore, would not yield a single accuracy rate; it would yield an
accuracy curve, in which the accuracy rate varies depending on the
difficulty of the task. 
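The idea of an accuracy curve rather than a single rate can be sketched by grouping trial outcomes by task difficulty. The trials and difficulty labels below are invented purely for illustration:

```python
# Each (difficulty, correct) pair is one hypothetical test trial:
# difficulty describes the task (print quality and quantity), and
# correct records whether the source attribution was right. Invented data.
trials = [
    ("ten rolled prints", True), ("ten rolled prints", True),
    ("ten rolled prints", True), ("ten rolled prints", True),
    ("good latent", True), ("good latent", True),
    ("good latent", True), ("good latent", False),
    ("poor latent", True), ("poor latent", False),
    ("poor latent", False), ("poor latent", False),
]

# One accuracy rate per task difficulty -- an accuracy curve, not a
# single number for "fingerprint identification" as a whole.
accuracy = {}
for difficulty, correct in trials:
    total, right = accuracy.get(difficulty, (0, 0))
    accuracy[difficulty] = (total + 1, right + correct)

for difficulty, (total, right) in accuracy.items():
    print(f"{difficulty}: {right / total:.2f}")
```

On data like this, accuracy degrades from the rolled-print task to the poor-latent task, which is exactly why a single headline "error rate" for the technique would obscure the task-at-hand distinction Denbeaux and Risinger draw.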
If forensic fingerprint identification cannot demonstrate its validity, it
may be grounds for exclusion under Daubert and its progeny cases (Epstein
2002; Daubert v Merrell Dow Pharmaceuticals 1993; General Electric Co. v
Joiner 1997; Kumho Tire v Carmichael 1999). There has been much confusion
about whether Daubert and its progeny require validity studies for any
expert testimony to be admissible. Critics of Daubert have correctly pointed
out that there are all sorts of knowledge claims, emanating from fields that
we would not hesitate to characterize as legitimate science, such as
physics, medicine, and evolutionary theory, that nonetheless would have
difficulty pointing to a “validity study” that establishes their truth
(Caudill & Redding 2000; Haack 2005; Laudan 1982, 1983). The Supreme Court
called for “appropriate validation”; it did not insist upon controlled
experimental validation (Imwinkelried 2003). Should latent print
identification escape the requirement for proof of validity on these
grounds? It would appear not. The difference between latent print
identification and the aforementioned knowledge claims is that latent print
identification can easily be subjected to controlled experimental validation
testing. Indeed, it is difficult to see how one could reasonably argue that
controlled experimental validation testing is not the “appropriate
validation” method to answer the empirical question of whether latent print
examiners’ source attributions are correct. Devising validity tests for
certain knowledge claims in physics, evolutionary theory, or medicine, in
contrast, would be impossible and/or unethical. Validity testing of latent
print examiners’ knowledge claims poses no such obstacles.
The most obvious way to measure accuracy would be to ask latent print
examiners to replicate, as closely as possible, the task they perform in
casework: making source attributions for latent prints of unknown origin. If
studies were conducted in which the true origin of the latent prints were
known to the tester, we could measure the accuracy of latent print
examiners’ source attributions. 
It is this type of study that scholars have found lacking. The closest thing
to it would be a series of proficiency tests that have been conducted on
latent print examiners since 1983. There are a number of problems with
construing proficiency tests as measuring the accuracy rate of latent print
examiners. First, there is a clear difference in purpose between proficiency
studies and validity studies. As Imwinkelried again explains:
A proficiency study is radically different. While a validity study tests a
scientific technique, in a proficiency study the object of the test is a
particular analyst or laboratory. The validity test is designed to ensure to
the extent possible that correct test procedures are used; in a validity
test, the researchers attempt to eliminate any concern about the use of a
proper test procedure because they want to reach the central question of how
often the scientific technique itself will produce inaccurate results
despite proper test protocol. In contrast, a proficiency study endeavors to
measure the analyst’s proficiency in the sense of the probability that he or
she will consistently use proper test procedure. Assuming the validity of
the technique, the researcher inquires into the probability that while using
the technique, the analyst will “mistakenly use the wrong materials or make
the wrong measurement. . . .” Even when the analyst is utilizing a valid
technique, the analyst might commit a “performance–type error”. (1995: 1255,
citations omitted) 
Second, there are various reasons, including the difficulty of the tests and
the uncontrolled conditions under which they are administered, that suggest
that latent print examiners might over-perform on them. Lastly, fingerprint
proponents themselves oppose construing proficiency test results as
indicative of the error rate of routine fingerprint practice. This, no
doubt, has a lot to do with the fact that the error rates on these
proficiency tests, though not extraordinarily high, are non-zero (Haber &
Haber 2003). This creates a conflict with fingerprint identification’s
claims of infallibility.
Fingerprint proponents claim that latent print examiners underperform on
proficiency tests because they take them less seriously than casework.
Indeed, fingerprint proponents have reacted with outrage when others have
sought to use proficiency test data as a provisional measure of the accuracy
of latent print identification (Langenburg 2002). Fingerprint proponents are
certainly correct that the proficiency tests make for very poor validity
studies. But, if they wish to deny any construal of proficiency tests as
validation studies, then we are back where we started: no validation studies
exist.
It has also been argued that Daubert and Kumho are flexible, or even low,
standards and that reading them to demand validity testing from assays that
can be subjected to validity testing is too strict a reading. It is true
that Daubert and Kumho are intended to be flexible. But that does not
necessarily extend to allowing techniques that can easily be validity tested
to simply opt for other means of vouching for their own accuracy. The
reading of Daubert as a low threshold, meanwhile, is now generally viewed as
a myth (Cranor & Eastmond 2001; Graham 2000; Saks 1998). In fact, it appears
that the stringency with which Daubert and Kumho are applied really depends
upon whether the evidence is offered in civil or criminal litigation (Edmond
& Mercer 2004; Risinger 2000). The practical effect of this, of course, is
that Daubert is applied stringently to civil plaintiffs’ evidence and
loosely to the government’s forensic evidence.
The claim that latent print identification lacked validation was first
raised in a legal setting in a challenge to the admissibility of fingerprint
evidence under Daubert in the case United States v Mitchell. Mitchell made
it clear that he was—properly—questioning the validity of latent print
examiners’ claim to be able to make correct source attributions, not the
uniqueness of all human fingerprints. This was stated quite clearly in
Mitchell’s pre-hearing brief:
The government submits that, in contrast to handwriting evidence, “it is
well established that fingerprints are unique to an individual and
permanent.” Again, however, the government simply misses the point. The
question is not the uniqueness and permanence of entire fingerprint
patterns, but the scientific reliability of a fingerprint identification
that is being made from a small distorted latent fingerprint fragment.
(United States v Mitchell 1999 at 62, citation omitted)
Even before the Mitchell hearing, several commentators had noted the lack of
validation for fingerprint identification. For example, Stoney wrote, “there
is no justification” for fingerprint evidence “based on conventional
science: no theoretical model, statistics or an empirical validation
process” (1997: 72). A year later Saks wrote, “By conventional scientific
standards, any serious search for evidence of the validity of fingerprint
identification is going to be disappointing” (1998: 1106). Starrs (1999)
pointed to lack of validation of forensic fingerprint identification as
In 2000, the National Institute of Justice (NIJ) issued a grant solicitation
to meet “the need for validation of the basis of friction ridge
individualization and standardization of comparison criteria” and “to
provide greater scientific foundation for forensic friction ridge
(fingerprint) identification” (National Institute of Justice 2000). The NIJ
subsequently denied that it had intended by this statement to suggest that
validation did not yet exist, claiming that its solicitation merely
indicated “a desire for more research to further confirm the already
existing basis that permits fingerprints to be used as a means to
individualize” (Samuels 2000). Notably, however, this disavowal still did not
claim that validation studies exist, but merely that fingerprints are
unique: “NIJ wishes to note that it is accepted that fingerprints are unique
to the individual. NIJ has no basis to believe that this is not the case”
Since the Mitchell hearing, more commentators have gone on record stating
that the search for validation studies has come up empty (Cole 2001: 214–
15). Mnookin stated that “In the case of fingerprinting, the general rate of
error is simply not known” (2001: 59). Faigman added that fingerprinting had
“not been seriously tested” (2002: 340). Haber and Haber put it most
starkly: “no data have been collected on how accurately latent print
examiners match different images of the same finger” (2003: 339).
Courts have employed a variety of rhetorical tricks to avoid confronting the
obvious implications of the lack of validation studies of forensic
fingerprint identification (Cole 2004; Saks 2003). These include treating
criminal trials themselves as de facto validation studies; shifting the
burden of proof to defendants to show that fingerprint identification is
inaccurate (in effect demanding what might be called “invalidity studies”);
and treating “the test of time” as equivalent to a validation study. But the
few courts that did directly address validity agreed with the critics’
assertion. In United States v Llera Plaza, the court wrote, “On the record
made in Mitchell, the government had little success in identifying
scientific testing that tended to establish the reliability of fingerprint
identifications” (Llera Plaza I 2002 at 506). The Llera Plaza court became
the first anywhere to limit the admissibility of forensic fingerprint
identification. Even when the court withdrew this limitation less than three
months later, it did not dissent from its original position on the lack of
data on the validity of fingerprint identification: “I concluded in the
January 7 opinion that Daubert’s testing factor was not met, and I have
found no reason to depart from that conclusion” (Llera Plaza II 2002 at
564). Similarly, in United States v Crisp, Judge Michael, in dissent, noted:
The government did not offer any record of testing on the reliability of
fingerprint identification. . . . Indeed it appears that there has not been
sufficient critical testing to determine the scientific validity of the
technique. . . . The government did not introduce studies or testing that
would show that fingerprint identification is based on reliable principles
or methods. (United States v Crisp 2003 at 273–74)
And, in United States v Sullivan, the court noted, “The court further finds
that, while the ACE-V methodology [latent print examiners’ term for their
“method” of comparing prints] appears to be amenable to testing, such
testing has not yet been performed” (United States v Sullivan 2003 at 704).
Even though these opinions had limited legal force, fingerprint examiners
reacted with howls of outrage to these assertions from the courts.
Moenssens (2002), for example, in response to Llera Plaza I, flatly
asserted that the court was wrong: “The subjective determinations made by
properly trained experts in fingerprint identification can be and have been
validated.” In response to the Crisp dissent, Moenssens (2003) again stated
flatly, “Validation research does exist. It has simply been ignored or
deprecated by the lay critics who have set themselves up as the supreme
authorities on which branch of forensic science is or is not reliable.”
Assertions of lacunae in the literature are risky to make and easy to
refute. The assertion that no measurement of the accuracy of forensic
fingerprint identification exists can be refuted by a single piece of
evidence: the production of a study measuring accuracy. Despite this claim
being repeated by scholars, judges, government agencies, and sworn expert
witnesses, no one has yet produced such a study. Haber and Haber (2003) have
already produced an excellent literature review, which concludes that no
accuracy studies exist. 
In this article, I systematically examine fingerprint proponents’ claims
that validation studies exist, drawing on published literature, online
articles, and sworn expert testimony. I have endeavored to review all
published claims that fingerprint validation studies do exist. In so doing,
I am operating under the assumption that if validation studies do exist,
strenuous public, legal claims that they do not would be most likely to
flush them out. I found that none of the fingerprint proponents’ responses
actually points to anything that could legitimately be construed as a
validation study. Instead, fingerprint proponents employ rhetorical tricks
to make it seem like they are pointing to validation studies, when they are
not. Typically, they declare that validation studies exist and then address
some other question.
A recent article (Scarborough 2003) is a case in point. Scarborough begins
by addressing the very question with which I began this review: “Recently
critics allege that there is no modern research being done in the field of
fingerprints, and that fingerprint validation studies do not exist.”
Scarborough then states, “I found that both the above statements are untrue.
There is an enormous amount of current and recent research regarding
fingerprints” (ibid.: emphasis added).
Notice what Scarborough has done. First he quotes “the critics” as having
made two charges (“there is no modern research being done in the field of
fingerprints, and that fingerprint validation studies do not exist”). (In
fact, critics do not allege that no research is being done in the field of
fingerprints. There is of course an enormous amount of research in the field
of fingerprints, generally (Champod, Egli & Margot 2004). This research
concerns all sorts of things, including the anatomical formation of
fingerprint patterns, methods of detecting latent prints, and research and
development of automated biometric identification systems. But it does
not contain validation studies.) Then, Scarborough asserts that both
statements are false. Scarborough supports this double assertion with a
statement that concerns only one of the two charges that critics do not
in fact make: “There is an enormous amount of current and recent research
regarding fingerprints.” Scarborough then goes on to demonstrate that
fingerprint research, generally, exists, which is quite easily done.
Scarborough might seem to have refuted the charge that validation studies do
not exist, but, in fact, he has cleverly elided it. The crucial charge—that
validation studies do not exist—is never mentioned again.
This technique of shifting the question reappears in several different
guises in fingerprint proponents’ discourse. Perhaps the most common form is
what I have called elsewhere “the fingerprint examiner’s fallacy” (Cole 2004).
II. THE FINGERPRINT EXAMINER’S FALLACY
The fingerprint examiner’s fallacy holds that the fundamental empirical
question concerning forensic fingerprint identification is not "How accurate
are latent print examiners’ attributions of source?" but rather "Are all
human friction ridge arrangements unique?"
It is, of course, well established in the forensic science literature that
the former, not the latter, is the foundational question concerning the
validity of fingerprint identification or any other forensic identification
technique (Inman & Rudin 2001; Thornton & Peterson 2002). It is true that if
there were identical fingerprint patterns, this could be a cause of
misattribution. But just because misattributions are not caused by identical
fingerprint patterns, that does not mean that misattributions do not occur.
In fact, misattributions do occur. Documented cases of fingerprint
misattribution have been known since 1920, and they continue to be exposed
regularly, even after the 1999 defense challenge in Mitchell, which might
have been expected to put latent print examiners on notice to be
over-conservative, given a heightened level of scrutiny by the defense bar. The
most recent misattribution was the Brandon Mayfield case, involving a
misattribution attested to by three FBI examiners and an independent expert
retained on Mayfield’s behalf. Moreover, given the unlikelihood of exposure,
exposed cases of misattribution are necessarily only a subset of actual
cases of misattribution (Cole 2005).
Nonetheless, fingerprint proponents continue to seek to demonstrate accuracy
by proving uniqueness.
The faulty assumption that uniqueness, rather than accuracy, is the
foundation of forensic fingerprint identification has a long history. The
earliest observers of fingerprint patterns simply intuited, based on no
evidence whatsoever, that all human fingerprint patterns were unique
(Cummins & Midlo 1943: 11; Faulds 1921: 29; Laufer 1912). When asked by
courts for proof of the “reliability” of forensic fingerprint evidence,
fingerprint examiners answered that all fingerprint patterns are unique.
Courts failed to grasp the gap in logic between the two statements and
uniqueness became enshrined as the foundation of the accuracy of forensic
fingerprint identification (Cole 2004). We continue to labor under this
faulty assumption.
Fingerprint examiners’ preference for uniqueness over accuracy is odd
because uniqueness is unprovable, whereas accuracy can be measured. Part of
the preference may stem from the fact that uniqueness, though unprovable,
ultimately lends itself to higher confidence in forensic fingerprint
identification than does accuracy. Although uniqueness cannot be proven,
only inferred, if one does infer it, one can then falsely reason that
fingerprint identification is infallible. Accuracy, however, can be
measured, and measurement would presumably yield a non-zero error rate, thus
puncturing the myth of infallibility surrounding fingerprint identification.
Though uniqueness cannot be proven, powerful arguments can be mustered on
its behalf. Two arguments have featured prominently in the material examined
here: first, the argument that the physiological process through which
friction ridges are formed is so variable that all friction
ridges must be unique. The second, simpler, argument reasons that if
identical twins’ fingerprint patterns are unique, then all human fingerprint
patterns must be unique.
A. THE MORPHOLOGY OF FRICTION RIDGES
Fingerprint proponents are fond of treating biological knowledge about the
formation of friction ridges as the science underlying latent print
identification. Perhaps the best example of this genre is Wertheim and Maceo
(2002). The article begins with the same premise as this one:
The accuracy and reliability of many of the forensic sciences are currently
being challenged. Specific among these challenges is the admissibility and
reliability of the science relating to friction ridge skin identification.
(2002: 35, emphasis added)
Thereafter, the article never mentions the words “accuracy” or “reliability”
again. Instead, it proceeds with a comprehensive, detailed review of the
anatomical literature on the formation (or “morphogenesis”) of friction
ridge skin. There are two problems with using this literature to “prove” the
uniqueness of all friction ridge skin. First, very little of this literature
even claims to prove uniqueness. Second, even those treatises that do
discuss uniqueness fall far short of proving it. This is a literature that
is concerned with how friction ridge skin is formed, which is a very
different question than trying to prove it is unique. For example, a review
of the work of Babler, who is often cited by fingerprint examiners as a key
researcher in the area of biological uniqueness, shows that, although Babler
has conducted extensive studies on the formation of friction ridges, he says
almost nothing about their uniqueness. The little he does say is simply an
assertion of uniqueness, not a study or measurement of it (if such a thing
could even be done). For example: “Between 6.5 and 10.5 weeks, volar pads
exhibit rapid growth and in the palm individualization” (Babler 1991: 97).
Elsewhere, Babler does not even assert uniqueness, let alone prove it
(Babler 1990, 1987, 1983, 1978, 1975). It is true that Babler has testified
under oath that he believes that human friction ridge arrangements are
unique (United States v Mitchell, Trial Transcript, July 7, 1999 at 73), but
this is quite different from demonstrating it.
The chief problem, however, is that, as acknowledged in Wertheim and Maceo’s
opening sentences, the challenge is to accuracy, not uniqueness. Indeed, in
the foundational legal admissibility challenge, referred to in the second
sentence of Wertheim and Maceo’s paper, United States v Mitchell, the
defense offered to stipulate to the uniqueness of friction ridge skin
(United States v Mitchell, Memorandum of Law 1999). Wertheim missed this
point, writing, “In U.S. v. Mitchell, the defense claimed that no testing
had been carried out providing scientific proof that fingerprints are
permanent and unique” (Wertheim 2001). In fact, as noted above, the
defense’s prehearing brief stated clearly and repeatedly that accuracy, not
uniqueness and permanence, was the salient issue.
Similarly, Moenssens (2002) argues that the issue is uniqueness, rather than
accuracy. In an article criticizing the first decision in Llera Plaza
restricting the admissibility of forensic fingerprint evidence, Moenssens
cited the work of Cummins and Midlo (1943), Locard (1934), Ökrös (1965),
Wilder and Wentworth (1918), and Galton (1892). For the Llera Plaza court:
Apparently all of this scientific work done by seasoned and well
credentialed researchers with academic standing, none of whose findings
contradict or disprove the accuracy of current latent print identification
practices, can compare to the eight-year part-time experience of a person
who wrote a dissertation on a topic he only read about. 
However, Moenssens only asserts that the authorities he cites did not
contradict or disprove the accuracy of latent print identification. He does
not assert that they measured its accuracy or that they even
sought to disprove the widespread assumption that the accuracy of latent
print identification is so high that the technique can be treated as
infallible.
Cummins and Midlo, for example, were far more concerned with documenting the
formation of friction ridge skin than with measuring the accuracy of latent
print examiners. Nonetheless, in six pages of their 281-page book, they do
purport to examine “the conclusion that two identical prints must originate
from the same finger” (Cummins & Midlo 1943: 149). They begin by mentioning
the dissimilarity of prints on twins, the supposed “law” that nature never
repeats, and the failure to discover two identical fingerprint patterns on
different fingers. Then, they proceed with a mathematical demonstration in
which, by treating each occurrence of a “minutia” as a statistically
independent event, the duplication of twenty-five minutiae is deemed highly
improbable. They conclude, “Under the circumstances it is impossible to
offer decisive proof that no two fingers bear identical patterns, but the
facts in hand demonstrate the soundness of the working principle that prints
from two different fingers never are identical” (Cummins & Midlo 1943: 154,
emphasis in the original). Uniqueness has not been demonstrated, only
accepted as a “working principle.” Accuracy is never mentioned. They neither
measure it, nor attempt to falsify the premise that latent print
identification is infallible.
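The style of calculation Cummins and Midlo describe can be sketched as follows. The per-minutia probability `p` below is purely hypothetical, since the source does not reproduce their figures; the sketch only shows why treating minutiae as independent events makes the duplication of twenty-five of them look vanishingly improbable.

```python
# Sketch of an independence-style calculation like the one Cummins & Midlo
# describe: if each minutia coincidence is treated as an independent event
# with probability p, the chance of all n coinciding is p ** n.
# The value of p here is hypothetical; the source does not supply figures.

def duplication_probability(p: float, n: int) -> float:
    """Probability that n independent minutiae all coincide by chance."""
    return p ** n

# Even a generous p makes twenty-five coincidences look vanishingly unlikely
# under the independence assumption -- which is precisely the assumption
# such demonstrations take for granted.
print(duplication_probability(0.1, 25))
```

Note that the objection raised in the text is not to this arithmetic but to what it is taken to show: even granting the conclusion, it bears on uniqueness, not on the accuracy of latent print examiners.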
Locard may have asserted the accuracy of latent print identification, but
nowhere have even his staunchest champions claimed that he ever performed
any studies which measured it (Champod 1995; Champod et al. 2004: ii–iii).
Wilder and Wentworth assert, and purport to prove, “the impossibility of the
duplication of a finger print, or even the small part of one” (Wilder &
Wentworth 1918: 328), but nowhere do they report any measurement of
accuracy nor of any proof, or attempt at disproof, of the assumed
infallibility of latent print identification. Galton actually believed the
probability of the uniqueness premise being false was 1 in 4, and made no
mention of, or measurements of, accuracy (Galton 1892; Stoney 2001: 335).
Ökrös (1965: 14) states that “Galton and Ramos established by statistical
calculations that no identical papillary patterns could exist among living
human beings.” (As noted above, this is not actually what Galton said, but
Ökrös later acknowledges this.) He concludes from extensive research that
fingerprint patterns are to a certain extent inherited but “that two
identical fingerprints, i.e. papillary ridge patterns cannot occur through
heredity” (Ökrös 1965: 19).
This fact is more meaningful than the statistical data of either Galton or
Ramos. . . . The statistical data are indeed convincing, but they are merely
assumptions. Nevertheless, according to experience two absolutely identical
fingerprints could not occur in the closest blood relations, and so
dactyloscopy is suitable for establishing accurate identity. (Ibid.: 19)
Later Ökrös reiterates: “no two identical patterns exist, not even in full
brothers and sisters (including twins)” (ibid.: 160).
Certainly then, it is true that Ökrös asserted the impossibility of exact
duplication of complete single fingerprint patterns, although he offers no
evidence for this assertion other than “experience” and twins. Ökrös does
further make the leap from assumed uniqueness to “suitability” for
identification, which is somewhat less than “accuracy” or “validity.”
Nonetheless, even if Ökrös did mean “valid” rather than “suitable,” the leap
from uniqueness to validity is, as discussed above, fallacious. And,
contrary to Moenssens’s implication, Ökrös reports no attempt to prove,
disprove, or measure the accuracy of latent print identification. Nor would
he be expected to, in a book devoted to the study of the inheritance of fingerprint patterns.
Moenssens is, of course, correct that an extensive scientific literature
concerning fingerprint patterns does exist. But this literature concerns the
embryological formation of friction ridges, the heritability of friction
ridges, or other topics irrelevant to the accuracy of forensic fingerprint
identification; little of this literature even discusses the uniqueness of
friction ridges, and those authors who do discuss it merely assert
uniqueness, rather than demonstrate it; and, lastly, none of this literature
purports to measure the accuracy of forensic fingerprint identification.
In another article, Moenssens (2003) poses the titular question:
“Fingerprint Identification: A Valid Reliable ‘Forensic Science’?” Once
again, Moenssens hastily switches the topic to uniqueness, stating that
“It’s been well documented in scientific literature that the process of
prenatal development causes an infinite variation of individual friction
ridge details” (Moenssens 2003: 32). One wonders about the mathematical
calculations that could produce such a conclusion, but they are certainly
not to be found in the source that Moenssens cites (Ashbaugh 1999), which is
a review of the embryological literature. Moenssens then cites the
discredited (Champod & Evett 2001; Kaye 2003; Pankanti Prabhakar & Jain
2002; Stoney 2001; Wayman 2000; Zabell 2005) FBI “50K × 50K study,” which
is, again, according to its own author, a study of uniqueness—a poor one at that.
In courtroom testimony as well, fingerprint examiners make the argument that
“biological uniqueness” establishes the validity of their conclusions of
identification. For instance, in the Mitchell hearing, Meagher testified as follows:
Q: Could you please refer me to any testimony we have heard here . . . as to
any scientific experimentation or controlled studies that [go] to the issue
of what is a “sufficient” basis to make a latent print identification?
A: The scientific basis lies with the biophysiology of the friction ridge
skin to create unique formation[s]. The basis of comparing the fingerprints
to sufficiency is finding the correspondence of quantity with quality in
order to determine the arrangement of those unique formations of that skin
are in agreement. (United States v Mitchell, Trial Transcript, July 9, 1999)
Another examiner, William Leo, testified that the assumption of uniqueness
justified the belief that the error rate of forensic fingerprint
identification is zero:
And we profess as fingerprint examiners that the rate of error is zero. And
the reason we make that bold statement is because we know based on 100 years
of research that everybody’s fingerprint are unique, and in nature it is
never going to repeat itself again. (People v Gomez 2002 at 270)
Here we have the crux of the matter. A nationally prominent practitioner of
latent print identification actually claims in sworn testimony that the
uniqueness of all human fingerprints is a warrant for claiming that the
error rate of fingerprint identification is zero.
Even Stigler (2004, 1995) falls for the fingerprint examiner’s fallacy.
Stigler contends that “Fingerprints were adopted initially because they
enjoyed a very strong scientific foundation attesting to their efficiency
and accuracy in personal identification” (2004: 12, emphasis added). Who,
according to Stigler, demonstrated the “accuracy” of fingerprint
identification? “The scientific foundation of fingerprints as a means of
accurate personal identification dates from the work of Francis Galton”
(Galton 1892, 1893; Stigler 2004: 12, emphasis added). But, as Stigler
notes, Galton only “gave a detailed quantitative demonstration of the
essential uniqueness of an individual’s fingerprints; a demonstration that
remains statistically sound by today’s standards” (Stigler 2004: 12,
emphasis added). Nowhere did Galton ever claim to have demonstrated, or even
measured, the accuracy of latent print identification.
B. IDENTICAL TWINS
The second approach to “proving” uniqueness is to reason simply that if
identical twins have different fingerprints, then all human fingerprints must
be unique. In his essay “Is Fingerprint Identification a ‘Science’?”,
Moenssens (1999) contends that fingerprinting rests upon three “underlying
premises” which “have been empirically validated.” These are:
1. The uniqueness of friction ridge patterns.
2. The permanence of friction ridge patterns.
3. Uniqueness notwithstanding, friction ridge patterns fall into “broad
classes or categories that permit police to store and retrieve millions of
prints according to classification formulae.”
Interestingly, in Mitchell the government had also argued that forensic
fingerprint identification rested on three premises:
1. Uniqueness and permanence of friction ridges.
2. Uniqueness and permanence of friction ridge arrangements.
3. “Individualization, That is Positive Identifications, Can Result from
Comparisons of Friction Ridge Skin or Impressions Containing a Sufficient
Quality (Clarity) and Quantity of Unique Friction Ridge Detail.” (United
States v Mitchell, Government’s Memorandum 1999)
If the government’s premise three had indeed been empirically validated,
that would be significant. But Moenssens has changed the premises. He has
removed the government’s only controversial premise (#3). (Recall that the
Mitchell defense offered to stipulate to uniqueness and permanence.) He has
separated uniqueness and permanence into two premises. Lastly, he adds a
completely meaningless and uncontroversial third premise (that the
classification of fingerprint pattern types is possible). By this, Moenssens
is referring to the method of indexing a collection of inked ten-print
fingerprint cards (that is, cards containing impressions of all ten fingers)
according to the aggregate pattern types on all ten fingers, for the purpose
of keeping accurate records about individuals’ criminal histories. This, of
course, has little to do with forensic identification. Baseball cards fall
into “broad classes or categories,” which permit classification, but that
does not make them useful for forensic identification.
Social scientists have long observed the rhetorical power of “lists of
three,” in which the third item contrasts with the first two and serves as a
“punch line” (Atkinson 1984; Clark & Pinch 1995: 36–40; Jasanoff 2003: 389).
The fingerprint three-premise schemes follow this rhetorical structure. Two
premises are “throwaways” that are either undisputed, easy to prove, or
reasonable to accept at face value. This sets up the third premise to be the
“punch line,” or, in this case, the crucial issue. The government’s scheme
at least correctly identifies the crucial issue: the accuracy of latent
print examiners’ conclusions of individualization, although it poses the
rather irrelevant question of whether individualization is “possible” (which
it surely is), rather than the more salient question of how often
conclusions of individualization are correct. In Moenssens’s scheme,
permanence and classification are the throwaways. This allows him to imply
that uniqueness is the crucial premise and suggests to the reader that
uniqueness is what the controversy is about. Individualization, and the
accuracy of it, disappears from the picture entirely.
Then, Moenssens offers “empirically established evidence of the uniqueness
of fingerprint patterns” (Moenssens 1999). This is, quite simply, the fact
that monozygotic “identical” twins do not have identical ridge detail.
Moenssens notes that studies have indicated that monozygotic twins have more
similar pattern types than dizygotic twins (Lin et al. 1982). Because
monozygotic twins have the most similar pattern types of all paired
individuals, he argues, “Might we not infer from that experience that all
fingerprint of different digits are, indeed, different?” (Moenssens 1999).
Interestingly, a proficiency test conducted in 1995 under the auspices of
the American Society of Crime Laboratory Directors suggests that fingerprint
examiners may be prone to mismatching impressions from twins (whether
identical or not is not stated). In this exercise, examiners were given
seven latent impressions and four sets of known prints. Two of the seven
latent impressions were collected from one twin (with the pseudonym “Hubert
Gomes”). “Hubert’s” prints were not provided to the test-takers, but those
of his twin brother “Peter” were. Nineteen percent of test-takers identified
one latent impression (labeled 5f) made by “Hubert” to “Peter,” and 4%
misidentified another (5c). No item in the short history of fingerprint
proficiency testing has produced anywhere near as many false positives
(erroneous identifications) as item 5f (Collaborative Testing Service 1995).
Though there are obvious problems with extrapolating results from a
proficiency test into a global error rate for a technique, the results of
the 1995 test do serve to highlight the flaw in Moenssens’s reasoning: The
friction ridge skin of “Hubert” and “Peter Gomes” was not identical, but it
was similar enough to cause a significant number of individuals holding
themselves out as latent print examiners to make false attributions. The
question is not uniqueness, but the potential for false attribution. It is
this question that Moenssens avoids completely with his twins argument.
III. THE CASEWORK FALLACY
Aside from the fingerprint examiner’s fallacy, the most prevalent means of
evading latent print identification’s lack of validation consists of
equating casework with validation testing. Again, there are two common
variations on the argument: first, that criminal trials amount to validation
tests; second, that the accumulation of databases of fingerprints amounts to a
validation study.
A. ONE HUNDRED YEARS OF EMPIRICAL STUDIES
What we might call “the casework fallacy” consists of arguing that a
criminal trial is in effect a scientific experiment designed to test the
validity of the fingerprint examiner’s conclusion. A variation on the
argument holds, not only that every criminal trial is a test, but that every
latent print comparison — that is, every time a latent print examiner
compares a latent print to an inked print — constitutes a test. This argument
seems to have originated with the government’s case in Mitchell. The
government asserted, “The ACE-V process and the experts’ conclusions have
been tested empirically over a period of 100 years” and “The fingerprint
field and its theories and techniques have been published and peer reviewed
during a period of over 100 years” (United States v Mitchell 1999 at 89).
These assertions were supported by the testimony of Budowle:
Q. What is your opinion as to whether or not the studies and the testimony
that you heard in this hearing, comport to empirical studies?
A. Well, empirical studies is when you roll up your sleeves, you do
observational analysis. The idea of taking prints, comparing them to other
prints to seeing how often things are similar or dissimilar, is empirical
studies. The 100 years of fingerprint employment has been empirical studies.
The fact you are looking for individuals that match, no two unrelated
individuals or related individuals have the same print is an empirical study
that’s maybe not collected and counted up to say there are 10 million of
these, but we know there had to be a lot over the 100 years. (United States
v Mitchell 1999 at 114–15)
This reasoning received legal imprimatur from the District Court for the
Southern District of Indiana, which wrote, in the first published opinion
responding to a Daubert
challenge against latent print identification:
the methods of latent print identification can be and have been tested. They
have been tested for roughly 100 years. They have been tested in adversarial
proceedings with the highest possible stakes—liberty and sometimes life.
(United States v Havvard 2000 at 854).
This also seems to be what Leo had in mind, when he stated, “A fingerprint
examiner’s knowledge and ability can be and is tested, is documented and can
be verified, and is evaluated by the courts and juries every time the
examiner takes the witness stand” (2001: 2). Since Leo follows this passage
with no description of any testing, we can presume that he means this sort
of “casework testing.” Presumably, this is also what Grieve means when he
states, “The testability of fingerprint individuality has been conducted for
nearly a century, perhaps not in one grand empirical study that captivated
the [Mitchell] defense, but in the countless smaller studies performed daily
in all parts of the globe,” though Grieve also invokes the fingerprint
examiner’s fallacy, by shifting the question from validity to
“individuality” (Grieve 2001: 95).
Even Moenssens admits that the failure to discover two duplicates in the
world’s fingerprint repositories does not prove even uniqueness. Moenssens
(1999) points out that this argument
does not stand the test of reason. The millions of sets of prints were never
compared against one another for possible duplication of friction ridge
patterns. Filing and retrieving prints from such a massive file only results
in an examination of a comparatively small number of sets of prints: those
with a matching, or approximately matching, classification formula.
This is because ten-print files are indexed according to the aggregate
pattern type on all ten fingers. Thus, a ten-print search would not discover
the hypothetical case of two individuals with identical fingerprint patterns
on a single finger but different patterns on the other nine.
Astonishingly, however, Moenssens goes on to restate in different words the
very same argument that, two paragraphs earlier, he claimed failed “the test
of reason”:
Persons skilled in fingerprint identification, who have literally viewed,
scanned, and studied tens—if not hundreds—of thousands of individual
patterns, do not doubt this. Clearly, if exact pattern duplication were to
exist in the world, at least a single instance of this would have been
discovered by now. (Moenssens 1999)
Similarly, Wertheim (2001) reports that, according to the government in
Mitchell, “100 years worth of thousands of law enforcement agencies
conducting millions, probably even billions of comparisons across the globe
and never finding two areas of skin exactly the same was considered
empirical validation of the use of fingerprints to individualize,” and he
holds that “billions of comparisons that have been conducted throughout the
world for over a century, and not one ‘distorted fragment’ of value has ever
been found to contain the same detail as a print from another source.”
Wertheim, like Grieve, has combined the casework fallacy with the
fingerprint examiner’s fallacy, by changing the hypothesis that is in need
of testing. Although “adversarial testing” usually purports to test the
question of how often latent print examiners generate correct results,
Wertheim’s statement purports to test a very different question: whether
duplicate fingerprint fragments exist.
These fingerprint proponents all betray a fundamental misunderstanding of
what any scientifically minded person would mean by a “test.” A criminal
trial or a search of a fingerprint database is not a “test” of the accuracy
of the technique of latent print identification because they offer no
guarantee that an inaccurate result would be detected. In a subsequent
decision, another court exposed the fallacy of the Havvard court’s
reasoning: “ ‘Adversarial’ testing is not . . . what the Supreme Court meant
when it discussed testing as an admissibility factor” (Llera Plaza I 2002).
Ordinary casework provides no method to expose erroneous results (other than
“verification,” which itself may be subject to inaccuracy). In casework, the
analyst does not know the true source of the latent print. Thus, print pairs
that are deemed to “match” are treated as having originated from the same
source. Print pairs that do not “match” are treated as having originated
from different sources. The question at issue is whether print
pairs from different sources might “match.” Casework would never answer this
question because any print pair that is deemed to “match” is automatically
treated as deriving from a common source. This may not in fact be true. The
latent print may derive from a different source. The caseworker has no way
of knowing, absent definitive proof from some other source concerning who
made the latent print.
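The circularity can be made concrete with a toy simulation (all rates and the 50/50 case mix below are hypothetical): because casework bookkeeping records every declared match as a true match, the recorded error count is zero no matter what the examiner's actual false-positive rate is.

```python
import random

def simulate_casework(n_cases: int, false_positive_rate: float, seed: int = 0):
    """Toy model of casework bookkeeping. Returns (recorded, actual) errors."""
    rng = random.Random(seed)
    recorded_errors = 0  # errors visible in the casework record
    actual_errors = 0    # errors visible only with ground truth
    for _ in range(n_cases):
        same_source = rng.random() < 0.5                       # unknown to the examiner
        declared = same_source or rng.random() < false_positive_rate
        if declared and not same_source:
            actual_errors += 1
            # Casework convention: a declared match is *treated* as same-source,
            # so nothing is ever added to recorded_errors.
    return recorded_errors, actual_errors

recorded, actual = simulate_casework(100_000, 0.02)
print(recorded, actual)  # recorded is always 0; actual is not
```

However large the true error rate, the record shows zero errors, which is exactly why casework cannot serve as a validation test.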
B. AFIS ARGUMENTS
A variation on the casework argument is what might be called the “AFIS
argument.” The argument is basically the same, except that, rather than
citing billions of comparisons by human examiners over the past century, it
cites the billions of comparisons performed by Automated Fingerprint
Identification Systems (AFIS) over the past two decades or so. For example,
in his report on the Mitchell hearing, Grieve writes:
Dressed in the language of scientific jargon, the defense’s position
possessed a certain superficial appearance of reason and logic. No massive
organized research has ever been undertaken to locate what might pass for
the “closest similarities” between two different fingerprints, in part
because it would serve no practical purpose. Yet in thousands of smaller
studies, AFIS searches do just that as they comb through a database seeking
minutiae in similar arrangements. (1999: 726)
A more elaborate attempt to treat AFIS searches as validation studies of
fingerprint identification has been advanced by Clark (2002), drawing on the
2001 California IAI Statistical Committee Report. The Committee
reported that during the year 2000, the California Department of Justice
(Cal-DOJ) Central Site Latent Database contained approximately 42 million
fingers (from approximately 4.2 million individuals). During 2000, 125,732
latent prints were searched against the database. AFIS searches produce
candidate lists based on “similarity scores” that are then reviewed by human
examiners. Similarity scores are generated by an algorithm internal
to the AFIS. These lists may be of varying lengths. AFIS can be programmed
to report a set number of candidates (the top 10, 20, or 40 scoring prints)
or to report all prints whose similarity scores exceed a set threshold.
It is the human examiners who then reach conclusions of identification or
non-identification. Human examiners are reportedly accustomed to viewing
AFIS candidate lists with a skeptical eye; the “matching” print is often not
in the first position or even on the list at all. Since the average
candidate list for the Cal-DOJ searches consisted of ten candidates, the
number of manual comparisons conducted during 2000 was approximately 1.2
million, assuming (improbably) that the examiners continue through the list
of candidates even after they reach a conclusion of identification.
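The two candidate-list policies just described (a fixed top-N list versus a score threshold) can be sketched as follows; the similarity scores and print identifiers here are placeholders, since real AFIS scoring algorithms are internal to each system.

```python
# Two candidate-list policies for an AFIS, as described in the text.
# Scores are hypothetical; real systems compute them internally.

def top_n_candidates(scores: dict, n: int = 10) -> list:
    """Report the n highest-scoring database prints."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [pid for pid, _ in ranked[:n]]

def threshold_candidates(scores: dict, threshold: float) -> list:
    """Report every print whose similarity score meets a set threshold."""
    return [pid for pid, s in scores.items() if s >= threshold]

scores = {"A": 0.91, "B": 0.42, "C": 0.77, "D": 0.88}
print(top_n_candidates(scores, n=2))       # ['A', 'D']
print(threshold_candidates(scores, 0.75))  # ['A', 'C', 'D']
```

Either policy yields only candidates; the conclusion of identification or non-identification remains with the human examiner.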
Clark then offers the following analysis, which is worth quoting in some detail:
The historical statistical breakdown of fingerprint pattern types applied to
the total number of fingers in this latent searchable fingerprint database
would be as follows:
Avg. 60% loop 25,435,188
Avg. 35% whorl 14,837,193
Avg. 5% arch 2,119,599
The historical statistical breakdown of fingerprint pattern types applied to
the total latent searches would be as follows:
LATENTS SEARCHED 125,732
Avg. 60% loop 75,439
Avg. 35% whorl 44,006
Avg. 5% arch 6,286
The total number of fingers searched would be somewhere between only
searching single latent pattern calls to total pattern reference. This would
equate as follows:
No reference 2,585,054,462,004
(75,439 × 25,435,188) + (44,006 × 14,837,193) + (6,286 × 2,119,599)
Total reference 5,330,028,429,360
(125,732 × 42,391,980)
Between 2.5 and 5.3 trillion individual latent searches were made by
California DOJ in the year 2000. The computerized searches utilized an
algorithm that emulates how a human would make a level two comparison of
minutia type, flow, and direction in an x-y coordinate related to the
pattern core, counting the number of intervening ridges between the
minutiae. (Clark 2002)
Although Clark does not make the purpose of these calculations clear, he
appears to be attempting to establish upper and lower bounds for the number
of actual print-to-print comparisons (not “searches” as he states)
performed by the Cal-DOJ AFIS in the course of running 125,732 unidentified
latents through the system during the year 2000. The lower bound (2.5
trillion) assumes that the arch-loop-whorl (A-L-W) pattern type of the
latent was determinable in all 125,732 cases. Presumably, in that case, the
AFIS search is run only against prints of the same A-L-W pattern type in the
database. The upper bound (5.3 trillion) assumes that the A-L-W pattern type
of the latent was not determinable in any of the 125,732 cases. Presumably,
in that case, each latent is run against the entire database of
approximately 42 million fingers. Thus, depending on in how many cases the
A-L-W pattern type of the latent was determinable, the Cal-DOJ AFIS made
somewhere between 2.5 and 5.3 trillion latent-to-known print comparisons.
Clark continues:
The 2.5–5.3 trillion searches provided list [sic] of candidates of the ten
fingerprints with the closest agreement, to a human for Analysis,
Comparison, and Evaluation. For the 125,732 latents searched by California
DOJ, a human latent fingerprint analyst examined the top 1,257,320
candidates. (Clark 2002)
Clark here, presumably, is referring to the Cal-DOJ procedure for searching
unidentified latents using AFIS. It appears that for each unidentified
latent AFIS produces a candidate list consisting of the ten individual
database prints with the highest similarity score (according to the AFIS). A
human examiner then compares the unidentified latent to each of the ten
candidates in succession. Because 125,732 unidentified latents were searched
in 2000, the human examiners made ten times that many (1,257,320)
latent-to-known print comparisons.
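Clark's figures, and the bounds derived from them, can be reproduced with straightforward integer arithmetic:

```python
# Reproducing the arithmetic behind Clark's figures. The 60/35/5 pattern-type
# proportions, the database and search counts, and the ten-candidate lists
# are all taken from the quoted passage.

DATABASE_FINGERS = 42_391_980
LATENTS_SEARCHED = 125_732
CANDIDATES_PER_SEARCH = 10

# Pattern-type breakdowns (integer arithmetic avoids floating-point rounding).
db = {p: DATABASE_FINGERS * pct // 100
      for p, pct in (("loop", 60), ("whorl", 35), ("arch", 5))}
lt = {p: LATENTS_SEARCHED * pct // 100
      for p, pct in (("loop", 60), ("whorl", 35), ("arch", 5))}

# Lower bound: each latent compared only against prints of its own pattern type.
lower = sum(lt[p] * db[p] for p in ("loop", "whorl", "arch"))
# Upper bound: each latent compared against the entire database.
upper = LATENTS_SEARCHED * DATABASE_FINGERS
# Manual comparisons: ten candidates reviewed per searched latent.
manual = LATENTS_SEARCHED * CANDIDATES_PER_SEARCH

print(lower)   # 2,585,054,462,004
print(upper)   # 5,330,028,429,360
print(manual)  # 1,257,320
```

The arithmetic checks out; as the following discussion makes clear, the problem lies not in the figures but in what they are taken to prove.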
Clark then concludes:
These examinations found no two different fingerprints that contained twelve
or more level two Galton detail [sic] of the same type, direction, ridge
flow, xy axis location, and ridge count spatial relationship. No where [sic]
near twelve level two Galton detail [sic] were found. (Clark 2002)
In other words, in making 1,257,320 latent-to-known print comparisons,
California-DOJ human latent print examiners “found no two different
fingerprints that contained twelve or more level two Galton detail [sic] of
the same type, direction, ridge flow, x-y axis location, and ridge count
spatial relationship.” 
This statement is almost certainly false on its face. Unless Cal-DOJ
failed to identify a single unidentified latent during the year 2000,
presumably there was a large number of “different fingerprints that
contained twelve or more level two Galton detail [sic] of the same type,
direction, ridge flow, x-y axis location, and ridge count spatial
relationship.” These fingerprints that contained twelve or more level-two
Galton details in common were presumably attributed to a common source
finger by the human examiner.
What the passage probably meant to say was that no two fingerprints that
derived from different source fingers were found showing twelve or more
Galton details in common. But this conclusion rests on the invalid
assumption that all the “matches” declared by examiners were correct, that
“matching” pairs did in fact derive from a common source finger. But running
casework unidentified latents through an AFIS provides no way of testing
that assumption. Because we do not know the true source of any of the
125,732 latents, we do not know whether or not these latents shared a common
origin with the database print with which they shared twelve Galton details.
The flaw in this reasoning may be summarized as follows: No datum from the
“study” would be able to alter its finding. Because every finding of a print
pair that does show twelve Galton details in common is attributed
to the (assumed) fact that the two prints share a common origin, the study
will never find a print pair showing twelve Galton details in common that
does not show a common origin. The analysis assumes the answer to the very
question it purports to answer: how many of those 125,000 latent print
comparisons reached erroneous conclusions?
It is important to emphasize that AFIS searches cannot yield either false
positives or false negatives. Since AFIS systems only produce candidates, they do
not offer conclusions of identification or non-identification, and they
cannot make errors. The supposed absence of false positives in AFIS searches
is proof of nothing. AFIS cannot make false positives. Yet, this absence is
sometimes construed as somehow supporting the validity of fingerprint
identification. This fundamental misunderstanding found its way into the
Third Circuit’s analysis when it upheld Mitchell’s appeal of his conviction.
In that decision, the court declared, “a lack of multiple matches from AFIS
searches can constitute testing of the hypothesis that single positive
identifications can be made from latent fingerprints” (United States v
Mitchell 2004 at 237).
How AFIS candidate lists interact with human examiners is a question
deserving of further research. Do AFIS searches cue or bias human examiners
into making more false positives? Are examiners more inclined to view a
print as matching because it appears on an AFIS candidate list? Or is the
opposite true? These questions have so far received no attention at all.
Clark’s final conclusion is:
The past limited statistical models, as well as Dr. Edmond Locard’s
tripartite rule have been validated by the California DOJ latent database,
supporting quantifiable thresholds for friction ridge individualization.
The “statistical models” refer to eleven statistical models, reviewed by
Stoney (2001), which seek to estimate the probability of two duplicate
fingerprints existing. Locard’s tripartite rule is as follows:
(1) If more than 12 concurring points are present and the fingerprint is
sharp, then the certainty of identity is beyond debate.
(2) If 8 to 12 concurring points are involved, then the case is borderline
and the certainty of identity will depend on: a) the sharpness of the
fingerprint; b) the rarity of its type; c) the presence of the center of the
figure [core] and the triangle [delta] in the exploitable part of the print;
d) the presence of pores [poroscopy];
e) the perfect and obvious identity regarding the width of the papillary
ridges and valleys, the direction of the lines, and the angular value of the
bifurcations [ridgeology/edgeoscopy]. In these instances, certainty can only
be established following discussion of the case by at least two competent
and experienced specialists.
(3) If a limited number of characteristic points are present, the
fingerprint cannot provide certainty for an identification, but only a
presumption proportional to the number of points available and their
clarity. (Champod 1995: 136)
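As a decision procedure, the tripartite rule quoted from Champod (1995) can be sketched schematically. This is an illustration, not an operational standard: the qualitative factors in the borderline branch resist quantification and are represented only as a category, and the case of more than twelve points in an unsharp print, which the rule leaves unspecified, is treated here as borderline.

```python
# Schematic sketch of Locard's tripartite rule as quoted in the text.
# The borderline branch depends on qualitative factors (sharpness, pattern
# rarity, core/delta presence, pores, ridge morphology) and review by at
# least two specialists; those factors are not reduced to numbers here.

def locard_rule(concurring_points: int, sharp: bool) -> str:
    if concurring_points > 12 and sharp:
        return "certainty of identity beyond debate"
    if concurring_points >= 8:
        # 8-12 points, or >12 without sharpness (unspecified in the source
        # and treated as borderline here): outcome depends on qualitative
        # factors and discussion by at least two competent specialists.
        return "borderline"
    return "presumption proportional to points and clarity"

print(locard_rule(13, True))   # certainty of identity beyond debate
print(locard_rule(10, True))   # borderline
print(locard_rule(5, True))    # presumption proportional to points and clarity
```

Note that the rule speaks only to the confidence attached to an attribution; as the text goes on to argue, nothing in Clark's AFIS figures tests whether attributions made under the rule are correct.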
Clark’s analysis certainly cannot be construed as having validated Locard’s
tripartite rule, which has to do with the confidence that one attaches to an
attribution of a common source between a latent and a known print. His
attempt to treat manual comparisons, guided by AFIS searches, as validation
studies misses an essential component of a validation study: if we are
validating the ability of latent print examiners to correctly report the
state of the world (i.e. the true origin of a latent print), then we need to
know the state of the world. This point was stated clearly by Stoney in his
testimony at the Mitchell Daubert hearing:
Remember, I am talking about premise three, the judgment of sufficient
quality or clarity, quantity, in order to make a judgment of absolute
individuality came from this person. That . . . is not being tested as we go
case by case by case. Indeed, we don’t know when we are wrong when we have
an unknown situation like a case before us. What would make it a scientific
test is if we know what is a correct answer or not. If we perform what we
believe to be a correct examination, and when we are tripped up when we are
wrong we know it. That’s what makes it scientific. (United States v
Mitchell, Trial Transcript, July 12, 1999 at 121)
Fingerprint validation studies still do not exist. And yet, it does not
appear that they would be all that difficult to perform. As I have argued
elsewhere (Cole 2004), this situation probably owes a lot to the precarious
legal situation in which latent print examiners find themselves. Since no
court, aside from the brief interlude of Llera Plaza I, has held that the
absence of validation studies renders latent print identification
inadmissible under Daubert/Kumho, latent print examiners retain legal carte
blanche to claim that fingerprint identification is validated and infallible. They thus
have nothing to gain and everything to lose from validation studies.
In this situation, the prospects for fingerprint validation studies appear
dim. The NIJ solicitation for fingerprint validation studies mentioned above
(National Institute of Justice 2000) was never funded, and some actors
within the NIJ testified under oath that its release was delayed, at the
behest of the FBI, until after Mitchell was convicted (United States v
Mitchell 2004 at 232, 255; Faigman et al. 2002: 74). Moreover, a recent NIJ
solicitation in the area of “General Forensic Research and Development”
lists under the heading “What will not be funded” “Proposals to evaluate,
validate, or implement existing forensic technologies” (National Institute
of Justice 2003; Snell 2003). A forum convened by the International
Association for Identification to develop a fingerprint research agenda
began its report by implicitly dismissing the need for validity
studies, stating “This forum recognizes the reliability of friction ridge
identification as practiced during the last 100 years” (International
Association for Identification 2002). As its first priority the forum
recommended “the generation and publication of a sourcebook” (ibid.). While
sourcebooks are always useful, without agreement about what the relevant
empirical question is, let alone research, any sourcebook is likely to
simply contain new ways of framing the arguments delineated above. Even the
effort to explore the scientific claims made by forensic identification has
been stymied. It has recently been reported that a proposed National
Academies panel on science and forensic examination was dropped because
government agencies insisted upon extraordinary rights of review (Kennedy
2003). There are now reports of a new NIJ grant solicitation in the works
(Mills & McRoberts 2004). By the time this article appears, we will know
better whether there will be a genuine research effort.
In the meantime, rhetorical devices like those discussed here serve to sap
any momentum to test the accuracy of latent print identification. Consider,
for example, Wertheim’s view of “the fingerprint challenge”:
The challenge is not to test the fundamental principles of the science of
fingerprints. More than a century of medical research has established that
friction ridge skin is unique and permanent. The challenge is in
articulating that scientific basis to the jury. (Wertheim 2002: 671)
Thus, instead of validating latent print identification, Wertheim advocates
better persuasive efforts aimed at using the fingerprint examiner’s fallacy
to gull the jury into thinking that it demonstrates the accuracy of latent
print identification.
This review points to two unpalatable conclusions. The first is that many
practitioners and defenders of forensic fingerprint identification still do
not understand what is meant by the demand for validation studies, still
believe that uniqueness is the fundamental empirical question necessary to
validate forensic fingerprint identification, and still believe in the
fallacy that casework comprises validation. The second is that the
misunderstanding may be deliberate. Historically, fingerprint evidence has
benefited enormously from courts’ willingness to construe the assumption of
uniqueness as evidence of accuracy (Cole 2004). The literature reviewed here
may intentionally be seeking to perpetuate that fallacy. Until all parties
come to some agreement about what are the relevant empirical questions
surrounding latent print identification, the fingerprint challenge will be
mired in rhetorical claims that fly past one another.
Simon A. Cole is Assistant Professor of Criminology, Law & Society at the
University of California, Irvine. He received his Ph.D. in Science &
Technology Studies from Cornell University. He is the author of Suspect
Identities: A History of Fingerprinting and Criminal Identification (Harvard
University Press, 2001).
1. It should be noted that this would require devising some sort of metric
for the difficulty of making a source attribution for a latent print. Such a
metric has not yet been devised.
2. It is to be expected that such a study would not be accepted wholesale,
but, rather, that the degree to which the study does in fact replicate
actual casework conditions might be disputed by whatever party dislikes the
result.
3. Although there is a clear difference in purpose between validation and
proficiency studies, some scholars have questioned whether there is a
significant difference in research design between “validity” and
“proficiency” studies.
4. Since publication of that review, Langenburg (2004) has published a pilot
study that compares the abilities of fingerprint examiners at performing a
task to that of a control group and announced plans for future studies. In
the current study, however, the task is limited to the identification of
minutiae.
5. Emphasis added. This is an obvious reference to the author of this
article. I would offer only the following comments: First, the methods of
history are such that almost everyone writes “a dissertation on a topic”
they “only read about.” Second, neither the author’s scholarship nor his
testimony in Mitchell is cited in either of the Llera Plaza opinions, so it
is questionable what influence it had on the court.
6. Because this figure (1,257,320) was already given in the Statistical
Committee Report, it is difficult to understand why Clark performed the
exercise of deriving the range of 2.5–5.3 trillion AFIS comparisons. Neither
of these figures is ever mentioned or used again in Clark’s analysis.
7. In fact, Clark’s words are ambiguous. The passage could also mean that
none of the 1,257,320 database prints selected for comparison to
unidentified latents during the year 2000 “contained twelve or more level
two Galton detail [sic] of the same type, direction, ridge flow, x-y axis
location, and ridge count spatial relationship.” Clark confirmed that the
interpretation of his conclusion discussed in the text is what he intended.
Dusty Clark, e-mail communication, 16 May 2003.
Ashbaugh, David R. (1999) Quantitative-Qualitative Friction Ridge Analysis:
An Introduction to Basic and Advanced Ridgeology. Boca Raton, Fla.: CRC
Press.
Atkinson, Max (1984) Our Masters’ Voices: The Language and Body Language of
Politics. London: Methuen.
Babler, William J. (1975) “Early Prenatal Attainment of Adult Metacarpal-
Phalangeal Rankings and Proportions,” American Journal of Physical
Anthropology.
Babler, William J. (1978) “Prenatal Selection and Dermatoglyphic Patterns,”
American Journal of Physical Anthropology 48: 21–28.
Babler, William J. (1983) “How Is Epidermal Ridge Configuration Determined?,”
Newsletter of the American Dermatoglyphics Association (Spring): 3–4.
Babler, William J. (1987) “Prenatal Development of Dermatoglyphic Patterns:
Associations with Epidermal Ridge, Volar Pad and Bone Morphology,” Collegium
Antropologicum 11: 297–304.
Babler, William J. (1990) “Prenatal Communalities in Epidermal Ridge
Development.” In Trends in Dermatoglyphic Research, edited by N. M. Durham & C. C.
Plato. Dordrecht: Kluwer.
Babler, William J. (1991) “Embryological Development of Epidermal Ridges and
Their Configurations,” Birth Defects Original Article Series 27(2): 95–112.
Caudill, David S., and Richard E. Redding (2000) “Junk Philosophy of
Science?: The Paradox of Expertise and Interdisciplinarity in Federal
Courts,” Washington and Lee Law Review 57: 685–766.
Champod, Christophe (1995) “Edmond Locard—Numerical Standards and ‘Probable’
Identifications,” Journal of Forensic Identification 45: 136–63.
Champod, Christophe, Nicole Egli, and Pierre A. Margot (2004) “Fingermarks,
Shoesole and Footprint Impressions, Tire Impressions, Ear Impressions,
Toolmarks, Lipmarks, Bitemarks: A Review.” Paper presented at 14th Interpol
Forensic Science Symposium, 19–22 October, Lyon, France.
Champod, Christophe, and Ian W. Evett (2001) “A Probabilistic Approach to
Fingerprint Evidence,” Journal of Forensic Identification 51: 101–22.
Champod, Christophe, Chris Lennard, Pierre Margot, and Milutin Stoilovic
(2004) Fingerprints and Other Ridge Skin Impressions. Boca Raton, Fla.: CRC
Press.
Clark, Colin, and Trevor Pinch (1995) The Hard Sell: The Language and
Lessons of Street-Wise Marketing. London: HarperCollins.
Clark, Dusty (2002) CSDIAI Statistical Committee Report 2001. latent-
prints.com. Available at http://www.latent-prints.com/csdiai_stats.htm.
Cole, Simon A. (2001) Suspect Identities: A History of Fingerprinting and
Criminal Identification. Cambridge, Mass.: Harvard Univ. Press.
Cole, Simon A. (2004) “Grandfathering Evidence: Fingerprint Admissibility
Ruling from Jennings to Llera Plaza and Back Again,” American Criminal Law
Review 41: 1189–1276.
Cole, Simon A. (2005) “More Than Zero: Accounting for Error in Latent Print
Identification,” Journal of Criminal Law & Criminology 95: 985–1078.
Collaborative Testing Service (1995) Latent Prints Examination Report.
Sterling, Va.: Collaborative Testing Service.
Cranor, Carl F., and David A. Eastmond (2001) “Scientific Ignorance and
Reliable Patterns of Evidence in Toxic Tort Causation: Is There a Need for
Liability Reform?,” Law and Contemporary Problems 64: 5–48.
Cummins, Harold, and Charles Midlo (1943) Finger Prints, Palms and Soles: An
Introduction to Dermatoglyphics. Philadelphia: Blakiston.
Denbeaux, Mark, and D. Michael Risinger (2003) “Kumho Tire and Expert
Reliability: How the Question You Ask Gives the Answer You Get,” Seton Hall
Law Review 34: 15–70.
Edmond, Gary, and David Mercer (2004) “Daubert and the Exclusionary Ethos:
The Convergence of Corporate and Judicial Attitudes toward the Admissibility
of Expert Evidence in Tort Litigation,” Law & Policy 26: 231–57.
Epstein, Robert (2002) “Fingerprints Meet Daubert: The Myth of Fingerprint
‘Science’ is Revealed,” Southern California Law Review 75: 605–57.
Faigman, David L. (2002) “Is Science Different for Lawyers?,” Science 297:
Faigman, David L., David H. Kaye, Michael J. Saks, and Joseph Sanders (2002)
Science in the Law: Forensic Science Issues. St. Paul, Minn.: West.
Faulds, Henry (1921) “The Dawn of Dactylography,” Dactylography (September):
Galton, Francis (1892) Finger Prints. London: Macmillan.
Galton, Francis (1893) Decipherment of Blurred Finger Prints: Supplementary
Chapter to “Finger Prints”. London: Macmillan.
Giannelli, Paul C. (1980) “The Admissibility of Novel Scientific Evidence:
Frye v. United States, A Half-Century Later,” Columbia Law Review 80: 1197–
Graham, Michael H. (2000) “The Expert Witness Predicament: Determining
‘Reliable’ Under the Gatekeeping Test of Daubert, Kumho, and Proposed
Amended Rule 702 of the Federal Rules of Evidence,” University of Miami Law
Review 54: 317–57.
Grieve, David L. (1999) “Rocking the Cradle,” Journal of Forensic
Identification 49: 719–27.
Grieve, David L. (2001) “Simon Says,” Journal of Forensic Identification 51:
Haack, Susan (2005) “Trial and Error: The Supreme Court’s Philosophy of
Science,” American Journal of Public Health 95: 566–73.
Haber, Lyn, and Ralph Norman Haber (2003) “Error Rates for Human Fingerprint
Examiners.” In Automatic Fingerprint Recognition Systems, edited by N. K.
Ratha & R. Bolle. New York: Springer-Verlag.
Imwinkelried, Edward J. (1995) “Coming to Grips with Scientific Research in
Daubert’s ‘Brave New World’: The Courts’ Need to Appreciate the Evidentiary
Differences between Validity and Proficiency Studies,” Brooklyn Law Review
Imwinkelried, Edward J. (2003) “The Meaning of ‘Appropriate Validation’ in
Daubert v. Merrell Dow Pharmaceuticals, Inc., Interpreted in Light of the
Broader Rationalist Tradition, not the Narrow Scientific Tradition,” Florida
State University Law Review 30: 735–66.
Inman, Keith, and Norah Rudin (2001) Principles and Practice of
Criminalistics: The Profession of Forensic Science. Boca Raton, Fla.: CRC
Press.
International Association for Identification (2002) “Chicago Fingerprint
Forum Recommendations,” Journal of Forensic Identification 52: 643–45.
Jasanoff, Sheila (2003) “Breaking the Waves in Science Studies: Comment on
H.M. Collins and Robert Evans, ‘The Third Wave of Science Studies’,” Social
Studies of Science 33: 389–400.
Kaye, David H. (2003) “Questioning a Courtroom Proof of the Uniqueness of
Fingerprints,” International Statistical Review 71: 521–33.
Kennedy, Donald (2003) “Forensic Science: Oxymoron?,” Science 302: 1625.
Langenburg, Glenn (2002) Defending Against the Critic’s Curse. CLPEX.
Available at http://www.clpex.com/Articles/CriticsCurse.htm.
Langenburg, Glenn M. (2004) “Pilot Study: A Statistical Analysis of the
ACE-V Methodology-Analysis Stage,” Journal of Forensic Identification 54: 64–
Laudan, Larry (1982) “Science at the Bar: Causes for Concern,” Science,
Technology, and Human Values 7: 16–19.
Laudan, Larry (1983) “More on Creationism,” Science, Technology, and Human
Values 8: 36–38.
Laufer, Berthold (1912) History of the Finger-Print System. Washington,
D.C.: Government Printing Office.
Leo, William F. (2001) “Fingerprint Identification: Objective Science or
Subjective Opinion?,” The Print 17: 1–3.
Lin, C. H., J. H. Liu, J. W. Osterburg, and J. D. Nicol (1982) “Fingerprint
Comparison I: Similarity of Fingerprints,” Journal of Forensic Sciences 27:
Locard, Edmond (1934) Manuel de technique policière: les constats, les
empreintes digitales. 2d ed. Paris: Payot.
Loftus, Elizabeth F. (1996) Eyewitness Testimony. 2d ed. Cambridge, Mass.:
Harvard Univ. Press.
Mills, Steve, and Flynn McRoberts (2004) “Critics Tell Experts: Show Us the
Science,” Chicago Tribune 17 October: 18.
Mnookin, Jennifer L. (2001) “Fingerprint Evidence In An Age of DNA
Profiling,” Brooklyn Law Review 67: 13–70.
Moenssens, André (1999) Is Fingerprint Identification a “Science”? Forensic-
Evidence.com. Available at http://www.forensic-
Moenssens, André (2002) The Reliability of Fingerprint Identification: A
Case Report. Forensic Evidence.com. Available at http://www.forensic-
Moenssens, André (2003) “Fingerprint Identification: A Valid Reliable
‘Forensic Science’?,” Criminal Justice 18 (2): 31–37.
Moenssens, André (2003) Palmprint and Handwriting I.D. Satisfy Daubert Rule.
Forensic Evidence.com. Available at http://www.forensic-evidence.com/site/ID/
National Institute of Justice (2000) Solicitation for Forensic Friction
Ridge (Fingerprint) Examination Validation Studies. Washington, D.C.: United
States Department of Justice.
National Institute of Justice (2003) General Forensic Research and
Development. Washington, D.C.: National Institute of Justice.
Ökrös, Sándor (1965) The Heredity of Papillary Patterns. Translated by A.
Herczeg. Budapest: Akadémiai Kiadó.
Pankanti, Sharath, Salil Prabhakar, and Anil K. Jain (2002) “On the
Individuality of Fingerprints,” IEEE Transactions on Pattern Analysis and
Machine Intelligence 24: 1010–25.
Risinger, D. Michael (2000) “Navigating Expert Reliability: Are Criminal
Standards of Certainty Being Left on the Dock?,” Albany Law Review 64: 99–
Saks, Michael J. (1998) “Merlin and Solomon: Lessons from the Law’s
Formative Encounters with Forensic Identification Science,” Hastings Law
Journal 49: 1069–1141.
Saks, Michael J. (2003) “Reliability Standards: Too High, Too Low, or Just
Right? The Legal and Scientific Evaluation of Forensic Science (Especially
Fingerprint Expert Testimony),” Seton Hall Law Review 33: 1167–87.
Samuels, Julie (2000) “Letter from National Institute of Justice Regarding
the Solicitation of Forensic Friction Ridges (Fingerprint) Examination
Validation Studies,” Forensic Science Communications, July.
Scarborough, Steve (2003) “Contemporary Fingerprint Research,” Weekly Detail
5 May. Available at http://www.clpex.com/Articles/TheDetail/1-
Snell, J. Laurie (2003) “The Controversy of Fingerprints in the Courts,”
CHANCE News 12.05.
Starrs, James E. (1999) “Judicial Control Over Scientific Supermen:
Fingerprint Experts and Others Who Exceed the Bounds,” Criminal Law Bulletin
Stigler, Stephen M. (1995) “Galton and Identification by Fingerprints,”
Genetics 140: 857–60.
Stigler, Stephen M. (2004) “The Fingerprint Controversy,” Issues in Science
and Technology 20(2): 12–13.
Stoney, David A. (1997) “Fingerprint Identification: Scientific Status.” In
Modern Scientific Evidence: The Law and Science of Expert Testimony, edited
by D. L. Faigman, D. H. Kaye, M. J. Saks & J. Sanders. St. Paul: West.
Stoney, David A. (2001) “Measurement of Fingerprint Individuality.” In
Advances in Fingerprint Technology, edited by H. C. Lee & R. E. Gaensslen.
Boca Raton, Fla.: CRC Press.
Thornton, John I., and Joseph L. Peterson (2002) “The General Assumptions
and Rationale of Forensic Identification.” In Science in the Law: Forensic
Science Issues, edited by D. L. Faigman, D. H. Kaye, M. J. Saks & J.
Sanders. St. Paul: West.
Wayman, James L. (2000) “When Bad Science Leads to Good Law: The Disturbing
Irony of the Daubert Hearing in the Case of U.S. v. Byron C. Mitchell,”
Biometrics in the Human Services User Group Newsletter, January. Available
Wertheim, Kasey (2001) Weekly Detail 13 August. Available at
Wertheim, Kasey (2002) “Letter re: ACE-V: Is It Scientifically Reliable and
Accurate?,” Journal of Forensic Identification 52: 669–77.
Wertheim, Kasey, and Alice Maceo (2002) “The Critical Stage of Friction
Ridge Pattern Formation,” Journal of Forensic Identification 52: 35–85.
Wilder, Harris Hawthorne, and Bert Wentworth (1918) Personal Identification:
Methods for the Identification of Individuals, Living or Dead. Boston:
Zabell, S. L. (2005) “Fingerprint Evidence,” Journal of Law and Policy 13:
Daubert v Merrell Dow Pharmaceuticals, 509 US 579 (US 1993).
General Electric Co. v Joiner, 522 US 136 (US 1997).
Kumho Tire v Carmichael, 526 US 137 (US 1999).
People v Gomez, Trial Transcript, No 99CF 0391 (Cal Super Ct Orange Cty).
United States v Crisp, 324 F 3d 261 (4th Cir 2003).
United States v Havvard, 117 F Supp 2d 848 (SD Ind 2000).
United States v Llera Plaza, 179 F Supp 2d 492 (ED Pa 2002) [Llera Plaza I].
United States v Llera Plaza, 188 F Supp 2d 549 (ED Pa 2002) [Llera Plaza II].
United States v Mitchell, 365 F 3d 215 (3d Cir 2004).
United States v Mitchell, Government’s Post-Daubert Hearing Memorandum, No
96–407 (ED Pa 1999).
United States v Mitchell, Memorandum of Law in Support of Mr. Mitchell’s
Motion to Exclude the Government’s Fingerprint Evidence, No 96–407 (ED Pa
1999).
United States v Mitchell, Trial Transcript, No 96–407 (ED Pa 1999).
United States v Sullivan, 246 F Supp 2d 700 (ED Ky 2003).
I am grateful to John R. Vokey, William C. Thompson, and several anonymous
reviewers for helpful comments on this paper. The views expressed and
responsibility for any errors are my own. This project was funded in part by
the Newkirk Center for Science and Society. This material is based upon work
supported by the National Science Foundation under Grant No. 0347305. Any
opinions, findings, and conclusions or recommendations expressed in this
material are those of the author, and do not necessarily reflect the views
of the National Science Foundation, the Newkirk Center, or any individual
named above. Address correspondence to Simon A. Cole, Department of
Criminology, Law & Society, University of California, Irvine, CA 92697-7080,
USA. Telephone: 949 824-1443; fax: 949 824-3001;
e-mail: firstname.lastname@example.org
Feel free to pass The Detail along to other
examiners. This is a free newsletter FOR latent print examiners, BY latent
print examiners. There are no copyrights on The Detail, and the website is open
for all to visit.
If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox,
go ahead and join the list now
so you don't miss out! (To join this free e-mail newsletter, enter your
name and e-mail address on the following page:
You will be sent a
Confirmation e-mail... just click on the link in that e-mail, or paste it
into an Internet Explorer address bar, and you are
signed up!) If you have
problems receiving the Detail at a work e-mail address, department e-mail
filters have sometimes flagged the Detail as potential unsolicited e-mail.
Try subscribing from a home e-mail address, or ask your IT department to
allow e-mails from Topica. Members may
unsubscribe at any time. If you have difficulties with the sign-up process
or have been inadvertently removed from the list, e-mail me personally at
email@example.com and I will try
to work things out.
Until next Monday morning, don't work too hard or too little.
Have a GREAT week!