Updated the Detail archives
Last week we looked at the Accreditation of the Worcester PD Latent Print Unit, by David Grady.
This week we look at a book review by Cindy Rennie of the Habers' new book.
Comments on the book
“Challenges To Fingerprints” by Lyn and Ralph Haber
Lyn and Ralph Haber are research scientists specializing in the cognitive processes of perception, memory, and decision making. As part of their research for this book, they took three courses in latent fingerprint comparison - totalling 104 classroom hours - from Richard Walley, Kasey Wertheim and David Ashbaugh, respectively. The courses focused on the method for performing latent print comparisons, the scientific issues underlying fingerprint identifications, and the training and proficiency testing of latent print examiners.
The authors raise many questions about the scientific validity of fingerprint identifications. They contend that none of the steps taken in the investigative process – from scene attendance to court testimony - have been scientifically tested and proven to be valid and reliable.
They begin by allowing that “friction ridge skin patterns” are permanent and unique, stating that: “The statistical arguments supporting the uniqueness of friction ridge systems on fingers have been generally accepted by the fingerprint profession, by researchers, and by the courts.” Unfortunately, after that they use the word “pattern” outside of its ‘Henry’ definition for the rest of the book.
For instance, when discussing an illustration of four different impressions of the same rolled finger (an exemplar, or “known” impression) they state: “… because the surface skin of fingers is flexible and squishy, the friction ridge pattern in each exemplar is never identical.” (Pg. 20)
They also seem to think that fingerprint examiners memorize every exemplar (“suspect” or “known”) fingerprint that they examine: “Success in finding two exemplars with the same pattern depends on fingerprint examiners having a perfect memory for every exemplar they have ever seen, a memory extending over years. Memory research has shown that such memory demands are impossible. This argument is fundamentally flawed and offers no support for the uniqueness of exemplar fingerprints. At present there is no research to support a uniqueness assumption for exemplar fingerprints.”
The authors state that exemplar fingerprints also suffer from being “substantially distorted in the very process of being transferred from the three-dimensional finger to a two-dimensional print card” for the following reasons: “friction ridge skin stretches and compresses under variations in the amount of pressure during the rolling or straight downward process; each time the finger is rolled or pressed downward, a different area of the finger touches the white card or the scanner, so the original pattern never appears exactly; the inking of the finger is different each time – too much ink may fill and obscure the grooves and insufficient ink may produce artificial discontinuities in ridges, both of which misrepresent the pattern on the finger; scanning does not reproduce all the detail, due to poor resolution, and plain impressions show only part of the friction ridge skin surface”. (pg. 40)
They have a similar theory regarding the differences between fingers and latent impressions of fingers, stating that the latent image is diminished in quantity and quality due to the following factors:
“The size of an average latent print is only 1/5 of the size of the rolled print of the same finger; excessive downward pressure of the finger on the surface obscures much of the pattern, light pressure fails to reproduce the pattern; excessive lateral pressure of the finger on the surface produces distortion and smear in the pattern; dirt on the surface obscures or alters the pattern; over- and under-laid prints on the surface make separation of the target fingerprint more difficult, compared to looking at a single pattern on a single finger; double taps create two partially overlapping prints of the same finger, and are often difficult to separate, compared to a single pattern on a single finger; graining in wood or other patterns inherent to the surface mimic, distort or degrade the patterning on the finger; an irregular surface distorts the distances between ridges and grooves on the finger; the viscosity of the substance that produces the fingerprint distorts the patterning; and the lifting process alters the patterning of the latent.” (Pg. 41)
The authors offer the following challenges to fingerprint evidence:
“The accuracy of fingerprint comparisons can only be answered by scientific experiment. Accuracy is assessed by determining the amount of agreement with ‘ground truth’ (absolute knowledge) in the conclusions reached by examiners.
“The ACE (“Analyze, Compare, Evaluate”) method has never been tested empirically. Examiners are not required to keep bench notes when they perform a comparison, so there is no contemporaneous evidence as to which elements of the prints they compared, which differences they explained, and which features they found in agreement. There is no evidence that they were following the ACE method, or applying it correctly.
“Published versions of ACE differ. No single version of ACE has been accepted as official by the profession. ACE cannot be tested for its validity. It has no validated standards underlying its conclusions. There is no proof that ACE is a reliable method.
“There have been no independent tests of the accuracy of different AFIS search algorithms. No studies have demonstrated whether different ways of pre-processing latents for submission produce more accurate results.
“No research has been performed to determine how much pattern, how sharp the detail, and/or how many features or events along ridges in sequence must be present in a latent fingerprint for the probability of an erroneous identification to approach zero. It is for this reason the IAI abandoned point counting in 1973.
“There is no requirement for the examiner to complete a full analysis of the latent before comparing it to the print of a suspect. Many examiners testify that they begin analysis with the latent and exemplar (“known” print) side by side. This procedure is biased. Every time an examiner first identifies a feature in the exemplar and then searches for it in the latent, he increases the probability of an erroneous identification. The extent of this biasing practice is untested. In the absence of bench notes, its prevalence is unknowable and un-testable.”
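The Habers' notion of measuring accuracy against "ground truth" amounts to scoring conclusions against known answers. A minimal sketch of that idea follows; the labels below are invented for illustration and are not data from the book or any real study:

```python
# Purely illustrative: score examiner conclusions against known ground truth.
# These labels are invented example data, not results from any real study.
ground_truth = ["match", "no-match", "match", "no-match", "match"]
conclusions  = ["match", "no-match", "no-match", "no-match", "match"]

errors = sum(truth != call for truth, call in zip(ground_truth, conclusions))
error_rate = errors / len(ground_truth)
print(f"errors: {errors} of {len(ground_truth)} (error rate {error_rate:.0%})")
# → errors: 1 of 5 (error rate 20%)
```

The point of the Habers' challenge is that casework never comes with such a `ground_truth` column, so this calculation can only be performed in a designed experiment.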
The authors offer the following comments on the process of comparing ‘exemplar’ (i.e. ‘known’) prints to latent impressions, and state why (in their opinion) the process is not scientific.
1. “The method is not quantitative and specified. Each step in the method should be fully described and objective, to allow the application of the method to be reliable.
2. “The method does not have a known error rate and should be experimentally tested to see how well it works.
3. “An objective description of features-in-place is required but impossible because no two images of the same finger are exactly alike, and the features themselves are not sufficiently numerous to differentiate one pattern from all other patterns.
4. “There are no objective standards for conclusions. There is no research evidence to show how many corresponding features are required before a decision of identity is reached, and the standards for identification differ between police services and sometimes between examiners.
5. “There are no objective measures of latent difficulty. There is no measure of the quality of a latent, so no empirical study can be done to show how much detail or clarity is needed to individualize a latent.
6. “There is no protection from bias. A scientific method defines procedures designed to prevent bias. Fingerprint examiners are routinely exposed to factors that could bias their findings, including the type of crime involved, the fact that there is other evidence indicating that the suspect was involved in the occurrence, and the practice of comparing the latent to the fingerprints of the suspect before fully analyzing the latent on its own.
7. “There are no proper training and proficiency testing programs. In the absence of a training program anchored to an objectively-described method, examiners are trained differently and reach different conclusions.”
The authors further comment on the following “empirical claims” made by members of the profession: (pg. 180)
A. Examiners employ a single comparison method known as ACE.
“… there is no single ACE method. No complete description exists in the literature, published descriptions differ in fundamental ways, and the profession has not adopted and approved a specific description of the method.”
“… there is no standardized training program, so different examiners receive different training. Current proficiency tests, such as CTS, do not require examiners to record the steps they use to reach a conclusion, nor are examiners required to make bench notes on the job during the comparison itself.”
B. The ACE error rate is (close to) zero because few publicized erroneous identifications have been found.
“No crime laboratory, including the FBI, has reported the number of erroneous identifications it makes and discovers in-house.”
“The error rate of ACE is not known because it has never been scientifically tested. Examiner proficiency cannot be assessed, so the contribution of examiner error to the erroneous identification rate is unknown.”
C. Proficiency test results indicate a very low ACE error rate.
“Proficiency tests do not reflect casework and have not been validated, and therefore cannot be used as evidence of examiner accuracy.”
D. ACE is based on the Scientific Method.
“The argument that applying the ACE method to a single fingerprint comparison has its roots in the scientific method of hypothesis testing is erroneous. The results of a single comparison do not provide information about the accuracy of the method itself.”
“… the outcome of (one) particular case is not a test of the accuracy of the ACE method itself. To test this hypothesis, a large number of fingerprint pairs are needed, with ground truth known for each one, and a large number of highly skilled examiners are needed who demonstrate that they have followed the official description of the ACE method. This is an entirely different set of procedures from comparing a single set of fingerprints and offering a conclusion without knowing the ground truth.”
E. Examiner certainty makes conclusions valid.
“Fingerprint examiners claim that the error rate of the ACE method is zero because they only testify to a conclusion when they feel absolutely certain they are right. However, scientific research has repeatedly shown that certainty is not a measure of correctness, and that the correlation between certainty and accuracy is very low under normal circumstances.”
F. The 50K study demonstrates that fingerprints are unique.
“In a trivial sense irrelevant to fingerprint comparison, it is true that every fingerprint, whether latent or exemplar, is unique, that is, no two different prints from the same finger are ever identical. In contrast, it’s trivial and obvious that if I make a fingerprint and then make a photographic or digital copy of that image, the pattern in the two copies will be identical. This is exactly what was done in the 50K study, and the computer successfully found great similarity between the identical copies and less similarity between non-identical prints. The 50K study as performed fails to demonstrate the uniqueness of exemplars. It also fails altogether to demonstrate that human fingerprint examiners can distinguish correctly between two different images from single and multiple donors.”
G. The 50K study demonstrates that latent prints are correctly matched to exemplars.
“The so-called latents in the 50K study were created by masking off all but the information-rich central 21.7% of the exemplar and comparing it to the unmasked copy of itself. The results do not apply to latents or to fingerprint comparison accuracy.”
RHETORICAL CLAIMS: (Pg. 184)
A. One hundred years of court acceptance means the ACE method is valid.
Validity can only be tested by an empirical experiment, which has never been done.
B. Adversarial testing shows the error rate of ACE is zero.
Adversarial testing does not provide a mechanism to assess the error rate of ACE. The vast majority of cases involving fingerprint evidence result in plea bargains, or the fingerprint evidence goes unchallenged and is therefore not subjected to adversarial testing.
C. Verification assures the error rate of ACE is close to zero.
Not all labs require verification. Those that do rarely require blind verification. Non-blind verification permits bias to reduce the chances of detecting errors.
D. Verification procedures provide a test of the validity of ACE.
In casework verification testing, ‘ground truth’ (i.e. the correct answer) is unknown, and agreement between two examiners might mean that they both were correct in the identification, or that they both made the same error.
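The rebuttal above, that agreement without ground truth can mean two examiners making the same mistake, can be sketched with a toy simulation. All parameters here (the examiner accuracy and the "follow bias" of a non-blind verifier) are invented assumptions for illustration, not figures from the book:

```python
import random

def simulate(trials=100_000, p_correct=0.95, follow_bias=0.8, seed=1):
    """Two examiners judge the same prints, with ground truth known.
    Under non-blind verification the second examiner is assumed to
    echo the first examiner's call with probability `follow_bias`
    (an invented parameter, for illustration only)."""
    random.seed(seed)
    agree = both_wrong = 0
    for _ in range(trials):
        first_ok = random.random() < p_correct
        if random.random() < follow_bias:
            second_ok = first_ok                      # echoes the first call
        else:
            second_ok = random.random() < p_correct   # independent judgement
        if first_ok == second_ok:
            agree += 1
            if not first_ok:
                both_wrong += 1   # they agree, and both are wrong
    return agree / trials, both_wrong / trials

agreement, shared_errors = simulate()
print(f"agreement: {agreement:.3f}, agreeing-but-both-wrong: {shared_errors:.3f}")
```

Under these assumed numbers the two examiners agree about 98% of the time, yet roughly 4% of all comparisons are cases where they agree and are both wrong, which is exactly why an agreement rate by itself cannot validate the method.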
E. Second opinions protect an innocent defendant.
Evidence shows that two examiners comparing the same set of fingerprints do not always agree on their conclusions. A second opinion is no substitute for a challenge of the scientific status of the method and/or the examiner.
F. The error rate of the ACE method is zero.
This has never been proven through proper scientific testing.
G. Only poorly-trained or inexperienced examiners make erroneous identifications.
The three FBI examiners who concurred in the mis-identification of Brandon Mayfield were all well-trained and highly experienced.
The Habers end their book by listing six court cases in which fingerprint evidence was challenged:
(1) The government’s fingerprint examiners disagreed on the standards for sufficiency of agreement for an identification. The judge concluded that without a reliable sufficiency standard, the testimony of fingerprint examiners was not admissible. (U.S. v. Parks 1991)
(2) The government failed to produce evidence of the validity of the methods used to reach the conclusions introduced into court. The government appealed the decision, and lost. (Jacobs v. US Virgin Islands 2002)
(3) The government entered a ‘simultaneous fingerprint’ impression – more than one finger impression was present, but no finger had sufficient information for an identification on its own – and it was accepted by the courts. The defendant appealed, and the Supreme Judicial Court ruled that simultaneous impressions did not meet Daubert standards.
Feel free to pass The Detail along to other examiners for Fair Use. This
is a not-for-profit newsletter FOR friction ridge examiners, BY
friction ridge examiners. The website is open for all to visit!
If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out! (To join this free e-mail newsletter, enter your name and e-mail address on the following page: http://www.clpex.com/Subscribe.htm You will be sent a Confirmation e-mail... just click on the link in that e-mail, or paste it into an Internet Explorer address bar, and you are signed up!)

If you have problems receiving the Detail from a work e-mail address, there have been past issues with department e-mail filters considering the Detail as potential unsolicited e-mail. Try subscribing from a home e-mail address or contact your IT department to "whitelist" the Weekly Detail. Members may unsubscribe at any time.

If you have difficulties with the sign-up process or have been inadvertently removed from the list, e-mail me personally at firstname.lastname@example.org and I will try to work things out.
Until next Monday morning, don't work too hard or too little.
Have a GREAT week!