
Good Morning!

via THE WEEKLY DETAIL

Monday, June 13, 2011
The purpose of the Detail is to help keep you informed of the current state of affairs in the latent print community, to provide an avenue to circulate original fingerprint-related articles, and to announce important events as they happen in our field.
Breaking NEWz you can UzE...
by Stephanie Potter
4 Years Later, Man Arrested For Stealing Monongahela Library Computer
WPXI Pittsburgh 06-06-2011
"We contacted the state police and they were able to retrieve fingerprints off the glass," said Monongahela Police Chief Brian Tempest. Tempest said the prints were entered into a state computer system, but no match was found at the time. ...
Judge denies bond for accused cop killer Dontae Morris
ABC Action News 06-10-2011
A fingerprint expert also testified that Morris' fingerprints were found at the murder scene of Harold Wright. An FDLE crime lab analyst said the bullet that killed Derek Anderson was fired from the same gun that killed Kocab and Curtis. ...
Squatter arrested in Deerfield Beach for several Boca Raton burglaries
WPEC 06-07-2011
... latent fingerprints were recovered. The fingerprints from both crimes, and from the Pontiac, were analyzed and searched through the Automated Fingerprint Identification System (AFIS). The fingerprints were identified as belonging to Lamont Shuler. ...
Decatur Police Make Arrest in String of Auto Entries
Patch.com 06-06-2011
As a result, the department can now process fingerprint evidence in-house rather than sending it to the GBI. "Israel was spotted in the East Ponce de Leon/Glendale area by two Decatur officers on their way into work on 06-04-11 and taken into custody ...
Fingerprint led to charges in rape case
Tulsa World 06-11-2011
A fingerprint found at the scene where a woman was raped and severely beaten led to charges against an ex-con, police said Friday. Police have been searching for Rlando Andrew Bowie, 22, since he was charged ...
Recent CLPEX Posting Activity
McKie Inquiry costs $6m
started 04-12-10; last post 06-11-11
Needing guidance on a DFO/Nin oven
started 06-08-11; last post 06-10-11
CTS Test 10-516: Item 5D
started 05-27-11; last post 06-09-11
Regina v. Smith (2011)
started 05-26-11; last post 06-08-11
Milwaukee Hilton reservation
started 06-06-11; last post 06-06-11
circular reasoning vs circular process
started 06-04-11; last post 06-06-11
Now Read Nat'l Acad. Press Pubs Free
started 06-06-11; last post 06-06-11
Latent Print Fabrication
started 06-04-11; last post 06-04-11
UPDATES ON CLPEX.com

 No major website updates this week

ANNOUNCEMENTS

Funny Fingerprint Find... From a do-it-yourself fingerprint kit:  "dust for fingerprints like a professional" - (find a good surface, plant a print, then develop and lift it)

I guess I've been doing it wrong all these years.

LAST WEEK

 we looked at the Department of Justice, Office of the Inspector General report related to the Mayfield identification. 

THIS WEEK

we look at the results of the FBI Laboratory Division's "Black Box Study" related to latent print decisions. As always with articles of this length and with embedded figures, charts, etc., I recommend reading the article in its full length and original formatting:

http://www.pnas.org/content/early/2011/04/18/1018707108.full.pdf+html


__________________________________________

Accuracy and reliability of forensic latent fingerprint decisions

Bradford T. Ulery [a], R. Austin Hicklin [a], JoAnn Buscaglia [b], and Maria Antonia Roberts [c]

[a] Noblis, 3150 Fairview Park Drive, Falls Church, VA 22042; [b] Counterterrorism and Forensic Science Research Unit, Federal Bureau of Investigation Laboratory Division, 2501 Investigation Parkway, Quantico, VA 22135; [c] Latent Print Support Unit, Federal Bureau of Investigation Laboratory Division, 2501 Investigation Parkway, Quantico, VA 22135

Edited by Stephen E. Fienberg, Carnegie Mellon University, Pittsburgh, PA, and approved March 31, 2011 (received for review December 16, 2010)

The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners’ decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners’ decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.

The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The accuracy of decisions made by latent print examiners has not been ascertained in a large-scale study, despite over one hundred years of the forensic use of fingerprints. Previous studies (1–4) are surveyed in ref. 5. Recently, there has been increased scrutiny of the discipline resulting from publicized errors (6) and a series of court admissibility challenges to the scientific basis of fingerprint evidence (e.g., 7–9). In response to the misidentification of a latent print in the 2004 Madrid bombing (10), a Federal Bureau of Investigation (FBI) Laboratory review committee evaluated the scientific basis of friction ridge examination. That committee recommended research, including the study described in this report: a test of the performance of latent print examiners (11). The need for evaluations of the accuracy of fingerprint examination decisions has also been underscored in critiques of the forensic sciences by the National Research Council (NRC, ref. 12) and others (e.g., refs. 13–16).

Background

Latent prints (“latents”) are friction ridge impressions (fingerprints, palmprints, or footprints) left unintentionally on items such as those found at crime scenes (SI Appendix, Glossary). Exemplar prints (“exemplars”), generally of higher quality, are collected under controlled conditions from a known subject using ink on paper or digitally with a livescan device (17). Latent print examiners compare latents to exemplars, using their expertise rather than a quantitative standard to determine if the information content is sufficient to make a decision. Latent print examination can be complex because latents are often small, unclear, distorted, smudged, or contain few features; can overlap with other prints or appear on complex backgrounds; and can contain artifacts from the collection process. Because of this complexity, experts must be trained in working with the various difficult attributes of latents.

During examination, a latent is compared against one or more exemplars. These are generally collected from persons of interest in a particular case, persons with legitimate access to a crime scene, or obtained by searching the latent against an Automated Fingerprint Identification System (AFIS), which is designed to select from a large database those exemplars that are most similar to the latent being searched. For latent searches, an AFIS only provides a list of candidate exemplars; comparison decisions must be made by a latent print examiner. Exemplars selected by an AFIS are far more likely to be similar to the latent than exemplars selected by other means, potentially increasing the risk of examiner error (18).

The prevailing method for latent print examination is known as analysis, comparison, evaluation, and verification (ACE-V) (19, 20). The ACE portion of the process results in one of four decisions: the analysis decision of no value (unsuitable for comparison); or the comparison/evaluation decisions of individualization (from the same source), exclusion (from different sources), or inconclusive. The Scientific Working Group on Friction Ridge Analysis, Study and Technology guidelines for operational procedures (21) require verification for individualization decisions, but verification is optional for exclusion or inconclusive decisions. Verification may be blind to the initial examiner’s decision, in which case all types of decisions would need to be verified. ACE-V has come under criticism by some as being a general approach that is underspecified (e.g., refs. 14 and 15).

Latent-exemplar image pairs collected under controlled conditions for research are known to be mated (from the same source) or nonmated (from different sources). An individualization decision based on mated prints is a true positive, but if based on nonmated prints, it is a false positive (error); an exclusion decision based on mated prints is a false negative (error), but is a true negative if based on nonmated prints. The term “error” is used in this paper only in reference to false positive and false negative conclusions when they contradict known ground truth. No such absolute criteria exist for judging whether the evidence is sufficient to reach a conclusion as opposed to making an inconclusive or no-value decision. The best information we have to evaluate the appropriateness of reaching a conclusion is the collective judgments of the experts. Various approaches have been proposed to define sufficiency in terms of objective minimum criteria (e.g., ref. 22), and research is ongoing in this area (e.g., ref. 23). Our study is based on a black box approach, evaluating the examiners’ accuracy and consensus in making decisions rather than attempting to determine or dictate how those decisions are made (11, 24).
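
To make these outcome definitions concrete, here is a minimal sketch in Python (the function name and labels are ours, added for illustration; they are not part of the study):

def classify(decision, mated):
    """Map an examiner's decision plus ground truth to an outcome label."""
    if decision == "individualization":
        return "true positive" if mated else "false positive (error)"
    if decision == "exclusion":
        return "false negative (error)" if mated else "true negative"
    # "Inconclusive" and "no value" decisions are not scored as errors,
    # because no absolute criterion exists for judging sufficiency.
    return "no conclusion"

print(classify("individualization", mated=False))  # false positive (error)
print(classify("exclusion", mated=True))           # false negative (error)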

Study Description

This study is part of a larger research effort to understand the accuracy of examiner conclusions, the level of consensus among examiners on decisions, and how the quantity and quality of image features relate to these outcomes. Key objectives of this study were to determine the frequency of false positive and false negative errors, the extent of consensus among examiners, and factors contributing to variability in results. We designed the study to enable additional exploratory analyses and gain insight in support of the larger research effort.

There is substantial variability in the attributes of latent prints, in the capabilities of latent print examiners, in the types of casework received by agencies, and in the procedures used by those agencies. Average measures of performance across this heterogeneous population are of limited value (25), but they do provide insight necessary to understand the problem and scope future work. Furthermore, there are currently no means by which all latent print examiners in the United States could be enumerated or used as the basis for sampling: A representative sample of latent print examiners or casework is impracticable.

To reduce the problem of heterogeneity, we limited our scope to a study of performance under a single, operationally common scenario that would yield relevant results. This study evaluated examiners at the key decision points during analysis and evaluation. Operational latent print examination processes may include additional steps, such as examination of original evidence or paper fingerprint cards, review of multiple exemplars from a subject, consultation with other examiners, revisiting difficult comparisons, verification by another examiner, and quality assurance review. These steps are implemented to reduce the possibility of error.

Ideally, a study would be conducted in which participants were not aware that they were being tested. The practicality of such an approach even within a single organization would depend on the type of casework. Fully electronic casework could allow insertion of test data into actual casework, but this may be complex to the point of infeasibility for agencies in which most examinations involve physical evidence, especially when chain-of-custody issues are considered. Combining results among multiple agencies with heterogeneous procedures and types of casework would be problematic.

In order to get a broad cross-section of the latent print examiner community, participation was open to practicing latent print examiners from across the fingerprint community. A total of 169 latent print examiners participated; most were volunteers, while the others were encouraged or required to participate by their employers. Participants were diverse with respect to organization, training history, and other factors. The latent print examiners were generally highly experienced: Median experience was 10 y, and 83% were certified as latent print examiners. More detailed descriptions of participants, fingerprint data, and study procedures are included in SI Appendix, Materials and Methods.

The fingerprint data included 356 latents, from 165 distinct fingers from 21 people, and 484 exemplars. These were combined to form 744 distinct latent-exemplar image pairs. There were 520 mated and 224 nonmated pairs. The number of fingerprint pairs used in the study, and the number of examiners assigned to each pair, were selected as a balance between competing research priorities: Measuring consensus and variability among examiners required multiple examiners for each image pair, while incorporating a broad range of fingerprints for measuring image-specific effects required a large number of images.

We sought diversity in fingerprint data, within a range typical of casework. Subject matter experts selected the latents and mated exemplars from a much larger pool of images to include a broad range of attributes and quality. Latents of low quality were included in the study to evaluate the consensus among examiners in making value decisions about difficult latents. The exemplar data included a larger proportion of poor-quality exemplars than would be representative of exemplars from the FBI’s Integrated AFIS (IAFIS) (SI Appendix, Table S4). Image pairs were selected to be challenging: Mated pairs were randomly selected from the multiple latents and exemplars available for each finger position; nonmated pairs were based on difficult comparisons resulting from searches of IAFIS, which includes exemplars from over 58 million persons with criminal records, or 580 million distinct fingers (SI Appendix, section 1.3). Participants were surveyed, and a large majority of the respondents agreed that the data were representative of casework (SI Appendix, Table S3).

Noblis developed custom software for this study in consultation with latent print examiners, who also assessed the software and test procedures in a pilot study. The software presented latent and exemplar images to the participants, allowed a limited amount of image processing, and recorded their decisions, as indicated in Fig. 1 (SI Appendix, section 1.2). Each of the examiners was randomly assigned approximately 100 image pairs out of the total pool of 744 image pairs (SI Appendix, section 1.3). The image pairs were presented in a preassigned order; examiners could not revisit previous comparisons. They were given several weeks to complete the test. Examiners were instructed to use the same diligence that they would use in performing casework. Participants were assured that their results would remain anonymous; a coding system was used to ensure anonymity during analysis and in reporting.
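
A hypothetical sketch of the assignment scheme just described, in Python (plain random sampling with a fixed seed stands in here for the study's actual assignment procedure, whose details are in SI Appendix):

import random

N_PAIRS, N_EXAMINERS, PAIRS_PER_EXAMINER = 744, 169, 100

rng = random.Random(2011)  # fixed seed so the illustration is reproducible
assignments = {
    examiner: rng.sample(range(N_PAIRS), PAIRS_PER_EXAMINER)
    for examiner in range(N_EXAMINERS)
}
# Each examiner works through his or her pairs in a preassigned order
# and cannot revisit previous comparisons.
print(len(assignments[0]), assignments[0][:5])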

Results

False Positives

False Negatives

Posterior Probabilities

Consensus

Examiner Skill

Conclusions

Assessing the accuracy and reliability of latent print examiners is of great concern to the legal and forensic science communities. We evaluated the accuracy of decisions made by latent print examiners on difficult fingerprint comparisons in a computer-based test corresponding to one stage in AFIS casework. The rates measured in this study provide useful reference estimates that can inform decision making and guide future research; the results are not representative of all situations, and do not account for operational context and safeguards. False positive errors (erroneous individualizations) were made at the rate of 0.1% and never by two examiners on the same comparison. Five of the six errors occurred on image pairs where a large majority of examiners made true negatives. These results indicate that blind verification should be highly effective at detecting this type of error. Five of the 169 examiners (3%) committed false positive errors, out of an average of 33 nonmated pairs per examiner.

False negative errors (erroneous exclusions) were much more frequent (7.5% of mated comparisons). The majority of examiners (85%) committed at least one false negative error, with individual examiner error rates varying substantially, out of an average of 69 mated pairs per examiner. Blind verification would have detected the majority of the false negative errors; however, verification of exclusion decisions is not generally practiced in operational procedures, and blind verification is even less frequent. Policymakers will need to consider tradeoffs between the financial and societal costs and benefits of additional verifications.
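
A back-of-the-envelope check of these aggregates, using only the averages quoted above (the exact per-examiner counts are in the paper; these figures are approximations):

n_examiners = 169
avg_nonmated = 33  # average nonmated pairs per examiner (quoted above)
avg_mated = 69     # average mated pairs per examiner (quoted above)

nonmated_comparisons = n_examiners * avg_nonmated   # ~5,577
mated_comparisons = n_examiners * avg_mated         # ~11,661

false_positives = 6                                 # "five of the six errors" above
false_negatives = round(0.075 * mated_comparisons)  # implied by the 7.5% rate

print(f"false positive rate ~ {false_positives / nonmated_comparisons:.2%}")  # ~0.11%
print(f"false negative rate ~ {false_negatives / mated_comparisons:.2%}")     # ~7.50%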

Most of the false positive errors involved latents on the most complex combination of processing and substrate included in the study. The likelihood of false negatives also varied by image. Further research is necessary to identify the attributes of prints associated with false positive or false negative errors, such as quality, quantity of features, distortion, background, substrate, and processing method.

Examiners reached varied levels of consensus on value and comparison decisions. Although there is currently no objective basis for determining the sufficiency of information necessary to reach a fingerprint examination decision, further analysis of the data from this study will assist in defining quality and quantity metrics for sufficiency. This lack of consensus for comparison decisions has a potential impact on verification: Two examiners will sometimes reach different conclusions on a comparison.

Examiner skill is multidimensional and is not limited to error rates. Examiner skill varied substantially. We measured various dimensions of skill and found them to be largely independent. This study is part of a larger ongoing research effort. To further our understanding of the accuracy and reliability of latent print examiner decisions, we are developing fingerprint quality and quantity metrics and analyzing their relationship to value and comparison decisions; extending our analyses to include detailed examiner markup of feature correspondence; collecting fingerprints specifically to explore how complexity of background, substrate and processing are related to comparison decisions; and measuring intraexaminer repeatability over time.

This study addresses in part NRC Recommendation 3 (12), developing and quantifying measures of accuracy and reliability for forensic analyses, and will assist in supporting the scientific basis of forensic fingerprint examination. The results of this study will provide insight into developing operational procedures and training of latent print examiners and will aid in the experimental design of future proficiency tests of latent print examiners.

1. Evett IW, Williams RL (1995) A review of the 16 point fingerprint standard in England and Wales. Fingerprint Whorld 21.

2. Wertheim K, Langenburg G, Moenssens A (2006) A report of latent print examiner accuracy during comparison training exercises. J Forensic Identification 56:55–93.

3. Gutowski S (2006) Error rates in fingerprint examination: The view in 2006. Forensic Bulletin Autumn 2006:18–19.

4. Langenburg G, Champod C, Wertheim P (2009) Testing for potential contextual bias effects during the verification stage of the ACE-V methodology when conducting fingerprint comparisons. J Forensic Sci 54:571–582.

5. Langenburg G (2009) A performance study of the ACE-V process. J Forensic Identification 59:219–257.

6. Cole SA (2005) More than zero: Accounting for error in latent fingerprint identification. J Crim Law Criminol 95:985–1078.

7. United States v Mitchell No. 96-407 (ED PA 1999).

8. United States v Llera Plaza Cr. No. 98-362-10, 11, 12 (ED PA 2002).

9. Maryland v Rose No. K06-0545 (MD Cir 2007).

10. Office of the Inspector General (2006) A Review of the FBI’s Handling of the Brandon Mayfield Case (US Department of Justice, Washington, DC).

11. Budowle B, Buscaglia J, Perlman RS (2006) Review of the scientific basis for friction ridge comparisons as a means of identification: Committee findings and recommendations. Forensic Sci Commun 8:1.

12. National Research Council (2009) Strengthening Forensic Science in the United States: A Path Forward (National Academies Press, Washington, DC).

13. Koehler JJ (2008) Fingerprint error rates and proficiency tests: What they are and why they matter. Hastings Law J 59:1077–1110.

14. Mnookin JL (2008) The validity of latent fingerprint identification: Confessions of a fingerprinting moderate. Law Probability and Risk 7:127–141.

15. Haber L, Haber RN (2008) Scientific validation of fingerprint evidence under Daubert. Law Probability and Risk 7:87–109.

16. Cole S (2006) Is fingerprint identification valid? Rhetorics of reliability in fingerprint proponents’ discourse. Law Policy 28:109–135.

17. Scientific Working Group on Friction Ridge Analysis, Study and Technology (2011) Standard terminology of friction ridge examination, Version 3. Available at http://www.swgfast.org/documents/terminology/110323_Standard-Terminology_3.0.pdf.

18. Dror I, Mnookin J (2010) The use of technology in human expert domains: Challenges and risks arising from the use of automated fingerprint identification systems in forensic science. Law Probability and Risk 9:47–67.

19. Huber RA (1959) Expert witness. Criminal Law Quarterly 2:276–296.

20. Ashbaugh D (1999) Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology (CRC Press, New York).

21. Scientific Working Group on Friction Ridge Analysis, Study and Technology (2002) Friction ridge examination methodology for latent print examiners, Version 1.01. Available at http://www.swgfast.org/documents/methodology/100506-Methodology-Reformatted-1.01.pdf.

22. Champod C (1995) Edmond Locard—Numerical standards and “probable” identifications. J Forensic Identification 45:136–155.

23. Neumann C, et al. (2007) Computation of likelihood ratios in fingerprint identification for configurations of any number of minutiae. J Forensic Sci 52:54–64.

24. Mnookin JL (2008) Of black boxes, instruments, and experts: Testing the validity of forensic science. Episteme 5:343–358.

25. Budowle B, et al. (2009) A perspective on errors, bias, and interpretation in the forensic sciences and direction for continuing advancement. J Forensic Sci 54:798–809.

26. Saks M, Koehler J (2005) The coming paradigm shift in forensic identification science. Science 309:892–895.

27. Stoney DA (1991) What made us ever think we could individualize using statistics? J Forensic Sci Soc 31:197–199.

28. Grieve DL (1996) Possession of truth. J Forensic Identification 46:521–528.

__________________________________________

Feel free to pass The Detail along to other examiners for Fair Use. This is a not-for-profit newsletter FOR friction ridge examiners, BY friction ridge examiners. The website is open for all to visit!

If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out! (To join this free e-mail newsletter, enter your name and e-mail address on the following page: http://www.clpex.com/Subscribe.htm You will be sent a Confirmation e-mail... just click on the link in that e-mail, or paste it into an Internet Explorer address bar, and you are signed up!) If you have problems receiving the Detail from a work e-mail address, there have been past issues with department e-mail filters considering the Detail as potential unsolicited e-mail. Try subscribing from a home e-mail address or contact your IT department to "whitelist" the Weekly Detail. Members may unsubscribe at any time. If you have difficulties with the sign-up process or have been inadvertently removed from the list, e-mail me personally at kaseywertheim@aol.com and I will try to work things out.

Until next Monday morning, don't work too hard or too little.

Have a GREAT week!
