T  H  E      D  E  T  A  I  L

The Detail Archives

Discuss This Issue

Subscribe to The Detail

Monday, January 7, 2002

Good morning via the "Detail," a weekly e-mail newsletter that greets latent print examiners around the globe every Monday morning. The purpose of the Detail is to help keep you informed of the current state of affairs in the latent print community, to provide an avenue to circulate original fingerprint-related articles, and to announce important events as they happen in our field.

BREAKING NEWz you can UzE...

IT'S BIDNOW WEEK!  For sale this week on eBay is a neat vintage latent fingerprint magnifying glass.  I believe this glass was produced and distributed by the Institute of Applied Science during the first half of the 1900s.  Click Here for more info and pictures.


Last week we started a series on courtroom testimony by Ron Smith.  This week, we return to the earlier topic of statistics and fingerprint examination.  Christophe Champod brings us this week's Detail:


Dear Detail readers,

I am very pleased to have been offered the opportunity to expand on my views regarding statistics and probabilities in latent print comparison in this forum.  I thought for a while about the format of this contribution and finally decided to share the constant debate I have between two distinct parts of my brain: me as a latent print examiner (although I am not a daily practitioner) and me as a scientist exploring statistics applied to forensic problems (although I am not a statistician).  I hope this battle of wits will cover some relevant ground, and if I fail (which is statistically possible, by the way), please feel free to continue the discussion.

What is the fundamental meaning behind statistics applied to latent print identification?

In a nutshell, the application of statistics to latent print identification can be viewed as an attempt to translate the rarity of the fingerprint features used by latent print examiners into numerical probabilities.  Practitioners are approaching the problem statistically (although subjectively) when, for example, they observe that some level 1 or level 2 details may be rarer than others, or that some areas of a latent print are more prone than others to contain an "open field" of ridges with no level 2 detail.  One aim of statistics is to make that knowledge explicit by gathering data on selected features through appropriate surveys (samples).  For example, it can be shown that it is very rare (1/10,000) to observe 5 continuous ridges around the core without any minutiae present, or that a "hook" is 10 times rarer than a ridge ending.  Statistics will allow not only the gathering of data on individual features, but also the testing of probabilistic models that predict the probability of a combination of features.  For example, a model could robustly assign a match probability of 1/10,000 to a configuration of 3 ridge endings and a hook.
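To make the arithmetic concrete, here is a minimal sketch in Python of the kind of calculation such a model performs.  The feature frequencies and the naive independence assumption (simply multiplying frequencies together) are my own illustrative choices, not values or methods from any published model:

```python
# Minimal sketch of combining feature rarities into a match probability.
# The frequencies below are HYPOTHETICAL, for illustration only, and the
# naive independence assumption (multiplying frequencies) is precisely
# what serious probabilistic models must improve upon.

FEATURE_FREQUENCY = {
    "ridge_ending": 1 / 10,  # assumed relative frequency (illustrative)
    "hook": 1 / 100,         # "10 times rarer than a ridge ending"
}

def naive_match_probability(features):
    """Multiply per-feature frequencies, ignoring dependence and position."""
    p = 1.0
    for feature in features:
        p *= FEATURE_FREQUENCY[feature]
    return p

# A configuration of 3 ridge endings and a hook:
config = ["ridge_ending"] * 3 + ["hook"]
print(naive_match_probability(config))
```

A real model cannot stop here: neighbouring features are not independent, and their spatial configuration carries information too, which is why naive multiplication is only a sketch of the idea.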

But these examples suggest that statistics will apply only to level 2 features!?!

Not at all.  The process can be extended to any feature that can be clearly articulated or defined.  However, it is fair to say that most past statistical efforts have been devoted to level 1 and level 2 features.  I believe more statistical analysis of level 3 features offers great promise.  The difficulty in such an endeavor is for the discipline to agree on a vocabulary with regard to specific characteristics.  Perhaps SWGFAST and/or other organizations could bring consistency in this regard.  Put very simply, as soon as a feature can be defined and counted, it can be measured and a statistical analysis can be performed.

Surely NO statistical model available now or in the future could EVER take into account the extensive volume of information that examiners use during the examination process!?!

This must be true.  Any statistical modeling of a complex pattern like friction ridge skin cannot pretend to capture the exhaustive set of features used by examiners.  The model must, by definition, be designed on a subset of features.  The coded features are selected according to multiple constraints: mainly their relevance to the identification process (with strong input from examiners here) and the capacity to define and encode them consistently and process them automatically.  Generally, desirable features display limited within-source variability and large between-source variability.  Any model will offer a simplified view of reality, but even a limited view can be quantified probabilistically.  In other words, even if the statistical assessment is based on a myopic model that underestimates the true discriminative power of fingerprint features, it can still be safely used in court.

It doesn't even make sense to conduct statistical studies on a pattern which is biologically unique, such as friction ridge skin!  Why bring statistical concepts into the debate when in reality, the chance of finding another identical object is by definition zero!?!

Statements such as "all fingerprints are unique" or "Nature never duplicates itself" have resisted scientific challenge for long enough that we now accept those tenets as fact, especially when we refer to friction ridge skin.  There is no doubt that even if we were to explore non-friction ridge skin (provided we examine it using appropriate knowledge and methods), we would observe features which display collective uniqueness.  But what happens when this unique formation is used as a "stamp" to leave marks associated with criminal activities?  I think that the relevant issue lies more with the marks than with the skin itself.

But we CAN and we DO individualize based on latent print examinations!?!

Yes, competent examiners do this on a daily basis.  But the process is probabilistic in nature.  At some point, the examiner has observed sufficient information in agreement, without unaccountable dissimilarities, to conclude that no source other than the known impression being compared could possibly have made the latent impression.  This is the expression of a belief that the probability of observing a different area of friction ridge skin which would leave the same set of features is zero.  David Stoney uses the term "leap of faith" to describe this opinion, and setting aside the sometimes perceived derogatory nature of this phrase, I believe it actually describes fairly accurately how this opinion is formed.  Even if the end result is expressed as a certainty, that doesn't preclude the fact that the process itself is probabilistic.  For that reason probabilistic studies are appropriate.

I just can't help but think that there are two categories of fingerprint people here: the believers and the scientists!?!

It is extremely dangerous to portray this as reality.  I think that a fingerprint expert forming a conclusion of identity is applying an essentially scientific process.  The belief in the individuality of friction ridge skin has been formed through repeated experiments, each of them a challenge to the hypothesis of uniqueness.  There comes a point when there is no perceived need to challenge the principle anymore.  The hypothesis is then declared demonstrated and perceived as a certainty.  The process itself is highly scientific, and should not be viewed as an expression of faith such as a personal belief in the existence of God.  On the other hand, the pure scientist would like to distil the process in order to structure it, expand its application, automate it, and so on.

Hey... wasn't the "statistical" problem of fingerprints solved a long time ago when Francis Galton showed that even with a simplistic model, the match probabilities are ridiculously small and we do not need to bother?...  It seems like we are splitting hairs!?!

To a limited extent, it is true that from the early days fingerprints have been subjected to statistical analysis.  However, to my knowledge, most early studies focused on assessing the robustness of fingerprints as a means of personal identification.  THEN researchers developed models (most of the time built on questionable assumptions) to address the discriminative value of a complete rolled impression.  The number of studies devoted to partial marks, taking into account realistic features and effects such as pressure distortion and clarity, is very limited.  There is huge room for improvement here.  Even the latest 50K study by the FBI has left many with a lot of questions (distortion being one of them), and unfortunately it has not yet been published in any peer-reviewed journal.

Let's assume that the field of fingerprints were to move ahead with statistics.  What would statistics provide latent print examiners on a daily basis?

The benefits are on different levels:
1) a new body of scientific research that will empirically support the extreme variability of the features used by latent print examiners during the examination process.  This will provide another string to the bow used to justify opinions of identity in court.
2) a way to assess the statistical value of marks declared insufficient for identification.  A model should allow probabilities to be assigned to partial marks, e.g. assessing the chance of finding another finger showing a limited number of matching features.

I am quite happy to see a statistical argument helping a multidisciplinary team of experts during a Daubert hearing, but... I am a latent print examiner, not a statistician!  I do not envisage changing the profession and ever reporting numbers in court!!!

This is a crucial point, and certainly statistics should not be the major background of latent print examiners.  I subscribe entirely to Pat Wertheim's ability equation.  Researchers in this field should be in a position to provide practitioners with tools and user interfaces that can be understood and used by non-statisticians.  The methods should be published and peer-reviewed.  Once they have gained acceptance, users could then apply them using a dedicated calculator to compute a statistical figure.  In fact, most DNA reporting officers today, although aware of the statistical methodology, do not have any strong statistical background; they remain biologists.  But they do have computer programs at their disposal which provide match probabilities associated with DNA profiles, and they have gained experience presenting such statistical arguments in court.  At the end of the day, reporting a statistical figure (a probability between 0 and 1) should not be drastically different from reporting a certainty (a probability of 1).
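As a sketch of what such a "dedicated calculator" interface might look like, the following hypothetical Python fragment turns a match probability into a likelihood ratio and a verbal statement for a report.  The scale boundaries and wording are invented for illustration and are not taken from any published standard:

```python
# Hypothetical "dedicated calculator" interface: convert a computed match
# probability into a likelihood ratio (LR) and a verbal statement.
# The verbal scale boundaries below are ILLUSTRATIVE, not a standard.

def likelihood_ratio(match_probability):
    """LR = P(observed features | same source) / P(features | different source).
    The numerator is taken as 1 here, a simplifying assumption."""
    return 1.0 / match_probability

def verbal_scale(lr):
    """Map an LR onto an illustrative verbal equivalence scale."""
    if lr >= 1_000_000:
        return "extremely strong support for a common source"
    if lr >= 10_000:
        return "very strong support for a common source"
    if lr >= 100:
        return "moderately strong support for a common source"
    return "limited support for a common source"

lr = likelihood_ratio(1 / 50_000)  # an illustrative match probability
print(f"LR = {lr:,.0f}: {verbal_scale(lr)}")
```

The point of such a tool is exactly the one made above for DNA: the examiner need not derive the number, only understand what it means and present it competently.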

To conclude this short dialog with myself, it seems to me that, although these two sides of my brain may look antagonistic at first sight, they are linked by so many bridges (curiosity, scientific approach, willingness to understand) that they manage to work together in a synergistic way.

I look forward to hearing YOUR ideas on the CLPEX.com discussion board!



Next week we will look at an article by Alan McRoberts on SWGFAST entitled "Fingerprints On The Ropes?"


UPDATES on CLPEX.com this week...

Added Dwane Hilderbrand as a Consultant.  Welcome, Dwane!

Updated the Bookstore; added a few new books (additional copies of books already for sale) and took some sold books off.

Updated the "Detail" page and the "Detail Archives" to include the holiday season.


Feel free to pass the link to The Detail along to other examiners. This is a free service FOR latent print examiners, BY latent print examiners. There are no copyrights on The Detail, and the website is open for all to visit.

If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out!

Until next Monday morning, don't work too hard or too little.
Have a GREAT week!

