
G o o d   M o r n i n g !
via THE WEEKLY DETAIL
 
Monday, May 4, 2009

 
The purpose of the Detail is to help keep you informed of the current state of affairs in the latent print community, to provide an avenue to circulate original fingerprint-related articles, and to announce important events as they happen in our field.
_________________________________________
__________________________________________
Breaking NEWz you can UzE...

by Stephanie Potter

NoHo Sexual Assault Suspect in Custody
KNX1070 - CA,USA 04-27-09
The Los Angeles Police Department's Scientific Investigation Division processed the crime scenes and extracted latent finger prints. On April 15, 2009, ...

Shell casings, cigarette butt found near slain 65-year-old's body
Gaston Gazette - Gastonia,NC,USA 04-29-09
Costner said police used various methods to find latent fingerprints in the home. Powder was used to find prints on a filing cabinet, but only partial or ...

Sarasota woman shot to death at home
Sarasota Herald-Tribune - Sarasota,FL,USA 04-29-09
A Sarasota County Sheriff's Office crime scene technician lifts fingerprints from an SUV at the scene of a shooting Wednesday morning in the 2200 block of ...

Student chases burglar out of house
Ball State Daily News - Muncie,IN,USA 04-30-09
"His cigarettes were on the front porch and his finger prints were on the house." Engle said police arrested Miller on preliminary charges of burglary, ...

Fingerprints Nail Dealer
BainbridgeGa.com - Bainbridge,GA,USA 05-01-09
He was able to lift fingerprints from the baggies containing crack cocaine. The fingerprints were identified as belonging to Cedric Bernard Thomas. ...

__________________________________________
Recent CLPEX Posting Activity
Last Week's Board topics containing new posts
Moderated by Steve Everist and Charlie Parker

Public CLPEX Message Board
Moderated by Steve Everist

 UPDATES ON CLPEX.com

Updated the Detail Archives
_________________________________________

Last week

Gerald Clough brought us part 1 of a 2-part series defining what validation in fingerprint identification is (and what it is NOT).
 

This week

Gerald Clough brings us part 2.

_________________________________________

Validation in Fingerprint Identification Part II - Striping the Tiger

 

In Part I, I explained my view of what validation is and is not. It is not reliability testing. It does not test competency. It does not test how accurately human examiners can interpret poorly impressed characteristics or reconcile distortions. It presumes a valid theory and perfect interpretation of ideal data. It is, then, best understood as a study of thresholds of conclusion, our answers to the question, “When presented with these impressions, how do you know?” I presented the data that must be provided in a validation study, primarily the data obtained from impressions of human friction ridge skin.

 

Here, I conclude with some thoughts on the problems and promises of actual validation studies and close with some last comments on the near future. Much of the following is speculative and intended to explore the issues in validation and to consider some of the difficult tasks in crafting a validation study. I previously said we can either undertake to ride this tiger, or we can deal with the business end of the beast. Our ticket to ride the tiger will largely be purchased by our ability to understand just how to credibly model tiger stripes, how to stripe one, as we develop precise fingerprint data.

 

First, we must make a conceptual break from our usual ways of thinking about examinations. Validation addresses the straightforward issue of whether the characteristics of fingerprints can be observed to reach valid conclusions, not how they are observed. For the moment, we are not concerned with, for instance, what we observe about the spatial relationship between two characteristic points that causes us to interpret them as satisfying a defined relationship. That is another issue, one we might have to deal with when challenged, but it is not a validation issue.

 

Our theory relates to the uniqueness of pore unit ridge structures. To address the validity of drawing certain conclusions by observing these structures, the validation study must naturally be limited to a consideration of just these structures and no other characteristic of human friction ridge skin. If our thresholds involve the existence of spatial relationships among these structures, the applicable validation study must evaluate thresholds expressed in terms of those relationships. If we wish to explore validation of conclusions that are drawn from observations of other aspects of pore structures, such as the characteristics of individual pores, the validation study for that mode must admit such data.

 

There are a number of potential approaches to generating data for a validation study. I surely cannot identify them all. But, it is important to remember that we can work from the equivalent of idealized impressions. In other words, for validation, we can assume an ability to observe and interpret perfectly. We are really considering the friction ridge skin itself by considering its common representation. The only limitation to how data is expressed is how we wish to characterize the data in examinations to reach conclusions.

 

We are not concerned with how one or another examiner may interpret a recorded feature or how accurately they may characterize it. We are not concerned with how an examiner may determine that one or another sort of distortion requires the relationships among features to be adjusted to an idealized form. We are strictly concerned with what we do with the data once it is obtained.

 

How closely must validation model actual examination? We must remember that few validation studies will study everything that may be done within a discipline. But the examinations that are validated are those that consider the same types of data as the validation studies. A validation study that determines that one may identify an object as an orange by smell may not be applied to the practice of recognizing an orange by touch.

 

The issue of whether or not a validation study will achieve general scientific acceptance is answered in peer review and replication. I have said that while reliability is not a prerequisite of validation, validation is required for scientific reliability. Validation does not validate methods or work flow. It validates the use of particular data to apply a theory to reach an accurate conclusion.

 

If we wish to conclude a unique source for an impression, we must define “unique.” By unique, we have always meant unique for the purpose of identifying an individual from among the entire population of humanity with a very high degree of confidence. It can have no other meaning. Because we are referring to a finite set of variations, large as it may be, we do not reject out of hand the infinitesimal probability that there could exist two individuals between whom we could not discriminate. Absolute uniqueness is illusory. Happily, it is unnecessary. We believe it is possible that a room full of typewriting monkeys might eventually duplicate an existing brilliant novel. But we aren’t going to spend much time worrying about what to do if it happens.
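
As a purely arithmetical illustration of that last point (every number below is invented for the example and drawn from no study), the expected number of coincidentally indistinguishable pairs in a population depends on the per-pair coincidence probability, and it can remain negligible even across billions of people:

# All figures are hypothetical; only the arithmetic is the point.
from math import exp

n = 7_000_000_000            # rough world population
pairs = n * (n - 1) / 2      # about 2.4e19 possible pairs of people

for p in (1e-15, 1e-20, 1e-25):      # hypothetical per-pair coincidence probabilities
    expected = p * pairs             # expected number of coincidental pairs
    prob_any = 1 - exp(-expected)    # Poisson approximation of "any such pair exists"
    print(f"p = {p:.0e}: expected pairs = {expected:.3g}, chance any exist = {prob_any:.3g}")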

 

Since threshold is such a prominent part of validation, it is well to think about what that means. One issue is whether any threshold qualifies an absolute conclusion. Can conclusions that associate a latent impression with a unique source be validated? If we define the data products of observation, we define a finite set of discrete bits of information. A particular characteristic is present. Another characteristic is present, spatially related to the first in a particular way. Other characteristics are related to both of those in particular ways. All this information creates a map of a certain complexity, a map that can likely be characterized as one of many classes of mappings. Those familiar with topology are familiar with the concept of surfaces being equivalent while appearing quite different visually. (Hence, the mathematical joke that a topologist can’t tell a donut from a coffee cup.) It is not unreasonable to speculate that the complexity of a mapping common to two impressions, or a map’s assignment to a class of mappings, is a measure of how often it can be predicted to occur in the population.
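
Purely as an illustration of what such a map could look like as data (the feature labels, coordinates, and relationship measures below are hypothetical choices, not a standard scheme), the idea is simply a set of characteristics plus the pairwise relationships among them:

# A minimal, hypothetical sketch of the "map": idealized characteristics and
# the spatial relationships among them, expressed as a small structure whose
# pattern of relationships (not its absolute coordinates) is what two
# impressions from the same source would share.
from itertools import combinations
from math import atan2, hypot

# Hypothetical idealized characteristics: (type, x, y) in arbitrary units.
features = [
    ("ridge_ending", 0.0, 0.0),
    ("bifurcation", 3.0, 1.0),
    ("ridge_ending", 1.5, 4.0),
    ("bifurcation", 5.0, 5.0),
]

def relationship(a, b):
    """Characterize the spatial relationship between two features."""
    (_, xa, ya), (_, xb, yb) = a, b
    return {"distance": round(hypot(xb - xa, yb - ya), 2),
            "direction": round(atan2(yb - ya, xb - xa), 2)}

# The "map": every pairwise relationship, keyed by the indices of the features.
feature_map = {(i, j): relationship(features[i], features[j])
               for i, j in combinations(range(len(features)), 2)}

for pair, rel in feature_map.items():
    print(pair, rel)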

 

All examiners apply thresholds. They cannot, at this stage in the evolution of the discipline, define them outside the context of individual examinations of specific evidence. They could, perhaps, be challenged by removing one after another of the observed features in a pair of impressions, and could likely commit, in that specific case, that the removal of one additional feature would cause the accumulated data to become insufficient for identification. The data would then fall below the required threshold. We cannot validate that threshold by testing, but we do apply thresholds.

 

I don’t pretend that the thresholds revealed through scientific validation will admit all conclusions presently being rendered from experience, but the point is that we can use both idealized impressions and a straightforward way of plotting features and relationships to establish that theory, data, and threshold all combine with conclusions in a valid way. And it points up that we do not have to closely model any particular examination procedure, except that we have to be able to demonstrate that we have satisfied a similar threshold.

 

When thinking about validation studies, we must remember that we are not dealing with the same issues as AFIS. AFIS confronts problems of orientation and missing areas and myriad issues of interpretation. We need not address those in our initial attempts at validation. The nastier problems confronting AFIS can also be issues for examiners in individual cases, although they may eventually enter the validation arena and therefore the examination process if they can be expressed as data.

 

Like any other study, validation studies must be subject to review, challenge, and replication. Challenges may be based on various aspects of the study. But, for the most part, a validation study will be acceptable for the types of data used, if the data, relationships, and conclusions are clearly defined. Of course, the data must be available by observation and must be within the range of data acquired by an examiner. Thresholds are stated in terms of relationships, and so the relationships must be those that can reasonably be determined by an examiner, for the examiner cannot otherwise be working with the same threshold.

 

Because we are unambiguously describing physical reality, our data and relationships, both in validation and in practice, must be unambiguous.  By the time we reach threshold in an examination, we must have reached it through making absolute interpretations of what we observe – not guessing, in other words. That does not mean that everything we observe must be perfectly impressed. Training, experience, and inspiration will play no lesser parts than they always have. The examiner decides if the features can be identified as to character. The examiner characterizes the relationships among them and determines if there are congruous features in both impressions and if they are in the same relationships in both impressions. And the examiner applies a threshold.

 

Any validation study produces results that apply only to conclusions made by examiners using the same types of data used in the study. One could not, for instance, apply a validation study with data referencing only LII to a conclusion that depends on LIII observations. For a threshold to be validated, any conclusion based on observations of one sort of data requires validation based on the same sort of data. One could not apply a validation study testing threshold and conclusion in the common mode of identifying two impressions as having a common source to the less common mode of comparing impressions of multiple fingers with records of multiple fingers. That requires a different validation study.

 

It is reasonable to presume that validation studies will utilize considerable computer support. We wish to use a very large data set so that we present the threshold filter with many opportunities to reject the very large number of members that fall below a given threshold. We want a convincing wealth of data to ensure that any threshold revealed is not inappropriately high and to make the results statistically significant. There is no reason such a study could not be ongoing, constantly checking an ever-growing data set. The source population doesn’t have to approach the ideal of all persons on Earth. The theory is presumed valid. Validation addresses practical potential. And, as we have seen with DNA conclusions, very small probabilities of duplication are as effective as scientifically unsustainable absolute identification.
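
To make the "threshold filter" idea concrete (the agreement model and all of its parameters below are invented; a real study would draw its comparison data from actual or modeled impressions), the computational core is little more than presenting a candidate threshold with an enormous number of non-mate comparisons and counting how often it fails to reject them:

# A hedged sketch of a threshold study: how often does a randomly generated
# non-mate comparison reach a candidate threshold of agreeing features?
import random

random.seed(1)

def simulated_agreement_count():
    """Stand-in for the number of features found in agreement between a latent
    configuration and a randomly selected non-mate record (invented model)."""
    return sum(random.random() < 0.05 for _ in range(30))

def false_association_rate(threshold, trials=100_000):
    hits = sum(simulated_agreement_count() >= threshold for _ in range(trials))
    return hits / trials

for threshold in (4, 6, 8, 10, 12):
    print(threshold, false_association_rate(threshold))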

 

So, to conduct a validation study, we require data in the form of actual or modeled friction ridge skin impressions. We also require metadata, characterizations of the friction ridge features of those impressions and their interrelationships. To test against a sufficiently large data set requires automated identification and characterization of features and relationships among features. Otherwise, we would require a very large room full of monkeys (make that examiners) to code the data.

 

The impression data might simply be drawn from clear impressions of actual skin, finger pads, for instance. Very large collections of images exist and could be further combined with others. But it may also be possible to generate virtual impression data modeled so that the set is not statistically different from the real-world population. The validation study would be proper, because we are characterizing features in the same scheme in both instances.

 

In a paper in The Journal of Theoretical Biology, titled simply “Fingerprint Formation”, Michael Kücken and Alan C. Newell at The University of Arizona presented work on mathematical modeling of fingerprints. Such work suggests that computer modeling might provide an arbitrarily large data set for validation. Their paper may be found at:

http://math.arizona.edu/~anewell/publications/Fingerprint_Formation.pdf
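
As a hedged illustration of how pattern-forming models of this general family behave (the sketch below uses a generic Swift-Hohenberg stripe equation, not the Kücken-Newell buckling model itself, and every parameter is chosen only for demonstration), a few lines of code can grow fingerprint-like stripe fields from random starting conditions:

# Generic stripe-forming simulation; NOT the published fingerprint model.
import numpy as np

N, L = 128, 40 * np.pi                 # grid points and physical domain size
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k)
k2 = kx ** 2 + ky ** 2
eps = 0.3                              # control parameter; stripes form for eps > 0
lin = eps - (1.0 - k2) ** 2            # linear Swift-Hohenberg operator in Fourier space

rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal((N, N))  # random initial field
dt = 0.1

for _ in range(2000):
    u_hat = np.fft.fft2(u)
    nl_hat = np.fft.fft2(u ** 3)
    # Semi-implicit step: linear part treated implicitly, cubic term explicitly.
    u_hat = (u_hat - dt * nl_hat) / (1.0 - dt * lin)
    u = np.real(np.fft.ifft2(u_hat))

# Thresholding the field yields a binary, labyrinthine ridge-like image that
# could, in principle, be fed to the same feature-coding scheme as real prints.
ridges = u > 0
print(ridges.shape, float(ridges.mean()))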

 

To prepare a validation data set, the researcher would still have to establish the same unambiguous definitions of characterization and relationship data in order to develop the statistically similar virtual population. We would still need the large collections of actual clear images with which to do statistical validation of our modeled population. Once validated, the modeling could then produce a very large data set indeed.
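
One possible form of that statistical check, again only a sketch with placeholder data (the feature statistic, the distributions, and the sample sizes below are all invented), is a simple two-sample comparison of some coded feature statistic between the real collection and the modeled set:

# Compare a coded feature statistic (here, an invented "feature count per
# impression") between real and modeled populations with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for counts coded from real images and from modeled impressions.
real_counts = rng.poisson(lam=80, size=5000)
modeled_counts = rng.poisson(lam=80, size=5000)

stat, p_value = ks_2samp(real_counts, modeled_counts)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")
# A small KS statistic (and a p-value that does not reject the null hypothesis)
# would be one piece of evidence that the modeled population is not
# statistically distinguishable from the real one on this particular feature.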

 

Clearly, the validation of fingerprint identification is not a project aimed at a single or ultimate study. Once the concept of well-defined threshold becomes a part of latent print examination, it quickly prompts exploration beyond the features and relationships we accumulate as factors in conclusions. We also begin thinking about how we might combine a variety of observations in ways that we had not previously been comfortable depending upon but that may become testable for validity. For example (and I have no idea if this will validate), it might be found that when one or another Level I class can be observed, that fact may combine with a relative few observed Level II features to produce a valid conclusion.

 

Establishing validated thresholds for mixed data types may allow us to develop new methods without depending on slow accumulation of experience to imply validity. Such questions as the significance of discontinuous impressions are amenable to examination through validity testing. We may ask what thresholds are valid for concluding from impressions when the nature of unseen ridges within gaps must be assumed or ignored. To the degree that such gaps can be properly defined and inserted into the idealized data, arguably valid thresholds can be developed with considerable confidence.

It is likely that higher thresholds of observable characteristics and relationships will be required when some continuity in impressions is lost. In other words, perhaps by satisfying a sufficiently high threshold, we need not be subject to speculation on what might lie within a gap, any more than we would need to speculate on what lies outside the bounds of a continuous impression. Validation studies can, in a very real way, serve to reveal the limits of examination. Overly liberal thresholds and conclusions will simply fail validation on account of insufficient power to discriminate.

 

When one begins thinking seriously about validation, one is beset from time to time by questions that grow out of the various ways that friction ridge features may manifest in skin or be impressed. I submit that these questions will resolve as issues of interpretive reliability, rather than validation. Validation is about actual skin structure, not distorted impressions of skin. One may also think about rejecting identification on the basis of a single clear contradictory characteristic. No scientifically valid threshold can include a “but for…” general condition. The principle of a single contradictory characteristic becomes moot, or the existence of the contradiction must invalidate the threshold and send us back for more validation work. That, too, is science.

 

So it would seem that our ability to ride the validation tiger has a lot to do with whether we can define his stripes in concrete terms and perhaps whether we can craft a scheme for striping virtual tigers sufficiently well that they can’t be differentiated from real tiger skins. The development of validation studies sufficiently rigorous to cite as validation for actual examinations will take some time. Where does that leave the examiner and the justice system in the meantime?

All stakeholders in the criminal litigation process must be prepared to respond with considerable versatility to evidence in what is likely for a time to be a more dynamic interaction of science and law. While no one who matters doubts that some area of friction ridge skin of some unspecified size is functionally unique and that impressions can be compared to determine the source, lack of validation haunts the conclusions of identification. Logically, given the current state of knowledge, there are decidedly different ways a court may view admissibility of fingerprint identification conclusions.

 

No Conclusion Allowed - The court may note that no scientific studies validate identification on the fundamental ground that there is no clearly defined specification for the threshold, no specific threshold of data that must accumulate during an examination to render an identification conclusion. And they may then find that the cultural weight of fingerprint evidence is so powerful that non-validated conclusions imply absolute statements of reality that must inevitably take on exaggerated probative weight. In legal terms, the combination of the cultural view of fingerprints and a conclusion not subjected to scientific validation is more prejudicial than probative. They may consider cures.

 

Similarities Allowed - One cure is to limit testimony to the products of examination that can be reliably articulated and offered as versions of reality subject to reasonable judgments of credibility. Such testimony will be limited to similarities. An examiner will be allowed to demonstrate interpretations of direct observations, pointing out and characterizing features in both latent and record impressions. The examiner will also demonstrate relationships among features, those relationships being found in both impressions.

 

The examiner could also speak from cited studies and experience to the frequency of occurrence of various classes of characteristics and the general relative rarity of unusual features or peculiar ridge formations. When LIII detail is available, similarities in number and appearance should be admissible as simple observed reality, along with observations of corresponding structures in both impressions. The jury, then, must make of it what they will. Attorneys may generate error by attempting to offer their own conclusions.

 

Conclusion Allowed, But With Expert General Testimony - A court might also acknowledge the lack of validation, allow testimony on the nature of validation, its value and implications for conclusions, and the differences between validated theory-data-threshold conclusions and clinical opinion, and then allow the examiner to testify to a conclusion that the examiner characterizes as reliably identifying the source according to the examiner’s own undefined threshold. Such testimony would be intended to enable the jury to give appropriate weight to the examiner’s testimony by considering the evidence in light of the state of the science from multiple sources, rather than the cultural legend.

 

Conclusion Allowed Without Expert General Testimony - No doubt many courts will hear pre-trial testimony on the state of the science and nevertheless choose to accept the undisputed theory, the century of generally reliable practice, and the rational argument that there exists some valid threshold and that a qualified examiner is applying an undefined but sufficient threshold of conclusion. Many courts will be reluctant to apply a “pure science” argument to reject out of hand well-represented conclusions that have long been among the most useful fact-finding tools in criminal justice.

 

It is likely that some of these courts will simply refuse to allow the jury to hear testimony on the general nature of fingerprint identification, including validation. We should remember that a great many attorneys and judges are not thinking as hard as we are about these issues; many have never heard any of these arguments and may not be aware of the evolving ideas on expert general testimony.

 

All parties must consider carefully when they oppose or urge such expert general testimony. The evolution of legal response to challenges to eyewitness identification offers some guidance. Historically, courts routinely refused to admit any testimony about eyewitness identification and memory in general, and their rulings were supported by appellate courts on the grounds of appropriate trial court discretion and the lack of applicable research, published and peer-reviewed. Juries were deemed to be able to apply common sense to judge validity and the testimony of witnesses to judge credibility.

 

As a body of scientific research accumulated to show that common sense was an unreliable approach to evaluating eyewitness evidence, some appellate courts began remanding cases, holding that the trial courts erred by excluding expert general testimony, sometimes holding that such testimony was only required when there was little corroborating evidence. (The latter rulings should not be viewed as universally reliable. There may well be an ethical and logical conflict in bolstering eyewitness evidence by citing other associative evidence.)

 

The federal circuits appear to be trending toward requiring the admission of expert testimony to the general nature of non-validated forensic evidence. 

 

If lack of acceptable validation studies should move one or another court or jurisdiction to allow testimony only on similarities between impressions, it is not necessarily a clear benefit to defendants. Demonstrated similarities can be powerfully persuasive. If the only admissible result of examination is similarity, defense attorneys may well find it difficult to effectively limit how much weight jurors should give to the evidence, exactly because they succeeded in excluding the conclusions that would have allowed the issues of validation and probability to arise. They may still be subject to the “fingerprint culture.”

 

Questions addressing what can or cannot be concluded from a specific set of similarities are awkward or impossible when conclusions have been excluded from trial. The examiner may not be testifying to a conclusion, but the jury may well adopt one. The ingrained view of fingerprints as valuable discriminators is hard to overcome. Any testimonial discussion almost inevitably either solicits the conclusion that was excluded or attempts to solicit a false statement from the examiner that he cannot conclude. Dancing around the issue in court may well tend to lead the jury to believe the evidence is damning but that the fact is being deliberately kept from them for technical reasons.

 

Most latent print examiners will have neither the time and resources nor the inclination to undertake validation studies. Few forensic analysis organizations will be able to devote time and personnel to the work. Most of the effort will likely be in academia. In Part I, I said validation studies do not require particular expertise in the discipline being validated. That is not entirely true. The actual execution of validation, the testing of data and threshold producing conclusion, requires only the ability to set up testing and apply statistical tools to the results. But without close work with the discipline, what is validated may well not apply to examinations.

 

One role of adepts who are not directly engaged in conducting a study is to help develop the precise characterizations of idealized fingerprint features and relationships that can be used in the validation study and in examinations. The discipline must also think carefully about its own nature and processes so that it may help others understand the necessary breakdown of all the issues.

 

Scientific critique of fingerprint identification often fails to make sufficient distinctions among the issues. The results of human error are often cited as evidence of the effect of a lack of scientific validation. As I hope I have shown here, that is frankly false. And we must always continue to work on those aspects of the discipline where our expertise is essential. As we have seen, aside from simple inattention, our errors have been due to erroneous generation of data which, had it accurately reflected the reality represented by the latent impression, would have supported a correct and valid identification. I suspect nearly all of those errors would have easily satisfied any threshold we could reasonably expect to be scientifically validated.

 

Saddle up!

 

The author may be contacted directly at clough@hwtx.com .

 


_________________________________________

Feel free to pass The Detail along to other examiners for Fair Use.  This is a not-for-profit newsletter FOR latent print examiners, BY latent print examiners. The website is open for all to visit! 


If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out!  (To join this free e-mail newsletter, enter your name and e-mail address on the following page: http://www.clpex.com/Subscribe.htm  You will be sent a Confirmation e-mail... just click on the link in that e-mail, or paste it into an Internet Explorer address bar, and you are signed up!)  If you have problems receiving the Detail from a work e-mail address, there have been past issues with department e-mail filters considering the Detail as potential unsolicited e-mail.  Try subscribing from a home e-mail address or contact your IT department to allow e-mails from Topica.  Members may unsubscribe at any time.  If you have difficulties with the sign-up process or have been inadvertently removed from the list, e-mail me personally at kaseywertheim@aol.com and I will try to work things out.

Until next Monday morning, don't work too hard or too little.

Have a GREAT week!

