Fingerprint evaluations reliable 99.8% of the time

Postby RedFive » Tue Apr 26, 2011 9:48 am

http://content.usatoday.com/communities/sciencefair/post/2011/04/fingerprint-evaluations-reliable-998-of-the-time/1

Fingerprint evidence on TV crime shows is either open-and-shut or hugely unreliable. But a new large-scale study by the Federal Bureau of Investigation of the accuracy and reliability of fingerprint examiners' decisions finds that examiners are correct 99.8% of the time.

The study was done at the request of the National Research Council of the National Academies and the legal and forensic sciences communities. It had 169 latent print examiners attempt to match latent (crime-scene) prints to exemplar or on-file prints from known subjects.

The false-positive rate was 0.1%. False negatives were much more common, at 7.5%. But when the same prints were independently examined by different examiners, all false-positive errors and most false-negative errors were detected.

The study is part of a larger ongoing research effort into the accuracy and reliability of fingerprints as a biometric identification method.
Red 5 standing by.....
RedFive
 
Posts: 25
Joined: Tue Nov 27, 2007 12:35 pm
Location: Arvada, CO

Re: Fingerprint evaluations reliable 99.8% of the time

Postby L.J.Steele » Tue Apr 26, 2011 1:12 pm

The problem, of course, is that a large number of print comparisons are done in the US every year -- even 0.1% can represent a large number of individuals.

The article is somewhat vague on methodology -- I'll assume it is studying matches in the abstract and controlling for variations in training, workload, confirmation bias, etc., which may make the real-world number quite different.
L.J.Steele
 
Posts: 386
Joined: Mon Aug 22, 2005 8:26 am
Location: Massachusetts

Re: Fingerprint evaluations reliable 99.8% of the time

Postby Pat » Wed Apr 27, 2011 5:51 am

I took the "Black Box Study" and have several observations.

First, the assortment of latent prints to compare seemed weighted around the borderline of value/no value. Casework represents a different balance of easy/minimal/worthless latents.

Second, the background noise was not representative of background noise I have seen in casework. That means the eye is not used to it and has more difficulty seeing through the confusion.

Third, I knew it was a test. I knew that nobody's fate hung in the balance. Try as I might to take it seriously, no test has the gravity of real casework.

Fourth, the study showed one erroneous ident per 1,000 correct identifications. I would submit that is significantly higher than the rate of erroneous identifications in real casework. But even so, one erroneous identification per one thousand correct decisions is probably far, far lower than just about any other testimony heard by the court. And the fact that no erroneous identification was repeated indicates that blind verification would give a very high probability of preventing the error from being reported. I would submit that even normal verification would be virtually as effective as blind verification.
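A back-of-the-envelope sketch of Pat's verification point (assuming, purely for illustration, that a blind verifier errs independently of the first examiner -- a strong assumption that real casework may not satisfy):

```python
# If one erroneous ident occurs per 1,000 identification decisions, and a
# blind verifier errs independently of the first examiner (an illustrative
# assumption only), the chance an error survives both is roughly the square.
p_error = 1 / 1000
p_survives_blind_verification = p_error ** 2

print(f"single examiner: {p_error:.3%}")
print(f"error surviving blind verification: {p_survives_blind_verification:.6%}")
```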

Fifth, the error rate of erroneous exclusion points up a problem in our business. I think traditionally we have not reported exclusions, except in cases when the exclusion would be strongly probative. Post-NAS Report, we are starting to report all conclusions. The problem is that we are charging into new territory on exclusions without the extensive training we get in how to make correct identifications. There is need for serious review of what it takes to correctly exclude, and then there needs to be extensive training to bring us all up to speed.

Sixth, an erroneous exclusion is, indeed, a serious error. It may allow a rapist to rape again. But at a trial of a suspect, it does not contribute tragically to the deliberations in the manner that an erroneous identification would. Therefore, it is somewhat of a red herring for a defense attorney to attack an identification with the challenge of error rate based on erroneous exclusions.

And finally, this is one study in a growing body of literature. It is a valuable study, to be sure. I know the team that put it together went to extraordinary lengths to ensure its validity. But it is not a definitive study. Like all studies, it has its strengths and its weaknesses. It adds to our knowledge and no examiner who understands it has anything to fear from cross examination over it.
The views presented in this post are those of the author only. They do not necessarily represent the views of DoD or any of its components.
Pat
 
Posts: 226
Joined: Fri Nov 19, 2010 7:39 am

Re: Fingerprint evaluations reliable 99.8% of the time

Postby g. » Thu Apr 28, 2011 1:26 am

Pat,

FYI, I agree: as you say, it adds to the growing body of studies that report error rates. Incidentally, the link became available on Monday, and on Tuesday, in Minnesota, in yet another Frye/Mack hearing here, we introduced the study in court. So to the authors of that paper: our "thanks," and good timing. It came out on Monday and we entered it into evidence on Tuesday. I am hopeful that the judge will look at that study and the other 3 studies with error rates that we offered and see that they all report relatively low error rates and that ACE-V is highly accurate (especially with respect to the small number of false positives).

g.
g.
 
Posts: 211
Joined: Wed Jul 06, 2005 3:27 pm
Location: St. Paul, MN

Re: Fingerprint evaluations reliable 99.8% of the time

Postby josher89 » Thu Apr 28, 2011 1:21 pm

Just to save a bit of time searching, here is a link to the article: http://www.pnas.org/content/early/2011/04/18/1018707108.full.pdf+html
"It is possible to reconstruct a fingerprint with positive accuracy from a verbal description received by telephone or telegraph."
--E.K. Thode, chief of the National Division of Identification and Information, circa 1930
josher89
 
Posts: 197
Joined: Tue Aug 22, 2006 12:32 am
Location: NE USA

Re: Fingerprint evaluations reliable 99.8% of the time

Postby Neville » Thu Apr 28, 2011 2:40 pm

I would rather see 1000 false negatives go over someone's desk than 1 false positive. It only goes to show that that person is ultra cautious.

Is that not the prime premise of our justice system?

I think any defence solicitor would agree, so they for one will be happy with the result.

It just goes to show that having identifications checked is very worthwhile.
Neville
 
Posts: 304
Joined: Mon Jan 23, 2006 1:44 pm
Location: NEW ZEALAND

Re: Fingerprint evaluations reliable 99.8% of the time

Postby L.J.Steele » Mon May 02, 2011 10:02 am

Neville wrote:I would rather see 1000 false negatives go over someone's desk than 1 false positive. It only goes to show that that person is ultra cautious.

Is that not the prime premise of our justice system?

I think any defence solicitor would agree, so they for one will be happy with the result.


We do live in the community too. If you get the right guy, by good, solid evidence, then I'll do my job in defending him zealously within the bounds of the law. And if he's convicted, then so be it. If you get the wrong guy, and despite my zealous efforts, he gets convicted, that's a problem for all of us -- the wrong guy, the judicial system, and the future victims of the right guy, who is still on the streets.

I came across this quote recently: "People deserve to be proven guilty beyond a reasonable doubt by a system that is deeply committed to getting the facts right." We aren't there by a long shot.
L.J.Steele
 
Posts: 386
Joined: Mon Aug 22, 2005 8:26 am
Location: Massachusetts

Re: Fingerprint evaluations reliable 99.8% of the time

Postby Neville » Mon May 02, 2011 2:56 pm

The quote "People deserve to be proven guilty beyond a reasonable doubt by a system that is deeply committed to getting the facts right." does miss an important point, though: as far as the courts are concerned, it is not the facts of the case that are paramount but the quality of the evidence.
This was clearly shown in one of NZ's most (in)famous cases, where a Royal Commission refused to hear any evidence that added weight to the guilt of the convicted suspect.
There are numerous cases that go before the courts where not all the evidence is put to the jury because the facts of the case do not meet the criteria of the courts, for whatever reason. In NZ there are fraud cases where the courts have only accepted evidence on 20 offences at a time and no more. Guess how many cheques are in a cheque book in New Zealand: yes, 20. So if the offender is using cheques from half a dozen cheque books, what happens to the facts regarding his/her other offences? Yep, right again: they are never produced to the court.
Neville
 
Posts: 304
Joined: Mon Jan 23, 2006 1:44 pm
Location: NEW ZEALAND

Re: Fingerprint evaluations reliable 99.8% of the time

Postby mart » Wed May 04, 2011 5:40 pm

I have not seen the results of these tests so please excuse me if I am talking about something already explained.

In the fingerprint evaluations, have clerical errors been taken into account? It is not unusual for someone to write down the wrong finger or even the wrong suspect after making a perfectly valid identification. This will of course be picked up when the identification is verified.
mart
 
Posts: 3
Joined: Tue Aug 10, 2010 6:01 pm
Location: Wellington, New Zealand

Re: Fingerprint evaluations reliable 99.8% of the time

Postby Boyd Baumgartner » Thu May 05, 2011 12:09 pm

I can't help but read this paper and this thread and be reminded of the movie Anchorman and the quote "They've done studies, you know... 60% of the time, it works every time."

http://www.youtube.com/watch?v=zLq2-uZd5LY (for those who've never seen it)

Why is everyone glossing over this little gem of a statement from the paper?

Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%


So accuracy is only concerned with getting individualizations right? We only count the 'error rate' of Type I errors? Type II errors don't count... and what about when people didn't agree on whether something was of value, as the paper indicated:

Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion


If this study was created using known sources of latents, then there's an error rate on these as well. I guess 'science' has changed and I haven't gotten the memo that only type I errors matter these days. (note to self: update rolodex)


I also appreciated this statement as particularly wonky:

The ACE portion of the process results in one of four decisions: the analysis decision of no value (unsuitable for comparison); or the comparison/evaluation decisions of individualization (from the same source), exclusion (from different sources), or inconclusive.


Analysis actually results in one of two decisions, no value or value. It is a go/no go decision on whether or not the Comparison portion will take place. If a comparison is performed, then you can get one of those three Evaluations. However, it's also not unheard of for someone to perform a Comparison and realize that the latent really isn't of value in the first place. After all, isn't ACE-V recursive and iteratively applied? So, for those keeping track it's really 5 decisions, but in the instance it's no value it's really 1, unless you decide that it is, but then it's not (carry the 1) and so we end up with 3.14 decisions that are actually possible.

Let's also not forget this:
The Scientific Working Group on Friction Ridge Analysis, Study and Technology guidelines for operational procedures (21) require verification for individualization decisions, but verification is optional for exclusion or inconclusive decisions. Verification may be blind to the initial examiner’s decision, in which case all types of decisions would need to be verified.


Whoever wrote this statement should go write tax code. I can be pretty dense sometimes, so apologies in advance if this is the case, but to me this reads: verification can be blind, but in the instance it is, the decision needs to be verified..... uh..... ok.... then what is verification?

Let's also consider that ACE-V is supposedly just the Scientific Method, but according to SWGFAST the scientific method only needs to be applied part of the time, but not the part of the time that according to this study results in the most errors. After all, science doesn't really care about getting it right, it's just about having some made up percentage that you can reference in testimony.

So, Glenn, when you say you introduced this as evidence I see it as more of a Charlie Sheen type of 'winning' as opposed to the Webster's Dictionary version.....
Boyd Baumgartner
 
Posts: 247
Joined: Sat Aug 06, 2005 1:03 pm

Re: Fingerprint evaluations reliable 99.8% of the time

Postby Steve Everist » Thu May 05, 2011 12:56 pm

Boyd Baumgartner wrote:So, for those keeping track it's really 5 decisions, but in the instance it's no value it's really 1, unless you decide that it is, but then it's not (carry the 1) and so we end up with 3.14 decisions that are actually possible.


I knew pi was somehow involved - it always is!
Steve E.
Steve Everist
Site Admin
 
Posts: 441
Joined: Sun Jul 03, 2005 6:27 pm
Location: Bellevue, WA

Re: Fingerprint evaluations reliable 99.8% of the time

Postby jluthy » Fri May 06, 2011 10:22 am

The shortcomings of the study notwithstanding, I agree with Glenn that it provides useful information for those of us in the field and the gatekeepers who validate our opinions.

The conclusions reached by the study validate my concerns for the solo examiners out there in mid-sized municipal and county agencies. We have a few of these within our state.

Our local paper just hailed the story of one local detective who somehow went to the body farm for three months and came back a fingerprint expert. Now she will be conducting fingerprint comparisons with no review, although we may see her idents submitted to the state lab for verification.

It seems that, at best, false negatives will occur that allow more criminals in those jurisdictions to walk free. More troubling is the increased likelihood that our credibility in state courts will be dealt a blow by a critical error in fingerprint analysis from one of these solo operations rather than from an accredited laboratory with 100% peer review.

I'm beginning to wonder if this should be governed in some way. A change in the SWGFAST policy may be the place to start. Perhaps a state or federal law or ruling requiring some level of a peer review environment for an individual to testify to their conclusions?
jluthy
 
Posts: 1
Joined: Tue Apr 14, 2009 2:34 pm

Re: Fingerprint evaluations reliable 99.8% of the time

Postby Pat » Fri May 06, 2011 1:37 pm

jluthy wrote:I'm beginning to wonder if this should be governed in some way. A change in the SWGFAST policy may be the place to start. Perhaps a state or federal law or ruling requiring some level of a peer review environment for an individual to testify to their conclusions?

I understand that some States have enacted rules for expert fingerprint witness testimony. For example, I believe Texas rules of evidence now require a fingerprint expert to either be IAI certified (CLPE) or work in an accredited lab (ASCLD or FQS) in order to be allowed to testify in State court. Maybe somebody more familiar with the current situation in Texas could correct me if I'm wrong. I think Oklahoma also has some kind of similar restriction. Anybody from OK? Any other state rules?

SWGFAST can publish a "guideline" or even a "standard," but there is no mechanism to enforce anything SWGFAST publishes. A defense attorney could grill a witness on cross examination as to why they are out of compliance with SWGFAST standards, but SWGFAST itself can do nothing in the way of enforcement.
The views presented in this post are those of the author only. They do not necessarily represent the views of DoD or any of its components.
Pat
 
Posts: 226
Joined: Fri Nov 19, 2010 7:39 am

Re: Fingerprint evaluations reliable 99.8% of the time

Postby George Reis » Sat May 07, 2011 9:12 am

Boyd Baumgartner wrote:Analysis actually results in one of two decisions, no value or value. It is a go/no go decision on whether or not the Comparison portion will take place. If a comparison is performed, then you can get one of those three Evaluations. However, it's also not unheard of for someone to perform a Comparison and realize that the latent really isn't of value in the first place. After all, isn't ACE-V recursive and iteratively applied? So, for those keeping track it's really 5 decisions, but in the instance it's no value it's really 1, unless you decide that it is, but then it's not (carry the 1) and so we end up with 3.14 decisions that are actually possible.


I always enjoy your humor Boyd! And, you are right that it isn't four decisions, but it isn't five either - it's two. During Analysis there is one decision with two possible conclusions (value/no value), and during Comparison there is one decision with three possible conclusions (individualization, exclusion, or inconclusive).
I can resist anything except temptation - Oscar Wilde
George Reis
 
Posts: 134
Joined: Wed Jul 27, 2005 3:00 pm
Location: Orange County, CA - USA

Re: Fingerprint evaluations reliable 99.8% of the time

Postby Rah » Sat May 07, 2011 1:36 pm

The USA Today title was unfortunate: nowhere in the paper does it say "Fingerprint evaluations [were] reliable 99.8% of the time". That number is the positive predictive value: 99.8% of the individualization decisions on the test were correct, which was one of a variety of results in the paper. It sounds like a journalist at USA Today made the leap from that to "reliability".
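To illustrate the distinction Rah is drawing, here is a small sketch with made-up counts (they are NOT the actual Black Box study figures) showing that the positive predictive value and the false-positive rate answer different questions:

```python
# Hypothetical counts chosen only to illustrate the distinction --
# these are not the actual study's data.
true_positives = 3628   # correct individualization decisions
false_positives = 6     # erroneous individualization decisions
true_negatives = 7000   # correct exclusions on non-mated comparisons

# Positive predictive value: of the individualizations reported,
# what fraction were correct? (The "99.8%"-style number.)
ppv = true_positives / (true_positives + false_positives)

# False-positive rate: of the non-mated comparisons, what fraction
# were wrongly called individualizations? (The "0.1%"-style number.)
fpr = false_positives / (false_positives + true_negatives)

print(f"PPV = {ppv:.1%}, false-positive rate = {fpr:.2%}")
```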

On ACE returning 4 decisions - ACE *does* only return 4 terminal decisions. If Analysis returns No Value, we're done; if it returns a Value decision, we aren't done, and we proceed to one of the 3 Comparison/Evaluation decisions. A value decision isn't an end state. There can be a variety of paths through ACE, but there are only 4 distinct results that come out at the end.
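Rah's four-terminal-decision reading can be sketched as a toy flow (the `analyze` and `compare_evaluate` callables below are hypothetical stand-ins for the examiner's judgments, not any real API):

```python
from enum import Enum

class Decision(Enum):
    NO_VALUE = "no value"                    # terminal: Analysis gate not passed
    INDIVIDUALIZATION = "individualization"
    EXCLUSION = "exclusion"
    INCONCLUSIVE = "inconclusive"

def ace(latent, exemplar, analyze, compare_evaluate):
    """Toy ACE flow: 'value' is a gate, not an end state."""
    if not analyze(latent):                  # Analysis: value / no value
        return Decision.NO_VALUE             # terminal decision 1 of 4
    return compare_evaluate(latent, exemplar)  # one of the 3 C/E terminals

# Example path: a latent judged of value that is then excluded.
result = ace("latent", "exemplar",
             analyze=lambda l: True,
             compare_evaluate=lambda l, e: Decision.EXCLUSION)
```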
Rah
 
Posts: 2
Joined: Sat May 07, 2011 1:14 pm
