I can't help but read this paper and this thread and be reminded of the movie Anchorman and the quote "They've done studies, you know... 60% of the time, it works every time." http://www.youtube.com/watch?v=zLq2-uZd5LY
(for those who've never seen it)
Why is everyone glossing over this little gem of a statement from the paper?
Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%
So accuracy is only concerned with getting individualizations right? We only count the 'error rate' of Type I errors? Type II errors don't count... and what about when examiners didn't agree on whether something was of value, as the paper indicated:
Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion
If this study was conducted using known sources of latents, then there's an error rate on these as well. I guess 'science' has changed and I haven't gotten the memo that only Type I errors matter these days. (note to self: update rolodex)
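To make the Type I / Type II distinction concrete, here's a minimal sketch using made-up counts (NOT the study's actual data) showing how both error rates fall out of the same set of comparisons, and why reporting only one of them tells half the story:

```python
# Hypothetical counts for illustration only -- NOT the study's data.
# "Mated" pairs come from the same source; "non-mated" pairs do not.
mated_pairs = 1000        # same-source comparisons presented to examiners
non_mated_pairs = 1000    # different-source comparisons presented

false_positives = 1       # non-mated pairs called individualizations (Type I)
false_negatives = 75      # mated pairs called exclusions (Type II)

fpr = false_positives / non_mated_pairs   # Type I error rate
fnr = false_negatives / mated_pairs       # Type II error rate

print(f"False positive rate: {fpr:.2%}")  # 0.10%
print(f"False negative rate: {fnr:.2%}")  # 7.50%
```

With these invented numbers the Type II rate is 75 times the Type I rate, which is exactly why quoting only the false positive rate as "the error rate" is misleading.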
I also appreciated this statement as particularly wonky:
The ACE portion of the process results in one of four decisions: the analysis decision of no value (unsuitable for comparison); or the comparison/evaluation decisions of individualization (from the same source), exclusion (from different sources), or inconclusive.
Analysis actually results in one of two decisions, no value or value. It is a go/no go decision on whether or not the Comparison portion will take place. If a comparison is performed, then you can get one of those three Evaluations. However, it's also not unheard of for someone to perform a Comparison and realize that the latent really isn't of value in the first place. After all, isn't ACE-V recursive and iteratively applied? So, for those keeping track it's really 5 decisions, but in the instance it's no value it's really 1, unless you decide that it is, but then it's not (carry the 1) and so we end up with 3.14 decisions that are actually possible.
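Stripped of the sarcasm, the two-stage structure described above (my reading of the process, not SWGFAST's wording) can be sketched as a gated decision:

```python
def ace_decision(suitable, evaluation=None):
    """Two-stage decision as described above: Analysis is a go/no-go gate,
    and only a "value" latent proceeds to Comparison/Evaluation."""
    if not suitable:
        return "no value"   # Analysis decision; Comparison never happens
    # Comparison/Evaluation stage: one of three possible conclusions
    if evaluation not in ("individualization", "exclusion", "inconclusive"):
        raise ValueError("unexpected evaluation: %r" % evaluation)
    return evaluation

print(ace_decision(False))                      # no value
print(ace_decision(True, "individualization"))  # individualization
```

Note this sketch ignores the recursion the paragraph points out: in practice an examiner can start a Comparison and only then decide the latent was "no value" all along, which is where the clean four-decision picture breaks down.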
Let's also not forget this:
The Scientific Working Group on Friction Ridge Analysis, Study and Technology guidelines for operational procedures (21) require verification for individualization decisions, but verification is optional for exclusion or inconclusive decisions. Verification may be blind to the initial examiner's decision, in which case all types of decisions would need to be verified.
Whoever wrote this statement should go write tax code. I can be pretty dense sometimes, so advance apologies if this is the case, but to me this reads: verification can be blind, but when it is, every decision needs to be verified... uh... ok... then what is verification?
Let's also consider that ACE-V is supposedly just the Scientific Method, but according to SWGFAST the scientific method only needs to be applied part of the time, and not to the part that, according to this study, produces the most errors. After all, science doesn't really care about getting it right; it's just about having some made-up percentage that you can reference in testimony.
So, Glenn, when you say you introduced this as evidence I see it as more of a Charlie Sheen type of 'winning' as opposed to the Webster's Dictionary version.....