
G o o d   M o r n i n g !
via THE WEEKLY DETAIL
 
Monday, January 14, 2008

 
The purpose of the Detail is to help keep you informed of the current state of affairs in the latent print community, to provide an avenue to circulate original fingerprint-related articles, and to announce important events as they happen in our field.
_________________________________________
Breaking NEWz you can UzE...
compiled by Jon Stimac

Cop Says Prints Not on Weapon CHICAGO TRIBUNE, IL - Jan 10, 2008 - ...as many as 44 prints were found on the entrance door to the condominium...

Killer Asks For New Look At Case HARTFORD COURANT, CT - Jan 9, 2008 - ...state police fingerprint expert used new technology to eventually assist in matching a print...

Crime Lab, Morgue Opens IDAHO PRESS-TRIBUNE, ID - Jan 8, 2008 - ...more than six times the space of its previous home, the Canyon County Forensics Complex will open its doors for business...

Fingerprint Solves 2004 Burglary WCAX-TV, VT - Jan 7, 2008 - ...police lifted a fingerprint at the scene and sent it to the crime lab and last month...

__________________________________________
Recent CLPEX Posting Activity
Last Week's Board topics containing new posts
Moderated by Steve Everist

Announcement: Click link any time for recent, relevant fingerprint NEWS
clpexco 748 Sun Dec 16, 2007 3:36 pm

Body fluids on black bags
Philip Bekker 17 Sun Jan 13, 2008 5:16 pm

Evidence Fabrication in South Africa
Pat A. Wertheim 5342 Sun Jan 13, 2008 3:30 am

Calls for Inquiry to be scrapped
Daktari 5900 Sat Jan 12, 2008 9:01 pm

Zero Error Rate vs. No Error Rate
Michele 1071 Sat Jan 12, 2008 3:57 pm

JFI Commentary
Charles Parker 241 Fri Jan 11, 2008 5:06 pm

2006 Article from Judicature Journal - Improving Reliability
Steve Everist 64 Fri Jan 11, 2008 4:13 pm

Fingerprint Society Website
fpsociety 475 Fri Jan 11, 2008 12:21 pm

Paw prints point to dog in shooting death of Baytown teacher
Gerald Clough 215 Thu Jan 10, 2008 2:45 pm

One Of Ours-
Ann Horsman 438 Thu Jan 10, 2008 2:28 am

They Walk Among Us
Charles Parker 3706 Tue Jan 08, 2008 12:12 am

Need published studies on results of casing processing...
Amy Miller 358 Mon Jan 07, 2008 11:07 am

Zero Error Rate Methodology
Charles Parker 846 Mon Jan 07, 2008 1:29 am

(http://clpex.com/phpBB/viewforum.php?f=2)
 

 
UPDATES ON CLPEX.com


Updated the Fingerprint Interest Group web page with FIG # 28.

Updated the CLPEX.com Complete Consultants Worldwide page... more on what these updates mean in the near future!

Inserted KEPT (Keeping Examiners Prepared for Testimony) #2 - Blind Verification - Explanation, at the end of the Detail.
_________________________________________

Last week

we looked at the NIST CONOPS in their upcoming latent print technology evaluation.

This week

we review a debate on confirmation bias and cognitive issues in fingerprint identification recently published in Fingerprint Whorld, reproduced here under Fair Use.  The exchange includes a letter from the Chair of the Fingerprint Society, a response (Dror, I.E. & Charlton, D., 2007, "Improving perception and judgment: An examination of expert performance"), further compiled questions and criticisms, and a second response (Dror, I.E. & Charlton, D., 2007, "Methodological and conceptual problems in measuring and enhancing expert performance").
_________________________________________
Letters to the Editor
FINGERPRINT WHORLD
by Martin Leadbetter
Response by Itiel Dror & David Charlton

Page 230 FINGERPRINT WHORLD Vol 33 No 129 September 2007

Dear Sir,

Having read the article in the last issue of Fingerprint Whorld authored by Dr Itiel Dror and David Charlton, I feel I need to make a couple of observations. What concerns me seriously is the report that emotional influence is a factor that can bias perception and judgment. I note that so-called ‘fingerprint examiners’ were influenced in their decision making process by stories and photographs relating to the crimes from which they had been asked to undertake fingerprint comparisons. I do not intend getting involved in any lengthy pseudo dialectic about this issue other than to say, that any fingerprint examiner who comes to a decision on identification and is swayed either way in that decision making process under the influence of stories and gory images is either totally incapable of performing the noble tasks expected of him/her or is so immature he/she should seek employment at Disneyland. I have no knowledge of where these ‘examiners’ originate, but would certainly not want them on my staff. I would hope that they are not employed within a UK fingerprint bureau, (in fact, I hope they’re not employed in any bureau) but if they are, then some serious remedial action needs to be taken immediately to ensure that they discontinue putting fingerprints and the public at dire risk.

I do not totally disagree with the suggestion that an examiner might be influenced in his/her decision making process, but in all my 40 years’ service in fingerprints I can categorically state that I never knew of any examiner changing his or her decision because the crime was nasty or unimportant, or being swayed by a gruesome photograph. The rumoured instances that I have heard of have reportedly occurred either when a senior officer has leant over a junior expert’s shoulder, adding undue pressure to get a result, or when an examiner has assumed that the previous competent checkers have done a correct job and does not make a correct analysis before reaching his/her decision.

Personally, I believe that this sort of reporting unnecessarily damages the true state of fingerprint identification. We all know that no system in any discipline can ever be absolutely perfect and flawless when humans are involved in its operation, but let’s be real. Can any other process boast such a reliable and highly accurate record of success? Certainly not the Health Service, where occasionally it has been reported that literally thousands of wrong diagnoses are sent out to unwitting patients. And I do find it rather unsavoury that those within our own ranks, who ought to know better and are aware just how reliable the fingerprint system is, continue to provide fuel for those within the media and Press who seem to relish attacking what is the most valuable tool in the investigating officer’s armoury.

Yours faithfully,
Martin Leadbetter FFS RFP BA (Hons)
Chairman – The Fingerprint Society

************************************************************************************************
Response FINGERPRINT WHORLD Vol 33 No 129 September 2007 page 231

Improving perception and judgment: An examination of expert performance
Itiel E. Dror and David Charlton

We are very happy and grateful for the opportunity to have an open forum to hear and respond to issues of concern regarding our research, as raised by the Chair of the Fingerprint Society. The fingerprint domain can only advance by considering new ideas and perspectives, discussing and examining them in an open and scientific way. Genuine self-reflection and being critical of one's own work is essential if one wants to improve. Many times it is unpleasant, but it is necessary, and we welcome it in regard to our own work and suggest you welcome it in the domain of fingerprint examination. We welcome the comments and criticisms raised by the Chair of the Fingerprint Society in response to our paper (Charlton, Del Manso, and Dror, 2007). His comments can be divided into two categories. The first are concrete and material comments that require us to respond by explaining our research methodology, providing clarifications, reconsidering our analysis and conclusions, and so forth. The second are comments that are defensive and personal, which do little to advance our research or the fingerprint community. The former type requires us to provide clarifications because the comments are legitimate and essential criticism that pulls us all together to a unified platform we can use to learn and advance for the benefit of everyone. The latter type deserves dismissal because such comments do not progress the debate in a meaningful way, and in fact are harmful, as they reflect a defensive, unscientific, and unprofessional approach.

The comments that need mere clarification arise either from a lack of understanding of our research, or from a lack of clarity on our part in explaining our work. Either way, we think this is a good opportunity to clarify these issues. The concern that is characterised as most “serious” relates to “emotional influence” of “fingerprint examiners” by “stories and photographs”. We must clarify that our studies using stories and photographs were not conducted on fingerprint examiners (as clearly stated in our article; see Dror, Péron, Hind, & Charlton, 2005). We should note that there are good scientific reasons to believe that such factors do in fact affect expert judgements (not only those of fingerprint examiners, but in any expert domain that relies heavily on human perception and judgement). However, this has not yet been established or examined in our research. Our research does include studies with fingerprint experts, and these do examine and establish that contextual influence can bias fingerprint examiners, but these studies employ non-emotional contextual information (see the details in Dror, Charlton, & Péron, 2006, and Dror & Charlton, 2006).

We do agree, wholeheartedly, with the Chair of the Fingerprint Society’s comment that “serious remedial action needs to be taken immediately” to deal with any psychological influences on fingerprint judgements. Where our opinions diverge is that the Chair of the Society dismisses the mere existence of such influences, ignoring a whole set of well-established research in a variety of expert domains, as well as more recent studies directly examining fingerprint experts. He focuses his attention and effort on dismissing and denying the problem rather than taking the necessary actions to deal with it.

As a second line of defence he personally attacks and insults experts, attributing any such influences to their personal shortcomings, calling them “immature” and “incapable”, and saying they “need to seek employment in Disneyland”. These “remedial actions” do nothing to address and deal with the basic psychological and cognitive influences that affect fingerprint examiners. Not stopping here, he also engages in a personal attack on the experts who are trying to research and deal with the problem, characterising their efforts to improve work in the fingerprint domain as “unsavoury” and saying they “ought to know better”. These kinds of comments should be dismissed, because not only do they fail to advance and promote the fingerprint domain, they are harmful to it, as they reflect defensiveness and a lack of openness to scrutiny. They also demonstrate a lack of a professional and scientific approach (not to mention that they are unfair and insulting to the individuals involved).

Although he acknowledges that “an examiner might be influenced in his/her decision making process”, he “categorically states that he never knew” of cases of examiners who were influenced by context. The fact that he never knew is purely an epistemic claim reflecting his knowledge, rather than an ontological claim reflecting what actually exists. In other words, not knowing is a weak justification for claiming that it does not exist, especially when you have not engaged in examining whether it exists, have not been open minded to the possibility that it exists, and have dismissed, a priori, attempts to examine it, while adding insult to those who did. We do not doubt that he is sincere that he himself has never known of such cases, but it may well be that he has not encountered them even though they exist nevertheless; it may also be that he encountered them, but did not recognise them for what they were.

What is clear from his letter is that he is not open to examining and finding out whether such influences exist, and to taking action to deal with them. This approach – and not our research – is what is harmful to the fingerprint domain. Exposing weaknesses and dealing with them does not cause “unnecessary damage”. There is damage, and it will increase, unless the fingerprint community adopts a more scientific approach. Such an approach entails not only being open to scrutiny, but also embracing and encouraging it. It is not an easy way, but it is how any domain improves and advances.

Indeed, the Chair of the Fingerprint Society is correct that medical doctors make mistakes too. We have repeatedly said, and will say again, that the problems we expose underlie a whole set of expert domains (and there is scientific research demonstrating this in the medical domain too). In fact, these issues pertain to any human, as they arise from the architecture of human cognition (see Dror, 2005). The claim that these types of mistakes are more common in the medical domain (a claim made by the Chair of the Society; we have no data to support or refute it) is a weak defence and unhelpful at best. Any errors made by fingerprint examiners, as rare as they may be (in absolute terms or relative to medical experts), need to be researched and understood with an open mind and an undefensive attitude. Then, based on the scientific findings, “remedial actions” should be taken as much as possible.

Finally, we address the concern that our research provides “fuel for those within the media and press who seem to relish attacking” this domain. First, we are here to advance and improve the domain, not to attack it. Sometimes moving forward requires acknowledging problems. Second, we have minimal control over how the media and press use our publicly available research. We go to great lengths to explain to the media that fingerprint examination is for the most part very reliable (see, for example, the BBC Newsnight interview on the reliability of fingerprint examination, http://users.ecs.soton.ac.uk/id/bbc.html). Third, great damage is done to the fingerprint domain not by our studies and research, but by the attitude and defensive responses to it, as exhibited in the letter from the Chair of the Fingerprint Society. A professional and scientific response to the media by the fingerprint community will only enhance this domain. Such a response could, for example, say “We have read this research and are learning from it how to further advance the reliability of fingerprints”, “We welcome such research and are determined and committed to improving our work”, or even, “In fact, one of the leading researchers is a fingerprint examiner himself, and we encourage all our examiners to continuously question and scrutinise our practices because this is the only way to guarantee fingerprint examination of the highest standards”. These kinds of responses reflect well on the domain, advance it, and add respect to it in the eyes of the media. In contrast, responses such as “any examiner that may be influenced by emotional context should seek employment in Disneyland” or “such research is unsavoury and our examiners ought to know better than to take part in it” are the cause of the harm.

We want to thank the Chair of the Fingerprint Society for taking the time to express his concerns, and to thank the editor of the journal for agreeing to have such an open debate. We hope that these exchanges of views will contribute to establishing a joint platform and agenda for research aimed at understanding and enhancing fingerprint expert decision making.

REFERENCES (can be downloaded from http://users.ecs.soton.ac.uk/id/biometrics.html):

Charlton, D., Del Manso, H., and Dror, I.E. (2007). Expert error: The mind trap. Fingerprint Whorld, 33, 151-155.

Dror, I.E. (2005). Perception is far from perfection: The role of the brain and mind in constructing realities. Behavioral and Brain Sciences, 28 (6), 763.

Dror, I.E. & Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification, 56 (4), 600-616.

Dror, I.E., Charlton, D. & Péron, A.E. (2006). Contextual information renders experts vulnerable to make erroneous identifications. Forensic Science International, 156 (1), 74-78.

Dror, I.E., Péron, A., Hind, S., & Charlton, D. (2005). When emotions get the better of us: The effect of contextual top-down processing on matching fingerprints. Applied Cognitive Psychology, 19 (6), 799-809.

************************************************************************************************
Page 234 FINGERPRINT WHORLD Vol 33 No 129 September 2007
Further compiled questions for Dr Itiel Dror, David Charlton and Hannah Del Manso:

1. The profile of the sample group has been questioned. The subjects were predominately young, female, and drawn from a narrow geographical area (and if the location of the examiners was similar, does this also raise the possibility of a narrow range of training, working procedures, and environmental influence?), as well as being volunteers. Is this sample group, which is not representative of the wider fingerprinting professional body in age, gender, experience, or location, sufficient in range or in size (only 5 subjects) to extrapolate onto the entire fingerprint profession?

2. It is well known within the fingerprint profession that people behave differently in different circumstances – i.e. under test conditions some examiners perform worse than they would if they were performing the same work in their daily life in the bureau. Given that this was not an evaluation of the normal work they perform for 99% of the year, but instead of decisions on supposedly ‘controversial’ cases (the kind of high profile, high pressure work that they may not be used to dealing with), are the results representative of the routine work done in the fingerprint bureau? Were all of the subjects working in the same environment, with the same equipment, under the same time pressures as they would experience in the bureau, and were the materials presented to them in the same format that they see on a daily basis?

3. Should it not be noted whether the change in results was from ‘Identified’ to ‘Not Identified’ or from ‘Identified’ to ‘Insufficient’? In the fingerprint profession the ultimate misdemeanour, other than deliberate falsification of evidence, is to misidentify. Stepping back from a previous decision to identify a mark (especially when you are unaware that you have previously identified it) and declaring it non-ident or insufficient is quite different from declaring non-matching marks identified. In the profession, missing an ident is treated very differently from misidentifying – both for the fingerprint practitioner and, crucially, in the consequences for the victim of the misidentification – yet your study treated any change in decision equally, regardless of the working culture. Should you have taken this cultural influence into account?

4. One of the criticisms levelled at the authors of this work is that it places weapons in the hands of defence lawyers that could enable them to attack the fingerprinting profession. Much of the work currently being done into fingerprints is at an early stage. Indeed, the work into expert bias or probability based evidence may not create a fully rounded picture for a decade or more at the current rate of research. Are you at all concerned that these preliminary studies may be twisted in the courts to misrepresent the strength and validity of fingerprint evidence?


************************************************************************************************
Response:
Methodological and conceptual problems in measuring and enhancing expert performance
Itiel E. Dror and David Charlton

Measuring and enhancing expert performance is a challenging task. Many domains avoid scientifically quantifying performance level, let alone examining decision making and error. However, for any expert domain, be it medical experts, military fighter pilots, police officers, or forensic experts, one must study and understand the underlying expertise (see Dror, Kosslyn & Waag, 1993). With proper measurements, better understanding can be achieved, which is a prerequisite for minimising errors and enhancing performance. Our research is working to achieve these goals, but it depends on cooperation and an open dialogue with the experts in the field. We would like to take this opportunity to thank all the experts, worldwide, who support the scientific investigation into expertise, both by taking part in our empirical data collection and by continuously giving us constructive feedback on our work.

Below we specifically respond to the concerns raised about our research. We are limited in space and thus can only partially address the issues within the space allocated to us. However, we encourage and invite readers who want to hear or ask more to feel free to contact us (Dave: david.charlton97@btinternet.com, Itiel: id@ecs.soton.ac.uk).

The first concern relates to the participants in our studies. The use of “predominately young females drawn from a narrow geographical area” relates only to our first study (Dror, Péron, Hind, & Charlton, 2005). This participant pool (undergraduate university students) is the standard used in most psychological research worldwide. As in any study, the generalisability from the sample to other populations is an issue to consider. If the findings derive from basic cognitive mechanisms (as in the study cited above), then one can be quite confident that the findings do apply to other populations. However, in our other studies we specifically changed our sample of participants to address this concern, and we used experienced latent print examiners who were representative of the wider fingerprint community around the world.

The first time we studied latent print examiners we used only five practitioners. This was in the Dror, Charlton, and Péron (2006) study. In another study (Dror & Charlton, 2006) we used another set of examiners, this time six, giving a total of 11 examiners. Is this enough? How many participants are enough? We would need much more space than we have here to explain and justify our methodology. In short, the number of participants needed to justify scientific findings depends on many factors. Sometimes a single participant is sufficient (e.g., a case study, or people with brain injuries; see, for example, Kosslyn, LeSueur, Dror, & Gazzaniga, 1993), and sometimes tens of thousands of participants are needed (as in the study showing the effect of aspirin on the heart). The correct number of participants is determined by the effect size, the methodology used, the statistical contribution and significance of each data point and participant, statistical power, the variance and distribution of the data, the number of factors and degrees of freedom in the statistical model, and a host of other factors. Our studies with latent print examiners have used a within-subject experimental design in which participants are compared to themselves, and thus act as their own control. This is a very strong and robust methodology that requires fewer participants.

Still relating to concerns about the participants: distinct from laboratory studies, in which researchers can control many of the variables and experimental manipulations, field research (which is important for learning about experts’ performance) introduces a variety of limitations and confounds. The validity and reliability of these types of data collection depend on the participants not being aware of the purpose of the study and, if possible, not even being aware that they are being studied. When participants know they are in a study, and worse, when they know what the study is about, they perform differently. Thus, all our studies with latent print examiners were conducted under very strict scientific conditions to avoid such problems (see Dror, Charlton, & Péron 2006, and Dror & Charlton, 2006, for full details). The downside of this is that the sample size is relatively small, but it is nevertheless very powerful and meaningful.
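To make the within-subject logic concrete, here is a minimal sketch in Python (the trial data are invented for illustration; these are not the studies' actual results). Each examiner's later decision on a mark is compared against that same examiner's earlier decision on the same mark, so every examiner acts as his or her own control:

```python
# Hypothetical illustration of a within-subject design: each examiner is
# compared to him/herself, by checking whether the examiner's new decision
# on a mark matches the decision that same examiner made on it earlier.

# (examiner_id, earlier_decision, later_decision) -- invented data
trials = [
    ("A", "ident", "ident"),
    ("A", "ident", "exclude"),
    ("B", "ident", "ident"),
    ("B", "exclude", "exclude"),
    ("C", "ident", "insufficient"),
    ("C", "ident", "ident"),
]

def consistency_rate(trials):
    """Fraction of trials in which the examiner repeated the earlier decision."""
    consistent = sum(1 for _, before, after in trials if before == after)
    return consistent / len(trials)

print(f"within-subject consistency: {consistency_rate(trials):.2f}")
# -> within-subject consistency: 0.67
```

Because each data point is a paired comparison against the same person, between-examiner variability drops out of the analysis, which is why fewer participants are needed than in a between-subjects design.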

Now, in terms of the conditions and environment of testing: we tried, where possible, and with quite a high level of success, to perform the data collection under the same environmental conditions as everyday, ordinary work in the fingerprint laboratory. This was especially achieved in the larger study (Dror & Charlton, 2006), in which six examiners made 48 judgements. The materials were presented in the same format and circumstances as normal and routine work. Of course, there is variability in what ‘normal’ circumstances are, even within a single laboratory; this of course increases when considering ‘normal’ circumstances across laboratories, across regions, across countries, and across continents. As in any research, in any domain, one has to be careful about how widely the findings are applicable. We have gone to great lengths to use scientific methodologies and data collection procedures that make our findings applicable. However, no research is perfect, and ours is no exception. Without the strict stipulations on conditions and design that we used (for example, if we had used participants in an open data collection in which they knew they were being studied, or had not used a within-subject experimental design), we could have easily collected data from dozens, if not hundreds, of fingerprint examiners. But that data would be very limited in robustness, and its interpretation even more problematic.

Another concern raised relates to the distinction between a change in decision from ‘identify’ to ‘exclusion’ (or to ‘insufficient’) vs. a change from ‘exclusion’ (or from ‘insufficient’) to ‘identify’. It is obvious that the latter is an error. If you do not regard the former as an error, then errors can be eliminated altogether by never making an ‘identification’ decision: if you never make an ‘identification’ call, then you will never be wrong. However, if you do regard the former as an error, that raises many issues relating to the payoff matrix (the consequences) of false positives and false negatives (as correctly detailed within the concern). Personally, we think that both are errors, but of a different magnitude. Scientifically speaking, the thresholds and decision criteria within the decision model (see Dror, Busemeyer, & Basola, 1999) are not symmetrical. Thus, the level of ‘evidential weight’ needed for deciding an individualisation is higher than that needed for deciding an exclusion (and even lower for deciding ‘insufficient’).
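The asymmetry of decision criteria can be sketched as a simple thresholded decision rule. This is our illustration only: the threshold values are hypothetical, and this is a deliberate simplification rather than the sequential sampling model of Dror, Busemeyer, & Basola (1999):

```python
# Sketch of asymmetric decision criteria (illustrative thresholds only):
# an individualisation demands more evidential weight than an exclusion,
# with an 'insufficient' zone between the two cut-offs.

IDENT_THRESHOLD = 0.9    # high bar for calling an identification
EXCLUDE_THRESHOLD = 0.2  # much lower evidential weight ends in exclusion

def decide(evidential_weight):
    """Map a 0-1 evidential weight onto the three possible conclusions."""
    if evidential_weight >= IDENT_THRESHOLD:
        return "identification"
    if evidential_weight <= EXCLUDE_THRESHOLD:
        return "exclusion"
    return "insufficient"

for w in (0.95, 0.5, 0.1):
    print(w, "->", decide(w))
```

Because the two cut-offs are not symmetrical, the same shift in perceived evidential weight can more easily push a borderline conclusion out of the 'identification' zone than into it, which mirrors the pattern reported in the studies.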

Indeed, in our Dror, Charlton, and Péron (2006) study we examined only a one-way decision change: changing from an individualisation decision. However, in the larger Dror & Charlton (2006) study we examined both directions of decision change. Consistent with the point raised in the concern, the findings (see full details in the study) support that contextual information and bias can more easily change a decision away from an individualisation. However, the data also showed that context can bias the decision toward an identification (although this is more difficult, it can still happen). It should also be emphasised that we are interested in the data purely for scientific academic reasons. For example, we are fascinated by the data highlighting the consistency of decision making within some examiners (in both directions of decision), and future studies may help to provide detailed models of the sort of examiners who are more immune to such biasing effects.

The final concern relates to the impact of the studies within the criminal justice system. We have addressed this issue in quite some detail in the last section of our response to the criticism raised by the Chair of the Fingerprint Society (so, to avoid redundancy, we refer the readers to our response to the letter).

We would like to add here that, with the cooperation of the fingerprint community, these research studies can contribute (and already have contributed) to changes in how examination is carried out, how examiners are trained, and so forth. It is up to us (both researchers and latent print examiners) to work together to make fingerprint examination as sound and reliable as possible. For us, it means working with any fingerprint examiner and any fingerprint organisation to take our findings and use them to enhance fingerprint decision making. It also means working as much as we can with the media and courts to explain our research and what it really means. For you, the examiners, it means willingness to participate, cooperate, and support such research (not necessarily to agree with it!). It also means being open to hearing and questioning some of your own accepted ways of working and thinking.

This open debate is definitely a step toward achieving our common goals. In the past few years we have witnessed more and more openness from the fingerprint community to take on board the delicate issues we research. Our research is guided purely by the motivation to find the truth. We adopt a strict scientific methodology and approach, so as to conduct truly scientific studies.

And more importantly, our interest is genuinely in finding the answers (in contrast to so-called researchers who are just trying to find support for what they already believe, or for what they think, or hope, is the answer). This is not always easy, but it is the only way to maintain and improve the high quality of fingerprint identification in the long run. Your comments pointed out a variety of issues concerning our research. Some of these are artefacts and confounds inherent to this type of work, and we are well aware of them. Other comments have enlightened us and will be useful as we plan and carry out our future research. This set of comments is helpful in establishing a common platform for debate and research, and will improve the research in this domain – something we hope we are all happy with and motivated to do. We are thankful to have had this opportunity to discuss our research and findings, and hope to continue with a productive dialogue and interactions.

REFERENCES (can be downloaded from http://users.ecs.soton.ac.uk/id/publist.html):

Dror, I.E., Busemeyer, J.R., and Basola, B. (1999). Decision making under time pressure: An independent test of sequential sampling models. Memory and Cognition, 27 (4), 713-725.

Dror, I.E. and Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification, 56 (4), 600-616.

Dror, I.E., Charlton, D. and Péron, A.E. (2006). Contextual information renders experts vulnerable to make erroneous identifications. Forensic Science International, 156 (1), 74-78.

Dror, I.E., Kosslyn, S.M. and Waag, W. (1993). Visual-spatial abilities of pilots. Journal of Applied Psychology, 78 (5), 763-773.

Dror, I.E., Péron, A., Hind, S. and Charlton, D. (2005). When emotions get the better of us: The effect of contextual top-down processing on matching fingerprints. Applied Cognitive Psychology, 19(6), 799-809.

Kosslyn, S.M., LeSueur, L.L., Dror, I.E. and Gazzaniga, M. (1993). The role of the corpus callosum in the representation of lateral orientation. Neuropsychologia, 31 (7), 675-686.


_________________________________________

KEPT - Keeping Examiners Prepared for Testimony - #2
#2 - Blind Verification - Explanation
by Michele Triplett, King County Sheriff's Office

Disclaimer:  The intent of this is to provide thought provoking discussion.  No claims of accuracy exist. 

 

Question – Blind Verification:

Can you explain Blind Verification?

Does your office have a policy on how to do Blind Verification (when to do it or how often)?

 

Possible Answers:

a)      Blind verification is a tool we use to ensure a conclusion is reproducible by others.  We use it when reproducibility may be an issue.

b)      Blind verification is a tool used to prevent bias from being introduced into a conclusion.  We use it in rare instances when bias may be suspected.

c)      (State your agency policy).

d)     Blind verification is a time-consuming process, and we aren't staffed to do it.  It's not practical in most offices because of the manpower required.

e)      We have a 2-year backlog and can't realistically add more steps to our current process.

f)       Blind verification is a tool used for complex conclusions.  It checks the reliability of a conclusion, but it doesn't check that the conclusion was arrived at appropriately or that it has a sufficient amount of justification behind it.  It's one of the tools that can be used to ensure good results, but scientific peer review checks the reliability and also checks that the justification behind the conclusion is adequate.

 

Discussion:

Blind verification hasn’t been used for very long in our profession, so most agencies don’t have written procedures on when or how to use it.  If your agency has a policy, then you could answer accordingly.  If your agency doesn’t have a written policy, then you could testify to the scientific literature that’s available.

Answers a, b, and c:  These are adequate answers.

Answers d and e: These may be correct answers, but they leave the court feeling that more should have been done.  It’s true that blind verification is time consuming and would backlog many agencies, but the courts are not sympathetic to this answer.  Imagine how you would feel if one of your children told you they knew they should do their homework but didn’t have time, or if an elderly relative told you they knew they should take their insulin but didn’t have time.  You might respond that if something is necessary, then they need to make the time.  The same holds true here: blind verification should be done when it’s needed.  The good news is that it’s not needed in most cases, and we need to make sure the courts understand this.  Since blind verification is a tool to protect against bias, it should be used in complex examinations where bias is a concern (it’s been established that bias is only a concern when the alternative conclusion is plausible).  In simple comparisons with a high amount of quality and a high amount of information, bias isn’t a problem.  Using blind verification in standard cases isn’t a good quality assurance measure; it’s just an additional process that isn’t needed.

Answer f: This is the most comprehensive answer.

_________________________________________

Feel free to pass The Detail along to other examiners.  This is a free newsletter FOR latent print examiners, BY latent print examiners. With the exception of weeks such as this week, there are no copyrights on The Detail content.  As always, the website is open for all to visit!

If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out!  (To join this free e-mail newsletter, enter your name and e-mail address on the following page: http://www.clpex.com/Subscribe.htm  You will be sent a Confirmation e-mail... just click on the link in that e-mail, or paste it into an Internet Explorer address bar, and you are signed up!)  If you have problems receiving the Detail from a work e-mail address, there have been past issues with department e-mail filters considering the Detail as potential unsolicited e-mail.  Try subscribing from a home e-mail address or contact your IT department to allow e-mails from Topica.  Members may unsubscribe at any time.  If you have difficulties with the sign-up process or have been inadvertently removed from the list, e-mail me personally at kaseywertheim@aol.com and I will try to work things out.

Until next Monday morning, don't work too hard or too little.

Have a GREAT week!