T H E
D E T A I L
Monday, March 11, 2002
Good morning via the "Detail," a weekly e-mail newsletter that greets latent print examiners around the globe every Monday morning. The purpose of the Detail is to help keep you informed of the current state of affairs in the latent print community, to provide an avenue to circulate original fingerprint-related articles, and to announce important events as they happen in our field.
Last week, Pat Wertheim shared his thoughts on why NOT to panic over the Plaza ruling. If you didn't get a chance to read it over, it can be found in the Detail Archives. This week, we turn our attention toward the issue of proficiency testing.
I received feedback from
several individuals when I mentioned that this week’s Detail would cover the
issue of latent print examiner proficiency tests.
As many of you know, this is a “hot topic” right now that was covered
in the U.S. v. Plaza hearing recently held by Judge Pollak to reconsider the
exclusion of fingerprint examiner testimony.
Allan Bayle reportedly testified that the science of fingerprints is
sound and that the methodology of latent print examination is scientific, but also
that the FBI’s proficiency tests were too easy.
This may be an instance of not agreeing on a definition of what, exactly,
a proficiency test is designed to accomplish.
Proficiency is one of those
concepts which, depending on whom one asks, may be defined differently.
Some may associate proficiency with competency, others with ability.
Some believe proficiency tests should measure the individual, others
believe agency procedures or team product fall under the purview of proficiency
testing. So the first step in any
discussion of proficiency testing is to narrow the definition of exactly what the
proficiency test in question is designed to do.
Combined with other QA/QC
procedures, a proficiency test becomes only a part of the guarantee that an
agency consistently delivers a quality product.
In fact, as Ed German recently put it, “because examiners know it is a
test, proficiency tests may not be nearly as valuable at measuring real-world
operation as other quality assurance (QA) options.”
So for latent print examiners, perhaps operational quality should be
assured by a more encompassing program, such as routine and documented
verification of all identifications, technical and administrative case reviews,
and similar measures. Proficiency testing more
appropriately becomes a part of the quality control procedures within the latent
print section of an agency. However,
proficiency testing alone does not ensure that the examiner (or, if each examiner
is tested, the section) consistently delivers a quality product.
This is true for several reasons, but the main reason is that true
quality control involves more than the administration of an annual test.
Quality control could (and should) also involve quarterly case review,
blind or double-blind testing, and other measures.
So we recognize that in
reality, proficiency testing should only serve to answer the basic question,
“Can an individual examiner continuously demonstrate the ability to
correctly conduct latent print examinations?”
This is a very different question than asking “What is the ability of
this examiner?” A basic
proficiency test should not be designed to test the limits of latent print
examiner ability, or to establish where an examiner’s ability ranks within any
given community of examiners. So if
a proficiency test should not measure ability or departmental operations, that
leaves a proficiency test simply establishing that the examiner in question can
do their job at some minimal level of competency.
Of course, defining such a
test is much easier than creating one. In
order to measure minimum competency, one must actually ESTABLISH what minimum
competency is. So where is the cutoff point?
At what point is an examiner considered proficient?
What could an examiner do to be considered NOT proficient?
Of course, all of these issues are left to the proficiency
test provider and/or the panel that reviews the test format.
But the main point is that a latent print examiner proficiency test
should establish whether or not an individual possesses the minimum level of
ability or competency with regard to known print to unknown print comparisons,
as defined by the test provider. Even
the subjective nature of the terms “proficiency” and “competency” leave
room for discussion, and that’s without establishing a threshold.
Perhaps Dave Grieve summed it up best when he recently commented that
“in the absence of some national standard, competency and proficiency are
simply what an agency says it is. Even
if the profession could establish some kind of consensus, participation [in
testing] would still be voluntary.”
But even having established that minimum competency is the object of proficiency testing, the question arises of what the format for such a test should be. Does a test with 10 latents and 6 suspects, all identifications, accurately measure minimum competency, or should there always be at least one print which was not made by any of the suspects? There are also more fundamental questions to be answered, questions which may not even be considered: Is it necessary for the prints to be 1:1? Should there be written questions? And since we don’t just identify based on “points,” shouldn’t a proficiency test include some comparisons utilizing prominent third level detail? Would enlargements of a latent print and an inked print (side-by-side) be appropriate to test proficiency in comparing and identifying a print containing low second level and high third level detail?
Of course, the same types of issues apply to other disciplines and even other fields of human endeavor. How far does a test need to go to establish proficiency in anything? For the latent print discipline, it would seem that the answer lies in the intended purpose of the test evaluated by each agency or individual. If the purpose of the agency is to utilize proficiency testing as part of a comprehensive QA/QC program to confirm that quality latent print work is consistently produced, then perhaps testing minimum competency is exactly what is needed. Even if all an agency wants to establish is that their employees possess minimum competency, then external proficiency testing alone may assist in this determination. However, if an agency is interested in testing where the examiner ranks in terms of ability, then perhaps options should be explored other than traditional proficiency testing.
So returning to the Pollak hearing, I would be curious to know what standard was being used to evaluate the FBI proficiency examination. If it was compared to the ability of top experts in the United Kingdom, then perhaps, as would also be the case if it were similarly compared in the US, the test might be considered by some to be on the "easy" end of the spectrum. In fact, any test evaluated by the top persons in that field might appear elementary. However, if the intended purpose of the examination, and the role it plays in quality assurance and quality control, is revealed, perhaps a different opinion may surface.
What I would really like to do is open the floor for discussion in this regard. I would enjoy the thoughts of some of you who have not participated in past discussions if you have opinions on the format or difficulty level of proficiency tests. Further, if you have any thoughts on ability testing or other QA/QC procedures that can supplement this issue, I would love to see some posts. The informal Detail "Chat board" is available, as is the onin.com forum.
Feel free to pass the link to The Detail along to other examiners. This is a free service FOR latent print examiners, BY latent print examiners. There are no copyrights on The Detail, and the website is open for all to visit.
If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out!
Until next Monday morning, don't work too hard or too little.
Have a GREAT week!