Breaking NEWz you can UzE...
compiled by Jon Stimac
Statement on Brandon Mayfield Case –
FBI.GOV - May 24, 2004
...Official press release from the FBI National Press Office...
FBI Apologizes to Attorney –
COLUMBIA DAILY TRIBUNE, MO - May 26, 2004
...the FBI admitted mistakenly linking
a lawyer’s fingerprint to one found near the scene of a terrorist
bombing in Spain...
Spain Questioned FBI's Print Match –
INTERNATIONAL HERALD TRIBUNE, FRANCE
- May 27, 2004
...Spanish authorities had raised
questions about the FBI's fingerprint match weeks before the May 6...
Expert Warns Fingerprinting System is ‘Riddled with flaws’ –
SUNDAY HERALD, UK
- May 30, 2004
...US forensic expert has called for a
radical review of fingerprint testing to determine the extent of...
Sorry for the missed Detail last week.
It's a "first" in almost three years. A move from Mississippi to
West Virginia was the culprit. We still don't have phone service in our
new home, but I now have a cell plan that I can use to dial out from my laptop,
so I'm back online! New contact information:
Director of Forensics
Subcontracted to Lockheed Martin
Software engineering section, New Projects group
Cell: (304) 629-6795
From Neal Ferguson:
The San Mateo County Sheriff's Forensic Lab in California has an
opening for a Supervising Forensic Specialist. Applications will be
accepted between May 7, 2004 and June 3, 2004. Those interested may
apply online at www.co.sanmateo.ca.us or at Employee and Public Services
455 County Center, Monday through Thursday, 0700-1900 hrs Pacific Standard Time.
Most of you have probably seen or heard about the
recent news regarding the first ever erroneous fingerprint
identification reported by the FBI. If not, read the May 25 FBI press release.
The most thorough media account I have seen of the case facts was published
yesterday in the Oregonian, and can be found at the following link:
The basics from multiple news reports are that the FBI case examiner received a
submission from Spanish authorities which consisted of a poor quality digital
image of a latent print. This image was searched on IAFIS and was
identified and verified as being the fingerprint of an Oregon lawyer.
Spanish authorities developed other suspects in the case and made an
identification of an Algerian national. Within hours of the Oregon lawyer's
release from custody, the FBI issued a press release apologizing for the
mistake and any harm that it may have caused. Still unclear is whether the
FBI maintains this print was erroneously identified or is of no value for
identification purposes. Also unclear is whether the impression used to
identify the Algerian is the same impression compared by the FBI.
In this situation, it appears that two different people were identified
based on the same impression. That is just as impossible as having one
expert claim individualization and another claim exclusion on the same pair of impressions.
It is refreshing to see an agency admit that an error has been made and make the
information public while assuring others that measures are being taken to
understand what caused the mistake. The FBI is also pursuing an active
resolution to this issue by convening an independent international panel of
experts to review the case and arrive at a conclusion regarding the identity or
non-identity of the impressions. This action shows the desire of the FBI
to leave no questions unanswered and to bring resolution to this issue. By
taking these actions, the FBI has allowed this situation to be openly discussed
in the media, by the community, and by critics of the science of fingerprints.
This discussion will lead to a healthy progression of the science of
fingerprints in many ways and will provide closure to this situation within a
short time period. I believe it
will also increase the desire for studies on practitioner error rate in the
field of fingerprints.
Calculating Latent Print Examiner Error Rate
a commentary by Kasey Wertheim
I personally feel that the error rate of an examiner should consist only of
"type 1" error, or false positives. In latent print examination we
generally think of type 1 errors when an examiner arrives at an identification
decision on two impressions that were not made by the same source. Type 2
errors, or false negatives, mostly involve impressions that were made by the
same source, but the examiner arrives at an inconclusive decision (does not
"make the call") when in fact they had the ability to do so. If an
examiner does not have the ability (training+experience+talent+motivation, etc.)
to make the call, then the error would actually be in over-extending themselves,
even though the answer happened to be correct. Next time it might not be
correct. No accurate method that I am aware of exists to determine
an examiner's ability and therefore to establish whether they actually made a correct or incorrect decision.
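The taxonomy above can be sketched in a few lines of code. This is purely my own illustration of the distinctions the commentary draws (the function name, category labels, and the "had_ability" flag are assumptions, not anything from the article):

```python
def classify_decision(decision: str, same_source: bool, had_ability: bool = True) -> str:
    """Map an examiner's conclusion onto the categories discussed above.

    decision:    'identification', 'exclusion', or 'inconclusive'
    same_source: ground truth -- were the impressions made by the same source?
    had_ability: did the examiner have the training/experience/talent to make the call?
    """
    if decision == "identification":
        if not same_source:
            return "type 1 error (false positive)"
        # Correct answer, but over-extending beyond one's ability is still a problem.
        return "correct identification" if had_ability else "over-extension (correct by luck)"
    if decision == "inconclusive":
        if same_source and had_ability:
            return "type 2 error (missed identification)"
        return "appropriate inconclusive"
    return "exclusion"

print(classify_decision("identification", same_source=False))
print(classify_decision("inconclusive", same_source=True))
```

The sketch makes the commentary's point concrete: a type 2 "error" only exists relative to the examiner's ability, which is exactly why it is so hard to measure.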
Traditionally, examiners who make the decision that sufficient similarity does
not exist to individualize (based on their ability) have been considered to be
in error. I feel they made a correct decision based on their ability, and
it should not be considered "error". An administrative assessment may be
considered, and the overall work product of the examiner may be evaluated, but
the real "error" would have been for the examiner to make the call when they
were uncomfortable with that decision. For these reasons, true "type 2"
error is very difficult (if not impossible) to calculate, and therefore should
not be used in a determination of latent print examiner error rate.
"Misses" (which include other scenarios in addition to both correct and
incorrect "inconclusive" determinations) may be considered and used by
administration as indications of potential training needs, but I don't feel they
should be part of an actual "error rate" of an examiner. In short, I
believe they should be handled administratively, according to performance policy
and procedure. If you feel differently about type 1 and type 2 error, I
would like to see discussion on the CLPEX message board, or feel free to e-mail
me individually. I am working on a paper for presentation at the
conference in St. Louis and publication in the JFI regarding a much more
detailed version of this topic, so please let me know your thoughts if you have
other perspectives on this subject.
If we concentrate on type 1 error, there are several ways a practitioner error
rate could be calculated.
Each method has strengths and weaknesses, and I would like to see some
discussion on each of these methods. Basically, studies can focus on
casework or non-casework. Within casework, studies could examine past,
current, or future cases worked by the examiner. Non-casework could
include blind or double-blind tests, training exercises, surveys or studies,
etc. I feel that a valid study of error has to have samples that were
taken under controlled, known conditions and are known to be true and accurate
representations of known subjects. If these conditions do not exist, I
believe it opens up the test to scrutiny. Any study of casework to
determine error rate will always use a latent image that was not left under
controlled, known circumstances.
Ideally, an accurate study of error rate would include the examiner making
decisions in as close to the same environment to casework as possible.
This is especially true if the resulting error rate will be used to make
conclusions regarding the effectiveness of an examiner in casework. The
use of non-casework scenarios will always result in some condition other than
casework conditions, with the exception of double-blind testing.
Double-blind tests will usually represent actual casework conditions, but a true
double-blind test will only test the error rate of the lab system, not the
examiner. In other words, each identification will be double or triple
checked, and non-identifications may or may not be checked, depending on
laboratory policy and procedure. Variable testing conditions among
laboratories represent a potential limitation of the study, but the resulting
error rate would more closely estimate the error rate of the latent print field
rather than individual examiner error rate.
Images in an ideal study on error rate would be examined in a controlled
environment based on standard protocols. Proficiency testing is about as
far from this ideal as possible, since the examiner knows they are taking a
test. Identifications conducted in a classroom environment might offer a
good compromise between a stringent testing environment and a casework
environment. Of course in any study, false or erroneous identifications
would be divided by the total number of correct identifications to arrive at the
rate of error for an examiner or group of examiners. Mean error rates of
examiners or groups of examiners could then be calculated from these results.
For example, if the FBI has made ten million correct identifications in the 79
years of operation of the latent print section, then as of last week (if the
Mayfield case represented an erroneous identification on the part of the FBI),
the casework error rate of the FBI Lab Division would be approximately one in
ten million, or 0.00001 percent.
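The arithmetic above is a simple ratio, and it can be sketched in a few lines. This is a minimal illustration of the calculation the commentary describes (the function name and structure are my own, not from the article):

```python
def error_rate(erroneous: int, correct: int) -> float:
    """Practitioner error rate as defined in the commentary:
    erroneous (type 1) identifications divided by correct identifications."""
    if correct == 0:
        raise ValueError("need at least one correct identification")
    return erroneous / correct

# The FBI example from the text: one error against ten million correct IDs.
rate = error_rate(1, 10_000_000)
print(rate)              # one in ten million
print(f"{rate:.5%}")     # the same ratio expressed as a percentage
```

The same function would apply equally to an individual examiner, a group, or a whole laboratory system; only the counts change.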
In conclusion, our profession has successfully answered approximately 50 Daubert
challenges to fingerprint examination. Over the same period, we have
dealt with several high-profile problems, including the McKie case and, most
recently, the erroneous identifications in Boston and the FBI error. These
mistakes are occurring at a time when our profession is already under close
scrutiny. Examiners should continually be on guard for questions regarding
errors, error rates, etc. This issue is even brought up by Professor
Starrs in today's "Riddled With Flaws" news article. I feel that a study
on practitioner error rate is overdue, and will probably be demanded in the near
future. The more we discuss what we consider to be appropriate or
inappropriate about such a study, the more prepared we will be to provide the
type of study that most closely represents the error rate of our profession.
To discuss this Detail, the
message board is always open: (http://www.clpex.com/phpBB/viewforum.php?f=2)
More formal latent print discussions are available at
SET A GOOD EXAMPLE FOR EMPLOYEES
To inspire employees and command respect, always speak and act in the
organization's best interests. Use these tips to help you lead by example:
1. Treat everyone with respect and graciousness. Everyone you work
with - from your biggest customer to the maintenance crew - contributes to your
organization's success. Keep that in mind when greeting them in the hall,
answering a question or talking with them at organizational functions.
2. Put your clients on a pedestal. Begin staff meetings by talking
about how the organization solved a problem for a client or customer.
Constantly remind staffers that their job is to serve customers, no matter what
their job titles are.
3. Refer to the mission statement frequently. Employees look to
managers to give their day-to-day work purpose and meaning. One of the
best ways to do that is to take every opportunity to remind people of the
organization's mission. When announcing a business decision, for example,
use the mission to explain your decision.
4. Tie everything to goals. Even when delivering criticism, tie
your comments back to the organization's goals. Saying "You need to redo
the illustrations on the brochure to reflect our commitment to diversity"
improves employees' understanding of your organization's priorities.
5. Don't complain about the organization to employees or clients.
It sets a bad example, and destroys your credibility.
Adapted from "Becoming a Leader: Communication
Techniques That Motivate, Guide and Inspire Employees to Excel," via Communication Briefings,
March, 2004, 800.722.9221, briefings.com.
UPDATES ON CLPEX.com
Updated the Detail Archives
Updated the Newsroom
Updated the Smiley Files with
two new smileys!
Feel free to pass The Detail along to other
examiners. This is a free newsletter FOR latent print examiners, BY latent
print examiners. There are no copyrights on The Detail, and the website is open
for all to visit.
If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox,
go ahead and join the list now
so you don't miss out! (To join this free e-mail newsletter, send a blank
e-mail to email@example.com.) Members may
unsubscribe at any time. If you have difficulties with the sign-up process
or have been inadvertently removed from the list, e-mail me personally at
firstname.lastname@example.org and I will try
to work things out.
Until next Monday morning, don't work too hard or too little.
Have a GREAT week!