Breaking NEWz you can UzE...
compiled by Jon Stimac
Fingerprint Database Lets Police Identify Criminal Immigrants
CONTRA COSTA TIMES, CA
- Nov 14, 2006
...DHS is working to give local authorities rapid access to...
DHS Refines its Requirements for Fingerprint Readers
- Nov 14, 2006
...fingerprint vendors were told to modify their
software and hardware in line with the government's evolving...
Fingerprints Lead to Murder Arrest
CONNECTICUT POST, CT
- Nov 15, 2006 ...prints on a newspaper found at a murder
scene led to arrest in connection with the five-year-old killing...
New Technology Finds Criminal Histories at the Scene
- Nov 16, 2006 ...the Sheriff's Office is now able to look
up a criminal record with just a fingerprint...
Recent CLPEX Posting Activity
Threads containing new posts
Moderated by Steve Everist
WAKE UP KASEY!!!!
Neville 533 19 Nov 2006 06:57 pm
What to do with questionable testimony?
L.J.Steele 255 17 Nov 2006 11:01 pm
Thanks to ABFDE Seminar organizers!
Cindy Rennie 185 16 Nov 2006 05:41 pm
Latent Print Community
Charles Parker 294 15 Nov 2006 09:48 pm
Looking for an article
Michele Triplett 252 15 Nov 2006 04:36 pm
Latents on Cartridge Cases
mark beck 438 14 Nov 2006 02:39 am
UPDATES ON CLPEX.com
No major updates on the website this week.
Last week we looked at a survey involving the discipline of forensic
individualization based on images of detail other than friction ridge
impressions. This week we take a look at a recent article that is critical
of latent print examination, as well as two replies to the article.
Scholars Challenge the
Infallibility of Fingerprints
by Peter Monaghan
Section: Research & Publishing
Volume 53, Issue 13, Page A14
November 17, 2006
Researchers find fault with a key tool in criminal investigations and
find themselves ignored
For over a century, the recovery and analysis of fingerprints has been a
key tool in criminal investigations.
The link between fingerprint and identity was forged in 1888 by Sir
Francis Galton, a British scientist and mathematician who invented the
science of fingerprint identification. Galton calculated the likelihood of
two identical fingerprints at one in 64 billion, and ushered in the modern
era of a practice that dated back to 14th-century China by noting that
prints could be matched through the intersections, splits, and other
"minutiae" formed by the ridges on the fingertips.
Today examiners still work with these minutiae, which are also called
"ridge characteristics," "points of similarity," and "Galton details."
Galton assumed that each person's fingerprints were unique. But scholars
such as Simon A. Cole, an assistant professor of criminology, law, and
society at the University of California at Irvine, note that Galton's
assertion has never been scientifically validated.
In numerous scholarly articles and a book, Suspect Identities: A
History of Fingerprinting and Criminal Identification (Harvard
University Press), Mr. Cole has argued that the problems with assuming that
fingerprints are unique are compounded when forensic investigators and
law-enforcement officers try to match prints that are often smudged or
incomplete.
Researchers in other fields, including computer science and cognitive
neuroscience, are also taking aim at what they see as the myth of
infallibility that surrounds fingerprint identification. In particular,
critics wonder: Why are the champions of the method so hesitant to submit it
to scientific testing?
The lack of scientific scrutiny, says Mr. Cole, has broad implications
for justice systems around the world. Because courts have largely sided with
practitioners of fingerprint identification and not required more-scientific
studies of the method, he observes, law-enforcement agencies "retain legal
carte blanche to claim that fingerprinting is validated and infallible. They
have nothing to gain and everything to lose from validation studies."
Just what can go wrong when law enforcement depends on fingerprints was
demonstrated in a high-profile terrorism investigation in 2004.
After the bombing of several passenger trains in Madrid on March 11,
2004, which resulted in 191 deaths and thousands of injuries, the Federal
Bureau of Investigation arrested and detained a Portland, Ore., lawyer,
Brandon Mayfield, on the basis of a "100 percent match" of his fingerprints
with a set of fingerprints found on a plastic bag left near one bombing
scene. Mr. Mayfield, a convert to Islam, was held in solitary confinement
and then in a jail mental ward.
But two weeks after his arrest, the FBI freed Mr. Mayfield after Spanish
police correctly connected those same prints with an Algerian man.
Mr. Cole is spearheading efforts to test and improve the accuracy of
fingerprinting. For instance, he points out that examiners rarely deal with
whole fingerprints. They use "latent" prints: invisible impressions that a
chemical agent converts into images, albeit often fragmentary, blurred,
overlapping, or distorted ones.
Examiners then seek to match the minutiae in those prints to minutiae in
much clearer inked or scanned prints in police databases that may hold
millions of records.
Mr. Cole contends that such experts are far too credulous about finding
or excluding matches. He says that the points upon which examiners base
identifications are often poor because minutiae are obscured. Prints created
by the same finger may look different, and prints from different fingers may
look the same, creating errors at top laboratories and in systems based on
complex computer-driven algorithms.
Researchers in other fields share Mr. Cole's skepticism about fingerprint
identification.
For Anil K. Jain, a professor of computer science and engineering at
Michigan State University, fingerprint identification is a pattern-matching
problem that cannot be solved with any finality, but only improved upon. To
find such improvements, Mr. Jain created a biennial contest, the
International Fingerprint Verification Competition, in which he invites
colleagues from academe and industry to present algorithms that improve the
accuracy of fingerprint readings.
"When we started this competition," Mr. Jain says, "most of the time
vendors would report their results based on their own proprietary data, and
they could always clean up their reports." The current competition ends in
January 2007, and Mr. Jain notes that the entries often give him and his
colleagues a sense of how far they have to go in their own statistical
analysis of how likely algorithms are to produce false matches when they
analyze a specified number of "points" of identification on a finger. (The
Federal Bureau of Investigation standard, for instance, is to use 12 of the
52 points of the human finger.)
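To see why the number of matched "points" matters for false-match rates, consider a deliberately simplified model. Everything here is an assumption for illustration: real matchers do not treat minutiae as independent coin flips, and the per-point chance probability below is made up, not measured.

```python
# Toy model: if each minutia point agrees by coincidence with probability
# p_chance, and points are treated as independent (a strong simplifying
# assumption), the chance that all n points agree by coincidence is
# p_chance ** n. Illustrative only -- not the FBI's actual procedure.

def false_match_probability(points: int, p_chance: float = 0.1) -> float:
    """Chance that `points` independent minutiae all agree by coincidence."""
    return p_chance ** points

# More required points drives the coincidental-match probability down fast:
for n in (8, 12, 16):
    print(f"{n} points: {false_match_probability(n):.1e}")
```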
Errors of Nature
As Mr. Jain evaluates the study of fingerprints with the aid of
algorithms, Itiel Dror, a senior lecturer in cognitive neuroscience at the
University of Southampton, in England, has looked at the human factor. Do
the limits of human cognition create errors by fingerprint examiners?
Mr. Dror suggests that making such errors may be human nature. In
experiments, he presented expert latent-fingerprint examiners with sets of
prints about which they had previously made determinations. But he presented
those prints to the subjects with new contexts, such as images of graphic
violence.
Two-thirds of the experts "made differing, indeed conflicting,
decisions," he reported in the journal Applied Cognitive Psychology
last year and in the Journal of Forensic Identification this year.
The context experiment "illustrates the vulnerability of humans, even highly
trained and experienced experts, to cognitive and psychological effects that
can influence and even distort their perceptions and judgments," he says.
The results are not surprising, he wrote: "The mind is not a camera; we
are not passive in interpreting data."
Predictably, at his many presentations to gatherings of fingerprint
examiners and law-enforcement officials, "some people get defensive," he
says. "I can't blame them. I'm coming to people who have been doing it for
many years and saying there is a problem. But I say to them, Yes, but people
may have been wrongly sitting in jail for many years."
His message to such gatherings, he says, is that even when experts do
their job optimally, mistakes happen because of normal cognitive processes.
He also tries to elicit a promise from his audiences that they will use what
information they have to improve the training of examiners.
"That," says Mr. Dror, "is a first, necessary step, but it's not enough."
Also needed, he says, is the political and financial will of politicians and
law-enforcement officials to support research on fingerprints and ensure
that the findings make their way into training programs.
Many critics do not imagine that research like Mr. Dror's will quickly
shift opinion among fingerprint examiners. As academic research has
undermined fingerprinting, scholars find that practitioners are tuning out.
"It's not like a scholarly debate, scholar against scholar, where they fight
it out in the literature," he says. "Scholars are used to scholarly debate,
but in this dispute, the practitioners simply don't respond."
While it is not the job of practitioners to write for scholarly
publications, Mr. Cole argues that fingerprint experts think the research
"doesn't matter because it doesn't hurt them. They operate in the courtroom,
where the scholarly literature is just ignored."
But fingerprint examiners will have to take at least some of their
challengers seriously, says Glenn Langenburg, a forensic scientist with the
Minnesota Department of Public Safety. He may be more open to academic
research than his colleagues: He is also a doctoral candidate at the
University of Lausanne, in Switzerland, where he is studying with Christophe
Champod, a prominent researcher in efforts to develop statistical
"probabilistic" models for the accuracy of forensic methods.
Signs that Mr. Langenburg is, as he says, "a minority in our profession"
are easy to find. For instance, "a forum convened by the International
Association for Identification in order to develop a fingerprint research
agenda began its report by implicitly dismissing the need for validity
study," he wrote in a recent essay, "stating 'This forum recognizes the
reliability of friction ridge identification as practiced during the last...'"
Indeed, not all of the psychological research deflates examiners' claims
to expertise. Thomas A. Busey, an associate professor of psychology at
Indiana University at Bloomington, has demonstrated that examiners' repeated
inspection of fingerprints does endow them with a visual expertise that
improves their performance, compared with that of nonexperts.
While experts may have an edge, they sometimes believe they cannot make
mistakes, several observers say. "Unfortunately, forensic scientists often
reject error-rate estimates in favor of arguments that there is an
error-free science," wrote two legal scholars, Michael J. Saks and Jonathan
J. Koehler, in a Science review article last year.
"Likewise, fingerprint experts commonly claim that all fingerprint
experts would reach the same conclusions about every print," wrote Mr. Saks,
a professor of law and psychology at Arizona State University at Tempe, and
Mr. Koehler, a professor in the business school at the University of Texas
at Austin who specializes in behavioral decision making.
That claim, they say, amounts to saying that errors like the one that put
Brandon Mayfield in federal custody are human errors, not methodological
errors. And such claims are problematic, they contend, since the method
relies "primarily on the judgment of the examiner."
And, they add, errors do, indisputably, occur, regardless of examiners'
claims. In one set of industry tests, they note, about one-fourth of
examiners made some errors, a result that even the editor of Journal of
Forensic Identification, a leading industry publication, called...
The pressure for reform seems to be mounting, however. In the December 5,
2003, issue of Science, Donald Kennedy, editor in chief, echoed many
of the objections that Mr. Cole and others have raised about fingerprints.
He also noted that exaggerations of reliability may mar other techniques,
such as hair sampling and analysis of bullet markings.
"Criminal-justice agencies," said Mr. Kennedy, "have been slow to adopt
new scientific procedures and defensive about evaluation of their present
practices."
The lessons learned by rigorous scientific examination of DNA analysis
and polygraph testing were both salutary, he said. The former was proved
reliable and accepted. But in 2003, the National Research Council of the
National Academies found polygraph testing defective in a study conducted on
behalf of the U.S. Department of Energy, and recommended that it not be used.
(Nonetheless, he notes, the department rejected that recommendation and
continued to use lie detectors.)
Still, the U.S. legal system has fought off challenges to fingerprint
analysis. In 1993 the U.S. Supreme Court ruled that scientific testimony
must be based on tested, dependable science. Many saw the ruling as a
serious challenge to fields like fingerprinting, but two years later, a
federal appeals court ruled that the new test did not apply to nonsciences
(in that case, handwriting analysis).
A 1999 ruling by the Supreme Court overruled the federal appeals court
and held that all expert testimony must meet tests of validity.
Fingerprint examination seems to have survived that ruling: In a decision
last December, the Supreme Judicial Court of Massachusetts ruled that the
test of validity for fingerprint analysis was not that the larger scientific
community approved it, but that its own practitioners did.
The Way Forward
Scientific researchers such as Mr. Cole say they will continue to fight
by filing amicus briefs in cases that adjudicate the admissibility of
fingerprinting and other controversial evidence.
Mr. Saks and Mr. Koehler believe it is only a matter of time until
fingerprint technicians adopt a more defensible stance toward the research
that challenges their practice. They might follow the example of DNA typing,
which involves the "application of knowledge derived from core scientific
disciplines" and which survived court scrutiny. For fingerprint science to
achieve the same validity, critics say, it must make more use of such fields
as differential geometry and topology.
Still, Mr. Cole sees only a glimmer of progress in that direction. He
observes that an FBI forensic-science report issued this year calls for the
development of a research agenda. But, he says, "it's a carefully worded
document; it doesn't give any ground on crucial legal issues."
RE: Scholars Challenge
by Michele Triplett
I find it humorous that an organization called "The
Chronicle of Higher Education" would publish an article where the author
didn't do an adequate amount of research into his own topic.
-Galton invented the science of fingerprint identification?
-The FBI standard is to use 12 of the 52 points of the human finger?
-The 5 experts in Mr. Dror's research (isn't it Dr.
Dror, Dr. Cole, and Dr. Champod?) were presented with images they previously
ID'd under a new context, graphic violence? I thought the new context was
that the images were from the Mayfield erroneous ID?
These errors are actually irrelevant to the topic of
the story, so I'll try to ignore them and deal with the topic itself, that
researchers feel like they're being ignored. The author points out that
Cole says that uniqueness hasn't ever been scientifically validated. It
looks to me like Monaghan (the author) never looked for any information to
the contrary. If he had looked, he would have quickly found that multiple
researchers have studied 1) the embryonic development of the skin (I'm not
listing them because this information is easily available); 2) several
statistical models exist supporting uniqueness (I know, these models don't
accurately represent reality - that's why we call them models, they resemble
reality, they don't exactly replicate it). And 3) empirical data collected for
over 100 years also scientifically supports the theory that fingerprints are
unique. If anyone thinks that the latent print community has been silent
about this, they're simply not listening. We've said it and we've screamed
it. Most of us have finally realized that screaming any louder won't help
(ok, so I haven't really realized it yet).
Why are we so hesitant to submit to scientific
testing? First, research is being done. Scientific research requires time
and money; it hasn't been all that long since requests for additional
research were made. The latent print community has acknowledged this
need and has responded (although not as strongly or as quickly as many would
like). Good conclusions don't happen overnight; high quality scientific
research sometimes takes decades. With regard to uniqueness and additional
research, as a comparison for Mr. Monaghan: just because a few quacks insist
the earth is flat, does that mean that scientists have to continue
spending their time and research money trying to convince them?
Infallibility, absolute conclusions, and 100%
match... these statements were common in the past, but our industry has
recognized that perhaps it was overstating results. We have
acknowledged that we are human and errors are possible. We are continually
looking at ways to diminish errors, but this doesn't mean that the error rate
is high. Human error is always going to exist, but if you look at the data,
the error rate of this industry is amazingly low!
One more comment. Since I'm criticizing the author and
the critics, in all fairness I should also make a few comments about one of
our own (Glenn Langenburg). The article states that Glenn Langenburg may be
more open to academic research than his colleagues. I doubt if that's
true. We're all open to bettering the industry and continually making
improvements; Glenn's just more motivated than the rest of us (a lot more
motivated!!). The author makes it sound like Glenn is the only one who
recognizes the need for validity studies. This is supported by stating that
an IAI report about creating a research agenda stated that reliability is
already recognized (I paraphrased this). Anyway, I think we all recognize
the value of validity studies, but at this time the need for validity
studies is probably higher in other areas. If time and money are going to
be appropriated, it only makes sense that they get appropriated in the most
needed areas first.
I don't blame the critics for misrepresenting the facts
in this article. I would imagine that they are rolling their eyes
as well. I blame the journalist. This article is nothing more than a
regurgitation of previous statements and questions that have been asked and
answered. I also feel like it's filled with glaring errors that make most
of us discount the article as a whole.
RE: RE: Scholars Challenge
by Glenn Langenburg
I was afforded an opportunity to read Michele's critique of the article. I
agree wholeheartedly with her observations about the misrepresentations by
the author. I think it is unfair to paint me as one of the only people in the
profession open to research. If I recall, from the interview, we were
talking about the use of a probabilistic approach and whether or not the
error rate is "zero". I am hardly the only one in the profession that is
interested in addressing these issues. If that were true, this would truly
be a sad state of affairs for our discipline. There are many notable
individuals driving this profession forward with their research. It takes
time. Science cannot be rushed to fit the courts', critics', or other
nay-sayers' timetables. My final comment is, if some of my statements were
taken out of context and
misrepresented, then I take any of the quotes from critics in the article
with a grain of salt as well. Thank you, Michele, for your critique.
Feel free to pass The Detail along to other
examiners. This is a free newsletter FOR latent print examiners, BY
latent print examiners.
There are no copyrights on The Detail (except in
unique cases such as this week's article), and the website is open for all
If you have not yet signed up to receive
the Weekly Detail in YOUR e-mail inbox, go ahead and
join the list now so you don't miss out! (To join this free e-mail
newsletter, enter your name and e-mail address on the following page:
You will be sent a Confirmation e-mail... just click on the link in that
e-mail, or paste it into an Internet Explorer address bar, and you are
signed up!) If you have problems receiving the Detail from a work
e-mail address, there have been past issues with department e-mail filters
considering the Detail as potential unsolicited e-mail. Try
subscribing from a home e-mail address or contact your IT department to
allow e-mails from Topica. Members may unsubscribe at any time.
If you have difficulties with the sign-up process or have been inadvertently
removed from the list, e-mail me personally at
firstname.lastname@example.org and I will try
to work things out.
Until next Monday morning, don't work too hard or too little.
Have a GREAT week!