we looked at the IEEE Biometrics Certification and the September IAI
Monthly Update by Joe Polski.
By Drs. Ralph and Lyn Haber
We were invited to present testimony to the US Senate
Committee considering creating a new National Institute for Forensic
Sciences. The attached is a copy of what we delivered, which as of
this coming Wednesday, Sept 9, will be a public document. If you are
willing, we would like to see this made available in the Detail, so
it can serve as a basis for discussion (calm and otherwise). We hope
you can do this.
Cordially, Ralph and Lyn
Testimony of Drs. Lyn Haber and Ralph Norman Haber
Before the United States Senate Judiciary Committee
In Support of Legislation to Create a National Institute for Forensic Sciences
Prepared August 28, 2009
Human Factors Consultants
Ralph Norman Haber, Ph.D., and Lyn Haber, Ph.D., Partners
313 Ridge View Drive, Swall Meadows, California 93514
Telephone: 760-387-2458; Fax 760-387-2459
Scientific Background of Drs. Lyn and Ralph Haber
We are two of the few research scientists who are also trained as
fingerprint examiners, and who have been qualified to testify in courts
as experimental scientists about the validity and reliability of
fingerprint comparison methods in general, and about the application of
the method in the instant case.
When we began to study fingerprint methodology, we discovered that the
underlying research on the validity and reliability of the method(s)
used by fingerprint examiners had never been performed. As
research scientists, we simultaneously began to outline a research
program to provide this evidence, we presented our analyses to the
fingerprint profession, we published our findings and proposals in
scientific and professional journals, we wrote a book on fingerprint
comparison procedures, we visited fingerprint crime laboratories to urge
them to host and collaborate in research studies, and we welcomed
opportunities to testify in court when fingerprint evidence was at issue.
Ralph Haber specializes in experimental psychology and human factors as
applied to forensic science. He has been a research professor for more
than 40 years. He has a Ph.D. degree (1957) and post-doctoral
training at the Medical Research Council in Cambridge, England
(1970-1971). He has taught at Yale University, the University
of Rochester (where he was
chairman of the Department of Psychology), and the University of
Illinois, where he is now an Emeritus
Professor of Psychology. He has received 25 grants and contracts from
the National Science Foundation, the National Institutes of Health,
research branches of the military and Veterans Affairs, and from the
Department of Transportation. He has reviewed research proposals
for these governmental agencies, and has served on the editorial boards
of a dozen scientific journals. He has published 250 articles and
9 books in experimental psychology and experimental cognition, forensic
science and human factors. Nearly 100 of these published articles cover
research and analyses of eyewitness testimony and fingerprint comparison
methods.
Lyn Haber specializes in linguistic analyses of complex decision-making,
language development, interviewing, and human factors as applied to
forensic science. She has a Ph.D. from the University
of California (Berkeley), awarded in 1970, and further training and
degrees from the University of Illinois. She has taught at Temple University,
the University of Rochester, Arizona State University, and the
University of Illinois. She has served as a reviewer
for governmental granting agencies and scientific journals. She has
published 150 articles and books in experimental cognition, forensic
science and human factors. Over half of these concern research and
analyses of eyewitness testimony and fingerprint comparison methods.
Specifically on fingerprints, the two of us together have written a
book, Challenges to Fingerprints (October, 2009), published 8 articles,
and made 19 presentations to professional fingerprint organizations and
to fingerprint examiners in crime laboratories. We have attached copies
of our resumes.
In 1988 we established Human Factors Consultants, a two-partner firm
providing research and consultation services to the United States
government, the United States military, private US business companies,
and the legal profession. With respect to the legal profession, we
have been retained to consult or provide expert testimony in over 150
cases involving forensic eyewitness and fingerprint identifications.
We have been retained in nearly 30 fingerprint cases (half in Federal
Courts), and have testified 11 times, 6 of which have been Daubert or
Frye hearings on the admissibility of fingerprint evidence.
Testimony to the US Senate Judiciary Committee
Today, a person (with only a high school degree) can be hired (without
meeting any predefined qualifications for forensic comparison work) by a
crime laboratory (which is not accredited by any organization), and be
then trained on-the-job to carry out evidence comparisons (by another
technician without certified qualifications to provide training), using
local methods and procedures (lacking adoption by their profession or
evidence of validity and reliability), be approved by the laboratory to
perform independent forensic comparison casework (without passing any
external proficiency tests), be allowed to represent their profession and
laboratory (without being certified by their profession), and offer
testimony in a state or federal court (qualified only on the basis of
their employment in that laboratory), testimony sufficient that the jury
convicts the defendant. The parenthetical limitations in this
paragraph describe daily occurrences in courts in the
United States over the last century.
Today, the majority of court testimony offered by forensic experts still
suffers from these parenthetically stated limitations.
To address the quality control problems described in the paragraph above,
the National Academy of Sciences (NAS) report on the Forensic Sciences
this past spring (2009) recommends the creation of a National Institute
for the Forensic Sciences (NIFS). In the remainder of our testimony, we
document, with evidence drawn from the forensic disciplines, the urgent
need to implement this NAS recommendation. We are most familiar
with the fingerprint comparison discipline, so most of our examples
concern fingerprints. The problems apply to all of the areas of forensic
science.
Our testimony is divided into three parts. First, we describe the
outdated and unregulated status of forensic evidence technicians and the
laboratories in which they work: the absence of quality control
regulations for personnel and workplace. Second, we describe the
absence of documentation and research evidence that the methods employed
to identify people give accurate results when used properly.
Third, we describe how a new National Institute of Forensic Sciences
could address these problems effectively and economically.
Part I: Absence of Quality Controls for Personnel and Laboratories
1. Personnel Quality Control: Absence of Hiring and Employment Requirements
With a few exceptions (e.g., the FBI), a crime laboratory will employ
anyone whom it decides can be trained to do forensic analyses of
evidence. Frequently, examiners-to-be have already worked as
police officers or sheriffs (positions which generally do not require a BA
degree with specialty in science). Other trainees come from a
variety of two and four year college programs, rarely ones with majors
in criminal justice or related programs. Fewer than 10% of the
evidence technicians listed in the International Association for
Identification (IAI) membership have a BA or BS degree, and fewer than
1% have an advanced degree. No data are available about the
working examiners who are not members of the IAI.
The IAI, through the Scientific Working Groups for each forensic
discipline, as well as the American Society of Crime Laboratory
Directors (ASCLD) through its accreditation procedures, lists some
recommended backgrounds for examiners, including a BA or BS degree, with
specialization in science. However, there is no requirement: there are
no teeth in their recommendations, and no way to enforce them. The
data listed above indicate there is little compliance with these
recommendations.
As a consequence, new trainees differ greatly in their abilities,
knowledge and skills. This complicates training curricula and the
assessment of trainees.
2. Personnel Quality Control: Absence of Training Requirements
Only a few
crime laboratories, such as the FBI, have developed detailed training
curricula, with stringent criteria for assessment. The remaining
thousands of laboratories have none. At present, the majority of
forensic technicians have been trained on-the-job, under the supervision
and tutelage of an employee with more experience. Most laboratories do
not require participation in courses offered by other laboratories,
organizations or universities.
With few exceptions, the forensic disciplines have no formal evaluations at the
end of training to document that the trainee has mastered the required
skills, and is now qualified to work independently. There are no
standard criteria for when a trainee can begin casework or testify in
court. The IAI, through its Scientific Working Groups, has published
recommended outlines
of training programs for the different forensic areas. However, there
are no requirements and no way to enforce their use.
As a consequence, examiners receive different kinds and amounts of
training, and vary greatly in their methods, knowledge and skill.
3. Personnel Quality Control: Absence of Training Specialists
At present, none of the forensic areas defines a position of trainer, or
specifies qualifications for a person who provides training to new
employees or refresher training for more experienced technicians.
These highly technical professions do not recognize that to train others
is a skill in its own right that has to be acquired, mastered and
evaluated. The absence of any reference to training personnel also
reflects the absence of commitment to training as a significant part of
the forensic disciplines.
As a consequence, the personnel who train forensic technicians vary
greatly in the quality, kind and amount of training they provide.
4. Personnel Quality Control: Absence of Proficiency Testing Requirements
With few exceptions, proficiency testing is not required for forensic
technicians. The majority of examiners who belong to the IAI have
never been tested for their proficiency. At present, of the 5,000
members specializing in fingerprint examinations, fewer than 10% are
proficiency-tested in any given year, and the majority of the examiners
taking the test are the same ones who took it in previous years.
While the American Society of Crime Laboratory Directors (ASCLD)
recommends annual proficiency testing for accredited laboratories, no
information is available as to the number of laboratories that
administer such tests, or the number that administer in-house tests
manufactured, administered and scored by the laboratory.
The profession does not require proficiency testing and there is no way to
enforce a recommendation for such testing.
As a consequence, the majority of examiners cannot document either
improvement in their skill or mastery in their field.
5. Personnel Quality Control: Inadequate Proficiency Tests
The external proficiency test currently used by the IAI and by ASCLD for
latent fingerprint examiners fails to meet the requirements for an
adequate proficiency test (see Haber & Haber, 2009, for a detailed
analysis). The latent fingerprint proficiency test does not
contain test items comparable to typical casework, it samples mainly
same-donor pairs of prints (even though different donor pairs make up
the majority of casework, and pertain to the protection of innocent
persons), there is no measurement of the difficulty of individual items
or of the entire test, there is no evidence of the reliability or the
validity of the test, it is administered by mail without proctoring, it
requires conclusions that are not allowed in casework, it is
inappropriately scored, and it provides no guidance for remedial work
needed for a low-scoring examiner. The IAI latent fingerprint test
is so poorly designed, administered and scored that the results cannot
be used to assess the proficiency of latent fingerprint examiners.
The proficiency tests used by the IAI for other forensic disciplines are
similarly flawed.
As a side note, starting in 1995, the FBI created an in-house
proficiency test. This test was described in detail by an FBI
examiner (Meagher, 2002) in a Daubert hearing in federal court (US v.
Plaza, 2002) as an example of a good quality control procedure. Quality
control experts, experts in proficiency testing, and fingerprint
examiners testified in the same hearings that the FBI's test was
worthless. The FBI abandoned this test immediately thereafter.
The forensic disciplines have ignored the necessity for adequate
proficiency testing. As a consequence, the vast majority of
examiners who testify in court have not been routinely
proficiency-tested. The tests in present use fail to meet routine
criteria for quality proficiency tests, so that even these few examiners
who have been tested cannot offer evidence to the court of their level
of skill and accuracy. The forensic disciplines allow the skill
levels of their technicians to go unassessed.
6. Personnel Quality Control: Absence of Certification Requirements
Neither the IAI nor any other forensic regulatory organization requires a
forensic technician to be certified in order to perform casework or to
testify in court. The IAI provides certification in eight
different disciplines, but few forensic examiners are certified.
For example, only about 15% of fingerprint examiners who are members of
the IAI are certified, and this number is dropping, not increasing.
Since the IAI is the only organization offering certification for
fingerprint examiners, and many fingerprint examiners are not members of
the IAI, even this low percentage is inflated.
Forensic technicians differ from scientific experts in other fields
(such as doctors, or engineers) in that there are no standardized
training, supervision and certification requirements.
7. Personnel Quality Control: Inadequate Certification Testing
The forensic professions exercise no quality control over the purposes,
design, construction and scoring of their certification tests. The
tests manufactured and administered by the IAI are unstandardized.
There is no evidence of their reliability or their validity (Haber & Haber,
2009), and most of the criticisms listed above of the IAI proficiency
tests apply equally to their certification tests. The forensic
sciences have ignored the necessity for adequate certification tests. As
a consequence, the majority of examiners who testify in court are not
certified. The tests in present use fail to meet routine criteria
for quality certification tests, so that even these few examiners who
have been tested cannot offer evidence to the court of their level of
skill and accuracy. The forensic disciplines allow the skill
levels of their technicians to go unassessed.
8. Personnel Quality Control: Absence of Requirements for Court Testimony
There are no
required qualifications for the members of the various forensic
disciplines to testify in court as an expert. Any fingerprint
examiner is allowed by the crime laboratory to testify if he or she has
first-hand knowledge of the specific case being tried. It is
extremely rare that a court challenges the credentials of an employed
fingerprint examiner, and we do not know of a single instance in which
one was not permitted to testify. As a consequence, examiners who
provide forensic evidence vary greatly in their knowledge, skill and
accuracy. We have
reviewed the personnel areas of employment, training, proficiency,
experience, certification, and access to court. The forensic
disciplines do not regulate the technicians who provide forensic
evidence for the criminal justice system. Recommendations are not
enforced, and existing evidence shows little compliance.
9. Laboratory Quality Control: Absence of Accreditation Requirements
One published report estimates that as many as 8,000 laboratories employ
forensic technicians
to examine forensic evidence for the criminal justice system
(Fitzpatrick, 2008). Today, only about 330 crime laboratories performing
forensic evidence analyses in the United States have met accreditation
recommendations issued by ASCLD or by any other national accrediting
organization, fewer than 5%. Further, accreditation recommendations are
not required, and laboratories that fail to comply can remain accredited.
Required accreditation imposes and ensures quality control procedures in
crime laboratories. Few laboratories, whether accredited or not,
have manuals covering their basic operations and work products.
These manuals serve to describe requirements for work flow through the
laboratory, for supervision of all work, for random sampling of products
for accuracy and compliance, and for continued protection to prevent
contamination and bias in decision making.
Two examples of poor quality control concern verification of conclusions and
correcting errors. Every conclusion made by a forensic examiner
has consequences: an identification risks the possibility that an
innocent person may be convicted, and conclusions of exclusion,
inconclusive or no value risk a guilty person remaining at large. At
present, few laboratories verify these conclusions. Of those that
do, nearly all laboratories use a non-blind ratification procedure, in
which a second examiner is asked to look over the work of the first one
and concur in the conclusion. Only a few laboratories require
independent replication, in which the case is assigned to another
examiner who has no knowledge that it has already been examined or that
another examiner reached a conclusion, and those verifications are
typically restricted to identification conclusions in high profile
cases. Research has documented that non-blind verification fails to
catch errors. The FBI’s erroneous identification of Brandon
Mayfield, in which three additional examiners ratified the
identification made by the first examiner, serves as a real-life example.
Because of the seriousness of all errors, good quality control should
require that a laboratory carry out an independent replication of all
critical conclusions made by examiners.
Error correction is a second example involving poor quality control. When an
error is detected, during replication or during review, the laboratory
needs explicit policies on how to record the error, investigate its
cause, work out changes to prevent such errors in the future, and decide
whether remedial retraining is needed for the examiner(s) who made the
error. Because errors are serious, and damaging to the prestige of
the laboratory, laboratories have been reluctant to publicize that an
error occurred, and currently have no way to learn from them. Most
laboratories express this reluctance by not having published error
correction procedures in place.
The absence of required accreditation and quality controls means that laboratories
vary widely in the accuracy and completeness of their products.
Part II: Research Issues: Method Error and Examiner Error
A fingerprint examiner (or other forensic examiner) performs a comparison and
identifies a suspect as the source of the crime scene evidence.
What is the probability that he or she made a mistake? To answer this
question, two things must be known: the accuracy of this examiner in
general, and the accuracy of the method that was applied to make the comparison.
We showed above that current proficiency and certification tests are
inadequate to assess examiner accuracy in casework. In this part,
we describe the absence of evidence for the accuracy of the comparison
methods themselves.
Assessing any method's accuracy requires experiments. The subjects for the
assessment must be master examiners, with substantial experience, tested
many times, so they are unlikely to make errors through lack of
training, experience or carelessness. The method itself must be
sufficiently described and the master examiners highly familiar with it.
The examiners must make bench notes for each comparison to document that
they used this method and followed it correctly. The assessment
should be carried out under optimal working conditions (i.e., state of
the art equipment, anonymously, without time pressure). Finally, the
crime scene evidence samples must represent the full range of the
quality and quantity of information found in normal casework evidence to
which the method is applied. With such controls, examiner
error is minimized, and the results of the comparisons represent a
measure of the accuracy of the method itself over the full range of
evidence to which it is applied.
No versions of this experiment have ever been run in the 100 years since the
introduction of forensic comparison evidence in the courts. The accuracy
of the comparison method is untested and unknown. At present, the
experiments cannot even be performed because initial research is needed
to satisfy the conditions of such an experiment. No version of an
ACE method has ever been described in sufficient detail to decide
whether an examiner used the method correctly, and since there are
several versions of ACE, it is not clear what version should be tested.
No standardized formats for bench notes or reports have been approved by
the profession, and no published experiment (or proficiency or
certification testing) has required the examiners to provide bench
notes. There is no measure of the quantity and quality of
information in crime scene evidence, so latent prints cannot be selected
against any standard of difficulty or provide a guarantee that they
match the range found in casework.
These problems were raised in the NAS (2009) report. The report
expressed the same concerns raised here: there is no research being done
to demonstrate the accuracy of the methods being used by forensic
examiners. We return to this concern in Part III of our testimony. Here
we illustrate the consequences for the forensic disciplines of the
failures to define the method, to measure the information values of
crime scene evidence, and to carry out the necessary research to
demonstrate the accuracy of the method.
10. Absence of a Complete Description of the ACE Method
The forensic disciplines use an Analysis-Comparison-Evaluation method (known as ACE).
ACE was first described 50 years ago as a general forensic framework,
and has been gradually refined, especially in its application to
fingerprint comparisons. None of the forensic disciplines has
offered a complete description of each of the stages and sub-steps of
ACE. None of the dozen textbook descriptions is complete enough
for an examiner to follow step by step. The textbooks also differ
from each other in significant details, especially those involving
quality controls to minimize bias. The manual on how to carry out
an ACE comparison for each forensic discipline has never been written.
11. Absence of an Official Description of the ACE Method
In Frye and Daubert court challenges to forensic comparison evidence,
the courts look for evidence that both the professional community and
the scientific community accept the method in use, and agree that it
meets the requirements of their respective disciplines. Because
the so-called ACE method exists in multiple forms and details, and the
forensic disciplines have never approved a particular version as
official, the proponents of comparison methods have not been able to point to
a method that has been adopted by their discipline (Cole, 2006).
The NAS report speaks clearly to the lack of acceptance by the
scientific community of the methods used by the forensic disciplines.
12. Absence of Validation of the Standards Required by the Comparison Method
The ACE method requires three standards, one to justify the conclusion of value,
one for exclusion, and one for identification. Each standard
should be defined by the profession based on physical evidence uncovered
during the application of the ACE method.
The Value Standard, which is applied at the beginning of the analysis
stage of the comparison process, assesses whether the crime scene
evidence sample contains enough reliable information (quantity and
quality of detail) to match it correctly to the true donor. If the
information content fails to meet the value standard, the standard
states that no comparisons are to be made against that crime scene
sample in order to avoid potential erroneous conclusions. The
value standard rests on a physical measurement of the quality and
quantity of information contained in crime scene evidence. This
measurement has not been defined (see paragraph 14 below). Until
the amount of information in the crime scene evidence has been
quantified, the value standard cannot be validated to determine the
percentage of errors it avoids. In current practice, each examiner
uses his or her own subjective standard of value, which means that
different examiners can (and do) reach different conclusions about the
value of the same crime scene sample.
The Exclusion Standard is applied in the comparison stage of ACE.
Because there are always differences between evidence samples, an
examiner must decide whether any of those differences were not caused by
distortion. If a difference did not arise from distortion, then
two different people must have made the two samples, and the suspect is
excluded as the source of the crime scene sample. The
Exclusion Standard is explicitly stated (compared to the other two
standards): if even a single difference cannot be explained by
distortion, terminate the comparison and conclude that the suspect is
not the source of the crime scene sample. However, the sources of
distortions have neither been well defined nor measured. Without
these measurements, each examiner uses his or her own subjective
standard of exclusion, which means that different examiners can (and do)
reach different conclusions about the same two samples.
The Sufficiency Standard is applied in the evaluation stage of ACE to the
amount of similarity found between the two samples (assuming the crime
scene sample had sufficient information, and every difference observed
between the two samples is attributed to distortion). If the two
samples have enough similarity so that the chance that they could have
come from two different people is remote, then the examiner concludes an
individualization of the suspect as the source of the crime scene
sample. However, none of the forensic disciplines has developed
and tested a metric of similarity, or determined how much similarity is
sufficient to avoid an erroneous identification. Each examiner
uses his or her own subjective standard of sufficiency, which means that
different examiners can (and do) reach different conclusions about the
same two samples.
None of the forensic disciplines has conducted the research necessary to
quantify the three standards that underlie ACE. We have described
the designs for this research (e.g., Haber and Haber, 2007; 2009), and
it is neither difficult nor expensive to carry out. Without it,
the standards of the ACE method on which conclusions are based are
undefined, and the method itself is incapable of producing valid or
reliable conclusions.
13. Absence of Evidence that Examiners Employ ACE
Forensic examiners in current practice are not required to document their work by
recording bench notes during the examination process. A typical report
contains only a conclusion.
One of the major reasons why the details of the stages and sub-steps of
a comparison method have to be spelled out concerns protecting the
examiner from bias. Recent research has shown that without proper
sequencing, examiners are more likely to conclude what they expected to
find rather than what was really there (see Haber & Haber, 2009,
for examples). The Office of the Inspector General of the
Department of Justice (2006) in its report on the erroneous
identification by the FBI in 2004 of Brandon Mayfield as one of the
terrorists, concluded that a major contributing factor was that the FBI
examiners were biased, and that the FBI failed to follow the appropriate
procedures to avoid bias. The NAS report (2009) reviewed other
examples where examiners were exposed to bias. The lack of a fully
described manual on the ACE method leaves the forensic disciplines open
to more "Brandon Mayfield" erroneous identifications by otherwise
well-trained examiners.
The absence of a specified ACE method and of contemporaneous bench notes
means that different examiners can (and do) follow undocumented,
different steps in different sequences. Until contemporaneous, adequate
notes are required, no tests can be made of the accuracy of conclusions
reached by the application of the ACE method.
14. Absence of an Objective Measure of the Quality and Quantity of
Information in Crime Scene Evidence
Each forensic discipline works with a range of quality of crime scene
evidence, from unusable for comparison to extremely clear and
informative. Without a measurable scale of the amount of
information in the evidence, an objective standard of value cannot be
established. The forensic disciplines have not developed an objective
measure of information quality and quantity.
Without a measurable scale of the amount of information in the evidence,
the difficulty level of a proficiency test or a certification test
cannot be determined.
15. Absence of Evidence that the ACE method is Reliable and Valid, or
has a Known Error Rate
A method to compare fingerprints (or tire tracks, or DNA) can be
assessed for its reliability and for its validity. A method's
reliability can be demonstrated in two ways. Master examiners
apply the ACE method to a variety of latent-exemplar pairs and conclude
identification or exclusion. If all the examiners reach the same
conclusion about each latent-exemplar pair, the method is reliable: it
produces consistent results. Reliability can also be demonstrated
by asking a set of examiners to re-compare latent-exemplar pairs from
their distant past casework. If the examiners reach the same
conclusion today, the method is reliable. Reliability is a measure
of consistency.
In contrast, validity is a measure of accuracy. The accuracy of
the ACE method can be assessed by asking well trained examiners to
compare a number of latent-exemplar pairs using the ACE method, pairs
for which the true donor is known (whether the donor of each pair is the
same or a different person). If the examiners reach correct
conclusions for each pair of fingerprints (identification when the donor
of the two prints is the same; and exclusion when the donor of the two
prints is not the same person), the method is shown to be valid.
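For readers who find a concrete illustration helpful, the two measures described above can be sketched in a few lines of code. The examiner conclusions and ground truth below are invented solely for the example; no real casework data or real examiner results are implied.

```python
# Hypothetical data: three examiners' conclusions on five
# latent-exemplar pairs ("id" = identification, "excl" = exclusion).
conclusions = {
    "examiner_A": ["id", "excl", "id", "id", "excl"],
    "examiner_B": ["id", "excl", "id", "excl", "excl"],
    "examiner_C": ["id", "excl", "id", "id", "excl"],
}

# Ground truth for each pair: did both prints come from the same donor?
same_donor = [True, False, True, True, False]

def reliability(conclusions):
    """Consistency: fraction of pairs on which all examiners agree."""
    per_pair = zip(*conclusions.values())
    agreements = [len(set(calls)) == 1 for calls in per_pair]
    return sum(agreements) / len(agreements)

def validity(conclusions, same_donor):
    """Accuracy: fraction of conclusions matching the known ground truth."""
    correct = total = 0
    for calls in conclusions.values():
        for call, truth in zip(calls, same_donor):
            correct += (call == "id") == truth
            total += 1
    return correct / total
```

As the testimony notes, a real assessment would further require that the latent-exemplar pairs span the full range of quality and difficulty found in casework, and that the examiners document their method in bench notes.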
We have already shown that the conditions required to test the accuracy
of ACE have not been met. The method is not completely described,
a method has not been officially adopted by the discipline, the method
does not include validated or objective standards, and the
method does not have measures for the information content of the
evidence being compared.
In addition to a lack of evidence of the validity of ACE, no evidence exists as to
the probability that conclusions based on ACE will be wrong. What
is the error rate for the method? Court requirements for the
introduction of scientific evidence arrived at by a scientific method
(e.g., Daubert) include an established error rate, yet the forensic
disciplines continue to attest to conclusions, with an unknown
probability of error.
16. Practice with ACE Fails to Benefit Society Adequately
Based on evidence from research experiments, as well as from estimates provided
by forensic examiners, the two kinds of erroneous conclusions possible
from forensic examinations are not weighted equally by the profession
and do not occur with equal frequency (see Haber & Haber, 2009 for a
detailed presentation). An erroneous identification, in which an
examiner concludes that the crime scene sample and the sample from the
suspect match when in fact the suspect was not the source, is treated as
an extremely serious mistake. The likely outcome of this error is
the indictment, trial and conviction of an innocent person. The
number of such instances is unknown, but the data on exoneration of
falsely convicted persons suggests it is far from zero. This is a
quality control problem of serious consequence for society, and it is
the explicit concern of the forensic professions.
An erroneous exclusion, in which an examiner concludes the suspect is not the source
of the crime scene sample when in fact the suspect was the source, is
not treated as a serious error by the forensic disciplines.
Erroneous exclusions effectively also occur when a case is dismissed
because the method used for comparison was not powerful enough, or the
examiner was not skilled enough to find the similarity. Then, the
likely outcome is that the guilty person remains at large to commit further
crimes. The exact number of these instances is also unknown, but
test and research results show that it far exceeds the number of erroneously
identified innocent people. In order to avoid erroneous
identifications, forensic examiners increase the number of true
perpetrators they fail to identify, a one-way quality control solution
that greatly weakens the value of forensic evidence for solving crimes.
The forensic disciplines rarely expend effort to review exclusions,
determinations that evidence has no value, or inconclusive conclusions.
These reviews should be mandatory.
Part III: The Purposes of a National Institute for Forensic Science
The consumers of forensic evidence, the citizens of the United States,
would directly benefit from a public safety system committed to the
highest quality of processing forensic evidence. The NAS report
documented an absence of quality controls and adequate scientific
support across the forensic disciplines. The report concluded that the
highest quality forensic services would be obtained if the disciplines
were regulated under a single
federal agency. The report discussed the use of the Department of
Justice, and several other federal agencies, and rejected each of them
as lacking the forensic research expertise required, and lacking
sufficient independence from the forensic disciplines. The NAS report
also considered some of the federal research agencies, such as the
National Institutes of Health, or the National Science Foundation, but
noted that these do not have expertise in the forensic disciplines. In
their place, the NAS urged that a new and fully independent Institute be created.
The NAS envisioned a new institute with primary responsibility for
regulation of all of the forensic disciplines, drawing heavily on the
combined expertise of examiners, scientists, and researchers. With
this expertise, the new institute would increase the quality and
regulation of training, proficiency, certification and accreditation.
It would develop testing programs to demonstrate the validity,
reliability, and error rate of the comparison method. It would establish
a better balance of potential errors. There are no human or
financial resources to do this now in the forensic disciplines, in
federal regulatory agencies, or in federal research agencies.
Our testimony in this document has highlighted the absence of quality
controls in the disciplines and the absence of research. Until
these quality controls are in place, these disciplines will continue to
offer unregulated and uncontrolled evidence to their consumers.
The importance of severing regulation of the forensic disciplines from
the forensic disciplines themselves is the same as for most other
programs that serve the public. Whenever regulation is based on
principles that sometimes or frequently conflict with self-interest,
self-interest trumps what is best for the public at large. When
banks are allowed to regulate themselves, self-serving is inevitable.
When stock markets are allowed to regulate themselves, self-serving is
inevitable. The same is true of large companies, mortgage lenders,
credit card companies, and a host of other entities that serve the
public while regulating themselves. Independent regulation increases the
effectiveness and quality of the product. It also ensures that the
needs of the consumer come first.
The 100-year history of the forensic disciplines continues to show the
inadequacy of their self-regulation in their quality control and
research decisions. The forensic disciplines presently lack quality
control for personnel, quality control for laboratories, or research
support for their methods and procedures. The NAS report attributes
these failures to the lack of independence between the operations of the
forensic disciplines and their quality control.
Design and Regulation of Quality Control Procedures
The NAS report strongly noted the absence of properly designed and
consistently administered proficiency tests, certification tests,
validated training programs, and the paucity of laboratory accreditation
programs and regulation. The principles of quality control, proficiency,
certification, training, and accreditation are the same for each
forensic discipline. Their design and construction can be combined so
that members of each discipline work together with experts in quality
control, in training, and testing and assessment. NIFS can assist
in the development of cross-disciplinary methods for training,
proficiency testing, certification procedures, supervision, and error
correction. This strategy is highly cost effective.
Identify Existing Models of Quality Controls and Research
A few states and crime laboratories have
developed standardized training curricula with periodic assessment.
A few laboratories have put in place rigorous work flow controls and
verification procedures. NIFS does not need to start from scratch.
The interaction between examiners and experts in training programs would
identify the quality measures now in place. Similarly, basic research
has already been performed for DNA. That research could serve as a
model for the research needed in the other forensic disciplines. NIFS
could serve as a center to identify these models.
Identify and Facilitate Research Needs and Funding
A forensic examiner is not trained to design research or to carry it
out. Empirical research requires training in research science.
Scientists rarely are trained to carry out forensic comparisons.
Research on forensic comparisons requires collaboration between forensic
examiners working in laboratory settings and research scientists.
Little of this work can be performed in university or government
settings. This collaboration has not previously occurred except in
isolated instances, and the NIFS is needed to bring it about. Part
of that brokering includes participation in the review of research
designs, analyses, and interpretations.
A NIFS would help find and develop resources to fund research projects.
Part of that funding would be included in the NIFS budget, and part
could come from existing sources or outside sources.
Other Benefits of a NIFS
A National Institute for Forensic Sciences would be a forum for exchange
of ideas between technicians and researchers, especially at the level of
policy making. NIFS would be a forum for exchange between the
consumers (police investigators, district attorneys, defense attorneys,
criminal court judges), the legal scholars and research scientists, and
the forensic examiners in the different disciplines. NIFS could
participate with legal scholars and judges to help perfect criteria for
admission of forensic evidence in court. NIFS could facilitate
development of the interoperability of databases and computer systems
within and between the different forensic disciplines. It could
help solve the lack of a uniform forensic language to use in court.
In conclusion, as research scientists, ones also trained in one of the
forensic disciplines, we urge the Judiciary Committee to recommend
passage of legislation to create a National Institute for Forensic
Sciences as soon as possible.
References
Cole, S. A. (2006). Is fingerprint identification valid? Rhetorics of
reliability in fingerprint proponents' discourse. Law & Policy.
Fitzpatrick, F. (2008). Whither accreditation? Identification, 38(2), 9-10.
Haber, L., & Haber, R. N. (2009). Challenges to Fingerprints. Tucson, AZ:
Lawyers and Judges Publishing Co.
Meagher, S. (2002). Testimony in U.S. v. Plaza regarding Mr. Plaza's motion
to exclude the government's latent print evidence. February 24, 2002.
National Academy of Sciences. (2009). Strengthening Forensic Science in the
United States: A Path Forward. Washington, DC: National Academies Press.
Office of the Inspector General, United States Department of Justice
(S. Fine, Inspector General). (2006). A Review of the FBI's Handling of the
Brandon Mayfield Case. Washington, DC: Department of Justice.
U.S. v. Llera Plaza (I). (2002). 179 F. Supp. 2d 492 (E.D. Pa.).
Dear Ralph and Lyn,
It was good to see the premise of your
testimony: to encourage the adoption of the NAS recommendations by
calling attention to the lack of quality assurance and research in
forensic science. However, I have several specific issues with your
document, and I have explained them below.
You state, "The IAI, through the Scientific
Working Groups for each forensic discipline," but this confuses the IAI
Science and Practice committees with the FBI-sponsored Scientific
Working Groups (SWGs). They are very separate and distinct
organizations with different missions.
In quite a few places in your correspondence, you over generalize or
even state specific bad practices as absolutes in our discipline. When
you do this, you demonstrate your biased nature against the forensic
sciences and further alienate yourselves from being effective in
bringing about the change that you say you want. Statements such as the
following inflame examiners and their managers because they are
overstated at best and, in some cases, simply appear malicious:
1) "Only a few crime laboratories, such as the FBI, have developed
detailed training curricula, with stringent criteria for assessment.
The remaining thousands of laboratories have none.” I would
venture to say that the majority of the “remaining thousands of
laboratories” that train forensic scientists have written training
guidelines. They may not all be ideal programs, but I would venture to
say that they exist and are being utilized.
2) "With rare exceptions, the forensic disciplines have no formal
evaluations at the end of training to document that the trainee has
mastered the required skills, and is now qualified to work
independently. There are no standard criteria for when a trainee
can begin casework or testify in
court." Once again, I would venture to say that the majority of the
laboratories that train forensic scientists have documented minimum
requirements for proceeding from training on to casework.
3) "none of the forensic areas defines a position of trainer, or
specifies qualifications for a person who provides training to new
employees or refresher training for more experienced technicians." This
is a ridiculous claim.
4) “Few laboratories, whether accredited or
not, have manuals covering their basic operations and work products.” I
would venture to say that the vast majority of laboratories have some
sort of operations manual or documented reporting procedure(s).
5) “Every conclusion made by a forensic
examiner has consequences: an identification risks the possibility that
an innocent person may be convicted, and conclusions of exclusion,
inconclusive or no value risk a guilty person remaining at large. At
present, few laboratories verify these conclusions.” Your statement
groups identifications in with the other conclusions, and states that
few laboratories verify them. This is false. The vast majority of
laboratories verify identifications. Your statement would have been
largely correct had it addressed only the conclusions other than
identification.
You state: "The absence of any reference to training personnel also
reflects the absence of commitment to training as a significant part of
the forensic disciplines." In fact, SWGFAST references instructor
qualifications in its standard for minimum qualifications and
training-to-competency documentation. This also seems overstated in the
sense that I am sure many of the other forensic disciplines address
instructor qualifications in their guidelines and standards as well.
You state "only about 15% of fingerprint examiners who are members of
IAI are certified, and this number is dropping, not increasing."
However, at the IAI Certification Board meeting at the conference in
Tampa, the numbers indicated an increase in certification and a decrease
in IAI membership. This is the opposite of what you claim, so your
information may be outdated.
You state "The forensic professions exercise no quality control over the
purposes, design, construction and scoring of their certification
tests." This is a vast overstatement. The IAI Certification Boards
take great care in producing quality test design and construction. The
boards operate from a standard baseline that is consistent across them
all, and their purposes are all aligned with the mission of the IAI.
When you state "There is no evidence of their reliability or their
validity," I would remind you that absence of evidence is not evidence
of absence. It has not been demonstrated that the IAI certification
tests are in any way invalid for the purpose for which they were
established. You also state later in part 2 of your document "the
absence of evidence for the accuracy of the comparison method,"
"Absence of Evidence that Examiners Employ ACE," and "Absence of
Evidence that the ACE Method is Reliable and Valid." Again, the absence
of evidence is not evidence that the accuracy, reliability, or validity
of ACE-V is absent, or that examiners do not use it.
You state that "most of the criticisms listed above of the IAI
proficiency tests apply equally to their certification tests" but you
must know that the IAI does not conduct proficiency tests.
You state that “the forensic sciences have
ignored the necessity for adequate certification tests.” This is not
true. And even if it were true, it doesn’t lead to the “consequence”
that “the majority of examiners who testify in court are not certified.”
There is no logical progression of thought between those sentences.
You state that “the tests in present use fail
to meet routine criterion for quality certification tests." I wonder
what the foundation is for this conclusion, since neither of you
qualifies for the test and therefore you have never seen it.
I agree with your number 8: that there are no
required qualifications for the members of the various forensic
disciplines to testify in court as experts. But there are instances
where examiners have not been permitted to testify (although such
instances are rare).
In part 2 of your document, you reference the
“Value Standard.” You state that if “the crime scene evidence sample
[fails to] contain enough reliable information… to match it correctly to
the true donor” then “…no comparisons are to be made…”. This
demonstrates your lack of knowledge of latent print examination. In many
laboratories, impressions that are not suitable for identification may
still be compared to exclude the subject as a potential donor of the
impression. This is just one example of what appears to be either 1)
your intentionally overlooking laboratory procedures that are
transparent and objective in nature in order to make your point, or 2)
your not knowing the real facts about the disciplines you are
testifying about as "experts".
You state that “We have described the designs
for this research (e.g., Haber and Haber, 2007; 2009), and it is neither
difficult nor expensive to carry out.” If your statement is true, then I
would ask why you continue to talk about this research for years but
have not carried it out.
I agree with you on all parts of 16B
involving erroneous exclusions. I would add the caveat that the
community does feel it is a serious error, but I don't feel that we
treat it seriously enough. And I agree with your support of NIFS and
the NAS recommendations.
In conclusion, I don't disagree with the premise
of your correspondence: that there is a lack of quality control and
research. However, I completely disagree with your approach of
overstating this premise as "the absence of quality controls in the
disciplines and the absence of research" and of overstating most of the
arguments for your position. If you had only remained more constructive
and conservative in this correspondence, and indeed in your efforts
over the last decade, perhaps more good would have come from those
efforts. But I fear it is far too late to realize the majority of the
benefits you could have used your experiences to bring about. Instead,
I am afraid that the Judiciary Committee, in the case of the current
correspondence, and many examiners, in the case of the last decade,
feel only numbness and bitterness toward what appears to be nothing
more than persistent rhetoric combined with inaction. I truly hope that
your response next week to these comments can convince the readers of
the Weekly Detail otherwise.
(Readers of the Weekly Detail may comment directly on the CLPEX.com
chat board on this issue if desired. A thread has been started for
discussion about this "Haber's Testimony Document".)