SOPs

Welcome to the public CLPEX.com Message Board for Latent Print Examiners. Feel free to share information at will.

SOPs

Postby Steph » Thu Jun 28, 2012 8:43 pm

I am writing policies and procedures (for ISO accreditation). While certain issues are covered in a general manner by the lab-wide policies, I am attempting to write policies specific to the latent print unit, and I would like to find out what people do at their agencies and how they approach different issues that come up in latent prints. I have covered many of the common issues with non-conforming work and conflict resolution, but I would appreciate any assistance with the following situations, which are causing debate:

1. Brady issue

If the department pays for the certification process and allows the employee to take the test on work time (even though certification is not required at the department), and the employee fails (for any reason, an identification error or otherwise), should this information be available and provided along with other discoverable information if requested? What if the employee takes the test on their own: should management ask employees at random, or have a policy requiring that analysts inform management when they take the exam and report the results, since a defense attorney could ask the analyst on the witness stand without needing to go through discovery? Or is it no business of the department's if they don't require certification, even though a false positive error on the exam may be discovered during testimony?

If an employee has made several errors (caught in verification) and the root cause analysis determines that re-training is needed, is that information provided to the attorneys automatically, or only if they ask on the stand or in discovery?

2. Handling false negatives

Since studies indicate that false negatives occur more often than once thought, how does an agency determine how many is too many? With a false positive there is immediate action: a root cause analysis is performed, the analyst is pulled off casework, and a case review is conducted. While a root cause analysis should be done even for a single false negative, given the indication that false negatives occur at a higher rate than false positives, is there a different threshold, or should they be treated the same, with the analyst pulled off casework and a case review going back several months?

Are false positives and false negatives treated the same when they are caught in the verification process as they would be if they had been verified and officially reported?

For the school of thought that the purpose of verification is to catch mistakes, so the process worked and no one should be pulled off casework or subjected to a case review: what if there were repeated errors? What number would be acceptable before it was determined that there was a problem with the analyst and re-training was needed? And if no case review is performed, how can it be determined that no errors were verified (we verify all identifications and exclusions)?
Is re-training mandated at a certain number of errors, or is it at the discretion of management?
Steph
 
Posts: 6
Joined: Fri Feb 20, 2009 4:51 pm

Re: SOPs

Postby ER » Fri Jun 29, 2012 3:51 pm

1. That whole question seems to revolve around how 'training records' are kept at your lab and what the defense asks for. If all of that information is kept in an employee's training file and the defense asks for the training file, then they get it all. If the department tracks errors but not certification tests, then the defense would get what's in the file. There's probably a lot of variability between agencies in what's kept in the file, and variability from case to case depending on what is asked for.

2. Erroneous exclusions are much more common than we thought. 'False negative' can be a little vague, because it may include erroneous inconclusive or erroneous no-value decisions, which I view as even less serious than erroneous exclusions.

How does an agency determine how many is too many?

Well, that depends on the agency. Our agency has put policies in place to reduce erroneous exclusions. We expect our examiners to make 0-2 per year (though specific numbers aren't written down). All of our examiners combined have never made more than 3 in a calendar year. Depending on the root cause of the errors, we may discuss retraining that examiner on exclusions. However, other agencies that have not taken steps to reduce erroneous exclusions may have to accept more errors; I've heard of labs that expect a combined 20-25 erroneous exclusions each year. Overall, labs will make more erroneous exclusions if they:

a) do not verify all conclusions and value statements;
b) discourage or forbid inconclusive decisions;
c) discourage or forbid examiner discussions;
d) have no standard for sufficiency to exclude; and
e) compare latents that are of value for exclusion only.

On the plus side, if an agency doesn't verify all exclusions, then they'll never notice how many mistakes they're making. Erroneous exclusions are inevitable. But if an examiner is making the same mistake for the same reasons on a regular basis, then that's too many.
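To make that "same mistake for the same reasons" test concrete, here is a minimal sketch in Python of how a lab might tally verification-caught errors by examiner and root cause. The log format, the names, and the REPEAT_LIMIT cutoff are all invented for illustration, not taken from any actual agency policy:

```python
# Minimal sketch: flag examiners who repeat an error with the same root
# cause. Record format, names, and REPEAT_LIMIT are hypothetical.
from collections import Counter

# One (examiner, root_cause) entry per erroneous exclusion caught in
# verification this calendar year.
error_log = [
    ("Examiner 1", "distortion misjudged as discrepancy"),
    ("Examiner 1", "distortion misjudged as discrepancy"),
    ("Examiner 1", "distortion misjudged as discrepancy"),
    ("Examiner 2", "poor-quality exemplar"),
]

REPEAT_LIMIT = 2  # assumed cutoff: same root cause more than twice -> review

for (examiner, cause), count in Counter(error_log).items():
    if count > REPEAT_LIMIT:
        print(f"{examiner}: {count} erroneous exclusions with root cause "
              f"'{cause}' -- consider case review and targeted training")
```

The point of keeping the tally by root cause rather than by raw count is exactly the distinction above: three unrelated errors and three repeats of the same mistake look identical in a raw total.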

Should it be treated the same (as an erroneous ID), with the analyst pulled off casework and a case review going back several months?

Absolutely not. Erroneous IDs and erroneous exclusions are very different kinds of errors that need to be treated differently. Seeing matching features that aren't there (an erroneous ID) is a very serious issue. Not seeing matching features that are there (an erroneous exclusion) is an understandable and inevitable part of what we do, because of how different two prints from the same finger can look. It's still a serious problem, but not as serious as a bad ID. Erroneous exclusions need to be addressed, but not in the same way erroneous IDs are addressed.

Are false positives and false negatives treated the same when they are caught in the verification process as they would be if they had been verified and officially reported?

That probably depends largely on how an agency handles errors, amended reports, CARs (corrective action reports), and so on.

If no case review is performed, how can it be determined that no errors were verified?

If you pull someone off casework for a bad exclusion, you will eventually have no one left working cases. Look at the Black Box study. Almost every single examiner made an erroneous exclusion. Establish a standard for exclusion, and then let the verifier catch the mistakes. If an examiner makes the same mistake repeatedly, then let management decide when it is appropriate to do a case review and establish re-training.

Here are some questions for you... Would it really be retraining? What training has that examiner had in exclusions? Have they been trained in how to avoid erroneous exclusions? Do you have policies that reduce erroneous exclusions? I would think that you can't retrain someone in exclusions until you've provided training in exclusions in the first place. If an agency provides training in exclusions, establishes a standard for exclusions, and implements policies to reduce erroneous exclusions, and an examiner still makes too many mistakes, then it's time for retraining.
ER
 
Posts: 184
Joined: Tue Dec 18, 2007 5:23 pm
Location: USA

Re: SOPs

Postby Steph » Sun Jul 01, 2012 2:04 pm

Thank you for your response; you have made some valid points.

At our agency we have essentially four categories of conclusions: identification; exclusion; inconclusive (similarities present, but not sufficient to identify or exclude, based on the quality of the latent print); and insufficient exemplars (based on the quality of the exemplars).
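For what it's worth, here is a hypothetical sketch of how those four categories might be encoded if a lab wanted to track conclusions and errors in software; the class and value names are invented, not part of any existing system:

```python
# Hypothetical encoding of the four conclusion categories described above,
# e.g., for an internal error-tracking or case-management tool.
from enum import Enum

class Conclusion(Enum):
    IDENTIFICATION = "identification"
    EXCLUSION = "exclusion"
    INCONCLUSIVE = "inconclusive"            # latent quality limits the comparison
    INSUFFICIENT_EXEMPLARS = "insufficient"  # exemplar quality limits the comparison
```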

The term 'false negative' was being used to mean that a subject who should have been identified was instead excluded. I agree with your statements regarding the varying levels of severity among errors. We do not view erroneous inconclusive results as having the same weight as an exclusion that should have been an identification. But we are looking for ways to minimize the variability that we are seeing.

We have always had some tolerance for false negatives, with all of us understanding that they are going to occur. We are now trying to answer the question: what is acceptable in our field? I would say that my view, like that of most of the staff, is a tolerance very similar to yours of 0-2 per year (unwritten). Since it was not written, we cannot hold staff accountable to that standard. There is now a situation of numerous errors from one analyst, and we have to establish and justify boundaries for when, specifically, there are too many errors.

I do understand that pulling an analyst off casework can severely impact a unit, but not pulling an analyst who is repeatedly making mistakes off casework also severely impacts a unit. Under normal circumstances I do not believe an analyst should be pulled off casework for a few errors a year, but I also see the benefit of holding off on cases until a root cause has been determined.

Our management is not familiar with latent print analysis, which is why I am trying to write SOPs that are consistent with what is accepted in our field. The questions have been: Do we just say 0-2 per year, and if so, based on what research? Do we look at some of the studies and say that if errors exceed, say, 10% of an analyst's casework, then intervention is warranted? That would allow for variability in the number of errors per analyst, since not all analysts complete the same number of cases per year (see the sketch below). Or do we base it on each error and its root cause? But then there is the issue of not seeing the forest for the trees: if one analyst has numerous errors that are each explained away for different reasons, are we really accounting for the possibility that there is a problem with that analyst's ability to complete a case?
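As a small worked illustration of the difference between a fixed count and a casework-rate threshold, here is a Python sketch. The analyst names and case counts are hypothetical; the 10% rate and the more-than-2-per-year count are taken from the numbers discussed in this thread, purely for illustration:

```python
# Worked illustration: fixed-count vs. rate-based error thresholds.
# All analyst names and numbers are hypothetical.

analysts = {
    "Analyst A": {"cases": 300, "errors": 3},  # high caseload
    "Analyst B": {"cases": 25, "errors": 3},   # low caseload
}

COUNT_LIMIT = 2    # fixed count: more than 2 errors per year triggers review
RATE_LIMIT = 0.10  # rate: errors in more than 10% of casework triggers review

for name, rec in analysts.items():
    rate = rec["errors"] / rec["cases"]
    by_count = "flag" if rec["errors"] > COUNT_LIMIT else "ok"
    by_rate = "flag" if rate > RATE_LIMIT else "ok"
    print(f"{name}: {rec['errors']} errors / {rec['cases']} cases = {rate:.1%}"
          f" | count rule: {by_count} | rate rule: {by_rate}")

# Both analysts trip the fixed count (3 > 2), but only Analyst B (12.0%)
# trips the 10% rate rule; Analyst A sits at 1.0%. A fixed count treats
# very different workloads as if they were the same.
```

Either rule could be defensible; the practical point is that the written SOP should state which one applies, since the two can flag different analysts.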

We do have detailed descriptions of the different print quality levels, but not a clear standard for an exclusion. You have made a very valid point regarding training in exclusions. We assume an understanding of them, but I don't recall any specific training in them. I am not a supervisor, but I did attend the ISO training and have been tasked with writing SOPs and rewriting the training manual. Your comments have inspired me to ensure that we have a clear standard for exclusions and provide training on them. Thank you.
Steph
 
Posts: 6
Joined: Fri Feb 20, 2009 4:51 pm

