1. That whole question seems to revolve around how 'training records' are kept at your lab and what the defense asks for. If all of that info is kept in an employee's training file and the defense asks for the training file, then they get it all. If the department tracks errors but not cert tests, then the defense would get what's in the file. There's probably a lot of variability across agencies in what's kept in the file, and variability from case to case depending on what is asked for.
2. Erroneous exclusions are much more common than we thought. "False negative" can be a little vague because it may include erroneous inconclusive or erroneous no-value decisions, which I view as even less serious than erroneous exclusions.
How does an agency determine how many is too many?
Well, that depends on the agency. Our agency has put in place policies to reduce erroneous exclusions. We expect our examiners to make 0-2 a year (but specific numbers aren't written down). All of our examiners combined have never made more than 3 in a calendar year. Depending on the root cause of the errors, we may discuss retraining that examiner on exclusions. However, other agencies that have not taken steps to reduce erroneous exclusions may have to accept more errors. I've heard of labs that expect a combined 20-25 erroneous exclusions each year.

Overall, labs that:
a) do not verify all conclusions and value statements;
b) discourage or forbid inconclusive decisions;
c) discourage or forbid examiner discussions;
d) have no standard for sufficiency to exclude; and
e) compare latents that are of value for exclusion only;
will have more erroneous exclusions. On the plus side, if an agency doesn't verify all exclusions, then they'll never notice how many mistakes they're making. Erroneous exclusions are inevitable. But if an examiner is making the same mistake for the same reasons on a regular basis, then that's too many.
Should it be treated the same (as an erroneous ID), with the analyst pulled off casework and a case review going back several months?
Absolutely not. Erroneous IDs and erroneous exclusions are very different kinds of errors that need to be treated differently. Seeing matching features that aren't there (erroneous ID) is a very serious issue. Not seeing matching features that are there (erroneous exclusion) is an understandable and inevitable part of what we do, given how different two prints from the same finger can look. It's still a serious problem, but not as serious as a bad ID. Erroneous exclusions need to be addressed, but not in the same way as erroneous IDs.
Are false positives and false negatives treated the same when they are caught in the verification process as they would be if they had passed verification and been officially reported?
That probably depends largely on how an agency treats errors, amended reports, corrective action reports (CARs), etc.
If no case review is performed, how can it be determined that no errors were verified?
If you pull someone off casework for a bad exclusion, you will eventually have no one left working cases. Look at the Black Box study: almost every single examiner made an erroneous exclusion. Establish a standard for exclusion, and then let the verifier catch the mistakes. If an examiner makes the same mistake repeatedly, then let management decide when it is appropriate to do a case review and require retraining.
Here are some questions for you... Would it really be retraining? What training has that examiner had in exclusions? Have they been trained in how to avoid erroneous exclusions? Do you have policies that reduce erroneous exclusions? You can't retrain someone in exclusions until you've provided training in exclusions in the first place. If an agency provides training in exclusions, establishes a standard for exclusions, and implements policies to reduce erroneous exclusions, and an examiner still makes too many mistakes, then it's time for retraining.