Boyd Baumgartner wrote: If strict labor intensity/pragmatic philosophy is what drives the organization . . .
Obviously, at some point the quality of the service you provide begins to diminish due to lack of thoroughness, and I think you run into ethical dilemmas. Where is that point? I certainly couldn't pinpoint it, but I'm sure you would agree?
I do so love philosophy and ethics. Immanuel Kant says we must be guided by Duty. Always live up to your Duties and give your all to their performance. In that spirit, you would indeed do complete exclusions with verifications.
But John Stuart Mill says that action is most ethical which provides the greatest good (okay, he actually said happiness) for the greatest number. In that spirit, you would do each case, one method, down and dirty ("one and done," I've heard it referred to), verify idents, and out the door.
W. D. Ross urges a balance. Each issue, considered alone, carries a prima facie duty. But we do not confront the issues in isolation, so neither Kant nor Mill gives a complete picture. Resolving conflicting prima facie duties requires us to balance our duties and responsibilities against one another.
So, to a pure Kantian, we should exclude and verify, but why stop with the top ten candidates? Or the top 20, or the top 100? There is that categorical imperative to exclude, so we should exclude the whole database and have every exclusion verified.
To a pure Mill adherent, you might run one pristine latent, ask for five candidates, give them a cursory glance, and move on to the next case. The "greatest good" would seem to require that no backlog be left unattended and that all cases be pushed through ASAP.
Ross says there needs to be balance. So: exclusions and verifications on named suspects, but no obligation to exclude and verify everyone on a candidate list of some arbitrary length.
Boyd Baumgartner wrote: I guess the two biggest questions I would ask you in response would be:
Are you really using ACE-V if you aren't verifying exclusions?
Is your understanding of scientific methodology strictly positivist in that the aim of science is to positively establish truth, or is it to weed out error with the residual conclusion being the best explanation?
I certainly believe it's the second of the two understandings. If that is the case, you would expect to find more documented error in organizations using 100% ACE-V, with the understanding that erring on the side of transparently weeding out error is the best-case scenario.
Of course we want to weed out error, but 100% freedom from error is not a scientifically achievable goal. Wouldn't you agree?
So what is an acceptable error rate? I would submit that, in the balance of things, doing exclusions and verifications on an entire AFIS candidate list is so ultra-conservative that the cost in time outweighs the benefit in extra identifications made.
Of course, if you work in an agency where there is never any backlog, then by all means, do the exclusions and verifications.
But if you work in an agency with a year or more of backlog, where many cases die when the statute of limitations runs out, then the balance shifts toward the greater good of pushing more cases through. In that environment, the time spent doing exclusions and verifications on all candidates is better spent moving on to the next case.