'Science' says 100% Non Blind Verification

'Science' says 100% Non Blind Verification

Postby Boyd Baumgartner » Wed Jun 13, 2012 2:12 pm

Despite what RS&A says, @biaslovers (sounds like a pizza) will appreciate this article. It argues that you can't simply outthink bias, since other people (read: verifiers) are better at picking up on it than you are, and that if you treat ACE-V as ACE-ACE, you're really just doubling the bias, because the original examiner's potential bias never gets addressed.

Image

#Article

The human brain is a weird old thing. When confronted with a new, uncertain situation, it virtually always abandons careful analysis and instead resorts to a host of mental shortcuts that almost always lead to the wrong answer. Turns out, the smarter you are, the more likely you are to make such mistakes.

A new study, published in the Journal of Personality and Social Psychology, suggests that you can be insanely intelligent and still fall foul of simple problems because of deviations in judgment, which are known as "cognitive biases".

To work all this out, a team of researchers from the University of Toronto gave 482 students a questionnaire of classic bias problems to complete. An example question runs along the lines of:
A bat and ball cost a dollar and ten cents. The bat costs a dollar more than the ball. How much does the ball cost?


If you're rushing, you might blurt out that the ball costs ten cents. It doesn't: it costs five. If you got it wrong, your brain made some shortcuts it thought made sense, but abandoned the math along the way. (If you're sitting there incredulously assuming that anyone getting that wrong is a dumb-ass, hear this: more than 50 percent of students at Harvard, Princeton, and M.I.T. give the incorrect answer.)
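For anyone who wants the arithmetic spelled out, here is a minimal check in Python, purely as an illustration (the code and variable names are mine, not the article's):

```python
# Quick check of the bat-and-ball arithmetic (illustrative sketch only).
# Let ball be the ball's price. The bat costs exactly one dollar more,
# and together they cost $1.10:
#   ball + (ball + 1.00) = 1.10  ->  2 * ball = 0.10  ->  ball = 0.05
ball = (1.10 - 1.00) / 2       # 0.05
bat = ball + 1.00              # 1.05
assert abs((bat + ball) - 1.10) < 1e-9   # the total checks out
assert abs((bat - ball) - 1.00) < 1e-9   # the bat is exactly a dollar more
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")   # ball = $0.05, bat = $1.05
```

The intuitive answer of ten cents fails that second check: a $1.00 bat is only 90 cents more than a 10-cent ball.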

The researchers also measured a phenomenon called "anchoring bias", but what they were really interested in assessing was how the biases correlated with intelligence. So, they interspersed the tests with cognitive measures, like S.A.T. and Need for Cognition Scale questions.

The results are unnerving. Firstly, awareness of bias in one's thinking doesn't help. As the researchers explain: "people who were aware of their own biases were not better able to overcome them." Dammit.

Turns out that intelligence makes things worse, too. Writing in the Journal of Personality and Social Psychology, they explain that "more cognitively sophisticated participants showed larger bias blind spots." In fact, that finding held across many different biases, and individuals who deliberated longer seemed to be even more susceptible to making mistakes. Double dammit.

So what's going on? Why are smart people seemingly so dumb some of the time? Sadly, nobody really knows. The best hypothesis yet suggests that it's tied up with the way we perceive ourselves and others. Basically, the way we process information, so some researchers suggest, makes it far easier for us to spot biases in other people than it is for us to notice ourselves making the exact same mistakes.

As a result, it's not clear whether there's anything that can be done about shaking off the problem. I'd suggest a drink to take the edge off your intelligence, but then, I can't guarantee that would make things any better.
Boyd Baumgartner
 
Posts: 246
Joined: Sat Aug 06, 2005 1:03 pm

Re: 'Science' says 100% Non Blind Verification

Postby George Reis » Thu Jun 14, 2012 6:50 pm

I'm not sure about the study, but I like the photo you posted. The woman on the left is pretty cute.
I can resist anything except temptation - Oscar Wilde
George Reis
 
Posts: 134
Joined: Wed Jul 27, 2005 3:00 pm
Location: Orange County, CA - USA

Re: 'Science' says 100% Non Blind Verification

Postby kevin » Fri Jun 15, 2012 9:40 am

George Reis wrote: The woman on the left is pretty cute.


Nah - the one on the right is way cuter...
kevin
 
Posts: 134
Joined: Thu Dec 01, 2005 5:37 pm
Location: elsewhere

Re: 'Science' says 100% Non Blind Verification

Postby Neville » Sun Jun 17, 2012 3:45 pm

Still can't see that the ball costs 5 cents.
bat = 1.00, ball = .10, therefore 1.00 + .10 = 1.10, or 1.10 - 1.00 = .10.
What happened to the other 5 cents?

Does that make me very bright or really thick? I think they are both cute.
Neville
 
Posts: 304
Joined: Mon Jan 23, 2006 1:44 pm
Location: NEW ZEALAND

Re: 'Science' says 100% Non Blind Verification

Postby Steve Everist » Sun Jun 17, 2012 7:03 pm

Neville wrote: Still can't see that the ball costs 5 cents.
bat = 1.00, ball = .10, therefore 1.00 + .10 = 1.10, or 1.10 - 1.00 = .10.
What happened to the other 5 cents?

Does that make me very bright or really thick? I think they are both cute.


Your bat is only .90 more than your ball; it's supposed to be a dollar more. 1.05 + .05 = 1.10
Steve E.
Steve Everist
Site Admin
 
Posts: 440
Joined: Sun Jul 03, 2005 6:27 pm
Location: Bellevue, WA

Re: 'Science' says 100% Non Blind Verification

Postby C. Coppock » Thu Aug 23, 2012 10:47 am

This is an interesting post. I like the framing issue. It can offer some good insight into how we can frame comparisons to increase accuracy.
If we "rush", we overlook details that are critical. Here the key words are "more than".
Here is some alternate framing of the same problem. I would be interested to see whether Harvard students would have a better success rate with these two revisions:

1. A bat and a ball cost a dollar and ten cents. The bat costs an extra dollar more than the ball. How much does the ball cost?
2. A bat and a ball cost a dollar and ten cents. The bat costs its price plus a dollar more than the ball. How much does the ball cost?

Of course, if we rush our analytical process we are switching to shortcuts, and our science breaks down to old-fashioned "winging it". Some recent research touched on time constraints (rushing) in our comparison process, which is especially relevant to false negatives.

Quantified Assessment of AFIS Contextual Information on Accuracy and Reliability of Subsequent Examiner Conclusions
Author: Itiel Dror, Kasey Wertheim

It is well known that examiners are not always afforded the proper time to apply ACE. This is especially true for AFIS candidate lists. Given the number of false negatives we are seeing in casework, this should be an area of concern. How do we build more QC into our process? How can we still do good science and be rushed at the same time? Where is the sweet spot of efficiency? I suspect it is further out, time-wise, than we realize. However, the trend seems to be higher volumes with less real comparison time. A “more from less” approach, but possibly with the ramification of less from less.
If certain types of comparisons prove statistically more challenging for accuracy, is there a new way to train to that? Essentially, a way to reframe how we analyze specific data sets, such as low-minutiae-count prints and prints with higher levels of distortion.

We don’t want to be stuck with a bat that cost only .90 cents more... or should that really be 90 cents more?
That’s a big difference.
C. Coppock
 
Posts: 50
Joined: Wed Apr 26, 2006 10:48 pm
Location: Mossyrock, Washington

Re: 'Science' says 100% Non Blind Verification

Postby Neville » Thu Aug 23, 2012 2:59 pm

My guess would be that the first question is designed to be misleading, whereas your second question explains the issue. I often see questions quoted from math exams where the writer of the question appears to want to confuse.
I would rather have 90 cents than .90 cents, or should that be .9 of a cent? An example of what I just said, I guess.
Neville
 
Posts: 304
Joined: Mon Jan 23, 2006 1:44 pm
Location: NEW ZEALAND

Re: 'Science' says 100% Non Blind Verification

Postby C. Coppock » Thu Aug 30, 2012 8:27 am

Exactly. How can we better train so examiners see the real questions? The analysis stage is often abbreviated to our disadvantage. In our modern age of “need it yesterday” we should look for ways to ensure our ACE is properly implemented. We need to stay on the race track regardless of our speed. A short-cut is not an advantage in this race.
C. Coppock
 
Posts: 50
Joined: Wed Apr 26, 2006 10:48 pm
Location: Mossyrock, Washington

Re: 'Science' says 100% Non Blind Verification

Postby Cindy Homer » Thu Aug 30, 2012 10:11 am

Standardized training and aptitude tests would be a good start. It would be a glorious day when all latent print examiners in the US had to go to a standardized school taught by well-qualified, state-of-the-art, and rigorously monitored instructors. Each student would be thoroughly vetted for aptitude prior to graduating from the course. And no certificates of completion: each person would have to pass exams and risk being dropped from the program. Each student would complete a prolonged apprenticeship under another examiner trained this way, at the end of which another exam would take place. After another period of time they could apply for certification.

Standardized training, standardized terminology, standardized thought processes, standardized application of ACE-V.

In my fantasy this would be supported by Congress and forced (through funding?) on agencies and organizations whether they liked it or not. Regardless, in the end, it comes down to judges not allowing uncertified examiners to testify in their courtrooms.


Easy enough :shock:
Cindy Homer
 
Posts: 23
Joined: Wed Aug 10, 2005 9:51 am
Location: Augusta, Maine

Re: 'Science' says 100% Non Blind Verification

Postby John Vanderkolk » Fri Aug 31, 2012 7:20 am

Cindy, I have many thoughts on your post. But my processing of those thoughts has been all over the map. Please expand on your concept of 'standardized thought processes.' JohnV
John Vanderkolk
 
Posts: 41
Joined: Tue Feb 28, 2006 9:07 am
Location: Fort Wayne, Indiana

Re: 'Science' says 100% Non Blind Verification

Postby Cindy Homer » Fri Aug 31, 2012 10:29 am

Hmm…standardized thought processes. I should have known you’d get me on that one John. Hope you are well.

Really, where I am coming from when I say that is that I think there needs to be a standardized way of ‘thinking’ about what we are doing and what we SHOULD be doing when we are fulfilling our role as scientists. So, much more structure in how we examine and interpret information. I believe there are too many grey areas in how we Evaluate the information gathered in the Analysis and Comparison. We can see the same information; we can demonstrate it, and that is the objective part of what we do. The grey area comes in when we begin to determine weight and what that information means. I hate leaning on “experience and training.” I know I’ve heard that plenty of times, and I’ve heard it used in lieu of “I can’t really explain it.” In other words: “trust me, I’m a trained scientist,” which to me is B.S. (and I don’t mean Bachelor of Science). If two or more “trained scientists” can’t agree on weight and meaning, then there is something wrong. I believe therein lies the evidence of the absence of a standardized thought process. If we are Analyzing the information in the same way, we should be interpreting the information in the same way, and therefore we should be putting the same weight on the information.
Cindy Homer
 
Posts: 23
Joined: Wed Aug 10, 2005 9:51 am
Location: Augusta, Maine

Re: 'Science' says 100% Non Blind Verification

Postby Ernie Hamm » Fri Aug 31, 2012 1:28 pm

My favorite slant on identification/individualization is an adaptation of an opinion by Associate Supreme Court Justice Potter Stewart in Jacobellis v. Ohio (1964): “…hard to define, but I know it when I see it…”.

I am very certain that Justice Stewart did not render this statement from ‘training and experience’ given the subject of the case under review, but I believe these facets do come into play when establishing an OPINION of origin in comparative examinations.

BTTTP

Ernie
Ernie Hamm
 
Posts: 158
Joined: Sun Jan 22, 2006 12:24 pm
Location: Fleming Island, Florida

Re: 'Science' says 100% Non Blind Verification

Postby Cindy Homer » Wed Sep 05, 2012 11:48 am

I see what you’re saying, and I know we are offering an opinion, and I know that there is experience and training involved in that opinion, but I believe that opinion needs to be based on demonstrable and repeatable evidence and facts. I am concerned about the lack of standardization in that training, in the terms we use, and in the meanings we place on the characteristics we see. We can’t just use “experience and training” as a safety net to fall back on. There needs to be more to our opinions. They need to be repeatable and justifiable.
Cindy Homer
 
Posts: 23
Joined: Wed Aug 10, 2005 9:51 am
Location: Augusta, Maine

