

G o o d   M o r n i n g !
via THE WEEKLY DETAIL
 
Monday, May 3, 2004

The purpose of the Detail is to help keep you informed of the current state of affairs in the latent print community, to provide an avenue to circulate original fingerprint-related articles, and to announce important events as they happen in our field.


Breaking NEWz you can UzE...
compiled by Jon Stimac
 

Appeals Court Rules On Fingerprint Evidence KYW-TV, PA - May 1, 2004 ...a federal appeals court has ruled that there are good grounds to allow fingerprint evidence to continue being used at trial...

Fake Fingerprint Probe   BOSTON HERALD, MA  - April 25, 2004 ...AG's office begins criminal investigation into whether Boston police lab techs used a fake fingerprint to frame a wrongly convicted man...

Fingerprint Brings Arrest in Credit Union Holdup BEND.COM  - April 28, 2004 ...surveillance photos were helpful in alerting the public, but it was a fingerprint found at the scene that led to the arrest of a 20-year-old man...

Technology Allows for Palm Print Evidence WIS-TV, SC  - April 27, 2004 ...in the past investigators couldn't do anything but tuck those palm prints away…

 

Last week, we looked at an excellent ruling out of the Ninth Circuit Appellate Court on Daubert issues.  This week, Craig Coppock brings us a new and original CLPEX.com paper:

___________________________

A DETAILED LOOK AT INDUCTIVE PROCESSES IN FORENSIC SCIENCE
by Craig Coppock

    Recognition is a fundamental inductive cognitive process that we use constantly to understand the world around us.  Induction comprises the inferential processes that expand knowledge in the face of uncertainty. [1]  Without these processes we would cease to function in an organized and meaningful manner, as we would not be able to understand information in context.

    We use our expertise in recognition to constantly assist us with both simple and complex cognitive tasks.  However, even simple recognition tasks, such as recognizing a face, are found to contain a very high degree of complexity.  Fortunately, much of the core of the recognition process is managed by the automated subconscious.  “The great Russian physiologist Nikolai Bernstein (1896-1966) proposed an early solution (to the problem of infinite degrees of complexity).  One of his chief insights was to define the problem of coordinated action, which is analogous to the cognitive process at issue here, as a problem of mastering the many redundant degrees of freedom in a movement; that is, of reducing the number of independent variables to be controlled.  For Bernstein, the large number of potential degrees of freedom precluded the possibility that each is controlled individually at every point in time.” [2] 

    The same can be said of how the brain processes information.  How can recognition of a face or a fingerprint impression be effected within the context of excess and variable information?  Recognition is based on the evaluation of information and informational relationships (linkages) via default hierarchies and relevancy.  This process can inflate to a point where we have sufficient information to make a recognition.  That is, we understand something in context.  Often, we understand its meaning as well.  This inflation is a cascade, or increasing return, of information gathered from smaller, less relevant individual recognition processes.  The key to this recognition concept, as it is applied to forensic science, was illustrated by the German philosopher and mathematician Gottfried Wilhelm Leibniz (1646-1716).  His insight was that all natural objects can be differentiated if examined in sufficient detail.  This “detail” is information about the subject.  Accordingly, sufficient information is needed to ensure effectiveness in an analysis.  The same cognitive processes are used whether you are examining objects to differentiate them or to recognize that they are one and the same (identical).

    The Belgian statistician Adolphe Quetelet (1796-1874) also recognized the value of randomness with his observation that “Nature exhibits an infinite variety of forms.”  The difference between objects is information about the subject’s uniqueness.  This is called the Principle of Individualization.  It is interesting and relevant that the same cognitive induction process used to recognize natural objects is also used in problem solving, wherein information is compared within relative contexts.  Understanding all the applications of the cognitive inductive processes will help scientists better understand specific applications within forensic science.

    Unfortunately, due to the infinite complexities of the induction process, there is no current theory that explains the process in its entirety.  However, there are numerous models that describe many of the minutiae involved.  A theory of induction would have two main goals.  The first is the possibility of prediction.  Prediction is the statement of expected results from information not yet analyzed.  For fingerprint identification, the various levels of detail would be predictable in their comparative spatial relationships between the exemplars and an impression of unknown source.  Secondly, direct comparisons with observations can be made. [3]  The information has to be comparable in some manner in order to have meaning.  The fact that predictions can be made illustrates that the comparisons have merit.  “While formal theory is a powerful tool, it typically encounters formidable barriers in complex domains such as the study of induction.” [4]

    It is argued that “confirmation of a hypothesis is isotropic… meaning that the facts relevant to confirmation of a hypothesis may be drawn from anywhere in the field’s previously confirmed truth and that the degree of confirmation of the hypothesis is sensitive to properties of the whole system.”  “Such holism does indeed pose problems for the philosophy of induction and for the cognitive sciences.”  “It is impossible to put prior restraints on what might turn out to be useful in solving a problem or in making a scientific discovery.” [5]  In relation to the inductive forensic sciences, we now understand the limitation of standardized thresholds.  Artificial thresholds applied to the individualization process limit the usefulness and extent of the information available.

    The testing of a hypothesis of identification is the process of verification.  Here, another qualified examiner re-examines the available information according to established methodology.  The repetition of the verification process is called sequential analysis.  The methodology itself is scientifically designed to allow for repeatable application of the process with the highest accuracy possible.  In general, “scientific laws are general rules.  Scientific ideas are concepts that organize laws and other rules into useful bundles.  Scientific theories are complexes of rules that function together computationally by virtue of the concepts that connect them.  Moreover, these concepts are often of a special sort, making reference to entities and processes not directly observable by scientific instruments of the day.” [6]  Many of the processes involved in cognitive induction, such as recognition, involve mental functions based on relevancy and knowledge.  These aspects are truly difficult to quantify.

    Fortunately, we do not need to know all of the available information in order to understand the probability that our conclusion of recognition is correct.  We will reach a point in our analysis of the matter when we can say we have recognized a particular fact.  We are often blind to the fact that we rely on related experience to understand, or make sense of, the recognition process.  It is also fortunate that when we do not have enough information to complete a specific recognition, we understand that we lack the needed information to complete the task or that we do not understand all the information presented.  Accordingly, both recognition and non-recognition are based on the analysis of available information.

    For example, when we see a friend, we have the ability to recognize them while viewing only a few percent of the total available information.  We see only one side of a person at a time; we do not also need to see their opposite side. [7]  Likewise, we can often make a recognition while viewing only a person’s face, or part of their face.  Accordingly, if we do not get a good look at the person, we are confronted with the possibility of not being able to recognize them due to insufficient information.  Specifically, with regard to the recognition process, it is not necessary to completely understand all the information before proceeding with an analysis.  Early selection is the idea that a stimulus need not be completely perceptually analyzed and encoded as semantic (meaningfully linked) or categorical information before it can be either selected for further processing or rejected as irrelevant. [8]  However, it is thought that the rejected information is not always completely ignored but rather reduced to a lesser role in the cognitive analysis.

    A phenomenon called hysteresis means “that when a system parameter changes direction, the behavior may stay where it was, delaying its return to some previous state.  This is another way of saying that several behavioral states may actually coexist for the same parameter value; which state you see depends on the direction of parameter change.  Where you come from affects where you stay or where you go, just as history would have it.” [9]  With recognition, the information can be continuously variable.  Distortion and variability of information are not exceptions to the rule; they are, in fact, the norm.  Pictorial scenes may be familiar throughout different lighting conditions and seasons, yet much of the information found in a pictorial scene is not distorted.  In many cases we would not even expect to see the exact same scene again.  Our next glance may come from a new perspective, a different hour, and a different season.  Thus, while color, contrast, and perspective may be variable, there remains the possibility of recognition provided sufficient information remains, and that that information falls within acceptable statistical modeling expectations.  This is sufficiency of information.  Accordingly, if we consider an enhancement process such as filtering, we can begin to understand that the process of enhancement only needs to be specific relative to the information that is to be enhanced.  Non-relevant information can be safely discarded if it does not interfere with the value of the information being analyzed.  It is also important to note that specific enhancements need not be exact or standardized.  Again, sufficiency is the key.  There is no scientific basis for exact duplication of enhancement processes.  “Human subjects… can easily adopt a variety of strategies to solve a task.” [10]
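    To make the quoted idea concrete, here is a minimal sketch in Python (added for illustration; the two thresholds and input values are arbitrary, not drawn from the paper) of a system exhibiting hysteresis: for inputs between the thresholds, the output depends on the direction from which the input arrived.

    # Minimal hysteresis illustration: a two-threshold switch.  Between the
    # thresholds the previous state is retained, so two states coexist for the
    # same parameter value, and which one you see depends on the direction of
    # parameter change.
    def make_switch(low=0.3, high=0.7):
        state = {"on": False}
        def step(x):
            if x >= high:
                state["on"] = True      # rising past the upper threshold switches on
            elif x <= low:
                state["on"] = False     # falling past the lower threshold switches off
            return state["on"]          # otherwise the prior state persists (hysteresis)
        return step

    switch = make_switch()
    print([switch(x) for x in (0.0, 0.5, 0.8, 0.5)])   # at 0.5: False, then True
    print([switch(x) for x in (1.0, 0.5, 0.2, 0.5)])   # at 0.5: True, then False

    The same input value (0.5) yields different outputs depending on the history of the parameter, which is the behavior the quoted passage describes.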

    While many forms of cognition are based on analogy [11], recognition furthers this basic thought by adding comparative detail analysis with both the conscious and subconscious mind.  Recognition starts and ends in the brain.  The conscious and subconscious minds work together, utilizing as much data as they can effectively process.  This information is drawn from experience as well as from current evaluations of the environment.  The actual moment of recognition can be described as the moment of positive recognition, or MPR.  MPR is defined as an affirmative decision of identification based on the accumulation of contextually compared and experience-based information that falls within predictable and logical statistical parameters.  The information above and beyond this point of sufficiency is simply additional supporting data that may not be needed for further practical use.  This additional information is not needed to further support the recognition, yet we often make ourselves aware of it.  In the case of friction skin identification, this additional information is often analyzed to some degree to ensure accuracy, whereby additional predictions and comparisons are made of the remaining information, which also remains available for relevant sequential analysis.  See figure 1.
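    As a purely illustrative toy in Python (this is not Coppock’s model; the scores and the sufficiency threshold are invented), the idea of accumulating compared detail until a point of sufficiency is reached can be sketched like this:

    # Toy illustration only: accumulate agreement "scores" from compared details
    # until a sufficiency threshold is reached (a crude stand-in for the MPR),
    # or conclude that the available information is insufficient.
    def moment_of_positive_recognition(detail_scores, sufficiency=5.0):
        accumulated = 0.0
        for count, score in enumerate(detail_scores, start=1):
            accumulated += score                    # each compared detail adds support
            if accumulated >= sufficiency:
                return f"recognition reached after {count} details"
        return "insufficient information: no recognition"

    print(moment_of_positive_recognition([1.2, 0.8, 2.1, 1.5]))   # sufficiency reached
    print(moment_of_positive_recognition([0.5, 0.4]))             # non-recognition

    Information examined beyond the sufficiency point corresponds to the additional supporting data the paragraph describes.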



    The brain is an amazing organ.  It has the capacity to process immense amounts of information simultaneously.  The brain also has the ability to put that information into a usable and understandable context for later reference.  The process of recognition is formed wholly inside the brain, utilizing information deduced from the analysis of the particular issue at hand as well as related information based on past experience.

    The brain has specialized areas that are noted for particular specialties.  “The use of triggering conditions obviates random search through the inexhaustible space of possible hypotheses.”  “Triggering conditions activate the inductive inference process, allowing problem solving on that information which is different, or non-expected.  Thus, a cognitive process can direct its inductions according to its current problem situation, generating rules that are likely to be useful to it at the moment and hence possibly useful in the future as well.” [12]

    "Attention is a cognitive brain mechanism that enables one to process relevant inputs, thoughts, or actions while ignoring irrelevant or distracting ones…” Fingerprint identification, and other forms of formal recognition, use voluntary (endogenous) attention, whereas, an examiners attention is purposefully directed.  This is in contrast to reflexive (exogenous) attention in which a sensory event captures our attention. [13] 

    Interestingly and importantly, the neuroscientific study of attention has three main goals, one of which is relevant here.  This goal is to understand how attention enables and influences the detection, perception, and encoding of stimulus events, as well as the generation of actions based on the stimuli. [14]  The concepts relevant to many forensic analysis processes also include spatial attention.  Spatial attention is endogenous attention relating to some location(s) while ignoring or devaluing others. [15]  With the task of recognition, certain parts of the brain become very active.  This increased and localized activity can be studied and monitored.  One researcher excitedly stated:  “Your whole hippocampus is screaming!”  [Much activity was noted in] a structure adjacent to the hippocampus known as the fusiform gyrus; this, too, was not a surprise... Recent research on face recognition has identified this as the key area in the brain for the specialized task of perceiving faces.  What was a surprise was that the most excited sector in the brain as it viewed familiar faces was, once again, the “story telling area.” [16]  On the other hand, researcher Jim Haxby has found that in addition to localized activity, object recognition may actually rely on multiple areas of the brain. [16a]  But how does recognition work?  Did Einstein or Newton have an enlarged or well-exercised fusiform gyrus?

    We all remember the story of Newton and the falling apple.  Albert Einstein imagined what it would be like to ride on a light wave and recognized that the speed of light is invariant: if you shine a light out of a moving train, the speeds do not simply add; 60 mph + c = c.  Prior to that, in 1858, Alfred R. Wallace, while sweltering with fever in the Moluccas, wrote that “there suddenly flashed upon me the idea of the survival of the fittest...then, considering the variation continually occurring in every fresh generation of animals or plants, and the changes of climate, of food, of enemies always in progress, the whole method of specific modification became clear to me....” [17]  This, in turn, fueled Charles Darwin's fire on evolutionary theory.
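    As a worked aside (added here for illustration, not part of the original text), the relativistic velocity-addition formula shows why the train's 60 mph does not add to the speed of the light shining from it:

\[
w = \frac{u + v}{1 + uv/c^{2}}, \qquad \text{so with } v = c:\quad w = \frac{u + c}{1 + u/c} = \frac{c\,(u + c)}{c + u} = c .
\]

    For everyday speeds, where u is tiny compared with c, the denominator is effectively 1, which is why ordinary velocities appear simply to add.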

    In about 250 B.C., Archimedes was pondering a problem posed by the Greek king Hieron II: measuring the content of gold in a crown.  He knew that copper has a density of about 8.92 g/cm³ and gold roughly double that, in the equivalent measures of Archimedes’ day.  Archimedes thought there must be a solution to the problem, even though the mathematics of the time did not allow for such complex calculations.  Regardless, Archimedes took his thoughts to a public bath for some relaxation.  The bath ran over its edges as Archimedes displaced the water.  “And as he ran (naked), Archimedes shouted over and over, “I’ve got it! I’ve got it!”  Of course, knowing no English, he was compelled to shout it in Greek, so it came out, “Eureka! Eureka!” [18]  The MPR had been reached.
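    A brief worked sketch (illustrative modern figures, not from the original text): density is mass per unit volume, so the water a crown displaces reveals whether its volume matches that of pure gold.

\[
\rho = \frac{m}{V}, \qquad V_{\text{pure gold}} = \frac{1000\ \text{g}}{19.3\ \text{g/cm}^{3}} \approx 52\ \text{cm}^{3}, \qquad V_{\text{half gold, half copper}} \approx \frac{500}{19.3} + \frac{500}{8.92} \approx 82\ \text{cm}^{3}.
\]

    A crown that displaces noticeably more water than the pure-gold figure must contain a less dense metal mixed in.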

    The physicist “Roger Penrose looks to ties between the known laws of quantum mechanics and special relativity for clues to human consciousness.  According to Penrose, it will be possible to elucidate consciousness once this linkage has been established and formulated into a new theory…  For Penrose, consciousness, together with other attributes such as inspiration, insight, and originality, is non-algorithmic and hence non-computable.”  When one “experiences an “aha” (recognition) insight, this cannot, he thinks, be due to some complicated computation, but rather to direct contact with Plato’s world of mathematical concepts.” [19]

    Some aspects of recognition can be studied by focusing on particular aspects of the cognitive process.  “Visual perception plays a crucial role in mental functioning.  How we see helps determine what we think and what we do.  ...Denis G. Pelli, a professor of psychology and neural science at New York University, has had the happy idea of enlisting the visual arts.”  Study in the area of the visual arts has “disproved the popular assumption that shape perception is size-independent.” [20]  Of course, this too is relative.  When viewed at particular extremes, shape is size-dependent from a human perspective.  Aristotle noted that shape perception could be independent of size only for sizes that are neither so huge as to exceed our visual field, nor so tiny as to exceed our visual acuity. [20]

    Size and shape are forms of information.  Thus, we can assume that all information, including that used for recognition, is also relative.  Other extremes are also noticed, wherein too much non-correlated information cannot be processed effectively and too little information will not yield sufficient relationships for a useful comparative analysis.  In such cases, recognition cannot be supported.  The less information that is available for the recognition process, the more time and cognitive effort must be applied to the evaluation of the information.  Eventually, a point will be reached where, relative to one’s ability, further analysis will not result in a positive recognition.  This is why “hindsight is 20/20.”  After the fact, more contextual information is usually present, making relationships of relevant information more distinct… and obvious.

    The actual point of recognition (MPR) is not definable in everyday terms.  Sufficiency cannot be clinically quantified as to what is actually taking place and when it is taking place.  MPR is practically a statistically exempt process due to the infinite number of variables and overt complexities.  All we can hope for is a rough statistical model.  This is why there is no current theory of the induction process.  Unlike DNA analysis, with its firm grasp on statistical modeling, most of the forensic sciences, ultimately based on the recognition process, are too complex and variable to explain with precise numbers.

    An old analogy for our inability to find exactness in most of nature has been described using a bow, an arrow, and a target.  Imagine an arrow shot at a target.  When will the arrow hit the target?  Can we exactly predict when the arrow will impact?  Even if we know the speed and the distance, we must set aside the objection that the arrow always has half of the remaining distance still to cover.  If it is always covering another half, when do we run out of halves?  You can get a neat answer by using the speed-distance-time formula.  Then we must ask ourselves which ruler we will use, how accurate that ruler is, and, of course, how accurate the person doing all the measuring is.  Thus, even the simple task of measuring the time of impact for an arrow at its target can be frustratingly complex unless we are allowed to make some real-world assumptions for the purpose of simplification.  Classical mechanics allows us to round our numbers.  “Predictions are nice, if you can make them.  But the essence of science lies in explanation, laying bare the fundamental mechanisms of nature.” [21]  Most of these explanations arrive in the form of scientific models, according to the scientist John von Neumann. [22]
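    As a short worked aside (not in the original), the “half the remaining distance” worry dissolves because the halves form a convergent geometric series, while the classical formula gives the impact time directly:

\[
d\left(\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots\right) = d \sum_{n=1}^{\infty} \frac{1}{2^{n}} = d, \qquad t = \frac{d}{v}.
\]

    The infinitely many half-steps add up to a finite distance and a finite time; the practical uncertainty lies in the rulers, clocks, and the person doing the measuring, as noted above.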

    We understand that, at some point, our ability to measure things exactly fails no matter what type of measuring device we use (a meter, a second, or a statistic).  This failure is due to the loose tolerances required to make everyday issues solvable in a practical sense.  Accordingly, we must accept a certain amount of tolerance in the answers to our questions.

    Recognition also follows this path, as we are not given the opportunity to exactly define the MPR.  This is due to the variables in the information and the manner in which that information is analyzed.  There is no particular order in which a person must analyze information during the recognition process.  Some paths of information correlation may offer the MPR sooner than others.  This includes the analysis of the distortion of information as well.  Distortion is always present to some degree, whether we are recognizing our cat or analyzing a latent fingerprint.  Omnipresent distortion is a law of nature, yet we seem to deal with this aspect just fine.  The main reason is that we always receive information differently.  We are used to understanding similar information presented in a variety of ways.  We understand that each time we evaluate “something,” at least some of the previously available information will be different.  Hence, we are experts at recognizing distortion and variability in the recognition process.  We can often disregard information that falls into extreme categories simply because its influence is often insignificant, or because other sufficient information is present.  Unintelligible information is not used in the recognition process.  If insufficient information is present, then recognition is simply not possible.

    In many cases, as with distortion, it is only when we wish to view the component items of a problem that we actually see them clearly.  It seems that, for the most part, our cognitive world is based on generalizations, analogies, and our ability for recognition.  The boundary of classical reality is a boundary of informational usefulness and practicality.  The rounding of numbers is part of that practicality.  We need to understand the variables in their correct context to understand what recognition is, let alone to make an attempt to measure it.  Within the forensic sciences, we must understand the limits of information in order to understand its usefulness.  Distortion can also obscure information, thus preventing recognition.  Distortions of friction ridge information can be found in a wide variety of forms.  What is realized is that information is found embedded within other information, and distortions of this information may only partially obscure specific details.  However, we are all experts at the recognition process.  We also have considerable experience dealing with various levels of distortion.  Experts who practice specialized forms of recognition, such as friction skin identification, shoe print identification, etc., can also be effective and accurate in that particular recognition process if they are sufficiently skilled in the applicable areas.  Essentially, they must be as comparatively skilled as a parent recognizing their children, for each aspect is a specialized and learned skill that requires considerable experience.  “Special aptitudes for guessing right” reflects the philosopher C. S. Peirce’s belief that constraints on induction are based on innate knowledge. [23]  Of course, knowledge is the product of learned experience.

    “Sometimes the variability of a class of objects is not as immediately salient.  If variability is underestimated, people will be inclined to generalize too strongly from the available evidence.  Similarly, if people do not recognize the role of chance factors in producing their observations, they are also likely to generalize too strongly.” [24]  Distortion of impression evidence is an example of variability.  The distortion must be understood in terms of the degree of underlying information destruction.  Experience again shows its importance.  “…People’s assessment of variability, and hence their propensity to generalize, depend on their prior knowledge and experience.” [25]  “Perhaps the most obvious source of differences in knowledge that would be expected to affect generalizations is the degree of expertise in the relevant domain.  Experts – people who are highly knowledgeable about events in a given domain – could be expected to assess the statistical aspects of events differently from non-experts.  Experts should be more aware of the degree of variability of events in the domain and should be more cognizant of the role played by chance in the production of such events.” [26]

    In the realm of all possibilities, recognition is not always absolute.  However, within the context of the human population, recognition can, in many forms, be absolute in a practical statistical sense.  For example, when properly effected (recognized), friction skin identification is far from such extremes and thus can be valid for practical applications.  The inherent uniqueness of friction skin supports the quantity of information needed for recognition.  Provided that sufficient information is available in an impression of that friction skin, individualization can occur, because the recognition can be supported with prediction and comparison.

    Fundamentally, there is no significant difference between degrees of recognition.  A computer, however, has preset thresholds and lacks comparable experience, which prevents true recognition.  Specifically, this relates to the recognition or individualization of a person.  Computers cannot recognize individuals, whether using fingerprint or facial recognition programs.  Computers simply compare preexisting data to new data without the ability to understand information within an experience-based context, nor can a computer “recognize” distortion.  In fact, computers must rely on artificially set thresholds precisely because they lack these capacities.  Without such a limit, computers would have little value to offer the recognition process due to a high number of false positives.  Computers are effective at sorting large quantities of simplified, known data against other data.  Unfortunately, there is no concept of recognition here.

    Another limitation of a computerized recognition system is that a computer cannot effectively verify its own product.  If the computer is in error, then running the same program may produce the same error, or possibly completely different results if the accessed data (or network routing) has changed before the repetition of the process.  This illustrates the need for human intervention.  With respect to recognition, this is also where experience-based analysis can be beneficial.  The ACE-V methodology used by friction ridge examiners follows the scientific method, complete with verification.  Verification is required following scientific parameters on testing procedures.  The verification step is also necessary for other specialized applications in forensic science.  We still need to verify recognition in order to make that recognition valid in a scientific sense.  While subjectiveness plays a role, it does not necessarily influence the verification or proof of recognition; it mainly affects the investigative aspects.  Regardless, a certain amount of subjectiveness is inevitable in most scientific endeavors and is simply a byproduct of the analysis of complex issues found throughout nature.

    Artificial intelligence is an effort to duplicate experience-based learning via a neural network computer simulation.  However, even the best simulations are rudimentary in comparison to the human equivalent.  Computers can, however, within very tight informational parameters, highlight probable candidates during a database search.  It can even be said that, assuming the parameters are tight enough and the information of high quality, a computer could provisionally identify a person within certain acceptable and practical statistical probabilities.  Friction ridge examiners use these databases to sort through billions of bits of data, yet when the quality and quantity of the information falls below a certain threshold, the usefulness of the computer ceases.  Again, this is due to the simplified comparative analysis model a computer uses.  Humans have the advantage of default hierarchies.  “Default hierarchies are capable of representing both the uniformities and the variability that exist in the environment.  This representation serves to guide the kinds of inductive change that systems are allowed to make in the face of unexpected events.”
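    A minimal sketch in Python (the feature sets, similarity score, and threshold are invented for illustration; no real AFIS works this simply) of the simplified compare-and-rank work the passage attributes to computers:

    # Toy candidate search: score a query feature set against a database with a
    # crude similarity measure, keep only candidates above a fixed threshold,
    # and return them ranked.  Lowering the threshold admits more candidates,
    # and with them more false positives.
    def similarity(query, candidate):
        return len(query & candidate) / len(query)   # fraction of query features matched

    def search(query, database, threshold=0.6):
        scored = [(name, similarity(query, feats)) for name, feats in database.items()]
        hits = [(name, s) for name, s in scored if s >= threshold]
        return sorted(hits, key=lambda pair: pair[1], reverse=True)

    database = {
        "record_A": {"ridge_ending_1", "bifurcation_3", "delta_1"},
        "record_B": {"ridge_ending_1", "bifurcation_3", "core_1", "bifurcation_7"},
        "record_C": {"whorl_2"},
    }
    print(search({"ridge_ending_1", "bifurcation_3", "core_1"}, database))

    The program only scores and sorts known data against other data; nothing in it understands context or distortion, which is the limitation the paragraph describes.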

    An example of this can be illustrated with a simple form of recognition.  Imagine that you recognize your cat sitting on the windowsill of your home.  Without realizing it, the myriad of information about that cat is automatically evaluated by you in the normal process of recognition.  How much information do you need to evaluate before you know that the cat is, indeed, yours?  We find that with the average “everyday” form of recognition, a considerable quantity of the information is processed automatically by our subconscious.  The rest of the available information is ignored; it is simply not needed.  Much of the information is contextual.  If we take away just one part of the information, such as “location,” we may run into considerable difficulty with our recognition process.  This is because much of the information we process is embedded in relative informational relationships.

    As an example, consider that the same cat is now located halfway around the world.  If someone shows you a recent photograph of your cat on the island of Sumatra, you cannot easily recognize the cat as yours, unless you know there was some probability of your cat arriving there on vacation without you!  There is a significant amount of embedded information related to the simple aspect of “location,” including associated probabilities.  This illustrates some of the complexity of the recognition process.  It does not mean that recognition cannot be made; it simply elucidates that specific information may need to be present to reach an MPR.  One must also consider that other, less relevant supporting recognitions will have to be made before the main issue of recognition is resolved.  If these less relevant, sustaining recognition processes cannot be supported with information, then it follows that the main issue is threatened with the possibility of non-recognition, at least until further research can be made.  “Inherent in the notion of a default hierarchy is a representation of the uncertainty that exists for any system that operates in a world having a realistic degree of complexity.” [27]

    Failing to recognize the cat, or some other item, is not an error until such time as the recognition has been proven positive, such as by detailed analysis, and the individual still does not recognize it.  This is because the supporting information may be interpreted (valued) in different ways.  This is why forensic examiners must be trained to competency.  When that interpretation has been proven incorrect, an error has taken place.  This contrasts with a false positive.  A false positive would be where the person recognized a different cat as being their cat.  This would be due to an error in the interpretation of the information itself, not in its value.  However, this does not mean the information was incorrect or distorted beyond usefulness.  This again speaks to the fact that the investigative phase of the recognition process is subjective and relies on experience-based knowledge.  This subjectiveness is due to the fact that much of the recognition process involves a mental investigation of the informational facts.  This can be summed up with the statement:  Investigation is an art.  It is when the comparative analysis deals with practical proofs relating to the evidence that the recognition process completes its evolution into a science.  This proof could take the form of DNA analysis or of other practical and statistically modeled forms of detail analysis.  The proof of recognition deals with the product itself rather than the process by which the recognition was made.  Yet even the process itself can be fine-tuned to allow for rigorous evaluation of information.  By structuring the analysis process in alignment with the scientific method, accuracy in the interpretation of the information at issue is improved.

    The recognition process experiences phase transitions as information is evaluated at different levels of specificity.  These levels are informational relationships.  These relationships also include the many related, yet minor, supporting recognitions.  Phase transitions are a common feature of complex physical systems, as components of a system are influenced variably at different levels.  The physicist Phil Anderson explained the trend in a 1972 paper:  “At each level of complexity, entirely new properties appear.  And at each stage, entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one.  Psychology is not applied biology, nor is biology applied chemistry.” [28]

    We see a similar effect with recognition as the process transforms from general, experience-based information correlation, to minor supporting recognitions, and then on to a main-issue MPR.  The process can then transform further through verification.  Each phase of the process must be treated differently, as each phase of the process is, indeed, different.  In essence, these phase transitions are the complexity. [29]  “The theory of self-organized pattern formation in nonequilibrium systems also puts creativity and “aha” (recognition) experiences in a new light.  The breakup of old ideas and the sudden creation of something new is hypothesized to take the form of a phase transition.”  “The brain here is… fundamentally a pattern-forming self-organized system governed by nonlinear dynamical laws.” [30]

    Eventually, the main question relating to any scientific inquiry is asked of the recognition process:  Can recognition be proven?  Many forms of recognition can be proven positive through verification that allows prediction and comparison of details.  This can often be achieved even though embedded information is often difficult to separate and quantify.  These interrelated packets of information are processed by the brain in a comparative and contextual fashion.  The hippocampus and the fusiform gyrus are doing their work.  Proof that this work or process has proceeded accurately comes from the study of the uniqueness, as well as other supporting data, as it relates to the recognition process.  This, of course, depends on the details of the recognition itself, the analysis of the information.  If the recognition is of a solution to a problem, then the data must be supported accordingly; predictions and comparisons would be possible.  If the recognition is of an individual item, person, or place, then the data relevant to that issue must also be supported.  If the recognition process has sufficient detail available for study, the product of the recognition can possibly be proven.  Again, the key is sufficiency.  Sufficiency is a relative and variable quantity that establishes facts within specific probability models.

    Uniqueness and information availability have laid the foundation for the possibility of recognition.  Recognition relies on the fact that nature is designed and built in a random fashion and that differences in objects and people allow for the separation of information into meaningful and identifiably distinct groups.  This is illustrated in the previously mentioned work by Leibniz and Quetelet.  These groups of information are further compared chronologically and spatially against expected statistical results that are subsequently based on experience.  With friction ridge identification, the scope of recognition is somewhat isolated from the high quantities of extraneous information the average person is used to dealing with on a daily basis.  The boundaries within which the information is being evaluated are more easily understood within the forensic sciences.  Also, the packets or groups of information are more distinct and easier to quantify in practical terms.  This assists with the verification aspect.

    Ultimately, the underlying dilemma of recognition is how such complexity can be quantified for statistical evaluation.  Of course, it cannot be precisely quantified.  Only the simplest of informational subsets can be quantified with any definitiveness.  When dealing with these complex issues, one must concentrate on the effectiveness of the methods as well as the rates of error when the method has been applied.  It would not be correct to cite statistical rates for the method’s individual components.  With the exception of verification, the component parts do not function independently with regard to recognition.  Accordingly, they have no individual rates of error.

    Likewise, recognition is a process with an infinite number of component parts or parameters.  Regarding the process of recognition as it relates to friction ridge identification, these components have been grouped together into related fields:  analysis, comparison, and evaluation.  Regarding parameter dynamics, “the central idea is that understanding at any level of organization starts with the knowledge of basically three things:  the parameters acting on the system… (boundary conditions), the interaction elements themselves…, and the emerging patterns or modes (cooperativities) to which they give rise.” [31]

    Friction ridge identification, as well as the other forensic sciences that utilize the recognition process, does so in a formal, rather than an informal, manner.  This difference is what lends reliability, accuracy, and verification to the process.  A formal form of recognition requires that the information (evidence) be interpreted correctly and in the proper context.  This is achieved via sufficient training, proper methodology, and experience.  This, in part, is the separation between the art and the science of the recognition process.  Verification, or the attempt to prove a recognition, promotes the process to a formal one.  With a formal process of recognition we are entirely cognizant of the process itself.  The process is controlled in a manner that follows the scientific method to allow for the minimization of subjectiveness.  Compare this to an informal form of recognition, and we begin to understand how subjectiveness can be minimized and a recognition proven.

    All forms of recognition require experience, because it is the “experience factor” that most influences the contextual issues in the recognition process.  With the proper faculties in place, the underlying science of the recognition process can be liberated.  With facial recognition, this equates to the difference in accuracy between recognizing your mother and trying to recognize a person whom you have only seen in passing.  With one, you can be certain; with the other, well....... you need more information.  The question is, how does an inductive process work?  One hypothesis of inductive problem-solving is called means-end analysis.  Even though general inductive problem-solving cannot be contained within a simple means-end analysis, this outline offers some interesting insight.  A. Newell outlined this approach in his 1969 publication “Progress in operations research.” [32]  His outline consisted of four steps that, incidentally, closely model dactyloscopy.

    Means-End Analysis

Compare the current state to the goal state and identify differences.

Select an operator relevant to reducing the difference.

Apply the operator if possible.  If it cannot be applied, establish a sub-goal…

Iterate the procedure until all differences have been eliminated or until some failure criterion is exceeded.

    Of course, means-end analysis relies on experience-based decision making.  The interesting point about induction-based analysis is that there is no single correct or set way to perform inductive inferences.  Likewise, each instance or comparison may be approached differently.  Experience plays a major role in how each particular issue is approached.  Accordingly, all we can say is that there are efficient and inefficient approaches to problem solving.  However, even the most efficient method cannot guarantee that the problem can be solved or the latent print identified.
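    A minimal sketch in Python of the four-step loop outlined above (the state representation, the operator list, and the step limit are hypothetical, introduced only to make Newell's outline concrete):

    # Means-end analysis loop: compare current state to the goal, select an
    # operator relevant to reducing a difference, apply it, and iterate until
    # no differences remain or a failure criterion (a step limit) is exceeded.
    def means_end_analysis(current, goal, operators, max_steps=50):
        for _ in range(max_steps):
            differences = goal - current                      # step 1: identify differences
            if not differences:
                return current                                # all differences eliminated
            target = next(iter(differences))
            op = next((op for op in operators if target in op["adds"]), None)   # step 2
            if op is None:
                return None                                   # no operator: a sub-goal is needed
            current = current | op["adds"]                    # step 3: apply the operator
        return None                                           # step 4: failure criterion exceeded

    operators = [
        {"name": "analyze",  "adds": {"features_noted"}},
        {"name": "compare",  "adds": {"correspondence_checked"}},
        {"name": "evaluate", "adds": {"conclusion_formed"}},
    ]
    goal = {"features_noted", "correspondence_checked", "conclusion_formed"}
    print(means_end_analysis(set(), goal, operators))

    The operator names loosely echo the analysis-comparison-evaluation stages mentioned earlier, but they are placeholders, not a model of the ACE-V method itself.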

    Summary:

    Recognition is a complex process that allows us to understand the information we encounter.  The basic process of recognition is similar in both its formal and informal varieties.  The main difference is whether the information is to be verified in some manner.  Verification of recognition requires a scientific analysis of the method as well as the product, even though only the product can ultimately be verified with any precision.  Narrowing the recognition process to a facsimile of the scientific method allows for uniformity in the process, thus minimizing false positives and false negatives.  Different levels of the recognition process must be treated differently.  The informal process can simply and effectively rely on experience, evaluation, and probabilistic contexts alone.  This contrasts with the formal process of recognition, which structures the process in order to reduce subjectiveness, improve accuracy, and provide verification and/or proof of the recognition.

    The recognition process allows for three main results.  These include:

            A.  Recognition

            B.  Non-recognition

            C.  Experience

    Recognition is the identification of information or, at the least, the understanding of it or its value in context.  Non-recognition is the analysis of information without its identification.  Lastly, while not generally a goal of the recognition process, experience, or familiarity, can be considered a byproduct.  This experience byproduct is a key aspect of the process itself.  The simple analysis of information during the recognition process creates experience-based knowledge.  This allows for increasing returns on that information.  Increasing returns form a compounding network of informational facts that is subsequently used in future applications of the process.  However, this positive feedback, or increasing return, allows for both recognition and non-recognition.  The product of the recognition process is dependent on the total information available as well as how well that information is understood.  Experience is necessary to understand information in context.  This “context” is also an understanding of its value and its probability.  The additional accumulation of experience-based memory, including information gathered from the minor, issue-related recognitions and non-recognitions, further promotes efficiency in the process.  Increasing returns on our understanding of information are what drive the recognition process.  These returns are the information understood contextually and meaningfully within certain constraints.  The moment of positive recognition is founded on this relationship.

    “It is worth noting that the issue of constraints arises not only with respect to inductive inference but with respect to deductive inference as well.  Deduction is typically distinguished from induction by the fact that only in the former is the truth of an inference guaranteed by the truth of the premise on which it is based…” [33]  However, researchers should also take a reverse view of the issues of induction.  A more thorough understanding of the recognition process will allow a deductive approach to research in the field of friction skin identification.  A more thorough understanding will also shed needed light on the undesirable phenomenon of confirmation bias.  Traditional approaches have involved inductive processes that describe relevant levels of the recognition process directly related to the verification aspect of recognition.  New research into the ACE-V methodology used in the comparative analysis of friction skin attempts to further the understanding of the recognition process.  Induction is the study of how knowledge is modified through its use. [34]  Experience is that modified knowledge.  “Perception and recognition do not appear to be unitary phenomena but are manifest in many guises.  This problem is one of the core issues in cognitive neuroscience.” [35]

Craig A. Coppock
Forensic Specialist, CLPE
4-27-2004
 

References: 

1.   Holland, J.; Holyoak, K.; Nisbett, R.; Thagard, P. (1986) Induction: Processes of Inference, Learning, and Discovery. MIT Press, Cambridge, p. 1.

2.   Kelso, J. A. Scott (1999) Dynamic Patterns: The Self-Organization of Brain and Behavior. Bradford Books, Cambridge, p. 38.

3.   Holland et al. (1986), p. 347.

4.   Holland et al. (1986), p. 349.

5.   Holland et al. (1986), p. 349.

6.   Holland et al. (1986), p. 322.

7.   Coppock, Craig (2003) Differential Randomness and Individualization.

8.   Gazzaniga, M.; Ivry, R.; Mangun, G. (2002) Cognitive Neuroscience: The Biology of the Mind, 2nd Ed. W.W. Norton & Co., New York, p. 250.

9.   Kelso (1999), p. 21.

10.  Gazzaniga et al. (2002), p. 201.

11.  Holland et al. (1986), p. 10.

12.  Holland et al. (1986), p. 9.

13.  Gazzaniga et al. (2002), p. 246.

14.  Gazzaniga et al. (2002), p. 247.

15.  Gazzaniga et al. (2002), p. 251.

16.  Pelli, Denis G. (2000) Close Encounters: An Artist Shows That Size Affects Shape. The Best American Science Writing 2000, Ecco Press, New York.

16a. Gazzaniga et al. (2002), p. 532.

17.  Wallace, Alfred Russel (1898) The Wonderful Century: Its Successes and Its Failures.

18.  Asimov, Isaac (1974) The Left Hand of the Electron. Dell, New York, p. 190.

19.  Kelso (1999), p. 25.

20.  Pelli, Denis G. (2000) Close Encounters: An Artist Shows That Size Affects Shape. The Best American Science Writing 2000, Ecco Press, New York.

21.  Waldrop, M. Mitchell (1992) Complexity: The Emerging Science at the Edge of Order and Chaos. Penguin Books, New York, p. 39.

22.  Gleick, James (1987) Chaos: Making a New Science. Penguin Books, New York.

23.  Holland et al. (1986), p. 4.

24.  Holland et al. (1986), p. 243.

25.  Holland et al. (1986), p. 249.

26.  Holland et al. (1986), p. 250.

27.  Holland et al. (1986), pp. 19-20.

28.  Waldrop (1992), p. 82.

29.  Waldrop (1992), p. 230.

30.  Kelso (1999), p. 26.

31.  Kelso (1999), p. 18.

32.  Hofstadter, Douglas (2000) Analogy as the Core of Cognition. The Best American Science Writing 2000, Ecco Press, New York.

33.  Holland et al. (1986), p. 4.

34.  Holland et al. (1986), p. 5.

35.  Gazzaniga et al. (2002), p. 193.

_______________________________________________________

To discuss this Detail, the message board is always open: (http://www.clpex.com/phpBB/viewforum.php?f=2)

More formal latent print discussions are available at onin.com: (http://www.onin.com)


_______________________________________________________

FUNNY FINGERPRINT FIND

No FFF's this week
_______________________________________________________


UPDATES ON CLPEX.com


Updated the Detail Archives

_______________________________________________________

Feel free to pass The Detail along to other examiners.  This is a free newsletter FOR latent print examiners, BY latent print examiners. There are no copyrights on The Detail, and the website is open for all to visit.

If you have not yet signed up to receive the Weekly Detail in YOUR e-mail inbox, go ahead and join the list now so you don't miss out!  (To join this free e-mail newsletter, send a blank e-mail to: theweeklydetail-subscribe@topica.email-publisher.com)  Members may unsubscribe at any time.  If you have difficulties with the sign-up process or have been inadvertently removed from the list, e-mail me personally at kaseywertheim@aol.com and I will try to work things out.

Until next Monday morning, don't work too hard or too little.

Have a GREAT week!