Cognitive Psychology Class Notes > Pattern Recognition



Pattern Recognition

sensation: reception of stimulation from the environment and the initial encoding of that stimulation into the nervous system

sensory information = visual, auditory, tactile, olfactory

perception: the process of interpreting and understanding sensory information (Ashcraft, 1994)

uses previous knowledge to interpret what is registered by the senses

'Flori__' is perceived as 'Florida' before we are done sensing the individual letters (top-down processing -- LATER)

More than just simple registering of sensory information...

involves some sort of interpretation of that information

visual agnosia: an inability to recognize visual objects that is neither a function of general intellectual loss nor a loss of basic sensory abilities

apperceptive agnosia (Benson & Greenberg, 1969)

  • soldier who suffered brain damage from carbon monoxide poisoning
  • recognized objects through feel, smell, or sound
  • could not distinguish between a circle and a square
  • could not recognize faces or letters
  • could not copy shapes shown to him
  • could discriminate colors and detect which direction an object was moving
  • was able to register visual information, but could not combine that information into a perceptual experience

associative agnosia (Ratcliff & Newcombe, 1982)

  • able to recognize simple shapes and can copy shapes shown to them
  • but unable to recognize the objects they copy
  • anchor --> 'umbrella' (copies a drawing of an anchor accurately, yet names it an umbrella)
  • apperceptive agnosia --> early perceptual processes are disrupted
  • associative agnosia --> later processes (pattern recog) are disrupted
  • visual perception can be divided into an early phase, in which shapes and objects are extracted from the visual scene, and a later phase, in which the shapes and objects are recognized (Anderson, 1995)

Pattern Recognition

how do we recognize, identify, and categorize information?

the identification of a complex arrangement of sensory stimuli (Matlin)


Four Models of Pattern Recognition

Template Matching Model

Assumption: a retinal image of an object is faithfully transmitted to the brain, and an attempt is made to compare it directly to various stored patterns

compare a stimulus to a large number of literal copies (templates) that are stored in memory in order to find a match against all templates

works well with computers (check-sorting machines: digit characters are designed to be maximally discernible so that the computer makes no mistakes)

does not work well with humans -- too inflexible

(Neisser, 1967)

(demo: 'The Noles beat the Gators' stays readable even in handwriting that matches no stored template exactly)
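The template idea can be sketched in toy Python (everything here -- the 3x3 bit patterns, the two stored letters -- is invented for illustration, not from the notes). The point is the model's inflexibility: anything short of an exact match fails.

```python
# Toy template-matching model: letters stored as literal 3x3 bit-pattern copies.

TEMPLATES = {
    "T": ((1, 1, 1),
          (0, 1, 0),
          (0, 1, 0)),
    "L": ((1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)),
}

def template_match(stimulus):
    """Return the letter whose stored template matches exactly, else None."""
    for letter, template in TEMPLATES.items():
        if stimulus == template:
            return letter
    return None  # no template fits -- the model's inflexibility

# A perfect 'T' is recognized...
print(template_match(((1, 1, 1), (0, 1, 0), (0, 1, 0))))  # T
# ...but shift a single cell (a slightly tilted stroke) and matching fails.
print(template_match(((1, 1, 1), (1, 0, 0), (0, 1, 0))))  # None
```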

Prototype Model

more flexible version of template model - the match does not have to be exact

prototypical 'A'

details are vague, so a stimulus need only approximate the stored prototype
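The contrast with template matching can be sketched the same way (again a toy illustration; the 3x3 'A' pattern and the agreement score are invented): instead of demanding an exact match, score similarity to the prototype and accept a close-enough fit.

```python
# Toy prototype model: recognition by graded similarity, not exact match.

PROTOTYPE_A = ((0, 1, 0),
               (1, 0, 1),
               (1, 1, 1))  # a vague, typical 'A' shape on a 3x3 grid

def similarity(stimulus, prototype):
    """Fraction of cells that agree -- details need not match exactly."""
    cells = [s == p for row_s, row_p in zip(stimulus, prototype)
             for s, p in zip(row_s, row_p)]
    return sum(cells) / len(cells)

# A distorted 'A' still scores high (~0.78), so it can be recognized,
# where a template model would have rejected it outright.
distorted = ((0, 1, 0),
             (1, 1, 1),
             (1, 0, 1))
print(similarity(distorted, PROTOTYPE_A))
```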

Distinctive Features or Feature Analysis Model

Assumption: stimuli consist of combinations of elementary features (e.g., for the alphabet, features may include horizontal lines, vertical lines, diagonals, and curves)

make discriminations based on a small number of characteristics of stimuli

distinctive feature components stored in memory [a mini-template model??]

Psychological Evidence: Gibson (1969)

decide whether or not two letters are different

takes longer to respond to P & R versus G & M

P & R share many critical features
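Gibson's result can be sketched in toy Python (the feature assignments below are simplified guesses for illustration, not Gibson's actual feature set): letters are sets of distinctive features, and the more features two letters share, the slower the 'different' judgment.

```python
# Toy feature-analysis model: letters as sets of distinctive features.
# More shared features --> harder/slower "different" judgments (Gibson, 1969).

FEATURES = {
    "P": {"vertical", "closed_curve"},
    "R": {"vertical", "closed_curve", "diagonal"},
    "G": {"open_curve", "horizontal"},
    "M": {"vertical", "diagonal"},
}

def shared_features(a, b):
    """Count features two letters have in common."""
    return len(FEATURES[a] & FEATURES[b])

print(shared_features("P", "R"))  # 2 shared features: slow to discriminate
print(shared_features("G", "M"))  # 0 shared features: fast to discriminate
```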

Neurological Evidence: Hubel & Wiesel (1962)

microelectrodes in cats' brains (visual cortex)

some neurons respond only to horizontal lines, others to diagonals...

similar evidence in monkeys (Maunsell & Newsome, 1987)

certain feature detectors are 'wired' and help us identify features and simple patterns

what about more complex stimuli (i.e. other than letters)?

missing ingredient?? --> top-down processing

Neisser (1964)

some higher, more thorough cognitive process seems to assist in the basics of pattern recognition (DEMO LATER)

Recognition by Components (Biederman, 1987)

computational approach that combines prototype and feature analysis approaches for object recognition

the view of an object is represented as an arrangement of simple 3-D shapes called geons (abbreviation for "geometric ions")

3 Stages of Object Recognition:

1. Object is segmented into a set of basic subobjects. This reflects the output of early visual processing.

2. Once the object has been segmented into basic subobjects, one can classify the category of each subobject. Biederman argues that there are 36 basic categories of subobjects, or geons. Recognizing a geon involves recognizing the features that define it, where these features describe elements of its generation, such as the shape of its cross-section and the axis along which it is swept. (feature analysis)

3. Having identified the pieces out of which the object is composed and their configuration, one recognizes the object as the pattern composed from these pieces (prototype)
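The three stages can be sketched in toy Python (the geon names, defining features, and two-object inventory below are all invented for illustration; real RBC has 36 geons and far richer features): segmented subobjects are classified as geons by their features, then the geon arrangement is matched against known objects.

```python
# Toy sketch of Biederman's three RBC stages, heavily simplified.

GEON_FEATURES = {
    "cylinder": {"curved_cross_section", "straight_axis"},
    "brick":    {"straight_cross_section", "straight_axis"},
    "arc":      {"curved_cross_section", "curved_axis"},
}

KNOWN_OBJECTS = {
    "mug":       ("cylinder", "arc"),   # body + handle
    "briefcase": ("brick", "arc"),
}

def classify_geon(features):
    """Stage 2: feature analysis picks the geon category."""
    for geon, defining in GEON_FEATURES.items():
        if features == defining:
            return geon
    return None

def recognize(subobject_features):
    """Stages 1-3: (already-segmented) subobjects -> geons -> whole object."""
    geons = tuple(classify_geon(f) for f in subobject_features)
    for name, arrangement in KNOWN_OBJECTS.items():
        if geons == arrangement:
            return name
    return None

mug_parts = [{"curved_cross_section", "straight_axis"},   # body
             {"curved_cross_section", "curved_axis"}]     # handle
print(recognize(mug_parts))  # mug
```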

Psychological Evidence:

Biederman, Beiring, & Blickle (1985)

Ss asked to identify objects

Results (see graph):

    • at short exposure durations (65 and 100 ms) --> Ss made more errors with midsegment deletion stimuli than with component deletion stimuli
    • at longer exposure durations (200 ms) --> no difference
    • therefore, with more time, one can extract information necessary for more top-down processing

Conclusions:

    • with short exposures, Ss were unable to identify the components in the midsegment-deletion condition and therefore had more difficulty recognizing the objects
    • with longer exposures, Ss were able to recognize all the components
    • since there were more components available in the midsegment-deletion condition, Ss had more information to use to identify the object...but it took time

Cave and Kosslyn (1993)

    • stimuli broken into 'natural' and 'unnatural' parts (see Fig. 2.3)
    • all stimuli had long exposure durations

Results:

    • no difference between types of stimuli

Cave and Kosslyn's Conclusion:

    • overall shape is encoded first, then the parts
    • opposite conclusion of Recognition by Components theory

Criticisms:

    • all stimuli have long exposure durations
    • therefore, the recognition of overall shape is encoded with the aid of top-down processing
    • at shorter exposure durations, Recognition by Components theory is supported

Neurological Evidence: (Humphreys & Riddoch, 1987)

John - 'associative agnosia'

unable to combine local parts of objects (geons) correctly into recognizable objects

Top-Down Processing and Pattern Recognition

bottom-up processing or data-driven processing:

  • stimulus information arrives from the sensory receptors (the bottom level of processing)
  • the combination of bottom-level features allows us to recognize more complex, whole patterns
  • feature analysis

top-down processing or conceptually-driven processing:

our knowledge (memory) about how the world is organized helps in identifying patterns

psychologists believe both bottom-up and top-down processing are necessary to explain pattern recognition

DEMO: Neisser (1964) (fig. 3-11, Ashcraft, 1994)

A & B --> more difficult to find a line without a certain letter (B) than with a certain letter (A)

C & D --> more difficult to find a letter among letters with similar features (D) than among letters with dissimilar features (C)

A, B, & D --> primarily bottom-up processing

C --> primarily top-down processing (if it was bottom-up, then it would take just as long to search through all of the features as in A, B, & D)


context and pattern recognition:

THE MAN RAN

  • reading sentence with ambiguous letters
  • knowledge of the world (grammar, words, etc.) helped to identify the ambiguous letter

FOR EXAMPLE, IT'S EASY TO READ THIS SENTENCE

  • in reading, we probably don't identify all of the features in the letters
  • this would require 5,000 feature detections per second
  • instead, we can read most sentences well if only 1/2 of the letters are presented

word superiority effect (Reicher, 1969; Wheeler, 1970)

  • letters identified more rapidly within a word than in a string of unrelated letters
  • brief presentation of D or WORD
  • immediately after, Ss were given a pair of alternatives and asked to report what they saw (D vs. K, or WORD vs. WORK)
  • if Ss saw words, they were more accurate in identification
  • therefore, Ss were more accurate in the context of a word than with letters alone, even though they had to process four letters in the word condition
  • PDP theoretical explanation in Matlin (1998)
  • similar pattern with words alone versus words in sentences (Rueckl & Oden, 1986) --> in Matlin

Facial Recognition

Diamond and Carey (1986)

face recognition is unique in 2 ways:

1. involves within-group discriminations based on different relational properties

first-order relational properties:

all faces have the same basic configuration: eyes are always above the nose

mouth is always below the nose

second-order relational properties:

relationships between features must be used to differentiate between faces

distance between the eyes

2. we are 'experts' in face representation and recognition

we are able to recognize and distinguish between thousands of faces

we tend to take a holistic approach when recognizing faces in that we use our knowledge of faces and the context of a facial arrangement to make discriminations between faces

Tanaka & Farah (1993) --> in Matlin

  • Ss viewed (faces + names) or (houses + names)
  • at test, Ss saw either two whole faces/houses or two parts of two faces/houses (e.g., noses/doors)
  • Ss asked to recognize which of the two faces/houses or parts of faces/houses were seen before

Results:

  • Fig. 2.6
  • when dealing with faces, Ss more accurately recognized faces in the context of a whole face than in the context of parts of faces
  • no difference between whole and parts of houses

Conclusions:

  • face recognition has special status in the perceptual system
  • we process faces in a holistic way when compared to other stimuli

BUT...

what happens when faces are very similar??

photo lineups??

component features or holistic approach???

own-race bias???