
Re: Temporary binding of descriptions in perception



Dear Al, Alain, Pierre,

There is a fundamental difference between the collectivity of
bound features which constitutes a particular object, the
concept of that collectivity of bound features, and the word
used to refer to the collectivity.

The cortex does not have a problem binding the features of
multiple copies of the same object, e.g. a green ball on a red
table next to a red ball on a green table, because the objects
are spatially segregated in the visual cortex and there are
multiple copies of the individual feature detectors (colour,
edges, corners, etc.) distributed throughout the visual cortex.
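
To make the point concrete, here is a minimal toy sketch in
Python (entirely my own illustration, not a cortical model; the
FeatureMap class, the scene layout and the feature labels are
invented for the example). It shows how replicating the
detectors at every location makes binding by co-location
trivial: the features registered at one location form one
object, so the two balls never interfere.

from collections import defaultdict

class FeatureMap:
    """One full set of feature detectors replicated at every location."""
    def __init__(self):
        self.active = defaultdict(set)   # location -> features detected there

    def detect(self, location, features):
        # Detectors at `location` respond only to stimuli at that location.
        self.active[location].update(features)

    def bound_objects(self):
        # An "object" is simply the conjunction of features that are
        # co-active at the same location.
        return {loc: sorted(feats) for loc, feats in self.active.items()}

cortex = FeatureMap()
cortex.detect((0, 0), {"green", "round", "ball"})
cortex.detect((0, 1), {"red", "flat", "table"})
cortex.detect((5, 0), {"red", "round", "ball"})
cortex.detect((5, 1), {"green", "flat", "table"})

for location, obj in cortex.bound_objects().items():
    print(location, obj)
# The two balls (and the two tables) stay distinct because their
# features are registered by spatially separate copies of the detectors.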

The problem arises when we consider the representation of the
concept of the object constituted by the collectivity of bound
features and a word which we humans might use to refer to the
concept. The fact that pongids cannot speak but are able to
achieve a vocabulary of up to 130 signs indicates that it is
possible to have a concept of things and actions without words,
or even without a faculty for words.

In the case of normal humans the evidence appears to be that
visual and auditory representations of a word are processed
independently (Petersen et al., 1988). This implies that there
are at least two representations of the same word.

Further insight into the representation of words comes from
patients with particular kinds of aphasia, such as anomia. Such
aphasias are usually caused by lesions in the posterior speech
zone of the left hemisphere.

Patients with anomia have deficits in (a) synonym judgements,
(b) naming to definition, (c) categorisation and (d) property
judgements. The common area of damage in these patients is the
posterior temporal-inferior parietal (T-IP) region, which points
to single-word semantic processing in the left T-IP region. The
TPO junction includes posterior areas 22 and 21 and areas 39 and
37 (phylogenetically recent areas). A possible function of the
earlier part (22, 21) is higher-level elaboration exclusively
within the auditory and visual domains; a possible function of
the more recent parts (39, 37) is multimodal processing and
integration with language.

However, there appear to be even more detailed dissociations.
For example, certain patients have very specific category
deficits, e.g. difficulty naming fruits or animals, which
suggests that certain lesions may erase very selective portions
of semantic memory.

Some patients have shown a distinct deficit in verbally
accessing visual properties of semantic categories, such as
animals, although they can make such judgements in other,
non-verbal tasks. This dissociation may reflect the relationship
between the inferior temporal region (visual processing) and the
left superior posterior T-IP region (lexical-semantic
processing).

The above implies that in the normal state purely verbal
knowledge is separate from higher-order visual and other
perceptual information, but all these systems are highly
linked. The verbal system contains tags or connections to the
visual and auditory systems. Representations, then, reside in
the connections between the verbal and sensory systems. So long
as the sensory/perceptual systems can segregate and represent
multiple copies of the same object, this separation will be
maintained in the connections to the verbal system.
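
A minimal sketch of what I mean, in my own toy notation (the
WordNode class and the example entries are invented for
illustration, not a proposal for how the cortex implements it):
the word is a bare node, and everything that makes it meaningful
sits in the list of connections to separately maintained sensory
items, one connection per segregated copy.

class WordNode:
    """A word in the verbal system: a bare node plus its connections."""
    def __init__(self, word):
        self.word = word
        self.connections = []            # the representation resides here

    def tag(self, sensory_item):
        # Each segregated sensory/perceptual copy gets its own connection,
        # so the separation achieved in the sensory systems is preserved
        # in the links to the verbal system.
        self.connections.append(sensory_item)

ball = WordNode("ball")
ball.tag(("visual", "location (0, 0)", "green round object"))
ball.tag(("visual", "location (5, 0)", "red round object"))
ball.tag(("auditory", "left voice", "spoken 'ball'"))
ball.tag(("auditory", "right voice", "spoken 'ball'"))

print(ball.word, "->", len(ball.connections), "independent connections")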

So, my guess is that, in response to Al and Dick, the
independent habituation of the shared word at left and right
lies in the independent connections between the segregated
voices and the verbal system. The habituation is not in the
node, but in the connections to the node.
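
Continuing the toy sketch above (the decay constant and class
names are invented purely for illustration), the independent
habituation falls out if each voice-to-word connection keeps its
own strength that decays with use, while the shared word node
itself is left untouched:

class HabituatingConnection:
    """A link from a segregated sensory source to a shared word node."""
    def __init__(self, source, target, decay=0.7):
        self.source, self.target = source, target
        self.strength = 1.0
        self.decay = decay               # invented habituation rate

    def activate(self):
        response = self.strength
        self.strength *= self.decay      # habituation is local to the link
        return response

word_node = "shared word"                # the node itself never changes
left = HabituatingConnection("left voice", word_node)
right = HabituatingConnection("right voice", word_node)

for _ in range(3):                       # present the word on the left only
    left.activate()

print("left strength :", round(left.strength, 3))    # habituated
print("right strength:", round(right.strength, 3))   # still fresh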

Neil