
Re: speech/music



Jont Allen wrote:

>How do you know? I would expect they are connected at least by the
>auditory cortex AC (this is known by now, no? Somebody who knows, help us
>here.) The AC is BEFORE speech is recognized, because there is still
>a tonotopic representation. That would still be in the bottom up realm,
>don't you agree?
>
>> Some
>> information *must* be travelling top-down.  It's expectation driven.
>


Jont, here is an edited extract from the chapter I advertised earlier.

For references on detailed approaches to the neural architecture of
language see, for example, Seldon (1985), Shallice (1988), Eccles (1989)
and Gordon (1990), who marshal the evidence from aphasics to support
a three-level process of speech perception involving: (a) auditory
input feature processing; (b) phonemic identification; and
(c) word recognition, in which recognised sequences of phonemes/features
trigger lexical access, which then allows access to meaning.
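
Purely to make the bottom-up direction of this account concrete, here is
a toy caricature of the three stages in Python (my own illustration, with
an invented feature set and a two-entry lexicon, not a model from any of
the authors above):

  # (a) -> (b) -> (c): a strictly bottom-up pipeline, no top-down feedback
  FEATURE_TO_PHONEME = {                    # (b) phonemic identification
      ("voiced", "bilabial", "stop"): "b",
      ("voiceless", "bilabial", "stop"): "p",
      ("low", "back", "vowel"): "a",
  }
  LEXICON = {                               # (c) word recognition -> meaning
      ("b", "a"): ("ba", "a nonsense syllable"),
      ("p", "a"): ("pa", "a nonsense syllable"),
  }

  def extract_features(acoustic_frame):
      # (a) auditory input feature processing; a stub standing in for the
      # spectro-temporal analysis that would happen on a real waveform
      return acoustic_frame

  def perceive(frames):
      phonemes = tuple(FEATURE_TO_PHONEME[extract_features(f)] for f in frames)
      word, meaning = LEXICON.get(phonemes, (None, None))
      return phonemes, word, meaning

  print(perceive([("voiced", "bilabial", "stop"), ("low", "back", "vowel")]))
  # -> (('b', 'a'), 'ba', 'a nonsense syllable')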

 Neurological evidence for phonemic processing comes from patients,
not with Wernicke's aphasia, but with pure word deafness. In this
condition patients have intact hearing, and speech is appreciated as
sound, but it cannot be understood as language, though music and
environmental sounds are perceived normally, and writing and reading
are intact. This implies that pure word deafness is an inability to
decode speech sounds, particularly consonants. This selective deficit
appears to have two causes: a prephonemic inability to detect rapid
acoustic changes, and a disorder of phonemic processing. The first
seems to be associated with lesions around the primary auditory cortex
in the left hemisphere (though vowel perception may be accomplished
by both hemispheres). The second may have its basis in a disrupted
phonemic lexicon, possibly caused by lesions to the left posterior
superior temporal lobe extending to the angular gyrus.

Several sources of evidence support a separate third (semantic) stage.
Transcortical aphasias and anomias imply that speech input can be
decoupled from semantic understanding. A common area of damage in
these patients is the posterior temporal-inferior parietal (T-IP) region,
implying that single-word semantic processing takes place in the
left T-IP region. Finer dissociations of this sort may be taken
to imply further that, in the normal state, purely verbal knowledge
is separate from higher-order visual and other perceptual information
(though all the systems involved are highly linked).

On the issue of the McGurk effect: there are three principal cues
for lip reading, namely lip modulation rate, maximum lip velocity and
maximum lip amplitude, all of which can be extracted from visual image
flow, which is coded in the temporal lobe (V5). Recent blood flow
studies of silent lip reading (Calvert et al., 1997) provide evidence
that "silent lip reading activates auditory cortical sites also engaged
during the perception of heard speech. In addition it appears that
auditory cortex may be similarly activated by visible pseudospeech but
not by nonlinguistic closed-mouth movements. This adds physiological
support to the psychological evidence that lipreading modulates the
perception of auditory speech at a prelexical level and most likely
at the stage of phonetic classification".
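
For what it is worth, here is a toy sketch of how the three cues might be
computed once image-flow analysis has delivered a lip-aperture time series
(the signal is synthetic and the cue definitions are my own rough
operationalisations, not Calvert et al.'s or anyone else's):

  import numpy as np

  fs = 50.0                                  # assumed video frame rate (Hz)
  t = np.arange(0, 2.0, 1.0 / fs)
  aperture = 0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)   # fake 4 Hz lip opening

  max_amplitude = aperture.max() - aperture.min()    # maximum lip amplitude
  velocity = np.gradient(aperture, 1.0 / fs)
  max_velocity = np.abs(velocity).max()              # maximum lip velocity

  # lip modulation rate: dominant frequency of the (mean-removed) aperture
  spectrum = np.abs(np.fft.rfft(aperture - aperture.mean()))
  freqs = np.fft.rfftfreq(len(aperture), 1.0 / fs)
  modulation_rate = freqs[spectrum.argmax()]

  print(max_amplitude, max_velocity, modulation_rate)   # ~1.0, ~12.6, 4.0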


Calvert et al. (1997) Activation of auditory cortex during silent
lipreading. Science, 276, 593-596.

Eccles, J. (1989) Evolution of the brain: Creation of the self.
Routledge: London.

Gordon, B. (1990) Human language. In R. Kesner and D. Olton (Eds.)
Neurobiology of Comparative Cognition. Lawrence Erlbaum: New Jersey.
pp. 21-49.

Seldon, H.L. (1985) The anatomy of speech perception: Human auditory
cortex. In E. Jones and A. Peters (Eds.) Cerebral Cortex. Vol. 4:
Association and auditory cortices. Plenum: New York. pp. 273-327.

Shallice, T. (1988) From neuropsychology to mental structure.
CUP: New York.


Neil