Re: speech/music
At 10:22 AM 4/1/98 +0000, Neil Todd wrote:
>For references on detailed approaches to the neural architecture of
>language, see for example Seldon (1985), Shallice (1988), Eccles (1989)
>and Gordon (1990), who marshal the evidence from aphasics to support
>a three-level process of speech perception involving: (a) auditory
>input feature processing; (b) phonemic identification; and
>(c) word recognition, in which recognised sequences of phonemes/features
>trigger lexical access, which then allows access to meaning.
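>
>To make the staged view concrete, here is a purely illustrative toy
>sketch of such a pipeline (the feature and phoneme mappings and the
>one-entry lexicon are invented placeholders, not drawn from the works
>cited above):
>
>from typing import List, Optional
>
>def extract_features(samples: List[float]) -> List[str]:
>    # (a) Auditory input feature processing: raw signal -> coarse features.
>    return ["rise" if s > 0 else "fall" for s in samples]
>
>def identify_phonemes(features: List[str]) -> List[str]:
>    # (b) Phonemic identification: features -> phoneme labels.
>    return ["b" if f == "rise" else "a" for f in features]
>
>LEXICON = {("b", "a"): "ba"}  # toy phoneme-sequence -> word table
>
>def recognize_word(phonemes: List[str]) -> Optional[str]:
>    # (c) Word recognition: a recognised phoneme sequence triggers
>    # lexical access, the gateway to meaning.
>    return LEXICON.get(tuple(phonemes))
>
>print(recognize_word(identify_phonemes(extract_features([0.9, -0.4]))))  # "ba"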
>
> Neurological evidence for phonemic processing comes from patients,
>not with Wernicke's aphasia, but with pure word deafness. In this
>condition patients have intact hearing, and speech is appreciated as
>sound but cannot be understood as language, even though the perception
>of music and environmental sounds is intact, as are writing and
>reading. This implies that pure word deafness is an inability to
>decode speech sounds, particularly consonants. This selective deficit
>appears to have two causes: a prephonemic inability to detect rapid
>acoustic changes, and a disorder of phonemic processing itself. The
>first seems to be associated with lesions around the primary auditory
>cortex in the left hemisphere (though vowel perception may be
>accomplished by both hemispheres). The second may have its basis in a
>disrupted phonemic lexicon, possibly caused by lesions to the left
>posterior superior temporal lobe extending to the angular gyrus.
>
>Several sources of evidence support a separate third (semantic) stage.
>Transcortical aphasias and anomias imply that speech input can be
>decoupled from semantic understanding. A common area of damage in
>these patients is the posterior temporal-inferior parietal (T-IP)
>region, implying that single-word semantic processing takes place in
>the left T-IP region. Finer dissociations of this sort may further
>imply that, in the normal state, purely verbal knowledge is separate
>from higher-order visual and other perceptual information (though all
>the systems involved are highly interconnected).
>
>On the issue of the McGurk effect: there are three principal cues for
>lip reading (lip modulation rate, maximum lip velocity and maximum lip
>amplitude), all of which can be extracted from visual image flow,
>which is coded in the temporal lobe (V5). Recent blood-flow studies of
>silent lip reading (Calvert et al., 1997) report that "silent lip
>reading activates auditory cortical sites also engaged during the
>perception of heard speech. In addition it appears that auditory
>cortex may be similarly activated by visible pseudospeech but not by
>nonlinguistic closed-mouth movements. This adds physiological support
>to the psychological evidence that lipreading modulates the perception
>of auditory speech at a prelexical level and most likely at the stage
>of phonetic classification".
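>
>As a minimal sketch (an invented example, not from Calvert et al.),
>the three cues named above could be computed from a lip-aperture time
>series like this, assuming the aperture trace has already been derived
>upstream from the image flow:
>
>import numpy as np
>
>def lip_cues(aperture, fps):
>    # Maximum lip amplitude: peak-to-trough excursion of the trace.
>    amp_max = aperture.max() - aperture.min()
>    # Maximum lip velocity: largest frame-to-frame change, in units/s.
>    vel_max = np.abs(np.diff(aperture)).max() * fps
>    # Lip modulation rate: dominant frequency of the demeaned trace.
>    spectrum = np.abs(np.fft.rfft(aperture - aperture.mean()))
>    freqs = np.fft.rfftfreq(aperture.size, d=1.0 / fps)
>    return freqs[spectrum.argmax()], vel_max, amp_max
>
># Invented input: a 4 Hz open/close cycle sampled at 30 frames/s.
>t = np.arange(0, 2.0, 1.0 / 30.0)
>print(lip_cues(0.5 + 0.5 * np.sin(2 * np.pi * 4 * t), fps=30.0))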
>
>
>Calvert et al. (1997). Activation of auditory cortex during silent
>lipreading. Science, 276, 593-596.
>
>Eccles, J. (1989). Evolution of the brain: Creation of the self.
>Routledge: London.
>
>Gordon, B. (1990). Human language. In R. Kesner and D. Olton (Eds.),
>Neurobiology of Comparative Cognition. Lawrence Erlbaum: New Jersey.
>pp. 21-49.
>
>Seldon, H.L. (1985). The anatomy of speech perception: Human auditory
>cortex. In E. Jones and A. Peters (Eds.), Cerebral Cortex, Vol. 4:
>Association and Auditory Cortices. Plenum: New York. pp. 273-327.
>
>Shallice, T. (1988). From neuropsychology to mental structure.
>Cambridge University Press: New York.
>
Well, reading Neil's note could make a mortal think that his cotton-dry,
all-black-or-white conclusions are 100% supported by studies of
aphasics. However, not even Geschwind (who was able to put forward a
brilliant theoretical account of **any** particular patient's
linguistic deficits, tying them to a unique lesion pattern; how sad
that he is no longer around!) would have supported the simplistic
structuralist model of processing stages Neil appears to be pushing.
Maybe his chapter is more nuanced than this summary...
Contemporary behavioral neurologists and clinical neuroanatomists
increasingly guard against giving undue emphasis to lesions, for the
simple reason that, even if a stroke-, trauma-, or tumor-based lesion
can be precisely circumscribed, its secondary and tertiary effects
remain basically unknown. The future seems to lie in fMRI studies; the
few speech results I have seen or heard of so far (mind you, there are
enormous technical problems associated with auditory fMRI experiments!)
tell stories markedly different from what Neil implies: there is
activity at cortical sites often far away from the areas classically
identified as speech and language processors. PET studies, although
lacking the time-locking capability of fMRI, convey essentially the
same message.
Pierre
****************************************************************************
Pierre Divenyi Experimental Audiology Research (151)
V.A. Medical Center, Martinez, CA 94553, USA
Phone: (510) 370-6745; Fax: (510) 228-5738
E-mail : PDivenyi@ucdavis.edu
****************************************************************************