Re: pitch neurons (1)
(I sent this out to the list on Monday, but it bounced because
I sent it from the wrong email account. I have a second comment
that I wrote late this afternoon that I will also send out.)
Eli wrote:
I would like, however, to raise another issue: why is there such
pressure to assume a low-level (i.e., subcortical) representation of
pitch? After all, pitch is a pretty high-level property of sounds: it is
invariant to many features of the physical structure of the sounds, and
it is affected by all kinds of high-level cognitive effects (e.g.,
capture of harmonics in streaming conditions). All of this would suggest
to me that whatever is responsible for the pure representation of pitch
(independent of the physical structure of the sounds) is high-level
rather than low-level.
------------------------------------
Hi Eli, everyone
I really wonder about the assumed linkage between low-level and
high-level perceptual properties on one hand and low-level (early,
peripheral) versus high-level (later, central) processing on the other.
Certainly pitch is affected by how the auditory system forms stable
objects/streams, but object formation itself need not be an exclusively
cortical operation per se. What evidence leads inescapably to that
conclusion?
There are a number of demonstrations (by Kubovy and others) that, while
low-frequency hearing is largely insensitive to the static phase
spectrum, transient changes in the relative phases of components can
lead to perceptual pop-outs/stream separations. Since phase-locked
information is most abundant in the more peripheral stations, it may be
the case that primitive auditory grouping occurs lower down in the
system than we might suppose. The cortex might then be playing the role
of organizing processing in the lower centers rather than serving as the
locus of the representations themselves.
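For anyone who wants to try this on their own ears, here is a minimal
Python sketch of the kind of stimulus involved: a harmonic complex in
which one component's phase is abruptly shifted partway through. All of
the parameters (f0, harmonic number, shift size) are illustrative
choices of mine, not taken from Kubovy's actual stimuli.

    import numpy as np

    def harmonic_complex(f0, n_harmonics, dur, fs,
                         shifted_harmonic=None, shift_time=None, shift=np.pi):
        # Build a harmonic complex; optionally give one component an
        # abrupt phase shift at shift_time (seconds).
        t = np.arange(int(dur * fs)) / fs
        x = np.zeros_like(t)
        for k in range(1, n_harmonics + 1):
            phase = np.zeros_like(t)
            if k == shifted_harmonic and shift_time is not None:
                phase[t >= shift_time] = shift  # transient phase change
            x += np.sin(2 * np.pi * k * f0 * t + phase)
        return x / n_harmonics

    # A 200-Hz complex whose 4th harmonic jumps by half a cycle at
    # t = 0.5 s. The long-term power spectrum is unchanged, yet the
    # shifted component can pop out perceptually around the transition.
    fs = 44100
    x = harmonic_complex(200.0, 10, 1.0, fs, shifted_harmonic=4, shift_time=0.5)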
We understand so little about the detailed workings of the auditory
system at and above the midbrain. It's really too early to say that
pitch must be processed here or there, especially since no convincing
model for the central representation and/or processing of pitch has been
proposed. Even the representation of pure tones at these levels has many
of the same problematic aspects that are apparent at the level of the
auditory nerve (invariance of percepts over a large range of stimulus
intensities; the disconnect between neural tuning and frequency
discrimination as a function of frequency).
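To make the intensity problem concrete, here is a toy Python model
(entirely my own illustrative assumptions, not data from any study): a
bank of tuned units with saturating rate-level functions yields a
rate-place profile that broadens and flattens as level rises, even
though the frequency percept stays put.

    import numpy as np

    def rate_profile(cfs, f_stim, level_db, q=4.0, thresh_db=20.0, sat_db=50.0):
        # Toy rate-place profile: triangular filters on a log-frequency
        # axis plus a rate-level function that saturates at sat_db.
        atten_db = 160.0 * np.abs(np.log2(cfs / f_stim)) * (q / 4.0)
        drive_db = level_db - atten_db
        return np.clip((drive_db - thresh_db) / (sat_db - thresh_db), 0.0, 1.0)

    cfs = np.logspace(np.log10(200.0), np.log10(5000.0), 200)
    low = rate_profile(cfs, 1000.0, 50.0)   # moderate level: narrow peak
    high = rate_profile(cfs, 1000.0, 90.0)  # high level: broad, flat-topped
    print("CFs driven to >90% rate:", int((low > 0.9).sum()), "at 50 dB vs",
          int((high > 0.9).sum()), "at 90 dB")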
We have yet to find an abundance of units that look like real pitch
detectors anywhere in the system (there are reports of a few units here
and there, such as Riquimaroux's 16 F0 units, Langner's recently found
units, or high-BF units with multipeaked tuning curves at harmonics 1
and 2). I think this means either 1) that the central representation is
"covert" and sparsely distributed across many neurons (in either a
spatial or a temporal code), or 2) that we are thinking the wrong way
about the whole problem, and that the representation is lower down,
albeit controlled and accessed by the auditory cortex.
The first possibility suggests a mechanism based on "mass action" -- the
more units in a given frequency region, the better the coding (as in
Recanzone's study a decade ago) -- as opposed to a local feature model
in which specialist detectors are highly tuned to one
frequency/periodicity or another. Connectionist theory notwithstanding,
we don't really know how a system based on mass action would work in
practice. On the other hand, it is possible that the auditory cortex is
necessary for fine-grained pitch discrimination even though no fine
representation of pitch per se exists there. (I have yet to see any
units in the cortical literature with sharp tuning better than 0.3-0.5
octave for 1 kHz tones -- all the really sharp tuning is for BFs > 5
kHz. It makes me wonder. At the lower stations one sees coarse tuning,
but there is also precise and abundant interval information available.)
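As a rough illustration of the mass-action idea, the following Python
sketch (a toy model of my own, with made-up tuning widths and noise
levels) decodes log-frequency as a rate-weighted centroid over a bank of
coarsely tuned, noisy units. The pooled estimate sharpens as units are
added, even though no single unit is a sharp detector.

    import numpy as np

    rng = np.random.default_rng(0)

    def centroid_error(f_true, n_units, bw_oct=0.5, noise=0.2, n_trials=500):
        # Coarsely tuned units (Gaussian tuning on a log2-frequency axis)
        # plus trial-to-trial rate noise; decode by rate-weighted centroid.
        # (f_true at the centre of the axis avoids edge bias in this
        # deliberately crude decoder.)
        log_cf = np.linspace(np.log2(250.0), np.log2(4000.0), n_units)
        rates = np.exp(-0.5 * ((log_cf - np.log2(f_true)) / bw_oct) ** 2)
        errs = np.empty(n_trials)
        for i in range(n_trials):
            r = np.clip(rates + noise * rng.standard_normal(n_units), 0.0, None)
            errs[i] = abs(np.sum(r * log_cf) / np.sum(r) - np.log2(f_true))
        return errs.mean()

    # Tuning is half an octave wide, yet the pooled estimate sharpens
    # steadily as units are added -- coding by mass action rather than
    # by any single sharp detector.
    for n in (10, 100, 1000):
        print(n, "units -> mean error", round(centroid_error(1000.0, n), 4),
              "octaves")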
I think it is clear that we need a concerted effort to understand the
detailed, mechanistic nature of the central representations and
neurocomputations that subserve the basic auditory qualities of pitch
and timbre (something like what is now going on for sound localization).
Until we have this, many aspects of auditory function will remain latent
mysteries.
To Eckard:
There is a common misconception that single ANFs must fire every cycle
in order to encode a tone's frequency. E. G. Wever (and perhaps L. T.
Troland before him) proposed the "volley principle" to get around this.
Even without a volley principle per se, there are ways of processing
interval statistics that allow comparison of whole interval
distributions (e.g., peaks at 2/f, 3/f, 4/f, ... , n/f).
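To illustrate, here is a small Python sketch (an idealized model, with
parameters chosen only for demonstration) of a phase-locked fiber that
fires on only a fraction of cycles: its all-order interval distribution
still shows peaks at multiples of the period, so the frequency is
recoverable without any fiber following every cycle.

    import numpy as np

    rng = np.random.default_rng(1)

    f = 500.0                      # tone frequency (Hz)
    dur, p_fire, jitter = 1.0, 0.3, 1e-4

    # Phase-locked fiber that fires on only ~30% of cycles, so it never
    # needs to follow every cycle of the tone.
    cycle_times = np.arange(int(dur * f)) / f
    spikes = cycle_times[rng.random(cycle_times.size) < p_fire]
    spikes = spikes + jitter * rng.standard_normal(spikes.size)

    # All-order (all-pairs) interspike intervals up to 20 ms.
    d = spikes[None, :] - spikes[:, None]
    intervals = d[(d > 0) & (d < 0.02)]

    # The interval histogram piles up near n/f = 2, 4, 6, ... ms, so the
    # common spacing of the peaks recovers 1/f = 2 ms even though no
    # fiber is required to produce an interval at the period itself.
    hist, edges = np.histogram(intervals, bins=100, range=(0.0, 0.02))
    top = np.sort(edges[np.argsort(hist)[-5:]])
    print("largest histogram bins near (ms):", np.round(top * 1000, 2))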
Regarding the coding of high frequencies, we have to remember that
phase-locking weakens only for high-frequency pure tones. Once multiple
simultaneous tones are introduced, ANFs can potentially follow the time
structure of their interactions. In Viemeister's 4000 + 8003 Hz
experiment, interaction of the two tones might create a detectable 3 Hz
intensity fluctuation at the 8 kHz place (presumably via a distortion
product near 2 x 4000 = 8000 Hz beating with the 8003 Hz tone). Octave
matching and musical interval recognition go to pot above about 4 kHz (a
strong argument for an interval basis of pitch for tones < 4 kHz), but
that is likely a different sort of perceptual task from the one
Viemeister studied.
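For the arithmetic of that fluctuation, here is a short Python check;
the reading of the 8000-Hz component as a distortion product at 2 x 4000
Hz is my own assumption, added for illustration.

    import numpy as np
    from scipy.signal import hilbert

    fs = 48000
    t = np.arange(2 * fs) / fs                     # 2 s of signal
    # Hypothetical components at the 8-kHz place: a distortion product
    # at 2 x 4000 = 8000 Hz plus the 8003-Hz tone; |8003 - 8000| = 3 Hz.
    x = np.sin(2 * np.pi * 8000.0 * t) + 0.5 * np.sin(2 * np.pi * 8003.0 * t)

    env = np.abs(hilbert(x))                       # amplitude envelope
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
    print("dominant envelope fluctuation:", freqs[spec.argmax()], "Hz")  # ~3.0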
--Peter Cariani