
40 Hz RIP

On May 23rd (May 19th) Peter Cariani wrote:-

> Once we admit the possibility of time playing a role --
> there being functionally-significant temporal
> (or spatial) microstructure to our inputs,
> either through synchrony or through common time pattern --
> then much more flexible modes of association become
> possible.

Dear Peter

This is exactly the point which I have been making for some
time (see message of 21st) and is the basis for the model which
I have developed (e.g. Todd (1996) Network: Computation in
Neural Systems 7, 349-356.) and which I presented in Montreal
last year (see proceedings of ICMPC96) at which you were
present (I know that since I remember you gave me a bit of a
hard time).

This is also the basis of the difference I have had with the so-called
"oscillatory framework", which it seems to me is a case of putting the
oscillator cart before the signal horse. The information for binding is
already there in the common temporal structure of the inputs from
receptive fields which are activated by a common source. Having a whole
load of "oscillators" on top of this is, quite frankly, massively
computationally redundant. I simply cannot believe that natural
selection would come up with such a scheme.

One important thing which many people seem to forget about the
visual system is that if an image is actually stabilised
against the retina then the image fades away within a few
seconds, almost as if it were an after-image. This fading is
prevented in normal vision by constant movements of the eyes.
These movements are of three kinds:

(a) saccadic movements   rate: every 0.2-1 s   displacement: ~20 deg
(b) micro-saccades       rate: every ~1 s      displacement: 5-10 min arc
(c) micronystagmus       rate: 40-50 Hz        displacement: >=1 min arc

In the case of the micronystagmus, the rate of 40-50 Hz is within the
limits of temporal contrast sensitivity (i.e. the ability to detect
flashing lights), and the displacement of >= 1 min of arc is within the
limits of spatial contrast sensitivity (the spatial frequency of a
grid), both depending on luminance.

The implication is that the pattern of activity in retinal and
higher-level neurones is likely to be modulated by these eye movements.
Clearly, then, RFs which are stimulated by a single object are likely
to have inputs with a common temporal structure, co-modulated by
micronystagmus. This, I would suggest, is the origin of the so-called
40 Hz oscillations. This could easily be tested, since the effect
should be luminance dependent.
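The co-modulation argument can be put in a minimal numerical sketch.
Everything here is illustrative: I assume a toy sinusoidal 45 Hz
jitter, a 0.3 modulation depth, and a quarter-cycle offset between the
modulation seen by RFs on two different objects (standing in for their
different local luminance gradients); none of these values comes from
data.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                       # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)

# Common ~45 Hz micronystagmus sweeps every RF; the modulation each RF
# actually sees depends on the luminance gradient of the object covering
# it. The quarter-cycle offset for object B is an illustrative guess.
mod_a = np.sin(2 * np.pi * 45 * t)                 # RFs on object A
mod_b = np.sin(2 * np.pi * 45 * t + np.pi / 2)     # RFs on object B

def rf_output(mod):
    """Toy RF firing rate: co-modulated by eye-movement jitter, plus noise."""
    return 1.0 + 0.3 * mod + 0.1 * rng.standard_normal(len(mod))

rf1, rf2, rf3 = rf_output(mod_a), rf_output(mod_a), rf_output(mod_b)

r_same = np.corrcoef(rf1, rf2)[0, 1]   # two RFs on the same object
r_diff = np.corrcoef(rf1, rf3)[0, 1]   # RFs on different objects
print(f"same-object r = {r_same:.2f}, different-object r = {r_diff:.2f}")
```

The same-object pair comes out strongly correlated, the different-object
pair near zero, which is all the binding cue the argument needs.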

Those who are advocates of the "oscillatory framework" have
searched in desperation for some evidence of "40 Hz
oscillations" in the auditory system, but to no avail. The only
evidence cited is evoked potentials which sometimes appear
to show signs of oscillation. However, these can be explained
by the latency between the MGB and cortex, i.e. as a
superimposed Pa response (Pa is a component of the middle latency
response, MLR). Remember that all the evoked potential experiments are
done with repetitive stimuli to obtain an average (approx. 512
repetitions).

As you say, Peter, if you talk to vision people about this, they
are becoming increasingly sceptical about the Singer et al.
results, at least in part because they have been difficult to
replicate. Perhaps now the 40 Hz oscillator business can
finally be put out of its misery, but somehow I think it may
require some more anaesthesia before it may finally rest in peace.

However, the question then is: if binding is triggered by
common temporal structure of inputs, (a) how is that temporal
information represented in the cortex, and (b) by what mechanism
is this temporal information compared so that RFs with common
inputs may pool together?

The modelling approximation solution that I have proposed is
the following.

(a) Temporal information is represented spatially in the form
of a kind of AM wavelet transform.

Following the peripheral filter bank, the brain-stem/mid-brain
(ICC) level is represented in a highly simplified manner as an
array of linear band-pass filters forming a 2D
cochleotopic/periodotopic map. Although this ignores the
massive complexity of the cochlear nucleus, it is consistent with
the idea of ICC cells as coincidence detectors. The medial
geniculate body (MGB) is modelled as a 2D array of low-pass
filters with a cut-off at about 200 Hz. This again is a great
simplification, but it is approximately consistent with the
physiological data showing a decrease in temporal resolution
from thalamus to cortex. The outputs of the "MGB" filters are
downsampled to 1,000 Hz and input to a simulation of monaural
processes in the cortex, which is modelled as an array of
columns. A proportion of the cortical cells within a column are
also modelled as linear band-pass filters.
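The chain above can be caricatured in a few lines of code. The 200 Hz
cut-off and the downsampling to 1,000 Hz are from the model; everything
else (the 1 kHz toy carrier, the 5 Hz AM, the first-order filter) is an
illustrative stand-in, not a claim about the actual implementation.

```python
import numpy as np

fs = 10000                       # input sample rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
# Toy "ICC channel" output: a rectified 1 kHz carrier with 5 Hz AM.
icc = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * np.abs(np.sin(2 * np.pi * 1000 * t))

def one_pole_lowpass(x, fc, fs):
    """First-order low-pass; a crude stand-in for MGB temporal smoothing."""
    a = np.exp(-2 * np.pi * fc / fs)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = a * y[i - 1] + (1 - a) * x[i]
    return y

mgb = one_pole_lowpass(icc, 200.0, fs)   # ~200 Hz cut-off, as in the model
cortex_in = mgb[::10]                    # downsample to 1,000 Hz

# The 1 kHz carrier is smoothed away, but the 5 Hz AM survives
# for the cortical band-pass units to pick up:
spec = np.abs(np.fft.rfft(cortex_in - cortex_in.mean()))
f = np.fft.rfftfreq(len(cortex_in), 1 / 1000)
print(f"strongest surviving modulation: {f[np.argmax(spec)]:.0f} Hz")
```

The point of the sketch is simply that the fast carrier is lost between
thalamus and cortex while the slow modulation is preserved.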

Overall, then, periodicity pitch is primarily associated with
subcortical processing, whereas time and rhythm are primarily
associated with the cortex. The auditory system thus acts
functionally as a three-dimensional filter-bank:

(i) cochlea: approx. 30 Hz - 10,000 Hz (timbre);
(ii) ICC: approx. 10 Hz - 1,000 Hz (periodicity pitch); and
(iii) cortex: approx. 0.5 Hz - 100 Hz (time and rhythm).

Although the temporal resolution of the cortex is too coarse to
resolve pitch periods, both cochlear and periodicity pitch
information are present in the cortical response, since both are
spatially represented in the form of the image of the inferior
colliculus. Note that the first two dimensions have been
demonstrated by Gerald Langner. The third dimension is still
somewhat speculative, but theoretically it is very attractive.

Since the time-constants of the cortex are relatively long, its
response to an ICC input lasts long after the ICC input has
ceased. The cortical response thus embodies a form of sensory
memory. The output of this stage can be represented as a kind of
AM spectrogram or wavelet transform.
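As a sketch of what this "AM spectrum" output might look like: take a
toy post-MGB envelope carrying a 5 Hz amplitude pattern and read off
its dominant modulation component. The 5 Hz rate and the plain FFT are
illustrative choices only; the model itself uses band-pass cortical
units rather than an explicit transform.

```python
import numpy as np

fs = 1000                        # post-MGB rate in the model, Hz
t = np.arange(0, 2.0, 1 / fs)
# Toy "MGB" envelope: a 5 Hz amplitude pattern.
env = 1 + 0.8 * np.sin(2 * np.pi * 5 * t)

# Spatial (frequency-axis) representation of the temporal pattern:
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
bmf = freqs[np.argmax(spec)]
print(f"dominant AM component: {bmf:.1f} Hz")
```

The temporal structure of the envelope now sits at a *place* on the
modulation-frequency axis, which is what makes a spatial comparison
between columns possible later on.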

Clearly such an architecture is quite speculative. However, it
does successfully predict the general shape of the AM
detectability curve, which shows two points of maximum
sensitivity, one at about 3 Hz and another at about 300 Hz (Kay,
1982). The model here provides a natural explanation in terms
of two populations of cells which are tuned to different
ranges. The model also provides an account of time
discrimination. It has been well established that when making
judgements about time intervals we have maximum
sensitivity to intervals of around 300 ms (Friberg and
Sundberg, 1995). Sensitivity drops off fairly rapidly for
intervals less than about 200 ms and similarly, but less
steeply, for intervals greater than about 1000 ms. The model
predicts this particular shape because of the distribution of
BMFs of the cortical units. For fundamental periods shorter
than about 300 ms, the harmonics become attenuated, thus
reducing sensitivity, and for fundamental periods longer than
about 1000 ms, the fundamental becomes attenuated, thus also
reducing sensitivity.
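This explanation can be put as a toy sensitivity function. I assume,
purely for illustration, a log-normal distribution of cortical BMFs
peaked near 3.3 Hz (i.e. ~300 ms periods); the peak and spread are
guesses, not fitted values.

```python
import math

def bmf_weight(f, peak=3.3, sigma=1.0):
    """Assumed log-normal distribution of cortical best modulation
    frequencies, peaked near 3.3 Hz; parameters are illustrative."""
    return math.exp(-((math.log(f) - math.log(peak)) ** 2) / (2 * sigma ** 2))

def interval_sensitivity(T):
    """Toy sensitivity: how well the fundamental 1/T of an interval
    pattern is covered by the assumed BMF distribution."""
    return bmf_weight(1.0 / T)

for T in (0.05, 0.2, 0.3, 1.0, 3.0):
    print(f"T = {T:5.2f} s  sensitivity = {interval_sensitivity(T):.2f}")
```

By construction the function peaks near 300 ms and falls off on both
sides, reproducing the general shape (though not the exact slopes) of
the psychophysical curve.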

(b) RF inputs are compared by a cortical cross-correlation

As a first approximation one can model this as a simple product
moment correlation between cortical "columns". This can be
applied to both sequential and simultaneous grouping with some
success, although more testing is required (currently
underway). Certainly, it provides a very natural account of
grouping by common AM and of why particular rates are more
effective than others. For example, Yost and Sheft (1993)
have shown that rates of 5-25 Hz are effective for
segregation. The fact that typical AM rates coincide with
cortical BMFs supports the view that the cross-correlation
mechanism may be cortical in origin and take its input from
cortical AM-sensitive cells.
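A minimal sketch of the product moment comparison, assuming each
column's AM spectrum is a noisy bump at its channel's modulation rate
(the bump shape, the rates, and the noise level are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.arange(1, 51)       # AM spectrum axis, 1-50 Hz (illustrative)

def am_spectrum(rate):
    """Toy column AM spectrum: a bump at the channel's AM rate plus noise."""
    return np.exp(-0.5 * ((freqs - rate) / 3.0) ** 2) + 0.05 * rng.random(len(freqs))

col_a = am_spectrum(10)        # two channels sharing a common 10 Hz AM
col_b = am_spectrum(10)
col_c = am_spectrum(40)        # a channel modulated at 40 Hz

r_same = np.corrcoef(col_a, col_b)[0, 1]
r_diff = np.corrcoef(col_a, col_c)[0, 1]
print(f"common-AM pair r = {r_same:.2f}, different-AM pair r = {r_diff:.2f}")
# Columns whose r exceeds some threshold would pool into one stream.
```

Channels sharing an AM rate correlate strongly and would group; the
40 Hz channel does not, and would segregate.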

However, clearly such a product moment matrix is an
abstraction. The next question is how the nervous system might
carry out a form of cross-correlation. In the model I have
proposed a simple hypothetical circuit which is at least
computationally effective. The basic idea of this circuit is
that the cortex is organised into an array of columns receiving
input from the thalamus. Each column represents the temporal
pattern of its input "spatially" in the form of an AM spectrum.
At any point in time it is possible for a column to compare its
input with that of other columns by comparing the spatially
distributed pattern of activity - effectively a temporal
correlation (note that the power spectrum is the Fourier transform
of the autocorrelation function) - via an extensive network of
inhibitory and excitatory cortico-cortical connections. Those
columns which have coherent inputs form a pooled
representation. For details of this circuit see Todd (1996).
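The parenthetical point - that the power spectrum is the Fourier
transform of the autocorrelation function (the Wiener-Khinchin
relation) - is easy to verify numerically, and it is what licenses
treating a comparison of AM spectra as a temporal correlation:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(256)

# Power spectrum computed directly from the signal...
ps = np.abs(np.fft.fft(x)) ** 2

# ...equals the DFT of the circular autocorrelation function.
acf = np.array([np.dot(x, np.roll(x, k)) for k in range(len(x))])
ps_from_acf = np.real(np.fft.fft(acf))

print(np.allclose(ps, ps_from_acf))   # → True
```

So two columns with matching AM spectra necessarily have matching
autocorrelations, without any need for explicit delay lines.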

In case you think this was conjured out of thin air, in fact
the idea of a column as a processing unit was proposed by the
late Sir John Eccles [Eccles, J.C. (1984) The cerebral
neocortex: A theory of its operation. In E. Jones and A. Peters
(Eds.) Cerebral Cortex. Vol. 2 Functional Properties of
Cortical Cells. Plenum: New York. pp 1-32.]. There are in fact some
similarities between the architecture of the above circuit and
that of some of the earlier temporal correlation models.
Indeed, a linear modulation filter has a damped oscillatory
impulse response. However, to reiterate my first point, in this
model the signal is in the driving seat and the function of the
modulation filters is to represent the temporal structure of
the signal spatially in order to make a cross-correlation
possible. A spatial/topographic representation makes a lot of
sense from a neuronal connectivity point of view, since it does
away with the need for delay lines.

By now I can tell that I might have caused a few more


Neil Todd

PS Since I mention Eccles, may I recommend his book "The
evolution of the brain: Creation of the self" [1989,
Routledge, London], particularly the chapter "Linguistic
communication in hominid evolution" which Edward Burns may find
of interest.