Re: Question about latency in CI comprehension
Thank you for your comments.
Yes, I am aware that the combination of being a CI user and having
knowledge of acoustics is not exactly common.
And I am also convinced that this combination can provide highly
valuable insights.
You made the remark:
> I thought it might be good to add to this story a cautionary note
> before we draw conclusions about cochlear function or brain function.
Indeed I did that for myself before I composed the sound tracks for my
experiments. As background information my preliminary considerations
might be of interest in the discussion.
> The CI processor transforms the signal into a series of compressed
> pulse trains, and in doing so, discards a number of different
> properties of the acoustic input. So even though we can be clever and
> design experiments where perception of the acoustic signal can
> differentiate various auditory processes, we are in many ways
> subordinate to the prerogative of the CI processor.
Yes, and therefore I have treated the CI processor and its software
as a grey box.
Aware that I do not know all, or even most, of the internal details of
this grey box, I formulated the requirement that the acoustic stimuli
I compose must have the least imaginable chance of interacting in an
undesired or erroneous way with the functional content of the grey box.
Therefore I decided on a beat experiment in which the two pure tones
are chosen extremely close in frequency, which makes the duration of
the beat period 3000 seconds.
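For readers who want to check the arithmetic, such a beat can be sketched numerically. This is my own illustration, not the author's actual sound track: I assume a 1000 Hz carrier (the frequency named later in the text) and a difference of 1/3000 Hz, which yields the stated 3000 s beat period.

```python
import numpy as np

# Two pure tones near 1000 Hz, only 1/3000 Hz apart (assumed values,
# chosen to reproduce the 3000 s beat period described in the text).
f1 = 1000.0
f2 = 1000.0 + 1.0 / 3000.0
A = 1.0

t = np.linspace(0, 10, 100001)  # a short stretch of the signal
s = A * np.sin(2 * np.pi * f1 * t) + A * np.sin(2 * np.pi * f2 * t)

# Trig identity: the sum equals a carrier at the mean frequency,
# modulated by the slow envelope 2A*|cos(2*pi*t/6000)|.
fm = (f1 + f2) / 2
ident = 2 * A * np.cos(2 * np.pi * t / 6000) * np.sin(2 * np.pi * fm * t)
print(np.max(np.abs(s - ident)))  # essentially zero: identity holds
```

The envelope period of |cos(2pi×t/6000)| is 3000 s, matching the beat period quoted above.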
And I adopted the requirement that the total transfer of the external
acoustic stimulus via the CI processor must give me a perception equal
to that in my acoustic ear.
Why? Simply because if my brain perceives striking systematic
differences between the two hearing sides, then the replacement of
normal acoustic hearing by the mechanic-electronically operating CI
processor does not mimic normal hearing.
The logical conclusion I can draw in that case is that neither my
brain nor my acoustic ear, but the CI equipment, fails to provide the
desired correction of the hearing disorder.
> In other words, we cannot trust the perceived signal to be what we
> intended it to be. This is especially true in the case of a delicate
> temporal/spectral interaction of the type you described.
Well, let us look at what delicate temporal/spectral interaction can be
distinguished in my beat experiment.
The envelope Ep(t) of that beat can be described as:
Ep(t) = A|cos(2pi×t/6000)|
Where A is the starting sound pressure amplitude at t = 0.
From t = 1485 sec to t = 1500 sec this envelope Ep(t) is, to a very
good approximation, a linearly decreasing signal from amplitude 0.01×A
down to zero, and from t = 1500 sec to t = 1515 sec a linearly
increasing signal from zero back to amplitude 0.01×A.
During that 30 second time period the 1000 Hz signal makes 30,000
complete evolutions.
And my composed sound track consists only of data within that 30
second time window around the zero crossing.
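The near-linearity of the pressure envelope around its zero crossing can be verified numerically. Again this is my own Python sketch, not the author's sound track; it simply evaluates the Ep(t) formula given above against a straight line through the zero crossing.

```python
import numpy as np

# Pressure envelope from the text: Ep(t) = A * |cos(2*pi*t/6000)|
A = 1.0
def Ep(t):
    return A * np.abs(np.cos(2 * np.pi * t / 6000))

# Around the zero crossing at t = 1500 s the envelope is essentially
# linear in |t - 1500|: |cos(pi/2 + x)| = |sin(x)| ~ |x| for small x.
t = np.linspace(1485, 1515, 3001)
linear = A * 2 * np.pi * np.abs(t - 1500) / 6000

max_rel_err = np.max(np.abs(Ep(t) - linear)) / np.max(linear)
print(max_rel_err)  # small: the envelope is linear to good approximation
```

Within this 30 s window the deviation from a straight line is below one part in ten thousand of the window's peak value, supporting the "linearly decreasing/increasing" description above.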
What is left of the temporal interaction?
A continuously decreasing and then increasing single-tone stimulus
that everyone can readily accept. There is no discontinuity; only at
t = 1500 sec does the amplitude turn from steadily decreasing to
steadily increasing.
Hence a composition without any pitfall caused by unexpected
interactions between the input stimulus and the CI program.
And this holds for the input of both the CI microphone and what you
call the acoustic ear.
In the next experiment I use the sound energy stimulus calculated
from the sound pressure of the beat signal. Its envelope Ee(t) is
given by:
Ee(t) = Ee(0)×[cos(2pi×t/3000) + 1]/2
Where Ee(0) = (2pi×A)^2 is the value of the starting sound energy
amplitude at t = 0.
The frequency of the sound energy stimulus is 2000 Hz. And this stimulus
makes 60,000 complete evolutions within the 30 sec time window.
From t = 1485 sec to t = 1515 sec this envelope equals, to a very
close approximation, the parabolic function:
Ee(t) = Ee(0)×[(t-1500)/1500]^2
In this parabola Ee(1485) and Ee(1515) both have the value
0.0001×Ee(0).
Between t = 1492.5 sec and t = 1507.5 sec the value is even smaller
than 25 parts per million of Ee(0).
Here too, nothing gives rise to a temporal interaction. There is only
a smoothly, quadratically decreasing and increasing amplitude around
the zero amplitude moment at t = 1500 sec. Nothing else.
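The quadratic approach of the energy envelope to zero can likewise be checked numerically. This is my own sketch, with Ee(0) normalised to 1; it evaluates the Ee(t) formula above against a small-angle quadratic around t = 1500 s.

```python
import numpy as np

# Energy envelope from the text:
#   Ee(t) = Ee0 * (cos(2*pi*t/3000) + 1) / 2   (= Ee0 * cos(pi*t/3000)**2)
Ee0 = 1.0
def Ee(t):
    return Ee0 * (np.cos(2 * np.pi * t / 3000) + 1) / 2

# Around t = 1500 s the envelope approaches zero quadratically:
# cos(pi*t/3000)**2 = sin(pi*(t-1500)/3000)**2 ~ (pi*(t-1500)/3000)**2
t = np.linspace(1485, 1515, 3001)
quad = Ee0 * (np.pi * (t - 1500) / 3000) ** 2

print(Ee(1500))                       # effectively zero at the centre
print(np.max(np.abs(Ee(t) - quad)))   # tiny: quadratic approximation holds
```

The envelope touches zero smoothly at t = 1500 s, with no discontinuity in the stimulus.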
Matt, can you explain to me, with a completely different logical
hypothesis, why the stimulus transferred as pulse trains in the CI can
mimic so perfectly both the amplitude and the sharply time-restricted
zero crossing of the 1000 Hz sound pressure tone, while my other ear
perceives the 2000 Hz sound energy tone with a smooth approach of the
amplitude to zero at t = 1500 sec?
And why does the CI equipment mimic the sound energy stimulus
perfectly well in my brain when I present, in front of the CI
microphone, the artificially calculated corresponding sound energy
stimulus, exactly equal to the process in my acoustic ear?
> To make a simple analogy, you can imagine the pitfalls of drawing
> conclusions about differences between your right eye and left eye if
> color vision in your right eye were tested using a black & white
> tube monitor from the 40s, and your left eye were tested using an
> LCD HD monitor from 2014. Any conclusions you draw from this test
> would really be a statement about the apparatus, not the visual
> system itself. In my opinion, the same risks apply in the case of
> comparing a CI ear to an acoustic ear.
I can agree with you to some extent if you mean that in my acoustic
ear I use for the sound stimulation a high quality BTE hearing aid,
the Phonak Naida, or a high quality headphone, while in the CI ear I
use a top class apparatus of the most recent type, the Advanced
Bionics Naida CI, and that I thus compare two different apparatuses.
Both state of the art devices are developed for one task: restoring
as well as possible the capability of hearing and interpreting sound
stimuli directed to my ears.
And I can clearly draw the conclusion that the Phonak Naida mimics
perfectly well what the high quality headphone delivers, and also what
other normal hearing subjects perceive from the sound pressure
stimulus.
The AB Naida also mimics the sound pressure stimulus perfectly well,
but not in a form equal to the process in the acoustic ear.
The AB Naida does that only when I present, in front of the CI
microphone, the sound energy stimulus calculated from the sound
pressure stimulus, and not the sound pressure stimulus itself.
Then the conclusion is simple: apparently the AB CI transfers, in
principle, a stimulus linearly proportional to the evoked sound
pressure.
From the Phonak Naida it is known that it transfers only a
frequency-dependent attenuated and/or amplified stimulus to the
cochlea. All the rest of the transfer to the basilar membrane is done
by the middle ear and the inner ear.
The nerve connection to the brain is still identical in both ears.
However, the electrical signals in both ears differ, and they should
be equal for normal hearing.
> To your specific experiment: although your acoustic ear heard the
> fundamental in the complex sine tone you created, your CI ear in
> fact never heard the sines at all (just as your right eye never saw
> the color); it heard whatever the processor generated to represent
> those tones. So in my mind, you might not have been comparing apples
> to apples.
Here I disagree with you fundamentally. You actually suggest that what
I observed in the missing fundamental experiments is a pure acoustic
illusion. Well, I can guarantee you that what I did was comparing
apples with apples. The cooperating manufacturers of both hearing
devices claim and advertise that, to the best of their knowledge,
their goal is the restoration of hearing in their customers.
Nor is it some kind of "acoustical" placebo effect. I am not falling
into the pitfall of observing what I want to observe. After a
scientific lifetime in applied physics research at an academic level,
I characterize myself as a highly qualified observer, one who will not
easily make mistakes like comparing apples with pears.
> What some researchers do is gain control over the CI signal by
> bypassing the clinical processors and instead use research
> processors (e.g. HEINRI, NIC, BEPS+, BEDCS), where each element of
> stimulation is explicitly controlled. Then you can at least be
> assured of what signal is being delivered and be confident about the
> relationship between stimulus and response. Other experimenters have
> more experience in this area and may offer more eloquent
> descriptions of their approach.
That is a good suggestion. However, being an emeritus associate
professor without a laboratory, or even any other facility in a
university setting, I leave such experiments to others.
Willem Chr. Heerens