
Re: Question about latency in CI comprehension



Willem,

It was a pleasure to read about your experiences with your CI. The intersection of CI use and expert knowledge in acoustics is a rarity, and we are lucky to have you share your story.

 

I thought it might be good to add to this story a cautionary note before we draw conclusions about cochlear function or brain function. The CI processor transforms the signal into a series of compressed pulse trains, and in doing so, discards a number of different properties of the acoustic input. So even though we can be clever and design experiments where perception of the acoustic signal can differentiate various auditory processes, we are in many ways subordinate to the prerogative of the CI processor. In other words, we cannot trust the perceived signal to be what we intended it to be. This is especially true in the case of a delicate temporal/spectral interaction of the type you described.
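
To make that concrete, here is a toy sketch in Python (my own simplification, not any manufacturer's actual algorithm) of a single CIS-style channel: band-pass, rectify, smooth to an envelope, and let that envelope set the amplitudes of a fixed-rate pulse train. The band edges, envelope cutoff, and pulse rate are all illustrative assumptions.

import numpy as np
from scipy.signal import butter, sosfilt

# Toy single-channel CIS-style sketch (illustrative parameters only):
# band-pass filter, rectify, low-pass to an envelope, then sample the
# envelope at a fixed pulse rate. The carrier's temporal fine structure
# never survives this pipeline.
fs = 16000
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)                 # input "sound pressure"

band = butter(4, [900, 1100], btype="bandpass", fs=fs, output="sos")
env_lp = butter(2, 200, btype="lowpass", fs=fs, output="sos")
env = sosfilt(env_lp, np.abs(sosfilt(band, x)))  # rectified, smoothed envelope

pulse_rate = 1000                                # pulses per second (assumed)
pulse_amp = env[:: fs // pulse_rate]             # amplitudes of the pulse train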

To make a simple analogy, you can imagine the pitfalls of drawing conclusions about differences between your right eye and left eye if color vision in your right eye were tested using a black & white tube monitor from the 40s, and your left eye were tested using an LCD HD monitor from 2014. Any conclusions you draw from this test would really be a statement about the apparatus, not the visual system itself. In my opinion, the same risks apply in the case of comparing a CI ear to an acoustic ear.

 

To your specific experiment: although your acoustic ear heard the fundamental in the complex sine tone you created, your CI ear in fact never heard the sines at all (just as your right eye never saw the color); it heard whatever the processor generated to represent those tones. So in my mind, you might not have been comparing apples to apples.


What some researchers do is gain control over the CI signal by bypassing the clinical processor and instead using research processors (e.g. HEINRI, NIC, BEPS+, BEDCS), where each element of stimulation is explicitly controlled. Then you can at least be assured of what signal is being delivered and be confident about the relationship between stimulus and response. Other experimenters have more experience in this area and may offer more eloquent descriptions of their approach.

 

Matt


On Tue, Dec 9, 2014 at 9:55 AM, Willem Christiaan Heerens <heerens1@xxxxxxxxx> wrote:
Dear Tamás, Nathan and List,

Tamás you reported:
…while working with cochlear implants (CI) I often notice that even CI
listeners with very good speech perception need some extra time (in
comparison to normal hearing listeners) to comprehend a spoken
sentence….

I have no data on the relevant literature for this subject.
But perhaps the study I have been making since January of this year of
my experiences as an 'expert in the field' can be of value and interest
for you. And perhaps for others too.

Since May 2013 I have had the Advanced Bionics Harmony CI in my left
ear, which is deaf up to 120 dB. [In January 2014 my Harmony equipment
was replaced by the new-concept AB Naida.] In my right ear, which has a
70 dB overall hearing loss, I have the Phonak Naida hearing aid, which
can to some extent support the functioning of the AB CI.

In my rehabilitation period it took me less than 2 weeks to reach a
speech perception score that almost matches that of a normal hearing
person, even without seeing the speaking person.
My phoneme score was up to 90 % for normal stimulation of my CI.
Remarkably enough, my phoneme score drops a few percent when both
apparatuses 'cooperate'.
But that is only under better-than-normal quiet environmental conditions
and when listening to a single speaker.
As soon as the environment becomes more 'noisy' my hearing abilities
degrade rapidly.
When three or more people are discussing more or less chaotically, I
hear only a tremendously loud noise in which I can hardly distinguish a
single word. My speech perception then drops to zero and the latency for
comprehending spoken sentences could be called infinite.

Only when someone in such an auditory environment speaks loudly
[almost screaming] near the microphone of my CI processor can I
comprehend a little less than approximately 50 % of the sentences.
Far too low to have a pleasant discussion.

Listening to music – especially classical music – is for me far from
joyful. Actually, the only aspect of music I experience almost normally
is rhythm. Pitch perception, timbre, dynamic range and melody
recognition are all really bad. Naming a single instrument out of what
I hear with my CI is for me a hell of a job.
What I experience in comparing my two hearing apparatuses is that with
my CI I hear all background noises, like traffic and cocktail party
rumble, at lower frequencies compared to the frequencies I hear with my
normal hearing aid.
Such experiences are reported in the literature as well, but more as an
unclear and remarkable phenomenon.

So, being a physicist, and with my research into cochlear functioning
in mind – which earlier brought me to the statement that the normally
functioning human hearing sense makes use of the sound energy stimulus
in the cochlea and not the sound pressure stimulus, as everybody still
assumes – I started a survey of what the CI processor software actually
does with the incoming sound pressure stimulus.
What I found – and please correct me if I am wrong – was, in a
nutshell, that for dynamic-behavior purposes in the different
electrodes this stimulus is rectified, and that there is no further
indication that the sound pressure stimulus is transformed into the
sound energy stimulus, which in turn would be used in a
frequency-selective way as the electrical stimulation of the electrode
array.

So I hypothesized that if I compose quite simple tone settings for
listening to beat phenomena, I can study with the resulting sound
fragments how I experience beats with my CI in comparison with my other
hearing aid. They simply must sound different, because a beat
phenomenon in the sound pressure domain is clearly different from the
corresponding beat phenomenon in the sound energy domain.

My most illustrative beat experiment is the following:

I combined two tones of equal amplitude – 999.99983333 Hz and
1000.00016667 Hz – into a single sound pressure stimulus.
This combination results in a beating 1000 Hz stimulus with a beat
period observed as having a duration of 3000 seconds.
Actually, the complete beat period T is 6000 seconds, because the
modulation function in the sum of the two sinusoidal contributions is a
cosine with a frequency equal to half the frequency difference of the
two combined tones: cos(2π×0.00016667×t), or equivalently
cos(2π×t/6000). The modulation envelope is the modulus of this
function, |cos(2π×t/6000)|, and that is a function with a period of
3000 seconds.
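
For anyone who wants to check this numerically, here is a minimal Python/NumPy sketch; the random sample times are only a convenient way to test the identity over the full 6000-second period.

import numpy as np

f1, f2 = 999.99983333, 1000.00016667     # the two combined tones (Hz)
fm = (f2 - f1) / 2                       # 0.00016667 Hz modulation frequency

# Test the sum and product (beat) forms pointwise at random times
# spread over one full modulation period T = 1/fm, about 6000 s:
t = np.random.default_rng(0).uniform(0.0, 6000.0, 100_000)
p_sum = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
p_prod = 2 * np.sin(2 * np.pi * 1000.0 * t) * np.cos(2 * np.pi * fm * t)
print(np.max(np.abs(p_sum - p_prod)))    # ~1e-8: the two forms agree

# The audible envelope is therefore |cos(2*pi*fm*t)|, with period
# 1/(2*fm) = 3000 s, as stated above.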

You must be aware that when you look closely at the shape of this
stimulus, you will find that near the halfway point of the 3000 seconds
the signal amplitude falls sharply to zero, remains zero for just a
split second, and then rises sharply again to higher values.
However, when you calculate the sound energy stimulus connected with
this sound pressure stimulus, you will find that the beat in this
signal still has a period of 3000 seconds, and that as time approaches
the 1500-second halfway point the sound amplitude also declines to
zero. But it does so in an entirely different way.
First, the frequency is no longer 1000 Hz but an octave higher:
2000 Hz.
Second, the shape of the beat envelope of that 2000 Hz stimulus in the
vicinity of the halfway point is entirely smooth. The sudden transition
from sharply descending to sharply rising after the 1500-second point
has completely disappeared; instead there is a gradual approach, a
smooth touching of the zero level, followed again by a gradual increase
in amplitude.
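
A small sketch of the same comparison in the sound energy domain; here I implement the sound energy stimulus as the differentiated and then squared pressure signal, the two process steps described further below.

import numpy as np

f1, f2 = 999.99983333, 1000.00016667
fs = 44100                                   # illustrative sample rate
t = 1500.0 + np.arange(-15.0, 15.0, 1 / fs)  # 30 s around the halfway point

p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
e = np.gradient(p, 1 / fs) ** 2              # differentiate, then square

# Since p = 2 sin(2*pi*1000*t) cos(2*pi*t/6000), squaring the derivative
# doubles the carrier frequency to 2000 Hz and turns the envelope into
# cos(2*pi*t/6000)**2, which touches zero at t = 1500 s smoothly instead
# of with the sharp dip of |p|.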

The two striking differences – 1000 Hz versus 2000 Hz, and a sharp
approach to zero versus a smooth one – must give unmistakably different
hearing impressions.

And the results of my experiment confirm my hypothesis:

I cut the 30-second period around the halfway point out of the
calculated soundtrack of the sound pressure stimulus. With sufficient
amplification for my observations, I listened to it separately with my
CI and with my Phonak hearing aid, and also with another amplifier
connected to a high-quality headphone, without my Phonak hearing aid.
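
For anyone who wants to reproduce the sound fragment, a sketch of how the 30-second excerpt can be generated and written to a file; the file name, sample rate, and normalization (my "sufficient amplification") are my own choices.

import numpy as np
from scipy.io import wavfile

f1, f2 = 999.99983333, 1000.00016667
fs = 44100
t = 1485.0 + np.arange(0.0, 30.0, 1 / fs)    # 30 s centred on t = 1500 s

p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
p /= np.max(np.abs(p))                       # boost the quiet midpoint region

wavfile.write("beat_midpoint_30s.wav", fs, np.round(p * 32767).astype(np.int16))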

With my CI I heard, without any doubt, the sharp continuous decline to
zero stimulus and, after a split second, the continuous increase. I
could not observe any substantially long period of zero signal.
With my other ear I heard, in both cases, during the 30-second period a
smooth decline to zero that was reached approximately 7–8 seconds
before the halfway moment. This zero signal ended approximately 7–8
seconds after the halfway moment, so for a period of 14–16 seconds the
signal remained zero, followed by a smooth increase.
And the tone had, without any doubt, a doubled frequency: 2000 Hz
instead of 1000 Hz.

I have repeated these experiments with the common series of audiology
test frequencies, except the 125 Hz stimulus – so starting with 250 Hz
and going up to 7000 Hz.
At all frequencies I experienced the same results as for the 1000 Hz
signal.

My next experiment was to modify the 1000 Hz sound pressure stimulus
into the sound energy stimulus, and then to listen to this sound
fragment with my CI.
As I expected, in this experiment I experienced the same sound via my
CI as I had heard from the sound pressure experiment with my Phonak
hearing aid:
a 2000 Hz signal and a smooth approach to a zero period of 16 seconds,
followed by a smooth rise of the 2000 Hz signal.

After that I concluded that pitch and missing-fundamental experiments
must also give different results, if a normally functioning basilar
membrane is apparently stimulated with the sound energy stimulus, while
in the CI processor it is not the sound energy stimulus that is
generated and transferred to the brain but the sound pressure stimulus.

So I composed two tone complexes, the first one consisting of the
frequencies:

800 – 1000 – 1200 – 1400 – 1600 – 1800 – 2000 Hz,

all sine functions, and the other one with the same frequencies but
alternately sine and cosine functions.
Both complexes have a 1/f amplitude–frequency relation, which results
in equal energy contributions for all frequencies in the sound energy
tone complex.
From calculations and experimental results in earlier studies, and from
the literature, I know that with the all-sine complex a normal hearing
person experiences a pitch of 200 Hz, while with the alternating
sine – cosine – sine composition the listener hears a 400 Hz pitch.

The complete calculation for the all-sine complex results in a series
of missing lower harmonics, starting with the fundamental of 200 Hz,
followed by the harmonics 400 and 600 Hz, and then the harmonics
800 – 1000 – 1200 Hz.
The calculation for the alternating sine–cosine composition shows that
its series starts with the missing lower harmonic of 400 Hz, followed
by the 800 Hz and 1200 Hz harmonics; all three odd harmonics
200 – 600 – 1000 Hz have disappeared from the sound energy frequency
spectrum.
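
Both calculations can be checked with a short Python/NumPy sketch. It builds the exact derivative of each 1/f-weighted complex, squares it, and lists which low-frequency components appear in the sound energy spectrum; the first print should give 200-400-600-800-1000-1200 Hz, the second only 400-800-1200 Hz.

import numpy as np

fs = 16000
t = np.arange(0, 2.0, 1 / fs)            # 2 s of signal, 0.5 Hz resolution
freqs = np.arange(800, 2001, 200)        # 800 ... 2000 Hz components

def energy_components(phases):
    # p(t) = sum_k sin(2*pi*f_k*t + phi_k)/f_k (1/f amplitudes); its exact
    # derivative is sum_k 2*pi*cos(2*pi*f_k*t + phi_k). The sound energy
    # stimulus is that derivative, squared.
    dp = sum(2 * np.pi * np.cos(2 * np.pi * f * t + ph)
             for f, ph in zip(freqs, phases))
    spec = np.abs(np.fft.rfft(dp ** 2))
    fax = np.fft.rfftfreq(len(t), 1 / fs)
    low = (fax >= 100) & (fax <= 1300)
    return [int(f) for f in fax[low][spec[low] > 0.01 * spec[low].max()]]

print(energy_components([0.0] * 7))                       # all sine
print(energy_components([0.0, np.pi / 2] * 3 + [0.0]))    # alternating sine/cosine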

The results of these two tone-complex experiments are even more
remarkable.

With my CI apparatus I experience no significant difference between the
two sound fragments: I hear both sounds as higher tones with identical
frequency and hardly any difference in intensity.
With my Phonak hearing aid, or the amplifier–headphone combination, I
hear precisely the missing fundamental: a low 200 Hz tone combined with
higher tone contributions for the all-sine complex, and a 400 Hz tone
with a somewhat altered higher tone contribution – which I can
characterize as a change in timbre – for the alternating complex.

So now I can draw a number of conclusions from these results:

When I follow the existing hearing hypotheses or theory, I am
confronted with a serious anomaly:

It is clear that the implantation of the CI has done nothing at all to
my auditory brain functions.
However, under stimulation of my CI with the sound pressure signal, my
auditory cortex and the other brain areas involved in sound perception
do not produce audible missing fundamentals from that sound pressure
signal.

I can only draw the anomalous conclusion that the missing fundamental
information must already be present in the stimulus before any signal
is transferred to the brain.
Hence it must be generated inside the cochlea, and not in the brain.

But when I follow my hearing concept, in which the non-stationary
Bernoulli effect transforms the incoming sound pressure stimulus into
the sound energy stimulus in front of the basilar membrane, no anomaly
exists.

May I remark that the non-stationary Bernoulli effect is a physically
correct solution of the Navier-Stokes equation for a non-viscous
alternating potential flow in an incompressible fluid? These flow
conditions exist in the cochlear duct.

Tamás, regarding your remark:
….In fact, some patients with single-sided deafness and CI in the deaf
ear report a perceived latency between the normal hearing and the CI
side, which does not seem to be of technical nature…..

I can give you the following answer:

Based on my experiments, from which it is clear to me that the CI
program does not generate the signals that are correct for normal
hearing, I also have strong doubts about your assumption that the
latency you mentioned is not of a technical nature.

Nathan, I agree with you regarding your remarks to Tamás:

….In terms of your  specific question with unilateral loss and cochlear
implants, I would be tempted to look at the engineering side of the
device, or possibly the settings of the implant programming, but you
mention you do not think the delay is a technical one….

As you can conclude with me from the results of the experiments
described above, it is not only a technical issue.

It is really fundamental in origin. It is related to the fact that for
a long time now the scientific hearing community has been fully
convinced that the cochlea transfers the sound pressure stimulus to the
brain, and that the brain applies nonlinear functions in its auditory
perception process.
From my experimental results, however, I conclude that the cochlea
performs the major nonlinear processing step: it transforms the sound
pressure stimulus, in two successive process steps – differentiation
followed by squaring – into the sound energy stimulus. And the latter
stimulus is transferred to the brain in a frequency-selective way.

In that case the at best 30 dB dynamic range of the CI processor would
also be doubled to 60 dB by the squaring step, which would bring the CI
dynamic range into balance with that of the normal hearing apparatus.
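
The arithmetic behind that doubling in a few lines (squaring an amplitude doubles its level in dB, since 20·log10(a²) = 2·20·log10(a)):

import numpy as np

a = 10 ** (np.linspace(0, 30, 4) / 20)   # amplitudes spanning a 30 dB range
print(20 * np.log10(a))                  # 0, 10, 20, 30 dB
print(20 * np.log10(a ** 2))             # 0, 20, 40, 60 dB: range doubled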

For a better perception of sound impressions via the CI, the
programming of the CI processor must be changed. And that is a
technical issue.

Maybe the conclusion from my experiments – that my CI processor is not
well programmed for this transfer of fundamentals, especially missing
fundamentals – can be of high value for Mandarin-speaking Chinese users
of a CI. In this tonal language fundamentals play a crucial role, and
until now achieving a good speech perception score in it has been
highly problematic. I know that algorithms have been developed, or are
in development, for extracting the fundamentals together with CIS
technology.

[See for instance:
N. Lan, K. B. Nie, S. K. Gao, and F. G. Zeng, "A Novel
Speech-Processing Strategy Incorporating Tonal Information for Cochlear
Implants," IEEE Transactions on Biomedical Engineering, vol. 51, no. 5,
May 2004.]

Nathan relating to your following remark:
….Another area to consider may be the idea of hemispheric connectivity.
In your example of a unilateral loss with the CI in the deaf ear, it may
be that the non-CI (and fully hearing ear) input is processing faster in
the brain than the CI input is. This is an extension of the concept that
auditory-deprivation impacts on plasticity ….

What do you think of the suggestion that a perception latency can be
observed on the CI-activated side, relative to the more or less
normally hearing other side, because the CI stimulus is fundamentally
incorrect, so that the brain needs more time to arrive at a correct
perception? This can be placed perfectly in the category of auditory
deprivation resulting in an impact on brain plasticity.

I want to close my remarks with the following:

Of course the scientific auditory community can state that my hearing
experiences with my CI and Phonak hearing aid in tone experiments are a
purely personal matter.
Firstly, you can say that I have heard everything erroneously, that I
have used the wrong arguments, and that my experiments do not meet the
high international standards you always use.
And secondly, you are right if you say my experiments are purely
subjective in origin.

My answer to the first comment will be:

I want to remind you of August Seebeck's question [dated 1844] in his
dispute with Ohm and Helmholtz:

Wodurch kann über die Frage, was zu einem Tone gehöre, entschieden
werden, als eben durch das Ohr?
(How else can the question of what belongs to a tone be decided but by
the ear?)

And to the second comment:

Collect the data from such experiments and show me that I am wrong by
doing the same experiments I have done. Do that with other subjects who
are equipped with a hearing aid for moderate hearing loss and a CI in
the deaf ear. If necessary and applicable, use up-to-date techniques
like auditory fMRI or high-resolution EEG methods to improve the level
of objectivity.

I have asked a few fellow CI users about their experiences with these
phenomena. Their answers made me very confident.


Willem Chr. Heerens