Dear List,
I certainly agree that perception varies wildly
across CI users. There have been a number of controlled studies, and plenty of
anecdotal statements. Psychoacoustic measures can quantify how well a CI user perceives the grammars
of music. In the study that Bob referenced, the best of the 8 CI users could
only discriminate intervals of 4 semitones or more, which is clearly insufficient for
appreciating a simple melody (let alone a complex one). Moving beyond
tonal discrimination, we could analyze the ability to recognize chord groupings
and progressions as perceived by different CI users (if at all). But even
for the poor performers of the CI world, I think we should ask if it is possible
to relearn the appreciation of music.
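To put that 4-semitone figure in frequency terms: equal-tempered intervals are frequency ratios of 2^(n/12), so the best user's just-discriminable step is roughly a 26% change in frequency, while the 1-semitone steps that carry ordinary melodies are only about 6%. A quick sketch (the function name is my own, not from the study):

```python
# Frequency ratio of an equal-tempered interval of n semitones.
# (Illustrative helper; the name interval_ratio is mine.)
def interval_ratio(semitones):
    """Return the frequency ratio 2**(semitones/12)."""
    return 2 ** (semitones / 12)

print(round(interval_ratio(4), 3))   # 1.26  -> ~26% frequency change
print(round(interval_ratio(1), 3))   # 1.059 -> ~6% frequency change
```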
When I first received my implant (at age 13), I was
using the F0/F1 sound processing strategy. Speech reception was fair,
telephone discourse difficult, and music appreciation nil. I changed to the SPEAK
sound processing strategy at age 18, and I still use the same processor. Speech reception is
high, telephone discourse easy, and music appreciation high. I am willing
to attribute some of my current music appreciation to advancement in CI sound
processors. But there is so much more. For one, while working on my Ph.D., I ran
a line out of my computer straight into my processor and listened to music all day
long. My colleagues envied me because there was a ton of construction on the
floor that defeated their best noise-cancelling headphones. Anyway, I can
clearly recall my progress in music appreciation: beginning with tabla (a
very rich percussive instrument), moving to Indian classical (which generally
has one or two melodic instruments and does not incorporate chords), and then
moving to jazz and beyond (can you really go beyond jazz? bluegrass, maybe). So
there was definitely a relearning process, one that could be encouraged
in other CI users. I currently listen to a lot of 80's music, which is great for
reprogramming my brain since I remember well what the songs sound
like.
As for the noise vocoder, I agree that I can't know
how it sounds to Bob, and he can't know how it sounds to me. Bob Shannon never
visited my normal hearing childhood with CI simulation in tow. However, we can
analyze certain physical attributes of the CI simulators. For example, if the
output of a noise-excited vocoder is passed through a normal auditory filter bank model
followed by envelope extraction, the resulting envelopes will be "noisy" in
the sense of carrying higher-frequency modulation noise. But with a cochlear
implant (the real deal), the envelopes directly modulate their associated pulse
trains and are perhaps coded without such noise. For example, if I played a
single tone through the CI simulation, a normal hearing listener would hear a
continuously excited noise band. But an implantee would receive a continuously
excited pulse train. My area of expertise falls off rapidly as we move from the
electrodes into the surrounding tissue and on to the surviving auditory
nerve fibers. Yet I can't believe that a constant pulse train delivered from a
single electrode will produce a percept anything like the noise band heard by a NH
listener. Further, firing two electrodes together will certainly produce
interactions that two excited noise bands will not. Unfortunately, there are too many
issues here to simply meander on about, so again, I'll zip it.
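To make the "noisy envelopes" point concrete, here is a rough pure-Python sketch (nothing in it comes from an actual CI simulator; the sample rate, smoothing window, and tone frequency are my own arbitrary choices). It extracts envelopes by crude rectify-and-smooth from a steady tone carrier and from a Gaussian noise carrier, then compares how much each envelope fluctuates:

```python
# Illustrative sketch only: envelope of a steady tone vs. a noise band,
# using full-wave rectification plus a moving-average smoother.
import math
import random
import statistics

FS = 16000           # sample rate in Hz (arbitrary choice)
N = FS // 2          # half a second of signal
WIN = 160            # 10 ms moving-average smoothing window

def envelope(signal, win=WIN):
    """Full-wave rectify, then smooth with a moving average."""
    rect = [abs(x) for x in signal]
    out, acc = [], 0.0
    for i, v in enumerate(rect):
        acc += v
        if i >= win:
            acc -= rect[i - win]
        out.append(acc / min(i + 1, win))
    return out

def fluctuation(env, win=WIN):
    """Relative variability of the envelope (std / mean), skipping warm-up."""
    steady = env[win:]
    return statistics.pstdev(steady) / statistics.fmean(steady)

tone = [math.sin(2 * math.pi * 500 * t / FS) for t in range(N)]
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(N)]

print(fluctuation(envelope(tone)))   # essentially zero: steady envelope
print(fluctuation(envelope(noise)))  # clearly larger: "noisy" envelope
```

The noise carrier's envelope keeps fluctuating even after smoothing, which is the extra modulation noise I mean; a pulse train modulated directly by a clean envelope would not carry it.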
Has anybody thought of making music tailored for CI
users? If the low-end users can only distinguish large steps, then perhaps we
could compose with a new scale that includes only octaves and their associated
fifths?
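As a toy illustration of that scale idea (the function name and base frequency are mine): keeping only octaves and the just fifth within each octave gives alternating steps of 7 semitones (up to the fifth) and 5 semitones (fifth up to the next octave), both above a 4-semitone discrimination limit, though the 5-semitone step only barely so.

```python
# Hypothetical "CI-friendly" scale: octaves of a base frequency plus the
# just fifth (ratio 3:2) within each octave. Names and defaults are mine.
def ci_scale(base_hz, octaves=3):
    """Frequencies of octaves and their fifths above base_hz."""
    freqs = []
    for k in range(octaves):
        root = base_hz * 2 ** k
        freqs.append(root)          # octave root
        freqs.append(root * 1.5)    # just fifth above it
    freqs.append(base_hz * 2 ** octaves)   # close on the top octave
    return freqs

print(ci_scale(220.0, 2))   # [220.0, 330.0, 440.0, 660.0, 880.0]
```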
Sensimetrics Corporation
48 Grove St., Somerville, MA 02144
Tel: 617-625-0600 x240
Fax: 617-625-6612
email: raygold@xxxxxxxx
web: www.sens.com