Dear Hugo and all, as a cognitive neuroscientist I’d like to add: don’t forget the brain! Jan made some very valid points about how much
we can infer from vocoders from a technical point of view. Complicating things further is the fact that no two brains will interpret the sound in the same fashion. How much hearing experience you have had before you get a CI is crucial for what you can extract
from the CI signal. A person whose brain experienced decades of hearing and only a relatively short period of deafness before getting the implant will extract much more from the CI signal than someone whose brain has never learned to decode (audio) speech
and gets an implant late in life like your sister. Speech discrimination may come almost effortlessly for some in the first case, while it is out of reach for almost everybody in the latter case. The CI might still be useful because it informs you about environmental
sounds (your child is crying in the next room, someone is addressing you from behind, etc.), but understanding speech without lip-reading is not a hope one should pin on cochlear implantation for someone born deaf who did not receive the implant early in life. With music, it will be similar in certain aspects. The interviews with CI users Kathy and Angela made for the CI hackathon that Alan sent around (https://cihackathon.com/docs/CI_interviews) describe very nicely, I think, how they are able to fill in missing information for songs they know from before their hearing loss (and
which they are able to enjoy) and how this does not work for new pieces of music for which they do not have a “pre-CI” memory. On the other hand, music enjoyment has so much to do with your own expectations that in one of our studies we found that those who
have never experienced music before they got the CI actually tend to enjoy it much more than those who can compare it to “how it used to sound” before their hearing loss and who are disappointed by how different the music sounds with the CI (Hahne et al., 2020,
doi: 10.1055/s-0040-1711102). This is just to give you an idea of how diverse the experience of one and the same CI output may be depending on your
individual history and how it has shaped your brain. Of course, there are more factors that shape what you hear with the CI (many related to the individual brain, others linked to the technology itself), but one’s hearing history is a very fundamental one.
I for one would be very interested to hear your CI art project; maybe you could point me/us towards it when the time comes?
That would be great!

All the best,
Niki

***********************************
Dr. rer. nat. Niki K. Vavatzanidis (she/her)
Saxonian Cochlear Implant Center
Dresden University Hospital Dresden
Fetscherstr. 74
01307 Dresden
Germany
niki.vavatzanidis(at)ukdd.de
https://www.uniklinikum-dresden.de/scic/research

From: Jan Schnupp <jan.schnupp@xxxxxxxxxxxxxx>
Dear Hugo, one thing you must appreciate is that, although there are a number of vocoders out there to simulate cochlear implants, and the one Alan recommended is perfectly fine, it is nevertheless fundamentally impossible to give a true, veridical impression of the sensation cochlear implants create through acoustic stimulation of the normal cochlea. The main reason for this is that the mechanics of the cochlea links temporal stimulation patterns
to places of stimulation, and CIs don't do anything like that. Many established CI designs do not pay much attention to the precise temporal patterning of stimulus pulses, so CI users lose important cues for the pitch of complex sounds, for binaural scene analysis and for spatial hearing. What exactly that means cannot be simulated with sound, although "vocoding techniques" give an impression. You may have seen the demo here, which I like to use, of a Beethoven sonata: http://auditoryneuroscience.com/prosthetics/music
If you listen to the original, it is very clearly two instruments playing two distinct melodies. The vocoded version sounds much more like a single stream and the melody is much harder to appreciate, but the rhythm is unimpaired. I made that demo with a bit of simple Matlab code: a bank of bandpass filters followed by envelope extraction, and then I use the envelopes to modulate narrow-band noise. Happy to share the code, but it is pretty trivial.

Good luck with your public engagement artwork, and all the best to your sister.

Jan
---------------------------------------
Prof Jan Schnupp

On Tue, 9 Mar 2021 at 13:15, Alan Kan <alan.kan@xxxxxxxxx> wrote:
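For readers who want to experiment themselves, the vocoder recipe Jan describes (a bandpass filter bank, envelope extraction, then using each envelope to modulate narrow-band noise) can be sketched roughly as follows. This is a minimal illustration in Python with scipy rather than Jan's actual Matlab code; the channel count, filter order, and log-spaced band edges are illustrative assumptions, not the settings of any particular implant or of his demo:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Noise vocoder sketch: bandpass bank -> envelopes -> modulated noise."""
    # Log-spaced band edges across the analysis range (illustrative choice)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)                 # analysis band
        env = np.abs(hilbert(band))            # envelope extraction
        noise = rng.standard_normal(len(x))
        carrier = sosfilt(sos, noise)          # narrow-band noise carrier
        out += env * carrier                   # envelope-modulated noise
    return out / np.max(np.abs(out))           # normalise to +/-1
```

Running a recording through this discards the fine temporal structure within each band and keeps only the slow envelopes, which is why vocoded music loses melody and stream segregation while rhythm survives, just as in Jan's Beethoven demo.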