Dear Hugo, Niki, and everyone,
I agree with Niki that music perception in CI users is a complex topic, and that listening to music through a vocoder might be misleading. First, the vocoder should not be considered a tool to simulate the sound perceived through a CI, but rather to simulate speech intelligibility scores. In other words, when a normal-hearing (NH) person listens to vocoded speech, we cannot assume that they will have the same percept as a CI user, only that they will understand about the same proportion of words. For a given situation, if an NH listener understands 100% of a sentence and a CI user 50%, the same NH listener will also understand about 50% of the vocoded version.
It is very difficult for CI users to describe how they perceive sounds, as we lack the vocabulary for it. Just as for NH listeners, for a CI user the sound of a bird simply sounds like "a bird singing."
The only way is to ask CI users who have enough residual hearing in one ear to compare the same sound presented to the two ears. You can find some studies that do just that:
Lazard et al. (2012), https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0038687
Adel et al. (2019), https://www.frontiersin.org/articles/10.3389/fnins.2019.01119/full
Marozeau et al. (2020), https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0235504
Dorman et al. (2020), https://journals.sagepub.com/doi/full/10.1177/2331216520920079
Based on those studies' results, it seems there are as many answers as there are CI users in the world. Some will claim that a sound through a CI is exactly like in the normal ear, some that it sounds like white noise, and some that it sounds inharmonic.
Now to answer your question, we do not know how CI users will perceive music because they will all perceive it differently. However, we know that the sound processor will not send enough information to convey pitch cues properly (see here).
Although there are some star performers (see Maarefvand, 2013), it is pretty safe to assume that
most CI users will not perceive the melody. Nevertheless, as Niki mentioned, many CI users appreciate music and are engaged in musical activities. They can probably focus on different musical cues such as rhythm and dynamics. Similarly, many NH people can appreciate music without a clear tonal structure or defined melodies.
To be provocative, I will propose that, just as the vocoder is a good model for speech understanding, some contemporary music (like Boulez) can be a good model of how CI users can experience (not perceive!) music. And as with the music composed by Boulez, some people love it, and many people hate it. To support my point, we ran a study in which we asked NH and CI listeners to rate the musical tension of a piano piece by Mozart (Spangmose, 2019, www.frontiersin.org/articles/10.3389/fnins.2019.00987/full). Surprisingly, CI and NH listeners rated overall musical tension in a very similar way. We then repeated the task on a modified version of the piece in which all the notes were shuffled. Removing the melody had an important effect on the NH listeners' musical judgments, but none for the CI listeners. Furthermore, the CI listeners reported appreciating the piece with the original notes and with the random notes similarly.
In summary, for your project, you should look into atonal or purely rhythmical music. Good luck, Jeremy
On Mar 20, 2021 5:15 AM, "Vavatzanidis, Niki" <Niki.Vavatzanidis@xxxxxxxxxxxxxxxxxxxxxx> wrote:
Dear Hugo and all,
as a cognitive neuroscientist I’d like to add: don’t forget the brain! Jan made some very valid points about how much we can infer from vocoders from a technical point of view. Complicating things further is the fact that no two brains will interpret the sound in the same fashion. How much hearing experience you had before you get a CI is crucial for what you can extract from the CI signal. A person whose brain experienced decades of hearing and only a relatively short period of deafness before getting the implant will extract much more from the CI signal than someone whose brain has never learned to decode (audio) speech and gets an implant late in life, like your sister. Speech discrimination may come almost effortlessly in the first case, while it is out of reach for almost everybody in the latter case. The CI might still be useful because it informs you about environmental sounds (your child is crying in the next room, someone is addressing you from behind, etc.), but understanding speech without lip-reading is not a hope one should place on cochlear implantation if born deaf and not getting the implant early in life. With music, it will be similar in certain aspects. The interviews with CI users Kathy and Angela made for the CI hackathon that Alan sent around (https://cihackathon.com/docs/CI_interviews) describe very nicely, I think, how they are able to fill in missing information for songs they know from before their hearing loss (and which they are able to enjoy), and how this does not work for new pieces of music for which they have no “pre-CI” memory.
On the other hand, music enjoyment has so much to do with your own expectations that in one of our studies we found that those who had never experienced music before they got the CI actually tend to enjoy it much more than those who can compare it to “how it used to sound” before their hearing loss and who are disappointed by how different the music sounds with the CI (Hahne et al., 2020, doi: 10.1055/s-0040-1711102).
This is just to give you an idea of how diverse the experience of one and the same CI output may be depending on your individual history and how it has shaped your brain. Of course, there are more factors that shape what you hear with the CI (many related to the individual brain, others linked to the technology itself), but one’s hearing history is a very fundamental one. I for one would be very interested to hear your CI art project, maybe you could point me/us towards it when the time comes? That would be great!
All the best Niki
*********************************** Dr. rer. nat. Niki K. Vavatzanidis (she/her) Saxonian Cochlear Implant Center Dresden University Hospital Dresden Fetscherstr. 74 01307 Dresden Germany
niki.vavatzanidis(at)ukdd.de https://www.uniklinikum-dresden.de/scic/research
From: Jan Schnupp <jan.schnupp@xxxxxxxxxxxxxx>
Dear Hugo,
one thing you must appreciate is that, although there are a number of vocoders out there to simulate cochlear implants, and the one Alan recommended is perfectly fine, it is fundamentally impossible to give a true, veridical impression of the sensation cochlear implants create through acoustic stimulation of the normal cochlea. The main reason for this is that the mechanics of the cochlea links temporal stimulation patterns to places of stimulation, and CIs do nothing like that. Many established CI designs do not pay much attention to the precise temporal patterning of stimulus pulses, so CI users lose important cues for the pitch of complex sounds, for binaural scene analysis, and for spatial hearing. What exactly that means cannot be simulated with sound, although "vocoding techniques" give an impression. You may have seen the demo here which I like to use of a Beethoven sonata: http://auditoryneuroscience.com/prosthetics/music If you listen to the original, it is very clearly two instruments playing two distinct melodies. The vocoded version sounds much more like a single stream and the melody is much harder to appreciate, but the rhythm is unimpaired. That demo I made with a bit of simple Matlab code: a bank of bandpass filters followed by envelope extraction, and then I use the envelopes to modulate narrow-band noise. Happy to share the code, but it is pretty trivial.
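For anyone curious, the recipe Jan describes can be sketched in a few lines of Python (he mentions Matlab, but the structure is the same). This is only an illustrative sketch: the channel count, filter order, and log-spaced band edges below are my own assumptions, not the parameters of his original code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Noise vocoder: bandpass filter bank, envelope extraction,
    then each envelope modulates noise limited to the same band."""
    # Log-spaced channel edges between f_lo and f_hi (an assumption;
    # real CI processors use device-specific frequency maps)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                 # analysis band
        env = np.abs(hilbert(band))                # envelope via Hilbert transform
        noise = sosfiltfilt(sos, rng.standard_normal(len(x)))  # band-limited noise carrier
        out += env * noise
    # Match the output level to the input RMS
    out *= np.sqrt(np.mean(x**2) / np.mean(out**2))
    return out
```

Running any music file through this discards the fine temporal structure within each band, which is exactly why melody and pitch suffer while rhythm survives.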
Good luck with your public engagement artwork, and all the best to your sister.
Jan --------------------------------------- Prof Jan Schnupp
On Tue, 9 Mar 2021 at 13:15, Alan Kan <alan.kan@xxxxxxxxx> wrote: