Re: [AUDITORY] Seeking Advice on Auditory Models for Hearing Loss (Mathias Dietz)


Subject: Re: [AUDITORY] Seeking Advice on Auditory Models for Hearing Loss
From:    Mathias Dietz  <mathias.dietz@xxxxxxxx>
Date:    Sun, 15 Dec 2024 22:35:47 +0000

Dear Sathish and all,

I agree with Lorenzo. Their 3D Tune-In Toolkit is probably the best off-the-shelf solution for your purpose.

I have been working toward updating the Dietz et al. (2011) / Kayser et al. (2015) model - it's been a while. My long-term goal is predicting individual sound localization performance, including for hearing aid users. However, there is a long list of open questions. We cannot even properly model some aspects of sound localization in normal hearing. Beyond that, there is only a partial theory of the (average) influences of various types and degrees of hearing impairment on sound localization, let alone detailed models. As soon as asymmetric hearing loss or neurologic disorders are involved, we are still pretty much at square one. With respect to hearing aids, it is never quite certain whether your simulator misses an important proprietary feature, especially in the case of high-end binaural hearing aids, nor whether and how your patient has adapted to the hearing aid amplification. In asymmetric hearing loss this adaptation can be large, with substantial individual differences, as we measured recently (Zimmer et al. 2024; https://dx.doi.org/10.3205/zaud000045).

In summary, don't wait for my update; it will take some more years. Connecting a state-of-the-art front-end, as Dick suggested, is certainly going to help, but it is also not a straightforward task (see, e.g., Klug et al. 2020, https://doi.org/10.1121/10.0001602, which does not even have a localization back-end). And my guess is that any model or model combination is going to leave a lot of aspects unexplained (assuming you have a large and informative test battery as well as a diverse patient pool). These unexplained aspects will be very important to the field, because there is not even a dedicated list or comprehensive review of the things we cannot model.
However, I don't know if that is within your scope.

There is hope that DNNs are going to accelerate and/or strengthen this difficult endeavor (e.g., a combination of Saddler and McDermott 2024, https://www.nature.com/articles/s41467-024-54700-5, with Saddler et al. 2024, https://computationalaudiology.com/modeling-normal-and-impaired-hearing-with-artificial-neural-networks-optimized-for-ecological-tasks-2/).

If you or anyone wants to dig deeper into this, or if you have a specific question on how to modify or interface with my 2011 model, don't hesitate to contact me.

Best,
Mathias


This message came from the mail archive
postings/2024/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University