Subject: Re: [AUDITORY] Seeking advice on using ANF firing rate to resolve front-back confusion in sound localization model
From: Qin Liu <000003c563e12bd3-dmarc-request@xxxxxxxx>
Date: Mon, 3 Mar 2025 10:44:09 +0000

Dear Sathish,

Thank you for your warm response, and I apologize for my delayed reply.

Your understanding is correct. Currently, I can only predict whether the sound source is in the front or the back hemifield.

Could you please suggest any models or papers I could refer to that detail how to incorporate cross-frequency cues to accurately determine or classify whether a sound source is in front or behind?

Thank you very much for your help.

Best regards,

Qin

________________________________
From: Sathish Kumar <sathish.sreeni58@xxxxxxxx>
Sent: Wednesday, 26 February 2025 13:35:31
To: Qin Liu
Cc: AUDITORY@xxxxxxxx
Subject: Re: Seeking advice on using ANF firing rate to resolve front-back confusion in sound localization model

Dear Qin,

I agree with Prof. Jan Schnupp that some pre-assumptions are necessary for your problem. In one of my research projects, I simulated the auditory nerve responses of spatialized signals using the zilany2018 model, and their direction of arrival was estimated using the dietz2011 model (https://amtoolbox.org/amt-1.6.0/doc/models/dietz2011.php). However, I found that the responses were interpretable only for fine-structure data, not envelopes.

From my understanding, the Dietz model considers only sound sources within the frontal hemifield. For front-back localization, you could use cross-frequency integration to detect the high-frequency attenuation caused by the pinna, allowing you to determine whether the sound source is in the front or the back hemifield.
This can then be followed by the Dietz model to estimate the DoA, as ITDs and ILDs are nearly identical for front and back azimuths.

Best regards,
--
Sathish Kumar
PhD Research Scholar
Department of Audiology and SLP
Kasturba Medical College (MAHE), Mangalore
Mob: 9789447666 || E-mail: sathish.sreeni58@xxxxxxxx
LinkedIn: https://www.linkedin.com/in/sathish-kumar-0815121a9/
ResearchGate: https://www.researchgate.net/profile/Sathish-Kumar-163
GitHub: https://github.com/SATHish64103
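
[Editor's illustrative sketch] The first stage of the two-stage approach suggested above (classify front vs. back from the pinna's high-frequency attenuation, then hand the hemifield decision to the Dietz model for DoA estimation) can be sketched as follows. This is a minimal toy example, not the method used by either correspondent: the 4 kHz crossover frequency, the 0.5 energy-ratio threshold, and the simulated "back" source are all illustrative assumptions; a real classifier would integrate across auditory filter bands and calibrate against measured HRTFs.

```python
# Stage 1 only: front/back classification from high-frequency attenuation.
# Assumptions (not from the thread): 4 kHz crossover, 0.5 ratio threshold,
# and a crude spectral-shaping stand-in for the pinna shadow of a rear source.
import numpy as np

FS = 44100  # sample rate in Hz

def band_energy_ratio(x, fs=FS, crossover_hz=4000.0):
    """Ratio of spectral energy above the crossover to energy below it."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    high = spectrum[freqs >= crossover_hz].sum()
    low = spectrum[freqs < crossover_hz].sum()
    return high / (low + 1e-12)  # guard against silent low band

def classify_front_back(x, fs=FS, threshold=0.5):
    """'back' if high frequencies are strongly attenuated, else 'front'."""
    return "back" if band_energy_ratio(x, fs) < threshold else "front"

# Toy demo: broadband noise as a "front" source; the same noise with the
# high band attenuated by ~20 dB stands in for a "back" source (a real
# test would convolve the stimulus with measured front and back HRTFs).
rng = np.random.default_rng(0)
noise = rng.standard_normal(FS)  # 1 s of white noise
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), d=1.0 / FS)
spec[freqs >= 4000.0] *= 0.1  # attenuate everything above 4 kHz
back_like = np.fft.irfft(spec, n=len(noise))

print(classify_front_back(noise))      # front
print(classify_front_back(back_like))  # back
```

Once the hemifield is decided this way, the dietz2011 model can be run as usual to resolve azimuth within that hemifield, which is consistent with the observation above that ITDs and ILDs alone cannot separate front from back.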