[AUDITORY] Call for abstracts for special session proposal at ICASSP - Aug 23 deadline (Laurie Heller )


Subject: [AUDITORY] Call for abstracts for special session proposal at ICASSP - Aug 23 deadline
From:    Laurie Heller  <hellerl@xxxxxxxx>
Date:    Fri, 19 Aug 2022 16:07:52 -0400

[Apologies for cross-posting.]

If you have an interest in both automatic and human sound/scene recognition, please read this invitation.

We are proposing a special session for the next IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), June 4-9, 2023.

The proposed special session will focus on the synergy between machine and human approaches to sound/scene recognition. One motivation is to inject domain knowledge from humans into machine learning. This encompasses many areas: for example, work in auditory physiology, psychoacoustics, and/or cognition could be applied to benefit automatic recognition, signal processing, metrics, or active/multimodal learning. Conversely, presenters could use machine learning, computational scene analysis, etc. to address issues in human auditory processing (e.g., understanding cognition, neural predictions, improving HRTFs).

If you'd like to present at this special session, please email a draft abstract (approx. 200 words) to Dr. Benjamin Elizalde (benjaminm@xxxxxxxx) by 5pm ET on Tuesday, August 23rd. Please also indicate if you are interested in helping to organize the session. Our special session proposal is due next week; if the session is approved, your paper would be submitted for peer review on October 19th.

Best wishes,
Laurie Heller, laurieheller@xxxxxxxx
Benjamin Elizalde, benjaminm@xxxxxxxx
Bhiksha Raj, bhiksha@xxxxxxxx


This message came from the mail archive
src/postings/2022/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University