[AUDITORY] Job opportunity: transforming hearing devices through electrophysiology and deep learning (Nick Lesica)


Subject: [AUDITORY] Job opportunity: transforming hearing devices through electrophysiology and deep learning
From:    Nick Lesica  <lesica@xxxxxxxx>
Date:    Mon, 21 Feb 2022 16:11:16 +0000

*Job opportunity: transforming hearing devices through electrophysiology and deep learning*

*What's the problem?*

Hearing loss accounts for a larger share of global disability than almost any other condition. It cannot yet be cured and existing technology often fails to help. Current hearing aids are limited in their ability to provide real-world benefit: speech in background noise remains difficult to understand, multi-talker environments are hard to parse, and music is hopelessly distorted. The key challenge in hearing aid design is fundamental: hearing loss is a complex nonlinear problem and our current understanding of it is too superficial to provide a basis for hand-designed solutions.

*What's the opportunity?*

Fortunately, there is hope: recent advances in deep learning and auditory neuroscience have opened up the opportunity to solve the problem empirically. Hearing relies on the information about sound that is encoded in the brain's neural activity patterns. If hearing is impaired by hearing loss, it is because the details of these activity patterns have been distorted. The ideal hearing aid would correct these distortions by transforming incoming sounds such that, when processed by the impaired ear, they elicit the same neural activity patterns as the processing of the original sounds by a healthy ear. If this ideal can be achieved, hearing will be restored to normal. We can therefore reframe the design of hearing aids as an optimization problem in which the goal is to find the sound input that produces a desired auditory experience by eliciting the required neural activity.

This is, of course, easier said than done. But we have spent the past several years developing a unique capability for recording large-scale neural activity data with high spatiotemporal resolution. We are now ready to use this resource to train deep learning models that link sound to perception via neural activity and to develop them into transformative applications. We are supported by the UK's medical and engineering research councils (MRC and EPSRC) and are working in partnership with the Royal National ENT Hospital and Perceptual Technologies, a startup formed to bring our technology to market.

You can read more about our plans and the research behind them at lesicalab.com.

*Who are we looking for?*

We are looking for experts who are interested in applying deep learning methods to large-scale neural data to develop the next generation of hearing technologies. We are recruiting multiple postdocs to join a team that will work together on different aspects of the problem. Each team member will have significant autonomy in contributing to a radically new approach to sensory device design.
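To make the optimization framing above concrete, here is a minimal sketch, assuming (purely for illustration) that differentiable encoding models f_healthy and f_impaired have been fitted to neural recordings and that the hearing aid transform g is itself a trainable network; the names and training loop below are hypothetical, not the project's actual pipeline:

    # Illustrative sketch only (all names hypothetical): train a sound
    # transform g so that the impaired ear's predicted response to g(x)
    # matches the healthy ear's predicted response to the original x.
    import torch
    import torch.nn.functional as F

    def fit_transform(g, f_healthy, f_impaired, sound_batches, lr=1e-4):
        # g, f_healthy, f_impaired: torch.nn.Module instances mapping
        # waveform batches to predicted neural activity patterns.
        f_healthy.requires_grad_(False)    # encoding models stay fixed
        f_impaired.requires_grad_(False)
        opt = torch.optim.Adam(g.parameters(), lr=lr)
        for x in sound_batches:
            target = f_healthy(x)          # desired activity pattern
            elicited = f_impaired(g(x))    # activity via the aided, impaired ear
            loss = F.mse_loss(elicited, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return g

In practice, the form of the encoding models and the distance between activity patterns would themselves be learned from the large-scale recordings described above; the mean-squared error here is only a placeholder.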
*What will you do?*

- Build data pipelines for processing large-scale neural activity recordings
- Develop deep learning models to map sounds to neural activity, and to transform sounds as required to create desired neural activity patterns
- Analyze neural recordings to assess the efficacy of new sound transformations in correcting neural distortions
- Work with audiologists to test the benefit of new sound transformations for listeners with hearing loss
- Develop prototype hearing aid algorithms in preparation for commercialization

*What will you bring?*

- PhD in statistics, machine learning, computer science, or a related discipline
- Experience working with high-dimensional datasets, and familiarity with modern deep learning workflows
- Strong software engineering skills, experience working with the Python data science stack (numpy, scipy, sklearn, pytorch, tensorflow, etc.)
- A knack for solving problems that are not amenable to off-the-shelf solutions
- Experience modifying existing computational tools to solve new problems

Other qualifications, such as knowledge of neuroscience or auditory processing, or experience working with time-series data, are desirable but not essential.

*What are we offering?*

- An opportunity to pursue a transformative solution to a major public health problem that has the potential to benefit hundreds of millions of people
- A chance to join a dynamic entrepreneurial team at one of the world's largest hearing research centers
- The potential to join a deep tech startup as part of the founding team upon project completion to commercialize the technology
- Salary at UCL Grade 7, which ranges from £36,770 to £44,338 per annum, plus opportunity for equity in Perceptual Technologies
- UCL staff benefits (see https://www.ucl.ac.uk/human-resources/pay-benefits/staff-benefits for more info)

Please get in touch if you would like to discuss these opportunities (lesica@xxxxxxxx). And please forward this announcement to anyone else who might be interested.

Nicholas A. Lesica, Ph.D.
Professor of Neuroengineering
Ear Institute
University College London


This message came from the mail archive
src/postings/2022/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University