
[AUDITORY] Pre-announcing the 2nd Clarity Enhancement Challenge for Hearing Aid Signal Processing - Launching 30th March




Preliminary details appear on the Clarity website and below. Full details will be released on the challenge launch date, 30th March, on a dedicated website. If you have questions, please contact us directly at claritychallengecontact@xxxxxxxxx

Important Dates 2022

30th March - Challenge launch including the release of train/dev data sets, mixing tools, full rules and documentation
End of April - Release of full toolset + baseline system
25th July - Evaluation data released
1st Sept - Submission deadline
Sept-Nov - Listening test evaluation period
Early Dec - Results announced at a Clarity Challenge Workshop; prizes awarded.

Background
We are organising a series of machine learning challenges to advance hearing aid speech signal processing. Even if you have not worked on hearing aids before, we will provide the tools you need to apply your machine learning and speech processing algorithms to help those with hearing loss.

Although age-related hearing loss affects 40% of 55-to-74-year-olds, the majority of adults who would benefit from hearing aids don’t use them. A key reason is simply that hearing aids don’t provide enough benefit. In particular, speech in noise remains a critical problem, even for the most sophisticated devices. The purpose of the “Clarity” challenges is to catalyse new work that radically improves the speech intelligibility provided by hearing aids. This is the second in a series of enhancement challenges addressing increasingly complex listening scenarios. The first round (CEC1) focused on speech in indoor environments in the presence of a single interferer. The new challenge extends CEC1 in several important respects: it models listener head motion, includes scenes with multiple interferers, and covers an extended range of interferer types.

The Task
You will work with simulated scenes, each including a target speaker and one or more interfering noise sources. For each scene, there will be signals simulating those captured by a behind-the-ear hearing aid with three microphones at each ear, and signals captured at the eardrum without a hearing aid present. The target speech will be a short sentence, and the interfering noises will be speech, domestic-appliance noise, or music samples.

The task will be to deliver a hearing aid signal processing algorithm that can improve the intelligibility of the target speaker for a specified hearing-impaired listener. Initially, entries will be evaluated using an objective speech intelligibility measure. Subsequently, up to twenty of the most promising systems will be evaluated by a panel of hearing-impaired listeners.
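As a rough illustration of what an objective evaluation loop might look like, one could score an enhanced signal against the clean target with a simple SNR-style measure. Note this is only an illustrative stand-in: the challenge's actual metric is a binaural, listener-specific intelligibility model (to be released with the baseline tools), and the function and signal names below are our own.

```python
import numpy as np

def snr_improvement_db(clean, noisy, enhanced):
    """Crude stand-in for an objective metric: SNR of the enhanced
    output minus SNR of the unprocessed input, both measured against
    the clean target. The real challenge uses a binaural speech
    intelligibility measure, which is far more listener-specific."""
    def snr(ref, sig):
        noise = sig - ref
        return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))
    return snr(clean, enhanced) - snr(clean, noisy)

# Toy example with synthetic signals (not challenge data)
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)                   # dummy target
noisy = clean + 0.5 * rng.standard_normal(16000)     # noisy input
enhanced = clean + 0.1 * rng.standard_normal(16000)  # pretend output
improvement = snr_improvement_db(clean, noisy, enhanced)
```

A real entry would replace the `enhanced` signal with the output of the hearing aid processing pipeline run on the six microphone channels.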

Prizes will be awarded for the systems achieving the best objective measure scores and for the best listening test outcomes.

We will provide a baseline system so that teams can choose to focus on individual components or to develop their own complete pipelines.

What will be provided
  • Evaluation of the best entries by a panel of hearing-impaired listeners.
  • Premixed speech + interferer scenes for training and evaluation.
  • A database of 10,000 spoken target sentences, and speech, noise and music interferers.
  • Listener characterisations, including audiograms and speech-in-noise testing.
  • Software including tools for generating additional training data, a baseline hearing aid algorithm, a baseline model of hearing impairment, and a binaural objective intelligibility measure.
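To give a flavour of how listener audiograms feed into hearing aid processing, here is a minimal sketch of frequency-dependent amplification using the simple "half-gain rule" (boost each frequency by half the measured hearing loss). This is purely illustrative: the challenge's baseline hearing aid algorithm and hearing-loss model are separate tools released with the challenge, and real fitting rules (e.g. NAL-R) are considerably more sophisticated.

```python
import numpy as np

def half_gain_filter(signal, fs, audiogram_freqs, audiogram_levels_db):
    """Apply crude half-gain amplification: boost each frequency by
    half the listener's hearing loss (in dB) at that frequency.
    Illustrative only -- not the challenge's baseline algorithm."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Interpolate the audiogram onto the FFT bins, then convert the
    # half-gain in dB to a linear amplitude factor per bin.
    loss_db = np.interp(freqs, audiogram_freqs, audiogram_levels_db)
    gain = 10.0 ** (0.5 * loss_db / 20.0)
    return np.fft.irfft(spectrum * gain, n)

# Example: a made-up mild sloping high-frequency loss
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)          # dummy input signal
audiogram_f = [250, 500, 1000, 2000, 4000, 8000]   # Hz
audiogram_db = [10, 15, 20, 35, 50, 60]            # dB HL (hypothetical)
y = half_gain_filter(x, fs, audiogram_f, audiogram_db)
```

The audiograms provided with the challenge characterise each listener's loss in exactly this per-frequency form, which is what allows entries to tailor their processing to a specified hearing-impaired listener.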
Challenge participants will be invited to present their work at a dedicated workshop to be held in early December (details TBC). There will be prizes for the best-performing systems. We will be organising a special issue of the journal Speech Communication to which participants will be invited to contribute.

For further information
Full details will be released on a dedicated website on the challenge launch date, 30th March. If you have questions, please contact us directly at claritychal...@xxxxxxxxx

Organisers
Michael A. Akeroyd, Hearing Sciences, School of Medicine, University of Nottingham
Jon Barker, Department of Computer Science, University of Sheffield
Will Bailey, Department of Computer Science, University of Sheffield
Trevor J. Cox, Acoustics Research Centre, University of Salford
John F. Culling, School of Psychology, Cardiff University
Lara Harris, Acoustics Research Centre, University of Salford
Graham Naylor, Hearing Sciences, School of Medicine, University of Nottingham
Zuzanna Podwinska, Acoustics Research Centre, University of Salford
Zehai Tu, Department of Computer Science, University of Sheffield

Funded by the Engineering and Physical Sciences Research Council (EPSRC), UK

Supported by RNID (formerly Action on Hearing Loss), Hearing Industry Research Consortium, Amazon TTS Research