
[AUDITORY] [CFP] Announcing the 2nd Clarity Enhancement Challenge for Hearing Aid Signal Processing (CEC2)



We are pleased to announce the launch of the second Clarity Enhancement Challenge for Hearing Aid Signal Processing (CEC2). The full challenge data and core development tools are now available. For details, see the challenge website and our GitHub repository.

Important Dates

Background

We are organising a series of machine learning challenges to advance hearing aid speech signal processing. Even if you’ve not worked on hearing aids before, we’ll provide the tools you need to apply your machine learning and speech processing algorithms to help people with hearing loss.

Although age-related hearing loss affects 40% of 55- to 74-year-olds, the majority of adults who would benefit from hearing aids don’t use them. A key reason is simply that hearing aids don’t provide enough benefit. In particular, speech in noise remains a critical problem, even for the most sophisticated devices. The purpose of the “Clarity” challenges is to catalyse new work to radically improve the speech intelligibility provided by hearing aids.

The series of challenges will consider increasingly complex listening scenarios. The first round focussed on speech in indoor environments in the presence of a single interferer; this second round extends the task to dynamic scenes with multiple interferers. Each round begins with a challenge on improving hearing aid processing; future challenges on modelling speech-in-noise perception will be launched at a later date.

The task

You will be provided with simulated scenes, each including a target speaker and interfering noise. For each scene, there will be signals that simulate those captured by a behind-the-ear hearing aid with three channels at each ear, and signals that simulate those captured at the eardrum without a hearing aid present. The target speech will be a short sentence, and the interfering noise will be speech, music, domestic appliance noise, or a combination of up to three sources of those types. The scenes will be dynamic: the simulated listener starts facing away from the target and then turns to face towards, but not exactly at, the target.
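
To make that input format concrete, here is a minimal Python sketch of loading one scene’s signals. The directory layout, scene identifier, and file names below are illustrative assumptions, not the official data format (see the GitHub repository for that):

    import soundfile as sf
    from pathlib import Path

    # Assumed layout: one stereo WAV per behind-the-ear microphone pair
    # (CH1-CH3, left and right ears) plus an eardrum reference (CH0).
    scene_dir = Path("clarity_data/dev/scenes")   # hypothetical path
    scene_id = "S06001"                           # hypothetical scene ID

    bte_signals = {}
    for ch in (1, 2, 3):
        # Each file holds (n_samples, 2): left-ear and right-ear microphone ch.
        signal, fs = sf.read(scene_dir / f"{scene_id}_mix_CH{ch}.wav")
        bte_signals[ch] = signal

    # Signal captured at the eardrum without a hearing aid present.
    eardrum, _ = sf.read(scene_dir / f"{scene_id}_mix_CH0.wav")

    print(f"3 BTE channel pairs at {fs} Hz, "
          f"{bte_signals[1].shape[0] / fs:.1f} s per scene")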

The task will be to deliver a hearing aid signal processing algorithm that can improve the intelligibility of the target speaker for a specified hearing-impaired listener under these conditions. Initially, entries will be evaluated using an objective speech intelligibility measure. Subsequently, up to twenty of the most promising systems will be evaluated by a panel of listeners.
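
For a feel of how objective scoring works, the sketch below uses STOI (via the pystoi package) purely as a familiar stand-in; the challenge defines its own objective intelligibility measure, and the file names here are hypothetical:

    import soundfile as sf
    from pystoi import stoi

    # Hypothetical file names; the reference is the clean target speech.
    reference, fs = sf.read("S06001_target_ref.wav")
    processed, _ = sf.read("S06001_enhanced.wav")

    # STOI expects mono 1-D signals; take the left ear if stereo.
    if reference.ndim > 1:
        reference = reference[:, 0]
    if processed.ndim > 1:
        processed = processed[:, 0]

    # Truncate to the shorter signal before comparing.
    n = min(len(reference), len(processed))
    score = stoi(reference[:n], processed[:n], fs)
    print(f"STOI: {score:.3f}")  # higher means more intelligible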

We will provide a baseline system so that teams can choose to focus on individual components or to develop their own complete pipelines.
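
To show the shape of such a pipeline (this is not the provided baseline itself), here is a deliberately crude sketch: per-ear broadband gain from a “half-gain rule” over an assumed audiogram, followed by a soft clip. A real entry would substitute beamforming, noise reduction, and a proper prescription and compression stage:

    import numpy as np

    def enhance(front_mic, audiogram_l, audiogram_r):
        """Toy hearing aid pipeline: per-ear broadband gain plus soft clipping.

        front_mic    -- (n_samples, 2) stereo signal from the front BTE pair
        audiogram_*  -- hearing levels in dB HL across audiometric frequencies
        """
        out = np.empty_like(front_mic)
        for ear, levels in enumerate((audiogram_l, audiogram_r)):
            gain_db = np.mean(levels) / 2.0  # crude half-gain rule
            out[:, ear] = front_mic[:, ear] * 10 ** (gain_db / 20)
        return np.tanh(out)  # soft clip to keep samples in [-1, 1]

    # Example with a made-up mild-to-moderate loss:
    # processed = enhance(bte_signals[1], [30, 35, 40, 45, 50, 55],
    #                     [25, 30, 40, 50, 55, 60])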

What will be provided

Challenge and workshop participants will be invited to contribute to a journal Special Issue on the topic of Machine Learning for Hearing Aid Processing that will be announced next year.

For further information

If you are interested in participating and wish to receive further information, please sign up to the Clarity Challenge Google Group at https://groups.google.com/g/clarity-challenge

If you have questions, contact us directly at claritychallengecontact@gmail.com

Organisers (alphabetical)

Funded by the Engineering and Physical Sciences Research Council (EPSRC), UK

Supported by RNID (formerly Action on Hearing Loss), Hearing Industry Research Consortium, Amazon TTS Research




--
Professor Jon Barker,
Department of Computer Science,
University of Sheffield
+44 (0) 114 222 1824