First Cadenza Challenge launches
The Cadenza project (http://cadenzachallenge.org/) is organizing signal processing challenges for music and listeners with hearing loss.
The purpose of the challenges is to catalyse new work to radically improve the processing of music for people with hearing loss.
Even if you have not worked on hearing loss before, we are providing the tools you need to apply your machine learning and music processing expertise and get going.
The first round is focused on two scenarios. In both, your task is to improve the perceived audio quality of the reproduction, taking the listener's hearing loss into account. You might do this to make the lyrics clearer, correct the frequency balance, ensure the music has the intended emotional impact, and so on.
Task 1 – Live Now
In Task 1, listeners are listening to music over headphones. They are not using their normal hearing aids; they hear only the signals you provide via the headphones. The task is divided into two steps: demixing and then remixing a stereo song.
First, as in traditional music source separation challenges, you will need to employ machine learning approaches to demix a piece of stereo music into eight stems: the left and right parts of the vocals, bass, drums and other stems. Unlike traditional challenges, however, the demixing needs to be personalized to the listener's characteristics, and the eight stems are evaluated objectively with the HAAQI (Hearing Aid Audio Quality Index) score rather than the SDR (Signal-to-Distortion Ratio).
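For reference, SDR, the metric that traditional separation challenges report and that HAAQI replaces here, is straightforward to compute. A minimal sketch (this is not the HAAQI code used for official scoring):

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray, eps: float = 1e-12) -> float:
    """Signal-to-Distortion Ratio in dB: energy of the reference
    over the energy of the estimation error."""
    err = reference - estimate
    return 10.0 * np.log10((reference ** 2).sum() / ((err ** 2).sum() + eps))

ref = np.sin(np.linspace(0.0, 100.0, 4000))
print(sdr(ref, 0.5 * ref))   # halving the amplitude costs ~6 dB
```

HAAQI, by contrast, passes both signals through a model of the listener's impaired hearing before comparing them, which is why it can be personalized to an audiogram while SDR cannot.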
Then, you’ll need to remix the signal in a personalized manner. This is the signal that the listener will receive in their headphones.
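The remixing step can be sketched very simply once the eight stems exist. The per-stem gains below are a placeholder for whatever personalization strategy you derive from the listener's characteristics; the function names and gain values are illustrative assumptions, not part of the challenge baseline:

```python
import numpy as np

STEMS = ["vocals", "bass", "drums", "other"]

def remix(stems: dict, gains: dict, peak: float = 0.99) -> np.ndarray:
    """Personalized remix sketch: apply a per-stem, per-channel gain,
    sum to stereo, and peak-normalize to avoid clipping.

    stems/gains: dicts keyed by (stem_name, "left"/"right").
    """
    n = len(next(iter(stems.values())))
    mix = np.zeros((2, n))
    for s in STEMS:
        for ch, row in (("left", 0), ("right", 1)):
            mix[row] += gains[(s, ch)] * stems[(s, ch)]
    top = np.abs(mix).max()
    if top > peak:                      # simple peak normalization
        mix *= peak / top
    return mix

# toy example with random "stems"; a listener-dependent boost on vocals
rng = np.random.default_rng(0)
stems = {(s, ch): 0.1 * rng.standard_normal(1000)
         for s in STEMS for ch in ("left", "right")}
gains = {k: 1.0 for k in stems}
gains[("vocals", "left")] = gains[("vocals", "right")] = 1.5  # clearer lyrics
out = remix(stems, gains)
```

In practice the gains (or a more sophisticated per-band processing) would be chosen from the listener's audiogram rather than hard-coded.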
Task 2 – Available next week
In Task 2, you need to enhance music samples played over the car stereo in the presence of noise from the engine and road. You will have access to the "clean" music (i.e., music without noise), the listener characteristics and metadata about the noise. Note that you won't have access to the noise signal itself, only the metadata.
In the evaluation stage, car noise and room impulse responses will be added to your signal, and listeners will be using a fixed hearing aid. HAAQI is therefore computed on the signal that results after the car noise and the hearing aid processing have been applied.
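Because only noise metadata is available, any enhancement has to be driven by that metadata rather than by the noise waveform. As a deliberately crude sketch, car noise is concentrated at low frequencies and grows with speed, so one could boost the low band of the music that would otherwise be masked. The speed-to-cutoff and speed-to-gain mappings here are illustrative guesses, not values from the challenge:

```python
import numpy as np

def compensate_for_car_noise(music: np.ndarray, fs: int, speed_kmh: float) -> np.ndarray:
    """Metadata-driven enhancement sketch: boost the low-frequency band
    likely to be masked by engine/road noise, then peak-normalize.
    (Cutoff and gain as functions of speed are hypothetical.)"""
    cutoff_hz = 200.0 + 2.0 * speed_kmh          # assumed mapping
    boost_db = 3.0 + 0.05 * speed_kmh            # assumed mapping
    spec = np.fft.rfft(music)
    freqs = np.fft.rfftfreq(len(music), 1.0 / fs)
    gain = np.ones_like(freqs)
    gain[freqs < cutoff_hz] = 10.0 ** (boost_db / 20.0)
    out = np.fft.irfft(spec * gain, n=len(music))
    top = np.abs(out).max()
    if top > 0.99:                               # avoid clipping
        out *= 0.99 / top
    return out

fs = 16000
t = np.arange(fs) / fs
music = 0.5 * np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
enhanced = compensate_for_car_noise(music, fs, speed_kmh=100.0)
```

A competitive system would of course learn this mapping from the provided metadata and listener characteristics rather than hard-code it.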
Learning Resources
At
http://cadenzachallenge.org/docs/learning_resources/learning_intro, we provide a range of material designed to fill any gaps in your knowledge and enable you to enter the challenges. These materials cover hearing impairment, hearing aids for music and guidelines for understanding audiograms.
Evaluation
Tasks 1 and 2 use HAAQI to evaluate the enhanced signals. Additionally, the best systems will go forward to be scored by our listening panel of people with hearing loss.
Funders
Cadenza is funded by EPSRC. Project partners are RNID; BBC R&D; Carl von Ossietzky University Oldenburg; Google; Logitech UK Ltd and Sonova AG.

Trevor Cox, Professor of Acoustic Engineering, University of Salford
www.acoustics.salford.ac.uk @trevor_cox