
[AUDITORY] Signal processing challenge launch: improving music for listeners with a hearing loss



First Cadenza Challenge launches

The Cadenza project (http://cadenzachallenge.org/) is organizing signal processing challenges for music and listeners with hearing loss. The purpose of the challenges is to catalyse new work to radically improve the processing of music for those with a hearing loss. 

Even if you’ve not worked on hearing loss before, we are providing the tools you need to apply your machine learning and music processing expertise and get going. 

The first round is focused on two scenarios: 

  • Task 1, listening over headphones. 
  • Task 2, listening in a car in the presence of noise. 

Your task is to improve the perceived audio quality of the reproduction, taking into account the listener's hearing loss. You might do this by making the lyrics clearer, correcting the frequency balance, ensuring the music has the intended emotional impact, etc. 

Task 1 – Live Now 

In Task 1, listeners hear music over headphones, and you are asked to improve the perceived audio quality of the reproduction given the listener's hearing loss. The listeners are not using their normal hearing aids; they listen only to the signals you provide via the headphones. The task is divided into two steps: demixing and remixing a stereo song.  

First, as in traditional music source separation challenges, you’ll need to employ machine learning approaches to demix a piece of stereo music into eight stems corresponding to the left and right channels of the vocal, bass, drums and other stems. However, unlike traditional challenges, the demixing needs to be personalized to the listener's characteristics, and the metric used to objectively evaluate the eight stems is HAAQI (the Hearing Aid Audio Quality Index) rather than SDR (Signal-to-Distortion Ratio). 
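For contrast with HAAQI, the traditional SDR metric is simple to compute from a reference stem and an estimate. The helper below is an illustrative sketch, not the challenge's evaluation code; HAAQI additionally needs the listener's audiogram, so it cannot be reduced to a one-liner like this.

```python
import numpy as np

def sdr(reference, estimate):
    """Signal-to-Distortion Ratio in dB (basic form): energy of the
    reference over energy of the residual error. Illustrative only."""
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))

# toy example: a sine "stem" and a slightly noisy estimate of it
ref = np.sin(np.linspace(0, 2 * np.pi, 1000))
est = ref + 0.01 * np.random.randn(1000)
print(f"SDR = {sdr(ref, est):.1f} dB")
```

A better estimate yields a higher SDR; HAAQI instead predicts perceived audio quality for a specific hearing-impaired listener.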

Then you’ll need to remix the signal in a personalized manner. This is the signal the listener will receive in their headphones.  
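The two-step demix/remix pipeline can be sketched as follows. This is a minimal illustration, not the baseline: the `demix` placeholder stands in for a trained source-separation model, and the per-stem gains are an assumed, simplistic form of personalization (a real system would derive processing from the listener's audiogram).

```python
import numpy as np

STEMS = ["vocal", "bass", "drums", "other"]

def demix(stereo):
    """Placeholder demixer: a real entry would run a trained
    source-separation model. Here each stem is just the input
    scaled so the four stems sum back to the mixture."""
    return {stem: stereo / len(STEMS) for stem in STEMS}

def remix(stems, gains):
    """Personalized remix: apply per-stem (left, right) gains and
    sum back to stereo. The gain values are assumptions for the
    sketch, not the challenge's prescription."""
    out = np.zeros_like(next(iter(stems.values())))
    for stem, sig in stems.items():
        out += sig * np.asarray(gains[stem])  # broadcast over channels
    return out

# toy stereo song: (samples, 2)
song = np.random.randn(44100, 2)
stems = demix(song)

# e.g. boost the vocal stem slightly for lyric clarity
gains = {"vocal": (1.5, 1.5), "bass": (1.0, 1.0),
         "drums": (1.0, 1.0), "other": (1.0, 1.0)}
mix = remix(stems, gains)
print(mix.shape)
```

With unit gains the remix reconstructs the original mixture, which is a useful sanity check for any demix/remix chain.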

Task 2 – Available next week

In Task 2, you need to enhance music samples played by the car stereo in the presence of engine and road noise. You will have access to the “clean” music (i.e., the music without noise), the listener's characteristics and metadata about the noise. Note that you won’t have access to the noise signal itself, only the metadata.  

In the evaluation stage, car noise and room impulse responses will be added to your signal, and listeners will use a fixed hearing aid. In this case, HAAQI is computed on the signal that results after the car noise and hearing aid processing have been applied.  
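One simple way to use noise metadata without the noise signal is to pre-compensate the music level before playback. The sketch below is an assumption-laden illustration: the metadata field `noise_level_db` is hypothetical (not the challenge's actual schema), and a real system would apply frequency-dependent gain, since engine/road noise mostly masks the low frequencies.

```python
import numpy as np

def enhance_for_car(music, noise_metadata):
    """Crude broadband pre-compensation driven by noise metadata.
    The metadata field and the 55 dB target below are assumptions
    made for this sketch."""
    noise_db = noise_metadata.get("noise_level_db", 60.0)
    headroom_db = max(0.0, noise_db - 55.0)      # crude SNR target
    gain = 10 ** (headroom_db / 20.0)
    return np.clip(music * gain, -1.0, 1.0)      # avoid clipping

music = 0.1 * np.random.randn(44100, 2)          # toy stereo sample
enhanced = enhance_for_car(music, {"noise_level_db": 65.0})
print(enhanced.shape)
```

Any such enhancement is ultimately judged by HAAQI after the noise and hearing aid are applied, so over-amplifying (and clipping) would hurt the score.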

Learning Resources 

At http://cadenzachallenge.org/docs/learning_resources/learning_intro, we provide a range of material designed to fill any gaps in your knowledge so you can enter the challenges. These materials cover hearing impairment, hearing aids for music, and guidelines for understanding audiograms. 

Evaluation 

Tasks 1 and 2 use HAAQI to evaluate the enhanced signals. Additionally, the best systems will go forward to be scored by our listening panel of people with hearing loss. 

Software and Datasets 

  • The software is shared in the https://github.com/claritychallenge/clarity GitHub repository.  
  • The Task 1 baseline is stored in the recipes/cad1 directory. 
  • You will find instructions on how to access the datasets both on the website and in the baseline recipe on GitHub.   

The Team 

  • Trevor Cox, Professor of Acoustic Engineering, University of Salford 
  • Alinka Greasley, Professor of Music Psychology, University of Leeds 
  • Michael Akeroyd, Professor of Hearing Sciences, University of Nottingham 
  • Jon Barker, Professor in Computer Science, University of Sheffield 
  • William Whitmer, Senior Investigator Scientist, University of Nottingham 
  • Bruno Fazenda, Reader in Acoustics, University of Salford 
  • Scott Bannister, Research Fellow, University of Leeds 
  • Simone Graetzer, Research Fellow, University of Salford 
  • Rebecca Vos, Research Fellow, University of Salford 
  • Gerardo Roa, Research Fellow, University of Salford 
  • Jennifer Firth, Research Assistant in Hearing Sciences, University of Nottingham 

Funders 

Cadenza is funded by EPSRC. Project partners are RNID; BBC R&D; Carl von Ossietzky University Oldenburg; Google; Logitech UK Ltd and Sonova AG. 



Trevor Cox
Professor of Acoustic Engineering
Newton Building, University of Salford, Salford M5 4WT, UK.
Tel 0161 295 5474
Mobile: 07986 557419
www.acoustics.salford.ac.uk
@trevor_cox