Dear List,
The Organizing Committee of the Conference on Sound Perception
(CSP) is very pleased to invite you to attend the 1st CSP
Conference on 3–5 September 2021.
The conference is organized by the Department of Acoustics at
Adam Mickiewicz University in Poznań.
Due to the COVID-19 pandemic, the CSP will be held fully online
as a virtual conference.
Look through the Scientific Program and register here:
csp.amu.edu.pl
Our plenary lectures include, among others:
Avenues for improvement in hearing aids
Brian C.J. Moore, Department of Experimental
Psychology, University of Cambridge, UK
Abstract: Despite the advances in signal processing in hearing
aids over the past 20-30 years, hearing aids are still far from
restoring “normal” hearing. This partly reflects limitations of
impaired auditory systems, such as reduced frequency selectivity
and reduced sensitivity to temporal fine structure, but also
reflects limitations in the hearing aids themselves. Some very
basic limitations are:
(1) The gains achieved on real ears are often substantially
different from those programmed into the manufacturer’s
software, even when averaged over many test ears. In other
words, something is systematically wrong in the calibration of
the fitting systems. A very common problem is a failure to meet
target gains for frequencies above about 3 kHz.
(2) The compression ratios obtained on real ears are often
substantially different from (usually below) those programmed
into the manufacturer’s software. As a result, soft sounds
remain inaudible and strong sounds are too loud.
(3) Despite claims of wide bandwidth, most hearing aids are
unable to meet the fitting targets of methods like NAL-NL2 or
CAM2 for frequencies above about 4 kHz.
(4) The output of many hearing aids often falls off markedly for
frequencies below a few hundred Hz. This does not create severe
problems when listening to speech, but produces severe
degradations of sound quality for music.
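To make limitation (2) concrete, here is a minimal sketch (not from the
lecture itself) of how a compression ratio is derived from measured
input/output levels, and how a lower-than-programmed ratio leaves soft
sounds inaudible and loud sounds too loud; the level values are
illustrative assumptions, not measured data:

```python
def compression_ratio(input_levels_db, output_levels_db):
    """Compression ratio = change in input level / change in output level."""
    d_in = input_levels_db[-1] - input_levels_db[0]
    d_out = output_levels_db[-1] - output_levels_db[0]
    return d_in / d_out

# Programmed fit: a 30 dB input range (50-80 dB SPL) mapped onto a
# 15 dB output range -> compression ratio 2.0.
print(compression_ratio([50, 80], [65, 80]))  # 2.0

# Hypothetical real-ear measurement: the output range turns out wider
# than intended -> ratio 1.5, i.e. less compression than programmed,
# so soft inputs receive less gain than the fitting assumed.
print(compression_ratio([50, 80], [60, 80]))  # 1.5
```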
More subtle problems arise as side effects of the signal
processing in hearing aids. Processing such as multi-channel
amplitude compression, noise reduction, and adaptive
directionality changes the amplitude modulation patterns of the
signal and this can have adverse effects on speech
intelligibility and sound quality. For listening to music, many
hearing-impaired people prefer a linear amplifier with
high-quality headphones to their hearing aids. There is
increasing evidence that the intelligibility of speech in
background sounds is strongly affected by the amplitude
fluctuations in the background sounds, even for “steady” noise.
Improved models for predicting the intelligibility of speech in
fluctuating background sounds are needed to assess the
deleterious effects of the processing in hearing aids, and to
select parameters of the processing that minimise these
deleterious effects.
Much work is being conducted to develop “cognitively controlled”
hearing aids that selectively enhance the voice of the talker
whom the listener wishes to hear. The prospects for such devices
will be discussed.
The role of envelope cues in masking
Armin Kohlrausch, Chair of Auditory and Multisensory
Perception at Eindhoven University of Technology, The
Netherlands
Abstract: In this presentation I will give an overview of how
the thinking about envelope cues has evolved in the past
decades. I will focus the presentation on masking conditions
with a random noise masker and a tonal signal. Historically,
those conditions have been analyzed in terms of the change in
energy introduced by the addition of the signal. Data analysis
based on such energy detection led to the concepts of
critical bands and critical ratios, where signal thresholds were
measured in bandpass noises of varying bandwidths. Thresholds of
signals being placed in a spectral notch of a bandstop noise led
to the auditory filter concept and the ERB scale. Energy
detection models were challenged in the 1980s by a range of
paradigms. First, random variations (level rove) of the noise level
from interval to interval render the energy cue much less
effective, but human thresholds remain nearly unaffected,
suggesting the use of additional cues. This observation has
motivated the use of envelope-based cues, including the mean
envelope slope value (Richards, 1992), in tone-in-noise
detection. Second, in profile analysis with narrowband
stimuli, the profile changes in the modulation spectrum derived
from the stimulus envelope were proposed as a cue (Green et al.,
1992). And finally, in masking experiments with harmonic complex
tone maskers, the waveform changes enabled by choosing different
phase values for the individual components emphasized the role
of changes in the envelope structure instead of changes in the
overall energy. In this talk, I will present a unified
framework, in which results from these different paradigms can
be understood by a detection process making use of the same
detection cues.
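As background to the ERB scale mentioned in the abstract, the widely
used Glasberg and Moore (1990) formula gives the equivalent rectangular
bandwidth of the auditory filter as ERB = 24.7 · (4.37 · f/1000 + 1),
with f in Hz. A minimal sketch (added here for illustration, not part
of the abstract):

```python
def erb_hz(frequency_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter
    centred at frequency_hz, per Glasberg & Moore (1990)."""
    return 24.7 * (4.37 * frequency_hz / 1000.0 + 1.0)

# The auditory filter centred at 1 kHz is roughly 133 Hz wide.
print(round(erb_hz(1000)))  # 133
```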
See you there, fully online,
Organizing Committee