Downloadable simultaneous speaker corpus for CASA and ICA research
Announcing ShATR on the Web: a downloadable corpus of multiple simultaneous
speakers.
http://www.dcs.shef.ac.uk/research/groups/spandh/projects/shatrweb/index.html
ShATR is a corpus of overlapped speech collected by the University of
Sheffield Speech and Hearing Research Group in collaboration with ATR Japan
in order to support research into computational auditory scene analysis and
independent component analysis. The task involved four participants working
in pairs to solve two crosswords. A fifth participant acted as a hint-giver.
Eight channels of audio data were recorded from the following sensors: one
close microphone per speaker, one omnidirectional microphone, and the two
channels of a binaurally-wired mannequin. Around 41% of the corpus contains
overlapped speech. In addition, a variety of other audio data was collected
from each participant. The entire corpus, which has a duration of around 37
minutes, has been segmented and transcribed at 5 levels: task structure,
nonspeech, sentence, word, and phone. ShATR has been available on CD-ROM since
1995, but the growing interest in challenging multi-speaker domains led to
the decision to release it on the web. The entire corpus is available, split
into convenient 1-minute chunks for each of the 8 channels.
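
For those fetching the web release, reassembling one chunk across all 8
channels is straightforward. Below is a minimal Python sketch; the local
directory name and the chunkNN_chM.wav naming scheme are hypothetical
placeholders, so check the corpus page above for the actual file layout.

    # Minimal sketch: load the 8 single-channel WAV files that make up
    # one 1-minute chunk of ShATR. File names here are hypothetical.
    import wave
    from pathlib import Path

    CORPUS_DIR = Path("shatr")   # hypothetical local download directory
    N_CHANNELS = 8               # 5 close mics + omni + binaural L/R

    def load_chunk(chunk_idx):
        """Return raw PCM bytes for each channel of one chunk."""
        channels = []
        for ch in range(1, N_CHANNELS + 1):
            path = CORPUS_DIR / f"chunk{chunk_idx:02d}_ch{ch}.wav"
            with wave.open(str(path), "rb") as w:
                channels.append(w.readframes(w.getnframes()))
        return channels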
Martin Cooke
Speech & Hearing Research
Department of Computer Science
University of Sheffield, UK
http://www.dcs.shef.ac.uk/~martin