Computational and cognitive models for audio-visual interactions
A one-day workshop in the English Peak District
March 11th, 2008
see: http://www.dcs.shef.ac.uk/~martin/AV/
In everyday situations and environments, our senses are flooded with
rich, multimodal input that must somehow be processed and responded
to in real time under attentional constraints. Consider, for example,
the sensory stimulation involved in cycling through a busy city,
watching a game of basketball, or simply attending a lecture. In such
dynamic environments, unimodal approaches based solely on auditory or
visual information are typically not robust; a combination of
modalities has far more potential. The purpose of this meeting is to
explore recent advances in cognitive and computational modelling of
audio-visual processes. Topics to be covered include attention,
tracking, integration and active exploration. Speakers include
representatives of the EU projects POP, AMI, DIRAC and BACS, amongst
others.
Organised by Martin Cooke (University of Sheffield) and Radu Horaud
(INRIA-Alpes).