Computational and cognitive models for audio-visual interactions (Martin Cooke)


Subject: Computational and cognitive models for audio-visual interactions
From:    Martin Cooke  <m.cooke@xxxxxxxx>
Date:    Wed, 30 Jan 2008 22:18:42 +0000
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Computational and cognitive models for audio-visual interactions

A one-day workshop in the English Peak District
March 11th, 2008

See: http://www.dcs.shef.ac.uk/~martin/AV/

In everyday situations and environments, our senses are flooded with rich, multimodal inputs which must somehow be processed and responded to in real time within the limits of attention. Consider, for example, the sensory stimulation involved in cycling through a busy city, watching a game of basketball, or simply attending a lecture. In such dynamic environments, unimodal approaches based solely on auditory or visual information are typically not robust; a combination of modalities has far greater potential.

The purpose of this meeting is to explore recent advances in cognitive and computational modelling of audio-visual processes. Topics to be covered include attention, tracking, integration and active exploration. Speakers include representatives of the EU projects POP, AMI, DIRAC and BACS, amongst others.

Organised by Martin Cooke (University of Sheffield) and Radu Horaud (INRIA-Alpes)

