[2nd CFP] IJCAI-95 Workshop on CASA
Dear member of the AUDITORY mailing list,
This is the second Call For Papers for the IJCAI-95 Workshop on
Computational Auditory Scene Analysis.
If you plan to submit a paper and/or attend the workshop,
please fill in the following reply form and send it to
casa-submission@media.mit.edu.
-------------------------------------------------------------------------
| Reply Form
|
-------------------------------------------------------------------------
Please send this form by email to casa-submission@media.mit.edu.
Name:
Affiliation:
Address:
Country:
Phone:
Fax:
e-mail:
Please check:
- I would like to attend the workshop
- I intend to submit a paper
--------------------------------------------------
Best wishes,
-------------------------------
Hiroshi G. Okuno
NTT Basic Research Laboratories
okuno@nuesun.ntt.jp
-------------------------------
********************************************************************
The WWW page for the CASA Workshop is
http://sound.media.mit.edu/~dfr/CASA.html
A PostScript version of the Call For Papers is available by anonymous FTP:
sail.stanford.edu: ~ftp/pub/okuno/casa/ijcai95-casa-cfp.ps.Z
or send an e-mail to M.Crawford@dcs.sheffield.ac.uk
with "ijcai-cfp" as the subject line.
********************************************************************
**********************************************************************
Call For Papers
IJCAI-95 Workshop on
Computational Auditory Scene Analysis
August 19-20, 1995, Montreal, CANADA
Two-day workshop
AI's interest in problems related to understanding sound has a
rich history, dating back to the ARPA Speech Understanding Project in
the 1970s. While a great deal has been learned from this and
subsequent speech understanding research, the goal of building systems
that can understand general acoustic signals (e.g. continuous speech
and/or non-speech sounds) from unconstrained environments is still
unrealized. Instead, there are now systems that understand "clean"
speech well in relatively noiseless laboratory environments, but that
break down in more realistic, noisier environments. As seen in the
"cocktail-party effect," humans (and other mammals) have the ability
to selectively attend to sound from a particular source, even when it
is mixed with other sounds. Computers also need to be able to decide
which parts of a mixed acoustic signal are relevant to a particular
purpose -- which part should be interpreted as speech, for example,
and which should be interpreted as a door closing, an air conditioner
humming, or another person interrupting.
Observations such as these have led a number of researchers to
conclude that research on speech understanding and on non-speech
understanding needs to be united within a more general framework. One
such framework is suggested by Bregman's recent book, "Auditory
Scene Analysis" (MIT Press, 1990). This work has inspired a number of
systems that attempt to model what is known about the human auditory
system. It has also encouraged researchers to explore general models
of the structure of sounds in order to deal with more realistic
acoustic environments. Researchers have also begun trying to
understand computational auditory frameworks as parts of larger
perception systems whose purpose is to give a computer integrated
information about the real world. Inspiration for this work ranges
from research on how different sensors can be integrated to models of
how the human auditory apparatus works in concert with vision,
proprioception, and other senses.
This workshop will provide a forum for researchers to compare
approaches to AI-oriented auditory scene analysis. We invite papers
from researchers active in all areas that have a bearing on this
complex and diverse field. Topics include, but are not limited to:
- Modeling Issues:
Cognitive Modeling
Low-level Auditory Models
Evaluation of Auditory Models from an Engineering Viewpoint
- Sound Understanding:
Auditory Stream Segregation
Multi-Modal Understanding (i.e. integration with other perceptual
systems)
Engineering Aspects of Psychoacoustics
- Architectural Issues:
Unified Architectures
Blackboard Architectures
Multi-Agent Paradigm
Hybrid Approaches to Top-Down/Bottom-Up Processing
- Control Issues:
Reactive/Planned Behavior
Adaptive Behavior
- Representational Issues:
Representation of Audition
Representation of Speech
Representation of Music
Unified Representation of (possibly Dynamic) Vision and Audition
- Applications:
Speech Understanding
Music Understanding
Multi-Modal Integration
Submissions:
============
Please submit a detailed abstract (approx. 1500 words) or a full paper
(limited to 5000 words) by February 20, 1995. If you can submit
electronically, please send your materials in plain, unformatted text or
PostScript to casa-submission@media.mit.edu. If you cannot
submit via e-mail, send five hard copies to David Rosenthal to
arrive by February 20, 1995. All submitted papers will be reviewed by the
workshop committee.
Time table:
===========
Papers due: February 20, 1995
Notification of Acceptance: March 15, 1995
Camera-ready copy due: April 20, 1995
Important Notice
================
Delegates should note that workshop participation is not possible
WITHOUT REGISTRATION for the main conference (International Joint
Conference on Artificial Intelligence, IJCAI-95).
Workshop Committee
==================
David Rosenthal (co-chair)
16 Surrey Road
Woburn, MA 01801
USA
dfr@media.mit.edu
Phone (617) 935-3644
Hiroshi G. Okuno (co-chair)
NTT Basic Research Laboratories
3-1 Morinosato-Wakamiya, Atsugi
Kanagawa, 243-01 Japan
okuno@nuesun.ntt.jp
Malcolm Crawford
University of Sheffield, UK
M.Crawford@dcs.sheffield.ac.uk
S. Hamid Nawab
Boston University, USA
hamid@engc.bu.edu
Malcolm Slaney
Interval Research, Inc., USA
malcolm@interval.com
**********************************************************************