[CFP] IJCAI-95 Workshop on CASA (was: Sheffield Meeting) (Hiroshi G Okuno)


Subject: [CFP] IJCAI-95 Workshop on CASA (was: Sheffield Meeting)
From:    Hiroshi G Okuno  <okuno(at)NUESUN.NTT.JP>
Date:    Sun, 18 Dec 1994 15:32:19 +0900

Dear colleagues,

> We will circulate abstracts but there won't be any proper proceedings. The
> meeting was originally intended primarily for the UK community, but it's
> clear that we have struck a rich vein. Perhaps there should be some kind of
> follow-up, or a more formal occasion with proceedings. What do people think?
> - responses to me & I will summarise. There is the proposed Computational
> ASA workshop at IJCAI-95 to look forward to of course.

Yes, the proposal for the IJCAI-95 Workshop on Computational Auditory Scene Analysis has been accepted. Enclosed is a copy of the CFP. A PostScript version of the CFP is available at

    sail.stanford.edu: ~ftp/pub/casa/ijcai95-casa-cfp.ps

Please post and/or redistribute it to anyone who may be interested. I hope many people will submit a paper or attend the workshop.

Regards,
-------------------------------
Hiroshi G. Okuno
NTT Basic Research Laboratories
okuno(at)nuesun.ntt.jp
-------------------------------

**********************************************************************

                         Call For Papers

     IJCAI-95 Workshop on Computational Auditory Scene Analysis

              August 19-20, 1995, Montreal, CANADA
                        Two-day workshop

The interest of AI in problems related to understanding sounds has a rich history dating back to the ARPA Speech Understanding Project in the 1970s. While a great deal has been learned from this and subsequent speech understanding research, the goal of building systems that can understand general acoustic signals (e.g. continuous speech and/or non-speech sounds) from unconstrained environments is still unrealized. Instead, there are now systems that understand "clean" speech well in relatively noiseless laboratory environments, but that break down in more realistic, noisier environments.

As seen in the "cocktail-party effect," humans (and other mammals) have the ability to attend selectively to sound from a particular source, even when it is mixed with other sounds. Computers also need to be able to decide which parts of a mixed acoustic signal are relevant to a particular purpose -- which part should be interpreted as speech, for example, and which should be interpreted as a door closing, an air conditioner humming, or another person interrupting.

Observations such as these have led a number of researchers to conclude that research on speech understanding and on non-speech understanding needs to be united within a more general framework. One such framework is suggested by Bregman's recent book, "Auditory Scene Analysis" (MIT Press, '90). This work has inspired a number of systems that attempt to model what is known about the human auditory system. It has also encouraged researchers to explore general models of the structure of sounds in order to deal with more realistic acoustic environments.

Researchers have also begun trying to understand computational auditory frameworks as parts of larger perception systems whose purpose is to give a computer integrated information about the real world. Inspiration for this work ranges from research on how different sensors can be integrated to models of how the human auditory apparatus works in concert with vision, proprioception, etc.

This workshop will provide a forum for researchers to compare approaches to AI-oriented auditory scene analysis. We invite papers from researchers active in all fields which have a bearing on this complex and diverse field.
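As a toy illustration of the separation problem sketched above, the Python/NumPy fragment below mixes two pure tones and recovers each one with a crude frequency-domain mask. The sample rate, tone frequencies, and the 700 Hz split are arbitrary choices for the demonstration (not anything prescribed by the workshop), and the mask stands in for the far richer grouping cues -- common onset, harmonicity, spatial location -- that Bregman's account describes.

    # Toy sketch: two concurrent "sources" (pure tones) are mixed into one
    # signal, then each is recovered with a simple frequency-domain mask.
    # Illustrative only; real auditory scene analysis must handle overlapping
    # broadband sources (speech, doors, air conditioners) with richer cues.
    import numpy as np

    fs = 8000                                  # sample rate (Hz), demo choice
    t = np.arange(fs) / fs                     # one second of samples
    source_a = np.sin(2 * np.pi * 440 * t)     # "voice" A: 440 Hz tone
    source_b = np.sin(2 * np.pi * 1000 * t)    # "voice" B: 1000 Hz tone
    mixture = source_a + source_b              # what a single microphone hears

    # Attribute spectral energy to a source by frequency region -- a crude
    # stand-in for deciding which parts of the mixture belong to which source.
    spectrum = np.fft.rfft(mixture)
    freqs = np.fft.rfftfreq(len(mixture), d=1 / fs)
    mask_a = freqs < 700                       # bins attributed to source A
    estimate_a = np.fft.irfft(spectrum * mask_a, n=len(mixture))
    estimate_b = np.fft.irfft(spectrum * ~mask_a, n=len(mixture))

    # With well-separated tones the reconstruction error is tiny; overlapping
    # sources are the hard case this workshop is concerned with.
    print("error A:", np.max(np.abs(estimate_a - source_a)))
    print("error B:", np.max(np.abs(estimate_b - source_b)))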
The topics include, but are not limited to:

- Modeling Issues:
    Cognitive Modeling
    Low-Level Auditory Models
    Evaluation of Auditory Models from an Engineering Viewpoint

- Sound Understanding:
    Auditory Stream Segregation
    Multi-Modal Understanding (i.e. integration with other perceptual systems)
    Engineering Aspects of Psychoacoustics

- Architectural Issues:
    Unified Architectures
    Blackboard Architectures
    Multi-Agent Paradigm
    Hybrid Approaches to Top-Down/Bottom-Up Processing

- Control Issues:
    Reactive/Planned Behavior
    Adaptive Behavior

- Representational Issues:
    Representation of Audition
    Representation of Speech
    Representation of Music
    Unified Representation of (possibly Dynamic) Vision and Audition

- Applications:
    Speech Understanding
    Music Understanding
    Multi-Modal Integration

Submissions:
============
Please submit a detailed abstract (approx. 1500 words) or a full paper (limited to 5000 words) by February 20. Those who can submit electronically should send their material as plain, unformatted text or as PostScript to casa-submission(at)mit.media.edu. Those who cannot submit via e-mail should send five hard copies to David Rosenthal, to arrive by February 20, 1995. All submitted papers will be reviewed by the workshop committee.

Timetable:
==========
Papers due:                 February 20, 1995
Notification of acceptance: March 15, 1995
Camera-ready copy due:      April 20, 1995

Important Notice
================
Delegates should note that workshop participation is not possible WITHOUT REGISTRATION for the main conference (International Joint Conference on Artificial Intelligence, IJCAI-95).

Workshop Committee
==================
David Rosenthal (co-chair)
16 Surrey Road
Woburn, MA 01801 USA
dfr(at)media.mit.edu
Phone (617) 935-3644

Hiroshi G. Okuno (co-chair)
NTT Basic Research Laboratories
3-1 Morinosato-Wakamiya, Atsugi
Kanagawa, 243-01 Japan
okuno(at)nuesun.ntt.jp

Malcolm Crawford
University of Sheffield, UK
M.Crawford(at)dcs.sheffield.ac.uk

S. Hamid Nawab
Boston University, USA
hamid(at)engc.bu.edu

Malcolm Slaney
Interval Research, Inc., USA
malcolm(at)interval.com

********************************************************************
A PostScript version of this Call For Papers is available by anonymous FTP:
    sail.stanford.edu: ~ftp/pub/casa/ijcai95-casa-cfp.ps
********************************************************************
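One way to retrieve the PostScript file by anonymous FTP is sketched below using Python's standard ftplib; the host and path are those given in the announcement, and whether the server still carries the file is of course not guaranteed.

    # Minimal sketch of fetching the PostScript CFP by anonymous FTP.
    from ftplib import FTP

    with FTP("sail.stanford.edu") as ftp:
        ftp.login()                        # anonymous login
        ftp.cwd("pub/casa")                # ~ftp/pub/casa/ in the announcement
        with open("ijcai95-casa-cfp.ps", "wb") as out:
            ftp.retrbinary("RETR ijcai95-casa-cfp.ps", out.write)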

