
Special Session of the San Diego ASA on speech perception



All,
Please pass this along to anyone who might be interested.

Jont Allen


162nd Meeting of the Acoustical Society of America, San Diego
 (Oct 31-Nov 4, 2011):

Special Session:
 "Fifty Years of Haskins Invariant Speech Features: Where Do We Stand?"

(Jointly proposed by Speech and Psychological and Physiological Acoustics)

The session will address the question of invariant features:
 *Do they exist?
 *Are they relevant to running (fluent) speech?
 *What is the role of context?

Organizers:
 Jont B. Allen (Univ. IL),
 Sandra Gordon-Salant (Univ. MD)
 Peggy B. Nelson (Univ. Minn)

ABSTRACT:
Since the early formulation of various speech features by Haskins Labs
(Liberman, Cooper, Delattre, and others), followed by the MIT studies
by Ken Stevens and his many students, we are still left with imprecise
details of these speech features, or worse, uncertainty about whether
they exist at all. There are many theories, and none seems definitive.
The original experiments have been repeated, but conclusions remain
open. Thus the question naturally arises: "What is the status of
invariant speech features: Do they exist, and if so, what are they?
If not, what is the alternative?"

This session will ask researchers to explore these quantitative
questions and address the question of invariant features. Do they exist? Are they
relevant to running (fluent) speech? What is the role of context? While
all of the questions cannot be addressed in one session, the question
of invariant features is highly relevant today. We need a snapshot of
opinions--a broad collection of views, as well as their relevance to
the hearing impaired and even cochlear implant listeners.

The committee will pick 6 to 8 abstracts that best fit the session
theme to fill the invited podium session. The remaining abstracts
will be presented as regular short (8 min) podium or poster
presentations following the special session.

*An abstract of not more than 200 words is required for each paper.
*All abstracts must be submitted online by June 27, 2011 (in 3 weeks):
http://acousticalsociety.org/meetings/next_meeting/san_diego/05_13_11_call_for_papers#13

Procedure: Go to
 http://acousticalsociety.org and click on "Submit Abstract" for the
San Diego meeting (http://pubpartner.aip.org/pasa/submission/welcome.jsp)
and enter the password: "San Diego".

If you wish to be considered for the special session you must forward a
plain-text copy of your ASA-submitted abstract to: JontAllen@xxxxxxxx
Your email should also provide details about why your abstract addresses
questions related to the topic of the special session. Please try to do
this by June 20 (in two weeks), to give the committee time to process
the abstracts.

To be included in the special session you must:
 1) Submit your abstract to the ASA, using the methods described above
    (Deadline: 6/27/11).
 2) Send an email to JontAllen@xxxxxxxx about why you feel your presentation
    is relevant to the special session (Soft deadline: 6/21/11).

The following is my abstract (to be submitted), included to help you
understand my personal view:

Title: Invariant acoustic cues of consonants in a vowel context
Author: Jont B. Allen

The classic JASA papers by French+Steinberg (1947), Fletcher+Galt
(1950), Miller+Nicely (1955) and Furui (1986) provided us with
detailed CV+VC confusions due to masking noise and bandwidth and
temporal truncations. FS47 and FG50 led to the succinct summary measure
known as the Articulation Index (AI), while MN55 first introduced
information theory into the analysis.
Allen and his students have repeated these classic experiments and
analyzed the error patterns for large numbers of individual utterances
[http://hear.beckman.illinois.edu/wiki/Main/Publications], and showed
that the averaging of scores removes critical details. Consonants
have binary acoustic features that are used by the auditory system
in decoding isolated CV consonants.  Masking a binary feature causes
the consonant error to jump to chance within some subgroup of sounds,
with an entropy determined by conflicting cues, typically present in
naturally spoken sounds.  These same invariant features are also used
when decoding sentences having varying degrees of context. A precise
knowledge of acoustic features has allowed us to reverse engineer
Fletcher's error-product rule (FG50), providing deep insight into the
workings of the AI. This knowledge is being applied to a better
understanding of idiosyncratic speech loss in hearing-impaired ears,
and to machine recognition of consonants.


-------------------------------------------------------------------------
Call for Papers and Submission procedure:
-------------------------------------------------------------------------
http://acousticalsociety.org/meetings/next_meeting/san_diego

-------------------------------------------------------------------------
Special Sessions:
-------------------------------------------------------------------------
http://acousticalsociety.org/meetings/next_meeting/san_diego/05_13_11_call_for_papers#1

-------------------------------------------------------------------------
Guidelines:
-------------------------------------------------------------------------
http://acousticalsociety.org/meetings/next_meeting/san_diego/05_13_11_call_for_papers#3

END

