Re: MDS-distances ("James J. Jenkins" )


Subject: Re: MDS-distances
From:    "James J. Jenkins"  <j3cube@xxxxxxxx>
Date:    Tue, 27 Jun 2006 10:21:02 -0400

----------MailBlocks_8C8680C0D7D9749_1738_269A_MBLK-M02.sysops.aol.com Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Dear List I don't know anything about timbre but I do know something about vowels and MDS. For many years we have known that MDS analyses of similarities of vowels in American English yield a very nice match to a First Formant-Second Formant space. You may or may not get the third formant coming out but F1 and F2 soak up almost all the variance. To my mind it is one of the success stories of MDS applications Sorry I can't give the references (I am in Australia at the moment) but ask Rob Fox at Ohio State or search the speech perception literature. Jim James J. Jenkins university of South Florida and Grad Center, City University of New York J3cube@xxxxxxxx -----Original Message----- From: John Bates <jkbates@xxxxxxxx> To: AUDITORY@xxxxxxxx Sent: Tue, 27 Jun 2006 09:44:15 -0400 Subject: Re: MDS-distances Dear List, I think Malcolm is on the right track with his idea of an auditory version of three-color vision. I think it can be done. The reason is that I've encountered this sort of perception while doing experiments in classifying timbre of pulsed sequences. I noticed this effect when listening to groups of all possible combinations of sequences of from 1 to 4 successive impulse waveshapes. The impulse shapes I used were either rectangular or overdamped sines with constant amplitude and with ordered time spacings. I found that equal spacings give a tonal timbre (or pitch) while aperiodic spacings give an atonal timbre. I noticed that I heard atonal timbre in terms of the vowels (/ee/, /ih/, /eh/,...../oo/.) By a series of tests with different combinations of shapes and intervals I could locate the combinations that produced the closest match to vowel centers. I found that the non-matching timbres would generally lie between adjacent centers. For example, if a sample were not an /ah/ it would fall between either /aa/ or /aw/, or it might move toward the back vowel /oo/ as in an umlaut. These results led me to suppose that a set of vowels could represent cardinal points in a timbre space that applies to human speech as well as to environmental sounds. (Why not? The vowels of the human vocal cavity can be heard in a wide variety of non-speech sounds.) This indicates that a vowel space could be defined perhaps in terms of one like the "RGB" space of color TV. For example, consider an "FMB" space with F for front-closed, M for middle-open, and B for back-closed. Couldn't this space include all of the variants of vowel sounds? Couldn't this also provide a more quantitative calibration of timbre than one based on things like spectral brightness and "bite?" If anyone feels like trying these tests, the key is to listen to the atonal waveshape groups either individually or in a stream, each one separated by more than 25 milliseconds. The 25 ms.separation reduces the possibility of mixing the sound of group repetition with group timbre. Note that the group of the vowel, /ee/, as the only tonal vowel, must contain at least three equally spaced impulses having a repetition rate that defines the third formant. Note also that whispered speech, or all kinds of environmental sounds including transients could be simulated by randomizing the separation and/or mixing of timbre and pitch. IAn example of the dichotomy of timbre and pitch can be shown in the experiment by E. Terhardt and H. Fastl, "Zum einfluss von stortonen un storgerauschen auf die tonhohe von sinustonen" Acoustica, vol. 25, pp53-61, 1971 This is a study of phase masking vs.amplitude where a 200 Hz tone masks a 400 Hz tone as its phase is varied in steps 0 to 360 degrees. I've been testing this experiment and have found that the timbre varies with corresponding waveshape changes although the two pitches are constant. My experiments are based on Manfred Schroeder's description in "Models of hearing," Proceedings of the IEEE, Vol. 63, No.9, September,1974. I'm looking for more information on how the experiment was run, since Schroeder's paper was only a summary. So far I haven't found much about it on the Internet. John Bates snip- >>Failed at what? Malcolm, I think you have missed the point. > >Fair enough.. We have different goals. I want a model of timbre >perception (for speech and music sounds) that rivals the three-color >model of color vision science. Spectral brightness and attack time >are not enough of an answer for me. > >I don't think the timbre interpolation work I've seen (the >vibrabone?) shows that we understand timbre space yet. As I >remember the data, the synthesized instrument was not on a >perceptual line directly between the source sounds. ________________________________________________________________________ Check out AOL.com today. Breaking news, video search, pictures, email and IM. All on demand. Always Free. ----------MailBlocks_8C8680C0D7D9749_1738_269A_MBLK-M02.sysops.aol.com Content-Type: text/html; charset="us-ascii" Content-Transfer-Encoding: 7bit <HTML><BODY><DIV style='font-family: "Verdana"; font-size: 10pt;'><DIV>Dear List</DIV> <DIV>I don't know anything about timbre but I do know something about vowels and MDS. For many years we have known that MDS analyses of similarities of vowels in American English yield a very nice match to a First Formant-Second Formant space. You may or may not get the third formant&nbsp;coming out but F1 and F2 soak up almost all the variance. To my mind it is one of the success stories of MDS applications</DIV> <DIV>Sorry I can't give the references (I am in Australia at the moment) but ask Rob Fox at Ohio State or search the speech perception literature.</DIV> <DIV>Jim</DIV> <DIV>James J. Jenkins</DIV> <DIV>university of South Florida and</DIV> <DIV>Grad Center, City University of New York</DIV> <DIV>&nbsp;<A href="mailto:J3cube@xxxxxxxx">J3cube@xxxxxxxx</A><BR>-----Original Message-----<BR>From: John Bates &lt;jkbates@xxxxxxxx&gt;<BR>To: AUDITORY@xxxxxxxx<BR>Sent: Tue, 27 Jun 2006 09:44:15 -0400<BR>Subject: Re: MDS-distances<BR><BR></DIV> <STYLE> .AOLPlainTextBody { margin: 0px; font-family: Tahoma, Verdana, Arial, Sans-Serif; font-size: 12px; color: #000; background-color: #fff; } .AOLPlainTextBody pre { font-size: 9pt; } .AOLInlineAttachment { margin: 10px; } .AOLAttachmentHeader { border-bottom: 2px solid #E9EAEB; background: #F9F9F9; } .AOLAttachmentHeader .Title { font: 11px Tahoma; font-weight: bold; color: #666666; background: #E9EAEB; padding: 3px 0px 1px 10px; } .AOLAttachmentHeader .FieldLabel { font: 11px Tahoma; font-weight: bold; color: #666666; padding: 1px 10px 1px 9px; } .AOLAttachmentHeader .FieldValue { font: 11px Tahoma; color: #333333; } </STYLE> <DIV class=AOLPlainTextBody id=AOLMsgPart_0_9b72db5a-19e8-4186-8a70-856823b6de2d>Dear List,&nbsp;<BR>&nbsp;<BR>I think Malcolm is on the right track with his idea of an auditory version of three-color vision. I think it can be done. The reason is that I've encountered this sort of perception while doing experiments in classifying timbre of pulsed sequences. I noticed this effect when listening to groups of all possible combinations of sequences of from 1 to 4 successive impulse waveshapes. The impulse shapes I used were either rectangular or overdamped sines with constant amplitude and with ordered time spacings. I found that equal spacings give a tonal timbre (or pitch) while aperiodic spacings give an atonal timbre. I noticed that I heard atonal timbre in terms of the vowels (/ee/, /ih/, /eh/,...../oo/.) By a series of tests with different combinations of shapes and intervals I could locate the combinations that produced the closest match to vowel centers. I found that the non-matching timbres would generally lie between adjacent centers. For example, if a sample were not an /ah/ it would fall between either /aa/ or /aw/, or it might move toward the back vowel /oo/ as in an umlaut.&nbsp;<BR>&nbsp;<BR>These results led me to suppose that a set of vowels could represent cardinal points in a timbre space that applies to human speech as well as to environmental sounds. (Why not? The vowels of the human vocal cavity can be heard in a wide variety of non-speech sounds.) This indicates that a vowel space could be defined perhaps in terms of one like the "RGB" space of color TV. For example, consider an "FMB" space with F for front-closed, M for middle-open, and B for back-closed. Couldn't this space include all of the variants of vowel sounds? Couldn't this also provide a more quantitative calibration of timbre than one based on things like spectral brightness and "bite?"&nbsp;<BR>&nbsp;<BR>If anyone feels like trying these tests, the key is to liste n to the atonal waveshape groups either individually or in a stream, each one separated by more than 25 milliseconds. The 25 ms.separation reduces the possibility of mixing the sound of group repetition with group timbre. Note that the group of the vowel, /ee/, as the only tonal vowel, must contain at least three equally spaced impulses having a repetition rate that defines the third formant. Note also that whispered speech, or all kinds of environmental sounds including transients could be simulated by randomizing the separation and/or mixing of timbre and pitch.&nbsp;<BR>&nbsp;<BR>IAn example of the dichotomy of timbre and pitch can be shown in the experiment by E. Terhardt and H. Fastl, "Zum einfluss von stortonen un storgerauschen auf die tonhohe von sinustonen" Acoustica, vol. 25, pp53-61, 1971 This is a study of phase masking vs.amplitude where a 200 Hz tone masks a 400 Hz tone as its phase is varied in steps 0 to 360 degrees. I've been testing this experiment and have found that the timbre varies with corresponding waveshape changes although the two pitches are constant. My experiments are based on Manfred Schroeder's description in "Models of hearing," Proceedings of the IEEE, Vol. 63, No.9, September,1974. I'm looking for more information on how the experiment was run, since Schroeder's paper was only a summary. So far I haven't found much about it on the Internet.&nbsp;<BR>&nbsp;<BR>John Bates&nbsp;<BR>&nbsp;<BR>snip-&nbsp;<BR>&nbsp;<BR>&gt;&gt;Failed at what? Malcolm, I think you have missed the point.&nbsp;<BR>&gt;&nbsp;<BR>&gt;Fair enough.. We have different goals. I want a model of timbre &gt;perception (for speech and music sounds) that rivals the three-color &gt;model of color vision science. Spectral brightness and attack time &gt;are not enough of an answer for me.&nbsp;<BR>&gt;&nbsp;<BR>&gt;I don't think the timbre interpolation work I've seen (the &gt;vibrabone?) shows that we understand timbre space yet. As I &gt;remember t he data, the synthesized instrument was not on a &gt;perceptual line directly between the source sounds.&nbsp;<BR>&nbsp;<BR></DIV><!-- end of AOLMsgPart_0_9b72db5a-19e8-4186-8a70-856823b6de2d --></DIV> <hr style="margin-top:10px;" /> <a href="http://pr.atwola.com/promoclk/100122638x1081283466x1074645346/aol?redir=http%3A%2F%2Fwww%2Eaol%2Ecom" target="_blank"><b>Check out AOL.com today</b></a>. Breaking news, video search, pictures, email and IM. All on demand. Always Free.<br /> </BODY></HTML> ----------MailBlocks_8C8680C0D7D9749_1738_269A_MBLK-M02.sysops.aol.com--


This message came from the mail archive
http://www.auditory.org/postings/2006/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University