Subject: Re: auditory scrambling
From: Al Bregman <al.bregman@xxxxxxxx>
Date: Tue, 18 Dec 2007 19:24:48 -0500
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear Valeriy and list:

The perceptual effects of shuffled speech that you reported reminded me of an
experiment that Chris Darwin and a colleague did years ago, in which they
studied the effects of an instantaneous change in F0 from 101 Hz to 178 Hz in
the middle of a synthesized syllable:

Darwin, C. J., & Bethell-Fox, C. E. (1977). Pitch continuity and speech source
attribution. Journal of Experimental Psychology: Human Perception and
Performance, 3, 665-672.

The formant patterns changed smoothly between two vowels, and when the pitch
was a monotone, the transitions were heard as semivowels and liquids. But when
a discontinuity of pitch was introduced in the middle of the vowel transition,
the listeners heard two separate speech sources, saying stop consonants. These
consonants probably appeared because what came after the pitch change was
completely dissociated from what came before, so it sounded like the sudden
end of one vocalic sound and the sudden beginning of another. These offsets
and onsets were heard as consonants.

Valeriy, do you hear any spurious consonants when you listen to your
rearranged segments? I should mention that your discontinuities are more
severe than those of Darwin and Bethell-Fox, because you are breaking up
formant transitions as well as F0 trajectories, while D & B kept the formant
transitions intact.

Best,
Al

-------------------------------------------------------------------
Albert S. Bregman, Emeritus Professor
Psychology Department, McGill University
1205 Docteur Penfield Avenue
Montreal, QC, Canada H3A 1B1.
Tel: (514) 398-6103
Fax: (514) 398-4896
www.psych.mcgill.ca/labs/auditory/Home.html
-------------------------------------------------------------------

On Dec 18, 2007 1:13 PM, Valeriy Shafiro <Valeriy_Shafiro@xxxxxxxx> wrote:
> Hi Mathias,
>
> I did this some years ago in Matlab and also in Java (with fewer controls).
> I was trying to see how difficult it becomes to put speech, music, or scenes
> back together, depending on segment duration and number. But, unfortunately,
> it never got to a formal experiment. One curious effect I remember is that
> when you 'scramble' a speech utterance of a single talker into very short
> segments (I want to say 50-100 ms long, but I don't remember precisely now),
> you actually hear more than one talker, which I thought was not all that
> surprising given the discontinuities introduced during scrambling. If you
> are interested, I can check whether I still have the code.
>
> Best,
>
> Valeriy
>
> -------------------------------------------------------------
> Valeriy Shafiro
> Communication Disorders and Sciences
> Rush University Medical Center
> Chicago, IL
>
> office: (312) 942-3298
> lab: (312) 942-3316
> email: valeriy_shafiro@xxxxxxxx
>
> -----AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxx> wrote: -----
>
> To: AUDITORY@xxxxxxxx
> From: Mathias Oechslin <m.oechslin@xxxxxxxx>
> Sent by: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxx>
> Date: 12/18/2007 06:00AM
> Subject: auditory scrambling
>
> Dear list,
>
> Does anyone have experience with an automatic approach to "scramble"
> acoustic stimuli? That means, for example: first step, segmentation of a
> 4 s phrase into 10 segments of 400 ms; second step, rearrangement in a
> random order.
> An advanced implementation would allow one to define an arbitrary segment
> duration range (i.e., 50-400 ms) within which the script rearranges the
> file randomly.
>
> Thanks for any ideas,
> Mathias
>
> --
>
> **************************************************
> Mathias Oechslin
> Ph.D. student
> Department of Neuropsychology
> Institute for Psychology
> Binzmühlestrasse 14/25
> University of Zürich
> CH-8050 Zürich
> Switzerland
> http://www.psychologie.unizh.ch/neuropsy/
>
> m.oechslin@xxxxxxxx
> phone: +41 44 635 74 07
>
> **************************************************
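
[Editor's sketch] Since no code was shared in the thread itself, here is a
minimal sketch of the two-step procedure Mathias describes (segment, then
reorder randomly), plus the "advanced" variant with segment durations drawn
from a range. It is written in Python with numpy and soundfile rather than
the Matlab/Java versions Valeriy mentions; the function names, the file name
phrase.wav, and the default durations are illustrative assumptions, not
anyone's actual implementation.

import numpy as np
import soundfile as sf   # assumed I/O library; scipy.io.wavfile would also work


def scramble(signal, fs, seg_dur=0.4, rng=None):
    """Cut `signal` into consecutive segments of `seg_dur` seconds and
    return them concatenated in a random order (fixed segment length)."""
    rng = np.random.default_rng() if rng is None else rng
    seg_len = int(round(seg_dur * fs))
    n_segs = len(signal) // seg_len          # leftover samples are dropped
    segments = [signal[i * seg_len:(i + 1) * seg_len] for i in range(n_segs)]
    order = rng.permutation(n_segs)
    return np.concatenate([segments[i] for i in order])


def scramble_variable(signal, fs, min_dur=0.05, max_dur=0.4, rng=None):
    """Variant with each segment's duration drawn uniformly from
    [min_dur, max_dur] seconds before shuffling."""
    rng = np.random.default_rng() if rng is None else rng
    segments, pos = [], 0
    while pos < len(signal):
        seg_len = int(round(rng.uniform(min_dur, max_dur) * fs))
        segments.append(signal[pos:pos + seg_len])
        pos += seg_len
    order = rng.permutation(len(segments))
    return np.concatenate([segments[i] for i in order])


if __name__ == "__main__":
    # "phrase.wav" is a placeholder for any mono or multichannel recording.
    x, fs = sf.read("phrase.wav")
    sf.write("scrambled_400ms.wav", scramble(x, fs, seg_dur=0.4), fs)
    sf.write("scrambled_50_400ms.wav", scramble_variable(x, fs, 0.05, 0.4), fs)

Note that the abrupt splices are exactly the discontinuities discussed earlier
in the thread; if one wanted to separate the effect of reordering from the
effect of the cuts themselves, short raised-cosine crossfades between segments
could be added.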
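
[Editor's sketch] For readers unfamiliar with the Darwin and Bethell-Fox
stimuli that Al describes, the sketch below synthesizes a crude analogue: a
pulse-train source whose F0 steps instantaneously from 101 Hz to 178 Hz at the
midpoint, filtered by formant resonators whose centre frequencies glide
smoothly between two vowel-like settings. Only the 101 Hz and 178 Hz values
come from the description above; the formant values, bandwidths, duration, and
the simple source-filter design are illustrative assumptions, not the
synthesizer used in the 1977 study.

import numpy as np
import soundfile as sf   # only needed to write the result to disk

fs = 16000                      # sample rate (assumed)
dur = 0.6                       # syllable duration in seconds (assumed)
n = int(fs * dur)

# Glottal-pulse source whose F0 steps from 101 Hz to 178 Hz at the midpoint.
f0 = np.where(np.arange(n) < n // 2, 101.0, 178.0)
phase = np.cumsum(f0 / fs)
source = np.diff(np.floor(phase), prepend=0.0)   # one unit impulse per pitch period

# Formant tracks gliding linearly between two vowel-like settings
# (roughly /a/ to /i/; the exact values are illustrative, not from the paper).
t = np.linspace(0.0, 1.0, n)
formant_tracks = [
    (700.0 * (1 - t) + 300.0 * t, 90.0),     # F1 centre frequency track, bandwidth (Hz)
    (1200.0 * (1 - t) + 2300.0 * t, 110.0),  # F2 centre frequency track, bandwidth (Hz)
]


def resonate(x, freq_track, bw, fs):
    """Two-pole resonator whose centre frequency is updated every sample."""
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    r = np.exp(-np.pi * bw / fs)
    for i in range(len(x)):
        theta = 2.0 * np.pi * freq_track[i] / fs
        a1, a2 = -2.0 * r * np.cos(theta), r * r
        y[i] = (1.0 - r) * x[i] - a1 * y1 - a2 * y2
        y2, y1 = y1, y[i]
    return y


out = source
for track, bw in formant_tracks:
    out = resonate(out, track, bw, fs)
out = 0.9 * out / np.max(np.abs(out))
sf.write("f0_jump_demo.wav", out, fs)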