
Re: [AUDITORY] Tool for automatic syllable segmentation



Hi Léo,
The following comments and references may help. Apologies for another long email: I'm trying to write unambiguously, and to identify hidden assumptions that can influence data and theory. (And I'm unsure exactly what you mean by "speech rate context cues, not 'directly informative' cues".)

Production
For production only (i.e. measuring acoustic properties regardless of their perceptual salience) the duration of spread depends on the language, dialect, rate and style of speech, the particular speech sound/phoneme, and the part of the syllable it is in (broadly, onset vs coda/end).
   E.g. 1) For language: English sounds tend to spread further than Spanish sounds.
        2) For particular speech sounds: English /s/ tends to be quite localised, whereas cues to stop voicing at the end of a syllable (lab vs lap) spread across the entire syllable, and /r/ has been measured up to 1000 ms before the acoustic nucleus of the corresponding phoneme. (That's /r/ the approximant, not the trill found in some English dialects.)
  A comprehensive study for standard southern British English:
   Coleman, J. S. (2003). Discovering the acoustic correlates of phonological contrasts. Journal of Phonetics, 31, 351-372. https://doi.org/10.1016/j.wocn.2003.10.001

Perception
Of course, whether these measurable acoustic correlates of phonemes are perceptually salient depends on all the above factors, as well as other very powerful ones.
    E.g.
 1) whether the word is recognizable without the phoneme in question (essentially, predictability from context, broadly defined)
              Richard Warren's phoneme restoration work - 'legislature'        
 2) listener competence and expectations
           Heinrich, A., Flory, Y., & Hawkins, S. (2010). Influence of English r-resonances on intelligibility of speech in noise for native English and German listeners. Speech Communication, 52, 1038-1055. https://doi.org/10.1016/j.specom.2010.09.009
  3) the particular type of speech and/or the position of the sound in the syllable and word (whose composition also matters). Many examples exist. Laura Dilley and colleagues' work is neat, though it would only work for the types of sounds she uses (unlikely for /s/ in those types of context), and it may be what you wanted to exclude as 'speech rate context cues'.
             Morrill, T. H., Dilley, L. C., McAuley, J. D., & Pitt, M. A. (2014). Distal rhythm influences whether or not listeners hear a word in continuous speech: Support for a perceptual grouping hypothesis. Cognition, 131, 69-74. https://doi.org/10.1016/j.cognition.2013.12.006
              Morrill, T. H., Heffner, C. C., & Dilley, L. C. (2015). Interactions between distal speech rate, linguistic knowledge, and speech environment. Psychonomic Bulletin & Review, 22, 1451-1457. https://doi.org/10.3758/s13423-015-0820-9
    

The above references all use meaningful speech. Alain has mentioned misperceptions when listening to speech in a language you don't know so well (see also the Heinrich paper above). There are many studies of this type of thing. Fun ones include so-called mondegreen perceptions of song lyrics, common in both native and cross-linguistic contexts, e.g. English 'all my feelings grow' heard as German 'Oma fiel ins Klo' ('grandma fell into the loo'). You can find serious research on what influences these effects.

Finally, a comment on Alain's email: yes, segmentation involves some weird assumptions, but it can still be valid for many questions, especially if you are working with relatively simple speech material (obstruent-vowel sequences in clear speech being the best example), or comparing like with like in different contexts. For this, there is usually more than one justifiable segmentation criterion. My own observations show that the particular criterion for segmentation rarely affects conclusions, as long as the chosen criterion is used reliably.
 An exception to this rule is measurements of English vowel duration following phonologically voiced vs voiceless syllable-initial stops (e.g. b vs p). For 50 or more years from the 1960s or 70s, we standardly measured the end of the stop/beginning of the vowel from the stop burst (as Pierre wrote in this thread). Nowadays many people measure from the onset of phonation, on the mistaken assumption that all vowels are always voiced/phonated, and that any aspiration belongs to the consonant. (It belongs to both.) The consequence is that vowels following heavily aspirated voiceless stops (English syllable-initial p t k) measure as much shorter than those following the (unaspirated) voiced stops, b d g.
    There is no 'right' segmentation criterion for this situation: what you do depends on why you are measuring, and what else you are measuring. The stop burst is better if your focus is on articulation, or if you want to pool durations regardless of initial stop voicing (it reduces the variance). Measuring from the onset of periodicity/voicing may be better if your interest is in rhythmic influences on perception.
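To make the size of that difference concrete, here is a toy calculation (the landmark times below are invented for illustration, not taken from any of the studies mentioned):

# Invented landmark times (seconds) for a syllable like 'pa'; illustrative only.
burst_time    = 0.000   # stop release burst
voicing_onset = 0.060   # end of aspiration / onset of periodicity
vowel_end     = 0.200   # acoustic offset of the vowel

dur_from_burst   = vowel_end - burst_time      # 0.200 s
dur_from_voicing = vowel_end - voicing_onset   # 0.140 s
print(dur_from_burst, dur_from_voicing)

The same vowel comes out 30% shorter under the second criterion, purely because of where the aspiration is assigned.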
This also illustrates Alain's and my wider points that segmentation criteria need to reflect the type of speech and the purpose of the segmentation.
I hope also that these comments illustrate the theoretical constraints that follow from using averages computed over wildly different phonological units and then generalised to natural connected speech, such as the claim that syllable duration is about 250 ms.

all the best
Sarah



On 2024-09-23 11:43, Léo Varnet wrote:

Dear all,

I'd like to take Alain's response as a starting point for another sub-thread of discussion. 

Alain, I assume that you are referring to the research on automatic phoneme classification based on temporal patterns, which typically uses a [-500 ms; +500 ms] window. I'm curious about the maximum distance a phonetic cue can be from the nucleus of the corresponding phoneme. Does anybody on the List have insights on this? In my own experiments I have observed that in some cases cues as far as 800 ms before the target sound can influence phoneme categorization -- but these were speech rate context cues, not "directly informative" cues.

Best

Léo


Léo Varnet - Chercheur CNRS
Laboratoire des Systèmes Perceptifs, UMR 8248
École Normale Supérieure
29, rue d'Ulm - 75005 Paris Cedex 05
Tél. : (+33)6 33 93 29 34
https://lsp.dec.ens.fr/en/member/1066/leo-varnet
https://dbao.leo-varnet.fr/

On 21/09/2024 at 11:51, Alain de Cheveigne wrote:
Curiously, no one has yet pointed out that "segmentation" itself is ill-defined. Syllables, like phonemes, are defined at the phonological level, which is abstract and distinct from the acoustics.

Acoustic cues to a phoneme (or syllable) may come from an extended interval of sound that overlaps with the cues that signal the next phoneme. I seem to recall papers by Hynek Hermansky that found a support duration on the order of 1s. If so, the "segmentation boundary" between phonemes/syllables is necessarily ill-defined. 

In practice, it may be possible to define a segmentation that "concentrates" information pertinent to each phoneme within a segment and minimizes "spillover" to adjacent phonemes, but we should not be surprised if it works less well for some boundaries, or if different methods give different results.

When listening to Japanese practice tapes, I remember noticing that the word "futatsu" (two) sounded rather like "uftatsu", suggesting that the acoustic-to-phonological mapping (based on my native phonological system) could be loose enough to allow for a swap. 

Alain 


On 20 Sep 2024, at 11:29, Jan Schnupp <000000e042a1ec30-dmarc-request@xxxxxxxxxxxxxxx> wrote:

Dear Remy,

It might be useful for us to know where your meaningless CV syllable stimuli come from.
But in any event, if you are any good at coding, you are likely better off computing parameters directly from the recorded waveforms and applying criteria to those. CV syllables have an "energy arc" such that the V is invariably louder than the C. In speech there are rarely silent gaps between syllables, so you may be looking at a CVCVCVCV... stream where the only "easy" handle on the syllable boundary is likely to be the end of the vowel, which should be recognizable by a marked decline in acoustic energy. You can quantify that with some running RMS value (perhaps after low-pass filtering, given that consonants rarely have much low-frequency energy). If that's not accurate or reliable enough, things are likely to get a lot trickier. You could look for voicing in a running autocorrelation as an additional cue, given that all vowels are voiced but only some consonants are.
How many of these do you have to process? If the number isn't huge, it may be quicker to find the boundaries "by ear" than to try to develop a piece of computer code. The best way forward really depends enormously on the nature of your original stimulus set.
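If you do go the coding route, a minimal sketch of the running-RMS idea in Python (numpy/scipy) might look like the following; the file name, filter cutoff, frame sizes and peak thresholds are only placeholders and would need tuning to your actual stimuli.

# Rough sketch: syllable nuclei from a low-passed running-RMS envelope.
# "cv_sequence.wav" is a placeholder name; assumes a mono recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, find_peaks

fs, x = wavfile.read("cv_sequence.wav")
x = x.astype(float) / np.max(np.abs(x))           # normalise

# Low-pass filter to favour vocalic (low-frequency) energy over consonants.
sos = butter(4, 1000, btype="low", fs=fs, output="sos")
x_lp = sosfiltfilt(sos, x)

# Running RMS in ~20 ms frames, hopped every 5 ms.
frame, hop = int(0.020 * fs), int(0.005 * fs)
n_frames = 1 + (len(x_lp) - frame) // hop
rms = np.array([np.sqrt(np.mean(x_lp[i*hop : i*hop + frame] ** 2))
                for i in range(n_frames)])
t = (np.arange(n_frames) * hop + frame / 2) / fs  # frame centre times (s)

# Local maxima of the envelope approximate vowel nuclei; the troughs between
# them approximate syllable boundaries (end of one vowel, onset of the next C).
nuclei, _ = find_peaks(rms, height=0.1 * rms.max(),
                       distance=int(0.100 / 0.005))   # nuclei >= 100 ms apart
print("estimated rate: %.2f syllables/s" % (len(nuclei) / (len(x) / fs)))
print("nucleus times (s):", np.round(t[nuclei], 3))

Onsets could then be taken as the envelope minima between successive nuclei.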

Best wishes,

Jan

---------------------------------------
Prof Jan Schnupp
Gerald Choa Neuroscience Institute
The Chinese University of Hong Kong
Sha Tin
Hong Kong

https://auditoryneuroscience.com
http://jan.schnupp.net


On Thu, 19 Sept 2024 at 12:19, Rémy MASSON <remy.masson@xxxxxxxxxx> wrote:
Hello AUDITORY list,
 We are attempting to do automatic syllable segmentation on a collection of sound files that we use in an experiment. Our stimuli are a rapid sequence of syllables (all beginning with a consonant and ending with a vowel) with no underlying semantic meaning and with no pauses. We would like to automatically extract the syllable/speech rate and obtain the timestamps for each syllable onset.
 We are a bit lost about which tool to use. We tried PRAAT with the Syllable Nuclei v3 script, the software VoiceLab, and the website WebMaus. Unfortunately, for each of them the estimated total number of syllables did not consistently match what we were able to count manually, despite adjusting the parameters.
 Do you have any advice on how to go further? Do you have any experience in syllable onset extraction?
 Thank you for your understanding,
 Rémy MASSON
Research Engineer
Laboratory "Neural coding and neuroengineering of human speech functions" (NeuroSpeech)
Institut de l’Audition – Institut Pasteur (Paris)