Re: [AUDITORY] Tolerance time interval for piano performance recognition (Justin London )


Subject: Re: [AUDITORY] Tolerance time interval for piano performance recognition
From:    Justin London  <000000aef06ea760-dmarc-request@xxxxxxxx>
Date:    Sat, 30 Mar 2019 05:50:10 -0500
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

And here are a few more; the work of Caroline Palmer on melody leads in keyboard playing may be especially pertinent:

Palmer, C. (1996). On the assignment of structure in music performance. Music Perception, 14(1), 23-56.
Repp, B. H. (1996). Patterns of note onset asynchronies in expressive piano performance. Journal of the Acoustical Society of America, 100(6), 3917-3932.

-Justin London

> On Mar 29, 2019, at 5:15 AM, Huron, David <huron.1@xxxxxxxx> wrote:
>
> Federico,
>
> Regarding timing synchronization in music performance, the work of Rudolph Rasch is especially pertinent.
> If you're not already aware, the performance research from Sundberg and colleagues at KTH offers a wealth of information. Additional work is listed below.
>
> -David Huron
>
> Bjurling, J. (2007). Timing in piano music: A model of melody lead. Skolan för datavetenskap och kommunikation, Kungliga Tekniska högskolan.
> Goebl, W. (2001). Melody lead in piano performance: Expressive device or artifact? The Journal of the Acoustical Society of America, 110(1), 563-572.
> Hirsh, I. J. (1959). Auditory perception of temporal order. Journal of the Acoustical Society of America, 31(6), 759–767.
> Holmes, S. D., & Roberts, B. (2006). Inhibitory influences on asynchrony as a cue for auditory segregation. Journal of Experimental Psychology: Human Perception and Performance, 32(5), 1231.
> Llorens, A. (2017). Recorded asynchronies, structural dialogues: Brahms's Adagio Affettuoso, Op. 99ii, in the hands of Casals and Horszowski. Music Performance Research, 8.
> McGookin, D. K., & Brewster, S. A. (2004). Understanding concurrent earcons: Applying auditory scene analysis principles to concurrent earcon recognition. ACM Transactions on Applied Perception (TAP), 1(2), 130-155.
> Mellinger, D. K. (1991). Event formation and separation in musical sound (Doctoral dissertation, Department of Computer Science, Stanford University).
> Rasch, R. A. (1978). The perception of simultaneous notes such as in polyphonic music. Acustica, 40, 21–33.
> Rasch, R. A. (1979). Synchronization in performed ensemble music. Acustica, 43, 121–131.
> Rasch, R. A. (1981). Aspects of the perception and performance of polyphonic music (Unpublished doctoral dissertation). Utrecht, Netherlands: Elinkwijk BV.
> Rasch, R. A. (1988). Timing and synchronization in ensemble performance. In J. Sloboda (Ed.), Generative processes in music: The psychology of performance, improvisation, and composition (pp. 70–90). Oxford: Clarendon Press.
> Repp, B. H. (1996). The art of inaccuracy: Why pianists' errors are difficult to hear. Music Perception: An Interdisciplinary Journal, 14(2), 161-183.
> Saldanha, E. L., & Corso, J. F. (1964). Timbre cues and the identification of musical instruments. Journal of the Acoustical Society of America, 36, 2021–2026.
> Sheft, S. (2008). Envelope processing and sound-source perception. In Auditory perception of sound sources (pp. 233-280). Springer, Boston, MA.
>
>
> From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxx> on behalf of Federico Simonetta <federico.simonetta@xxxxxxxx>
> Sent: Thursday, March 28, 2019 7:37:03 AM
> To: AUDITORY@xxxxxxxx
> Subject: Tolerance time interval for piano performance recognition
>
> Dear list,
> I am a Ph.D. student in music informatics at the University of Milan. My project concerns piano performance analysis and score-informed piano transcription.
>
> I was wondering whether anyone here knows of studies on one or more of the following time tolerance intervals. I am interested in the threshold beyond which a listener can recognize, in a piano performance, that:
>
> * two or more onsets are not synchronous
> * two onsets occupy different timing positions with respect to the previous identical note's offset/onset
> * two or more offsets are not synchronous
> * two notes have different durations, in monophonic or polyphonic contexts
>
> I have found the following related paper, but it is rather old:
> E. F. Clarke, "The Perception of Expressive Timing in Music," vol. 51, no. 1, pp. 2–9, Jun. 1989.
>
> From this study, it seems that humans can detect differences between music performances even when the timing changes last only 20 ms. However, most research involving computational analysis of music performances (audio-to-score alignment and automatic music transcription) uses a threshold of 50 ms as the tolerance.
>
> I am wondering whether more recent, in-depth research has since been carried out.
>
> Thank you very much for any help!
>
> Cheers,
> federico
>
> ---
>
> Federico Simonetta, PhD student
>
> LIM - Music Informatics Laboratory
> Dept. of Computer Science
> University of Milano
> Via Celoria 18
> I-20133 Milano - ITALY
>
> Skype: federico_simonetta
> http://www.lim.di.unimi.it
> http://federicosimonetta.frama.io
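[Archive note: the 50 ms tolerance discussed in the thread is conventionally applied by matching each reference onset to at most one estimated onset within a symmetric time window, as in common transcription-evaluation toolkits. A minimal sketch of that idea in plain Python; the function name and greedy one-to-one matching strategy are illustrative assumptions, not any particular toolkit's implementation:]

```python
def match_onsets(ref_onsets, est_onsets, tol=0.05):
    """Greedy one-to-one matching of onset times (in seconds) within +/- tol.

    Returns (precision, recall, f_measure). NOTE: hypothetical helper for
    illustration; real evaluations often use optimal bipartite matching.
    """
    est = sorted(est_onsets)
    used = [False] * len(est)  # each estimate may match at most one reference
    hits = 0
    for r in sorted(ref_onsets):
        for j, e in enumerate(est):
            if not used[j] and abs(e - r) <= tol:
                used[j] = True
                hits += 1
                break
    precision = hits / len(est) if est else 0.0
    recall = hits / len(ref_onsets) if ref_onsets else 0.0
    denom = precision + recall
    f_measure = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f_measure
```

[With the 50 ms window, onsets at 0.00 s and 0.02 s count as the same event while 1.00 s vs 1.20 s do not; tightening `tol` toward the roughly 20 ms perceptual figure Federico cites makes the evaluation correspondingly stricter.]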


This message came from the mail archive
src/postings/2019/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University