Subject: Re: [AUDITORY] Measuring perceptual similarity
From: Sabine Meunier <meunier@xxxxxxxx>
Date: Wed, 21 Mar 2018 15:11:41 +0100
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

I agree with Daniel. If you want to measure similarity, MUSHRA will not be the best method, because it needs anchors. MUSHRA is principally designed to evaluate degradations of a signal, which means you need to know which signal is not degraded and which is the most degraded (they serve as the reference and the anchor, respectively).

Best

Sabine

On 20/03/2018 at 12:29, Oberfeld-Twistel, Daniel wrote:
>
> Thanks for sharing the references!
>
> In my view, MUSHRA cannot be recommended for studying musical similarity.
>
> The method is designed to identify differences between stimuli on a defined dimension (which is audio quality in the MUSHRA recommendation, although this rating method could also be used to evaluate other perceptual dimensions).
>
> In the MUSHRA method, however, listeners are NOT asked to rate the similarity of the stimuli. While in principle information about similarity could be deduced indirectly from the ratings obtained with MUSHRA (similar mean ratings = high similarity), this would require that you can specify the perceptual dimension on which your stimuli differ or are similar (say, rhythm, tempo, consonance/dissonance, mood, etc.).
>
> If that is not possible, the other approaches that were suggested, such as triadic tests or MDS, can be used *without* having to specify which exact dimension the similarity judgments should refer to, and can identify structures in the (dis)similarity ratings.
>
> In addition, I could imagine that the MUSHRA concepts of a high-quality "reference" and a low-quality "anchor" do not easily apply to the experiments you have in mind.
>
> Best
>
> Daniel
>
> ---------------------------------
> Dr. Daniel Oberfeld-Twistel
> Associate Professor
> Johannes Gutenberg-Universitaet Mainz
> Institute of Psychology
> Experimental Psychology
> Wallstrasse 3
> 55122 Mainz
> Germany
> Phone +49 (0) 6131 39 39274
> Fax +49 (0) 6131 39 39268
> http://www.staff.uni-mainz.de/oberfeld/
> https://www.facebook.com/WahrnehmungUndPsychophysikUniMainz
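For anyone who wants to try the triadic-test / MDS route Daniel mentions above: once you have a dissimilarity matrix, however it was collected, the ordination step itself is short. Below is a minimal sketch in Python, assuming NumPy and scikit-learn are installed; the 4x4 matrix is made up purely for illustration and does not come from any of the papers cited in this thread.

# Minimal MDS sketch on a precomputed dissimilarity matrix.
# The numbers in D are invented for illustration only.
import numpy as np
from sklearn.manifold import MDS

D = np.array([
    [0.0, 0.3, 0.8, 0.7],
    [0.3, 0.0, 0.6, 0.9],
    [0.8, 0.6, 0.0, 0.4],
    [0.7, 0.9, 0.4, 0.0],
])

# Non-metric MDS only assumes the rank order of the dissimilarities
# is meaningful, which is often the safer choice for perceptual data.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(D)   # one 2-D point per stimulus
print(coords)
print("stress:", mds.stress_)   # lower stress = better fit

Metric MDS is one keyword change (metric=True); whether two dimensions are enough is an empirical question you would check via the stress values.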
> From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxx> On Behalf Of Pat Savage
> Sent: Tuesday, March 20, 2018 6:19 AM
> To: AUDITORY@xxxxxxxx
> Subject: Re: Measuring perceptual similarity
>
> Dear list,
>
> Thanks very much for all of your responses. I'm summarizing below all the reference recommendations I received.
>
> I still want to read some of these more fully, but so far my impression is that the Giordano et al. (2011) paper gives a good review of the benefits and drawbacks of previous methods, but that since it was published, MUSHRA seems to have become the standard method for these types of subjective perceptual similarity ratings.
>
> Please let me know if I seem to be misunderstanding anything here.
>
> Cheers,
>
> Pat
>
> --
>
> Flexer, A., & Grill, T. (2016). The problem of limited inter-rater agreement in modelling music similarity. Journal of New Music Research, 45(3), 1–13.
>
> Susini, P., McAdams, S., & Winsberg, S. (1999). A multidimensional technique for sound quality assessment. Acta Acustica united with Acustica, 85, 650–656.
>
> Novello, A., McKinney, M. F., & Kohlrausch, A. (2006). Perceptual evaluation of music similarity. In Proceedings of the 7th International Conference on Music Information Retrieval. Retrieved from http://ismir2006.ismir.net/PAPERS/ISMIR06148_Paper.pdf
>
> Michaud, P. Y., Meunier, S., Herzog, P., Lavandier, M., & d'Aubigny, G. D. (2013). Perceptual evaluation of dissimilarity between auditory stimuli: An alternative to the paired comparison. Acta Acustica united with Acustica, 99(5), 806–815.
>
> Wolff, D., & Weyde, T. (2011). Adapting metrics for music similarity using comparative ratings. In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011), 73–78.
>
> Giordano, B. L., Guastavino, C., Murphy, E., Ogg, M., Smith, B. K., & McAdams, S. (2011). Comparison of methods for collecting and modeling dissimilarity data: Applications to complex sound stimuli. Multivariate Behavioral Research, 46, 779–811.
>
> Collett, E., Marx, M., Gaillard, P., Roby, B., Fraysse, B., & Deguine, O. (2016). Categorization of common sounds by cochlear implanted and normal hearing adults. Hearing Research, 335, 207–219.
>
> International Telecommunication Union. (2015). ITU-R BS.1534-3, Method for the subjective assessment of intermediate quality level of audio systems. Retrieved from https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1534-3-201510-I!!PDF-E.pdf
>
> Lavandier, M., Meunier, S., & Herzog, P. (2008). Identification of some perceptual dimensions underlying loudspeaker dissimilarities. Journal of the Acoustical Society of America, 123(6), 4186–4198.
>
> Dzhafarov, E. N., & Colonius, H. (2006). Reconstructing distances among objects from their discriminability. Psychometrika, 71(2), 365–386.
>
> Software:
>
> Various:
> http://www.ak.tu-berlin.de/menue/digitale_ressourcen/research_tools/whisper_listening_test_toolbox_for_matlab/
>
> MUSHRA:
> https://github.com/audiolabs/webMUSHRA
>
> Free-sorting:
> http://petra.univ-tlse2.fr/spip.php?rubrique49&lang=en
>
> ---
> Dr. Patrick Savage
> Project Associate Professor
> Faculty of Environment and Information Studies
> Keio University SFC (Shonan Fujisawa Campus)
> http://PatrickESavage.com
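Related to the free-sorting toolbox linked above: a common way to get a dissimilarity matrix out of sorting data is to count, for each pair of sounds, how many participants did not place them in the same group. A minimal sketch with made-up data, assuming NumPy and making no claim that this is what the toolbox itself computes internally:

# Turn free-sorting data into a co-occurrence-based dissimilarity matrix.
# One row per participant; entry i is the group label that participant
# gave to stimulus i (labels only need to be consistent within a row).
import numpy as np

sorts = [
    [0, 0, 1, 1, 2],   # invented data for illustration
    [0, 1, 1, 2, 2],
    [0, 0, 0, 1, 1],
]

n_stimuli = len(sorts[0])
co_occurrence = np.zeros((n_stimuli, n_stimuli))

for labels in sorts:
    labels = np.asarray(labels)
    # 1 where this participant placed the two stimuli in the same group
    co_occurrence += (labels[:, None] == labels[None, :]).astype(float)

# Dissimilarity = proportion of participants who did NOT group the pair.
dissimilarity = 1.0 - co_occurrence / len(sorts)
np.fill_diagonal(dissimilarity, 0.0)
print(dissimilarity)

The resulting matrix can then be fed directly into an MDS or cluster analysis such as the sketch earlier in this message.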
--
Sabine Meunier
Laboratoire de Mécanique et d'Acoustique
CNRS
4 impasse Nikola Tesla
CS 40006
13453 Marseille Cedex 13
France

Tel: +33 4 84 52 42 02
email: meunier@xxxxxxxx
http://www.lma.cnrs-mrs.fr/