Subject: Re: Music, emotion, memory of passages and content analysis (LSA)
From: affige yang <affige@xxxxxxxx>
Date: Sat, 28 Feb 2009 10:34:42 +0800
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Hi, Daniel:

I am a Ph.D. student in EECS. I don't know if it is relevant, but I have some preliminary results related to the automatic prediction of emotion values of music.

http://mpac.ee.ntu.edu.tw/~yihsuan/publication.html

Y.-H. Yang et al., "A regression approach to music emotion recognition," IEEE Transactions on Audio, Speech and Language Processing (TASLP), vol. 16, no. 2, pp. 448-457, Feb. 2008.

In this paper, we formulate music emotion recognition as a regression problem and predict the arousal and valence values (numerical values) of music. The ground-truth data needed to train an automatic regression model are obtained through a subjective test: subjects are asked to rate the arousal and valence values of a number of songs. Features are extracted from the audio signal to represent the songs, and support vector regression (SVR) is adopted to train the regression model. (A rough sketch of this kind of pipeline is included below.)
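In case it helps to make the setup concrete, here is a minimal sketch of such a pipeline in Python with scikit-learn. The features are random placeholders standing in for the extracted audio features, and the corpus size, feature dimension, and kernel settings are illustrative assumptions rather than the exact configuration used in the paper:

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-ins for extracted audio features and subject ratings (scaled to [-1, 1]).
n_songs, n_features = 200, 45                 # hypothetical corpus size / feature dimension
X = rng.normal(size=(n_songs, n_features))    # placeholder for per-song audio features
arousal = rng.uniform(-1.0, 1.0, n_songs)     # placeholder arousal ratings
valence = rng.uniform(-1.0, 1.0, n_songs)     # placeholder valence ratings

# One support vector regressor per emotion dimension, with feature scaling.
arousal_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
arousal_model.fit(X, arousal)
valence_model.fit(X, valence)

# Predict the (arousal, valence) point of an unseen song.
new_song = rng.normal(size=(1, n_features))
print(arousal_model.predict(new_song)[0], valence_model.predict(new_song)[0])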
Y.-H. Yang and H.-H. Chen, "Music emotion ranking," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing 2009 (ICASSP'09), Taipei, Taiwan, accepted.

The cognitive load of rating emotion may be too high. In this paper, we propose a ranking measure and ask subjects to annotate emotion in a comparative way. (A toy example of aggregating such comparisons is appended after my signature.)

Sincerely yours,

--
Yi-Hsuan Yang (Eric), Ph.D. candidate,
MPAC Lab,
Graduate Institute of Communication Engineering,
National Taiwan University.
http://mpac.ee.ntu.edu.tw/~yihsuan/
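As mentioned above, here is a toy Python example of turning comparative (pairwise) emotion judgments into a ranking. The aggregation is a simple win count and the song names are hypothetical; it only illustrates the comparative annotation idea, not the ranking measure proposed in the ICASSP paper:

from collections import defaultdict

# (preferred, other) pairs collected from subjects; song names are hypothetical.
comparisons = [("songA", "songB"), ("songA", "songC"),
               ("songB", "songC"), ("songD", "songA")]

wins = defaultdict(int)
songs = set()
for winner, loser in comparisons:
    wins[winner] += 1
    songs.update((winner, loser))

# Rank songs by how often they were preferred; ties broken alphabetically.
ranking = sorted(songs, key=lambda s: (-wins[s], s))
print(ranking)    # ['songA', 'songB', 'songD', 'songC']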