[AUDITORY] Deep Cross-Modal Correlation Learning for Audio and Lyrics (Yi Yu)


Subject: [AUDITORY] Deep Cross-Modal Correlation Learning for Audio and Lyrics
From:    Yi Yu  <yi.yu.yy@xxxxxxxx>
Date:    Thu, 30 Nov 2017 19:08:48 +0900
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear colleagues,

I would like to share one of my recent works, "Deep Cross-Modal Correlation Learning for Audio and Lyrics", with you at https://arxiv.org/abs/1711.08976 .

Little research has focused on cross-modal correlation learning that takes into account the temporal structures of different data modalities such as audio and lyrics. Because music is inherently temporal, we are motivated to learn the deep sequential correlation between audio and lyrics. In this work, we propose a deep cross-modal correlation learning architecture involving two-branch deep neural networks for the audio modality and the text modality (lyrics). Data from the different modalities are projected into the same canonical space, where inter-modal canonical correlation analysis is used as the objective function to measure the similarity of temporal structures. To our knowledge, this is the first study to examine the correlation between language and music audio through deep architectures that learn the paired temporal correlation of audio and lyrics.

A pre-trained Doc2vec model followed by fully-connected layers (a fully-connected deep neural network) is used to represent lyrics. Two significant contributions are made in the audio branch: i) a pre-trained CNN followed by fully-connected layers is investigated for representing music audio; ii) we further propose an end-to-end architecture that trains the convolutional and fully-connected layers simultaneously to better learn the temporal structures of music audio. In particular, our end-to-end deep architecture has two properties: it performs feature learning and cross-modal correlation learning simultaneously, and it learns a joint representation that accounts for temporal structures. Experimental results, using audio to retrieve lyrics and lyrics to retrieve audio, verify the effectiveness of the proposed deep correlation learning architectures for cross-modal music retrieval.
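As a rough illustration only (not the actual implementation from the paper), the following PyTorch sketch shows the two-branch idea: a lyrics branch of fully-connected layers on top of fixed Doc2vec vectors, an audio branch of a small CNN followed by fully-connected layers, and a DCCA-style correlation objective in the shared space. The layer sizes, the Doc2vec dimensionality (300), and the spectrogram shape are placeholders chosen for the example.

import torch
import torch.nn as nn

class LyricsBranch(nn.Module):
    """Fully-connected layers on top of fixed Doc2vec paragraph vectors (dimensions are illustrative)."""
    def __init__(self, doc2vec_dim=300, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(doc2vec_dim, 256), nn.ReLU(),
                                 nn.Linear(256, embed_dim))

    def forward(self, x):              # x: (batch, doc2vec_dim)
        return self.net(x)

class AudioBranch(nn.Module):
    """Small CNN over log-mel spectrogram patches followed by fully-connected layers,
    standing in for the pre-trained / end-to-end audio CNN described above."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)))
        self.fc = nn.Sequential(nn.Linear(32 * 4 * 4, 256), nn.ReLU(),
                                nn.Linear(256, embed_dim))

    def forward(self, x):              # x: (batch, 1, mel_bins, frames)
        return self.fc(self.conv(x).flatten(1))

def cca_style_loss(x, y, eps=1e-4):
    """Negative sum of canonical correlations between the two views (a standard DCCA-style objective)."""
    x = x - x.mean(0, keepdim=True)    # center both views
    y = y - y.mean(0, keepdim=True)
    n = x.size(0) - 1
    cxx = x.t() @ x / n + eps * torch.eye(x.size(1), device=x.device)
    cyy = y.t() @ y / n + eps * torch.eye(y.size(1), device=y.device)
    cxy = x.t() @ y / n
    # Whiten each view via its Cholesky factor, then sum the singular values
    # of the whitened cross-covariance (these equal the canonical correlations).
    lx = torch.linalg.cholesky(cxx)
    ly = torch.linalg.cholesky(cyy)
    t = torch.linalg.inv(lx) @ cxy @ torch.linalg.inv(ly).t()
    return -torch.linalg.svdvals(t).sum()

# Toy usage with random stand-in data:
lyrics_net, audio_net = LyricsBranch(), AudioBranch()
lyrics_vec = torch.randn(64, 300)          # Doc2vec vectors for a batch of songs
audio_spec = torch.randn(64, 1, 96, 128)   # log-mel patches for the same songs
loss = cca_style_loss(lyrics_net(lyrics_vec), audio_net(audio_spec))
loss.backward()

This toy setup corresponds to the end-to-end case, where the convolutional and fully-connected layers are trained together under the correlation objective; in the pre-trained variant described above, only the fully-connected layers on top of the fixed CNN features would be updated.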
Any comments are very welcome.

Best regards,

Yi Yu
http://research.nii.ac.jp/~yiyu/

