Subject: [AUDITORY] [DCASE-discussions] DCASE Workshop 2025: 2nd Call for Papers -- Submissions Open
From: Irene Martín Morató <irenitram@xxxxxxxx>
Date: Mon, 23 Jun 2025 18:15:26 +0300

(apologies for cross-posting)

The 10th Workshop on Detection and Classification of Acoustic Scenes and Events, DCASE 2025 <https://dcase.community>, will be held in Barcelona on 30-31 October. The day before, on 29 October, the satellite event BioDCASE <https://biodcase.github.io>, dedicated to bioacoustics, will take place at the same venue.

As in previous years, the workshop is organized in conjunction with the DCASE challenge. We aim to bring together researchers from universities, research organizations, and companies with an interest in the topic, and to provide an opportunity for the scientific exchange of ideas and opinions.

We invite submissions on the computational analysis of acoustic scenes and sound events, including but not limited to:

*Tasks in computational environmental audio analysis*

- Environmental audio classification and tagging
- Sound event detection and localization
- Audio captioning and natural-language-based audio retrieval
- Bioacoustics
- Environmental audio generation
- Anomalous sound detection
- Audio source separation

*Methods for computational environmental audio analysis*

- Signal processing and auditory-motivated methods
- Multimodal methods
- Machine learning methods, e.g. feature learning, self-supervised learning, and foundation modeling for environmental audio
- Cross-disciplinary methods involving, e.g., acoustics, biology, psychology, geography, materials science, and transport science
- Generative modeling
- Perceptual analysis and modeling of acoustic environments

*Resources, applications, and evaluations of computational environmental audio analysis*

- Publicly available datasets, e.g. multichannel, noisy, missing-data, or mismatched-device datasets
- Publicly available software, taxonomies, ontologies, and evaluation procedures
- Modeling, simulation, and synthesis of realistic acoustic scenes
- Ethics, privacy, and responsible research
- Applications

We strongly encourage reproducible research with open-source code and open data, though neither is mandatory.

*Important notice for challenge participants*: Descriptions of systems submitted to the DCASE 2025 Challenge are expected to be expanded from the challenge technical report submissions to comply with the format of a scientific paper. This generally means describing the scientific novelty and including additional discussion, such as ablation studies for the extra modules in your method.
*Important Dates*

- 04 Jul 2025, Workshop abstract submission deadline
- 11 Jul 2025, Workshop final submission deadline
- 05 Sep 2025, Notification of paper acceptance
- 19 Sep 2025, Camera-ready submission
- 29 Oct 2025, BioDCASE satellite event
- 30-31 Oct 2025, Workshop

The paper submission portal is now open at: https://dcase.community/workshop2025/submission

*Technical program chairs*
Emmanouil Benetos, Queen Mary University of London
Magdalena Fuentes, New York University
Irene Martín Morató, Tampere University
Martín Rocamora, Universitat Pompeu Fabra

*General chair*
Frederic Font, Universitat Pompeu Fabra

*Contact*: dcase.workshop@xxxxxxxx (or our personal email addresses, which you will find on the DCASE website)