Dear Colleagues,

Please find below a Call for Papers for a related special issue. Here is the link to the official Call for Papers.

**************************************************************
*** Apologies for multiple postings ***
**************************************************************

Computational Intelligence for End-to-End Audio Processing
Special Issue of IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE

**************************************************************
**************************************************************
GUEST EDITORS
SCOPE

Computational Audio Processing techniques have been widely studied by scientists and practitioners in diverse application areas, such as entertainment, human-machine interfaces, security, forensics, and health. The services developed in these fields are characterised by steadily increasing complexity, interactivity, and intelligence, and the use of Computational Intelligence techniques has enabled a remarkable degree of automation with excellent performance. The typical methodology consists of extracting and manipulating useful information from the audio stream in order to drive the execution of the target service. This approach is applied to many kinds of audio signals, from music to speech and from environmental sound to acoustic data, and for each of them specific research topics can be identified, some of which have already reached a high level of maturity.

In recent years, a new computational intelligence paradigm has become popular in this field and across a large variety of research areas: end-to-end learning. It consists of omitting hand-crafted intermediate algorithms in the solution of a given problem and learning all the required information directly from the sampled data. This means that the features used as input to the parametric system being trained (such as a neural network) are not selected by humans but are determined by the system itself during the learning process. Owing to its flexibility and versatility, this approach has attracted great interest in the Computational Audio Processing community for all the types of signals mentioned above. For instance, deep neural architectures are often fed with raw audio data in the time or frequency domain, while the supervised, weakly supervised, or unsupervised training algorithms involved are responsible for finding a suitable data representation across the different abstraction layers to solve the task under study, e.g. classification, recognition, or detection. At the same time, the scientific community has shown increasing interest in end-to-end solutions for synthesising raw audio streams, such as speech or music; Generative Adversarial Networks and WaveNet are among the most recent and best-performing examples for this kind of problem.
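To make the paradigm concrete, below is a minimal illustrative sketch (not part of the call itself; the class name, architecture, and layer sizes are assumptions chosen only for illustration) of an end-to-end classifier that consumes raw waveforms directly, so that no hand-crafted features such as MFCCs are computed beforehand.

# Minimal, illustrative sketch (assumed names and layer sizes): an
# end-to-end classifier fed with raw audio waveforms, so no hand-crafted
# features (e.g. MFCCs) are computed beforehand.
import torch
import torch.nn as nn

class RawAudioClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # 1-D convolutions learn a filter bank directly from the waveform,
        # replacing the usual hand-designed front end.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=80, stride=16), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, samples), e.g. one second of 16 kHz audio
        return self.head(self.encoder(waveform).squeeze(-1))

model = RawAudioClassifier()
logits = model(torch.randn(4, 1, 16000))   # four raw clips of 16 000 samples
print(logits.shape)                        # torch.Size([4, 10])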
TOPICS

Topics of interest for the special issue include, but are not limited to:
SUBMISSION GUIDELINES

Manuscripts should be submitted electronically to IEEE Transactions on Emerging Topics in Computational Intelligence via https://mc.manuscriptcentral.com/tetci-ieee. During the submission process, please choose the Article Type "SI: CAP".
IMPORTANT DATES
___________________________________________

Univ.-Prof. Dr.-Ing. habil. Björn W. Schuller

Head (Full Professor)
Chair of Complex and Intelligent Systems
University of Passau
Passau / Germany

Reader (Associate Professor)
Department of Computing
Imperial College London
London / U.K.

CEO
audEERING GmbH
Gilching / Germany

Visiting Professor
School of Computer Science and Technology
Harbin Institute of Technology
Harbin / P.R. China

Associate
Institute for Information and Communication Technologies
Joanneum Research
Graz / Austria

Associate
Centre Interfacultaire en Sciences Affectives
Université de Genève
Geneva / Switzerland

Editor in Chief
IEEE Transactions on Affective Computing

___________________________________________