
[AUDITORY] ICASSP 2023 Special Session on Neural Speech and Audio Coding



Dear list,

We are delighted to announce the special session we are organizing for ICASSP 2023: "Neural speech and audio coding: emerging challenges and opportunities."

The special session addresses an exemplary topic that represents the challenges and opportunities of "signal processing in the AI era." We invite research papers from around the world that contribute to developing compact speech and audio representations for communication, archiving, and entertainment applications. Although codec systems can be seen as straightforward autoencoding models, where data shortage is less of a concern, recent findings in this area have revealed a variety of open research problems, such as reducing model complexity for on-device inference, low-delay architectures for real-time processing, transparent audio quality for entertainment use cases (e.g., music signals), and robustness to environmental noise and acoustic artifacts; a minimal sketch of this autoencoding view appears after the topic list below. The special session welcomes transformative ideas that can make a breakthrough in this area and benefit the billions of daily users of coding technology. The scope of the special session includes, but is not limited to, the following topics:

● Generative models and neural vocoders for low bitrate coding
● Adversarial learning for coding applications
● Novel architectures for multichannel coding
● Psychoacoustics and perceptually motivated loss functions
● Low-delay, low-power, low-complexity models
● Machine learning methods driven by speech and language models
● Differentiable DSP and hybrids of ML and DSP
● Noise-robust speech coding
● Multimodality in neural coding
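
As a rough illustration of the autoencoding view mentioned above, the sketch below shows a toy neural codec in PyTorch: an encoder compresses the waveform into a compact latent, a quantizer discretizes it (standing in for the bitstream), and a decoder reconstructs the audio. All names, layer sizes, and the scalar quantizer here are hypothetical choices for illustration only, not a reference implementation of any particular codec.

import torch
import torch.nn as nn

class StraightThroughQuantizer(nn.Module):
    """Scalar quantizer with a straight-through gradient estimator,
    a common trick for training through non-differentiable rounding."""
    def __init__(self, num_levels: int = 256):
        super().__init__()
        self.num_levels = num_levels

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z = torch.tanh(z)                   # bound latents to [-1, 1]
        step = 2.0 / (self.num_levels - 1)  # quantization step size
        zq = torch.round(z / step) * step   # discretize (the "bitstream")
        return z + (zq - z).detach()        # straight-through estimator

class TinyNeuralCodec(nn.Module):
    """Toy waveform autoencoder: strided convolutions downsample to a
    compact latent; transposed convolutions reconstruct the waveform."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=8, stride=4, padding=2), nn.ELU(),
            nn.Conv1d(16, latent_dim, kernel_size=8, stride=4, padding=2),
        )
        self.quantizer = StraightThroughQuantizer()
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 16, kernel_size=8, stride=4,
                               padding=2), nn.ELU(),
            nn.ConvTranspose1d(16, 1, kernel_size=8, stride=4, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)      # (batch, latent_dim, time/16)
        zq = self.quantizer(z)   # discretized latent for transmission
        return self.decoder(zq)  # reconstructed waveform

model = TinyNeuralCodec()
x = torch.randn(1, 1, 16000)             # one second of 16 kHz audio
x_hat = model(x)                         # same shape as the input
loss = nn.functional.mse_loss(x_hat, x)  # real codecs add perceptual losses

Each topic above can be read against this skeleton: low-delay and low-complexity work shrinks the encoder and decoder, perceptually motivated losses replace the plain reconstruction loss, and generative models or neural vocoders replace the deterministic decoder.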

We encourage you to consider our special session if you plan to submit a paper on neural speech/audio coding to ICASSP 2023. The special session will appear as one of the topic areas in the ICASSP 2023 submission system. Special session papers go through a review process similar to that of regular papers, but once accepted, they will form a dedicated session at the conference, where we can discuss our ideas in a more intimate setting.

If you have any questions about this special session, please let us know (Minje Kim minje@xxxxxxxxxxx; Jan Skoglund jks@xxxxxxxxxx). 

Best,
Jan Skoglund and Minje Kim