
[AUDITORY] CCRMA Summer Workshop | Virtual Acoustics for Immersive Audio



We are pleased to announce the upcoming Virtual Acoustics for Immersive Audio workshop, taking place July 21 – August 1, 2025, at the Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, with the option to attend remotely.


The workshop is aimed at anyone interested in exploring spatial audio for virtual and augmented reality applications. The goal is to give participants the theoretical background and practical tools needed to develop their own immersive audio projects. The course combines lectures with hands-on exercises. Week 1 focuses on room acoustics and artificial reverberation, key concepts for understanding how sound behaves in physical and virtual spaces. This foundation prepares participants for Week 2, which covers practical spatial audio techniques used to create immersive sound experiences.


Details:

  • When: July 21 – August 1, 2025
  • Where: CCRMA, Stanford University, California – or join remotely
  • Cost: $800 in person / $350 remote
  • Requirements: Familiarity with linear algebra, basic DSP knowledge, and some experience with a scientific programming language (we’ll be using Python).
  • Scholarships: A limited number of scholarships are available for students and individuals from underrepresented backgrounds in the field. The application deadline is May 16th at 23:59 (AoE).


Please visit the official workshop page for the full program and registration details:

https://ccrma.stanford.edu/workshops/virtual-acoustics-immersive-audio
Registration at: https://www.eventbrite.com/e/virtual-acoustics-for-immersive-audio-tickets-1279688395439

Instructors:


Orchisama Das brings a strong background in spatial audio and artificial reverberation, with experience spanning both academia and industry. She is currently a postdoctoral researcher at King's College London, working on real-time room acoustics rendering for immersive audio. She was previously at Sonos, developing spatial audio algorithms for binaural rendering. Orchisama received her PhD from CCRMA, Stanford University.


Gloria Dal Santo focuses on machine learning for audio applications, with a particular emphasis on artificial reverberation. As part of her PhD research at the Aalto Acoustics Lab in Finland, she is exploring how machine learning can help address some of the key challenges that remain in this field.


Feel free to reach out to Orchisama (orchisama.das@xxxxxxxxx) or Gloria (gloria.dalsanto@xxxxxxxx) if you have any questions!


Warm regards,

- Orchisama and Gloria