
Re: Recommended software for simulating moving sound sources?



Dear Daniel,

In my opinion, some additional information is necessary. As you are looking to combine audio and visual simulations, this is not a trivial matter.

The first question I would ask is: what is the level of computer competence in your group? Are you looking for a total turnkey solution, or are you willing to go into graphical programming (MaxMSP), Python, or even C/C++?

Second question: what OS are you able to run? Not all options are available on all platforms.

Generating the audio offline is an option, but as there is video, this must also be generated and synchronized. Alternatively, the synthesis can run in real time. Are you considering synthesized images, or playback of recorded video? In addition to time synchronization, what are the needs for spatial synchronization of the sound and visual sources?

In our lab, we have been leading the development of BlenderVR, an interactive framework based on the open-source Blender Game Engine for building such audio/visual simulators. BlenderVR handles the visual and geometrical scene and synchronization, with external audio rendering controlled via OSC messages. This two-part solution leaves the choice of audio rendering engine free; we have used both MaxMSP and PureData, both real-time solutions. As a general audio spatialization powerhouse, I would highly recommend Spat, from IRCAM, a package that operates in the MaxMSP environment.
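To make the OSC control layer concrete, here is a minimal Python sketch (using the python-osc package) that streams per-frame positions for one source moving in a straight line at constant speed, the sort of trajectory your high-level requirement describes. The port number and the /source/1/xyz address pattern are my own assumptions; adapt them to whatever your Spat or PureData patch actually listens for.

from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc
import time

client = SimpleUDPClient("127.0.0.1", 9001)  # assumed renderer address/port

# Car passing from 50 m left to 50 m right of the listener in 5 seconds
start, end = (-50.0, 5.0, 0.0), (50.0, 5.0, 0.0)
duration, rate = 5.0, 60  # seconds, position updates per second

n = int(duration * rate)
for i in range(n):
    t = i / (n - 1)  # normalized time, 0..1
    pos = [s + t * (e - s) for s, e in zip(start, end)]  # linear interpolation
    client.send_message("/source/1/xyz", pos)  # assumed address pattern
    time.sleep(1.0 / rate)

The same loop could just as well read positions from a pre-computed trajectory file, which keeps the non-interactive first phase of a project like yours simple.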

Environmental acoustic modelling (reflecting surfaces) is another matter entirely. It is important to know the level of detail you are looking for. This could be simple 1st-order reflections off a few surfaces, modeled simply as additional sources; dynamic image-source modeling of the geometrical scene (with something like the Evertims framework); or off-line 3D impulse responses computed with ray-tracing software like CATT-Acoustic, which are then used in the real-time engine.
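For the single reflecting ground plane you mention, the 1st-order image-source model is nothing more than a mirrored copy of the source. A small sketch of the arithmetic, purely for illustration (the function name and default values are mine):

import numpy as np

C = 343.0  # speed of sound in air (m/s)

def ground_paths(src, listener, fs=48000, refl_coeff=0.95):
    """Direct path plus 1st-order ground reflection, as (delay_samples, gain)."""
    src, listener = np.asarray(src, float), np.asarray(listener, float)
    image = src * np.array([1.0, 1.0, -1.0])  # mirror the source across z = 0
    paths = []
    for s, g in ((src, 1.0), (image, refl_coeff)):
        dist = np.linalg.norm(listener - s)
        paths.append((dist / C * fs,  # propagation delay in samples
                      g / dist))      # 1/r spherical spreading
    return paths

# Car 30 m to the left, source ~0.3 m above the road; listener ears at 1.7 m
for delay, gain in ground_paths([-30.0, 0.0, 0.3], [0.0, 0.0, 1.7]):
    print(f"delay = {delay:8.1f} samples, gain = {gain:.4f}")

Each reflection path is then simply rendered as its own (moving) source; anything beyond a handful of surfaces is where dedicated frameworks like Evertims earn their keep.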

I'd be happy to discuss more with you off-list if you like.

-Brian FG Katz
LIMSI-CNRS


> On 25 nov. 2014, at 17:26, "Oberfeld-Twistel, Daniel" <oberfeld@xxxxxxxxxxxx> wrote:
> 
> Dear list,
> 
> could anyone here recommend software for simulating and presenting moving sound sources in virtual acoustics (via headphones)?
> 
> We want to study the use of auditory and visual information in simple traffic-related tasks, like street crossing (a pedestrian stands at a crosswalk, a car approaches from the left, and the pedestrian decides if he or she can safely cross the street before the car arrives at the crosswalk).
> 
> What should the software be capable of doing?
> 
> 1) Physically accurate simulation of between 1 and 4 moving sound sources. It will be necessary to simulate different trajectories, different constant velocities, or decelerating or accelerating sound sources. In addition, it should of course be possible to specify the position of the virtual listener within the simulated acoustic scene. We would strongly prefer a "high level" approach where we can tell the software, e.g., that the sound source starts at a given point in space and then moves to another defined point in space at constant speed and within a specified time interval, rather than having to solve wave equations ourselves...
> 2) We want to present the stimuli via headphones. The ability to use individualized HRTFs is not a critical issue.
> 3) We will simulate typical traffic situations in a relatively large open space, so only a reflecting ground surface will be present, and we need to model these reflections. However, if the software is capable of simulating other reflecting surfaces (e.g., walls of nearby buildings), this would of course be no disadvantage.
> 4) In the simplest case, we will simulate a point source emitting for example a broadband spectrum (engine noise). The capability to simulate sound sources with different directional properties would be a plus, but is not critical.
> 5) At least in the first phase of our project, we do not intend to use head- or motion tracking, so dynamic updating of the acoustic scene is not required. However, it would be advantageous to have software with this capability for future studies.
> 6) The software should be capable of simulating self-motion of the listener on a pre-defined trajectory (again, no dynamic/interactive mode required).
> 7) It would be ok to generate the sound files offline, at least in the first phase where the simulations are non-interactive.
> 
> Apart from these "acoustic requirements", because we want to study conditions with both auditory and visual information, the issue of audio-visual synchronization is critical. I would be grateful to receive some recommendations concerning this issue, too!
> 
> This is non-commercial university research, and approximate information about the price tag would be great...
> 
> Looking forward to any suggestions or ideas you might have!
> 
> Best
> 
> Daniel
> 
> 
> PD Dr. Daniel Oberfeld-Twistel
> Johannes Gutenberg - Universitaet Mainz
> Department of Psychology
> Experimental Psychology
> Wallstrasse 3
> 55122 Mainz
> Germany
> 
> Phone ++49 (0) 6131 39 39274 
> Fax   ++49 (0) 6131 39 39268
> http://www.staff.uni-mainz.de/oberfeld/
> https://www.facebook.com/WahrnehmungUndPsychophysikUniMainz