Re: Recommended software for simulating moving sound sources?
Dear Daniel,
We developed and use such a tool for our psychoacoustic and
hearing-aid research; it is now available at
http://hoertech.de/cgi-bin/wPermission.cgi?file=/web_en/produkte/TASCAR.shtml.
TASCAR (Toolbox for Acoustic Scene Creation and Rendering) supports
all the requirements you mentioned: moving sound sources are described
abstractly, in terms of signals and trajectories, and are then
modelled physically. Air absorption and Doppler shift are taken into
account in the simulation.
An image source model with simple reflection properties, allowing for
static or moving reflectors, can be applied to simulate rooms, walls,
or other moving, reflecting objects. The signals are rendered in real
time in the time domain, which allows interactive positioning of the
objects, including the receiver(s). The output can be rendered as
higher-order Ambisonics, via vector-base amplitude panning (VBAP), or
binaurally using generic or individual HRTFs. The receiver can move
interactively within the scene, controlled by a game interface or body
sensors.
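In the simplest case, this kind of physical modelling amounts to reading the source signal through a time-varying propagation delay: the changing delay produces the Doppler shift, and 1/r gives the distance attenuation. A minimal sketch in Python, assuming a fixed listener at the origin and a straight-line trajectory (an illustrative first-order approximation with the delay evaluated at receive time and air absorption omitted -- this is not TASCAR's actual rendering code, and the function name is made up):

```python
import math

def render_moving_source(x, fs, p0, v, c=343.0):
    """Render a mono signal x (list of floats, sample rate fs) for a
    point source starting at p0 (metres) and moving with constant
    velocity v (m/s); the listener sits at the origin.  The
    time-varying propagation delay yields the Doppler shift, and 1/r
    gives spherical spreading.  First-order approximation: the delay
    is taken at receive time, valid for speeds well below c."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        t = n / fs
        # source position and distance at receive time t
        px = p0[0] + v[0] * t
        py = p0[1] + v[1] * t
        pz = p0[2] + v[2] * t
        r = math.sqrt(px * px + py * py + pz * pz)
        # fractional read index into the emitted signal
        idx = (t - r / c) * fs
        i0 = int(math.floor(idx))
        frac = idx - i0
        if 0 <= i0 and i0 + 1 < len(x):
            s = (1.0 - frac) * x[i0] + frac * x[i0 + 1]
            y[n] = s / max(r, 0.1)  # 1/r spreading, clipped near the source
    return y
```

For a source approaching the listener the delay shrinks over time, so a sine input comes out with a raised instantaneous frequency, as expected from the Doppler effect.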
An interface to video / computer graphics is available via Open Sound
Control (OSC). We have working solutions for interfacing with the
Blender game engine.
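An OSC position update is a plain UDP datagram with a simple binary layout (null-padded strings, big-endian arguments), so a sender needs no special library. A minimal sketch with the Python standard library; the address /source/1/pos is a made-up example, since the actual OSC namespace is defined by the receiving application:

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments.
    OSC strings are null-terminated and zero-padded to 4-byte
    boundaries; numeric arguments are big-endian."""
    def pad_str(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    typetag = "," + "f" * len(floats)
    return pad_str(address) + pad_str(typetag) + b"".join(
        struct.pack(">f", f) for f in floats)

# A hypothetical position update (x, y, z in metres):
msg = osc_message("/source/1/pos", 2.0, -1.5, 0.0)
# send e.g. via socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#          .sendto(msg, (host, port))
```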
Best, Volker
References:
@InProceedings{Grimm2014a,
author = {Giso Grimm and Volker Hohmann},
title = {Dynamic spatial acoustic scenarios in multichannel
loudspeaker systems for hearing aid evaluations},
booktitle = {17. Jahrestagung der Deutschen Gesellschaft f{\"u}r
Audiologie},
year = 2014,
address = {Oldenburg, Germany},
organization = {Deutsche Gesellschaft f{\"u}r Audiologie}
}
@InProceedings{Grimm2014b,
author = {Giso Grimm and Torben Wendt and Volker Hohmann and
Stephan Ewert},
title = {Implementation and perceptual evaluation of a
simulation method for coupled rooms in higher order
ambisonics},
booktitle = {Proc. of the EAA Joint Symposium on Auralization and
Ambisonics},
year = 2014,
address = {Berlin},
pages = {27-32},
doi = {10.14279/depositonce-6}
}
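Regarding the ground-reflection requirement (point 3 in the quoted mail below): with a single rigid ground plane, the image source model reduces to mirroring the source across that plane. A minimal geometric sketch (illustrative only, not TASCAR code; the geometry values are made up):

```python
import math

def ground_reflection(src, rcv, c=343.0):
    """Direct path plus one ground reflection from a rigid plane at
    z = 0, modelled by mirroring the source across the plane.
    Returns (direct_delay_s, reflected_delay_s, reflected_gain),
    where the gain is the reflection level relative to the direct
    path under 1/r spreading."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    image = (src[0], src[1], -src[2])  # mirror across z = 0
    d_dir = dist(src, rcv)
    d_ref = dist(image, rcv)
    return d_dir / c, d_ref / c, d_dir / d_ref

# e.g. a car-like source at 0.5 m height, ear height 1.7 m:
t_dir, t_ref, gain = ground_reflection((-20.0, 0.0, 0.5), (0.0, 0.0, 1.7))
```

The reflected path is always longer than the direct path, so the reflection arrives later and weaker; delaying and attenuating a copy of the direct signal accordingly gives the simplest usable ground model.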
On 25.11.2014 17:26, Oberfeld-Twistel, Daniel wrote:
> Dear list,
>
> could anyone here recommend software for simulating and presenting moving sound sources in virtual acoustics (via headphones)?
>
> We want to study the use of auditory and visual information in simple traffic-related tasks, like street crossing (a pedestrian stands at a crosswalk, a car approaches from the left, and the pedestrian decides whether he or she can safely cross the street before the car arrives at the crosswalk).
>
> What should the software be capable of doing?
>
> 1) Physically accurate simulation of between 1 and 4 moving sound sources. It will be necessary to simulate different trajectories, different constant velocities, and decelerating or accelerating sound sources. In addition, it should of course be possible to specify the position of the virtual listener within the simulated acoustic scene. We would strongly prefer a "high-level" approach where we can tell the software, e.g., that the sound source starts at a given point in space and then moves to another defined point in space at constant speed within a specified time interval, rather than having to solve wave equations ourselves...
> 2) We want to present the stimuli via headphones. Being able to use individualized HRTFs is not a critical issue.
> 3) We will simulate typical traffic situations in a relatively large open space, so only a reflecting ground surface will be present, and we need to model these reflections. However, if the software is capable of simulating other reflecting surfaces (e.g., walls of nearby buildings), this would of course be no disadvantage.
> 4) In the simplest case, we will simulate a point source emitting for example a broadband spectrum (engine noise). The capability to simulate sound sources with different directional properties would be a plus, but is not critical.
> 5) At least in the first phase of our project, we do not intend to use head- or motion tracking, so dynamic updating of the acoustic scene is not required. However, it would be advantageous to have a software with this capability for future studies.
> 6) The software should be capable of simulating self-motion of the listener on a pre-defined trajectory (again, no dynamic/interactive mode required).
> 7) It would be ok to generate the sound files offline, at least in the first phase where the simulations are non-interactive.
>
> Apart from these "acoustic requirements", because we want to study conditions with both auditory and visual information, the issue of audio-visual synchronization is critical. I would be grateful to receive some recommendations concerning this issue, too!
>
> This is non-commercial university research, and approximate information about the price tag would be great...
>
> Looking forward to any suggestions or ideas you might have!
>
> Best
>
> Daniel
>
>
> PD Dr. Daniel Oberfeld-Twistel
> Johannes Gutenberg - Universitaet Mainz
> Department of Psychology
> Experimental Psychology
> Wallstrasse 3
> 55122 Mainz
> Germany
>
> Phone ++49 (0) 6131 39 39274
> Fax ++49 (0) 6131 39 39268
> http://www.staff.uni-mainz.de/oberfeld/
> https://www.facebook.com/WahrnehmungUndPsychophysikUniMainz
>
--
---------------------------------------------------------
Prof. Dr. Volker Hohmann
Medizinische Physik and Cluster of Excellence Hearing4all
Universität Oldenburg
D-26111 Oldenburg
Germany
Tel. +49 441 798 5468
FAX +49 441 798 3902
Email volker.hohmann@xxxxxxxxxxxxxxxx
http://www.uni-oldenburg.de/mediphysik/
http://www.uni-oldenburg.de/auditorische-signalverarbeitung/
Public Key and Key Fingerprint
http://medi.uni-oldenburg.de/members/vh/pubkey_vh_uni.txt
C75A 8A8D 9408 28EE FCFD 20CA 1D9F 23CC BAD2 B967
---------------------------------------------------------