From a limited number of microphones it is possible to extract a larger
number of signals - that's basically what stereo and ambisonics (not to
mention transaural) do. Using several spaced capsules (as against coincident
or virtually coincident ones) can work in higher-order ambisonics,
theoretically, though there are some 'noise' issues at present. Whether the
capsules are directional or omni (or 'in between' - sub-cardioid, as used in
the SoundField mic), there's no need to assume that switching between
capsules is what you want to do - you're really just looking for a signal
that is optimised for a particular source. By treating your signals in a
matrixed fashion, you can rapidly try out successions of 'decodes' (and
these can have frequency-dependent components, too).
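
To make the matrixing point concrete, here's a rough sketch in Python/numpy
of a steerable first-order virtual mic - the FuMa gain convention for W, the
function names and the energy-maximising 'steering' loop are all assumptions
of mine, not anybody's actual algorithm:

import numpy as np

def virtual_mic(W, X, Y, azimuth, pattern=0.5):
    # Matrix first-order B-format (FuMa convention assumed, i.e. W
    # carries a 1/sqrt(2) gain) into a virtual microphone pointed at
    # 'azimuth' (radians, 0 = front, positive = anticlockwise).
    # pattern: 0.0 = omni, 0.5 = cardioid, 1.0 = figure-of-eight.
    omni = np.sqrt(2.0) * W                  # undo the FuMa W gain
    fig8 = np.cos(azimuth) * X + np.sin(azimuth) * Y
    return (1.0 - pattern) * omni + pattern * fig8

def best_decode(W, X, Y, n_angles=36, pattern=0.5):
    # Try a bank of candidate 'decodes' and keep the most energetic
    # one - a crude stand-in for whatever is doing the steering.
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    energies = [np.sum(virtual_mic(W, X, Y, a, pattern) ** 2)
                for a in angles]
    best = angles[int(np.argmax(energies))]
    return best, virtual_mic(W, X, Y, best, pattern)

Each candidate decode is just one row of a decoding matrix, so trying a new
one costs a multiply-add per channel; a frequency-dependent version would
simply apply different matrix coefficients per band.
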
Nevertheless, who or what is steering? Is this a learning algorithm designed
to mimic scene analysis? It would seem that you need a best-fit explanation
for the entirety of components in a detected sound field, otherwise you'll
have 'bits left over' - isn't that right?
regards
ppl