Hello, everybody.
Would you be so kind as to point me to some research investigating how sound is stored in memory?
I would assume that some kind of encoding is performed to optimise the amount of information that can be stored, but is there a generally accepted mechanism describing that process?
For example, is it based on low-level descriptors such as loudness, brightness, roughness and noisiness?
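To clarify what I mean by "low-level descriptors", here is a minimal sketch of how I would extract rough proxies for them from a recording (this is only an illustration, assuming Python with librosa; the file name "example.wav" is just a placeholder, and roughness is left out because it would need a dedicated psychoacoustic model that librosa does not provide):

```python
import librosa
import numpy as np

# Load a placeholder audio file at its native sample rate.
y, sr = librosa.load("example.wav", sr=None, mono=True)

# Loudness proxy: frame-wise RMS energy (true perceptual loudness would
# need a model such as ISO 532 sone scaling).
loudness = librosa.feature.rms(y=y)[0]

# Brightness proxy: spectral centroid (higher centroid ~ brighter sound).
brightness = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Noisiness proxy: spectral flatness (close to 1 ~ noise, close to 0 ~ tone).
noisiness = librosa.feature.spectral_flatness(y=y)[0]

print("mean RMS:", float(np.mean(loudness)))
print("mean centroid (Hz):", float(np.mean(brightness)))
print("mean flatness:", float(np.mean(noisiness)))
```

My question is whether descriptors of roughly this kind are what the auditory system actually encodes, or whether memory works on some very different representation.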
If a pitch is recognisable, will it take priority over the overall brightness of the sound event?
Are the global/high-level behaviours of a sound event also stored in memory, or are they reconstructed "on the fly" from the low-level descriptors each time a sound event is recalled? In the latter case, I would guess that two sound events with a similar behavioural pattern but different local characteristics could cue recall of each other.
What is the resolution of this encoding? How many low-level characteristics can be stored at the same time?
Thanks very much for your help.
Dario