Dear Yi Yu and dear list,
Thank you for the link to SoSoMir, which I did not know about. Personally I
rather go to MusiSorbonne or Music-Ir (the Ircam list) for music discussion.
No, my question (which was implicit in my last message) fundamentally addresses
the modern definition of timbre, which, in mathematical terms, is not a
definition but a theorem. Discrimination of timbre is only partly linked
to pitch, and there are many musical examples that would put the
"official" definition in difficulty. I know that Mr Bregman advocates not
using the term timbre at all, but unfortunately I believe it is difficult to
avoid.
I recall that most psychoacoustic experiments on timbre
differentiation assume that the pitches of the tested sounds are the
same; however, in the following paper three tones [B3 (247 Hz), C#4 (277
Hz) and Bb4 (466 Hz)] were used: Jeremy Marozeau, Alain de Cheveigné,
Stephen McAdams and Suzanne Winsberg, "The dependency of timbre on
fundamental frequency", JASA 114(5), 2003, pp. 2946-2957.
Furthermore, Alain de Cheveigné has put forward in another paper ("The
auditory system as a separation machine", in: Physiological and
Psychological Bases of Auditory Function, Shaker, 2000, pp. 453-460) that
the inability to separate sounds is still a major problem for all of
our existing mechanical recording systems. I believe, and I showed it
somehow in my paper "Controlling spectral harmony with Kohonen maps"
(Cognitive Processing, vol. 4, 2003, Springer-Verlag), that one of the
major problems we face is that we chose frequency as the first
axis in our representation of sounds. In mathematics the first axis has a
very different significance from the second axis, which is reserved for
the image. It would mean that starting from a frequency we obtain an
energy, which is absurd but practical for representation. Choosing energy
(or amplitude) as the first axis is problematic and not easy to handle,
as I showed in my paper, but it has the advantage of representing sounds as
we hear them (or of getting closer to that): we are capable of
discriminating timbre easily regardless of frequency (a vital function), but not frequencies regardless of timbre
(usually not a vital function: if we don't hear a car coming we can die;
if the car brakes, its pitch goes up and gives us important
information, but that is still secondary, i.e. we need to discriminate the
sound of the car whatever the frequency of the engine, the brakes, etc.). I
mean also that in normal conditions, with very little cultural knowledge, we know whether
a sound comes from a flute or a violin, but we are not capable of saying
which note is being played (except for highly trained musicians, who
actually have a different brain configuration; I am talking about scans, not
metaphorical implications).
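To make the axis argument concrete, here is a minimal Python sketch (my own
illustration with arbitrary tones and amplitudes, not taken from any of the
papers cited above). The usual view maps each frequency to an energy, which
is a function; inverting it, mapping each energy to the frequencies that
carry it, yields a relation rather than a function:

import numpy as np

fs = 8000                                  # sampling rate in Hz (arbitrary)
t = np.arange(fs) / fs                     # one second of signal
# two partials of equal amplitude, so their energies collide on purpose
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)

energy = np.abs(np.fft.rfft(x)) / len(x)   # frequency -> energy: a function
freqs = np.fft.rfftfreq(len(x), d=1.0/fs)

# invert the mapping: group frequencies by (rounded) energy level
energy_to_freqs = {}
for f, e in zip(freqs, energy):
    if e > 1e-3:                           # keep only the actual peaks
        energy_to_freqs.setdefault(round(e, 3), []).append(f)

print(energy_to_freqs)                     # {0.5: [440.0, 660.0]}

One energy level corresponds here to two frequencies at once, so the inverse
direction has to carry sets of frequencies per energy, which illustrates why
the energy-first representation is not easy to handle.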
So my questions remain: how can we have overlooked Rousseau's dictionary
for so long? Which leads to another question: is our spectral
representation really the right one, given that Fourier was not
working on sound but on heat? Epistemologically speaking, I think that
we have been going progressively off the tracks since the 1980s, and that
there is a demand that our brains be configured to match how computers
hear sounds. I would like to be proved wrong.
Frédéric Maintenant