Re: perceptual segregation of sound
Dear List,
Many thanks to all contributors for their enlightening replies to my
initial question; this has been a very interesting discussion.
> are we really capable of perceptually segregating multiple sources
> concurrently, or are we just focusing our attention on one source, and
> then shifting it very quickly to another source?
I would like to summarise and reply to some of the comments raised,
though my replies are mainly conjecture on my part.
Firstly, examples have been given (e.g. listening to music) where, upon
repeated exposure to a sound and the use of top-down processes, additional
information is extracted from perceptual streams that were not initially
the focus of attention. To make a loose analogy, repeated listening must
be like learning a foreign language: we start off learning the most
useful words/sounds and, over time, as these become part of our
vocabulary, we redirect our attention to more subtle structures and
relationships between words/sounds. However, it is evident that we can
form multiple perceptual streams even from completely unfamiliar sounds,
so let's try to isolate perceptual stream segregation from any
additional complications inherent in repeated listening. Brian puts it
thus: "top down processes are useful for resolving ambiguous cases".
John's remark that "survival requires an animal's sensory
organs to produce a timely response to environmental information"
implies that we should maximise the "potential evolutionary benefit" of
perceived information in the shortest possible time. I would imagine
that, to this end, using all available processing resources is better
than using only some of them, so it would make sense to use spare resources to
analyse any content that is not the main focus of attention (the basis
of the "perceptual load" theory of selective attention that Erick
mentioned). Erick's comments about the sensory buffer are also
interesting in light of the above-mentioned topic of repeated listening,
since we can repeatedly process a sound in short-term memory even if it
was physically heard only once. However, he mentions a limit of around
4 s for the sensory buffer. So, how is the situation Kevin described
possible:
" In my classes, I have had students who can "go back" to a sound (or
sounds) they heard and extract components that they did not 'hear' when
the sound was presented. In one case a student re-listened to a piece he
had heard a couple of weeks previously.) "?
Is 4 s a typical limit for people with average memory, excluding those
with photographic/eidetic memory?
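
As a purely illustrative aside (an analogy, not a claim about
physiology), the ~4 s sensory buffer can be pictured as a ring buffer
that new input continuously overwrites; the sample rate and duration
below are assumptions made just for this sketch:

from collections import deque

SAMPLE_RATE = 16000        # assumed sample rate, samples per second
BUFFER_SECONDS = 4         # the ~4 s limit Erick mentioned

# Echoic memory pictured as a ring buffer: once full, each new sample
# silently evicts the oldest one.
echoic_buffer = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

def hear(samples):
    """Append new input; the oldest material is overwritten when full."""
    echoic_buffer.extend(samples)

hear([0.0] * (SAMPLE_RATE * 10))          # 10 s of input arrives
print(len(echoic_buffer) / SAMPLE_RATE)   # -> 4.0: only the last 4 s remain

On this picture, "re-listening" to a piece heard weeks earlier cannot be
a replay from the sensory buffer at all; it would have to be a
reconstruction from longer-term memory, which perhaps resolves the
apparent conflict with Kevin's example.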
This was relatively clear in my mind before, but now I'm confused: what
is attention? If attention can be present even at a cochlear level, then
would we define it by its functionality rather than by its "level in a
perceptual hierarchy of processes"?
Finally, to add to the arguments for preattentive structuring of sensory
evidence into multiple streams, some experiments are described in
(Bregman, A. S., Auditory Scene Analysis, 1990, Chapter 2, "Relation to
Other Psychological Mechanisms: Attention") where perception of
nonfocused streams in speech mixtures can extend even to recognition of
words or associated meanings of words. However, as Erick pointed out,
according to the "perceptual load" theory, perception of nonfocused
streams will not always reach such an advanced state given
different sounds and perception tasks, due to limited cognitive
resources.
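
To make the "spare capacity" idea concrete for myself, here is a toy
sketch in the same spirit (all quantities and thresholds are invented
for illustration; this is a caricature of the perceptual load theory
Erick cited, not a model of it):

CAPACITY = 1.0                 # total perceptual capacity (arbitrary units)
WORD_RECOGNITION_COST = 0.3    # invented cost of recognising words in a stream

def spare_capacity(attended_load):
    """Capacity left over after the attended task takes what it needs."""
    return max(0.0, CAPACITY - attended_load)

def unattended_processing(attended_load):
    """What the leftover capacity buys for streams outside attention."""
    spare = spare_capacity(attended_load)
    if spare >= WORD_RECOGNITION_COST:
        return "unattended speech may reach word/meaning recognition"
    if spare > 0.0:
        return "only coarse features of unattended streams registered"
    return "unattended streams effectively filtered out"

for load in (0.2, 0.8, 1.0):
    print(f"attended load {load:.1f}: {unattended_processing(load)}")

On this caricature, whether a nonfocused speech stream reaches word
recognition depends entirely on how demanding the attended task is,
which is exactly the load-dependence described above.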
Once again, thanks to all contributors.
Mark
--
Mark Every <m.every@xxxxxxxxxxxx>
CVSSP, SEPS, University of Surrey