
Re: An Auditory Illusion



Dear Jont,

I can't quite tell from your example whether we are or aren't talking
about the same thing.  Do your independent estimates concern all possible
features being computed in parallel across the spectrum?  Also, how does
Dick Warren's case of sending the signals to three different spatial
locations fit in?  Would the estimates also be made at all possible
spatial locations?

I have no real idea myself about how the parallel processing is done, but
it might be more economical to wait until the grouping processes had
created streams and then assess the phonetic features located in each
stream.  This would be particularly important when speech signals were
mixed.
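
As a concrete point of reference (and only a sketch of my own reading,
not of your model), I take the independence claim you quote from
Section VI to mean that per-band recognition errors combine
multiplicatively, in the spirit of Fletcher's AI work.  Something like
the following toy Python fragment, with made-up band error values, is
what I have in mind:

    # Toy illustration of the multiplicative error rule associated with
    # Fletcher's AI experiments: if each frequency band produces an
    # independent error estimate e_k, the combined error is their product.
    def total_error(band_errors):
        total = 1.0
        for e in band_errors:
            total *= e          # independence => errors multiply
        return total

    # hypothetical per-band error rates, purely for illustration
    print(total_error([0.4, 0.5, 0.3]))   # -> 0.06

If that is roughly what you mean by independent estimates across
frequency, then we may indeed be talking about related stages of
processing.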

Best,

Al

--------------------------------------------------------------
On Wed, 30 Apr 1997, Jont Allen wrote:

> Al Bregman wrote:
> >
> > I think, then, that your experiments on VTE force us to the conclusion
> > that the speech processor is not a homogeneous serial mechanism.  It seems
> > likely that there is a stage that is preattentive and operates on
> > concurrent auditory streams in parallel.
> >
> > Regards,
> >
> > Al
> >
>
> Al,
> In my 1994 paper in the IEEE Trans. on Speech and Audio Processing
> (Vol 2, #4, Oct, pp 567-576) I conclude that, based on Fletcher's
> AI experiments, partial phone features must be extracted independently.
> For example, at the top of section VI I say:
> "Somehow the early CNS forms independent error estimates of features
> across frequency ...."
>
> My question is, could we be talking about the same thing?
> Could the "concurrent auditory streams in parallel" just be
> the different articulation channels across frequency?
>
>       Jont
> --
> Jont B. Allen, Room 2D553 --- http://www.research.att.com/~jba
>     600 Mountain AV, Murray Hill NJ, 07974; 908/582-3157
>