Re: perceptual evaluation of cochlear models ("ftordini@xxxxxxxx" )


Subject: Re: perceptual evaluation of cochlear models
From:    "ftordini@xxxxxxxx"  <ftordini@xxxxxxxx>
Date:    Mon, 8 Sep 2014 18:44:06 +0200
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Hello Joshua,
Thank you for the great expansion and for the further reading suggestions.
I may add three more items to the list, hoping to be clear in my formulation.

(1) A perhaps provocative question: is there one loudness, or several loudnesses? (Is loudness domain dependent?) Should we continue to tackle loudness as an invariant percept across classes once we move onto the more complex domain of real sounds? Rephrasing: once we define an ecologically valid taxonomy of real-world sounds (e.g. starting from Gaver), can we expect the loudness model we want to improve to be valid across (sound) classes? Hard to say. I would attempt 'yes', but granting different parameter tunings according to the dominant context (say, speech, music, or environmental sounds). [Hidden question: are we actually, ever, purely "naive" listeners?]

(2) A related question: can we jump from the controlled lab environment into the wild in a single step? I'd say no. The approach followed by EBU/ITU, using long real-world stimuli, is highly relevant to the broadcasting world, but it is hard to distinguish between energetic and informational masking effects using real programme material mostly made of speech and music. Sets of less informative sources taken from environmental, natural sounds may be a good compromise - a starting point for addressing basic expansions of the current loudness model(s). Such strategies and datasets are missing (to my knowledge).

(3) The role of space. Physiologically driven models (Moore, Patterson) are supported mostly by observations obtained using non-spatialized, or dichotic, scenes, the better to reveal mechanisms by sorting out the spatial confound. However, while spatial cues are considered to play a secondary role in scene analysis, spatial release from masking is quite important in partial loudness modeling, at least from the energetic masking point of view and especially for complex sources. This is even more relevant for asymmetric source distributions. I feel there is much to do before we can address this aspect with confidence, even limiting the scope to non-moving sources, but more curiosity with respect to spatial variables may be valuable when designing listening experiments with natural sounds.
[If one asks a sound engineer working on a movie soundtrack "where do you start from?", he will start talking about panning, to set the scene using his sources (foley, dialogue, music, ...) and **then** adjust levels/EQ ...]

Best,
--
Francesco Tordini
http://www.cim.mcgill.ca/sre/personnel/
http://ca.linkedin.com/in/ftordini


>----Original Message----
>From: joshua.reiss@xxxxxxxx
>Date: 06/09/2014 13.43
>To: "ftordini@xxxxxxxx"<ftordini@xxxxxxxx>, "AUDITORY@xxxxxxxx"<AUDITORY@xxxxxxxx>
>Subject: RE: RE: perceptual evaluation of cochlear models
>
>Hi Francesco (and auditory list, in case others are interested),
>I'm glad to hear that you've been following the intelligent mixing research.
>
>I'll rephrase your email as a set of related questions...
>
>1. Should we extend the concepts of loudness and partial loudness to complex material? - Yes, we should. Otherwise, what is it good for?
>That is, what does it matter if we can accurately predict the perceived loudness of a pure tone, or the just noticeable differences between pedestal increments for white or pink noise, or the partial loudness of a tone in the presence of noise, etc., if we can't predict loudness outside artificial laboratory conditions? I suppose it works as validation of an auditory model, but it's still very limited.
>On the other hand, if we can extend the model to complex sounds like music, conversations, environmental sounds, etc., then we provide robust validation of a general model of human loudness perception. The model can then be applied to metering systems, audio production, broadcast standards, improved hearing aid design and so on.
>
>2. Can we extend the concepts of loudness and partial loudness to complex material? - Yes, I think so. Despite all the issues and complexity, there's a tremendous amount of consistency in the perception of loudness, especially when one considers relative rather than absolute perception. Take a TV show and the associated adverts. The soundtracks of both may have dialogue, foley, ambience, music, ..., all with levels that vary over time. Yet people can consistently identify when the adverts are louder than the show. The same is true when someone changes radio stations, and in music production sound engineers are always identifying and dealing with masking when there are multiple simultaneous sources.
>I think that many of the issues relating to complex material may have a big effect on the perception of timbre or the extraction of meaning or emotion, but only a minor effect on loudness.
>
>3. Can we extend current auditory models of loudness and partial loudness to complex material? - Hard to say. The state-of-the-art models based on a deep understanding of the human hearing system (Glasberg, Moore et al.; Fastl, Zwicker et al.) were not developed with complex material in mind, and when they are used with complex material, researchers have reported good but far from great agreement with perception. Modification, while staying in agreement with auditory knowledge, shows improvement, but more research is needed.
>On the other hand, we have models based mostly on listening test data but incorporating little auditory knowledge. I'm thinking here of the EBU/ITU loudness standards. They are based largely on Gilbert Soulodre's excellent listening test results (G. Soulodre, "Evaluation of Objective Loudness Meters," 116th AES Convention, 2004), and represent a big improvement on, say, just applying a loudness contour to signal RMS. But they generally assume a fixed listening level, may overfit the data, are difficult to generalise, and rarely give deeper insight into the auditory system. Furthermore, like Moore's model, they have also shown some inadequacies when dealing with a wider range of content (Pestana, Reiss & Barbosa, "Loudness Measurement of Multitrack Audio Content Using Modifications of ITU-R BS.1770," 134th AES Convention, 2013).
>So I think rather than just extend, we may need to modify, improve, and go back to the drawing board on some aspects.
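
[Aside: the BS.1770 measurement discussed above is compact enough to sketch. Below is a minimal mono, 48 kHz illustration of its K-weighting and two-stage gating, using the filter coefficients and gate thresholds published in the recommendation; it is a sketch for orientation, not a compliance-tested meter. Running it on a programme and on its adverts gives exactly the kind of relative comparison described in question 2.]

import numpy as np
from scipy.signal import lfilter

def integrated_loudness(x, fs=48000):
    """BS.1770-style integrated loudness (LUFS) of a mono float signal at 48 kHz."""
    # K-weighting: high-shelf ("head") biquad, then high-pass (RLB) biquad.
    # Coefficients are the published 48 kHz values.
    y = lfilter([1.53512485958697, -2.69169618940638, 1.19839281085285],
                [1.0, -1.69065929318241, 0.73248077421585], x)
    y = lfilter([1.0, -2.0, 1.0],
                [1.0, -1.99004745483398, 0.99007225036621], y)
    # Mean-square power in 400 ms blocks with 75% overlap.
    blk, hop = int(0.400 * fs), int(0.100 * fs)
    nblk = max(1, (len(y) - blk) // hop + 1)
    z = np.array([np.mean(y[j*hop : j*hop + blk]**2) for j in range(nblk)])
    lufs = lambda p: -0.691 + 10.0 * np.log10(p + 1e-12)
    # Absolute gate at -70 LUFS, then a relative gate 10 LU below the mean
    # loudness of the surviving blocks (assumes non-silent input).
    z = z[lufs(z) > -70.0]
    z = z[lufs(z) > lufs(np.mean(z)) - 10.0]
    return lufs(np.mean(z))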
>
>4. How could one develop an auditory model of loudness and partial loudness for complex material?
>- Incorporate the validated aspects from prior models, but reassess any compromises.
>- Use listening test results from a wide range of complex material. Perhaps a metastudy could be performed, taking listening test results from many publications for both model creation and validation.
>- Build in known aspects of loudness perception that were left out of existing models due to resources and the fact that they were built for lab scenarios (pure tones, pink noise, sine sweeps...). In particular, I'm thinking of forward and backward masking (see the sketch below).
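
[Aside: the forward-masking point above is approximated in time-varying loudness models (e.g. Glasberg & Moore, 2002) by smoothing instantaneous loudness with a fast-attack, slow-release filter, so that the influence of a loud event decays gradually rather than vanishing instantly. A minimal sketch of that smoothing stage follows; the time constants are illustrative placeholders, not the published values.]

import numpy as np

def short_term_loudness(inst, frame_rate=1000.0, attack_ms=25.0, release_ms=200.0):
    """Asymmetric one-pole smoothing of an instantaneous-loudness track (frames/s)."""
    inst = np.asarray(inst, dtype=float)
    a = 1.0 - np.exp(-1000.0 / (frame_rate * attack_ms))   # fast rise
    r = 1.0 - np.exp(-1000.0 / (frame_rate * release_ms))  # slow decay
    out, s = np.empty_like(inst), 0.0
    for i, x in enumerate(inst):
        s += (a if x > s else r) * (x - s)
        out[i] = s
    return out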
>
>5. What about JND? - I would stay clear of this. I'm not even aware of anecdotal evidence suggesting consistency in just noticeable differences for, say, a small change in the level of one source in a mix. And I think one can be trained to identify small partial loudness differences. I've had conversations with professional mixing engineers who detect a problem with a mix that I don't notice until they point it out. But the concept of extending JND models to complex material is certainly very interesting.
>
>________________________________________
>From: ftordini@xxxxxxxx <ftordini@xxxxxxxx>
>Sent: 04 September 2014 15:45
>To: Joshua Reiss
>Subject: R: RE: perceptual evaluation of cochlear models
>
>Hello Joshua,
>Interesting, indeed. Thank you.
>
>So the question is - to what extent can we stretch the concepts of loudness
>and partial loudness for complex material such as meaningful noise (aka music),
>where attention and preference are likely to play a role, as opposed to beeps and
>sweeps? That is - would you feel comfortable giving a rule of thumb for a
>JND for partial loudness, to safely rule out other factors when mixing?
>
>I was following your intelligent mixing thread - although I've missed the
>recent one you sent me - and my question above relates to the possibility of
>actually "designing" the fore-background perception when you do automatic mixing
>using real sounds...
>I would greatly appreciate any comment from your side.
>
>Best wishes,
>Francesco
>
>
>>----Original Message----
>>From: joshua.reiss@xxxxxxxx
>>Date: 03/09/2014 16.00
>>To: "AUDITORY@xxxxxxxx"<AUDITORY@xxxxxxxx>, "Joachim Thiemann" <joachim.thiemann@xxxxxxxx>, "ftordini@xxxxxxxx"<ftordini@xxxxxxxx>
>>Subject: RE: perceptual evaluation of cochlear models
>>
>>Hi Francesco and Joachim,
>>I collaborated on a paper that involved perceptual evaluation of partial loudness with real-world audio content, where partial loudness is derived from the auditory models of Moore, Glasberg et al. It showed that the predicted loudness of tracks in multitrack musical audio disagrees with perception, but that minor modifications to a couple of parameters in the model would result in a much closer match to perceptual evaluation results. See
>>Z. Ma, J. D. Reiss and D. Black, "Partial loudness in multitrack mixing," AES 53rd International Conference on Semantic Audio, London, UK, January 27-29, 2014.
>>
>>And in the following paper there was some informal evaluation of whether Glasberg, Moore et al.'s auditory model of loudness and/or partial loudness could be used to mix multitrack musical audio. Though the emphasis was on application rather than evaluation, it also noted issues with the model when applied to real-world content. See
>>D. Ward, J. D. Reiss and C. Athwal, "Multitrack mixing using a model of loudness and partial loudness," 133rd AES Convention, San Francisco, Oct. 26-29, 2012.
>>
>>These may not be exactly what you're looking for, but hopefully you find them interesting.
>>________________________________________
>>From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxx> on behalf of Joachim Thiemann <joachim.thiemann@xxxxxxxx>
>>Sent: 03 September 2014 07:12
>>To: AUDITORY@xxxxxxxx
>>Subject: Re: perceptual evaluation of cochlear models
>>
>>Hello Francesco,
>>
>>McGill alumnus here - I did a bit of study in this direction; you can
>>read about it in my thesis:
>>http://www-mmsp.ece.mcgill.ca/MMSP/Theses/T2011-2013.html#Thiemann
>>
>>My argument was that if you have a good auditory model, you should be
>>able to start from only the model parameters and be able to
>>reconstruct the original signal with perceptual transparency. I was
>>looking at this in the context of perceptual coding - a perceptual
>>coder minus the entropy stage effectively verifies the model. If
>>artefacts do appear, they can (indirectly) tell you what you are
>>missing.
>>
>>I was specifically looking at gammatone filterbank methods, so there
>>is no comparison to other schemas - but I hope it is a bit in the
>>direction you're looking at.
>>
>>Cheers,
>>Joachim.
>>
>>On 2 September 2014 20:39, ftordini@xxxxxxxx <ftordini@xxxxxxxx> wrote:
>>>
>>> Dear List members,
>>> I am looking for references on perceptual evaluation of cochlear models -
>>> taken from an analysis-synthesis point of view, like the work introduced in
>>> Hohmann 2002 (Frequency analysis and synthesis using a Gammatone filterbank, §4.3).
>>>
>>> Are you aware of any study that tried to assess the performance of
>>> gammatone-like filterbanks used as a synthesis model? (AKA, what are the
>>> advantages over MPEG-like schemas?)
>>>
>>> All the best,
>>> Francesco
>>>
>>> http://www.cim.mcgill.ca/sre/personnel/
>>> http://ca.linkedin.com/in/ftordini
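
[Aside: Joachim's verification idea, and the original question about gammatone filterbanks as a synthesis model, can be made concrete in a few lines: analyse with an ERB-spaced gammatone filterbank, resynthesise by time-reversed filtering and summation, and measure how transparent the round trip is. The sketch below follows the spirit of Hohmann (2002) rather than its exact complex-valued formulation; the band count and filter length are illustrative, and a real perceptual coder would quantise the subband signals before resynthesis. Artefacts in the residual then point, indirectly, at what the parameterisation throws away.]

import numpy as np
from scipy.signal import fftconvolve

def erb_space(lo, hi, n):
    """n centre frequencies equally spaced on the ERB-rate scale (Glasberg & Moore, 1990)."""
    e = lambda f: 21.4 * np.log10(4.37*f/1000.0 + 1.0)
    return (10.0**(np.linspace(e(lo), e(hi), n)/21.4) - 1.0) * 1000.0/4.37

def gammatone_fir(fc, fs, dur=0.032, order=4, b=1.019):
    """Unit-energy FIR approximation of a 4th-order gammatone filter at centre fc."""
    t = np.arange(int(dur*fs)) / fs
    erb = 24.7 * (4.37*fc/1000.0 + 1.0)
    g = t**(order - 1) * np.exp(-2*np.pi*b*erb*t) * np.cos(2*np.pi*fc*t)
    return g / np.sqrt(np.sum(g**2))

def round_trip(x, fs=16000, n_bands=64):
    """Analyse/resynthesise mono float signal x; return (estimate, SNR in dB)."""
    y = np.zeros(len(x))
    for fc in erb_space(50.0, 0.4*fs, n_bands):
        h = gammatone_fir(fc, fs)
        sub = fftconvolve(x, h)[:len(x)]                            # analysis band
        y += fftconvolve(sub, h[::-1])[len(h)-1 : len(h)-1+len(x)]  # zero-phase synthesis
    y *= np.dot(x, y) / np.dot(y, y)                                # best scalar gain
    return y, 10.0*np.log10(np.sum(x**2) / np.sum((x - y)**2))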


This message came from the mail archive
http://www.auditory.org/postings/2014/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University