Subject: Re: sinfa using matlab
From: Matt Winn <mwinn83@xxxxxxxx>
Date: Tue, 29 Mar 2016 00:15:17 -0700
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

At the risk of straying yet one more post away from Skyler's original
request, I would like to echo Thomas' comment that SINFA is still worthy
of consideration. And I'm speaking as someone who has gone on record
criticizing phonetic information transfer in general. Mathias Scharinger
and others have demonstrated that phonological features are indeed
represented in the brain, in ways that cannot be explained purely by
auditory or phonetic processing. So the idea that linguistic(/distinctive)
features should be retired seems to ignore the advances made in
understanding how we process speech. Even for those of us interested in
consonant errors, decades of work using Information Transfer Analysis has
demonstrated that consonant errors cluster in groups consistent with their
phonetic features. Jont's work corroborates this as well, to some extent.

This might be a good time to simply recognize that different experimenters
can be addressing different kinds of questions – not just how noise
creates intelligibility errors. For example:

-- the integration of multiple acoustic cues, and the time course of their
integration
-- the speed with which cues activate or suppress lexical networks
-- how phonetic categories are formed through experience
-- how phonetic processing (not just intelligibility) depends on signal
quality, bandwidth, noise, etc.
-- how different degradations yield different kinds of errors
-- how auditory cues are moderated by non-auditory cues (such as visual
cues, linguistic context, and sociolinguistic cues)
-- how basic auditory (pitch, spatial, contrast enhancement, etc.)
processing supports access to linguistic information

And finally...
-- the extent to which consonant confusions for isolated syllables with
predictable vowels do or do not reflect real-world communication, or even
reflect the *information* that is contained within everyday speech
signals.

... and these are just a few topics that come to mind even without a
literature search; I am certainly leaving out a host of other noteworthy
ideas that explore phonetic perception. So I suppose it might be
acceptable to recognize SINFA and other approaches as worthwhile even if
they do not suit our specific needs.

Matt

On Mon, Mar 28, 2016 at 4:39 AM, Thomas Ulrich Christiansen <
thomas@xxxxxxxx> wrote:

> Dear Jont and List,
>
> I couldn't resist the kind invitation to discuss the analysis of consonant
> confusions...
>
> I agree with Jont that SINFA may not be the best way to analyse consonant
> confusions. I also agree that confusion data are quite complex. My
> interpretation of this is that using only one analysis method will limit
> our understanding rather than extend it.
>
> I disagree that it is time to give up on distinctive features, as they *do*
> provide insights into certain aspects of consonant confusions (e.g.
> spectral integration, as described in Christiansen and Greenberg, 2012).
>
> The fact that distinctive features are defined by production
> characteristics does not, in my view, preclude them from playing a role in
> perception. This is indeed what the data from Christiansen and Greenberg
> (2012) show. Moreover, these data cannot be explained by the AI, which is
> why I argue that we need to be open to different ways of analysis and
> interpretation (and even open to degrading the speech signal by means
> other than noise - e.g. band-pass filtering).
>
> Now, I do recognize the vast amount of valuable AI work and the attention
> it has received. All I am saying is that perhaps it is time to *also* pay
> attention to alternatives.
>
> This is my poppyseed-free two cents...
>
> Reference:
> Christiansen, T.U. and Greenberg, S. (2012). Perceptual confusions among
> consonants, revisited - Cross-spectral integration of phonetic-feature
> information and consonant recognition. IEEE Trans. Audio, Speech and Lang.
> Proc. 20: 147-161.
>
> Best regards,
> Thomas Ulrich Christiansen, PhD
> Senior Research and Development Engineer
> Audiological Requirements
> Audiology and Embedded Solutions
>
> Oticon A/S
> Kongebakken 9
> DK-2765 Smørum
> Denmark
>
> Direct: +45 3913 7675
> Main: +45 3917 7100
> Mail: thch@xxxxxxxx
> Web: www.oticon.com
>
>
> Jont Allen wrote on 27-03-2016 14:46:
>
>> Dear All,
>>
>> My comment is not about HOW to get SINFA working, but WHY you would
>> want to get it working.
>>
>> Since 1973 we have learned a great deal about phone identification by
>> normal and hearing-impaired listeners. Bob Bilger was a good friend,
>> and his work represented an important stepping stone along the path
>> toward building a realistic and correct understanding of human speech
>> processing. But today, in my view, SINFA is not a viable way to analyze
>> human speech errors. One of the problems with the 1973 analysis was due
>> to the limitations of computers in 1973. All the responses were averaged
>> over the two main effects, tokens and SNR. This renders the results
>> uninterpretable.
>>
>> Please share with us your thoughts on what the best methods are today,
>> given what we now know. And I would be happy to do the same.
>>
>> My view:
>>
>> I would suggest you look at the alternatives, such as confusion
>> patterns (a confusion pattern is one row of a confusion matrix, plotted
>> as a function of SNR), and, most importantly, go down to the token
>> level. It is time to give up on distinctive features.
>> They are a production concept, great at classifying different types of
>> speech productions, but they do not properly get at what human
>> listeners do, especially those with hearing loss, when reporting
>> individual consonants. Bilger and Wang make these points in their JSHR
>> article. They emphasize individual differences among HI listeners
>> (p. 737), and the secondary role of distinctive features (p. 724) and
>> of hearing level (p. 737). I do not think that multidimensional scaling
>> can give the answers to these questions, as it only works for a limited
>> number of dimensions (2 or 3). Actual confusion data, as a function of
>> SNR, are too complex for a 2-3 dimension analysis.
>>
>> Here are some pointers I suggest you consider, that describe how
>> humans decode CV sounds as a function of the SNR.
>>
>> The Singh analysis explains why and how the articulation index (AI)
>> works.
>> The Trevino article shows the very large differences in consonant
>> perception in impaired ears. Hearing loss leads to large individual
>> differences that are uncorrelated with hearing thresholds.
>> The Toscano article is a good place to start.
>>
>> * Toscano, Joseph and Allen, Jont B (2014). "Across- and within-
>> consonant errors for isolated syllables in noise," Journal of Speech,
>> Language, and Hearing Research, Vol. 57, pp. 2293-2307;
>> doi:10.1044/2014_JSLHR-H-13-0244 (JSLHR [6], pdf [7], AuthorCopy [8])
>>
>> * Trevino, Andrea C and Allen, Jont B (2013). "Within-Consonant
>> Perceptual Differences in the Hearing Impaired Ear," JASA, v134(1),
>> Jul. 2013, pp. 607-617 (pdf [9])
>>
>> * Singh, Riya and Allen, Jont (2012). "The influence of stop
>> consonants' perceptual features on the Articulation Index model," J.
>> Acoust. Soc. Am., Apr., v131, pp. 3051-3068 (pdf [10])
>>
>> These two publications describe the speech cues normal-hearing
>> listeners use when decoding CV sounds.
>> Each token has a threshold we call SNR_90, defined as the SNR where
>> the errors go from zero to 10%. Most speech sounds are below the
>> Shannon channel capacity limit, where there are zero errors, until the
>> SNR drops to the token's error threshold.
>>
>> Distinctive features are not a good description of phone perception.
>> The real speech cues are revealed in these papers, and each token has
>> an SNR_90. Bilger and Wang discuss this problem on page 724 of their
>> 1973 JSHR article.
>>
>> * Li, F., Trevino, A., Menon, A. and Allen, Jont B (2012). "A
>> psychoacoustic method for studying the necessary and sufficient
>> perceptual cues of American English fricative consonants in noise," J.
>> Acoust. Soc. Am., v132(4), Oct., pp. 2663-2675 (pdf [11])
>>
>> * Li, F., Menon, A. and Allen, Jont B (2010). "A psychoacoustic method
>> to find the perceptual cues of stop consonants in natural speech,"
>> J. Acoust. Soc. Am., Apr., pp. 2599-2610 (pdf [12])
>>
>> If you want to see another view, other than mine, read this, for
>> starters:
>>
>> Zaar and Dau (2015). JASA, vol. 138, pp. 1253-1267
>>
>> http://scitation.aip.org/content/asa/journal/jasa/138/3/10.1121/1.4928142
>> [13]
>>
>> Jont Allen
>>
>> On 03/26/2016 10:44 AM, gvoysey wrote:
>>
>>> I have not tried this, but I am willing to bet you can get FIX
>>> running on a modern PC with DOSBox [4], which is a cross-platform
>>> MS-DOS emulator. It's most famous for letting you play very old
>>> video games in your web browser (http://playdosgamesonline.com/
>>> [5]), but there's no reason it shouldn't work just as well for
>>> Real Work.
>>>
>>> -graham
>>>
>>> On Sat, Mar 26, 2016 at 5:06 AM, David Jackson Morris
>>> <dmorris@xxxxxxxx> wrote:
>>>
>>>> Dear Skyler,
>>>>
>>>> I have been on a similar search and found an R package by David
>>>> van Leeuwen that is available on GitHub. Please let me know if
>>>> you find any other alternatives?
>>>>
>>>> FIX is really awesome, but every time I want to use it I have to
>>>> go over to Granny's and boot the Win 95 machine, and she makes me
>>>> eat poppyseed cake, which makes my tummy sore...
>>>>
>>>> Cheers
>>>>
>>>> DAVID JACKSON MORRIS, PHD
>>>>
>>>> KØBENHAVNS UNIVERSITET/UNIVERSITY OF COPENHAGEN
>>>>
>>>> INSS/Audiologopædi/Speech Pathology & Audiology
>>>> Bygning 22, 5. sal
>>>>
>>>> Njalsgade 120
>>>>
>>>> 2300 København S
>>>>
>>>> Office 22.5.14
>>>>
>>>> TLF 35328660
>>>> dmorris@xxxxxxxx
>>>>
>>>> University website [1]
>>>>
>>>> -------------------------
>>>>
>>>> FROM: AUDITORY - Research in Auditory Perception
>>>> [AUDITORY@xxxxxxxx] on behalf of Skyler Jennings
>>>> [Skyler.Jennings@xxxxxxxx]
>>>> SENT: Friday, March 25, 2016 9:15 PM
>>>> TO: AUDITORY@xxxxxxxx
>>>> SUBJECT: sinfa using matlab
>>>>
>>>> Dear list,
>>>>
>>>> I am writing in search of MATLAB-based software that performs
>>>> sequential information transfer (SINFA; Wang and Bilger, 1973). I
>>>> am impressed with the quality of the DOS-based software maintained
>>>> by UCL called "FIX"; however, it would be more convenient to
>>>> do the analysis in MATLAB if possible.
>>>>
>>>> I appreciate any help you can offer, whether it be guiding me to
>>>> publicly available software, or sharing software that you've
>>>> developed.
>>>>
>>>> Sincerely,
>>>>
>>>> Skyler
>>>>
>>>> --
>>>>
>>>> Skyler G. Jennings, Ph.D., Au.D.
>>>> CCC-A
>>>>
>>>> Assistant Professor
>>>>
>>>> Department of Communication Sciences and Disorders
>>>>
>>>> College of Health, University of Utah
>>>>
>>>> 390 South 1530 East
>>>>
>>>> Suite 1201 BEHS
>>>>
>>>> Salt Lake City, UT 84112
>>>>
>>>> 801-581-6877 [2] (phone)
>>>>
>>>> 801-581-7955 [3] (fax)
>>>>
>>>> skyler.jennings@xxxxxxxx
>>>
>>> --
>>>
>>> Graham Voysey
>>> Boston University College of Engineering
>>> HRC Research Engineer
>>> Auditory Biophysics and Simulation Laboratory
>>> ERB 413
>>
>>
>> Links:
>> ------
>> [1] https://urldefense.proofpoint.com/v2/url?u=http-3A__forskning.ku.dk_find-2Den-2Dforsker_-3Fpure-3Dda-252Fpersons-252Fdavid-2Djackson-2Dmorris-2865eea758-2D6dd2-2D4783-2Dae28-2Deef3d5ef83ce-29.html&d=BQMFaQ&c=8hUWFZcy2Z-Za5rBPlktOQ&r=N7KKV9mcvQqNgAal48W_vzPUNrKl5mBxlJo8xP9z028&m=AQ_tsotHEkEP4CuE50mpAXGNS5ekvVC321rWDo1X6Vs&s=SP20p9UskD0LOFatpHoojsCUumO5ha0JSvXabOQe8uo&e=
>> [2] tel:801-581-6877
>> [3] tel:801-581-7955
>> [4] https://urldefense.proofpoint.com/v2/url?u=http-3A__www.dosbox.com_&d=BQMFaQ&c=8hUWFZcy2Z-Za5rBPlktOQ&r=N7KKV9mcvQqNgAal48W_vzPUNrKl5mBxlJo8xP9z028&m=AQ_tsotHEkEP4CuE50mpAXGNS5ekvVC321rWDo1X6Vs&s=bfDR3yzi298jK3qIXb9EjBuUZV6Ywvl6JFL4K_XWWdk&e=
>> [5] https://urldefense.proofpoint.com/v2/url?u=http-3A__playdosgamesonline.com_&d=BQMFaQ&c=8hUWFZcy2Z-Za5rBPlktOQ&r=N7KKV9mcvQqNgAal48W_vzPUNrKl5mBxlJo8xP9z028&m=AQ_tsotHEkEP4CuE50mpAXGNS5ekvVC321rWDo1X6Vs&s=Cqht_GtwPnX_rGl46sGlvPWkwpH3SQzkLvtQAopRX-g&e=
>> [6] http://jslhr.pubs.asha.org/Article.aspx?articleid=1894924
>> [7] http://173.161.115.245/Public/ToscanoAllenJSLHR.14.pdf
>> [8] http://173.161.115.245/Public/Toscano-Allen-JSLHR-2014.pdf
>> [9] http://173.161.115.245/Public/TrevinoAllenJul.13.pdf
>> [10] http://173.161.115.245/Public/SinghAllen12.pdf
>> [11] http://173.161.115.245/Public/LiTrevinoMenonAllen12.pdf
>> [12]
>> http://173.161.115.245/Public/LiMenonAllen10.pdf
>> [13]
>> http://scitation.aip.org/content/asa/journal/jasa/138/3/10.1121/1.4928142
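
[Editor's note] Since the thread keeps returning to what SINFA actually computes, a compact illustration may help: the quantity SINFA (Wang and Bilger, 1973) evaluates feature-by-feature is Miller and Nicely's relative transmitted information for a partition of the confusion matrix. The sketch below is in Python/NumPy rather than the requested MATLAB, purely as an illustration; the function name and toy matrices are mine, not taken from FIX or any package mentioned above, and it shows only the single-feature building block.

```python
import numpy as np

def feature_transfer(confusions, feature):
    """Relative information transmitted by one binary feature.

    confusions : (n, n) array of response counts, rows = stimuli.
    feature    : length-n array of 0/1 feature values, one per consonant.

    The consonant matrix is collapsed onto the two feature classes, and
    Miller & Nicely's transmitted information is computed, normalized by
    the stimulus entropy so the result lies between 0 and 1.
    """
    confusions = np.asarray(confusions, dtype=float)
    feature = np.asarray(feature)
    # Collapse the n x n consonant matrix to a 2 x 2 feature-class matrix.
    m = np.array([[confusions[np.ix_(feature == a, feature == b)].sum()
                   for b in (0, 1)] for a in (0, 1)])
    p = m / m.sum()                       # joint probabilities
    px = p.sum(axis=1, keepdims=True)     # stimulus-class marginals
    py = p.sum(axis=0, keepdims=True)     # response-class marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.nansum(p * np.log2(p / (px * py)))   # transmitted bits
    hx = -np.sum(px * np.log2(px))        # stimulus entropy in bits
    return t / hx

# Toy example: 4 consonants; the feature splits them into {0,1} vs {2,3}.
voicing = [0, 0, 1, 1]
perfect = np.eye(4) * 10       # every response correct
chance = np.ones((4, 4))       # responses unrelated to stimuli
print(feature_transfer(perfect, voicing))   # 1.0 - fully transmitted
print(feature_transfer(chance, voicing))    # 0.0 - nothing transmitted
```

Full SINFA goes further: after the feature with the highest relative transfer is selected, the analysis is repeated while holding that feature's classes fixed, so that information shared among correlated features is not counted twice.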