Re: [AUDITORY] Registered reports (Frederick Gallun)


Subject: Re: [AUDITORY] Registered reports
From:    Frederick Gallun  <fgallun@xxxxxxxx>
Date:    Tue, 12 Jun 2018 20:46:21 +0200
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

I will add a comment on Les' point about the unfamiliarity of replication crises and failures to publish null results in some areas of hearing science. This is relevant to the registered reports question because it is important to note that psychophysics is not in a replication crisis: when a model prediction fails in a psychophysical laboratory, everyone is still interested in knowing about it. What, then, is the difference between psychophysics and other areas of psychology, other than what is being studied?

A compelling answer is given by a recent paper on the power of small-n repeated-measures designs (Smith, P.L. & Little, D.R., Psychon Bull Rev (2018), https://doi.org/10.3758/s13423-018-1451-8). The authors argue that the replication crisis is not going to be solved by overpowering all of our experiments, as some have proposed. Instead, we should look to the methods of psychophysics, in which the individual participant is the replication unit, theories are quantitative and make mathematical predictions, and hypothesis testing is thus on much firmer ground.

So, what makes psychophysics so useful as a model, and why don't we see failures of replication weakening our theories of auditory perception? Smith and Little might say it is because 1) we work hard to find and use measurement instruments that appear to be monotonically related to the psychological entity we are trying to understand (e.g., intensity perception or binaural sensitivity), 2) we spend a lot of time coming up with theories that can be formulated mathematically, so that the hypothesis to be tested takes the form of a mathematical prediction, and 3) these model predictions are expressed directly at the individual level. The last piece is extremely important, because it gives a level of control over error variance that is nearly impossible to achieve at the level of group effects. The Smith and Little article is not particularly surprising to those of us used to controlling variance by repeatedly testing our participants until they are well practiced at the task, and only then introducing variations in the tasks or stimuli that we expect to produce specific effects at the level of the individual participant.

This approach is not common in the areas of psychology suffering from the replication crisis. Consequently, the common suggestion has been to increase the number of participants rather than to question the wisdom of large-n designs with ordinal hypotheses, based on theories that cannot be described mathematically and on measurement instruments designed more for convenience than for a monotonic relationship to the putative psychological entity under test. As Smith and Little argue, this is an opportunity to change the field of scientific psychology in a very positive way, and the path is to increase sample size at the participant level, through repeated testing across multiple theoretically connected conditions, rather than at the group level.
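To make point 3 concrete, here is a minimal sketch of what individual-level model testing can look like (Python with numpy and scipy; the listener count, stimulus levels, noise magnitudes, and the predicted slope are all hypothetical values invented for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    levels = np.array([2, 4, 8, 16])   # hypothetical masker levels
    predicted_slope = 1.0              # model prediction: +1 dB threshold per doubling

    # Each well-practiced listener is a replication unit: fit the model
    # per listener and test the quantitative prediction per listener.
    for listener in range(8):
        true_slope = 1.0 + rng.normal(0, 0.05)   # small individual variation
        thresholds = true_slope * np.log2(levels) + rng.normal(0, 0.1, levels.size)
        slope, intercept, r, p, se = stats.linregress(np.log2(levels), thresholds)
        t = (slope - predicted_slope) / se       # does this listener match the model?
        p_model = 2 * stats.t.sf(abs(t), df=levels.size - 2)
        print(f"listener {listener}: slope = {slope:.2f} (predicted 1.0), p = {p_model:.2f}")

Every listener for whom the prediction holds is an internal replication, and a single listener who reliably deviates from the model is informative in a way that a noisy group mean is not.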
As a psychophysicist who works with clinical populations (and an editor and reviewer of many clinical research manuscripts), I find this question very relevant, because those who work with patients are much more likely to come from a background of large-n designs, where experimental rigor is associated with assigning each participant to a single condition and comparing groups. In that case, it is obviously important to have as many participants in each group as possible and to make each participant as similar to the others as possible. This often leads to enormous expenditures of time and effort in recruiting according to very strict inclusion criteria. For practical reasons, either the inclusion criteria or the sample size becomes an almost impossible barrier to achieving the designed experiment. The result is that, unless both money and time are in great supply, the study ends up being underpowered.

From this perspective, I see the registered report as a useful way to have the discussion about the most powerful methods before large amounts of time and resources have been devoted to the study, and I would encourage those with expertise in controlling error variance and experience in developing robust tools to do their best to bring this knowledge to the other areas of the field in as constructive a manner as possible. I would hope that the registered report could be a vehicle for this discussion.
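To see how quickly the large-n route becomes impractical, here is a rough sketch using the standard normal approximation for two-group power (the effect sizes below are illustrative assumptions, not data from any study):

    import numpy as np
    from scipy import stats

    def n_per_group(d, alpha=0.05, power=0.80):
        """Approximate n per group for an independent two-group comparison."""
        z_alpha = stats.norm.ppf(1 - alpha / 2)
        z_power = stats.norm.ppf(power)
        return int(np.ceil(2 * (z_alpha + z_power) ** 2 / d ** 2))

    print(n_per_group(0.4))    # -> 99 per group for a modest effect
    print(n_per_group(0.25))   # -> 252 per group, rarely feasible under
                               #    strict clinical inclusion criteria

With strict inclusion criteria, recruiting hundreds of qualifying patients per group is rarely possible, and the study often runs anyway, underpowered.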
Erick Gallun

Frederick (Erick) Gallun, PhD
Research Investigator, VA RR&D National Center for Rehabilitative Auditory Research
Associate Professor, Oregon Health & Science University
Editor in Chief - Hearing, Journal of Speech, Language, and Hearing Research
http://www.ncrar.research.va.gov/AboutUs/Staff/Gallun.asp

On Tue, Jun 12, 2018 at 6:16 AM Les Bernstein <lbernstein@xxxxxxxx> wrote:

> I agree with Ken and Roger. It's neither clear that the current system
> falls short nor that RRs would, effectively, solve any such problem. To
> the degree there is a problem, I fail to see how making RRs VOLUNTARY would
> serve as an effective remedy or, voluntary or not, serve to increase
> "standards of publication." If people wish to have the option, that sounds
> benign enough, save for the extra work required of reviewers.
>
> As suggested by Matt, I tried to think of the "wasted hours spent by
> investigators who repeat the failed methods of their peers and
> predecessors, only because the outcomes of failed experiments were never
> published." Across the span of my career, for me and for those with whom
> I've worked, I can't identify that such wasted hours have been spent. As
> Ken notes, well-formed, well-motivated experiments employing sound methods
> should be (and are) published.
>
> Likewise, re Matt's comments, I cannot recall substantial instances of
> scientists "who cling to theories based on initial publications of work
> that later fails replication, but where those failed replications never get
> published." Au contraire. I can think of quite a few cases in which
> essential replication failed, those findings were published, and the field
> was advanced. I don't believe that many of us are clinging to theories
> that would be invalid but for the publication of failed replications.
> Theories gain status via converging evidence.
>
> It seems to me that what some are arguing for would, essentially, be an
> auditory version of the Journal of Negative Results
> (https://en.wikipedia.org/wiki/Journal_of_Negative_Results_in_Biomedicine).
>
> Still, if some investigators wish to have the RR option and journals are
> willing to offer it, then, by all means, have at it. The proof of the
> pudding will be in the tasting.
>
> Les
>
> On 6/9/2018 5:13 AM, Roger Watt wrote:
>
> 3 points:
>
> 1. The issue of RR is tied up with the logic of null hypothesis testing.
> There are only two outcomes for null hypothesis testing: (i) a tentative
> conclusion that the null hypothesis should be regarded as inconsistent with
> the data, and (ii) no conclusion about the null hypothesis can be reached
> from the data. Neither outcome refers to the alternative hypothesis, which
> is never tested. A nice idea in the literature is the counter-null. If I
> have a sample of 42 and an effect size of 0.2 (r-family), then my result is
> not significant: it is not inconsistent with a population effect size of 0.
> It is equally not inconsistent with the counter-null, a population effect
> size of ~0.4. It is less inconsistent with all population effect sizes in
> between the null and the counter-null. (NHST forces all these double
> negatives.)
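To check Roger's arithmetic directly, here is a minimal sketch (Python with numpy and scipy; the counter-null for an r-family effect is computed on Fisher's z scale, the usual convention):

    import numpy as np
    from scipy import stats

    n, r = 42, 0.2
    t = r * np.sqrt((n - 2) / (1 - r**2))      # t statistic for a correlation
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    print(f"p = {p:.2f}")                      # ~0.20: not significant

    # Counter-null: the effect size the data support exactly as well as r = 0
    r_counternull = np.tanh(2 * np.arctanh(r))
    print(f"counter-null r = {r_counternull:.2f}")   # ~0.38, i.e., roughly 0.4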
> 2. The current system of publish-when-p<0.05 is easy to game, hence all
> the so-called questionable practices. Any new system, like RR, will in due
> course become easy to game. By a long shot, the easiest (invalid) way to
> get an inflated effect size and an inappropriately small p is to test more
> participants than needed and keep only the "best" ones. RR will not prevent
> that.
>
> 3. NHST assumes random sampling, which no one achieves. The forms of
> sampling we use in reality are all possibly subject to issues of
> non-independence of participants, which leads to Type I error rates (false
> positives) that are well above 5%.
>
> None of this is to argue against RR, just to observe that it doesn't
> resolve many of the current problems. Any claim that it does is in itself
> a kind of Type I error, and Type I errors are very difficult to eradicate
> once accepted.
>
> Roger Watt
> Professor of Psychology
> University of Stirling
>
> From: AUDITORY - Research in Auditory Perception
> [mailto:AUDITORY@xxxxxxxx] On Behalf Of Ken Grant
> Sent: 09 June 2018 06:19
> To: AUDITORY@xxxxxxxx
> Subject: Re: Registered reports
>
> Why aren't these "failed" experiments published? What's the definition of
> a failed experiment, anyway?
>
> I think that if the scientific question is well formed and well motivated
> AND the methods sound and appropriate for addressing the question, then
> whatever the result may be, this seems like a good experiment and one that
> should be published.
>
> Sent from my iPhone
>
> Ken W. Grant, PhD
> Chief, Scientific and Clinical Studies
> National Military Audiology and Speech-Pathology Center (NMASC)
> Walter Reed National Military Medical Center
> Bethesda, MD 20889
> kenneth.w.grant.civ@xxxxxxxx
> ken.w.grant@xxxxxxxx
> Office: 301-319-7043
> Cell: 301-919-2957
>
> On Jun 9, 2018, at 12:48 AM, Matthew Winn <mwinn2@xxxxxxxx> wrote:
>
> The view that RRs will stifle progress is both true and false. While the
> increased load of advance registration and rigidity in methods would, as
> Les points out, become burdensome for most of our basic work, there is
> another side to this. This is not a matter of morals (hiding a bad result,
> or fabricating a good result) or of how we do our experiments. It's a matter
> of the standards of *publication*, which you will notice was the scope of
> Tim's original call to action. In general, we only ever read about
> experiments that came out well (and not the ones that didn't). If there is
> a solution to that problem, then we should consider it, or at least
> acknowledge that some solution might be needed. This is partly the culture
> of scientific journals, and partly the culture of the institutions that
> employ us. There's no need to question anybody's integrity in order to
> appreciate some benefit of RRs.
>
> Think for a moment about the wasted hours spent by investigators
> who repeat the failed methods of their peers and predecessors, only because
> the outcomes of failed experiments were never published. Or those of us who
> cling to theories based on initial publications of work that later fails
> replication, but where those failed replications never get published. THIS
> stifles progress as well. If results were reported whether or not they
> came out as planned, we'd have a much more complete picture of the
> evidence for and against the ideas. Julia's story also resonates with me;
> we've all reviewed papers where we've thought "if only the authors had
> sought input before running this labor-intensive study, the data would be
> so much more valuable."
>
> The arguments against RRs in this thread appear in my mind to be arguments
> against *compulsory* RRs for *all* papers in *all* journals, which takes
> the discussion off course. I have not heard such radical calls. If you
> don't want to do an RR, then don't do it. But perhaps we can appreciate the
> goals of RR and see how those goals might be realized with practices that
> suit our own fields of work.
>
> Matt
>
> --------------------------------------------------------------
> Matthew Winn, Au.D., Ph.D.
> Assistant Professor
> Dept. of Speech & Hearing Sciences
> University of Washington
>
> ------------------------------
> The University achieved an overall 5 stars in the QS World University
> Rankings 2018
> The University of Stirling is a charity registered in Scotland, number
> SC 011159.
>
> --
> Leslie R. Bernstein, Ph.D. | Professor
> Depts. of Neuroscience and Surgery (Otolaryngology) | UConn School of Medicine
> 263 Farmington Avenue, Farmington, CT 06030-3401
> Office: 860.679.4622 | Fax: 860.679.2495

