Re: [AUDITORY] Registered reports (Valeriy Shafiro)


Subject: Re: [AUDITORY] Registered reports
From:    Valeriy Shafiro  <firosha@xxxxxxxx>
Date:    Wed, 13 Jun 2018 23:58:13 -0500
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear all,

As a latecomer to this thread, it is pretty clear that opinions on the value of RR to auditory research differ quite a bit (and why wouldn't they?). I don't know what support for RR from 30 people that Tim mentioned really means in terms of the number of people on this list or potential contributors to auditory journals. Do we know that journals that currently offer RR publish better quality work than before? How and when would we know if that is the case? It also occurs to me that many of the classic studies in auditory and speech research from the first half to mid 20th century may not have passed peer review now in the journals in which they were initially published. But even though there were methodological aspects that were problematic in those early studies, they led to significant advances in the field, provided conceptual frameworks, and their results mostly held up pretty well, as Erik noted. Given that reviewers generally tend to favor more conservative choices, I worry that one unintended consequence of making a formal peer review of initial study design and protocols a "soft" requirement will be an incentive for mediocre studies that have well-specified protocols but don't really answer important or interesting questions.

That said, there certainly could be advantages to pre-registering study protocols, for example for clinical trials (e.g., clinicaltrials.gov in the US), which is a big step forward in many ways. The problem with RRs as presented is how they link with future publications. The way it is described in the letter Tim shared can be interpreted in a number of ways by different reviewers and editors, just as it was on this list, i.e.:

"High quality pre-registered protocols that meet strict editorial criteria are then offered in principle acceptance, which guarantees publication of the results provided authors adhere to their pre-registered protocol, and provided various pre-specified quality standards are achieved in the final outcome."

There seems to be plenty of room in this sentence for most of the biases to creep right back in, during either the initial or the second stage of peer review. As an alternative, why not have a system where authors can optionally preregister their protocols and perhaps choose to open them up for comments from their peers? When a final manuscript is submitted for publication, this protocol can be referenced, providing continuity and making any changes in thinking and in doing the study transparent. In this case there would still be only a single peer review, and perhaps some acute embarrassment at missing something very obvious in retrospect. Having to think through the key method and analysis questions during the preregistration process may encourage those who submit to address some of the potential problems with interpreting negative results or having underpowered studies. Perhaps that can be the middle ground.

Cheers,

Valeriy

On Tue, Jun 12, 2018 at 4:56 AM, Bastiaan van der Weij <bjvanderweij@xxxxxxxx> wrote:

> Dear all,
>
> My experience with conducting experimental research is very limited, but with this hedge in place, maybe the following perspective on some of the points raised in this interesting thread is of some use.
>
>> I think that if the scientific question is well formed and well motivated AND the methods sound and appropriate for addressing the question, then whatever the result may be, this seems like a good experiment and one that should be published.
>
> Isn't this precisely what registered reports aim to achieve? The underlying assumption is that in the current system, whether the results of a study are significant affects the likelihood that the study will be published. I think this discussion is not so much about the integrity of individual researchers and reviewers as it is about the incentives inherent in publishing and academia in general.
>
> In theory, the rate of Type I errors should be smaller than or equal to the significance level used, but among published findings it isn't. This is arguably problematic, although perhaps not for well-educated readers who have been taught never to believe a single study (on the other hand, not all journalists have been taught the same lesson). There may be many reasons for an elevated Type I error rate, including the points raised by Roger. Whether you believe registered reports are likely to alleviate the problem (if you accept there is a problem) depends on the causes you attribute to the problem. It seems plausible that some of these causes are so-called "p-hacking" practices and a bias towards significant results in publishing (if, out of a set of equally well-designed studies, the ones with significant results are more likely to be published, the Type I error rate among the published studies will be elevated), both of which may result from perfectly honest research and reviewing combined with the wrong incentives. Registered reports cannot address all ways of gaming the system, but will likely reduce the incentive for p-hacking and eliminate the bias towards significant results among published (pre-registered) findings.
>
> As has already been stressed, this is not to say that all studies should be pre-registered or that only pre-registered studies should be taken seriously, but seeing that a study has been pre-registered, even if pre-registering a study is voluntary and rare, helps the reader assess its results.
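[Editor's note: the parenthetical claim above — that if significant results are more likely to be published, the Type I error rate among published studies is elevated — can be made concrete with a small simulation. This is purely illustrative and not from the thread; the study count and the assumed 10% publication rate for null results are made-up numbers.]

```python
import random

random.seed(1)

# Illustrative scenario: 10,000 studies of a true null effect.
# Under the null hypothesis, p-values are uniform on [0, 1].
ALPHA = 0.05
n_studies = 10_000
p_values = [random.random() for _ in range(n_studies)]

# Among ALL studies run, the false-positive rate matches the nominal level.
false_pos_all = sum(p < ALPHA for p in p_values) / n_studies

# Publication filter (assumed, illustrative bias): significant results are
# always published, null results only 10% of the time.
published = [p for p in p_values if p < ALPHA or random.random() < 0.10]
false_pos_published = sum(p < ALPHA for p in published) / len(published)

print(f"Type I rate among all studies run:   {false_pos_all:.3f}")
print(f"Type I rate among published studies: {false_pos_published:.3f}")
```

With these assumed numbers, roughly 500 significant and 950 non-significant studies get published, so about a third of the published literature consists of false positives even though every individual study honestly used a 5% test.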
> That is in addition to the potential benefits Julia pointed out of receiving peer-review feedback on your methods alone, in addition to peer-review feedback on your results and interpretation of the results later.
>
> On the other hand, those who do have the kind of getting-your-hands-dirty experience with empirical research and statistics in the wild that I lack might agree with Les that, in practice, only very few or very uninteresting studies would qualify to benefit from being pre-registered. Then again, maybe that is how it should be: findings that we can be truly confident about are few and boring.
>
> Best wishes,
>
> Bastiaan
>
> On Mon, Jun 11, 2018 at 4:55 PM, Les Bernstein <lbernstein@xxxxxxxx> wrote:
>
>> I agree with Ken and Roger. It's neither clear that the current system falls short nor that RRs would, effectively, solve any such problem. To the degree there is a problem, I fail to see how making RRs VOLUNTARY would serve as an effective remedy or, voluntary or not, serve to increase "standards of publication." If people wish to have the option, that sounds benign enough, save for the extra work required of reviewers.
>>
>> As suggested by Matt, I tried to think of the "wasted hours spent by investigators who repeat the failed methods of their peers and predecessors, only because the outcomes of failed experiments were never published." Across the span of my career, for me and for those with whom I've worked, I can't identify that such wasted hours have been spent. As Ken notes, well-formed, well-motivated experiments employing sound methods should be (and are) published.
>>
>> Likewise, re Matt's comments, I cannot recall substantial instances of scientists "who cling to theories based on initial publications of work that later fails replication, but where those failed replications never get published." Au contraire. I can think of quite a few cases in which essential replication failed, those findings were published, and the field was advanced. I don't believe it is the case that many of us are clinging to theories that are invalid but for the publication of failed replications. Theories gain status via converging evidence.
>>
>> It seems to me that what some are arguing for would, essentially, be an auditory version of The Journal of Negative Results (https://en.wikipedia.org/wiki/Journal_of_Negative_Results_in_Biomedicine).
>>
>> Still, if some investigators wish to have the RR option and journals are willing to offer it, then, by all means, have at it. The proof of the pudding will be in the tasting.
>>
>> Les
>>
>> On 6/9/2018 5:13 AM, Roger Watt wrote:
>>
>> 3 points:
>>
>> 1. The issue of RR is tied up with the logic of null hypothesis testing. There are only two outcomes for null hypothesis testing: (i) a tentative conclusion that the null hypothesis should be regarded as inconsistent with the data, and (ii) no conclusion about the null hypothesis can be reached from the data. Neither outcome refers to the alternative hypothesis, which is never tested. A nice idea in the literature is the counter-null. If I have a sample of 42 and an effect size of 0.2 (r-family), then my result is not significant: it is not inconsistent with a population effect size of 0. It is equally not inconsistent with the counter-null, a population effect size of ~0.4. It is less inconsistent with all population effect sizes in between the null and the counter-null. (NHST forces all these double negatives.)
>>
>> 2. The current system of publish when p<0.05 is easy to game, hence all the so-called questionable practices. Any new system, like RR, will in due course become easy to game.
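[Editor's note: going back to Roger's point 1, the counter-null arithmetic checks out numerically. A minimal sketch, using only the standard library; the two-sided 5% critical value for 40 degrees of freedom is hard-coded from a t table rather than computed, and the counter-null is taken symmetrically in Fisher-z space, following Rosenthal and Rubin's definition for r-family effect sizes.]

```python
import math

def t_statistic(r: float, n: int) -> float:
    """t statistic for testing H0: population correlation = 0."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

def counternull_r(r: float) -> float:
    """Counter-null effect size for r, symmetric in Fisher-z space:
    z_counternull = 2 * z_observed."""
    return math.tanh(2.0 * math.atanh(r))

r, n = 0.2, 42
t = t_statistic(r, n)        # ~1.29
T_CRIT_DF40 = 2.021          # tabulated two-sided 5% critical t, df = n - 2 = 40

print(f"t = {t:.2f}, significant: {abs(t) > T_CRIT_DF40}")  # t = 1.29, significant: False
print(f"counter-null effect size: {counternull_r(r):.2f}")   # 0.38
```

So r = 0.2 with n = 42 is indeed not significant, and the data are equally consistent with a population effect size of about 0.38 — the "~0.4" in Roger's example.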
>> By a long shot, the easiest (invalid) way to get an inflated effect size and an inappropriately small p is to test more participants than needed and keep only the "best" ones. RR will not prevent that.
>>
>> 3. NHST assumes random sampling, which no one achieves. The forms of sampling we use in reality are all possibly subject to issues of non-independence of participants, which leads to Type I error rates (false positives) that are well above 5%.
>>
>> None of this is to argue against RR, just to observe that it doesn't resolve many of the current problems. Any claim that it does is, in itself, a kind of Type I error, and Type I errors are very difficult to eradicate once accepted.
>>
>> Roger Watt
>> Professor of Psychology
>> University of Stirling
>>
>> *From:* AUDITORY - Research in Auditory Perception [mailto:AUDITORY@xxxxxxxx] *On Behalf Of* Ken Grant
>> *Sent:* 09 June 2018 06:19
>> *To:* AUDITORY@xxxxxxxx
>> *Subject:* Re: Registered reports
>>
>> Why aren't these "failed" experiments published? What's the definition of a failed experiment, anyway?
>>
>> I think that if the scientific question is well formed and well motivated AND the methods sound and appropriate for addressing the question, then whatever the result may be, this seems like a good experiment and one that should be published.
>>
>> Sent from my iPhone
>>
>> Ken W. Grant, PhD
>> Chief, Scientific and Clinical Studies
>> National Military Audiology and Speech-Pathology Center (NMASC)
>> Walter Reed National Military Medical Center
>> Bethesda, MD 20889
>> kenneth.w.grant.civ@xxxxxxxx
>> ken.w.grant@xxxxxxxx
>> Office: 301-319-7043
>> Cell: 301-919-2957
>>
>> On Jun 9, 2018, at 12:48 AM, Matthew Winn <mwinn2@xxxxxxxx> wrote:
>>
>> The view that RRs will stifle progress is both true and false. While the increased load of advance registration and rigidity in methods would, as Les points out, become burdensome for most of our basic work, there is another side to this. This is not a matter of morals (hiding a bad result, or fabricating a good result) or how to do our experiments. It's a matter of the standards of *publication*, which you will notice was the scope of Tim's original call to action. In general, we only ever read about experiments that came out well (and not the ones that didn't). If there is a solution to that problem, then we should consider it, or at least acknowledge that some solution might be needed. This is partly the culture of scientific journals, and partly the culture of the institutions that employ us. There's no need to question anybody's integrity in order to appreciate some benefit of RRs.
>>
>> Think for a moment about the number of wasted hours spent by investigators who repeat the failed methods of their peers and predecessors, only because the outcomes of failed experiments were never published. Or those of us who cling to theories based on initial publications of work that later fails replication, but where those failed replications never get published. THIS stifles progress as well. If results were to be reported whether or not they come out as planned, we'd have a much more complete picture of the evidence for and against the ideas. Julia's story also resonates with me; we've all reviewed papers where we've thought "if only the authors had sought input before running this labor-intensive study, the data would be so much more valuable."
>>
>> The arguments against RRs in this thread appear in my mind to be arguments against *compulsory* RRs for *all* papers in *all* journals, which takes the discussion off course. I have not heard such radical calls. If you don't want to do an RR, then don't do it. But perhaps we can appreciate the goals of RR and see how those goals might be realized with practices that suit our own fields of work.
>>
>> Matt
>>
>> --------------------------------------------------------------
>> Matthew Winn, Au.D., Ph.D.
>> Assistant Professor
>> Dept. of Speech & Hearing Sciences
>> University of Washington
>>
>> ------------------------------
>> The University achieved an overall 5 stars in the QS World University Rankings 2018
>> The University of Stirling is a charity registered in Scotland, number SC 011159.
>>
>> --
>> *Leslie R. Bernstein, Ph.D.* | Professor
>> Depts. of Neuroscience and Surgery (Otolaryngology) | UConn School of Medicine
>> 263 Farmington Avenue, Farmington, CT 06030-3401
>> Office: 860.679.4622 | Fax: 860.679.2495
=3D"font-size:small;background-color:rgb(255,255,255);text-decoration-style= :initial;text-decoration-color:initial;float:none;display:inline">=C2=A0cli= nical trials (e.g.<span>=C2=A0</span></span><a href=3D"http://clinicaltrail= s.gov/" target=3D"_blank" style=3D"color:rgb(17,85,204);font-size:small;bac= kground-color:rgb(255,255,255)">clinicaltrails.gov</a><span style=3D"font-s= ize:small;background-color:rgb(255,255,255);text-decoration-style:initial;t= ext-decoration-color:initial;float:none;display:inline"><span>=C2=A0</span>= in the US), which is a big step forward in many ways.=C2=A0</span> The problem with RRs as presented is how they link with future publication= s.=C2=A0 The way it is described in the letter Tim shared can=20 be interpreted in a number of ways by different reviewers and editors, just= like it was on this list.=C2=A0 i.e.=C2=A0&quot; High quality pre-register= ed protocols that meet strict editorial criteria are then offered in princi= ple acceptance, which guarantees publication of the results provided author= s adhere to their pre-registered protocol, and provided various pre-specifi= ed quality standards are achieved in the final outcome.&quot;=C2=A0 There seems to be plenty of room in this sentence for most of the biases to= creep right back in, during either the initial or the second stage of peer= review.=C2=A0 As an alternative, why not have a system where authors can o= ptionally preregister their protocols and perhaps choose to open it up for = comments from their peers?=C2=A0 When a final manuscript is submitted for p= ublication this protocol can be referenced, providing continuity and making= any changes in thinking and doing the study transparent.=C2=A0 In this cas= e there would still be only a single peer review, and perhaps some acute em= barrassment of missing something very obvious in retrospect. 
Having to thin= k through the key method and analysis questions during the preregistration = process may encourage those who submit to address some of the potential pro= blems with interpreting negative results or having underpowered studies.=C2= =A0 Perhaps that can be the middle ground.<div><br><div>Cheers,</div><div><= br></div><div>Valeriy</div><div><br></div><div>=C2=A0<br><div><br></div><di= v><br></div><div><br></div></div></div></div></div><div class=3D"gmail_extr= a"><br><div class=3D"gmail_quote">On Tue, Jun 12, 2018 at 4:56 AM, Bastiaan= van der Weij <span dir=3D"ltr">&lt;<a href=3D"mailto:bjvanderweij@xxxxxxxx= m" target=3D"_blank">bjvanderweij@xxxxxxxx</a>&gt;</span> wrote:<br><block= quote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc= solid;padding-left:1ex"><div dir=3D"ltr"><div style=3D"text-decoration-sty= le:initial;text-decoration-color:initial;font-size:small">Dear all,</div><d= iv style=3D"text-decoration-style:initial;text-decoration-color:initial;fon= t-size:small"><br></div><div style=3D"text-decoration-style:initial;text-de= coration-color:initial;font-size:small">My experience with conducting exper= imental research is very limited, but with this hedge in place, maybe the f= ollowing perspective on some of the points raised in this interesting threa= d is of some use.</div><span class=3D""><div style=3D"text-decoration-style= :initial;text-decoration-color:initial;font-size:small">=C2=A0</div><blockq= uote class=3D"gmail_quote" style=3D"text-decoration-style:initial;text-deco= ration-color:initial;font-size:small;margin:0px 0px 0px 0.8ex;border-left:1= px solid rgb(204,204,204);padding-left:1ex"><span style=3D"font-size:12.8px= ;background-color:rgb(255,255,255);text-decoration-style:initial;text-decor= ation-color:initial;float:none;display:inline">I think that if the scientif= ic question is well formed and well motivated AND the methods sound and app= ropriate for addressing the question, then whatever the 
result may be, this= seems like a good experiment and one that should be published.=C2=A0</span= ><br></blockquote><div style=3D"text-decoration-style:initial;text-decorati= on-color:initial;font-size:small"><br></div></span><div style=3D"text-decor= ation-style:initial;text-decoration-color:initial;font-size:small">Isn&#39;= t this precisely what registered reports aim to achieve? The underlying ass= umption is that in the current system, whether the results of a study are s= ignificant affects the likelihood that the study will be published. I think= this discussion is not so much about the integrity of individual researche= rs and reviewers as it is about the incentives inherent in publishing and a= cademia in general.</div><div style=3D"text-decoration-style:initial;text-d= ecoration-color:initial;font-size:small">=C2=A0</div><p class=3D"MsoNormal"= style=3D"margin:0px;font-size:12.8px;text-decoration-style:initial;text-de= coration-color:initial">In theory, the rate of Type I errors should be smal= ler or equal to the used significance level, but among published findings i= t isn&#39;t. 
This is arguably problematic, although perhaps not for well-ed= ucated readers who have been taught never to believe a single study (on the= other hand, not all journalists have been taught the same lesson).=C2=A0<s= pan style=3D"background-color:rgb(255,255,255);text-decoration-style:initia= l;text-decoration-color:initial;float:none;display:inline">There may be man= y reasons for an elevated Type I error rate, including the points raised by= Roger.<span>=C2=A0</span></span><span style=3D"background-color:rgb(255,25= 5,255);text-decoration-style:initial;text-decoration-color:initial;float:no= ne;display:inline">Whether you believe it&#39;s likely registered reports a= lleviate the problem (if you accept there is a problem) depends on the caus= es you attribute to the problem.<span>=C2=A0</span></span>It seems plausibl= e that some of these causes are so-called &quot;p-hacking&quot; practices a= nd=C2=A0a bias towards significant results=C2=A0<span style=3D"background-c= olor:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:i= nitial;float:none;display:inline">in publishing<span>=C2=A0</span></span><s= pan>=C2=A0</span>(if out of a set of equally well-designed studies, the one= s with significant results are more likely to be published, the Type I erro= r rate among the published studies will be elevated), both of which may res= ult from perfectly honest research and reviewing combined with the wrong in= centives. 
Registered reports cannot address all ways of gaming the system, = but will likely reduce the incentive for p-hacking and eliminate the bias t= owards significant results among published (pre-registered) findings.</p><p= class=3D"MsoNormal" style=3D"margin:0px;font-size:12.8px;text-decoration-s= tyle:initial;text-decoration-color:initial"><br></p><p class=3D"MsoNormal" = style=3D"margin:0px;font-size:12.8px;text-decoration-style:initial;text-dec= oration-color:initial">As has already been stressed, this is not to say tha= t all studies should be pre-registered or that only pre-registered studies = should be taken seriously, but seeing that a study has been pre-registered,= even if pre-registering a study is voluntary and rare, helps the reader as= sess its results. That is in addition to the potential benefits Julia point= ed out of receiving peer-review feedback on your methods alone in addition = to peer-review feedback on your results and interpretation of the results l= ater.</p><p class=3D"MsoNormal" style=3D"margin:0px;font-size:12.8px;text-d= ecoration-style:initial;text-decoration-color:initial"><br></p><p class=3D"= MsoNormal" style=3D"margin:0px;font-size:12.8px;text-decoration-style:initi= al;text-decoration-color:initial">On the other hand, those who do have the = kind of<span>=C2=A0</span><span style=3D"font-size:small;background-color:r= gb(255,255,255);text-decoration-style:initial;text-decoration-color:initial= ;float:none;display:inline">getting-your-hands-dirty</span><span>=C2=A0</sp= an>ex<wbr>perience with empirical research and statistics in the wild that = I lack might agree with Les that, in practice, only very few or very uninte= resting studies would qualify to benefit from being pre-registered. 
Then ag= ain, maybe that is how it should be: findings that we can be truly confiden= t about are few and boring.</p><p class=3D"MsoNormal" style=3D"margin:0px;f= ont-size:12.8px;text-decoration-style:initial;text-decoration-color:initial= "><br></p><p class=3D"MsoNormal" style=3D"margin:0px;font-size:12.8px;text-= decoration-style:initial;text-decoration-color:initial">Best wishes,</p><p = class=3D"MsoNormal" style=3D"margin:0px;font-size:12.8px;text-decoration-st= yle:initial;text-decoration-color:initial">Bastiaan</p><div><div class=3D"h= 5"><br><div class=3D"gmail_extra"><br><div class=3D"gmail_quote">On Mon, Ju= n 11, 2018 at 4:55 PM, Les Bernstein <span dir=3D"ltr">&lt;<a href=3D"mailt= o:lbernstein@xxxxxxxx" target=3D"_blank">lbernstein@xxxxxxxx</a>&gt;</span>= wrote:<br><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;bor= der-left:1px #ccc solid;padding-left:1ex"> =20 =20 =20 <div text=3D"#000000" bgcolor=3D"#FFFFFF"> <font size=3D"-1"><font face=3D"Verdana">I agree with Ken and Roger.=C2= =A0 It&#39;s neither clear that the current system falls short nor that RRs would, effectively, solve any such problem.=C2=A0 To the degree there is a problem, I fail to see how making RRs VOLUNTARY would serve as an effective remedy or, voluntary or not, serve to increase &quot;standards of publication.&quot;=C2=A0 If people wish= to have the option, that sounds benign enough, save for the extra work required of reviewers.<br> <br> As suggested by Matt, I tried to think of the &quot;wasted hours spent by investigators who repeat the failed methods of their peers and predecessors, only because the outcomes of failed experiments were never published.&quot;=C2=A0 Across the span of my career, for me and for those with whom I&#39;ve worked, I can&#39;t identify that such wasted hours have been spent. 
As Ken notes, well-formed, well-motivated experiments employing sound methods should be (and are) published.<br> <br> Likewise, re Matt&#39;s comments, I cannot recall substantial instances of scientists &quot;who cling to theories based on initia= l publications of work that later fails replication, but where those failed replications never get published.&quot;=C2=A0 Au contr= aire.=C2=A0 I can think of a quite a few cases in which essential replication failed, those findings were published, and the field was advanced.=C2=A0 I don&#39;t believe that it is the case that ma= ny of us are clinging to theories that are invalid but for the publication of failed replications.=C2=A0 Theories gain status via converging evidence.<br> <br> It seems to me that for what some are arguing would, essentially, be an auditory version of The Journal of Negative Results (<a class=3D"m_-8841024088909575150m_1003477720667729248m_-60734171= 93203004144moz-txt-link-freetext" href=3D"https://en.wikipedia.org/wiki/Jou= rnal_of_Negative_Results_in_Biomedicine" target=3D"_blank">https://en.wikip= edia.org/wiki<wbr>/Journal_of_Negative_Results_i<wbr>n_Biomedicine</a>).<br= > <br> Still, if some investigators wish to have the RR option and journals are willing to offer it, then, by all means, have at it.=C2=A0 The proof of the pudding will be in the tasting.<span cla= ss=3D"m_-8841024088909575150m_1003477720667729248HOEnZb"><font color=3D"#88= 8888"><br> <br> Les<br> </font></span></font></font><div><div class=3D"m_-8841024088909575150= m_1003477720667729248h5"><br> <br> <div class=3D"m_-8841024088909575150m_1003477720667729248m_-60734171932= 03004144moz-cite-prefix">On 6/9/2018 5:13 AM, Roger Watt wrote:<br> </div> <blockquote type=3D"cite"> =20 =20 =20 <div class=3D"m_-8841024088909575150m_1003477720667729248m_-607341719= 3203004144WordSection1"> <p class=3D"MsoNormal"><span>3 points:<u></u><u></u></span></p> <p class=3D"MsoNormal"><span><u></u>=C2=A0<u></u></span></p> <p 
class=3D"MsoNormal"><span>1. The issue of RR is tied up with the logic of null hypothesis testing. There are only two outcomes for null hypothesis testing: (i) a tentative conclusion that the null hypothesis should be regarded as inconsistent with the data and (ii) no conclusion about the null hypothesis can be reached from the data. Neither outcome refers to the alternative hypothesis, which is never tested. A nice idea in the literature is the counter-null. If I have a sample of 42 and an effect size of 0.2 (r-family), then my result is not significant: it is not inconsistent with a population effect size of 0. It is equally not inconsistent with the counter-null, a population effect size of ~0.4. It is less inconsistent with all population effect sizes in between the null and the counter-null. (NHST forces all these double negatives).<u></u><= u></u></span></p> <p class=3D"MsoNormal"><span><u></u>=C2=A0<u></u></span></p> <p class=3D"MsoNormal"><span>2. The current system of publish when p&lt;0.05 is easy to game, hence all the so-called questionable practices. Any new system, like RR, will in due course become easy to game. By a long shot, the easiest (invalid) way to get an inflated effect size and an inappropriately small p is to test more participants than needed and keep only the =E2=80=9Cbest=E2=80= =9D ones. RR will not prevent that.<u></u><u></u></span></p> <p class=3D"MsoNormal"><span><u></u>=C2=A0<u></u></span></p> <p class=3D"MsoNormal"><span>3. NHST assumes random sampling, which no-one achieves. The forms of sampling we use in reality are all possibly subject to issues of non-independence of participants which leads to Type I error rates (false positives) that are well above 5%. <u></u><u></u></span></p> <p class=3D"MsoNormal"><span><u></u>=C2=A0<u></u></span></p> <p class=3D"MsoNormal"><span>None of this is to argue against RR, just to observe that it doesn=E2=80=99t resolve many of the current problems. 
Any claim= that it does, is in itself a kind of Type I error and Type I errors are very difficult to eradicate once accepted.<u></u><u>= </u></span></p> <p class=3D"MsoNormal"><span><u></u>=C2=A0<u></u></span></p> <p class=3D"MsoNormal"><span>Roger Watt<u></u><u></u></span></p> <p class=3D"MsoNormal"><span>Professor of Psychology<u></u><u></u></span></p> <p class=3D"MsoNormal"><span>University of Stirling<u></u><u></u></span></p> <p class=3D"MsoNormal"><a name=3D"m_-8841024088909575150_m_10034777= 20667729248_m_-6073417193203004144__MailEndCompose"><span><u></u>=C2=A0<u><= /u></span></a></p> <span></span> <div> <div style=3D"border:none;border-top:solid #e1e1e1 1.0pt;padding:= 3.0pt 0cm 0cm 0cm"> <p class=3D"MsoNormal"><b><span lang=3D"EN-US">From:</span></b>= <span lang=3D"EN-US"> AUDITORY - Research in Auditory Perception [<a class=3D"m_-8841024088909575150m_1003477720667729248m_-= 6073417193203004144moz-txt-link-freetext" href=3D"mailto:AUDITORY@xxxxxxxx= ILL.CA" target=3D"_blank">mailto:AUDITORY@xxxxxxxx<wbr>CA</a>] <b>On Behalf Of </b>Ken Grant<br> <b>Sent:</b> 09 June 2018 06:19<br> <b>To:</b> <a class=3D"m_-8841024088909575150m_100347772066= 7729248m_-6073417193203004144moz-txt-link-abbreviated" href=3D"mailto:AUDIT= ORY@xxxxxxxx" target=3D"_blank">AUDITORY@xxxxxxxx</a><br> <b>Subject:</b> Re: Registered reports<u></u><u></u></span>= </p> </div> </div> <p class=3D"MsoNormal"><u></u>=C2=A0<u></u></p> <p class=3D"MsoNormal">Why aren=E2=80=99t these =E2=80=9Cfailed=E2= =80=9D experiments published? 
What=E2=80=99s the definition of a failed experiment anyway.=C2=A0<u></u><u></u></p> <div> <p class=3D"MsoNormal"><u></u>=C2=A0<u></u></p> </div> <div> <p class=3D"MsoNormal" style=3D"margin-bottom:12.0pt">I think tha= t if the scientific question is well formed and well motivated AND the methods sound and appropriate for addressing the question, then whatever the result may be, this seems like a good experiment and one that should be published.=C2=A0<u></u><= u></u></p> <div id=3D"m_-8841024088909575150m_1003477720667729248m_-60734171= 93203004144AppleMailSignature"> <p class=3D"MsoNormal">Sent from my iPhone<u></u><u></u></p> <div> <p class=3D"MsoNormal"><span style=3D"font-size:13.0pt">Ken W= . Grant, PhD</span><u></u><u></u></p> </div> <div> <p class=3D"MsoNormal"><span style=3D"font-size:13.0pt">Chief= , Scientific and Clinical Studies</span><u></u><u></u></p> </div> <div> <p class=3D"MsoNormal"><span style=3D"font-size:13.0pt">Natio= nal Military Audiology and Speech-Pathology Center (NMASC)</s= pan><u></u><u></u></p> </div> <div> <p class=3D"MsoNormal"><span style=3D"font-size:13.0pt">Walte= r Reed National Military Medical Center</span><u></u><u></u= ></p> </div> <div> <div> <p class=3D"MsoNormal"><span style=3D"font-size:13.0pt">Bet= hesda, MD 20889</span><u></u><u></u></p> </div> <div> <p class=3D"MsoNormal"><a href=3D"mailto:kenneth.w.grant.ci= v@xxxxxxxx" target=3D"_blank">kenneth.w.grant.civ@xxxxxxxx</a><u></u><u></u= ></p> </div> <div> <p class=3D"MsoNormal"><a href=3D"mailto:ken.w.grant@xxxxxxxx= com" target=3D"_blank">ken.w.grant@xxxxxxxx</a><u></u><u></u></p> </div> <div> <p class=3D"MsoNormal"><span style=3D"font-size:13.0pt">Off= ice: =C2=A0301-319-7043</span><u></u><u></u></p> </div> <div> <p class=3D"MsoNormal"><span style=3D"font-size:13.0pt">Cel= l: =C2=A0301-919-2957</span><u></u><u></u></p> </div> <div> <div> <p class=3D"MsoNormal"><u></u>=C2=A0<u></u></p> <div> <p class=3D"MsoNormal"><u></u>=C2=A0<u></u></p> <div> <p 
On Jun 9, 2018, at 12:48 AM, Matthew Winn <mwinn2@xxxxxxxx> wrote:

The view that RRs will stifle progress is both true and false. While the increased load of advance registration and rigidity in methods would, as Les points out, become burdensome for most of our basic work, there is another side to this. This is not a matter of morals (hiding a bad result, or fabricating a good result) or of how to do our experiments. It's a matter of the standards of *publication*, which you will notice was the scope of Tim's original call to action. In general, we only ever read about experiments that came out well (and not the ones that didn't). If there is a solution to that problem, then we should consider it, or at least acknowledge that some solution might be needed. This is partly the culture of scientific journals, and partly the culture of the institutions that employ us. There's no need to question anybody's integrity in order to appreciate some benefit of RRs.

Think for a moment about the hours wasted by investigators who repeat the failed methods of their peers and predecessors, only because the outcomes of failed experiments were never published. Or those of us who cling to theories based on initial publications of work that later fails replication, but where those failed replications never get published. THIS stifles progress as well. If results were reported whether or not they came out as planned, we'd have a much more complete picture of the evidence for and against the ideas.
Julia's story also resonates with me; we've all reviewed papers where we've thought, "if only the authors had sought input before running this labor-intensive study, the data would be so much more valuable."

The arguments against RRs in this thread appear, to my mind, to be arguments against *compulsory* RRs for *all* papers in *all* journals, which takes the discussion off course. I have not heard such radical calls. If you don't want to do an RR, then don't do it. But perhaps we can appreciate the goals of RRs and see how those goals might be realized with practices that suit our own fields of work.

Matt

--------------------------------------------------------------
Matthew Winn, Au.D., Ph.D.
Assistant Professor
Dept. of Speech & Hearing Sciences
University of Washington

The University achieved an overall 5 stars in the QS World University Rankings 2018
The University of Stirling is a charity registered in Scotland, number SC 011159.

--
Leslie R. Bernstein, Ph.D. | Professor
Depts. of Neuroscience and Surgery (Otolaryngology) | UConn School of Medicine
263 Farmington Avenue, Farmington, CT 06030-3401
Office: 860.679.4622 | Fax: 860.679.2495


This message came from the mail archive
src/postings/2018/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University