Re: [AUDITORY] Registered reports (Matthew Winn )


Subject: Re: [AUDITORY] Registered reports
From:    Matthew Winn  <mwinn2@xxxxxxxx>
Date:    Fri, 8 Jun 2018 21:48:58 -0700
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

The view that RRs will stifle progress is both true and false. While the increased load of advance registration and rigidity in methods would, as Les points out, become burdensome for most of our basic work, there is another side to this. This is not a matter of morals (hiding a bad result, or fabricating a good result) or of how we do our experiments. It's a matter of the standards of *publication*, which you will notice was the scope of Tim's original call to action. In general, we only ever read about experiments that came out well (and not the ones that didn't). If there is a solution to that problem, then we should consider it, or at least acknowledge that some solution might be needed. This is partly the culture of scientific journals, and partly the culture of the institutions that employ us. There's no need to question anybody's integrity in order to appreciate some benefit of RRs.

Think for a moment about the wasted hours spent by investigators who repeat the failed methods of their peers and predecessors, only because the outcomes of failed experiments were never published. Or those of us who cling to theories based on initial publications of work that later fails replication, but where those failed replications never get published. THIS stifles progress as well. If results were reported whether or not they came out as planned, we'd have a much more complete picture of the evidence for and against the ideas. Julia's story also resonates with me; we've all reviewed papers where we've thought "if only the authors had sought input before running this labor-intensive study, the data would be so much more valuable."

The arguments against RRs in this thread appear to me to be arguments against *compulsory* RRs for *all* papers in *all* journals, which takes the discussion off course.
I have not heard such radical calls. If you don't want to do a RR, then don't do it. But perhaps we can appreciate the goals of RRs and see how those goals might be realized with practices that suit our own fields of work.

Matt

--------------------------------------------------------------
Matthew Winn, Au.D., Ph.D.
Assistant Professor
Dept. of Speech & Hearing Sciences
University of Washington


This message came from the mail archive
src/postings/2018/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University