
Re: [AUDITORY] Registered reports



Dear all,

My experience with conducting experimental research is very limited, but with this hedge in place, maybe the following perspective on some of the points raised in this interesting thread is of some use.
 
> I think that if the scientific question is well formed and well motivated AND the methods sound and appropriate for addressing the question, then whatever the result may be, this seems like a good experiment and one that should be published.

Isn't this precisely what registered reports aim to achieve? The underlying assumption is that in the current system, whether the results of a study are significant affects the likelihood that the study will be published. I think this discussion is not so much about the integrity of individual researchers and reviewers as it is about the incentives inherent in publishing and academia in general.
 

In theory, the Type I error rate should be no larger than the significance level used, but among published findings it isn't. This is arguably problematic, although perhaps not for well-educated readers who have been taught never to believe a single study (on the other hand, not all journalists have been taught the same lesson). There may be many reasons for an elevated Type I error rate, including the points raised by Roger. Whether you believe registered reports are likely to alleviate the problem (if you accept there is a problem) depends on what you take its causes to be. It seems plausible that some of those causes are so-called "p-hacking" practices and a bias towards significant results in publishing (if, out of a set of equally well-designed studies, the ones with significant results are more likely to be published, the Type I error rate among the published studies will be elevated), both of which may result from perfectly honest research and reviewing combined with the wrong incentives. Registered reports cannot address every way of gaming the system, but they will likely reduce the incentive for p-hacking and eliminate the bias towards significant results among published (pre-registered) findings.
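To make the point in parentheses concrete, here is a toy simulation in Python (my own sketch, using numpy and scipy; the publication probabilities are made up purely for illustration). Every study is run and analysed honestly under a true null, yet because significant results are more likely to be published, the Type I error rate among published studies ends up well above the nominal 5%:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_studies, n_per_study = 10_000, 30
    # Assumed publication probabilities (made up for this illustration):
    p_pub_sig, p_pub_ns = 0.9, 0.2

    published = significant_published = 0
    for _ in range(n_studies):
        data = rng.normal(0.0, 1.0, n_per_study)   # the null is true in every study
        _, p = stats.ttest_1samp(data, 0.0)
        sig = p < 0.05
        if rng.random() < (p_pub_sig if sig else p_pub_ns):
            published += 1
            significant_published += sig
    print("Type I rate among all studies: ~5%")
    print(f"Type I rate among published studies: {significant_published / published:.1%}")

With these (invented) numbers, roughly one in five published findings is a false positive, even though no individual researcher did anything wrong.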


As has already been stressed, this is not to say that all studies should be pre-registered or that only pre-registered studies should be taken seriously. But seeing that a study has been pre-registered, even if pre-registration is voluntary and rare, helps the reader assess its results. That is on top of the benefit Julia pointed out: receiving peer-review feedback on your methods on their own, before the later round of feedback on your results and their interpretation.


On the other hand, those who do have the kind of getting-your-hands-dirty experience with empirical research and statistics in the wild that I lack might agree with Les that, in practice, only very few or very uninteresting studies would qualify to benefit from being pre-registered. Then again, maybe that is how it should be: findings that we can be truly confident about are few and boring.


Best wishes,

Bastiaan



On Mon, Jun 11, 2018 at 4:55 PM, Les Bernstein <lbernstein@xxxxxxxx> wrote:
I agree with Ken and Roger.  It's neither clear that the current system falls short nor that RRs would, effectively, solve any such problem.  To the degree there is a problem, I fail to see how making RRs VOLUNTARY would serve as an effective remedy or, voluntary or not, serve to increase "standards of publication."  If people wish to have the option, that sounds benign enough, save for the extra work required of reviewers.

As suggested by Matt, I tried to think of the "wasted hours spent by investigators who repeat the failed methods of their peers and predecessors, only because the outcomes of failed experiments were never published."  Across the span of my career, for me and for those with whom I've worked, I can't identify that such wasted hours have been spent. As Ken notes, well-formed, well-motivated experiments employing sound methods should be (and are) published.

Likewise, re Matt's comments, I cannot recall substantial instances of scientists "who cling to theories based on initial publications of work that later fails replication, but where those failed replications never get published."  Au contraire.  I can think of quite a few cases in which essential replication failed, those findings were published, and the field was advanced.  I don't believe that many of us are clinging to theories that remain standing only because failed replications went unpublished.  Theories gain status via converging evidence.

It seems to me that what some are arguing for would, essentially, be an auditory version of The Journal of Negative Results (https://en.wikipedia.org/wiki/Journal_of_Negative_Results_in_Biomedicine).

Still, if some investigators wish to have the RR option and journals are willing to offer it, then, by all means, have at it.  The proof of the pudding will be in the tasting.

Les


On 6/9/2018 5:13 AM, Roger Watt wrote:

3 points:

 

1. The issue of RR is tied up with the logic of null hypothesis testing. There are only two outcomes for null hypothesis testing: (i) a tentative conclusion that the null hypothesis should be regarded as inconsistent with the data and (ii) no conclusion about the null hypothesis can be reached from the data. Neither outcome refers to the alternative hypothesis, which is never tested. A nice idea in the literature is the counter-null. If I have a sample of 42 and an effect size of 0.2 (r-family), then my result is not significant: it is not inconsistent with a population effect size of 0. It is equally not inconsistent with the counter-null, a population effect size of ~0.4. It is less inconsistent with all population effect sizes in between the null and the counter-null. (NHST forces all these double negatives).
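For concreteness, a rough sketch in Python of the n = 42, r = 0.2 example, using the Fisher z approximation for a correlation (purely illustrative; the counter-null comes out at about 0.4, and the data are exactly as (in)consistent with it as with zero):

    import math
    from scipy.stats import norm

    n, r = 42, 0.2
    z_obs = math.atanh(r)            # Fisher z transform of the observed correlation
    se = 1.0 / math.sqrt(n - 3)      # approximate standard error of Fisher z

    # Test against the null (rho = 0): not significant
    p_null = 2 * norm.sf(z_obs / se)

    # Counter-null: the effect size the data fit exactly as well (or badly) as rho = 0
    counternull = math.tanh(2 * z_obs)                    # ~0.4
    p_counter = 2 * norm.sf(abs(z_obs - 2 * z_obs) / se)  # same distance, same p

    print(f"p vs. null: {p_null:.3f}")
    print(f"counter-null r: {counternull:.2f}, p vs. counter-null: {p_counter:.3f}")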

 

2. The current system of "publish when p < 0.05" is easy to game, hence all the so-called questionable practices. Any new system, like RR, will in due course become easy to game as well. By a long shot, the easiest (invalid) way to get an inflated effect size and an inappropriately small p is to test more participants than needed and keep only the “best” ones. RR will not prevent that.
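As a purely illustrative sketch of how strong that effect is (the numbers are made up): test 60 participants under a true null, keep only the 40 "best", and a one-sample t-test comes out "significant" almost every time.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_recruited, n_kept, reps = 60, 40, 2000         # assumed numbers
    false_positives = 0
    for _ in range(reps):
        scores = rng.normal(0.0, 1.0, n_recruited)   # the true effect is exactly zero
        kept = np.sort(scores)[-n_kept:]             # keep only the "best" participants
        t, p = stats.ttest_1samp(kept, 0.0)
        false_positives += (p < 0.05) and (t > 0)
    print(f"'Significant' positive effects under a true null: {false_positives / reps:.0%}")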

 

3. NHST assumes random sampling, which no-one achieves. The forms of sampling we use in reality are all potentially subject to non-independence of participants, which can lead to Type I error rates (false positives) well above 5%.
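Again purely as an illustration (the cluster structure and numbers are assumed): participants recruited in clusters share a common offset, a two-sample t-test ignores that non-independence, and the false-positive rate ends up far above the nominal 5%.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    reps, clusters_per_group, per_cluster = 5000, 5, 8   # assumed design

    def sample_group():
        # Each cluster shares a common offset, so observations within a
        # group are not independent of one another.
        offsets = rng.normal(0.0, 1.0, clusters_per_group)
        return np.concatenate([rng.normal(o, 1.0, per_cluster) for o in offsets])

    false_positives = 0
    for _ in range(reps):
        a, b = sample_group(), sample_group()   # no true difference between groups
        _, p = stats.ttest_ind(a, b)            # the test treats all observations as independent
        false_positives += p < 0.05
    print(f"Type I error rate: {false_positives / reps:.1%}")   # well above 5%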

 

None of this is to argue against RR, just to observe that it doesn’t resolve many of the current problems. Any claim that it does is in itself a kind of Type I error, and Type I errors are very difficult to eradicate once accepted.

 

Roger Watt

Professor of Psychology

University of Stirling

 

From: AUDITORY - Research in Auditory Perception [mailto:AUDITORY@LISTS.MCGILL.CA] On Behalf Of Ken Grant
Sent: 09 June 2018 06:19
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: Registered reports

 

Why aren’t these “failed” experiments published? What’s the definition of a failed experiment, anyway?

 

I think that if the scientific question is well formed and well motivated AND the methods sound and appropriate for addressing the question, then whatever the result may be, this seems like a good experiment and one that should be published. 

Sent from my iPhone

Ken W. Grant, PhD

Chief, Scientific and Clinical Studies

National Military Audiology and Speech-Pathology Center (NMASC)

Walter Reed National Military Medical Center

Bethesda, MD 20889

Office:  301-319-7043

Cell:  301-919-2957

 

 

 


On Jun 9, 2018, at 12:48 AM, Matthew Winn <mwinn2@xxxxxx> wrote:

The view that RRs will stifle progress is both true and false. While the increased load of advance registration and rigidity in methods would, as Les points out, become burdensome for most of our basic work, there is another side to this. This is not a matter of morals (hiding a bad result, or fabricating a good result) or of how we do our experiments. It’s a matter of the standards of *publication*, which you will notice was the scope of Tim’s original call to action. In general, we only ever read about experiments that came out well (and not the ones that didn’t). If there is a solution to that problem, then we should consider it, or at least acknowledge that some solution might be needed. This is partly the culture of scientific journals, and partly the culture of the institutions that employ us. There's no need to question anybody's integrity in order to appreciate some benefit of RRs.

Think for a moment about the number of wasted hours spent by investigators who repeat the failed methods of their peers and predecessors, only because the outcomes of failed experiments were never published. Or about those of us who cling to theories based on initial publications of work that later fails replication, but where those failed replications never get published. THIS stifles progress as well. If results were reported whether or not they came out as planned, we’d have a much more complete picture of the evidence for and against the ideas. Julia's story also resonates with me; we've all reviewed papers where we've thought "if only the authors had sought input before running this labor-intensive study, the data would be so much more valuable."

The arguments against RRs in this thread seem to me to be arguments against *compulsory* RRs for *all* papers in *all* journals, which takes the discussion off course. I have not heard such radical calls. If you don’t want to do an RR, then don’t do it. But perhaps we can appreciate the goals of RR and see how those goals might be realized with practices that suit our own fields of work.

Matt

 

--------------------------------------------------------------

Matthew Winn, Au.D., Ph.D.
Assistant Professor
Dept. of Speech & Hearing Sciences
University of Washington

 




--
Leslie R. Bernstein, Ph.D. | Professor
Depts. of Neuroscience and Surgery (Otolaryngology)| UConn School of Medicine

263 Farmington Avenue, Farmington, CT 06030-3401
Office: 860.679.4622 | Fax: 860.679.2495