Dear all,
My experience with conducting experimental research is very limited, but with this hedge in place, maybe the following perspective on some of the points raised in this interesting thread is of some use.
I think that if the scientific question is well formed and well motivated AND the methods are sound and appropriate for addressing the question, then whatever the result may be, this seems like a good experiment and one that should be published.
Isn't this precisely what registered reports aim to achieve? The underlying assumption is that in the current system, whether the results of a study are significant affects the likelihood that the study will be published. I think this discussion is not so much about the integrity of individual researchers and reviewers as it is about the incentives inherent in publishing and academia in general.
In theory, the rate of Type I errors should be smaller than or equal to the significance level used, but among published findings it isn't. This is arguably problematic, although perhaps not for well-educated readers who have been taught never to believe a single study (on the other hand, not all journalists have been taught the same lesson). There may be many reasons for an elevated Type I error rate, including the points raised by Roger. Whether you believe registered reports are likely to alleviate the problem (if you accept there is a problem) depends on the causes you attribute to it. It seems plausible that some of these causes are so-called "p-hacking" practices and a bias towards significant results in publishing (if, out of a set of equally well-designed studies, the ones with significant results are more likely to be published, the Type I error rate among the published studies will be elevated), both of which may result from perfectly honest research and reviewing combined with the wrong incentives. Registered reports cannot address all ways of gaming the system, but they will likely reduce the incentive for p-hacking and eliminate the bias towards significant results among published (pre-registered) findings.
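The parenthetical point is easy to check numerically. Here is a toy simulation (my own sketch, not from any study in this thread): every simulated study tests a true null hypothesis, and I assume, purely for illustration, that non-significant results get published only 20% of the time while significant ones always do.

```python
import random

random.seed(0)
ALPHA = 0.05
N_STUDIES = 100_000
NULL_PUBLICATION_RATE = 0.20  # arbitrary assumption for illustration

# Every study tests a true null, so each is "significant"
# (i.e. a Type I error) with probability ALPHA.
significant = [random.random() < ALPHA for _ in range(N_STUDIES)]

# Hypothetical publication policy: significant results are always
# published; null results only with probability NULL_PUBLICATION_RATE.
published = [s for s in significant
             if s or random.random() < NULL_PUBLICATION_RATE]

type1_rate_all = sum(significant) / len(significant)
type1_rate_published = sum(published) / len(published)

print(f"Type I rate among all studies:       {type1_rate_all:.3f}")
print(f"Type I rate among published studies: {type1_rate_published:.3f}")
```

With these made-up numbers, the error rate among all studies stays near the nominal 5%, while among published studies it climbs to roughly ALPHA / (ALPHA + 0.20 × (1 − ALPHA)), about four times higher, even though no individual researcher did anything dishonest.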
As has already been stressed, this is not to say that all studies should be pre-registered or that only pre-registered studies should be taken seriously. But seeing that a study has been pre-registered, even if pre-registering a study is voluntary and rare, helps the reader assess its results. That is on top of the potential benefit Julia pointed out: receiving peer-review feedback on your methods alone, before later receiving feedback on your results and their interpretation.
On the other hand, those who do have the kind of getting-your-hands-dirty experience with empirical research and statistics in the wild that I lack might agree with Les that, in practice, only very few or very uninteresting studies would qualify to benefit from being pre-registered. Then again, maybe that is how it should be: findings that we can be truly confident about are few and boring.
Best wishes,
Bastiaan