Well, we simply don't agree.
Separate RRs for each experiment when one is a follow-up or
control based on the first? No, I see no advantage there and
many disadvantages. I think the distinction between
"confirmatory" and "exploratory" analyses, as described, is
artificial and without merit. Performing an unanticipated
statistical (or other) analysis because it is warranted by the
data is not "exploratory."
I don't think we will ever rid ourselves of all aspects of being
carpenters. Much as many do not wish to admit it, there is art
involved in doing science. I don't mean that in the sense of
subjectivity but in the sense of having intuitive knowledge and
the judgment to define the path to be taken. I believe that the
POSITIVE history of science supports such a view. I hope we are
not facing the change in paradigm you herald. I think much of
value might be lost if we were.
There's nothing new under the sun. Generations have faced the
notion of "publish or perish" in academia. As I see it,
imposing stilted, restricted rules in the name of objectivity
that, in and of themselves, have the potential to degrade
scientific progress is not the answer. Again, the intent is
laudable, but in my view, the proposed solution is fundamentally
flawed.
Thanks for the history; my field is Experimental Psychology. I know the stories well. :-)
Best
from the cloudy and cool east coast of the US.
Les
On 6/6/2018 12:15 PM, Massimo Grassi wrote:
Les,
Thanks for your response. Note that I mentioned a number of issues that I identify as problems and shortcomings and not just a single one. That the "results" section of a Stage 2 submission allows for "exploratory analysis" hardly addresses the issues I raised with regard to hypotheses, follow-up and control experiments, choice of PRIMARY statistical tests, and archival value. Furthermore, the "exploratory" analyses you cite are clearly considered subordinate to the pre-approved "confirmatory analyses." See Nosek and Lakens (2014). As I see it, that's unnecessarily restrictive.
- Multiple experiments. Usually, Experiment 2 follows from the results of Experiment 1 (which, in an RR, are unknown). One solution is to do a registered report for Experiment 1 and a new registered report for Experiment 2.
- Confirmatory vs. exploratory analysis. This is exactly the point. Nowadays we often sell "exploratory" as "confirmatory". Instead, we should make clear what is exploratory and what is not. I think nobody would ignore an interesting exploratory result.
- Archival value. Nobody knows whether the "success rate" of RRs is higher than, equal to, or lower than that of traditional papers; perhaps there are not many data yet. But we do know that the current literature is inflated with false positives (e.g., look at the various replication experiments or at the analyses by Ioannidis et al.). In my opinion, the auditory field is a safe island (at least in comparison to other fields). However, I have no data about it.
Yes, we do "move more like a carpenter
trying to adapt and adjust things
in real time." Registered reports ask that we plan most, if not
all, of
our measurements, cuts, and adjustments in advance. They are
anathema to
the process.
I would like to move from "carpenter" to "engineer" :-) RRs and the other standards now being suggested (e.g., preregistration, Bayesian stats, multi-lab experiments) enable us to do so (in my opinion). In any case, all the journals that are adopting RRs still offer traditional submissions, so everything is preserved.
I agree with Nilesh's comments, especially, "Aren't we, as researchers, possessed of sufficient integrity and ethics to present our research in the correct light? If this core value is missing, I fear no external policing is going to help."
- I don't know. For example, here in Italy (where staff recruiting is screened by number of publications, H-index, and number of citations) researchers are pushed hard in the direction of "publish as much as possible in high-impact-factor journals and get cited a lot, or perish". And in fact a recent editorial in Nature highlighted that here in Italy the number of self-citations is increasing. I'm wondering whether other not-so-nice behaviours are also being adopted. (Interestingly enough, Italy still scores zero (!) for number of scientific frauds. There was a list on Wikipedia; I can't find it, sorry.)
If we look back at the history of psychology (my own field), it looks to me as if we are facing a change in scientific paradigm. From Wundt up to Titchener we were using (and trusting) introspection as a good tool to investigate psychology. Then we had the paper by John Watson (1913, Psychology as the behaviorist views it. Psychological Review, 20(2), 158-177), and in a few years' time introspection was forgotten and neglected. Let's look back at Watson's statement now: it sounds so obvious today ("Introspection forms no essential part of its methods, nor is the scientific value of its data dependent upon the readiness with which they lend themselves to interpretation in terms of consciousness."). My guess is that in a few years' time we will do the same for several of the current research practices.
Apologies for the long email, and all the best from a scorching hot Italy,
m
ps: no carpenter has been killed or injured while writing this email.
--
Leslie R. Bernstein, Ph.D. | Professor
Depts. of Neuroscience and Surgery (Otolaryngology) | UConn School of Medicine
263 Farmington Avenue, Farmington, CT 06030-3401
Office: 860.679.4622 | Fax: 860.679.2495