Hi Nilesh,
I agree to a certain extent, but I do feel that registered reporting
makes sense for 'close to product' trials and for trials that lead to
treatments (for example, the evaluation of a fitting algorithm). In fact,
it should not really be 'double work' as you fear: if you execute a
poor trial and then try to get it published (believe me, it happens :-) ),
it gets rejected and you basically have no option but to redo (part of)
the work. (And re-writing the text to get a poor trial accepted for
publication is of course exactly what you don't want...) That is more
double work than writing up a good trial proposal, having it reviewed,
and then knowing that if you execute according to plan it is likely to
get published even if the results are negative or inconclusive; that
could be an advantage as well.
Best wishes,
Bas
Bas Van Dijk
Program Manager, A&A - Clinician and Research Tools
Cochlear Technology Centre Belgium
Schaliënhoevedreef 20 I
2800 Mechelen
BELGIUM
Phone: +3215795528
Mobile: +32473976270
Email: BVanDijk@xxxxxxxxxxxx
www.cochlear.com
-----Original Message-----
From: AUDITORY - Research in Auditory Perception
[mailto:AUDITORY@xxxxxxxxxxxxxxx] On Behalf Of Nilesh Madhu
Sent: Tuesday, 5 June 2018 13:16
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: [AUDITORY] Registered reports
Dear Tim,
I appreciate your initiative towards reproducible research. However, I
fear that registered reports would just add another layer of overhead
to academics and students already under pressure to publish. If I
understand correctly, this involves two stages of review: a first
review based on the methodology and evaluation, and a second based on
the results of the research. For each stage, probably at least two
review rounds would be needed (going by the current publishing cycle).
I fear, as Gaston does, that this might stifle creativity and also lead
to overwork for reviewers and editors. Of course, this is assuming
you want to make registered reports compulsory...
Furthermore, such an approach may not be equally applicable to all
research. For research into algorithms, for example, the value of the
research usually lies in the core idea. There are myriad accepted
forms of evaluation, and forcing a strict evaluation
pattern/methodology would be counterproductive. Reproducible research
in this case is targeted by encouraging authors to make their code and
test data public.
What I would support are (voluntary) guidelines on reporting the results
of experiments. This is often found in the engineering field,
for example when one participates in an open challenge.
Lastly, the main reason for this initiative is to avoid 'mis-reporting'
results in favour of a hypothesis. Surely, this calls for
self-policing? Aren't we, as researchers, possessed of sufficient integrity
and ethics to present our research in the correct light? If this core
value is missing, I fear no external policing is going to help.
Best regards
Nilesh Madhu