Thank you Matt for bringing up this topic, and thanks to everybody who articulated their opinions. This is a very interesting debate, which I find particularly enlightening, given that I am one of those people who have chosen to park their manuscript on arXiv for the
foreseeable future.
If I can try to summarize the gist of the opinions, then it seems that every researcher tends to come up with a set of heuristics to try and determine whether a particular publication is worth their time and effort without actually reading it. These include
the reputation of the publication platform, but are additionally influenced greatly by the authors' perceived reputation, their affiliation(s), the level of presentation (aesthetics, language, structure, bibliography, etc.), the type and extent of the claims
made, their novelty, their topicality, and for older papers, the number of citations they received. We would like to think that all these give a pretty good idea of whether a paper is worthy even before reading the abstract. Just like any other endeavor, a
judgment error here can be a false negative - ignoring a good paper, which could have advanced other results and ideas, and could have saved repeating work, or realizing that what you have been working on had been already done by somebody else. The judgment
error here can also be a false positive - giving undeserved attention to an unworthy paper, which may result in a waste of time and money, and escalate to wrongly citing it and basing further false claims upon it - a potential embarrassment. To some, there is a
pedagogical point to make here, since the risk in a false positive is so high that it is also critical to warn others against it.
I'd like to offer another perspective about the role and usefulness of arXiv, as I have personally experienced it, which goes beyond its preprint repository function. As it relates to a specific work, it may not be easily generalizable, although I think it
highlights the shades of grey involved in the process of doing different forms of science.
First, publishing on arXiv has liberated me from adhering to the standard article format and allowed for keeping a more organic structure that made better sense for the writing and topic - neither a book nor an article, a hybrid between theory and experiment,
something that does not clearly belong in any specific journal.
Second, it has allowed for some relaxation of the usual cautiousness that completely refrains from any speculation. While this may be an obvious red flag for some readers, I think it's fair play as long as the act of speculation is clearly stated and the ensuing
logical flow is kept in check.
Third, it has made the question of who can review the material moot. Every reader is a reviewer in their own right and must be able to trust their own judgment. A document that may not be reviewable in the traditional sense because of its length and interdisciplinarity
leaves very limited options for publication. One such option is to publish it as a book or a thesis, if suitable reviewers can be found; many walls can be hit here. Another option is to break it down into multiple papers and send them to different journals, which
would take many years and many hoops to jump through (a good example is de Boer's "Auditory physics" trilogy from 1980, 1984 and 1991, although I don't know the back story of this series). The benefit of going through this usual process may be the increase in trust in
the relevance and correctness of the material on the part of readers, who can also enjoy a better presentation (fewer errors, better focus, etc.). The cost of adhering to the traditional format would be many years of delay and a loss of precision in
the message as I envision it and would like to communicate it. It may also mean the loss of precedence if someone else comes up with similar ideas at the same time - not at all an uncommon thing in the history of science (e.g., Darwin and Wallace).
The alternative was to use arXiv for publication (it could have been another repository). Critically, it provides an agreed-upon stamp of authorship with a publication date. At the very least, it has non-zero reputation in several scientific fields, there is
very rudimentary control by its staff of what goes into it, initial author affiliation (or reference by affiliated people) is required, and it allows for version updates. More importantly, it relies on trust in the judgment of the few that would be willing
to invest time in reading the manuscript, so they can decide for themselves whether it is a worthwhile piece or one that should never have seen the light of day and be forgotten. I believe it is a more adult way to treat readers, who should be capable
of assessing the quality of the work after decades of education, without being prescribed a nominal map of where "bad science" necessarily lies that must be avoided at all costs.
Whichever strategy of reading and publishing is embraced, there is going to be no one-rule-fits-all here, and every scholar has to be comfortable with their own choices, obviously. All have clear merits and none is completely infallible.
Adam.
Thank you Dan, Alain and everyone else for this important debate. I
think it's essential that we, as a field, have a constructive debate
about publishing models, because it feels like the current model of
for-profit publishing is unsustainable and will hopefully be replaced by
something better.
I agree with most of Dan's arguments in defense of preprints, although
I think that the boost in speed and citations is the weakest, simply
because there is usually no inherent time pressure in most of our
research - after all, it's not like we are developing vaccines for a
global pandemic or something.
More importantly, preprints provide open access for readers and authors
and remove gatekeepers. The latter may allow the publishing of research
that goes against widely accepted standards in style, design,
methodology and so on but this kind of heterodoxy is something I
personally welcome. Of course, I value the critique of experts but in
the current system I don't really get this critique. Instead, I just get
the information that someone, who is probably an expert on the matter
and may or may not have spent a lot of time on this particular paper,
deemed it fit for publication.
I am not convinced by Alain's argument that the current peer-review
process is a safeguard against bad science. As Dan suggested, there is a
good amount of research showing the ineffectiveness of the current
review system. There may even be the danger that certain publications
are taken at face value, instead of being assessed critically, just
because they appeared in a reputable journal. Thus, peer-review may
provide a false sense of security, much like the use of helmets in
American Football caused an increase in traumatic brain injury because
it led players to charge head-first into each other.
The only time I noticed a truly bad effect of preprints was during the
pandemic, when media outlets picked up on flawed corona-related
research ("masks don't work", etc.) and then reported it as fact without
understanding or explaining what a preprint is.
I think that it would be useful to have a review process that is open,
transparent, and detached from publishing, like movie reviews written
on sites such as IMDb. In this way, scientists could not only access
and cite the research itself but also critical reviews of that research.
This would also allow young scientists such as myself to get more
insight into the secretive world of academic publishing. Of course,
coming up with a good architecture that sets the right incentives for
such a system is no trivial task, but I don't see clinging to the status
quo of publishing as a viable option in the long run.
Again, thank you all for adding to this debate!
All the best,
Ole
On 25.05.2023 at 11:51, Goodman, Daniel F M wrote:
> Alain,
>
> You write about preprints as if they're some new thing with potentially
> dangerous unknowable consequences, but they've been around and used
> extensively (particularly in maths and physics) for over 30 years at
> this point (arXiv was founded in 1991). Most major funders and journals
> recognise preprints, probably the majority of funders now have open
> access requirements that can be fulfilled with preprints, and a few are
> even mandating their use. It's actually not much younger than the
> widespread use of peer review, which didn't become a de facto standard
> until the 1960s-1970s (Nature didn't use it until 1973 for example).
>
> When you say you're not convinced by arguments about speed or number of
> citations, I guess you mean about the net benefits not about the facts?
> Because the data is really stark: papers in biology which originally
> appeared as preprints get 36% more citations, and the boost is
> immediate and long-lasting.
>
> To make the argument clearer, let's break it down into the different
> roles that preprints can have.
>
> The first role is what preprints can do in the period following the
> publication of a paper in a journal. In this case, posting a preprint
> of
> a paper fulfills open access requirements and makes it possible for the
> whole world to read your paper, including the general public, and
> people
> at less wealthy universities and countries that cannot afford the
> journal subscription. I cannot see any coherent argument against this.
> It's a disgrace that the public pays for science but is not able to
> access the results of the work they paid for, and it is only a
> hindrance
> to scientific progress to gate access to knowledge.
>
> The second role is what preprints can do in the time between the
> journal
> accepting the paper and making it available. This is purely about speed
> of publication but I can't see any reason why you wouldn't want this
> speed? I just went to the most recent issue of JASA and looked at the
> first three papers as a rough sample, and this delay was 3 weeks, 3.5
> weeks and 6.5 weeks. It's not years, but might make the difference in
> someone's job or grant application.
>
> The third role is where I guess you mostly disagree Alain, the time
> period between publishing the preprint and journal acceptance. But I
> don't really see any conflict here. If you don't want to read preprints
> and prefer to wait then just don't read them. But they will have value
> for other readers (like me) who accept the limitations, and they have
> great value for the authors (36% more citations for example). For
> reference, for my sample of JASA papers above, the times from first
> submission to journal publication were 22 weeks, 27 weeks, and 38
> weeks.
>
> I would dispute the strength of the quality control you mention though.
> A study of peer review at the BMJ with deliberate major and minor
> errors
> found that on average peer reviewers picked up on only 2.6 to 3 of 9
> major errors deliberately introduced. So peer review does provide some
> sort of quality control, but not enough to mean that you can
> uncritically read peer-reviewed papers.
>
> And on the other hand, there is also a downside to only reading peer
> reviewed work in that you are subject to editorial and reviewer biases.
> A PNAS study found that a paper submitted with a Nobel prize winner as
> author was recommended for acceptance by 20% of reviewers, but the very
> same paper with an unknown student as author was only recommended for
> acceptance 2% of the time
>
> More controversially perhaps, I think there is a potential fourth role
> for preprints that are never submitted to a journal. This is very
> common
> in maths, physics and computer science and works well there. I think it
> would work even better when combined with a post-publication peer
> review
> platform that made reviews open, prominently displayed with an
> at-a-glance summary, and easily accessible. But that's an argument for
> another day!
>
> Dan
>
> ------ Original Message ------
> Date 25/05/2023 09:01:43
> Subject Re: arXiv web of trust
>
>> Dan, all,
>>
>> I'm not convinced by arguments about speed of 'publication', number of
>> citations, or algorithmic suggestions. Think 'fake news' and the
>> impact of recommendation algorithms on the quality of information,
>> minds, and the state of the world.
>>
>> The review process can be seen as quality control. A product maker
>> that eliminates that phase can deliver its products faster, introduce jazzier
>> products, make more money, and dominate the market. Peer-review - like
>> product quality control - doesn't eliminate all flaws, but it may make
>> them less likely and easier to spot and eliminate.
>>
>> I suspect there is a generational dimension to this debate. The three
>> of us that argued most strongly in defence of the review process have
>> (or have had) a well-established career. How could we not defend the
>> practices that got us there? Someone struggling to gain recognition,
>> and a job, may be tempted by mechanisms that bypass those practices.
>> Fair enough, but beware. It might be a bit like tearing down the walls
>> and ripping up the floor to feed the boiler.
>>
>> The debate may become moot with the introduction of AI-based tools to
>> assist writing and reviewing. Why not use similar tools to read the
>> papers too, and understand them, and produce new science (of possibly
>> better quality)? This sounds great, except that I don't see much room
>> for a human scientist in that loop. So much for your careers.
>>
>> I find the generational issue unnerving, personally. For the first
>> time in my life, I'm old and the others are new. It takes some
>> getting used to.
>>
>> Alain
>>
>>
>>
>>
>>> On 24 May 2023, at 15:42, Goodman, Daniel F M
>>>
>>> I have no hesitation in calling a preprint a "publication". There's
>>> no magic in peer review that makes it not count as published before
>>> this process. Even the word preprint is archaic now given how many
>>> journals are online only.
>>>
>>> Personally, I now primarily read preprints because most of the work
>>> in the areas I'm interested in appears a year or two earlier as
>>> preprints than in a journal. It's much more exciting and progress can
>>> be much faster when there isn't a multi-year delay between doing work and
>>> seeing how others make use of it. I just had an email from someone
>>> asking if they could cite a tweet of mine that had inspired them to
>>> do some work and this sort of thing is great! Why should we accept
>>> years of delay between each increment of progress?
>>>
>>> Of course, reading preprints means you have to be cautious. But I
>>> always treat papers I read critically whether they've been through
>>> peer review or not, and I would encourage everyone to do the same.
>>> Peer review is of very uneven quality, based on quantitative studies
>>> and based on my own experience as a reviewer reading the other
>>> reviews. Terrible papers with glaring errors get through peer review.
>>> So I don't think we can uncritically accept the results of peer
>>> reviewed papers, and in practice most scientists don't. We criticise
>>> peer reviewed papers all the time. It's this process of review or
>>> feedback after publication that is the real scientific process, and
>>> it would be much easier if the reviews were made available so we
>>> could more easily judge for ourselves. The sooner we move to a system
>>> of open and transparent post publication peer review like the systems
>>> Etienne is talking about, the better.
>>>
>>> I do agree with Alain's point that there are too many papers to read
>>> them all, but for me that's not an argument for the traditional
>>> approach to peer review but for experimenting with different
>>> approaches to recommending papers. Again personally, I find I have a
>>> higher hit rate with algorithmic suggestions from Semantic Scholar
>>> and from things I see posted on social media than I do from going
>>> through journal tables of contents (which I still do out of habit).
>>>
>>> And as a last point to encourage preprints, the evidence shows that
>>> papers that are first available as a preprint get cited more overall.
>>> And if that doesn't convince you, I don't know what will. :-)
>>>
>>> Dan
>>>
>>> ---
>>> This email was written on my phone, please excuse my brevity.
>>>
>>> Sent: Wednesday, 24 May 2023 10:38
>>> Subject: Re: [AUDITORY] arXiv web of trust
>>>
>>> Thanks for opening this nice debate, Max!
>>>
>>> I side with Brian for the need of serious peer-review, but I am less
>>> sure how this can be achieved nowadays. Publishers are increasingly
>>> pressuring reviewers to work fast because their business model relies
>>> on volume, and there seems to be little cost to publishing poor
>>> quality papers. With the ever-increasing precarisation of research, it
>>> takes a
>>> very strong faith in the ethos of scientific integrity to remain a
>>> thorough reviewer.
>>>
>>> If we accept that, as a consequence of this pressure, there are more
>>> flawed papers that pass the review process, it would mean that we, as
>>> consumers of the literature, should be more cautious when citing
>>> articles. We should more critically examine what we cite, and sort of
>>> perform our own review. But of course, that's also very time
>>> consuming... and it is also very inefficient at the scale of the
>>> community: me *not* citing an article because I found that it is
>>> potentially flawed will not prevent others from citing it, and the
>>> effort I will have put in reviewing it will be largely wasted.
>>>
>>> So I do believe that there is a strong benefit in having more open
>>> discussions about papers, and in some cases the fact that they are
>>> published or not in the traditional sense may be partially
>>> irrelevant. We definitely don't want to turn the scientific community
>>> into social media, where a few arbitrary influencers get to decide
>>> what's worthy and what isn't. But there are now places where
>>> scientific arguments can be shared, and reflections can be had,
>>> constructively.
>>>
>>> That's what we tried to do for the last edition of the International
>>> Symposium on Hearing, by hosting the papers as "pre-prints" (for lack
>>> of a better term) freely available on Zenodo, with reviews publicly
>>> available on PubPeer (and more can be added).
>>> Contributors are still able to publish their articles in the
>>> traditional sense, and hopefully the published version will be
>>> connected to the ISH version in some form so that users can view the
>>> history and comments. In other words, there is much benefit for the
>>> two systems to co-exist (we can get rid of private publishers,
>>> though, and switch to decentralized institutional ones).
>>>
>>> There remains the problem raised by Alain: as readers, how do we deal with
>>> the volume? While publishers have been selling us "reputation" in the
>>> form of journals in very much overrated ways (such as impact factors,
>>> and what not), it is true that journals do have a curating role that
>>> should not be underestimated. This being said, editors do not
>>> actively seek authors to steer publications towards a specific topic
>>> (besides Frontiers' take-it-all harassment approach). It is still the
>>> authors that decide to submit to a specific journal or another. As a
>>> result, following the JASA TOC gives us access to a semi-random
>>> sample of what's going on in the field. It does offer,
>>> stochastically, some degree of protection against confirmation bias
>>> in literature search (whereby you only look for papers that confirm
>>> your idea). I wonder if automatic suggestions of "related papers"
>>> could achieve something similar in other venues?
>>>
>>> Cheers,
>>> -Etienne
>>>
>>>
>>> --
>>> Etienne Gaudrain, PhD
>>>
>>> Lyon Neuroscience Research Centre / Auditory Cognition and
>>> Psychoacoustics (CAP)
>>> CNRS UMR5292, Inserm U1028, Université Lyon 1
>>> Centre Hospitalier Le Vinatier - Bâtiment 462 - Neurocampus
>>> 95 boulevard Pinel, 69675 Bron Cedex, France
>>>
>>>
>>>
>>> On Wed, 24 May 2023 at 10:56, Alain de Cheveigne
>>> Hi Jonathan, all,
>>>
>>> Here's a different perspective.
>>>
>>> First of all, the issue of peer review should be distinguished from
>>> that of publishers shaving the wool off our backs (more below).
>>>
>>> Peer review offers functions that we miss out on in the preprint
>>> model. Weeding out junk is one, improving papers (and the ideas in
>>> them) is another. A third is reducing the bulk of things to read.
>>>
>>> The last might seem counterintuitive: surely, more is better? The
>>> thing is, we have limited time and cognitive bandwidth. Lack of time
>>> is the major obstacle to keeping abreast, and lack of time of the
>>> potential audience is what prevents our ideas having an impact. You
>>> painstakingly work to solve a major problem in the field, write it up
>>> carefully, and no one notices because attention is carried away by
>>> the tweet cycle.
>>>
>>> The review/journal model helps in several ways. First, by
>>> prioritizing things to read (as an alternative to the random - or
>>> otherwise biased - selection induced by lack of time). Second, by
>>> improving the readability of the papers: more readable means less
>>> time per paper means more attention for other papers - including
>>> possibly yours. Third, by organizing - however imperfectly - the
>>> field.
>>>
>>> For example, you can (or could) keep abreast of a topic in acoustics
>>> by scanning JASA and a few other journals. With the preprint/twitter
>>> model the 'field' risks being shattered into micro-fields, bubbles,
>>> or cliques.
>>>
>>> My experience of the review process is - as everyone's - mixed. I
>>> remember intense frustration at the reviewer's dumbness, and despair
>>> at ever getting published. I also remember what I learned in the
>>> process. Almost invariably, my papers were improved by orders of
>>> magnitude (not just incrementally).
>>>
>>> I also spend a lot of time reviewing. I find it a painful process,
>>> as it involves reading (I'm a bit dyslexic), and trying to understand
>>> what is written and - to be helpful to the author - what the author
>>> had in mind and how he/she could better formulate it to get the
>>> message across, and avoid wasting the time of - hopefully - countless
>>> readers. It does involve weeding out some junk too.
>>>
>>> Science is not just about making new discoveries or coming up with
>>> radically new ideas. These are few and far between. Rather, it's a
>>> slow process of building on other people's ideas, digesting, tearing
>>> down, clearing the rubble, and building some more. The review process
>>> makes the edifice more likely to stand. Journals play an important
>>> role in this accumulation, even if most content is antiquated and
>>> boring. It's a miracle that some journals have done this over
>>> decades, even centuries.
>>>
>>> Which brings us back to the issue of money, impact factors, and
>>> careers. Lots to say about that, mostly depressing, but mainly
>>> orthogonal to the peer-review issue.
>>>
>>> Alain
>>>
>>>
>>>
>>>
>>>
>>> wrote:
>>> >
>>> > Matt,
>>> >
>>> > In this context I would avoid the term “publishing”, since that
>>> has such a different meaning for so many people, but I personally do
>>> take advantage of posting preprints on a public server (like arXiv)
>>> almost every chance I get.
>>> >
>>> > Preprints (preprint = a fully written paper that is not (yet)
>>> published) have been useful for many decades, originally in physics,
>>> as a way of getting one's research results out in a timely manner.
>>> Other key benefits are that it establishes primacy of the research
>>> findings, that it is citable in other researchers' papers, and that
>>> it can be promoted by social media such as this listserv (more below
>>> on this). But the biggest benefit is typically getting the paper out
>>> into the world for others to learn from, without having to wait based
>>> on the whims of publishers and individual reviewers. If most of your
>>> published papers get accepted eventually, and the most important
>>> findings don’t get cut in the review process, then preprints are
>>> something you should definitely consider. Reviewers often make
>>> published papers better, but maybe not so much better that it’s worth
>>> waiting many months for others to see your results.
>>> >
>>> > arXiv is the oldest website for posting preprints, and if its
>>> Audio and Speech section is active, that might be a good place to
>>> post your preprints. But there may be other options for you. As an
>>> auditory neuroscientist I typically use bioRxiv (e.g., "Changes in
>>> Cortical Directional Connectivity during Difficult Listening in
>>> Younger and Older Adults”). I
>>> also use PsyArXiv if the topic is more perceptual than neural (e.g.,
>>> “Attention Mobilization as a Modulator of Listening Effort: Evidence
>>> …”). [See what I did there about promoting your research on social
>>> media?]
>>> >
>>> > I’m sure others have opinions too.
>>> >
>>> > Jonathan
>>> >
>>> >
>>> wrote:
>>> >>
>>> >> Is anyone publishing on arXiv at the moment? It seems that to
>>> publish there they rely on a web of trust.
>>> >>
>>> >> There is an Audio and Speech section of arXiv which would suit
>>> our community.
>>> >>
>>> >> thanks
>>> >>
>>> >> Matt
>>> >
>>> > --
>>> > Jonathan Z. Simon (he/him)
>>> > University of Maryland
>>> > Dept. of Electrical & Computer Engineering / Dept. of Biology /
>>> Institute for Systems Research
>>> > 8223 Paint Branch Dr.
>>> > College Park, MD 20742 USA
>>> > Office: 1-301-405-3645, Lab: 1-301-405-9604, Fax: 1-301-314-9281
>>> >
>>> >
>>