
Re: [AUDITORY] arXiv web of trust

I have no hesitation in calling a preprint a "publication". There's no magic in peer review that means work doesn't count as published until it has been through that process. Even the word "preprint" is archaic now, given how many journals are online only.

Personally, I now primarily read preprints because most of the work in the areas I'm interested in appears as a preprint a year or two before it appears in a journal. It's much more exciting, and progress can be much faster, when there isn't a multi-year gap between doing work and seeing how others make use of it. I just had an email from someone asking if they could cite a tweet of mine that had inspired them to do some work, and this sort of thing is great! Why should we accept years of delay between each increment of progress?

Of course, reading preprints means you have to be cautious. But I always treat papers I read critically whether they've been through peer review or not, and I would encourage everyone to do the same. Peer review is of very uneven quality, judging both by quantitative studies and by my own experience as a reviewer reading the other reviews. Terrible papers with glaring errors get through peer review. So I don't think we can uncritically accept the results of peer-reviewed papers, and in practice most scientists don't. We criticise peer-reviewed papers all the time. It's this process of review and feedback after publication that is the real scientific process, and it would be much easier if the reviews were made available so we could more easily judge for ourselves. The sooner we move to a system of open and transparent post-publication peer review like the systems Etienne is talking about, the better.

I do agree with Alain's point that there are too many papers to read them all, but for me that's not an argument for the traditional approach to peer review but for experimenting with different approaches to recommending papers. Again, personally, I find I have a higher hit rate with algorithmic suggestions from Semantic Scholar and with things I see posted on social media than I do by going through journal tables of contents (which I still do out of habit).

And as a last point to encourage preprints, the evidence shows that papers that are first available as a preprint get cited more overall. And if that doesn't convince you I don't know what will. 😉


This email was written on my phone, please excuse my brevity.

From: Etienne Gaudrain <egaudrain.cam@xxxxxxxxx>
Sent: Wednesday, 24 May 2023 10:38
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: [AUDITORY] arXiv web of trust

Thanks for opening this nice debate, Max!

I side with Brian on the need for serious peer review, but I am less sure how this can be achieved nowadays. Publishers are increasingly pressuring reviewers to work fast because their business model relies on volume, and there seems to be little cost to publishing poor-quality papers. With the increasing precarisation of research, it takes a very strong faith in the ethos of scientific integrity to remain a thorough reviewer.

If we accept that, as a consequence of this pressure, there are more flawed papers that pass the review process, it would mean that we, as consumers of the literature, should be more cautious when citing articles. We should more critically examine what we cite, and sort of perform our own review. But of course, that's also very time consuming... and it is also very inefficient at the scale of the community: me *not* citing an article because I found that it is potentially flawed will not prevent others from citing it, and the effort I will have put in reviewing it will be largely wasted.

So I do believe that there is a strong benefit in having more open discussions about papers, and in some cases whether or not they are published in the traditional sense may be partially irrelevant. We definitely don't want to turn the scientific community into social media, where a few arbitrary influencers get to decide what's worthy and what isn't. But there are now places where scientific arguments can be shared, and reflections can be had, constructively.

That's what we tried to do for the last edition of the International Symposium on Hearing, by hosting the papers as "pre-prints" (for lack of a better term) freely available on Zenodo (https://zenodo.org/communities/ish2022/), with reviews made publicly available on PubPeer (and more can be added; here's an example: https://pubpeer.com/publications/B12EF572A02E04659AF006FF9C5C91). Contributors are still able to publish their articles in the traditional sense, and hopefully the published version will be connected to the ISH version in some form so that readers can view the history and comments. In other words, there is much benefit in the two systems co-existing (we can get rid of private publishers, though, and switch to decentralised institutional ones).

There remains the problem raised by Alain: as readers, how do we deal with the volume? While publishers have been selling us "reputation" in the form of journals in very overrated ways (impact factors and whatnot), it is true that journals do have a curating role that should not be underestimated. That being said, editors do not actively seek out authors to steer publications towards a specific topic (besides Frontiers' take-it-all harassment approach). It is still the authors who decide to submit to one journal or another. As a result, following the JASA TOC gives us access to a semi-random sample of what's going on in the field. It does offer, stochastically, some degree of protection against confirmation bias in literature search (whereby you only look for papers that confirm your idea). I wonder if automatic suggestions of "related papers" could achieve something similar in other venues?


Etienne Gaudrain, PhD

Lyon Neuroscience Research Centre / Auditory Cognition and Psychoacoustics (CAP)
CNRS UMR5292, Inserm U1028, Université Lyon 1
Centre Hospitalier Le Vinatier - Bâtiment 462 - Neurocampus
95 boulevard Pinel, 69675 Bron Cedex, France

On Wed, 24 May 2023 at 10:56, Alain de Cheveigne <alain.de.cheveigne@xxxxxxxxxx> wrote:
Hi Jonathan, all,

Here's a different perspective.

First of all, the issue of peer review should be distinguished from that of publishers shaving the wool off our backs (more below).

Peer review offers functions that we miss out on in the preprint model. Weeding out junk is one, improving papers (and the ideas in them) is another. A third is reducing the bulk of things to read.

The last might seem counterintuitive: surely, more is better? The thing is, we have limited time and cognitive bandwidth. Lack of time is the major obstacle to keeping abreast, and our potential audience's lack of time is what prevents our ideas from having an impact. You painstakingly work to solve a major problem in the field, write it up carefully, and no one notices because attention is carried away by the tweet cycle.

The review/journal model helps in several ways. First, by prioritizing things to read (as an alternative to the random - or otherwise biased - selection induced by lack of time).  Second, by improving the readability of the papers: more readable means less time per paper means more attention for other papers - including possibly yours. Third, by organizing - however imperfectly - the field.

For example, you can (or could) keep abreast of a topic in acoustics by scanning JASA and a few other journals. With the preprint/twitter model the 'field' risks being shattered into micro-fields, bubbles, or cliques.

My experience of the review process is - like everyone's - mixed. I remember intense frustration at the reviewer's dumbness, and despair at ever getting published. I also remember what I learned in the process. Almost invariably, my papers were improved by orders of magnitude (not just incrementally).

I also spend a lot of time reviewing. I find it a painful process, as it involves reading (I'm a bit dyslexic), and trying to understand what is written and - to be helpful to the author - what the author had in mind and how he/she could better formulate it to get the message across, and avoid wasting the time of - hopefully - countless readers. It does involve weeding out some junk too.

Science is not just about making new discoveries or coming up with radically new ideas. These are few and far between. Rather, it's a slow process of building on other people's ideas, digesting, tearing down, clearing the rubble, and building some more. The review process makes the edifice more likely to stand. Journals play an important role in this accumulation, even if most content is antiquated and boring. It's a miracle that some journals have done this over decades, even centuries.

Which brings us back to the issue of money, impact factors, and careers. Lots to say about that, mostly depressing, but mainly orthogonal to the peer-review issue.


> On 23 May 2023, at 13:54, Jonathan Z Simon <jzsimon@xxxxxxx> wrote:
> Matt,
> In this context I would avoid the term “publishing”, since that has such a different meaning for so many people, but I personally do take advantage of posting preprints on a public server (like arXiv) almost every chance I get.
> Preprints (preprint = a fully written paper that is not (yet) published) have been useful for many decades, originally in physics, as a way of getting one's research results out in a timely manner. Other key benefits are that it establishes primacy of the research findings, that it is citable in other researchers' papers, and that it can be promoted by social media such as this listserve (more below on this). But the biggest benefit is typically getting the paper out into the world for others to learn from, without having to wait based on the whims of publishers and individual reviewers. If most of your published papers get accepted eventually, and the most important findings don’t get cut in the review process, then preprints are something you should definitely consider. Reviewers often make published papers better, but maybe not so much better that it’s worth waiting many months for others to see your results.
> arXiv is the oldest website for posting preprints, and if its Audio and Speech section is active, that might be a good place to post your preprints. But there may be other options for you. As an auditory neuroscientist I typically use bioRxiv (e.g., "Changes in Cortical Directional Connectivity during Difficult Listening in Younger and Older Adults” <https://www.biorxiv.org/content/10.1101/2023.05.19.541500>), but I also use PsyArXiv if the topic is more perceptual than neural (e.g., “Attention Mobilization as a Modulator of Listening Effort: Evidence from Pupillometry” <https://psyarxiv.com/u5xw2>). [See what I mean about promoting your research on social media?]
> I’m sure others have opinions too.
> Jonathan
>> On May 22, 2023, at 6:45 PM, Matt Flax <flatmax@xxxxxxxxxxx> wrote:
>> Is anyone publishing on arXiv at the moment ? It seems that to publish there they rely on a web of trust.
>> There is an Audio and Speech section of arXiv which would suit our community.
>> thanks
>> Matt
> --
> Jonathan Z. Simon (he/him)
> University of Maryland
> Dept. of Electrical & Computer Engineering / Dept. of Biology / Institute for Systems Research
> 8223 Paint Branch Dr.
> College Park, MD 20742 USA
> Office: 1-301-405-3645, Lab: 1-301-405-9604, Fax: 1-301-314-9281
> http://www.isr.umd.edu/Labs/CSSL/simonlab/