
Re: [AUDITORY] arXiv web of trust

On 5/31/2023 8:54 PM, Goodman, Daniel F M wrote:
*** Attention: This is an external email. Use caution responding, opening attachments or clicking on links. ***
Thanks Les for this thoughtful response. I stand by my previous message. The explosive growth in journals was a commercial strategy, and I don't think there's any controversy about that. Regarding the period before the 1970s, Melinda Baldwin summarises in https://ethos.lps.library.cmu.edu/article/id/19/ (the article on which the interview you linked is based):

"...refereeing at a journal was not seen as a sign of scientific rigor or respectability during this period... The choice to use referees or not was essentially a logistical one, not an epistemological one."
That does not bolster your point.  It is about the referees.

On rereading the article I picked up on something I missed the last time. It seems this idea of peer review as a check on rigour was very specific to the US, and almost unheard of in non-English-speaking countries. Apparently a key factor was the increase in science spending during the Cold War and the political arguments about how that money was being spent. Perhaps this was more important than the takeover of publishing by commercial journals, but it's hard to say given that both were happening at the same time.

In any case, as you say, we can choose to ignore the origins of peer review and consider it only for what it is now. As I've said a number of times now, I'm not against peer review, only the current form in which it's done. This form probably made sense as the only option in an era before near-universal access to the internet and cheap networking and computing (arXiv's costs are around $15 per article, for reference). It doesn't make sense now.
Once more: we don't need less peer review, we need more and better. I think the best way to do that is to disentangle it from the decision to publish and reorient it towards providing readers (rather than editors) with more relevant context and analysis to help them make their own evaluation of a paper. Preprints are a small step towards that, making the publishing process a little more open and transparent. We can and should do more.
Well, we seem to agree on the value of peer review.



This email was written on my phone, please excuse my brevity.

From: Les Bernstein <lbernstein@xxxxxxxx>
Sent: Wednesday, 31 May 2023 15:43
To: Goodman, Daniel F M; AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: [AUDITORY] arXiv web of trust

"[Peer review] only became widespread in the 1960s and 1970s, and it wasn't driven by a need for more rigour but to stem the tide of an overwhelming number of submissions brought about by the proliferation of journals."

Perhaps I missed it and, if so, that's my error, but I found nothing in the linked article to support that statement.  This article, on the history of peer review, dates its inception to the year 1731.  This article suggests an even earlier date.  This article provides an interesting historical perspective.  Interestingly, while they all acknowledge the rise and instantiation of modern peer-review in the 1970s, not a single one supports Dan's cynical account of its origins.

As I see it, the utility of peer-review, as a process, is orthogonal to the predatory, greedy marketing and profiteering that have become commonplace among many publishers.  On that score, I agree with Dan.  The process of peer-review, itself, could, in theory, be implemented in the absence of publishers.

The origins of a practice are often foreign and irrelevant to how that practice functions in modern-day society.  Even if one were to present evidence consistent with Dan's cynical description of the origins of peer-review, that would not be justification to jettison the practice.  I note that standardized college-admission tests were developed originally to prevent Jews from gaining admission to Harvard.  Besides the fact that the ploy was a failure, that certainly isn't how standardized tests have been used for decades.  Arguments about the equity of standardized tests aside, I don't think that there's a sane argument to suggest that their purpose today is to actively exclude any particular group or set of groups.  No, I will not go down the rabbit hole of the utility of standardized testing.

We can and should do better with regard to peer-review and the "marketization" of science.  For all of its faults, at least for the journals in which I've chosen to publish, I have found peer-review to be of substantial value from the point of view of an author, a reader, and an editor.


On 5/30/2023 6:04 AM, Goodman, Daniel F M wrote:
Thanks Etienne for your supportive message!

Since the issue of age and wisdom keeps coming up, I think it might be worth my saying that the older and more experienced I get (I hesitate to say wiser), the more I question the way we do things now, not the less. As a young researcher I just accepted that this is how things have always been and so it was surely right.

It was quite a shock to actually start reading about the origins of the modern form of peer review (systematic review and revisions as a precondition for publication), and this history is perhaps not widely enough known. It wasn't used for the majority of science for the majority of its history. It only became widespread in the 1960s and 1970s, and it wasn't driven by a need for more rigour but by the need to stem the tide of an overwhelming number of submissions brought about by the proliferation of journals. This in turn was a conscious commercial tactic of the early pioneers of for-profit publishing. It should be a mark of shame for science that we let these profiteers distort the scientific process for their enrichment, and that we not only continue to do so but actively eulogise the system.

That said, thanks in part to the continued efforts of these commercial publishers and willing support from governments, we do have a marketised system of science, and there are strong incentives to game the system with low-quality papers. So we do need a way to keep that in check. My experience and the evidence show that the current form of peer review is not delivering. It misses major errors and is systematically biased. It also impedes efforts to do better. Commercial journals (and many, but not all, society journals) have no incentive to do things that would improve the system, like surfacing post-publication peer reviews from sites like PubPeer. It would undermine their business model if we all realised that this free service was better than the incredibly expensive and profitable service they are offering.

Anyway, as a final thought, I want to thank science for being the only place now where (in my 40s) I get called youthfully naive. It's a rare treat, as it's been a long time since I was last asked for ID when buying alcohol.

In case you're interested in reading more about how modern peer review and journals came to be, I highly recommend this article as a start.


This email was written on my phone, please excuse my brevity.

From: Etienne Gaudrain <egaudrain.cam@xxxxxxxxx>
Sent: Tuesday, 30 May 2023 05:16
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: [AUDITORY] arXiv web of trust

Dear Les, Dear List,

Like Alain, I'm gonna break the rule and talk twice. (As a parenthesis, I actually don't think it is an unspoken rule of the Auditory List, but this is just basic etiquette: in a public debate one has to let other people express themselves too, and one should not assume that their word is so much more valuable than anyone else's. As they say: silence is golden, and I think it's because it gives a chance for others to talk... So, my apologies for holding the mic a bit too long.)

Les, you wrote:

I submit to not-yet-established researchers that your time would be much better spent, and your career advanced more rapidly, by reading the peer-reviewed literature-- especially that authored by well-established and respected investigators.  (No, that does not mean that "new investigators" cannot or do not publish spectacular work).

Although I agree that there is enormous wisdom in past work, I think your advice is rather misguided. Ironically, this is how social media works: orators are judged by their clout more than by the merit of what they say. I think we can all agree that, in the process of producing scientific knowledge, we want to avoid that as much as possible.

In fact, my experience, as a reader and as a reviewer (or as a conference organiser), is that some "well-established and respected investigators" — luckily not all of them, but often the most vocal ones — have a tendency to get a bit full of themselves and get away with publishing sub-quality material, thanks to their name and reputation.

When I am a reviewer, I can of course act on it and recommend rejection or heavy correction. Let's take the current discussion as an example*: Dan, to contribute to the debate, gave constructed arguments supported by references reporting carefully constructed studies. On the other side, opposing these arguments, Les, you gave mostly your opinions and left the onus of finding evidence supporting or disproving your views on Dan and other members of the list. Of course Dan is no spring chicken, but he has yet to reach your stature (and so have I). As a reviewer, I might recommend rejecting your arguments as unfounded, but (1) it would take some courage to go against a well-established figure of our field, and (2) there is no guarantee the editor themselves will not side with the better-established author, either because they know them personally better, or because they don't want to alienate them. The consequence in our public debate is that while people may still think that Dan made good points, they might eventually side with you, either because they imagine that your opinion is based on some unspoken wisdom, or simply because it is safer to go along with the bigger player.

You might say that this, in itself, is a very strong argument for peer-review, and I entirely agree.

I will add that this is an even stronger argument for not trusting blindly everything that has been peer-reviewed. And as a corollary, I would add: especially if this is a well-established author. After all, "well-established" means you should be able to hold these authors to higher standards...

In addition, I also think that this illustrates very well the need for open reviews. Everybody should be able to stand behind their reviews. That not only means they should be cordial enough in tone (a prerequisite to communication in general, which also applies to well-established authors of reviews, for instance), but also that they should be well argued and supported by evidence and references**. This should traditionally be enforced by editors, but I think they too rarely do so, perhaps because they already struggle to find reviewers. I think open reviews would help naturally enforce good quality and cordiality, and they would also help identify manuscripts that have been poorly reviewed. We can debate whether they should be anonymous or not, and how much one's reputation would play a role in this. I think pre-prints and peer-communities can play a crucial role there.

Regarding speed, as much as I think that it is, in general, the recipe for shoddy science, we have to admit that our institutions are constantly pushing for more production, and more hype***. Those of us lucky enough to have an established position may be able to brush this constraint aside (and I personally try to do so as much as possible and be as slow as one can... the ISH contributors will know what I'm talking about...), but others at early stages of their careers, or those on ever precarious, soft-money positions, may not have that luxury. I would encourage well-established faculty members to put their weight behind denouncing and repelling these policies of precarisation and constant competition in science, which directly impact the quality of scientific production. I have the feeling that this would do more for the quality of science than saying "read my book".

Finally, one should also consider that science has become very technologically complex and new methods are constantly being developed. Open and speedy sharing of these methods can benefit scientific communities, and pre-prints can be seen as a good way to share details about these tools. They are also a good way of informing others, in detail, about what one is working on, which helps with distributing the research effort more efficiently. In short, while we used to get away with just posters at conferences, where you would take a note ("ah, they used 45 dB, I see") and that'd be enough to generate your next experimental paradigm, the complexity of methods nowadays is such that they cannot possibly be explained in detail at a conference. Having a formal, carefully prepared report available is very helpful in that sense. In fact, this is not new: Bell Labs and KTH have published reports from the '50s and '60s that have gained notoriety, and citations, despite being unreviewed.

And with this, I will go back to being silent.
Thanks again for this nice discussion.

* Note that I'm well aware that we're having a public discussion, and that the rules of engagement are different than when submitting and reviewing a manuscript, but still think that some elements are informative as illustration.
** That seems obvious, but I think everyone has had to deal with these short, unargued reviews with which the editor decided to side, to the bewilderment of the authors...
*** One more ironic point here: the impact factor, which is supposed to reflect the reputation of a journal, is actually mostly a measure of speed of citation: it is essentially the number of citations received within two years of publication. No wonder it correlates best with retraction rate. Yet another case where reputation actually works against quality...
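Since the footnote defines the impact factor only informally, here is a minimal sketch of the two-year calculation it alludes to. All the numbers and the `impact_factor` helper are made up for illustration; the real Journal Citation Reports computation has additional rules about which items count as "citable".

```python
def impact_factor(citations_in_year, citable_items, year):
    """Two-year impact factor for a given year: citations received in
    `year` to items published in the two preceding years, divided by
    the number of citable items published in those two years.

    citations_in_year[(y, py)] = citations made in year y to items from year py
    citable_items[y]           = number of citable items published in year y
    """
    cites = citations_in_year[(year, year - 1)] + citations_in_year[(year, year - 2)]
    items = citable_items[year - 1] + citable_items[year - 2]
    return cites / items

# Made-up numbers for illustration:
citations = {(2023, 2022): 300, (2023, 2021): 450}
published = {2022: 100, 2021: 150}
print(impact_factor(citations, published, 2023))  # 750 / 250 = 3.0
```

Note how the two-year window rewards journals whose papers are cited quickly, which is the "speed of citation" point the footnote makes.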

On Mon, 29 May 2023 at 06:13, Les Bernstein <lbernstein@xxxxxxxx> wrote:
I submit to not-yet-established researchers that your time would be much better spent, and your career advanced more rapidly, by reading the peer-reviewed literature-- especially that authored by well-established and respected investigators.  (No, that does not mean that "new investigators" cannot or do not publish spectacular work).


On 5/26/2023 10:25 AM, Jonathan Z Simon wrote:
I think it’s a tribute to the Auditory List that we’ve been able to hear such diverse perspectives on preprints. All the opinions I’ve seen expressed here are based on real-world experience, and that really matters. I would argue that this discussion is an example of Social Media “done right”. 

To those who see preprints as dangerous:
Since they exist and will not go away, please give your students and young colleagues advice on how best to navigate their use. Your advice might be that they should simply be ignored, but you may also have more nuanced advice about how to use them to positive advantage: how they should be read and interpreted appropriately (and maybe even when they should be posted, appropriately). Nobody wants their students and young colleagues to be hobbled in today’s academic environment.

To not-yet-established researchers:
Consider the benefits expressed previously in this chain: preprints have the potential to enrich and benefit your academic career.
--When reading a preprint, remember that it’s likely a submitted-but-not-yet-peer-reviewed article. Feel free to use the authors' ideas and findings to add inspiration to your own work (and then cite them accordingly). As a written paper, it will very likely get better once it’s been peer reviewed (logical arguments tightened, data analysis improved, wild claims reeled in, etc.), but the guts of the paper will likely not change much. Don’t get hoodwinked by unsupported claims, but don’t leave innovative, inspirational ideas on the table either. Feel free to play with the authors’ Matlab/Python code that they also posted publicly (and if it’s not yet posted, feel free to email the authors to politely ask if they’d be willing to share it).
--When writing a paper, consider posting it as a preprint at the same time you submit it to a journal, as a step on the way to its becoming a final (peer-reviewed, formally published) article. If your ultimate goal is to share your great ideas with the world at large, preprints can give you an extra lift. Researchers and journal clubs that do read preprints can use your ideas and findings to push their own work forward. If one of your ideas makes it into one of their papers, your paper can be cited, and that’s good too, both as a feeling and for your career. Also, check out the evidence Dan Goodman provided below regarding the substantial boost in citations your published paper may get from having first been released as a preprint. Additionally, consider the benefit of having a preprint when you’re writing a grant: it allows reviewers to distinguish between “work in progress that I’m in the middle of writing up” and “here’s my paper currently under review, and you can also read it if you’d like”. Some reviewers may not care about the difference, but there are plenty who do: give the reviewers who want to push for your grant some extra ammunition.

My two cents,

On May 26, 2023, at 2:50 AM, Adam Weisser <adam_weisser@xxxxxxxxxxx> wrote:

Dear all,

Thank you Matt for bringing up this topic, and to everybody who articulated their opinions. This is a very interesting debate, which I find particularly enlightening given that I am one of those people who have chosen to park their manuscript on arXiv for the foreseeable future.

If I can try to summarize the gist of the opinions, it seems that every researcher tends to come up with a set of heuristics to determine whether a particular publication is worth their time and effort without actually reading it. These include the reputation of the publication platform, but are also greatly influenced by the authors' perceived reputation, their affiliation(s), the level of presentation (aesthetics, language, structure, bibliography, etc.), the type and extent of the claims made, their novelty, their topicality, and, for older papers, the number of citations they have received. We would like to think that all these give a pretty good idea of whether a paper is worthy even before reading the abstract. As in any other endeavor, a judgment error here can be a false negative - ignoring a good paper, which could have advanced other results and ideas, and could have saved repeating work, or revealed that what you have been working on had already been done by somebody else. It can also be a false positive - giving undeserved attention to an unworthy paper, which may result in a waste of time and money, and escalate to wrongly citing it and basing further false claims upon it - a potential embarrassment. To some, there is a pedagogical point to make here, since the risk of a false positive is so high that it is also critical to warn others against it.

I'd like to offer another perspective about the role and usefulness of arXiv, as I have personally experienced it, which goes beyond its preprint repository function. As it relates to a specific work, it may not be easily generalizable, although I think it highlights the shades of grey involved in the process of doing different forms of science.

First, publishing on arXiv has liberated me from adhering to the standard article format and allowed me to keep a more organic structure that made better sense for the writing and the topic - neither a book nor an article, a hybrid between theory and experiment, something that does not clearly belong in any specific journal.

Second, it has allowed for some relaxation of the usual cautiousness of completely refraining from any speculation. While this may be an obvious red flag for some readers, I think it's fair play as long as the act of speculation is clearly stated and the ensuing logical flow is kept in check.

Third, it has made the question of who can review the material moot. Every reader is a reviewer in their own right and must be able to trust their own judgment. A document that may not be reviewable in the traditional sense, because of its length and interdisciplinarity, has very limited options for publication. One such option is to publish it as a book or a thesis, if suitable reviewers can be found. Many walls can be hit here. Another option is to break it down into multiple papers and send them to different journals, which would take many years and many hoops to jump through (a good example is de Boer's "Auditory physics" trilogy from 1980, 1984 and 1991, although I don't know the back story of that series). The benefit of going through this usual process may be an increase in the trust readers can place in the relevance and correctness of the material, while they can also enjoy a better presentation (fewer errors, better focus, etc.). The cost of adhering to the traditional format would be many years of delay and a loss of precision in the message as I envision and would like to communicate it. It may also mean the loss of precedence if someone else comes up with similar ideas at the same time - not at all an uncommon thing in the history of science (e.g., Darwin and Wallace).

The alternative was to use arXiv for publication (it could have been another repository). Critically, it provides an agreed-upon stamp of authorship with a publication date. At the very least, it has non-zero reputation in several scientific fields, there is very rudimentary control by its staff over what goes into it, initial author affiliation (or endorsement by affiliated people) is required, and it allows for version updates. More importantly, it relies on trust in the judgment of the few who would be willing to invest time in reading the manuscript, so they can decide for themselves whether it is a worthwhile piece, or one that should never have seen the light of day and be forgotten. I believe it is a more adult way to treat the readers, who should be capable of assessing the quality of the work after decades of education, without being prescribed a nominal map of where "bad science" necessarily lies and must be avoided at all costs.

Whichever strategy of reading and publishing is embraced, there is going to be no one-rule-fits-all here, and every scholar has to be comfortable with their own choices, obviously. All have clear merits and none is completely infallible.


On Thu, May 25, 2023, at 9:34 PM, Ole C Bialas wrote:
Thank you Dan, Alain and everyone else for this important debate. I 
think it's essential that we, as a field, have a constructive debate 
about publishing models, because it feels like the current model of 
for-profit publishing is unsustainable and will hopefully be replaced by 
something better.

I agree with most of Dan's arguments in defense of preprints, although 
I think that the boost in speed and citations is the weakest one, just 
because there is usually no inherent time pressure in most of our 
research - after all, it's not like we are developing vaccines for a 
global pandemic or something.

More importantly, preprints provide open access for readers and authors 
and remove gatekeepers. The latter may allow the publishing of research 
that goes against widely accepted standards in style, design, 
methodology and so on, but this kind of heterodoxy is something I 
personally welcome. Of course, I value the critique of experts, but in 
the current system I don't really get this critique. Instead, I just get 
the information that someone, who is probably an expert on the matter 
and may or may not have spent a lot of time on this particular paper, 
saw it fit for publication.

I am not convinced by Alain's argument that the current peer-review 
process is a safeguard against bad science. As Dan suggested, there is a 
good amount of research showing the ineffectiveness of the current 
review system. There may even be the danger that certain publications 
are taken at face value, instead of being assessed critically, just 
because they appeared in a reputable journal. Thus, peer-review may 
provide a false sense of security, much like the use of helmets in 
American Football caused an increase in traumatic brain injury because 
it led players to charge head-first into each other.

The only time I noticed a truly bad effect of preprints was during the 
pandemic, when media outlets picked up on flawed corona-related research 
("masks don't work", etc.) and then reported it as fact without 
understanding or explaining what a preprint is.

I think that it would be useful to have a review process that is open, 
transparent and detached from publishing, like movie reviews written on 
pages such as IMDb. In this way, scientists could not only access and 
cite the research itself but also critical reviews of that research. 
This would also allow young scientists such as myself to get more 
insight into the secretive world of academic publishing. Of course 
coming up with a good architecture that sets the right incentives for 
such a system is no trivial task, but I don't see clinging to the 
status quo of publishing as a viable option in the long run.

Again, thank you all for adding to this debate!
All the best,

On 25.05.2023 at 11:51, Goodman, Daniel F M wrote:
> Alain,

> You write about preprints as if they're some new thing with potentially
> dangerous unknowable consequences, but they've been around and used
> extensively (particularly in maths and physics) for over 30 years at
> this point (arXiv was founded in 1991). Most major funders and journals
> recognise preprints, probably the majority of funders now have open
> access requirements that can be fulfilled with preprints, and a few are
> even mandating their use. It's actually not much younger than the
> widespread use of peer review, which didn't become a de facto standard
> until the 1960s-1970s (Nature didn't use it until 1973 for example).

> When you say you're not convinced by arguments about speed or number of
> citations, I guess you mean about the net benefits not about the facts?
> Because the data is really stark: papers in biology which originally
> appeared as preprints get 36% more citations, an effect that is
> immediate and long-lasting.

> To make the argument clearer, let's break it down into the different
> roles that preprints can have.

> The first role is what preprints can do in the period following the
> publication of a paper in a journal. In this case, posting a preprint 
> of
> a paper fulfills open access requirements and makes it possible for the
> whole world to read your paper, including the general public, and 
> people
> at less wealthy universities and countries that cannot afford the
> journal subscription. I cannot see any coherent argument against this.
> It's a disgrace that the public pays for science but is not able to
> access the results of the work they paid for, and it is only a 
> hindrance
> to scientific progress to gate access to knowledge.

> The second role is what preprints can do in the time between the 
> journal
> accepting the paper and making it available. This is purely about speed
> of publication but I can't see any reason why you wouldn't want this
> speed? I just went to the most recent issue of JASA and looked at the
> first three papers as a rough sample, and this delay was 3 weeks, 3.5
> weeks and 6.5 weeks. It's not years, but might make the difference in
> someone's job or grant application.

> The third role is where I guess you mostly disagree Alain, the time
> period between publishing the preprint and journal acceptance. But I
> don't really see any conflict here. If you don't want to read preprints
> and prefer to wait then just don't read them. But they will have value
> for other readers (like me) who accept the limitations, and they have
> great value for the authors (36% more citations for example). For
> reference, for my sample of JASA papers above, the times from first
> submission to journal publication were 22 weeks, 27 weeks, and 38 
> weeks.

> I would dispute the strength of the quality control you mention though.
> A study of peer review at the BMJ with deliberate major and minor 
> errors
> found that on average peer reviewers picked up on 2.6 to 3 of 9 major
> errors deliberately introduced. So it does provide some
> sort of quality control, but not enough to mean that you can
> uncritically read peer reviewed papers.

> And on the other hand, there is also a downside to only reading peer
> reviewed work in that you are subject to editorial and reviewer biases.
> A PNAS study found that a paper submitted with a Nobel prize winner as
> author was recommended for acceptance by 20% of reviewers, but the very
> same paper with an unknown student as author was only recommended for
> acceptance 2% of the time.

> More controversially perhaps, I think there is a potential fourth role
> for preprints that are never submitted to a journal. This is very 
> common
> in maths, physics and computer science and works well there. I think it
> would work even better when combined with a post-publication peer 
> review
> platform that made reviews open, prominently displayed with an
> at-a-glance summary, and easily accessible. But that's an argument for
> another day!

> Dan

> ------ Original Message ------
> From "Alain de Cheveigne" <alain.de.cheveigne@xxxxxxxxxx>
> To "Goodman, Daniel F M" <d.goodman@xxxxxxxxxxxxxx>
> Date 25/05/2023 09:01:43
> Subject Re: arXiv web of trust

>> Dan, all,
>> I'm not convinced by arguments about speed of 'publication', number of 
>> citations, or algorithmic suggestions. Think 'fake news' and the 
>> impact of recommendation algorithms on the quality of information, 
>> minds, and the state of the world.
>> The review process can be seen as quality control. A product maker 
>> that eliminates that phase can deliver them faster, introduce jazzier 
>> products, make more money, and dominate the market. Peer-review - like 
>> product quality control - doesn't eliminate all flaws, but it may make 
>> them less likely and easier to spot and eliminate.
>> I suspect there is a generational dimension to this debate. The three 
>> of us that argued most strongly in defence of the review process have 
>> (or have had) a well-established career. How could we not defend the 
>> practices that got us there? Someone struggling to gain recognition, 
>> and a job, may be tempted by mechanisms that bypass those practices. 
>> Fair enough, but beware. It might be a bit like tearing down the walls 
>> and ripping up the floor to feed the boiler.
>> The debate may become moot with the introduction of AI-based tools to 
>> assist writing and reviewing. Why not use similar tools to read the 
>> papers too, and understand them, and produce new science (of possibly 
>> better quality)?  This sounds great, except that I don't see much room 
>> for a human scientist in that loop.  So much for your careers.
>> I find the generational issue unnerving, personally. For the first 
>> time in my life, I'm old and the others are new.  It takes some 
>> getting used to.
>> Alain
>>>  On 24 May 2023, at 15:42, Goodman, Daniel F M wrote:
>>>  I have no hesitation in calling a preprint a "publication". There's 
>>> no magic in peer review that makes it not count as published before 
>>> this process. Even the word preprint is archaic now given how many 
>>> journals are online only.
>>>  Personally, I now primarily read preprints because most of the work 
>>> in the areas I'm interested in appears a year or two earlier as 
>>> preprints than in a journal. It's much more exciting, and progress can 
>>> be much faster, when there isn't a multi-year delay between doing work 
>>> and seeing how others make use of it. I just had an email from someone 
>>> asking if they could cite a tweet of mine that had inspired them to 
>>> do some work and this sort of thing is great! Why should we accept 
>>> years of delay between each increment of progress?
>>>  Of course, reading preprints means you have to be cautious. But I 
>>> always treat papers I read critically whether they've been through 
>>> peer review or not, and I would encourage everyone to do the same. 
>>> Peer review is of very uneven quality, judging both by quantitative 
>>> studies and by my own experience as a reviewer reading the other 
>>> reviews. Terrible papers with glaring errors get through peer review. 
>>> So I don't think we can uncritically accept the results of 
>>> peer-reviewed papers, and in practice most scientists don't. We 
>>> criticise peer-reviewed papers all the time. It's this process of review or 
>>> feedback after publication that is the real scientific process, and 
>>> it would be much easier if the reviews were made available so we 
>>> could more easily judge for ourselves. The sooner we move to a system 
>>> of open and transparent post publication peer review like the systems 
>>> Etienne is talking about, the better.
>>>  I do agree with Alain's point that there are too many papers to read 
>>> them all, but for me that's not an argument for the traditional 
>>> approach to peer review but for experimenting with different 
>>> approaches to recommending papers. Again personally, I find I have a 
>>> higher hit rate with algorithmic suggestions from Semantic Scholar 
>>> and from things I see posted on social media than I do from going 
>>> through journal tables of contents (which I still do out of habit).
>>>  And as a last point to encourage preprints, the evidence shows that 
>>> papers that are first available as a preprint get cited more overall. 
>>>  And if that doesn't convince you, I don't know what will.
>>>  Dan
>>>  ---
>>>  This email was written on my phone, please excuse my brevity.
>>>  From: Etienne Gaudrain <egaudrain.cam@xxxxxxxxx>
>>>  Sent: Wednesday, 24 May 2023 10:38
>>>  Subject: Re: [AUDITORY] arXiv web of trust
>>>  Thanks for opening this nice debate, Max!
>>>  I side with Brian on the need for serious peer-review, but I am less 
>>> sure how this can be achieved nowadays. Publishers are increasingly 
>>> pressuring reviewers to work fast because their business model relies 
>>> on volume, and there seems to be little cost to publishing poor 
>>> quality papers. With the increasing precarisation of research, it takes a 
>>> very strong faith in the ethos of scientific integrity to remain a 
>>> thorough reviewer.
>>>  If we accept that, as a consequence of this pressure, there are more 
>>> flawed papers that pass the review process, it would mean that we, as 
>>> consumers of the literature, should be more cautious when citing 
>>> articles. We should more critically examine what we cite, and sort of 
>>> perform our own review. But of course, that's also very time 
>>> consuming... and it is also very inefficient at the scale of the 
>>> community: me *not* citing an article because I found that it is 
>>> potentially flawed will not prevent others from citing it, and the 
>>> effort I will have put in reviewing it will be largely wasted.
>>>  So I do believe that there is a strong benefit in having more open 
>>> discussions about papers, and in some cases, the fact that they are 
>>> published or not in the traditional sense, may be partially 
>>> irrelevant. We definitely don't want to turn the scientific community 
>>> into social media, where a few arbitrary influencers get to decide 
>>> what's worthy and what isn't. But there are now places where 
>>> scientific arguments can be shared, and reflections can be had, 
>>> constructively.
>>>  That's what we tried to do for the last edition of the International 
>>> Symposium on Hearing, by hosting the papers as "pre-prints" (for lack 
>>> of a better term) freely available on Zenodo 
>>> (https://zenodo.org/communities/ish2022/), and reviews are made 
>>> publicly available on PubPeer (and more can be added; here's an 
>>> example: 
>>> Contributors are still able to publish their articles in the 
>>> traditional sense, and hopefully the published version will be 
>>> connected to the ISH version in some form so that users can view the 
>>> history and comments. In other words, there is much benefit in having 
>>> the two systems co-exist (we can get rid of private publishers, 
>>> though, and switch to decentralized institutional ones).
>>>  There remains the problem raised by Alain: as readers, how do we deal with 
>>> the volume? While publishers have been selling us "reputation" in the 
>>> form of journals in very much overrated ways (such as impact factors, 
>>> and what not), it is true that journals do have a curating role that 
>>> should not be underestimated. This being said, editors do not 
>>> actively seek authors to steer publications towards a specific topic 
>>> (besides Frontiers' take-it-all harassment approach). It is still the 
>>> authors who decide to submit to one journal or another. As a 
>>> result, following the JASA TOC gives us access to a semi-random 
>>> sample of what's going on in the field. It does offer, 
>>> stochastically, some degree of protection against confirmation bias 
>>> in literature search (whereby you only look for papers that confirm 
>>> your idea). I wonder if automatic suggestions of "related papers" 
>>> could achieve something similar in other venues?
>>>  Cheers,
>>>  -Etienne
>>>  --
>>>  Etienne Gaudrain, PhD
>>>  Lyon Neuroscience Research Centre / Auditory Cognition and 
>>> Psychoacoustics (CAP)
>>>  CNRS UMR5292, Inserm U1028, Université Lyon 1
>>>  Centre Hospitalier Le Vinatier - Bâtiment 462 - Neurocampus
>>>  95 boulevard Pinel, 69675 Bron Cedex, France
>>>  On Wed, 24 May 2023 at 10:56, Alain de Cheveigne wrote:
>>>  Hi Jonathan, all,
>>>  Here's a different perspective.
>>>  First of all, the issue of peer review should be distinguished from 
>>> that of publishers shaving the wool off our backs (more below).
>>>  Peer review offers functions that we miss out on in the preprint 
>>> model. Weeding out junk is one, improving papers (and the ideas in 
>>> them) is another. A third is reducing the bulk of things to read.
>>>  The last might seem counterintuitive: surely, more is better?  The 
>>> thing is, we have limited time and cognitive bandwidth. Lack of time 
>>> is the major obstacle to keeping abreast, and lack of time of the 
>>> potential audience is what prevents our ideas having an impact. You 
>>> painstakingly work to solve a major problem in the field, write it up 
>>> carefully, and no one notices because attention is carried away by 
>>> the tweet cycle.
>>>  The review/journal model helps in several ways. First, by 
>>> prioritizing things to read (as an alternative to the random - or 
>>> otherwise biased - selection induced by lack of time).  Second, by 
>>> improving the readability of the papers: more readable means less 
>>> time per paper means more attention for other papers - including 
>>> possibly yours. Third, by organizing - however imperfectly - the 
>>> field.
>>>  For example, you can (or could) keep abreast of a topic in acoustics 
>>> by scanning JASA and a few other journals. With the preprint/twitter 
>>> model the 'field' risks being shattered into micro-fields, bubbles, 
>>> or cliques.
>>>  My experience of the review process is - like everyone's - mixed.  I 
>>> remember intense frustration at the reviewer's dumbness, and despair 
>>> at ever getting published. I also remember what I learned in the 
>>> process.  Almost invariably, my papers were improved by orders of 
>>> magnitude (not just incrementally).
>>>  I also spend a lot of time reviewing. I find it a painful process, 
>>> as it involves reading (I'm a bit dyslexic), and trying to understand 
>>> what is written and - to be helpful to the author - what the author 
>>> had in mind and how he/she could better formulate it to get the 
>>> message across, and avoid wasting the time of - hopefully - countless 
>>> readers. It does involve weeding out some junk too.
>>>  Science is not just about making new discoveries or coming up with 
>>> radically new ideas. These are few and far between. Rather, it's a 
>>> slow process of building on other people's ideas, digesting, tearing 
>>> down, clearing the rubble, and building some more. The review process 
>>> makes the edifice more likely to stand. Journals play an important 
>>> role in this accumulation, even if most content is antiquated and 
>>> boring. It's a miracle that some journals have done this over 
>>> decades, even centuries.
>>>  Which brings us back to the issue of money, impact factors, and 
>>> careers.  Lots to say about that, mostly depressing, but mainly 
>>> orthogonal to the peer-review issue.
>>>  Alain
>>>  > On 23 May 2023, at 13:54, Jonathan Z Simon <jzsimon@xxxxxxx> 
>>> wrote:
>>>  >
>>>  > Matt,
>>>  >
>>>  > In this context I would avoid the term “publishing”, since that 
>>> has such a different meaning for so many people, but I personally do 
>>> take advantage of posting preprints on a public server (like arXiv) 
>>> almost every chance I get.
>>>  >
>>>  > Preprints (preprint = a fully written paper that is not (yet) 
>>> published) have been useful for many decades, originally in physics, 
>>> as a way of getting one's research results out in a timely manner. 
>>> Other key benefits are that it establishes primacy of the research 
>>> findings, that it is citable in other researchers' papers, and that 
>>> it can be promoted by social media such as this listserve (more below 
>>> on this). But the biggest benefit is typically getting the paper out 
>>> into the world for others to learn from, without having to wait based 
>>> on the whims of publishers and individual reviewers. If most of your 
>>> published papers get accepted eventually, and the most important 
>>> findings don’t get cut in the review process, then preprints are 
>>> something you should definitely consider. Reviewers often make 
>>> published papers better, but maybe not so much better that it’s worth 
>>> waiting many months for others to see your results.
>>>  >
>>>  > arXiv is the oldest website for posting preprints, and if its 
>>> Audio and Speech section is active, that might be a good place to 
>>> post your preprints. But there may be other options for you. As an 
>>> auditory neuroscientist I typically use bioRxiv (e.g., "Changes in 
>>> Cortical Directional Connectivity during Difficult Listening in 
>>> Younger and Older Adults”). I 
>>> also use PsyArXiv if the topic is more perceptual than neural (e.g., 
>>> “Attention Mobilization as a Modulator of Listening Effort: Evidence 
>>> from Pupillometry” <https://psyarxiv.com/u5xw2>). [See what I mean 
>>> about promoting your research on social media?]
>>>  >
>>>  > I’m sure others have opinions too.
>>>  >
>>>  > Jonathan
>>>  >
>>>  >
>>>  >> On May 22, 2023, at 6:45 PM, Matt Flax <flatmax@xxxxxxxxxxx> 
>>> wrote:
>>>  >>
>>>  >> Is anyone publishing on arXiv at the moment? It seems that to 
>>> publish there they rely on a web of trust.
>>>  >>
>>>  >> There is an Audio and Speech section of arXiv which would suit 
>>> our community.
>>>  >>
>>>  >> thanks
>>>  >>
>>>  >> Matt
>>>  >
>>>  > --
>>>  > Jonathan Z. Simon (he/him)
>>>  > University of Maryland
>>>  > Dept. of Electrical & Computer Engineering / Dept. of Biology / 
>>> Institute for Systems Research
>>>  > 8223 Paint Branch Dr.
>>>  > College Park, MD 20742 USA
>>>  > Office: 1-301-405-3645, Lab: 1-301-405-9604, Fax: 1-301-314-9281
>>>  >
>>>  >

Leslie R. Bernstein, Ph.D. | Professor Emeritus
Depts. of Neuroscience and Surgery (Otolaryngology) | UConn School of Medicine
263 Farmington Avenue, Farmington, CT 06030-3401
Office: 860.679.4622 | Fax: 860.679.2495