*** Attention: This is an external email. Use caution responding, opening attachments or clicking on links. ***

Here's what ChatGPT has to say about arXiv vs peer review:
I recognize the value of both preprint online archives like arXiv and peer-reviewed publication in journals. Each has its unique benefits and considerations, and understanding their respective values is crucial for disseminating research findings effectively.
Preprint online archives, such as arXiv, offer several advantages that make them valuable for researchers. Firstly, they provide a platform for researchers to share their work with the scientific community quickly. This allows for rapid dissemination of research findings and facilitates collaboration, as fellow researchers can access and provide feedback on the preprint. This open and transparent approach to sharing scientific knowledge encourages discussions and the exchange of ideas, fostering scientific progress.
Another significant benefit of preprint archives is that they offer a chance to receive feedback on research before formal peer review. This can be particularly useful in refining the work, identifying potential flaws or limitations, and strengthening the overall research design. Feedback received through the preprint process can contribute to improving the research before submitting it to a peer-reviewed journal.
Furthermore, posting a preprint on an online archive establishes a clear and early record of priority and intellectual ownership. In highly competitive research fields, where multiple groups might be working on similar projects simultaneously, this can be crucial for establishing precedence.
On the other hand, peer-reviewed publication in journals remains a vital aspect of scientific research and academic advancement. Peer review provides a rigorous evaluation process by experts in the field, ensuring the quality, validity, and reliability of the research. Through this process, potential flaws, errors, or biases can be identified and addressed, resulting in higher-quality publications.
Publishing in reputable peer-reviewed journals also adds a stamp of approval and credibility to the research. It signifies that the work has met the standards of the scientific community and has undergone rigorous scrutiny. This validation is particularly important for researchers seeking tenure, promotions, or funding, as it carries weight in the academic community.
Moreover, journals often have wider readership and visibility than preprint archives, making it more likely that the research will reach a broader audience. Journal publications are often indexed in databases and search engines, which facilitate discoverability and citation by other researchers. This, in turn, can enhance the researcher's reputation, impact, and career progression.
In summary, preprint online archives like arXiv offer rapid dissemination, early feedback, and a transparent platform for sharing research, fostering collaboration and discussion within the scientific community. Peer-reviewed publication in journals, on the other hand, provides a rigorous evaluation process, validation, credibility, wider readership, and increased visibility. Recognizing the value of both avenues, researchers often utilize preprint archives to share early versions of their work while working towards the goal of journal publication to receive formal recognition and validation.
—Brent

On 6 Jun 2023 at 7:39 PM +1000, Jan Schnupp <000000e042a1ec30-dmarc-request@xxxxxxxxxxxxxxx>, wrote:
I guess if reputation biases peer review as heavily as the consensus seems to think here, then what does the often pretty hard time I get from my reviewers tell me about my reputation? ... 🤔😝
On Tue, 6 Jun 2023, 17:17 Peter Harrison, <pmch2@xxxxxxxxx> wrote:
Several colleagues have mentioned how peer review is unduly biased by the reputation of the authors/institutions. I agree that this is an important problem, but it's only fair to observe that it applies to preprints too. In a world where we don't have time to read every preprint, many people will still end up using imperfect proxies for deciding what to read, such as the reputation of the authors/institutions. In the absence of a journal's mark of approval, these imperfect proxies could grow more influential, not less influential.
From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxxxxxxxxx> on behalf of Helia Relano Iborra <0000017f74f788f8-dmarc-request@xxxxxxxxxxxxxxx>
Sent: 06 June 2023 09:21
To: AUDITORY@xxxxxxxxxxxxxxx <AUDITORY@xxxxxxxxxxxxxxx>
Subject: Re: [AUDITORY] [External] Re: [AUDITORY] arXiv web of trust
Dear Brian, all,
Thank you for a very enriching discussion. I just wanted to counter Brian’s last email, regarding the neutrality of peer review. There is extensive evidence of “status bias” in the peer-review system in studies comparing single-blind vs double-blind reviews. E.g. Huber et al. (2022) https://www.pnas.org/doi/10.1073/pnas.2205779119 or Blank (1991) https://www.jstor.org/stable/2006906. No system (or person) is free of bias, unfortunately. I think recognizing that these biases exist and being aware of them when we are reviewing manuscripts can only make us better reviewers.
Helia Relaño Iborra
Hearing Systems Section
Department of Health Technology
2800 Kgs. Lyngby
Dear Bob, et al,
I feel obliged to reply to some serious statements made in recent posts. While I think there is little doubt that numerous elements of bias (privileges of various sorts) are present in career evolutions, recruitment committees, and promotions, be they academic or corporate, I must return the discussion to the topic at hand, in the broad sense: the importance of peer review.
As a regular reviewer for various journals (and fields of acoustics), what is judged is the work on the page, no more and no less. No free rides are given to authors of high reputation (sometimes they receive more scrutiny), nor penalties to young unknowns or authors from underrepresented countries (sometimes more flexibility is given). If the argument for publication is unpersuasive, it fails solely on the merits of the presentation of the work. I say it this way because, again, it is only what is on the page that is reviewed. The work itself may be of a high standard, but a work is reviewed by what is stated, not what is intended. As an Associate Editor, the same is true: specific knowledge of the author is really only needed to rule out direct conflicts of interest when selecting reviewers. I have never considered the background or the academic or career history of an author in accepting or rejecting a manuscript. I would even go so far as to say that if one considers these elements in one's reviews, one should probably recuse oneself from such benevolent activities to the community.
Finally, returning to the question of arXiv and preprints, where this all started: I don't think anyone came out against them on the whole, but they should be taken for what they are, and no more. They are a scientific blog or a conference proceeding. They do not hold the same value, or represent the same rigor of critique, as a journal article that has passed review. The difference is clear. However, it is only really relevant in a few circumstances: as a substantive citation in another journal article, in an academic/research career application or review, or in a project proposal (a version of the previous point). If one doesn't require these elements, and that is a choice, then one isn't limited in the means one chooses to disseminate one's work. No one has critiqued the use of arXiv and the like per se, but if one is competing on the quality of one's work, the process of peer review is the widely accepted passage for some semblance of quality, for which no other alternative currently exists. A review committee cannot be expected to read every article, let alone the comments section, and be required to form an opinion.
This is not to say the process cannot be improved, and that is also the motivation for journal quality classifications and for the exclusion of some journals from being "acceptable" in those situations. Such rapid-publication, limited-review journals are more akin to arXiv than to a reputable journal, though with fees, and rightly so with regard to scientific scrutiny. One is free to use them for what they are, but one should not claim that they are anything more.
At least, that is my perspective.
Brian FG Katz
Equipe LAM : Lutheries Acoustique Musique
Sorbonne Université, CNRS, Institut ∂'Alembert
-------- Original message --------
From: "McMurray, Bob" <bob-mcmurray@xxxxxxxxx>
Date: 6/6/23 06:09 (GMT+01:00)
Subject: Re: [AUDITORY] [External] Re: [AUDITORY] arXiv web of trust
I've been watching from the wings on this discussion, as I think our field is at a real point of flux with respect to scientific publishing and communication, and I don't think I know what's best any more. It's been fun to watch a very healthy and vigorous conversation unfold amongst my esteemed colleagues, both junior and senior, and I've learned a lot.
However, Matt (and Deniz) made a very powerful point that I felt the need to weigh in on. They argue that the very nature of scientific communication is pervaded by issues of power, positionality, and discrimination. I don't think I realized this until recently (perhaps I was an Eagle in that cartoon), but they are right. It's important.
Les, I respect your point of view. We should be having these open and objective conversations, and we should strive for that. But we also have to recognize that this is an aspirational point of view. In my view, the rhetoric of science is not objective; it is persuasive. A scientific discovery from my lab is not a fact until I convince the scientific community to believe it (or at least convince Reviewers 1, 2, and 3). The rules of science (statistical and methodological norms, peer review, and the like) are really designed to ensure that this persuasion is geared to some mutually acceptable norms of objectivity. It often works, and there's not much better.
But fundamentally this is still a persuasive enterprise (as it should be). And fundamentally, some people – by virtue of their station and background – are going to be in a better place to persuade their colleagues than others. We commonly associate these issues of discrimination and positionality with things like race, religion and gender. And indeed these things matter – just look at the disparities among the medalists of the ASA and you can see for yourself.
But a good friend of mine recently showed me how these kinds of factors extend throughout academia. Are some fields privileged? Are hearing scientists more likely to discount a finding from a linguist or a social scientist than one from someone who is solidly situated in hearing science? What about a finding from a small clinical population (a "niche" field) or on an obscure auditory phenomenon, as opposed to a finding based on the core "modal" NH adult in a soundproof booth? Are we more likely to take a finding seriously if it was generated at one of the top universities (in our field) than at a second-tier state university? Or from a new scholar who was trained by one of the best, versus an emerging scholar who came to the field more independently? What about a person who is changing fields, migrating, for example, from cognitive science to audiology or hearing science? What about clinical credentialing? Does that help or harm our case?
All of these things have nothing to do with the objective argument that is being made and the quality of the data used to support it. But we all must admit that they do change how much credence we are likely to give a discussion or a paper (and each of us may weigh these differently). Sometimes these are useful heuristics – if the methods aren’t clear, but you know how a person was trained, it may be easier to trust that the experiments were done right. But sometimes this is just downright discriminatory, like when we discount contributions from outside what we perceive as the core field.
But how does this impact scientific publishing?
Matt makes the valuable point that as our field opens up to new viewpoints and new participants, the view from those people may be very different than the view from the people at the top. We should listen. People do struggle to gain entry to this field. I certainly did when I began working in hearing science, despite my training at a very good cognitive science program.
Peer review is part of the problem. It can amplify these biases. And peer review is not designed to "help" new entrants; it is designed to help a journal editor decide what to do with a paper. So it often serves as an impersonal barrier to entry. Of course, we cannot dispense with it. But we should be actively exploring other models. If this new generation of talented, thoughtful, diverse, and enterprising young scholars wants to engage in novel modes of scientific communication, I'm happy to listen and to contribute to these new models.
On Thu, Jun 1, 2023 at 1:55 PM Les Bernstein <lbernstein@xxxxxxxx> wrote:
On 5/31/2023 2:15 PM, Matthew Winn wrote:
There are statements in this thread that cannot go unchallenged, because they condone and perpetuate harmful ideas that need to end. Specifically:
1) “If one is not a sufficiently confident and independent thinker such that one can express ideas, arguments, disagreements, etc. with anyone in the field, regardless of stature, then that is a weakness”
This statement ignores the multiple power structures that affect the lives and employment of those below the ‘upper echelon’ in the field. Expressing an idea involves risk when your position is precarious. Adapting to and weighing that risk is a key survival strategy, not a weakness. I have a blind spot for this risk – not because I’m so great at science, but because my culture gives me unearned respect because of my demographics. For people like me (and, I will note, virtually everyone on this thread), we live in a culture that insulates us from any sense that our voice doesn’t belong.
I could not disagree more. The suggestion that, within our field, different cultural backgrounds confer more or less ability to have productive scientific discussions with anyone, regardless of status is, as I see it, just plain nonsense. Expressing an idea involves risk? Really, in our field of auditory science? I can give plenty of counterexamples to such an assertion.
2) “think about how such researchers earned such status. It was not because they had friends, it was not because people liked them. It was because they established a track-record of contributions that the field, in general, held in very high regard.”
This is a self-serving narrative that reflects survivorship bias and which ignores everything we know about how people act in real life. Science is done by humans, who have personal interests, biases, and who live within a culture where status is built on many layers of privilege. Every decision we make is filtered by these factors, which allow some people (like me) to accumulate a variety of advantages at every career stage, simply because of how they look, who their friends are, and where they grew up. They are more likely to have papers accepted, to be selected for podium presentations, to have a job application reviewed, to be interviewed, to be hired, to be selected as editors and reviewers, to be elected to positions of leadership, and to be given favorable treatment in the workplace. To be taken seriously. If we pretend that these advantages are ALL due to the scientific merit of one’s work, we are characterizing scientists as some species entirely separate from the rest of humanity.
Again, theoretical, social drivel. Lloyd Jeffress, Dave Green, Neal Viemeister, Barbara Bohne, and on and on.
3) “Stature does not count. Everyone should be held to the very same standard”
We all agree that work should not be judged on the basis of who wrote it. But importantly, the influence of stature doesn’t need to be explicitly suggested in order to actually take place. Similar to the last point, the idea of equal standards and equal treatment is a convenient fiction that allows people like me to feel superior because I can attribute my success to my own hard work and merit, even though many factors that led to that success were unearned.
Again, your theoretical musing. Not the reality in auditory science that I have seen.
What does this have to do with preprints? The point is to consider that others have a different set of constraints, and that our definitions of merit are tailored to suit those who are already enjoying a wide variety of privileges. Consider the forces that lead authors to think that preprints are useful, and also whether you are facing the same expectations and constraints that they are. Numerous people have pointed at the apparent generational divide on this issue - let's figure out why. Graduate admissions and fellowship review increasingly expect a publication record that far exceeds anything that would have been expected of the reviewers when they were at that same career stage. For various reasons, the timeline of publication is increasingly long. Exacerbating this, it is no longer enough to simply conduct a good study; one must also curate a data management and sharing plan that includes open-access data and documented code. One must learn and conduct the latest statistical techniques that their advisors never needed to learn, and sift through a much broader set of literature that includes a lot of garbage. To compete for stable employment, younger scholars need an internet presence and must learn to incorporate inclusive language in their writing, even if that were not part of their training. They need to express how their work contributes to the reduction of harm in society, despite being advised by some of the people who are doing the harm.
None of this, much of which I find to be mere unjustified assertion, is an argument for shifting the weight of dissemination of work toward non-refereed open access. By the way, when was it the case that a solid knowledge of statistical techniques was unnecessary? Hey, you don't have to wire together analog equipment to generate your signals!
Preprints are not a magical solution that can eliminate the multiple barriers that I described above. But they have tangible value, and reflect adaptation to a changing academic landscape, rather than reflecting some loss of “standards” that are designed to protect those already at the top, and which were established under an entirely different system of constraints.
Preprints help address the needs for 1) visibility and 2) quicker feedback on your work from a wider variety of scholars who might not have been invited to review, simply because they were not in the network of the associate editor. These factors are often yoked together; the channels that spread awareness of a preprint (like Twitter) might also be the same channels that generate discussion that becomes useful feedback. The tendency (or need) to use these dissemination channels probably reinforces the generational divide on this thread. I assure you that the comments I've received from people enthusiastic enough to read a preprint have had meaningful influence and value. And those comments can come from a wider variety of people whose opinions have been historically discounted. Experienced reviewers will always have a place in our scientific discourse, but to discount the benefit and potential of preprints is to be willfully detached from our current reality.
I never said one should not use pre-prints for whatever benefit they can confer.
Leslie R. Bernstein, Ph.D. | Professor Emeritus
Depts. of Neuroscience and Surgery (Otolaryngology) | UConn School of Medicine
263 Farmington Avenue, Farmington, CT 06030-3401
Office: 860.679.4622 | Fax: 860.679.2495
Matthew Winn, AuD, PhD
University of Minnesota