Dear all,
Several colleagues have mentioned how peer review is unduly biased by the reputation of the authors/institutions. I agree that this is an important problem, but it's only fair to observe that it applies to preprints too. In a world where we don't have time to read every preprint, many people will still end up using imperfect proxies for deciding what to read, such as the reputation of the authors/institutions. In the absence of a journal's mark of approval, these imperfect proxies could grow more influential, not less influential.
Best wishes
Peter

From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxxxxxxxxx> on behalf of Helia Relano Iborra <0000017f74f788f8-dmarc-request@xxxxxxxxxxxxxxx>
Sent: 06 June 2023 09:21
To: AUDITORY@xxxxxxxxxxxxxxx <AUDITORY@xxxxxxxxxxxxxxx>
Subject: Re: [AUDITORY] [External] Re: [AUDITORY] arXiv web of trust

Dear Brian, all,
Thank you for a very enriching discussion. I just wanted to counter Brian’s last email, regarding the neutrality of peer review. There is extensive evidence of “status bias” in the peer-review system in studies comparing single-blind vs double-blind reviews. E.g. Huber et al. (2022) https://www.pnas.org/doi/10.1073/pnas.2205779119 or Blank (1991) https://www.jstor.org/stable/2006906. No system (or person) is free of bias, unfortunately. I think recognizing that these biases exist and being aware of them when we are reviewing manuscripts can only make us better reviewers.
Best, Helia.
From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxxxxxxxxx>
On Behalf Of Brian FG Katz (SU)
Dear Bob, et al,
I feel obliged to reply to some serious statements made in recent posts. While I think there is little doubt that numerous elements of bias (privileges of various sorts) are present in career evolutions, recruitment committees, and promotions, be they academic or corporate, I must return the discussion to the topic at hand, in the broad sense: the importance of peer review.
As a regular reviewer for various journals (and fields of acoustics), what is judged is the work on the page, no more and no less. No free rides are given to authors of high reputation (sometimes they receive more scrutiny), nor penalties to young unknowns or authors from underrepresented countries (sometimes more flexibility is given). If the argument for publication is unpersuasive, that is solely a matter of how the work is presented. I say it this way because, again, it is only what is on the page that is reviewed. The work itself may be of a high standard, but a manuscript is reviewed by what is stated, not what is intended. As an Associate Editor, the same is true: specific knowledge of the author is really only needed to rule out direct conflicts of interest when selecting reviewers. I have never considered the background, academic or career history of an author in accepting or rejecting a manuscript. I would even go so far as to say that anyone who considers these elements in their reviews should probably recuse themselves from such benevolent service to the community.
Finally, returning to the question of arXiv and preprints, where this all started: I don't think anyone came out against them on the whole, but they should be taken for what they are, and no more. They are a scientific blog or a conference proceeding. They do not hold the same value, or represent the same rigor of critique, as a journal article that has passed peer review. The difference is clear. However, it is only really relevant in a few circumstances: as a substantive citation in another journal article, in an academic/research career application or review, or in a project proposal (a version of the previous point). If one doesn't require these elements, and that is a choice, then one isn't limited in the means one chooses to disseminate one's work. No one has critiqued the use of arXiv and the like, per se, but if one is competing on the quality of one's work, peer review is the widely accepted passage for some semblance of quality, for which no other alternative currently exists. A review committee cannot be expected to read every article, let alone the comments section, and be required to form an opinion.
This does not mean the process cannot be improved, and that is also the motivation for journal quality classifications and for excluding some journals from being "acceptable" in those situations. Such rapid-publication, limited-review journals are more akin to arXiv than to a reputable journal, though with fees, and are rightly treated as such with regard to scientific scrutiny. One is free to use them for what they are, but one should not claim that they are anything more.
At least, that is my perspective.
--
Brian FG Katz
Equipe LAM : Lutheries Acoustique Musique
Sorbonne Université, CNRS, Institut ∂'Alembert
-------- Original message --------
From: "McMurray, Bob" <bob-mcmurray@xxxxxxxxx>
Date: 6/6/23 06:09 (GMT+01:00)
Subject: Re: [AUDITORY] [External] Re: [AUDITORY] arXiv web of trust
Hi Colleagues
I’ve been watching from the wings on this discussion, as I think our field is at a real point of flux with respect to scientific publishing and communication, and I don’t think I know what’s best any more. It’s been fun to watch a very healthy and vigorous conversation unfold amongst my esteemed colleagues – both junior and senior – and I’ve learned a lot.
However, Matt (and Deniz) made a very powerful point that I felt the need to weigh in on. They argue that the very nature of scientific communication is pervaded by issues of power, positionality, and discrimination. I don’t think I realized this until recently (perhaps I was an Eagle in that cartoon), but they are right. It’s important.
Les, I respect your point of view. We should be having these open and objective conversations, and we should strive for that. But we also have to recognize that this is an aspirational point of view. In my view, the rhetoric of science is not objective; it’s persuasive. A scientific discovery from my lab is not a fact until I convince the scientific community to believe it (or at least convince Reviewers 1, 2, and 3). The rules of science – statistical and methodological norms, peer review, and the like – are really designed to ensure that this persuasion is geared to some mutually acceptable norms of objectivity. It often works, and there’s not much better.
But fundamentally this is still a persuasive enterprise (as it should be). And fundamentally, some people – by virtue of their station and background – are going to be in a better place to persuade their colleagues than others. We commonly associate these issues of discrimination and positionality with things like race, religion and gender. And indeed these things matter – just look at the disparities among the medalists of the ASA and you can see for yourself.
But a good friend of mine recently showed me how these kinds of factors extend all throughout academia. Are some fields privileged? Are hearing scientists more likely to discount a finding from a linguist or a social scientist than one from someone who is solidly situated in hearing science? What about a finding from a small clinical population (a “niche” field) or an obscure auditory phenomenon, as opposed to a finding based on the core “modal” NH adult in a soundproof booth? Are we more likely to take a finding seriously if it was generated by one of the top universities (in our field) than by a second-tier state university? Or from a new scholar who was trained by one of the best vs. an emerging scholar who came to the field more independently? What about a person who is changing fields – migrating, for example, from a field like cognitive science to audiology or hearing science? What about clinical credentialing? Does that help or harm our case?
All of these things have nothing to do with the objective argument being made or the quality of the data used to support it. But we must all admit that they do change how much credence we are likely to give a discussion or a paper (and each of us may weigh these differently). Sometimes these are useful heuristics – if the methods aren’t clear, but you know how a person was trained, it may be easier to trust that the experiments were done right. But sometimes this is just downright discriminatory, as when we discount contributions from outside what we perceive as the core field.
But how does this impact scientific publishing?
Matt makes the valuable point that as our field opens up to new viewpoints and new participants, the view from those people may be very different from the view from the people at the top. We should listen. People do struggle to gain entry to this field. I certainly did when I began working in hearing science, despite my training in a very good cognitive science program.
Peer review is part of the problem. It can amplify these biases. And peer review is not designed to “help” new entrants – it is designed to help a journal editor decide what to do with a paper. So it often serves as an impersonal barrier to entry. Of course, we cannot dispense with it. But we should be actively exploring other models. If this new generation of talented, thoughtful, diverse, and enterprising young scholars wants to engage in novel modes of scientific communication, I’m happy to listen and to contribute to these new models.
theBob