Dear list,

I have received several responses to my claim which I find very interesting and insightful. These responses referred to "power" journals, such as Nature, and their editorial procedures. In reading the scope of Nature (see below) it is clear that there is more to their criteria than scientific merit, highlighting several points which, in my opinion, open the decision process to editorial considerations outside of what is typical of a scientific journal, more akin to a magazine where editorial interest is the deciding factor.
If one submits a manuscript to such a publication, with its clearly stated scope of public interest, newsworthiness, elegance, and surprising results, one should not be surprised by a significant degree of editorial leeway (or bias, if one so deems). However, while I agree that rejection from such a publication can be disappointing, I would be more inclined to question classifying this journal as truly scientific, rather than as science media. If one looks at the scope of an alternate publication:
One can plainly see the difference in editorial intention, with the first geared towards publicity/notoriety in the press and the second geared more towards scientific quality for scientists. This leads to a potentially more interesting discussion on how journals are evaluated and ranked, and why people choose certain journals. The journal impact factor is, I think many would admit, a flawed notion, and it greatly benefits publications such as Nature, where external citations are more common, over journals such as JASA, where internal article citation is the norm. How a journal is considered valuable in the community, and by the associated evaluating bodies (funding, promotion, etc.), warrants, I think, more critique than the notion of peer review in general. Acceptance rate is one factor, as more lenient publications can be considered as providing less quality control. I would further argue that society-run journals, rather than pure profit enterprises, offer a greater likelihood of scientific rigor, as the only real motivation is quality assurance. Of course, for-profit journals can also provide rigor, but the capitalistic effect cannot be ignored.

The open-source model (free-for-all, with at most minor limits) is well known for conferences and now extends to unreviewed avenues like arXiv (I will not term these preprints, as there remains no assurance that a follow-up actual 'print' version exists). This was the impetus of this discussion, and I think the principal critique remains that these avenues, which can justifiably be grouped together, do not provide any such rigor, and by their nature do not claim to, placing dissemination over validation.

I have seen little in the way of proposals to improve the peer-review system that do not pose a serious threat to its positives. Monetary compensation would likely lead to more rapid reviews to increase production rather than higher quality, unless reviewers were "qualified", which then introduces a new level of bias and privilege. Open/published reviews expose the authors to errors revealed along the way of revision, which can only diminish their standing in the community by showing what happens before the document is considered "ready". Non-anonymous reviews add stature and reputation to the review document itself, which is contrary to the original intention, whereby the reviewer has no personal gain in providing their opinion, at the risk of severe bias. Nevertheless, there are clearly means of improving the system at large, emphasizing scientific quality and rigor over competing elements.

I would therefore invite those who are concerned by the system as a whole to reconsider how they choose their publishing outlet. We inhabit a system of our own making, and it is through engagement with (or avoidance of) the different actors that we can influence and improve our career community.

--
Brian FG Katz
Equipe LAM : Lutheries Acoustique Musique
Sorbonne Université, CNRS, Institut ∂'Alembert