Tonight's Science programme on BBC Radio 4 was critical of the peer review process, in which scientific articles are filtered for publication according to the comments of other researchers in the same field. [Peer Review in the Dock, 4 August 2008]
The purpose of peer review is to give us confidence in the quality of published scientific research. Like many other social institutions, it has well-known weaknesses as well as strengths. [BBC News, Science will stick with peer review]
I have often been asked to provide peer reviews on articles for journals and conferences. Sometimes I find I know much more about the subject of the article than the authors do, or at least about some aspects of it. Even when I know less, I can usually find some areas of weakness or confusion in the article, calling (in my opinion) for either a significant rewrite or outright rejection.
Having gone to the trouble of providing these reviews, I used to be shocked when I discovered that papers sometimes slipped through to publication without the identified flaws being adequately corrected. Experienced authors (or their supervisors) know how to game the system, and most journals and conferences simply don't have the resources to prevent these games. Some years ago I wrote a critique of this process and identified a number of negative patterns [Review Patterns].
The BBC programme this evening identified several more, including "famous institution" bias and publication bias. The latter is particularly important for research that depends on sophisticated statistics (such as medical research): if only publishable results enter the analysis, the publication criteria may themselves distort the findings. For example, if studies that find no effect rarely reach print, a survey of the published literature will overstate the effect. Publication bias also shapes the opinions of so-called experts, whose assumptions will have been reinforced by the papers they have read.
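To see how the distortion arises, here is a minimal simulation sketch (my own illustration, not anything from the programme, with the true effect size, sample size, and significance threshold all chosen arbitrarily): it runs many small two-arm trials of the same modest effect, but "publishes" only those that happen to reach statistical significance.

# Minimal sketch of publication bias (illustrative parameters only):
# many small studies of the same modest true effect, but only those
# reaching p < 0.05 with a positive result get "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.2   # assumed true standardised effect size
N_PER_ARM = 30      # assumed participants per arm in each study
N_STUDIES = 2000    # number of independent studies simulated

all_effects = []
published_effects = []

for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_ARM)
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    t, p = stats.ttest_ind(treatment, control)
    observed = treatment.mean() - control.mean()
    all_effects.append(observed)
    # Publication filter: only "significant" positive results appear in print
    if p < 0.05 and observed > 0:
        published_effects.append(observed)

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Mean effect, all studies:    {np.mean(all_effects):.2f}")
print(f"Mean effect, published only: {np.mean(published_effects):.2f}")
print(f"Fraction published:          {len(published_effects) / N_STUDIES:.0%}")

Under these particular assumptions only a small minority of studies pass the filter, and their average effect is several times the true one. A reader, reviewer or expert who sees only the published studies therefore comes away with a systematically inflated picture, exactly the distortion the programme described.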
URL for this post: http://tinyurl.com/cx6wbv
Monday, August 04, 2008