By Ivo Welch. Here is the abstract:
This paper analyzes referee recommendations in two settings: The first setting is a prestigious finance conference, in which a computer algorithm matched referees to papers based only on shared expertise. The second setting is the standard journal process, with data from eight prominent economics and finance journals (ECMTA, JEEA, JET, QJE, IER, RAND, JF, RFS). Despite referee selection differences, the data suggest similar referee behavior in both settings. First, referees display only modest consensus. Second, referees disagree not only about scales (a referee mean effect), but also about the relative ordering of papers. Third, the bias measured by the average generosity of the referee on other papers is about as important in predicting a referee’s recommendation as the opinion of another referee on the same paper.
In sum, the typical referee report consists roughly of one part signal of some referee-agreeable objective attribute of the paper and two parts (referee-specific) noise. In turn, the noise itself consists roughly of one part referee-mean effect (bias) and two parts unidentified effects or noise.
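To make that decomposition concrete, here is a minimal sketch of the variance accounting the abstract describes, in my own illustrative notation rather than the paper's:

```latex
% Hypothetical notation, not taken from the paper:
%   r_{ij} : recommendation of referee i on paper j
%   q_j    : the referee-agreeable attribute of paper j (the "signal")
%   b_i    : the referee-mean effect (the referee's own generosity, i.e., bias)
%   e_{ij} : unidentified referee-paper noise
\[
  r_{ij} = q_j + b_i + e_{ij}
\]
% With the abstract's rough "one part / two parts" shares, the total variance splits as
\[
  \mathrm{Var}(r_{ij})
  = \underbrace{\mathrm{Var}(q_j)}_{\text{signal}\;\approx\;1/3}
  + \underbrace{\mathrm{Var}(b_i)}_{\text{bias}\;\approx\;2/9}
  + \underbrace{\mathrm{Var}(e_{ij})}_{\text{noise}\;\approx\;4/9}
\]
```

This reading treats the three components as uncorrelated, the usual simplification in a mixed-effects decomposition; the 2/9 and 4/9 shares just split the two-thirds "noise" portion into one part bias and two parts unidentified noise.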
The random selection of referees removes this potential objection.
2 comments
December 2, 2012 at 11:51 pm
Lones Smith
Ivo Welch appears to be back! Now, who are these Dr. No referees? Successful publishers? Unsuccessful ones? Young? Seasoned pros? Old? For this appears to be the main editor control.
December 4, 2012 at 10:32 am
Enrique
I also like Ioannidis's work, but what I have not seen yet is a paper criticizing the papers criticizing the process of publishing papers.