When Do Experts Listen to Other Experts? The Role of Negative Information in Expert Evaluations for Novel Projects
45 Pages Posted: 20 Jul 2020 Last revised: 5 Nov 2020
Date Written: November 4, 2020
Abstract
The evaluation of novel projects lies at the heart of scientific and technological innovation, and yet the literature suggests that this process is subject to inconsistency and potential biases. This paper investigates the role of information sharing among experts as a driver of evaluation decisions. We designed and executed two field experiments in two separate grant funding opportunities at a leading research university to explore evaluators’ receptivity to assessments from other evaluators. Collectively, our experiments mobilized 369 evaluators from seven universities to evaluate 97 projects, resulting in 761 proposal-evaluation pairs and over $300,000 in awards. We exogenously varied the relative valence (positive or negative) of others’ scores to determine how exposure to higher and lower scores affects a focal evaluator’s propensity to change their initial score. We found causal evidence of negativity bias: evaluators are more likely to lower their scores after seeing more critical scores than to raise them after seeing more favorable scores. Qualitative coding and topic modeling of the evaluators’ justifications for score changes reveal that exposure to lower scores prompted greater attention to uncovering weaknesses, whereas exposure to neutral or higher scores was associated with attention to strengths, along with greater emphasis on non-evaluation criteria, such as confidence in one’s own judgment. Overall, information sharing among expert evaluators can lead to more conservative allocation decisions that favor protecting against failure over maximizing success.
Keywords: project evaluation, innovation, knowledge frontier, diversity, negativity bias