Relative Performance Evaluation and Peer-Performance Summarization Errors
35 Pages. Posted: 27 Apr 2011
Date Written: March 31, 2011
In tests of the relative performance evaluation (RPE) hypothesis, researchers rarely, if ever, aggregate peer performance in the same way as a firm’s board of directors. A commonly held view, which frames such aggregation errors as a standard errors-in-variables problem, is that they induce an attenuation bias (i.e., toward zero) in the regression coefficient on systematic firm performance, creating a bias toward support for the hypothesis that boards use RPE. In contrast, this study analytically demonstrates that aggregation differences generate more complicated summarization errors that differ significantly from a standard errors-in-variables problem. In particular, we find that aggregation differences create a bias against finding support for the RPE hypothesis (i.e., inducing a Type-II error). We also show that when the board does not use RPE, an empiricist will not find evidence for RPE (i.e., precluding a Type-I error). Using simulation methods, we demonstrate the sensitivity of empirical inferences to this bias. In particular, the simulation shows how an empiricist can erroneously conclude that boards (on average) do not apply RPE, simply by selecting more, fewer, or different peers than those chosen by the board.
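The mechanism the abstract describes can be sketched as a small simulation. This is an illustrative reconstruction, not the authors' code: the peer-group size, shock variances, noise level, and RPE weight `gamma` are all assumed values. A board sets pay by filtering out the average performance of its own peer set; a researcher who regresses pay on firm performance and the average of a *different* peer set recovers an RPE coefficient attenuated toward zero, i.e., weaker apparent evidence of RPE.

```python
import random

random.seed(0)

def simulate(n_obs=20000, n_peers=10, gamma=1.0):
    """Generate firm-years where the board applies RPE against its own
    peer group; also record the average of a disjoint peer group that a
    researcher might choose instead. All distributional choices are
    illustrative assumptions."""
    firm_perf, board_avgs, other_avgs, pay = [], [], [], []
    for _ in range(n_obs):
        f = random.gauss(0, 1)                 # common (systematic) shock
        firm = f + random.gauss(0, 1)          # firm performance
        board_peers = [f + random.gauss(0, 1) for _ in range(n_peers)]
        other_peers = [f + random.gauss(0, 1) for _ in range(n_peers)]
        board_avg = sum(board_peers) / n_peers
        other_avg = sum(other_peers) / n_peers
        # Board uses RPE: pay loads negatively on its own peer average.
        w = firm - gamma * board_avg + random.gauss(0, 0.1)
        firm_perf.append(firm)
        board_avgs.append(board_avg)
        other_avgs.append(other_avg)
        pay.append(w)
    return firm_perf, board_avgs, other_avgs, pay

def ols2(y, x1, x2):
    """No-intercept OLS of y on (x1, x2) via the 2x2 normal equations
    (all simulated variables are mean zero by construction)."""
    s11 = sum(a * a for a in x1)
    s22 = sum(b * b for b in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * c for a, c in zip(x1, y))
    s2y = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * s1y - s12 * s2y) / det,
            (s11 * s2y - s12 * s1y) / det)

firm_perf, board_avgs, other_avgs, pay = simulate()
b_match = ols2(pay, firm_perf, board_avgs)  # researcher uses the board's peers
b_mis = ols2(pay, firm_perf, other_avgs)    # researcher uses a disjoint peer set
```

With matching peer groups the peer coefficient recovers roughly -1 (the assumed `gamma`); with the mismatched group it is pulled noticeably toward zero, so a test of the RPE hypothesis loses power even though the board does, in fact, use RPE.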
Keywords: Relative Performance Evaluation, Peer Group, Aggregation, Incentive Compensation, Summarization Error
JEL Classification: J33, M41