Everybody Will Win, and All Must Be Hired: Comparing Additivity Neglect with the Nonselective Superiority Bias
Subadditivity research and research on the nonselective superiority bias are two streams of work that examine referent-dependent judgments from slightly different angles.
Both biases violate basic formal constraints: the probabilities of a set of mutually exclusive events cannot add up to more than 100%, and a set of attractive candidates cannot all be rated as superior to the group mean. In three experiments, we examine how these two biases are related by asking the same participants to perform both kinds of tasks on the same material. Both biases appear to be widespread, even for sets where all alternatives are presented together, but they differ in how they are affected by response format and experimental setup. Thus, presenting participants with an unbiased set of ratings will reduce but not normalize their probability estimates of the same alternatives, whereas presenting them with an unbiased (additive) set of probabilities will make most alternatives appear inferior to the group mean, inverting the superiority bias. Self-reports reveal that additivity neglect and the nonselective superiority bias can be based on two main response strategies: (i) considering each alternative independently or (ii) comparing alternatives while neglecting their complementarity. In both cases, assessments will be the outcome of a compromise between the perceived "absolute" merits of each alternative, its standing relative to referents, and properties of the response scale.
Journal of Behavioral Decision Making, 2017, 30 (1), 95-106