Designing Informative Rating Systems: Evidence from Two Experiments

Garg, Nikhil and Johari, Ramesh

Rating systems on online platforms are used by buyers and by the platform itself to separate high-quality sellers from low-quality ones. In practice, however, ratings are often highly inflated, which drastically reduces the signal available to distinguish seller quality. We ask whether rating systems can better discriminate seller quality by altering the meaning and relative importance of the levels in the rating system. In current systems, a norm has emerged that even an average experience deserves the highest possible rating. We show, through an experiment in an online labor market, that this norm can be countered by re-anchoring the meaning of the rating levels, yielding substantially greater dispersion in estimated seller quality. Building on this insight, we develop a framework that optimizes rating system design by maximizing the rate of convergence to the true distribution of seller quality. Through simulations in an empirically calibrated model, we demonstrate that such optimization can substantially improve the quality of information obtained relative to baseline designs.
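To make the intuition concrete, here is a minimal simulation sketch, not the authors' empirically calibrated model: it assumes a simple threshold model in which each rating level corresponds to a cutoff on a buyer's noisy experience of the seller's latent quality, and it uses pairwise rank error as a stand-in for convergence to the true quality distribution. All names (`simulate_ratings`, `rank_error`), threshold values, and noise parameters are illustrative assumptions, not taken from the paper.

```python
# Sketch: compare an "inflated" rating design (almost every experience clears
# the top threshold, so ratings carry little signal) against a "re-anchored"
# design whose thresholds are spread across the realistic quality range.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ratings(qualities, thresholds, n_ratings):
    """Return each seller's mean rating level after n_ratings noisy experiences."""
    n_sellers = len(qualities)
    # Each transaction yields a noisy observation of the seller's latent quality.
    experiences = qualities[:, None] + 0.15 * rng.standard_normal((n_sellers, n_ratings))
    # Map each experience to a discrete rating level 0..len(thresholds).
    levels = np.searchsorted(thresholds, experiences)
    return levels.mean(axis=1)

def rank_error(qualities, scores):
    """Fraction of seller pairs ordered differently by score than by true quality."""
    dq = qualities[:, None] - qualities[None, :]
    ds = scores[:, None] - scores[None, :]
    discordant = (dq * ds < 0).sum()
    pairs = len(qualities) * (len(qualities) - 1)
    return discordant / pairs

qualities = rng.uniform(0.5, 1.0, size=200)  # most sellers are decent

inflated = np.array([0.2, 0.3, 0.4, 0.5])     # nearly everyone gets the top level
reanchored = np.array([0.6, 0.7, 0.8, 0.9])   # levels discriminate within the range

for name, thresholds in [("inflated", inflated), ("re-anchored", reanchored)]:
    for n in [5, 20, 80]:
        scores = simulate_ratings(qualities, thresholds, n)
        print(f"{name:12s} n={n:3d} pairwise rank error = {rank_error(qualities, scores):.3f}")
```

Under these assumptions, the re-anchored design drives the pairwise rank error down far faster as ratings accumulate, illustrating why spreading the effective meaning of the levels across the true quality range can improve the rate at which the platform learns seller quality.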