Journal of Wine Economics


An Examination of Judge Reliability at a Major U.S. Wine Competition*

Robert T. Hodgson

Professor Emeritus, Department of Oceanography, Humboldt State University, Arcata, CA 95521, email: bob@fieldbrookwinery.com

Abstract

Wine judge performance at a major wine competition was analyzed from 2005 to 2008 using replicate samples. Each panel of four expert judges received a flight of 30 wines embedded with triplicate samples poured from the same bottle. Between 65 and 70 judges were tested each year. About 10 percent of the judges were able to replicate their scores within a single medal group. Another 10 percent, on occasion, scored the same wine anywhere from Bronze to Gold. Judges tend to be more consistent in what they don't like than in what they do. An analysis of variance covering every panel over the study period indicates that only about half of the panels presented awards based solely on wine quality. (JEL Classification: Q13, Q19)

Footnotes

* I would like to thank the administration and advisory board of the California State Fair Wine Competition for supporting this research and agreeing to release the results. Taking such a leadership role benefits the entire wine industry. I would especially like to thank G.M. “Pooch” Pucilowski, chief judge, and Kem Pence, wine department chairperson of the California State Fair Commercial Wine Competition, for their continued support of this study. In addition, Matt Sainson, www.ijudgewine.com, was the programmer responsible for data management for the entire competition. I am also indebted to an anonymous referee.
