
The Paris 1976 Wine Tastings Revisited Once More: Comparing Ratings of Consistent and Inconsistent Tasters

Published online by Cambridge University Press:  08 June 2012

Domenic V. Cicchetti
Affiliation:
Yale Home Office, 94 Linsley Lake Road, North Branford, CT 06471; e-mail address: dom.cicchetti@yale.edu.

Abstract

In the author's earlier research, subsets of five quite reliable and six quite unreliable tasters were identified from among the full sample of eleven wine tasters at the heralded 1976 Paris blind tastings of Chardonnays and of Bordeaux/Cabernets. This study shows that the consistent and the inconsistent tasters produced quite different results, both when compared to each other and when compared to results based upon the full sample of eleven tasters. The findings support three conclusions: one should be wary of results based solely upon an omnibus approach (i.e., upon the full sample of eleven tasters); a logical next step is not only to continue identifying consistent tasters, but also to design future studies in which these reliable judges teach neophyte imbibers to achieve similarly high levels of wine-tasting consistency; and, in continuing to pursue other empirically derived oenological knowledge, we should not, in the process, lose sight of the sheer hedonic pleasure of the next glass of wine.
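The taster-consistency comparison at the heart of the study can be illustrated with a one-way intraclass correlation coefficient, ICC(1), a standard index of inter-rater reliability. This is a minimal sketch, not the author's actual analysis; the ratings below are hypothetical 20-point scores invented for illustration, not the 1976 Paris tasting data.

```python
# Minimal sketch of a one-way intraclass correlation, ICC(1), as an
# index of taster consistency. All data here are hypothetical.

def icc1(ratings):
    """ratings: one row per wine, one column per taster."""
    n = len(ratings)       # number of wines (targets)
    k = len(ratings[0])    # number of tasters (raters)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-wines and within-wine mean squares.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(ratings, row_means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical 20-point scores for four wines, rated by three mutually
# consistent tasters and by three mutually inconsistent tasters.
consistent = [[17, 16, 17], [12, 13, 12], [15, 15, 16], [9, 10, 9]]
inconsistent = [[17, 9, 13], [8, 16, 11], [14, 7, 18], [10, 15, 6]]

print(round(icc1(consistent), 2))    # → 0.97
print(round(icc1(inconsistent), 2))  # → -0.34
```

The point of the sketch is the contrast itself: pooling both subsets into one omnibus sample would blur a strong positive reliability and a near-zero (here negative) one into a single middling figure, which is why the abstract cautions against relying on the full-sample result alone.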

Type
Articles
Copyright
Copyright © American Association of Wine Economists 2006

