Behavioral and Brain Sciences

Generalization, similarity, and Bayesian inference

Joshua B. Tenenbaum a1 and Thomas L. Griffiths a2
a1 Department of Psychology, Stanford University, Stanford, CA 94305-2130 jbt@psych.stanford.edu http://www-psych.stanford.edu/~jbt
a2 Department of Psychology, Stanford University, Stanford, CA 94305-2130 gruffydd@psych.stanford.edu http://www-psych.stanford.edu/~gruffydd/

Abstract

Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.


Key Words: additive clustering; Bayesian inference; categorization; concept learning; contrast model; features; generalization; psychological space; similarity.
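The Bayesian recasting of Shepard's theory described in the abstract can be illustrated with a minimal numerical sketch. The sketch below assumes one-dimensional stimuli, a hypothesis space of intervals on a grid, a uniform prior, and the strong-sampling ("size principle") likelihood, under which a hypothesis of size |h| assigns each of n independently sampled examples probability 1/|h|. Generalization to a novel stimulus y is the total posterior mass of hypotheses containing y. The function name, grid, and parameter choices are our own illustrative assumptions, not taken from the article.

```python
import numpy as np

def generalization(examples, y_values, grid=None):
    """Bayesian generalization over interval hypotheses (illustrative sketch).

    Hypotheses are all intervals [l, u] with endpoints on a grid. Under
    strong sampling, a hypothesis containing all n examples has likelihood
    (1/|h|)^n, where |h| = u - l; hypotheses missing any example get zero.
    With a uniform prior, the posterior is the normalized likelihood, and
    p(y in C | examples) sums the posterior mass of hypotheses containing y.
    """
    if grid is None:
        grid = np.linspace(0.0, 10.0, 101)  # illustrative stimulus range
    examples = np.asarray(examples, dtype=float)
    n = len(examples)

    # Enumerate all candidate intervals [l, u] on the grid.
    lows, highs = np.meshgrid(grid, grid, indexing="ij")
    valid = lows < highs  # proper (non-degenerate) intervals only
    contains_all = (lows <= examples.min()) & (highs >= examples.max())

    # Size principle: smaller consistent hypotheses get more weight,
    # and the preference sharpens as n grows.
    size = np.where(valid, highs - lows, np.inf)
    weight = np.where(valid & contains_all, size ** (-float(n)), 0.0)
    weight /= weight.sum()

    # Hypothesis averaging: sum posterior mass of intervals containing y.
    return np.array(
        [weight[(lows <= y) & (highs >= y)].sum() for y in y_values]
    )
```

Two qualitative predictions of the framework fall out of this sketch: the gradient decays with distance from the examples, and repeated examples at the same point sharpen (tighten) generalization, since the size principle increasingly favors small consistent hypotheses.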