Abstract
Belief polarization is the tendency of individuals with opposing beliefs to predictably disagree more after being exposed to certain types of evidence. A variety of recent papers have argued that many of the core empirical results surrounding this effect are consistent with standard Bayesian or approximately Bayesian theories of rationality. I argue that this is wrong. While some types of predictable polarization are consistent with standard Bayesian models, the core of the phenomenon is not. That core is the fact that when individuals face polarizing processes (such as exposure to mixed evidence or like-minded discussion), they can predict their own polarization. In any standard Bayesian model, Reflection is a theorem. Thus no standard Bayesian model can explain the type of (Reflection-violating) predictable polarization we observe. The culprit in this result is not probabilism, but the assumption that updates occur by conditioning on partitional evidence. I show that, given the value of evidence as a constraint on rational updating, this partitionality assumption is equivalent to the requirement that rational credences are always introspective, i.e., that when it's rational to have a given probability, it's rational to be certain that it's rational to have that probability. I suggest, therefore, that if Bayesian accounts of polarization are to succeed, they must do so by rejecting the assumption of rational introspection.
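For concreteness, the Reflection principle invoked above can be stated as follows; this is the standard formulation (due to van Fraassen), with the notation added here for illustration rather than drawn from the paper itself. Where $P$ is one's current rational credence function and $P_1$ one's future (post-update) credence function,
\[
P\bigl(A \mid P_1(A) = t\bigr) = t \qquad \text{whenever } P\bigl(P_1(A) = t\bigr) > 0.
\]
When updating proceeds by conditioning on a known partition $\{E_1, \dots, E_n\}$, Reflection holds, and with it the martingale property given by the law of total probability:
\[
\mathbb{E}\bigl[P_1(A)\bigr] \;=\; \sum_{i} P(E_i)\, P(A \mid E_i) \;=\; P(A),
\]
so expected future credence equals current credence, ruling out a predictable shift in a known direction, which is the signature of the polarization at issue.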