Deviations from Bayesian updating are traditionally labeled biases, errors, or fallacies, implying that they are inherently ``sub-optimal.'' We offer a more nuanced view, demonstrating that in learning problems with misspecified models, non-Bayesian updating can outperform Bayesian updating.
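To make the claim concrete, the following is a minimal illustrative simulation, not taken from the paper: an agent's likelihood overstates how informative binary signals are, and a tempered, ``under-reacting'' non-Bayesian update attains a higher average log score for its terminal belief than exact Bayesian updating with the same misspecified likelihood. The accuracy values and the tempering exponent $\eta$ are assumptions chosen for illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

TRUE_ACC = 0.55    # real signal accuracy (data-generating process)
MODEL_ACC = 0.70   # accuracy assumed by the agent's (misspecified) model
ETA = 0.5          # tempering exponent of the non-Bayesian rule (eta = 1 is Bayes)
N_SIGNALS = 20
N_REPS = 200_000

def belief_in_true_state(correct_signals, n, eta):
    """Posterior probability of the true state after n binary signals,
    starting from a uniform prior and raising the misspecified
    likelihood of each signal to the power eta."""
    # log-likelihood ratio contributed by each signal under the agent's model
    llr_per_signal = np.log(MODEL_ACC / (1.0 - MODEL_ACC))
    log_odds = eta * llr_per_signal * (2 * correct_signals - n)
    return 1.0 / (1.0 + np.exp(-log_odds))

# Number of signals pointing toward the true state, drawn from the TRUE process.
k = rng.binomial(N_SIGNALS, TRUE_ACC, size=N_REPS)

# Average log score (higher is better) of the terminal belief in the true state.
bayes_score = np.mean(np.log(belief_in_true_state(k, N_SIGNALS, eta=1.0)))
tempered_score = np.mean(np.log(belief_in_true_state(k, N_SIGNALS, eta=ETA)))

print(f"Misspecified Bayes (eta=1.0): mean log score = {bayes_score:.4f}")
print(f"Tempered non-Bayes (eta={ETA}): mean log score = {tempered_score:.4f}")
\end{verbatim}

With these (assumed) numbers the tempered rule scores higher on average: the misspecified likelihood overweights each signal, so exact Bayesian updating becomes overconfident, and $\eta<1$ partially offsets that. This is only one illustration of the general point, not the construction used in the paper.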