Confirmatory bifactor models have become very popular in psychological applications, but they are increasingly criticized for statistical pitfalls such as a tendency to overfit, a tendency to produce anomalous results, unstable solutions, and underidentification problems. In part to combat this state of affairs, many different reliability and dimensionality measures have been proposed to help researchers evaluate the quality of an obtained bifactor solution. However, in empirical practice, the evaluation of bifactor models is largely based on structural equation model fit indices. Other critical indicators of solution quality, such as the patterns of general and group factor loadings, whether all estimates are interpretable, and the values of reliability coefficients, are often not taken into account. In addition, some confusion exists in the methodological literature about the appropriate interpretation and application of certain bifactor reliability coefficients. In this article, we accomplish several goals. First, we review reliability coefficients for bifactor models and their correct interpretations, and we provide expectations for their values. Second, to help steer researchers away from structural equation model fit indices and to improve current practice, we provide a checklist for evaluating the statistical fit of bifactor models. Third, we evaluate the state of current practice by examining 96 empirical articles employing confirmatory bifactor models across different areas of psychology.
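For reference, two of the most widely reported bifactor reliability coefficients are coefficient omega (ω) and omega hierarchical (ωH). One standard formulation, assuming an orthogonal bifactor model with unit-variance factors, general-factor loadings λ_gi, loadings λ_ki of item i on group factor k, and residual variances θ_i, is

\[
\omega \;=\; \frac{\left(\sum_i \lambda_{gi}\right)^2 + \sum_k \left(\sum_{i \in k} \lambda_{ki}\right)^2}
{\left(\sum_i \lambda_{gi}\right)^2 + \sum_k \left(\sum_{i \in k} \lambda_{ki}\right)^2 + \sum_i \theta_i},
\qquad
\omega_H \;=\; \frac{\left(\sum_i \lambda_{gi}\right)^2}
{\left(\sum_i \lambda_{gi}\right)^2 + \sum_k \left(\sum_{i \in k} \lambda_{ki}\right)^2 + \sum_i \theta_i}.
\]

Under this decomposition, ω reflects the proportion of total-score variance attributable to all common factors, whereas ωH reflects the proportion attributable to the general factor alone.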