This essay examines a peculiarity of institutionalized psychological measurement practice: an inherent contradiction between guidelines for how scales and tests are developed and how those scales and tests are typically analyzed. Best practices for scale and test development emphasize writing individual items, or subsets of items, to capture unique aspects of a construct, so that the full construct is covered across the instrument. Yet the typical analytic approaches, factor analysis and related reflective models, assume that no individual item (nor any subset of items) captures unique, construct-relevant variance. This contradiction has important implications for the use of factor analysis to support measurement claims. These implications, along with other critiques of factor analysis, are discussed.