INTRODUCTION: Neuropsychologists often convert continuously scored measures into dichotomous cutoff scores for decision making. Dichotomization allows test users to employ traditional diagnostic statistics, such as sensitivity and specificity, but the approach is conceptually and statistically limited. This study uses simulated data to explore the problems with dichotomizing continuous data. We critically review commonly proposed solutions and illustrate how logistic regression (LR) can overcome these limitations. We also explore practical issues, including homogeneity and heterogeneity in forced dichotomization, and how such problems are compounded by reporting multiple cutoff scores.

METHOD: Using R, we simulated data for a hypothetical, normally distributed cognitive screening test with 200 simulated participants. We set the probability of "cognitive impairment" at .5 and constrained the correlation between the simulated screening test and the impairment designation.

RESULTS: The receiver operating characteristic area under the curve was .78 (95% CI: .71-.84), indicating that the analyses simulated an adequately accurate test. We illustrate how interpreting groups created by cut scores leads to misleading classifications: disparate scores on the same side of a cut score are treated as equivalent, adjacent scores straddling the cutoff are treated as categorically distinct, and offering multiple cutoff scores compounds both problems. Although the idea of jettisoning categories in favor of examining observed data has appeal, such approaches are ill-advised because datasets often have peculiarities that can lead to misleading conclusions. Deriving probabilities from LR uses the full continuum of the data and does not require evaluators to choose among cutoff options.

CONCLUSIONS: We advocate using LR-based probability estimates instead of group-based cutoff scores when making dichotomous decisions from continuous data.
These probability estimates can be directly applied to clinical and research practice.
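The simulation-and-comparison design described above can be sketched as follows. The study itself used R; this stdlib-only Python rewrite is a minimal illustration, not the authors' code, and every parameter below (sample size aside, which follows the abstract's n = 200 and P(impairment) = .5) is an assumed, illustrative value — in particular, the between-group mean shift stands in for the paper's constrained test-impairment correlation, and the cutoff `c` is arbitrary.

```python
# Hedged sketch of the abstract's design: simulate a continuous screening
# score correlated with a binary impairment status, then contrast a
# cutoff-based classification with LR-derived probability estimates.
# The effect size, cutoff, and optimizer settings are assumptions.
import math
import random

random.seed(1)
N = 200  # per the abstract

# Impairment status with P(impairment) = .5, per the abstract.
impaired = [1 if random.random() < 0.5 else 0 for _ in range(N)]
# Shift the score distribution by group to induce a moderate correlation
# (assumed effect size; the paper constrained the correlation directly).
score = [random.gauss(1.0 if y else 0.0, 1.0) for y in impaired]

# --- Cutoff approach: every score on one side of c gets the same label. ---
c = 0.5  # arbitrary illustrative cutoff
flagged = [1 if x > c else 0 for x in score]
sens = sum(f for f, y in zip(flagged, impaired) if y) / sum(impaired)
spec = sum(1 - f for f, y in zip(flagged, impaired) if not y) / (N - sum(impaired))

# --- LR approach: fit p(impaired | score) by simple gradient descent. ---
b0, b1, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    g0 = g1 = 0.0
    for x, y in zip(score, impaired):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += p - y
        g1 += (p - y) * x
    b0 -= lr * g0 / N
    b1 -= lr * g1 / N

def prob(x):
    """LR-predicted probability of impairment for a given screening score."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# The cutoff gives one label to very different scores above c, while the LR
# probabilities distinguish them; adjacent scores straddling c differ only
# trivially in probability despite receiving opposite labels.
print(f"sens={sens:.2f} spec={spec:.2f}")
print(f"p(0.49)={prob(0.49):.2f}  p(0.51)={prob(0.51):.2f}  p(2.50)={prob(2.50):.2f}")
```

Printing the probabilities for scores just below the cutoff, just above it, and far above it makes the abstract's two complaints concrete: near-identical scores on opposite sides of `c` get near-identical probabilities but opposite categorical labels, while widely separated scores on the same side share a label despite very different probabilities.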