Many scholars have called for raising statistical hurdles to guard against false discoveries in academic publications. I show that these calls may be difficult to justify empirically. Published data exhibit bias: results that fail to meet existing hurdles are often unobserved. These unobserved results must be extrapolated, which can lead to weak identification of revised hurdles. In contrast, statistics that target only published findings (e.g., empirical Bayes shrinkage and the false discovery rate, or FDR) can be strongly identified, as data on published findings are plentiful. I demonstrate these results theoretically and in an empirical analysis of the cross-sectional return predictability literature.
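To make the contrast concrete, the following is a minimal illustrative simulation, not the paper's estimation: it assumes a hypothetical spike-and-slab model of t-statistics with publication truncation at the existing hurdle, and all parameter values are made up. It shows that statistics defined on published findings (the FDR among published results, shrinkage targets by t-statistic bin) can be checked directly against abundant published observations, while anything requiring the full distribution of results must extrapolate the unobserved sub-hurdle mass.

```python
"""Illustrative sketch only: hypothetical parameters, not the paper's model or data."""
import numpy as np

rng = np.random.default_rng(0)
n, p_null, sigma_alt, hurdle = 200_000, 0.5, 3.0, 1.96

# Latent effects: nulls (theta = 0) vs. true predictors (theta ~ N(0, sigma_alt^2)).
is_null = rng.random(n) < p_null
theta = np.where(is_null, 0.0, rng.normal(0.0, sigma_alt, n))
t = theta + rng.normal(0.0, 1.0, n)          # observed t-statistic

published = t > hurdle                       # sub-hurdle results go unobserved
t_pub, theta_pub, null_pub = t[published], theta[published], is_null[published]

# Statistics that target only published findings: computable from the plentiful
# published sample (here, checkable against the simulated truth directly).
fdr_published = null_pub.mean()              # share of published findings that are false
print(f"FDR among published findings: {fdr_published:.2f}")

for lo, hi in [(1.96, 2.5), (2.5, 3.0), (3.0, 4.0)]:
    m = (t_pub >= lo) & (t_pub < hi)
    print(f"t in [{lo}, {hi}): mean published t = {t_pub[m].mean():.2f}, "
          f"mean true effect (shrinkage target) = {theta_pub[m].mean():.2f}")

# A revised hurdle instead depends on the distribution of *all* t-statistics,
# but the mass below the existing hurdle is never observed and must be
# extrapolated from the published tail.
print(f"share of all results that are published: {published.mean():.2%}")
```

In this toy setup the published t-statistics overstate the true effects (the shrinkage gap) and a known fraction of them are nulls (the published FDR); both quantities are pinned down within the published region, whereas the overall share of false discoveries depends on the unobserved sub-hurdle results.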