$p$-Hacking undermines the validity of empirical studies. A flourishing empirical literature investigates the prevalence of $p$-hacking based on the distribution of $p$-values across studies. Interpreting results in this literature requires a careful understanding of the power of methods for detecting $p$-hacking. We theoretically study the implications of likely forms of $p$-hacking for the distribution of $p$-values in order to understand the power of tests for detecting it. Power depends crucially on the $p$-hacking strategy and the distribution of true effects. Publication bias can enhance the power of tests of the joint null of no $p$-hacking and no publication bias.

Comment: Some parts of this paper are based on material in earlier versions of our arXiv working paper "Detecting p-hacking" (arXiv:1906.06711) that was not included in the final published version (Elliott et al., 2022, Econometrica).
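To make the abstract's central claim concrete, the following minimal Python sketch simulates how different $p$-hacking strategies shape the distribution of reported $p$-values, and applies a simple binomial "bunching" diagnostic at the 5% threshold. Everything here is an illustrative assumption rather than the paper's specification: the normal distribution of true effects, the two stylized strategies (a threshold-based rerun and a minimum-of-two selection), and the bunching diagnostic itself, which is not among the tests whose power the paper characterizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def draw_p_values(n_studies, tau=1.0, strategy=None, alpha=0.05):
    """One-sided p-values from z-tests of H0: h = 0, where the
    standardized true effect varies across studies, h ~ N(0, tau^2).

    strategy=None        : no p-hacking, one test per study
    strategy="threshold" : rerun once if insignificant and report the
                           rerun only when it is significant
    strategy="min"       : run two tests, always report the smaller p
    (Both strategies are stylized stand-ins for the 'likely forms of
    p-hacking' studied in the paper, not its exact specifications.)"""
    h = rng.normal(0.0, tau, size=n_studies)
    p1 = stats.norm.sf(h + rng.standard_normal(n_studies))
    if strategy is None:
        return p1
    p2 = stats.norm.sf(h + rng.standard_normal(n_studies))
    if strategy == "min":
        return np.minimum(p1, p2)
    if strategy == "threshold":
        return np.where((p1 >= alpha) & (p2 < alpha), p2, p1)
    raise ValueError(strategy)

def bunching_pvalue(p, alpha=0.05, width=0.005):
    """One-sided binomial test for excess mass in a narrow bin just
    below alpha relative to the bin just above it. This heuristic
    presumes the p-curve is roughly flat near alpha absent p-hacking."""
    below = int(((p >= alpha - width) & (p < alpha)).sum())
    above = int(((p >= alpha) & (p < alpha + width)).sum())
    return stats.binomtest(below, below + above, 0.5,
                           alternative="greater").pvalue

n = 50_000
for s in (None, "threshold", "min"):
    print(f"{str(s):>9}: diagnostic p-value = "
          f"{bunching_pvalue(draw_p_values(n, strategy=s)):.3f}")
```

In this simulation, the diagnostic typically flags the threshold strategy, which piles reported $p$-values just below 0.05, but not the minimum-of-two strategy, which shifts the whole distribution smoothly and leaves no discontinuity at the threshold. This echoes the abstract's point that the power of a detection method hinges on the particular $p$-hacking strategy and on the distribution of true effects.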