Here we present an expository, general analysis of valid post-selection or post-regularization inference about a low-dimensional target parameter, $\alpha$, in the presence of a very high-dimensional nuisance parameter, $\eta$, which is estimated using modern selection or regularization methods. Our analysis relies on high-level, easy-to-interpret conditions that allow one to clearly see the structures needed for achieving valid post-regularization inference. Simple, readily verifiable sufficient conditions are provided for a class of affine-quadratic models. We focus our discussion on estimation and inference procedures based on using the empirical analog of theoretical equations $$M(\alpha, \eta)=0$$ which identify $\alpha$. Within this structure, we show that setting up these equations so that the orthogonality/immunization condition $$\partial_\eta M(\alpha, \eta) = 0$$ holds at the true parameter values, coupled with plausible conditions on the smoothness of $M$ and the quality of the estimator $\hat \eta$, guarantees that inference on the main parameter $\alpha$ based on the testing or point estimation methods discussed below will be regular despite selection or regularization biases occurring in the estimation of $\eta$. In particular, the estimator of $\alpha$ will often be uniformly consistent at the root-$n$ rate and uniformly asymptotically normal even though the estimators $\hat \eta$ will generally not be asymptotically linear and regular. The uniformity holds over large classes of models that do not impose highly implausible "beta-min" conditions. We also show that inference can be carried out by inverting tests formed from Neyman's $C(\alpha)$ (orthogonal score) statistics.
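To make the orthogonality condition concrete, consider a standard example of this construction (our illustrative sketch; the specific model and notation below are assumptions, not drawn verbatim from the abstract): the partially linear model $y = d\alpha_0 + g(x) + \zeta$ with $\mathrm{E}[\zeta \mid x, d] = 0$, where $d = m(x) + v$ with $\mathrm{E}[v \mid x] = 0$, and the nuisance parameter is $\eta = (g, m)$. The moment function $$M(\alpha, \eta) = \mathrm{E}\big[(y - d\alpha - g(x))(d - m(x))\big]$$ identifies $\alpha_0$ and is immunized against small errors in $\hat\eta$: a perturbation of $g$ enters multiplied by the mean-zero residual $v = d - m(x)$, and a perturbation of $m$ enters multiplied by the mean-zero residual $\zeta$, so $\partial_\eta M(\alpha_0, \eta_0) = 0$. By contrast, the naive moment $\mathrm{E}[(y - d\alpha - g(x))\,d]$ fails this condition, so first-order bias in a regularized estimate $\hat g$ would be transmitted directly to the estimate of $\alpha$.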