We study the generalization of intervention effects across several simulated and real-world samples. We begin by formalizing the concept of the 'background' of a sample effect observation. We then formulate conditions for effect generalization based on a sample's set of (observed and unobserved) backgrounds. This reveals two limits for effect generalization: (1) when the effects of a variable are observed under all their enumerable backgrounds, or (2) when backgrounds have been sufficiently randomized. We use the resulting combinatorial framework to re-examine open issues in current causal-effect estimators: out-of-sample validity, concurrent estimation of multiple effects, bias-variance tradeoffs, statistical power, and connections to current predictive and explanation techniques. Methodologically, these definitions also allow us to replace the parametric estimation problems that follow from the 'counterfactual' definition of causal effects with combinatorial enumeration and randomization problems in non-experimental samples. We use the resulting non-parametric framework to demonstrate tradeoffs (between external validity, unconfoundedness, and precision) in the performance of popular supervised, explanation, and causal-effect estimators.