We study identifiability of the parameters in autoregressions defined on a network. Most identification conditions available for these models either rely on the network being observed repeatedly, are only sufficient, or require strong distributional assumptions. This paper derives conditions that apply even when the individuals composing the network are observed only once, that are necessary and sufficient for identification, and that require only weak distributional assumptions. We find that the model parameters are generically identified, in the measure-theoretic sense, even without repeated observations, and we analyze the combinations of the interaction matrix and the regressor matrix that cause identification failures. This is done both in the original model and after certain transformations in the sample space, the latter case being relevant, for example, in some fixed-effects specifications.
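For concreteness, a canonical specification consistent with this description (the notation here is illustrative and may differ from that used in the body of the paper) is the network autoregression
\[
y = \lambda W y + X\beta + \varepsilon,
\]
where $y$ collects the outcomes of the individuals observed once, $W$ is the interaction matrix encoding the network, $X$ is the regressor matrix, $\lambda$ and $\beta$ are the parameters whose identification is studied, and $\varepsilon$ is an error term subject only to weak distributional assumptions.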