The variance of a linearly combined forecast distribution (or linear pool) consists of two components: the average variance of the component distributions (`average uncertainty') and the average squared difference between the components' means and the pool's mean (`disagreement'). This paper shows that similar decompositions hold for a class of uncertainty measures that can be constructed as entropy functions of kernel scores. The latter are a rich family of scoring rules that covers point and distribution forecasts in univariate and multivariate, discrete and continuous settings. We further show that the disagreement term is useful for understanding the ex-post performance of the linear pool (as compared to the component distributions) and motivates using the linear pool instead of other forecast combination techniques. From a practical perspective, the results in this paper suggest principled measures of forecast disagreement in a wide range of applied settings.
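As a minimal sketch of the variance case described verbally above (the notation $w_i$, $F_i$, $\mu_i$, $\sigma_i^2$ is introduced here only for illustration and is not taken from the paper): for a linear pool $F = \sum_{i=1}^k w_i F_i$ with weights $w_i \ge 0$, $\sum_{i=1}^k w_i = 1$, component means $\mu_i$ and variances $\sigma_i^2$, the standard mixture identity gives
\[
\operatorname{Var}(F)
\;=\;
\underbrace{\sum_{i=1}^k w_i \,\sigma_i^2}_{\text{average uncertainty}}
\;+\;
\underbrace{\sum_{i=1}^k w_i \,(\mu_i - \mu)^2}_{\text{disagreement}},
\qquad
\mu = \sum_{i=1}^k w_i \,\mu_i .
\]
With equal weights $w_i = 1/k$, the two terms are exactly the average variance and the average squared deviation of the component means from the pool mean.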