Background subtraction in videos is a core challenge in computer vision, aiming to accurately identify moving objects. Robust principal component analysis (RPCA) has emerged as a promising unsupervised paradigm for this task, showing strong performance on various benchmark datasets. Building on RPCA, tensor RPCA (TRPCA) variants have further enhanced background subtraction performance. However, current TRPCA methods often treat moving-object pixels independently, lacking spatial-temporal structured-sparsity constraints. This limitation degrades performance in scenarios with dynamic backgrounds, camouflage, and camera jitter. In this work, we introduce a novel spatial-temporal regularized tensor sparse RPCA algorithm to address these issues. We enforce spatial-temporal regularization by incorporating normalized graph-Laplacian matrices into the sparse component, constructing two graphs, one across spatial locations and another across temporal slices, to guide the regularization. By maximizing our objective function, we ensure that the tensor sparse component aligns with the spatial-temporal eigenvectors of the graph-Laplacian matrices, preserving disconnected moving-object pixels. We formulate a new objective function and employ both batch and online optimization schemes to jointly optimize background-foreground separation and spatial-temporal regularization. Experimental evaluation on six publicly available datasets demonstrates the superior performance of our algorithm compared to existing methods.
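To make the graph construction concrete, the following is a minimal sketch (not the paper's actual pipeline) of how one might build the two normalized graph-Laplacian matrices the abstract refers to: a spatial graph over pixel locations and a temporal graph over frame slices. The helper names (`knn_adjacency`, `normalized_laplacian`), the k-NN construction, the Gaussian edge weights, and the choice of node features are all illustrative assumptions; only the normalized-Laplacian formula L = I - D^{-1/2} W D^{-1/2} is standard.

```python
import numpy as np

def knn_adjacency(X, k=5):
    """Symmetric k-NN adjacency from row vectors in X (n_nodes x n_features).
    Edge weights use a Gaussian kernel on squared distance (an assumption)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                # exclude self-loops
    n = X.shape[0]
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, :k]         # k nearest neighbours per node
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()])
    return np.maximum(W, W.T)                   # symmetrize the graph

def normalized_laplacian(W):
    """L = I - D^{-1/2} W D^{-1/2}; eigenvalues lie in [0, 2]."""
    d = W.sum(axis=1)
    safe_d = np.where(d > 0, d, 1.0)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(safe_d), 0.0)
    return np.eye(W.shape[0]) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]

# Toy video tensor: height x width x frames (random stand-in data).
rng = np.random.default_rng(0)
V = rng.standard_normal((8, 8, 20))

# Spatial graph: nodes are pixel locations, features are temporal profiles.
W_s = knn_adjacency(V.reshape(-1, V.shape[2]), k=5)
L_s = normalized_laplacian(W_s)

# Temporal graph: nodes are frame slices, features are vectorized frames.
W_t = knn_adjacency(V.reshape(-1, V.shape[2]).T, k=5)
L_t = normalized_laplacian(W_t)
```

The eigenvectors of `L_s` and `L_t` then provide the spatial and temporal bases against which a sparse foreground component could be regularized; the actual objective function and batch/online solvers of the paper are not reproduced here.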