Quantitative analysis of single-cell dynamics in live-cell imaging is pivotal for understanding cellular heterogeneity, disease mechanisms, and drug responses, but it demands stringent accuracy in cell segmentation and tracking. Despite recent advances that have improved segmentation precision, a single segmentation error can propagate through trajectory analyses and trigger error cascades. To tackle these challenges, we introduce LivecellX, a deep-learning-based, object-oriented framework designed for scalable analysis of live-cell dynamics. We define a new task, segmentation correction for both over-segmentation and under-segmentation errors, and develop evaluation metrics and machine learning techniques to address it. We annotate a novel imaging dataset acquired on two distinct microscope types and train a Corrective Segmentation Network (CS-Net) that leverages normalized distance transforms and synthetic augmentation to rectify segmentation inaccuracies. We further propose trajectory-level correction algorithms that combine temporal consistency with CS-Net to resolve errors across entire trajectories. After tracking, LivecellX supports biological process detection, diverse feature extraction, and lineage reconstruction across different datasets and imaging platforms. Its object-oriented architecture enables efficient data management and seamless integration across multiple datasets. Enhanced by Napari GUI support and parallelized computation, LivecellX offers a robust and extensible infrastructure for high-throughput single-cell imaging analysis, paving the way for future developments in live-cell foundation models.
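
The normalized distance transform mentioned above can be illustrated with a minimal sketch: each labeled object's Euclidean distance transform is rescaled by that object's maximum, so every cell peaks at 1 regardless of size. This is an illustrative assumption of one common formulation, not the exact input representation used by CS-Net; the function name and per-object normalization scheme are hypothetical.

```python
import numpy as np
from scipy import ndimage


def normalized_distance_transform(mask: np.ndarray) -> np.ndarray:
    """Per-object Euclidean distance transform scaled to [0, 1].

    `mask` is a labeled segmentation mask (0 = background). Distances
    within each cell are divided by that cell's maximum distance, making
    the representation invariant to cell size.
    """
    out = np.zeros(mask.shape, dtype=float)
    for label in np.unique(mask):
        if label == 0:
            continue  # skip background
        obj = mask == label
        dist = ndimage.distance_transform_edt(obj)
        out[obj] = dist[obj] / dist.max()
    return out


# Toy labeled mask with two square "cells".
mask = np.zeros((8, 8), dtype=int)
mask[1:4, 1:4] = 1
mask[4:8, 4:8] = 2
ndt = normalized_distance_transform(mask)
```

Because each object is normalized independently, both the small and the large cell reach a peak value of 1 at their interiors, which is one way such a transform can make a corrective network less sensitive to cell size.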