Lightweight and Fast Time-Series Anomaly Detection via Point-Level and Sequence-Level Reconstruction Discrepancy.


Authors: Lei Chen, Guangyang Deng, Xuxin Liu, Jiajun Tang, Xingquan Xie, Ying Zou

Language: eng

Classification: 571.455 Light

Publication info: United States : IEEE Transactions on Neural Networks and Learning Systems, 2025

Physical description:

Collection: NCBI

ID: 747580

Unsupervised time-series anomaly detection (TSAD) aims to identify anomalies in industrial sensing signals to ensure production safety. As Industry 4.0 emerges, TSAD deployment must migrate from resource-rich clouds to resource-limited edges for real-time and fine-grained control. This raises new requirements for TSAD: high accuracy, high timeliness, and low consumption. However, existing models focus solely on achieving high accuracy by building neural networks with deep structures and large parameter counts. Consequently, these models demand prohibitive training durations and computational overhead, which makes them unsuitable for edge deployment. To solve this issue, an unsupervised lightweight and fast TSAD model, namely, LFTSAD, is proposed in this article via point-level and sequence-level reconstruction discrepancy. First, to achieve high timeliness and low consumption, LFTSAD uses two two-layer multilayer perceptron networks (MLPs) to construct a lightweight contrastive architecture with few parameters. Second, leveraging this lightweight architecture, a dual-branch reconstruction network is designed to generate reconstruction discrepancies from point-level and sequence-level perspectives. Finally, a novel anomaly scoring scheme is designed to combine the point-level and sequence-level reconstruction discrepancies for more accurate anomaly detection. To the best of our knowledge, this is the first work to develop a lightweight All-MLP-based TSAD model for resource-limited edge devices. Extensive experiments demonstrate that LFTSAD is 3-10 times faster, consumes only half the resources, and achieves accuracy comparable or superior to several deep SOTA models. The source code of LFTSAD is available at https://github.com/infogroup502/LFTSAD.
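The abstract's dual-branch idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that): the hidden size, the ReLU activation, the mean-squared discrepancy, and the weighted combination `alpha` are all assumptions chosen for illustration. One two-layer MLP reconstructs each time point independently (point-level branch); a second reconstructs the flattened window as a whole (sequence-level branch); the anomaly score per timestep combines both discrepancies.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(d_in, d_hid, d_out):
    # Two-layer MLP parameters (weights only, for brevity).
    return (rng.standard_normal((d_in, d_hid)) * 0.1,
            rng.standard_normal((d_hid, d_out)) * 0.1)

def mlp_forward(params, x):
    # ReLU hidden layer followed by a linear output layer.
    W1, W2 = params
    return np.maximum(x @ W1, 0.0) @ W2

# Hypothetical setup: a window of T timesteps with C channels.
T, C, H = 16, 4, 32
x = rng.standard_normal((T, C))

# Point-level branch: reconstruct each timestep independently.
point_net = mlp_init(C, H, C)
x_point = mlp_forward(point_net, x)                 # shape (T, C)

# Sequence-level branch: reconstruct the flattened window as a whole.
seq_net = mlp_init(T * C, H, T * C)
x_seq = mlp_forward(seq_net, x.reshape(1, -1)).reshape(T, C)

# Per-timestep reconstruction discrepancies from both views.
err_point = np.mean((x - x_point) ** 2, axis=1)     # shape (T,)
err_seq = np.mean((x - x_seq) ** 2, axis=1)         # shape (T,)

# Hypothetical combined anomaly score: a weighted sum of the two
# discrepancies; timesteps with high scores are flagged as anomalous.
alpha = 0.5
score = alpha * err_point + (1 - alpha) * err_seq
```

In practice both MLPs would be trained to minimize reconstruction error on normal data, so that anomalies stand out as points or sequences the networks fail to reconstruct.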
