Beyond Information Distortion: Imaging Variable-Length Time Series Data for Classification.


Author: Hyeonsu Lee, Dongmin Shin

Language: English

Classification: 005.754 Network databases

Publication: Switzerland : Sensors (Basel, Switzerland), 2025

Physical description:

Collection: NCBI

ID: 78994

Time series data are prevalent in diverse fields such as manufacturing and sensor-based human activity recognition. In real-world applications, these data are often collected with variable sample lengths, which can pose challenges for classification models that typically require fixed-length inputs. Existing approaches either employ models designed to handle variable input sizes or standardize sample lengths before applying models; however, we contend that these approaches may compromise data integrity and ultimately reduce model performance. To address this issue, we propose Time series Into Pixels (TIP), an intuitive yet strong method that maps each time series data point to a pixel in a 2D representation, where the vertical axis represents time steps and the horizontal axis captures the value at each timestamp. To evaluate our representation without relying on a powerful vision model as a backbone, we employ a straightforward LeNet-like 2D CNN. Through extensive evaluations against 10 baseline models across 11 real-world benchmarks, TIP achieves 2-5% higher accuracy and 10-25% higher macro-average precision. We also demonstrate that TIP performs comparably on complex multivariate data, with ablation studies underscoring the potential hazard of length-normalization techniques in variable-length scenarios. We believe this method provides a significant advancement for handling variable-length time series data in real-world applications. The code is publicly available.
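To make the pixel-mapping idea in the abstract concrete, the following minimal Python sketch turns a univariate series into a 2D binary image with time on the vertical axis and quantized values on the horizontal axis. The function name series_to_pixels, the bin count n_bins, the min-max scaling, and the optional padding are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np


def series_to_pixels(series, n_bins=64, max_len=None):
    """Map a 1D time series onto a 2D binary image.

    Rows index time steps (vertical axis) and columns index quantized
    value bins (horizontal axis), so each observation lights exactly one
    pixel. Bin count, min-max scaling, and padding to ``max_len`` are
    illustrative assumptions, not the paper's recipe.
    """
    x = np.asarray(series, dtype=float)
    # Min-max scale values into [0, 1]; a constant series maps to the middle bin.
    span = x.max() - x.min()
    scaled = (x - x.min()) / span if span > 0 else np.full_like(x, 0.5)
    # Quantize each scaled value to a column index in [0, n_bins - 1].
    cols = np.minimum((scaled * n_bins).astype(int), n_bins - 1)
    height = len(x) if max_len is None else max_len
    img = np.zeros((height, n_bins), dtype=np.float32)
    rows = np.arange(min(len(x), height))
    img[rows, cols[:len(rows)]] = 1.0  # one lit pixel per time step
    return img


# Two samples of different lengths share the same image width; only the
# height (time axis) differs, so values are neither resampled nor truncated.
short = series_to_pixels(np.sin(np.linspace(0, 3, 50)), n_bins=32)
long_ = series_to_pixels(np.sin(np.linspace(0, 9, 180)), n_bins=32)
print(short.shape, long_.shape)  # (50, 32) (180, 32)
```

The point of such a mapping is that a variable-length sample changes only the height of the image, so no interpolation, truncation, or padding of the values themselves is required before a 2D CNN consumes it.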