An interpretable dimensional reduction technique with an explainable model for detecting attacks in Internet of Medical Things devices.


Authors: Ranjan Kumar Dash, Nikola Ivković, Swati Lipsa

Language: English

Classification:

Publication information: England: Scientific Reports, 2025

Physical description:

Collection: NCBI

ID: 711745

The security of Internet of Medical Things (IoMT) devices is crucial for ensuring the integrity and reliability of patients' medical data. These devices, operating over the TCP and ICMP protocols, are highly susceptible to cyberattacks. While machine learning models can detect these attacks with acceptable accuracy, their operational mechanisms remain unclear, leaving the models' decision-making process undefined. Moreover, the accuracy and training time of machine learning models become more questionable when the dataset has a large number of sparse features and class imbalances. This study introduces an interpretable feature selection technique designed to enhance intrusion detection in IoMT by reducing redundant features and improving model efficiency. A Random Forest-based explainable AI model provides transparency in attack classification and supports better decision-making. Simulation results on the CICIoMT2024 dataset demonstrate that the proposed method significantly improves detection performance, with the Random Forest model achieving 99% accuracy, outperforming XGBoost (98%), Decision Tree (97%), and Support Vector Machine (98%), while ensuring explainability through SHAP-based feature analysis. The simulation outcomes thus reveal the key contributing factors behind various cyberattacks on IoMT devices, facilitating enhanced security measures and real-time monitoring. The proposed approach boosts both detection accuracy and interpretability, making it highly suitable for real-world IoMT security applications.
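The abstract pairs a Random Forest classifier with SHAP-based feature analysis for interpretability. The following is a minimal Python sketch of that general pipeline, not the authors' implementation: the CSV path, the "label" column name, and the hyperparameters are placeholder assumptions, and the paper's own feature selection step is not reproduced here.

```python
# Minimal sketch: Random Forest attack classification with SHAP explanations.
# Assumes a hypothetical CSV export of the CICIoMT2024 dataset with a "label"
# column; adjust the path and column names to the actual data layout.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ciciomt2024.csv")          # placeholder file name
X = df.drop(columns=["label"])
y = df["label"]

# Stratified split to respect the class imbalance noted in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Random Forest model for attack classification.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))

# SHAP values from a tree explainer indicate which features drive each
# predicted attack class, providing the interpretability layer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```

The SHAP summary plot ranks features by their average contribution to the model's predictions, which is the kind of per-feature attribution the abstract refers to for identifying key contributing factors of each attack type.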
