The security of Internet of Medical Things (IoMT) devices is crucial for ensuring the integrity and reliability of patients' medical data. These devices, which operate over TCP and ICMP protocols, are highly susceptible to cyberattacks. While machine learning models can detect these attacks with acceptable accuracy, their operational mechanisms remain unclear, leaving their decision-making process opaque. Moreover, the accuracy and training time of machine learning models become more questionable when datasets contain a large number of sparse features and exhibit class imbalance. This study introduces an interpretable feature selection technique designed to enhance intrusion detection in IoMT by reducing redundant features and improving model efficiency. A Random Forest-based explainable AI model provides transparency in attack classification and supports better decision-making. Simulation results on the CICIoMT2024 dataset demonstrate that the proposed method significantly improves detection performance, with the Random Forest model achieving 99% accuracy, outperforming XGBoost (98%), Decision Tree (97%), and Support Vector Machine (98%), while ensuring explainability through SHAP-based feature analysis. The simulation outcomes also reveal the key contributing factors behind various cyberattacks on IoMT, facilitating enhanced security measures and real-time monitoring. The proposed approach boosts both detection accuracy and interpretability, making it well suited for real-world IoMT security applications.
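To make the SHAP-based workflow referenced above concrete, the snippet below is a minimal sketch, assuming the scikit-learn and shap libraries, a hypothetical flattened CSV export of the CICIoMT2024 traffic features, and a placeholder "label" column; it is illustrative only and not the authors' actual preprocessing or evaluation pipeline.

```python
# Minimal sketch (assumptions): scikit-learn + shap are installed, and
# "ciciomt2024_flows.csv" with a "label" column is a hypothetical export,
# not the paper's actual CICIoMT2024 pipeline.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load flow features and attack labels (placeholder file and column names).
df = pd.read_csv("ciciomt2024_flows.csv")
X, y = df.drop(columns=["label"]), df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Train the Random Forest classifier used for attack classification.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")

# Tree-based SHAP explainer: per-feature contributions to each prediction.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)

# Global importance summary; features with near-zero SHAP impact are
# candidates for removal in the interpretable feature-selection step.
shap.summary_plot(shap_values, X_test)
```

In this sketch, the SHAP summary both explains individual attack classifications and ranks features globally, which is the kind of signal an interpretable feature-selection step could use to drop redundant or uninformative features before retraining.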