The rise in intrusions on network and IoT systems has driven the adoption of artificial intelligence (AI) methodologies in intrusion detection systems (IDSs). However, traditional AI or machine learning (ML) methods can suffer reduced accuracy due to the vast, diverse, and dynamic nature of the data generated. Moreover, many of these methods lack transparency, making it difficult for security professionals to interpret and trust their predictions. To address these challenges, this paper presents a novel IDS architecture that combines deep learning (DL)-based methodology with eXplainable AI (XAI) techniques to build explainable models for network intrusion detection, enabling security analysts to use these models effectively. DL models are well suited to training on enormous amounts of data and produce promising results. Three DL models are proposed: a customized 1-D convolutional neural network (1-D CNN), a deep neural network (DNN), and the pre-trained TabNet model. Experiments are performed on seven datasets from TON_IOT. The CNN model achieves an accuracy of 99.24% on the network dataset, while on most of the six IoT datasets, the CNN and DNN achieve 100% accuracy, further validating the effectiveness of the proposed models. Across all datasets, TabNet is the weakest performer. Deploying the proposed method in real time requires explanations of the predictions it generates; thus, XAI methods are applied to identify the features most responsible for predicting each class.
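To illustrate the core operation behind a 1-D CNN applied to tabular traffic records, the sketch below implements a single valid-mode 1-D convolution layer with ReLU and global max pooling in plain NumPy. The feature values, kernel weights, and biases are hypothetical (a trained model would learn the weights); this is a minimal illustration of the mechanism, not the paper's architecture.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid-mode 1-D convolution (cross-correlation) of a feature
    vector x (length n) with k kernels of width w -> shape (k, n-w+1)."""
    k, w = kernels.shape
    n = x.shape[0]
    out = np.empty((k, n - w + 1))
    for i in range(n - w + 1):
        # Each output column is the response of all k kernels to one window.
        out[:, i] = kernels @ x[i:i + w] + bias
    return out

def relu(z):
    return np.maximum(z, 0.0)

# Toy "flow record": 8 scaled numeric features (e.g., packet counts, durations).
x = np.array([0.2, 0.9, 0.1, 0.4, 0.8, 0.3, 0.7, 0.5])

# Two hand-set kernels of width 3 (illustrative, not learned weights).
kernels = np.array([[1.0, -1.0, 0.0],   # responds to a drop between features
                    [0.5, 0.5, 0.5]])   # responds to a high local average
bias = np.array([0.0, -0.1])

feature_maps = relu(conv1d(x, kernels, bias))   # shape (2, 6)
pooled = feature_maps.max(axis=1)               # global max pooling per kernel
print(feature_maps.shape)
print(pooled)
```

In a full model, the pooled activations would feed a dense softmax layer that assigns the record to an attack class or to normal traffic.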