OBJECTIVES: This study proposes a mobile-based explainable artificial intelligence (XAI) platform designed for diagnosing febrile illnesses.

METHODS: We integrated the interpretability offered by local interpretable model-agnostic explanations (LIME) with the explainability provided by generative pre-trained transformers (GPT) to bridge the gap in understanding and trust that opaque machine learning models often create in critical healthcare decision-making. The developed system employed a random forest classifier for disease diagnosis, LIME for interpreting its predictions, and GPT-3.5 for generating explanations in easy-to-understand language.

RESULTS: Our model demonstrated robust performance in detecting malaria, achieving precision, recall, and F1-scores of 85%, 91%, and 88%, respectively. It performed moderately well in detecting urinary tract infections (precision 80%, recall 65%, F1-score 72%) and respiratory tract infections (precision 77%, recall 68%, F1-score 72%), maintaining a reasonable balance between precision and recall. However, the model exhibited limitations in detecting typhoid fever (precision 69%, recall 53%, F1-score 60%) and human immunodeficiency virus/acquired immune deficiency syndrome (precision 75%, recall 39%, F1-score 51%); the low recall indicates missed true-positive cases and calls for further fine-tuning. LIME and GPT-3.5 were integrated to enhance transparency and provide natural language explanations, thereby aiding decision-making and improving user comprehension of the diagnoses.

CONCLUSIONS: The LIME plots revealed the key symptoms influencing each diagnosis, with bitter taste in the mouth and fever exerting the strongest negative influence on predictions. GPT-3.5 provided natural language explanations that increased the reliability and trustworthiness of the system, promoting improved patient outcomes and reducing the healthcare burden.
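For reference, the F1-scores above are the harmonic mean of precision (P) and recall (R), F1 = 2PR / (P + R); for malaria, 2(0.85)(0.91) / (0.85 + 0.91) ≈ 0.88, consistent with the reported 88%.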
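The diagnostic core described in METHODS can be sketched as follows. This is a minimal, illustrative sketch assuming binary symptom indicators as features; the symptom names, synthetic data, and hyperparameters are assumptions, not the study's dataset or configuration.

```python
# Minimal sketch of the random forest diagnostic core. Feature names and
# data are illustrative placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

symptoms = ["fever", "bitter_taste", "headache", "chills",
            "painful_urination", "cough"]
diseases = ["malaria", "typhoid", "UTI", "RTI", "HIV/AIDS"]

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, len(symptoms)))  # 0/1 symptom flags
y = rng.choice(diseases, size=1000)                 # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Per-class precision, recall, and F1, the metrics reported in RESULTS.
print(classification_report(y_test, clf.predict(X_test)))
```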
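The LIME interpretation step could look like the sketch below, built on the classifier above and assuming the `lime` package's tabular explainer; the instance chosen and `num_features` value are arbitrary.

```python
# Sketch of the LIME step (requires the `lime` package). The signed weights
# show how each symptom pushed the diagnosis up (positive) or down
# (negative), as visualized in the LIME plots.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=symptoms,
    class_names=clf.classes_.tolist(),
    mode="classification",
)

# Explain one prediction for its top predicted class.
exp = explainer.explain_instance(
    X_test[0], clf.predict_proba, num_features=5, top_labels=1)
label = exp.available_labels()[0]
for feature, weight in exp.as_list(label=label):
    print(f"{feature}: {weight:+.3f}")
```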
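Finally, a sketch of the GPT-3.5 explanation layer, assuming the `openai` Python client (version >= 1.0) with an API key set in the environment; the prompt wording and the `explain_diagnosis` helper are illustrative, not the study's exact implementation.

```python
# Sketch of the GPT-3.5 natural-language explanation layer. Assumes the
# `openai` package (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def explain_diagnosis(diagnosis: str,
                      lime_weights: list[tuple[str, float]]) -> str:
    """Turn a diagnosis and its LIME feature weights into plain language."""
    findings = ", ".join(f"{feat} ({w:+.2f})" for feat, w in lime_weights)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You explain medical ML "
             "predictions to patients in simple, non-technical language."},
            {"role": "user", "content": f"The model predicts {diagnosis}. "
             f"LIME symptom weights: {findings}. Explain this briefly."},
        ],
    )
    return response.choices[0].message.content

print(explain_diagnosis("malaria", [("fever", 0.31), ("bitter_taste", 0.24)]))
```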