As VR technology advances, the demand for multitasking within virtual environments grows. Juggling several tasks inside an immersive virtual setting is cognitively demanding, and users often struggle to execute concurrent tasks effectively. This difficulty underscores the importance of cognitive functions such as attention and working memory, which are vital for navigating complex virtual environments. Beyond attention and working memory, it is also essential to assess the physical and mental strain induced by the virtual environment and the tasks the participant performs within it. While previous research has investigated factors influencing attention and working memory in virtual reality, comprehensive approaches that predict physical and mental strain alongside these cognitive aspects remain scarce. This gap motivated our investigation, in which we used an open dataset, VRWalking, comprising eye- and head-tracking data and physiological measures such as heart rate (HR) and galvanic skin response (GSR). The VRWalking dataset provides timestamped labels for physical load, mental load, working memory, and attention. We employed straightforward deep learning models to predict these labels, achieving 91%, 96%, 93%, and 91% accuracy for physical load, mental load, working memory, and attention, respectively. Additionally, we conducted SHAP (SHapley Additive exPlanations) analysis to identify the features most critical to these predictions. Our findings contribute to understanding a participant's overall cognitive state, inform effective data collection practices for future researchers, and provide insights for virtual reality developers.
Developers can use such predictive approaches to adaptively optimize the user experience in real time and minimize cognitive strain, ultimately enhancing the effectiveness and usability of virtual reality applications.
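To make the described pipeline concrete, the sketch below is an illustration only, not the paper's actual model or features: it trains a toy logistic classifier on synthetic HR/GSR values with a hypothetical "high load" label, then uses permutation importance as a lightweight stand-in for SHAP-style feature attribution. All variable names, thresholds, and data distributions are assumptions for demonstration.

```python
import math
import random

random.seed(0)

# Hypothetical synthetic data: the "high load" label is driven by HR,
# while GSR is pure noise, so attribution should single out HR.
def make_sample():
    hr = random.gauss(80, 10)
    gsr = random.gauss(5, 1)
    label = 1 if hr > 80 else 0
    return (hr, gsr), label

data = [make_sample() for _ in range(400)]
X = [x for x, _ in data]
y = [t for _, t in data]

def standardize(hr, gsr):
    # Center/scale with the known generating parameters (a simplification).
    return [(hr - 80) / 10, (gsr - 5) / 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a logistic-regression classifier by batch gradient descent.
w, b, step = [0.0, 0.0], 0.0, 0.01
for _ in range(300):
    gw, gb = [0.0, 0.0], 0.0
    for (hr, gsr), t in zip(X, y):
        f = standardize(hr, gsr)
        err = sigmoid(w[0] * f[0] + w[1] * f[1] + b) - t
        gw[0] += err * f[0]
        gw[1] += err * f[1]
        gb += err
    n = len(X)
    w = [w[i] - step * gw[i] / n for i in range(2)]
    b -= step * gb / n

def accuracy(X, y, shuffle_col=None):
    # Optionally shuffle one feature column to measure its importance.
    cols = [list(c) for c in zip(*X)]
    if shuffle_col is not None:
        random.shuffle(cols[shuffle_col])
    correct = 0
    for (hr, gsr), t in zip(zip(*cols), y):
        f = standardize(hr, gsr)
        p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
        correct += int((p > 0.5) == bool(t))
    return correct / len(y)

base = accuracy(X, y)
# Permutation importance: accuracy drop when a feature is scrambled.
importance = {name: base - accuracy(X, y, i)
              for i, name in enumerate(["HR", "GSR"])}
print(f"baseline accuracy: {base:.2f}")
print("permutation importance:", importance)
```

In this toy setup, scrambling HR should collapse accuracy toward chance while scrambling GSR barely matters, mirroring how SHAP analysis in the study ranks which physiological and tracking features drive each prediction.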