Objective: This study investigates the effects of workload and task priority on multitasking performance and reliance on Level 1 Explainable Artificial Intelligence (XAI) systems in high-stakes decision environments.

Background: Operators in critical settings manage multiple tasks under varying levels of workload and priority, potentially leading to performance degradation. XAI offers opportunities to support decision making by providing insights into the AI's reasoning, yet its adoption and effectiveness in multitasking scenarios remain underexplored.

Method: Thirty participants engaged in a simulated multitasking environment involving UAV command and control tasks, with the assistance of a Level 1 (i.e., basic perceptual information) XAI system on one of the tasks. The study used a within-subjects experimental design, manipulating workload (low, medium, and high) and AI-supported-task priority (low and high) across six conditions. Participants' accuracy, use of automatic rerouting, AI miss detection, false alert identification, and use of AI explanations were measured and analyzed across the experimental conditions.

Results: Workload significantly hindered performance on the AI-assisted task and increased reliance on the AI system, especially when the AI-assisted task was given low priority. The use of AI explanations was significantly affected by task priority only.

Conclusion: Increased workload led to appropriate offloading, with participants relying on the AI's alerts, but it also lowered the rate of alert verification despite the alert feature's high false alert rate.

Application: These findings can inform AI system designers on how to design systems for high-stakes environments such that reliance on AI is properly calibrated.