BACKGROUND: The use of ChatGPT in medical applications is of increasing interest, but its efficacy in critical care medicine remains uncertain. This study assesses ChatGPT-4's performance on a critical care examination, providing insight into its potential as a tool for clinical decision-making. METHODS: A dataset of 600 questions from the Chinese Health Professional Technical Qualification Examination for Critical Care Medicine, covering four components (fundamental knowledge, specialized knowledge, professional practical skills, and related medical knowledge), was used. ChatGPT-4's answers were evaluated by critical care experts using a standardized rubric. RESULTS: ChatGPT-4 achieved an overall accuracy of 73.5 %, surpassing the 60 % passing threshold in all four components, with the highest accuracy in fundamental knowledge (81.94 %). It performed significantly better on single-choice than on multiple-choice questions (76.72 % vs. 51.32 %, p < 0.002), while no significant difference was observed between case-based and non-case-based questions. CONCLUSION: ChatGPT-4 demonstrated notable strengths on the critical care examination, highlighting its potential to support clinical decision-making, information retrieval, and medical education. However, caution is required given its potential to generate inaccurate responses; its application in critical care must therefore be carefully supervised by medical professionals to ensure both the accuracy of the information and patient safety.