AIM: This study aimed to build a Custom GPT specifically designed to answer questions from the Chinese Nursing Licensing Exam and to examine its accuracy and response quality. BACKGROUND: A Custom GPT could be an efficient tool in nursing education, but it has not yet been implemented in this field. METHODS: A quantitative, descriptive, cross-sectional approach was used to evaluate the performance of a Custom GPT. We developed the Custom GPT by integrating customized knowledge and using prompt engineering, retrieval-augmented generation, and semantic search technology. Its performance was compared with that of standard ChatGPT-4 on 720 questions from three mock exams for the 2024 Chinese Nursing Licensing Exam. RESULTS: The Custom GPT provided superior results, with its accuracy consistently exceeding 90 % across all six parts of the exams, whereas the accuracy of ChatGPT-4 ranged from 73 % to 89 %. Furthermore, the performance of the Custom GPT across different question types (accuracy >85 %) was superior to that of ChatGPT-4 (accuracy 66-83 %). The odds ratios consistently favored the Custom GPT, indicating a significantly higher likelihood of correct responses (P < 0.05 for most comparisons). In generating explanations, the Custom GPT tended to provide more concise and confident responses, whereas ChatGPT-4 provided longer, more speculative responses with higher chances of inaccuracies and hallucinations. CONCLUSIONS: This study demonstrated significant advantages of the Custom GPT over ChatGPT-4 on the Chinese Nursing Licensing Exam, indicating its considerable potential in specific application scenarios and its potential for expansion to other areas of nursing.
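To illustrate the retrieval-augmented generation and semantic search approach named in the Methods, the sketch below shows a minimal retrieve-then-prompt loop. It is not the authors' implementation: the knowledge-base passages, function names (vectorize, retrieve, build_prompt), and the toy bag-of-words similarity standing in for embedding-based semantic search are all assumptions made for illustration only.

```python
# Minimal, illustrative retrieval-augmented prompting sketch (assumed, not the study's code).
# A toy bag-of-words "semantic search" over an in-memory knowledge base stands in for the
# embedding-based retrieval a Custom GPT would perform over uploaded domain documents.
from collections import Counter
import math

# Hypothetical curated nursing knowledge; the study's actual knowledge files are not shown.
knowledge_base = [
    "Postoperative patients should be monitored for signs of hypovolemic shock, "
    "including tachycardia, hypotension, and cold, clammy skin.",
    "For adults, the normal axillary temperature range is approximately 36.0-37.0 C.",
    "Intramuscular injections in adults are commonly given at the ventrogluteal site.",
]

def vectorize(text: str) -> Counter:
    """Very rough term-frequency vector; a real system would use dense embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base passages most similar to the question."""
    q = vectorize(question)
    ranked = sorted(knowledge_base, key=lambda doc: cosine(q, vectorize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved passages so the model answers from curated knowledge (prompt engineering step)."""
    context = "\n".join(retrieve(question))
    return (
        "Answer the nursing exam question using only the reference material.\n"
        f"Reference material:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer with the option letter and a brief rationale."
    )

if __name__ == "__main__":
    print(build_prompt("Which site is commonly used for intramuscular injection in adults?"))
```

In a deployed Custom GPT, the retrieval and prompt-assembly steps sketched here are handled by the platform's file search over the uploaded knowledge and by the configured instructions, rather than by user-written code.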