INTRODUCTION: ChatGPT is an artificial intelligence (AI) chatbot, developed by OpenAI, that uses deep learning (DL) for information processing. The chatbot uses natural language processing (NLP) and machine learning (ML) algorithms to respond to users' questions. The purpose of this study was to review ChatGPT's responses to determine whether they were a reliable source of scientific information regarding local anesthesia for endodontics. MATERIALS AND METHODS: Sixteen representative questions pertaining to local anesthesia for endodontics were selected. ChatGPT was asked to answer the 16 questions and to provide supporting references. Each reference provided by ChatGPT was evaluated for accuracy using PubMed (nlm.nih.gov), Google Scholar, journal citations, and author citations. Peer-reviewed, evidence-based literature citations related to the initial questions were collected by the authors. The two authors independently compared ChatGPT's answers to the peer-reviewed, evidence-based literature using a 5-point Likert-type scale. RESULTS: ChatGPT was reliable 50% of the time when compared to the peer-reviewed, evidence-based literature; that is, ChatGPT gave the same literature-based response as the peer-reviewed, evidence-based literature in 16 of the 32 evaluations (two evaluators × 16 questions). Of the 51 total references provided by ChatGPT, 59% (30 of 51) were incorrect, 12% (6 of 51) could not be retrieved, and 18% (9 of 51) were hallucinations (fabricated references). CONCLUSIONS: AI needs further training in our field before it can be trusted to provide accurate information on endodontic anesthesia. ChatGPT should continue to improve so that it can provide reliable information for providers and patients alike.