The growing interest in leveraging artificial intelligence (AI) tools for healthcare decision-making extends to improving antibiotic prescribing. Large language models (LLMs), a type of AI trained on extensive datasets from diverse sources, can process and generate contextually relevant text. While their potential to enhance patient outcomes is significant, implementing LLM-based support for antibiotic prescribing is complex. Here, we expand the discussion of this crucial topic by introducing three interconnected perspectives: (1) the notable commonalities, but also the crucial conceptual differences, between using LLMs as assistants in scientific writing and using them to support antibiotic prescribing in real-world practice; (2) the possibility and nuances of the expertise paradox; and (3) the particular risks of error when LLMs are considered for supporting complex tasks such as antibiotic prescribing.