BACKGROUND: Large language models (LLMs) such as ChatGPT have rapidly revolutionized the way computers analyze human language and the way we interact with them.

OBJECTIVE: To provide an overview of the emergence and basic principles of computational language models.

METHODS: Narrative, literature-based review of the emergence of language models, the technical foundations of LLMs, their training process, and their limitations.

RESULTS: Modern LLMs are predominantly based on transformer models, which capture context through their attention mechanism. Through a multistage training process of comprehensive pretraining, supervised fine-tuning, and alignment with human preferences, LLMs have developed a general understanding of language. This enables them to analyze texts flexibly and to produce output of high linguistic quality.

CONCLUSION: Their technical foundations and training process make large language models versatile general-purpose tools for text processing, with numerous applications in radiology. Their main limitation is a tendency to present incorrect but plausible-sounding information with high confidence.
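To make the attention mechanism mentioned in the RESULTS concrete, the following is a minimal sketch of scaled dot-product attention, the core operation of transformer models. It is not taken from the reviewed article; the function, variable names, and toy dimensions are illustrative assumptions.

```python
# Minimal sketch of scaled dot-product attention (illustrative, not from the article).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (sequence_length, d_k).

    Each output row is a context-dependent mixture of the value vectors V,
    weighted by how strongly the corresponding query matches every key --
    this is how attention lets the model "capture context".
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled to stabilize the softmax
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of values: each token's new representation blends
    # information from all other tokens in the sequence
    return weights @ V

# Toy self-attention: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8): one context-enriched vector per token
```

In self-attention, as in this toy usage, queries, keys, and values are all derived from the same token sequence, so each token's representation is updated in light of every other token in the input.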