[Technical foundations of large language models].


Author: Christian Blüthgen

Language: English

Classification: 622.66 Mechanical haulage

Publication information: Germany: Radiologie (Heidelberg, Germany), 2025

Physical description:

Collection: NCBI

ID: 742174

BACKGROUND: Large language models (LLMs) such as ChatGPT have rapidly revolutionized the way computers analyze human language and the way we interact with computers.

OBJECTIVE: To give an overview of the emergence and basic principles of computational language models.

METHODS: Narrative literature-based analysis of the history of the emergence of language models, their technical foundations, their training process and the limitations of LLMs.

RESULTS: LLMs today are mostly based on transformer models, which capture context through their attention mechanism. Through a multistage training process comprising comprehensive pretraining, supervised fine-tuning and alignment with human preferences, LLMs have developed a general understanding of language. This enables them to flexibly analyze texts and produce outputs of high linguistic quality.

CONCLUSION: Their technical foundations and training process make large language models versatile general-purpose tools for text processing, with numerous applications in radiology. The main limitation is their tendency to state incorrect but plausible-sounding information with high confidence.
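The attention mechanism mentioned in the abstract can be illustrated with a minimal sketch of scaled dot-product attention, the core operation of transformer models. This is an illustrative example, not code from the article; vectors are plain Python lists and all names are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention.

    For each query, score it against every key, normalize the scores
    with softmax, and return the weighted average of the values.
    This is how a transformer lets each token 'attend' to context.
    """
    d = len(keys[0])  # key dimensionality, used for scaling
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # attention weights sum to 1
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

A query aligned with the first key draws most of its output from the first value vector, which is the sense in which attention "captures context": each position mixes information from the positions most relevant to it.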
