Maximum Hallucination Standards for Domain-Specific Large Language Models

Author: Tingmingke Lu

Language: English

Classification: 006.4 Computer pattern recognition

Publication information: 2025

Physical description:

Collection: Metadata

ID: 224236

Abstract: Large language models (LLMs) often generate inaccurate yet credible-sounding content, known as hallucinations. This inherent feature of LLMs poses significant risks, especially in critical domains. I analyze LLMs as a new class of engineering products, treating hallucinations as a product attribute. I demonstrate that, in the presence of imperfect awareness of LLM hallucinations and misinformation externalities, net welfare improves when the maximum acceptable level of LLM hallucinations is designed to vary with two domain-specific factors: the willingness to pay for reduced LLM hallucinations and the marginal damage associated with misinformation.
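
The abstract's central claim is that the welfare-maximizing cap on hallucinations should differ by domain, depending on the willingness to pay for fewer hallucinations and the marginal damage from misinformation. The short Python sketch below is a toy illustration of that logic only; the linear benefit and damage terms, the 1/h compliance-cost term, the menu of candidate caps, and all numbers are assumptions made here for illustration and do not come from the paper.

# A minimal toy sketch, not from the paper: all functional forms and numbers
# are illustrative assumptions used only to show why the welfare-maximizing
# hallucination cap can differ across domains.

def net_welfare(h, wtp, damage, cost_scale=0.02):
    """Stylized net welfare at hallucination rate h (0 < h <= 1).

    wtp        -- willingness to pay for reduced hallucinations (benefit slope)
    damage     -- marginal damage from misinformation (externality slope)
    cost_scale -- assumed cost of pushing the hallucination rate down
    """
    benefit = wtp * (1.0 - h)      # users value more reliable output
    harm = damage * h              # misinformation externality grows with h
    cost = cost_scale / h          # assumption: lower h is increasingly costly
    return benefit - harm - cost

def best_cap(wtp, damage, caps=(0.01, 0.05, 0.10, 0.20)):
    """Pick the cap from a menu that maximizes stylized net welfare,
    assuming the model is supplied exactly at the cap."""
    return max(caps, key=lambda h: net_welfare(h, wtp, damage))

# Hypothetical domains: high-stakes advice vs. casual assistance.
print(best_cap(wtp=5.0, damage=20.0))  # -> 0.05 (tighter cap is optimal)
print(best_cap(wtp=1.0, damage=0.5))   # -> 0.1  (looser cap is optimal)

Under these assumed parameters, the high-damage domain warrants a stricter cap than the low-damage one, which is the qualitative point the abstract makes about domain-specific standards.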