Optimizing generative AI by backpropagating language model feedback.

Authors: Federico Bianchi, Joseph Boen, Carlos Guestrin, Zhi Huang, Sheng Liu, Pan Lu, Mert Yuksekgonul, James Zou

Language: English

Classification: 629.836 Nonlinear systems

Published: England: Nature, 2025

Physical description:

Collection: NCBI

ID: 724113

Recent breakthroughs in artificial intelligence (AI) are increasingly driven by systems orchestrating multiple large language models (LLMs) and other specialized tools, such as search engines and simulators. So far, these systems are primarily handcrafted by domain experts and tweaked through heuristics rather than being automatically optimized, presenting a substantial challenge to accelerating progress. The development of artificial neural networks faced a similar challenge until backpropagation and automatic differentiation transformed the field by making optimization turnkey. Analogously, here we introduce TextGrad, a versatile framework that performs optimization by backpropagating LLM-generated feedback to improve AI systems. By leveraging natural language feedback to critique and suggest improvements to any part of a system, from prompts to outputs such as molecules or treatment plans, TextGrad enables the automatic optimization of generative AI systems across diverse tasks. We demonstrate TextGrad's generality and effectiveness through studies in solving PhD-level science problems, optimizing plans for radiotherapy treatments, designing molecules with specific properties, coding, and optimizing agentic systems. TextGrad empowers scientists and engineers to easily develop impactful generative AI systems.
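The abstract draws an analogy between backpropagation in neural networks and backpropagating natural-language feedback through an AI system. The sketch below illustrates that loop conceptually with deterministic stub functions; `llm_feedback` and `apply_feedback` are hypothetical stand-ins for the LLM calls the paper describes, and this is not the real `textgrad` library API.

```python
# Conceptual sketch of TextGrad-style optimization (hypothetical stubs,
# not the actual textgrad package API). The "backward pass" asks a critic
# for natural-language feedback; the "step" rewrites the text variable.

def llm_feedback(text: str) -> str:
    # Stub critic: in TextGrad an LLM critiques the current text.
    # Here we return a fixed suggestion until the text satisfies it.
    if "concise" not in text.lower():
        return "Make the prompt ask for a concise answer."
    return ""  # empty feedback means no further improvement suggested

def apply_feedback(text: str, feedback: str) -> str:
    # Stub optimizer step: in TextGrad an LLM rewrites the variable
    # guided by the feedback; here we append a fixed instruction.
    return text + " Answer concisely." if feedback else text

def optimize(text: str, steps: int = 3) -> str:
    # Iterate backward (feedback) and step (rewrite) until the critic
    # has nothing left to say or the step budget runs out.
    for _ in range(steps):
        feedback = llm_feedback(text)   # analogous to loss.backward()
        if not feedback:
            break
        text = apply_feedback(text, feedback)  # analogous to optimizer.step()
    return text

prompt = optimize("Solve the physics problem.")
```

In the real framework, the variable being optimized could equally be a prompt, a molecule description, or a treatment plan; the same critic-and-rewrite loop applies.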
