AI-driven evidence synthesis: data extraction of randomized controlled trials with large language models.


Authors: Yaolong Chen, Long Ge, Liangying Hou, Jiajie Huang, Honghao Lai, Hui Liu, Jiayi Liu, Xufei Luo, Bei Pan, Bingyi Wang, Danni Xia, Weilong Zhao

Language: English

Classification: 739.2 Work in precious metals

Publication: United States : International Journal of Surgery (London, England), 2025


Collection: NCBI

ID: 701692

The advancement of large language models (LLMs) presents promising opportunities to enhance evidence synthesis efficiency, particularly in data extraction processes, yet existing prompts for data extraction remain limited, focusing primarily on commonly used items without accommodating diverse extraction needs. This research letter developed structured prompts for LLMs and evaluated their feasibility in extracting data from randomized controlled trials (RCTs). Using Claude (Claude-2) as the platform, we designed comprehensive structured prompts comprising 58 items across six Cochrane Handbook domains and tested them on 10 randomly selected RCTs from published Cochrane reviews. The results demonstrated high accuracy with an overall correct rate of 94.77% (95% CI: 93.66% to 95.73%), with domain-specific performance ranging from 77.97% to 100%. The extraction process proved efficient, requiring only 88 seconds per RCT. These findings substantiate the feasibility and potential value of LLMs in evidence synthesis when guided by structured prompts, marking a significant advancement in systematic review methodology.
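The extraction workflow described above can be sketched as a small loop: assemble a structured prompt covering domain-grouped items, request JSON output from the model, and score the reply against a reference standard. This is a minimal illustration only; the `DOMAINS` items are invented placeholders (the record does not reproduce the 58-item prompt), and a mock response stands in for the Claude API call:

```python
import json

# Illustrative subset of the six Cochrane Handbook domains; the actual
# 58-item structured prompt is not reproduced in this record.
DOMAINS = {
    "study_design": ["randomization_method", "allocation_concealment"],
    "participants": ["sample_size", "inclusion_criteria"],
    "outcomes": ["primary_outcome", "follow_up_duration"],
}

def build_prompt(trial_text: str) -> str:
    """Assemble a structured prompt asking for JSON output, one key per item."""
    items = [f"{d}.{i}" for d, fields in DOMAINS.items() for i in fields]
    return (
        "Extract the following items from the RCT report below. "
        "Answer in JSON with exactly these keys: "
        + ", ".join(items)
        + "\n\n" + trial_text
    )

def score_extraction(extracted: dict, reference: dict) -> float:
    """Overall correct rate: fraction of reference items reproduced exactly."""
    correct = sum(1 for k, v in reference.items() if extracted.get(k) == v)
    return correct / len(reference)

# Example with a mock model reply; a real pipeline would send
# build_prompt(...) to the LLM and parse its JSON response.
reference = {"participants.sample_size": "120",
             "outcomes.primary_outcome": "pain score"}
mock_response = json.dumps({"participants.sample_size": "120",
                            "outcomes.primary_outcome": "mortality"})
rate = score_extraction(json.loads(mock_response), reference)
print(f"{rate:.0%}")  # one of the two reference items matches, so 50%
```

Exact string matching is the simplest scoring rule; the study's domain-level correct rates would correspond to running such a comparison per domain rather than over all items at once.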