A Cross-Sectional Study Comparing Patient Education Guides Created by ChatGPT and Google Gemini for Common Cardiovascular-Related Conditions.


Authors: Collin R George, Lakshmi Manasa Gunturi, Saswaath Thiruvengadam K, Gayatri Anilkumar Menon, Hariharasudhan Saravanan, Nayanaa Varsaale

Language: English

Classification: 355.007 Education and related topics

Publication information: United States : Cureus, 2025

Physical description:

Collection: NCBI

ID: 170742

Abstract:

Introduction: Obesity, hypertension, and hypertriglyceridemia are key components of metabolic syndrome, a major contributor to cardiovascular diseases (CVDs), which remain a leading cause of global mortality. Patient education on these conditions can empower individuals to adopt preventive measures and manage risk effectively. This study compares ChatGPT and Google Gemini, two prominent artificial intelligence (AI) tools, to evaluate their utility in creating patient education guides. ChatGPT is known for its conversational depth, while Google Gemini emphasizes advanced natural language processing. By analyzing readability, reliability, and content characteristics, the study highlights how these AI tools cater to diverse patient needs, with the aim of improving health literacy outcomes.

Methodology: A cross-sectional study evaluated patient education guides on obesity, hypertension, and hypertriglyceridemia, focusing on their links to metabolic syndrome. Responses from ChatGPT and Google Gemini were analyzed for word count, sentence count, readability (Flesch-Kincaid calculator), similarity score (Quillbot), and reliability score (modified DISCERN score), with statistical analyses performed in R version 4.3.2.

Results: Statistical analysis revealed a significant difference in word and sentence counts between the AI tools: ChatGPT averaged 591.50 words and 66 sentences, while Google Gemini averaged 351.50 words and 36 sentences (p = 0.001 and p < 0.0001, respectively). However, average words per sentence, average syllables per word, grade level, similarity percentage, and reliability scores did not differ significantly. Although Google Gemini had a higher reading ease score (41.75) than ChatGPT (34.10), the difference was not statistically significant (p = 0.080). Both tools exhibited similar readability and reliability, indicating their effectiveness for patient education, despite ChatGPT providing longer responses.

Conclusion: The study found no significant difference between the two AI tools in reading ease, grade level, or reliability scores, and no correlation between ease and reliability scores.
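For illustration only (not taken from the article): the short R sketch below shows how the readability metrics named in the Methodology (Flesch Reading Ease and Flesch-Kincaid Grade Level) are defined, and how a two-group word-count comparison of the kind reported in the Results could be run. R is used because the abstract cites R version 4.3.2 as the analysis environment; all function names and numeric values here are hypothetical placeholders, not the study's data.

# Minimal sketch, assuming the standard Flesch formulas; not the authors' code.
flesch_reading_ease <- function(words, sentences, syllables) {
  206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
}
flesch_kincaid_grade <- function(words, sentences, syllables) {
  0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
}

# Example: one hypothetical guide with 592 words, 66 sentences, 950 syllables
flesch_reading_ease(592, 66, 950)
flesch_kincaid_grade(592, 66, 950)

# Two-group comparison of per-prompt word counts, analogous to the
# word/sentence count comparison in the Results (values are made up)
chatgpt_words <- c(610, 575, 580, 601)
gemini_words  <- c(360, 340, 355, 351)
t.test(chatgpt_words, gemini_words)

Note that t.test() on two numeric vectors defaults to Welch's unequal-variances t-test; the abstract does not specify which test the authors applied, so this is only one plausible choice.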