A practical guide to the implementation of AI in orthopaedic research-Part 7: Risks, limitations, safety and verification of medical AI systems.


Authors: Robert Feldt, Eric Hamrin Senorski, Elmar Herbst, Michael T Hirschmann, Christophe Ley, Umile Giuseppe Longo, Volker Musahl, Jacob F Oeding, Felix C Oettl, Ayoosh Pareek, James A Pruneski, Kristian Samuelsson, Thomas Tischer, Philipp W Winkler, Bálint Zsidai

Language: English


Publication information: United States: Journal of Experimental Orthopaedics, 2025


Collection: NCBI

ID: 743951

Abstract: Artificial intelligence (AI) has been influencing healthcare and medical research for several years and will likely become indispensable in the near future. AI is intended to support healthcare professionals, making the healthcare system more efficient and ultimately improving patient outcomes. Despite the numerous benefits of AI systems, significant concerns remain. Errors in AI systems can pose serious risks to human health, underscoring the critical need for safety, as well as adherence to ethical and moral standards, before these technologies can be integrated into clinical practice. To address these challenges, the development, certification and deployment of medical AI systems must follow strict and transparent regulations. The European Commission has already established a regulatory framework for AI systems by enacting the European Union Artificial Intelligence Act. This review article, part of an AI learning series, discusses key considerations for medical AI systems, such as reliability, accuracy, trustworthiness, lawfulness and legal compliance, ethical and moral alignment, sustainability and regulatory oversight.

Level of evidence: Level V.