Reproducibility of methodological radiomics score (METRICS): an intra- and inter-rater reliability study endorsed by EuSoMII.


Authors: Tugba Akinci D'Antonoli, Alessandra Borgheresi, Roberto Cannella, Armando Ugo Cavallo, Renato Cuocolo, Fabio Martino Doniselli, Salvatore Claudio Fanni, Merel Huisman, Michail E Klontzas, Burak Kocak, Emmanouil Koltsakis, Andrea Ponsiglione, Arnaldo Stanzione, Matthaios Triantafyllou, Romina Trotta, Lorenzo Ugga, Federica Vernuccio

Language: English

Classification number: 920.71 Men

Publication information: Germany: European Radiology, 2025

Physical description:

Collection: NCBI

ID: 214414

OBJECTIVES: To investigate the intra- and inter-rater reliability of the total methodological radiomics score (METRICS) and its items through a multi-reader analysis.

MATERIALS AND METHODS: A total of 12 raters with different backgrounds and experience levels were recruited for the study. Based on their level of expertise, raters were randomly assigned to two inter-rater reliability groups and two intra-rater reliability groups; within each pair, one group received a preliminary training session on the use of METRICS and the other did not. The inter-rater reliability groups assessed all 34 papers, while the intra-rater reliability groups assessed 17 papers twice, within 21 days each time and with a "wash-out" period of 60 days in between.

RESULTS: Inter-rater reliability was poor to moderate both between the raters of group 1 (without training; ICC = 0.393; 95% CI = 0.115-0.630; p = 0.002) and between the raters of group 2 (with training; ICC = 0.433; 95% CI = 0.127-0.671; p = 0.002). Intra-rater reliability was excellent for raters 9 and 12, good to excellent for raters 8 and 10, moderate to excellent for rater 7, and poor to good for rater 11.

CONCLUSION: The intra-rater reliability of the METRICS score was relatively good, while the inter-rater reliability was relatively low. This highlights the need for further efforts to achieve a common understanding of METRICS items, as well as for resources consisting of explanations, elaborations, and examples to improve reproducibility and enhance the score's usability and robustness.

KEY POINTS: Questions: Guidelines and scoring tools are necessary to improve the quality of radiomics research; however, applying these tools is challenging for less experienced raters. Findings: Intra-rater reliability was high across all raters regardless of experience level or previous training, while inter-rater reliability was generally poor to moderate. Clinical relevance: Guidelines and scoring tools are necessary for proper reporting in radiomics research and for closing the gap between research and clinical implementation; further resources offering explanations, elaborations, and examples are needed to enhance their usability and robustness.
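For readers who want to run this kind of reliability analysis themselves, the following is a minimal sketch, not the authors' actual analysis code, of how an inter-rater ICC with a 95% CI and p-value can be computed in Python using the pingouin package. The long-format data layout and the column names (paper, rater, metrics_score) are illustrative assumptions, as are the example scores.

import pandas as pd
import pingouin as pg

# Hypothetical long-format data: each row is one rater's total METRICS
# score (in %) for one paper. Real analyses would use many more papers.
scores = pd.DataFrame({
    "paper": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rater": ["A", "B", "C"] * 3,
    "metrics_score": [62.5, 55.0, 70.0, 81.0, 78.5, 74.0, 43.0, 52.5, 48.0],
})

# intraclass_corr returns all six ICC forms; a two-way random-effects,
# absolute-agreement, single-rater model corresponds to the ICC2 row.
icc = pg.intraclass_corr(
    data=scores, targets="paper", raters="rater", ratings="metrics_score"
)
print(icc[["Type", "ICC", "CI95%", "pval"]])

With the conventional interpretation thresholds (poor below 0.50, moderate 0.50-0.75, good 0.75-0.90, excellent above 0.90), point estimates such as the reported ICC = 0.393 and 0.433 fall in the poor-to-moderate range, which is how the study characterizes its inter-rater results.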
