PURPOSE: This study aims to evaluate differences in video quality, reliability, actionability, and understandability based on video length, popularity, and source credentials (physician versus non-physician). We hypothesized that current videos would be of low quality and limited usefulness to patients, with significant disparities based on the credentials of the video source.
METHODS: The phrase "acromioclavicular joint separation" was searched on YouTube, and the first 100 videos returned were selected. Of those 100, 45 were excluded based on predefined criteria. Two reviewers watched and graded the included videos using four established, additive algorithmic grading scales. Grades for all included videos were analyzed using R software version 4.2.3.
RESULTS: The mean Journal of the American Medical Association (JAMA) score was 2.32 (standard deviation (SD) = 0.74), with patient-made videos having a significantly lower reliability score (p = 0.008). The mean Patient Education Materials Assessment Tool (PEMAT) understandability and actionability scores were 59.78% (SD = 15.28%) and 67.55% (SD = 15.28%), respectively. PEMAT actionability scores were positively correlated with view count (p = 0.002). The mean DISCERN score was 2.51 (SD = 0.70), and longer videos were correlated with higher DISCERN scores (p = 0.047).
CONCLUSION: Analysis indicated significant differences in reliability and understandability between video source types. Additionally, neither quality nor reliability was correlated with view count, indicating that YouTube's ranking algorithm is not an effective indicator of video quality.
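The abstract does not specify which statistical tests produced the reported correlations; as a minimal sketch only (not the authors' script), the analyses could be reproduced in R, the software named above, assuming Spearman rank correlations and a hypothetical data frame (here called videos) whose columns and values are purely illustrative.

# Minimal sketch, not the authors' actual analysis: hypothetical data and an
# assumed Spearman correlation, since the abstract does not name the test used.
videos <- data.frame(
  views        = c(12000, 500, 87000, 2300, 15600),  # hypothetical view counts
  length_min   = c(3.0, 7.0, 1.5, 10.2, 4.8),        # hypothetical lengths (minutes)
  discern      = c(2.1, 3.0, 1.8, 3.4, 2.4),         # hypothetical DISCERN scores
  pemat_action = c(55, 80, 40, 75, 68)               # hypothetical PEMAT actionability (%)
)

# PEMAT actionability vs. view count (the abstract reports p = 0.002)
cor.test(videos$pemat_action, videos$views, method = "spearman")

# Video length vs. DISCERN score (the abstract reports p = 0.047)
cor.test(videos$length_min, videos$discern, method = "spearman")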