OBJECTIVE: While there has been great interest in rubrics in recent decades, there are different types, each with advantages and disadvantages. Here, we examined and compared the use of analytic rubric (AR) and mixed-approach rubric (MAR) types to assess the quality of research posters at an academic conference. METHODS: A prior systematic review identified 12 rubrics; from these, we compared two notable analytic rubrics (AR1, AR2) with a newer mixed-approach rubric (MAR). Sixty randomly selected research posters were downloaded from an academic conference poster repository. Two experienced academicians independently scored all posters using the AR1, AR2, and MAR. Time to score was also recorded. Inter-rater reliability of the scores from each rubric was examined using both traditional intraclass correlations and modern Rasch measurement, and results were compared among the AR1, AR2, and MAR.
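As a point of reference for the reliability analysis, the two intraclass-correlation definitions used here (absolute agreement and consistency) can be computed directly from a posters-by-raters score matrix. The sketch below is illustrative only, with hypothetical data and function names rather than the study's code; it uses the standard McGraw and Wong (1996) two-way mean-square formulations.

    import numpy as np

    def icc_agreement_consistency(scores):
        # Two-way single-measure ICCs for an (n posters x k raters) matrix,
        # per the McGraw & Wong (1996) mean-square formulations.
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)  # one mean per poster
        col_means = scores.mean(axis=0)  # one mean per rater
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # posters
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # raters
        sse = np.sum((scores - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
        mse = sse / ((n - 1) * (k - 1))  # residual
        # Consistency, ICC(C,1): ignores systematic rater severity differences.
        icc_c = (msr - mse) / (msr + (k - 1) * mse)
        # Absolute agreement, ICC(A,1): penalizes rater severity differences.
        icc_a = (msr - mse) / (msr + (k - 1) * mse + (k / n) * (msc - mse))
        return icc_a, icc_c

    # Hypothetical example: 60 posters, 2 raters, one rubric total score.
    rng = np.random.default_rng(0)
    quality = rng.normal(50, 10, size=(60, 1))          # latent poster quality
    ratings = quality + rng.normal(0, 3, size=(60, 2))  # per-rater noise
    icc_a, icc_c = icc_agreement_consistency(ratings)
    print(f"ICC agreement = {icc_a:.2f}, ICC consistency = {icc_c:.2f}")

Because agreement penalizes systematic differences in rater severity while consistency does not, a rubric can score slightly higher on consistency than on agreement, as the ARs did here.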
RESULTS: Scores for poster quality varied under all rubrics. For traditional indices of inter-rater reliability, all rubrics had equal or similar intraclass correlations for agreement, while the AR1 and AR2 were slightly higher for consistency. Rasch measurement showed that the single-item MAR reliably separated posters into two distinct groups (low quality versus high quality), the same as the 9-item AR2 and better than the 9-item AR1. Furthermore, the MAR's single-item rating scale functioned well, while the AR1 had one malfunctioning item rating scale and the AR2 had four. Notably, the MAR was quicker to score than the AR1 or AR2. CONCLUSION: The MAR measured as well as or better than the two ARs and was quicker to score. This investigation challenged the common misconceptions that ARs are more accurate and a better use of time for effective measurement.
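To make the "two distinct groups" finding concrete: in Rasch measurement, the number of statistically distinct quality levels a rubric can resolve is summarized by the separation index and its derived strata. The following is a minimal sketch under assumed inputs (hypothetical poster measures and standard errors, not the study's output), using the standard Wright/Winsteps formulas; a strata value near 2 corresponds to a low-quality versus high-quality split like the one reported above.

    import numpy as np

    def rasch_separation(measures, std_errors):
        # Person (poster) separation statistics from Rasch measures.
        rmse = np.sqrt(np.mean(std_errors ** 2))  # average measurement error
        sd_obs = np.std(measures, ddof=1)         # observed spread of measures
        # Error-corrected "true" spread; guarded against negative variance.
        sd_true = np.sqrt(max(sd_obs ** 2 - rmse ** 2, 0.0))
        g = sd_true / rmse                        # separation index G
        reliability = g ** 2 / (1 + g ** 2)       # separation reliability
        strata = (4 * g + 1) / 3                  # statistically distinct levels
        return g, reliability, strata

    # Hypothetical poster measures (logits) and standard errors.
    rng = np.random.default_rng(1)
    measures = rng.normal(0.0, 1.2, size=60)
    ses = np.full(60, 0.75)
    g, rel, strata = rasch_separation(measures, ses)
    print(f"separation = {g:.2f}, reliability = {rel:.2f}, strata = {strata:.1f}")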