A 75-year-old man comes to your clinic with a new diagnosis of high-risk localized prostate cancer. You propose a state-of-the-art treatment plan: radiotherapy to the prostate and pelvic lymph nodes plus hormonal therapy. The man is concerned about urinary leakage and wonders whether nodal irradiation will be worthwhile. To address his concerns, you describe the recent POP-RT trial, which compared whole pelvic radiotherapy with prostate-only radiotherapy. Whole pelvic radiotherapy did increase the risk of genitourinary toxicity, but the improvement in disease-free survival appears impressive, with a hazard ratio (HR) of 0.4. The patient looks confused, so you try communicating in simpler terms: "Fear not: with whole pelvic radiotherapy, you'll enjoy a 60% lower rate of disease progression or death compared with prostate-only radiotherapy" (more statistically minded folks will say this through gritted teeth, as an HR of 0.4 does not mean a 60% reduced risk of the endpoint; more on this later).

Despite your efforts, the patient's gaze remains confused, and you realize that a more straightforward way to explain the treatment benefit, in terms of the delay in disease progression, would be helpful. You therefore try to find the median disease-free survival for both arms of POP-RT, but the medians are nowhere to be found in the entire publication. On the verge of giving up, you recall that lunchtime chat with your center's statistician, the one where he went on and on about a "game-changing" metric, the restricted mean survival time (RMST), insisting it is more intuitive and patient-friendly. And, of course, he even followed up with an email detailing the results of various clinical trials expressed as RMST, including POP-RT. After a quick search in your inbox, you confidently tell the patient, "Adding pelvic radiotherapy will, on average, delay progression by over 11 months." After a moment of silence, the patient nods and says, "Let's go for it."
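The statistician's metric can be made concrete: the RMST is simply the area under the Kaplan-Meier curve up to a prespecified horizon, and the between-arm difference in RMST is the average delay quoted to the patient. A minimal sketch of that computation follows; the follow-up data below are invented for illustration and have nothing to do with the actual POP-RT results.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate as a list of (time, survival) steps.

    times:  follow-up times; events: 1 = progression/death, 0 = censored.
    """
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    s = 1.0
    curve = [(0.0, 1.0)]
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = c = 0  # events and censorings at time t
        while i < len(pairs) and pairs[i][0] == t:
            d += pairs[i][1]
            c += 1 - pairs[i][1]
            i += 1
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= d + c
    return curve


def rmst(curve, tau):
    """RMST = area under the KM step function from 0 to the horizon tau."""
    area = 0.0
    prev_t, prev_s = curve[0]
    for t, s in curve[1:]:
        if t >= tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)
    return area


# Hypothetical follow-up times (months) for two small arms, chosen only to
# demonstrate the calculation.
arm_a = ([6, 9, 14, 20, 28, 36], [1, 1, 0, 1, 1, 0])
arm_b = ([10, 16, 24, 30, 36, 36], [1, 0, 1, 0, 0, 0])
tau = 36.0
diff = rmst(kaplan_meier(*arm_b), tau) - rmst(kaplan_meier(*arm_a), tau)
print(f"RMST difference up to {tau:.0f} months: {diff:.1f} months")
```

A sanity check on the logic: with no censoring and a horizon at the last event, the RMST reduces to the plain mean of the event times, which is exactly the "average delay" framing the patient understood.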