RSNA 2025 Debrief: Why Radiology Needs a Governance System for Clinical AI, Yesterday
RSNA 2025 emphasized that AI is now integrated throughout radiology departments, affecting operations, clinical workflows, and patient interactions. The volume and complexity of deployed algorithms have surpassed the capacity of informal or committee-based oversight.
AI is poised to alter operations at every level, from independent outpatient imaging centers to hub-and-spoke enterprise systems. A comprehensive governance platform, covering both integration and real-world performance monitoring, is therefore imperative for safety and efficacy.
The Risks of Relying on Average Performance Metrics
Clinical care cannot depend on broad averages. A model may perform well overall but underperform for specific scanners, protocols, populations, or disease patterns. Without effective tracking, these inconsistencies remain undetected.
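To make that gap concrete, consider a toy subgroup audit. This is a minimal sketch in Python; the results table, column names, and numbers are all illustrative, not drawn from any real deployment:

```python
import pandas as pd

# Hypothetical per-exam results: ground truth, model prediction,
# and the scanner that acquired each study.
df = pd.DataFrame({
    "scanner": ["A"] * 80 + ["B"] * 20,
    "y_true":  [1] * 40 + [0] * 40 + [1] * 10 + [0] * 10,
    "y_pred":  [1] * 38 + [0] * 2 + [0] * 40 + [1] * 4 + [0] * 6 + [0] * 10,
})

def sensitivity(g: pd.DataFrame) -> float:
    """True-positive rate: the share of positive exams the model flagged."""
    positives = g[g["y_true"] == 1]
    return float((positives["y_pred"] == 1).mean())

print(f"overall: {sensitivity(df):.2f}")  # 0.84 -- looks acceptable
for scanner, group in df.groupby("scanner"):
    print(f"scanner {scanner}: {sensitivity(group):.2f}")  # A: 0.95, B: 0.40
```

An aggregate sensitivity of 84% hides a 40% rate on scanner B; only the stratified view exposes it.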
Foundation models, which were prominent at RSNA, offer broad capabilities but may still fail in edge cases, especially for underrepresented groups, complex cases, and rare diseases. Ongoing regulatory uncertainties also slow adoption.
The practical response is disciplined local protocols: training, feedback loops for inconsistencies, more diverse datasets, and retrospective review of misses to surface and close performance gaps.
AI Outpaces Manual Review Processes
The AI ecosystem is moving too fast for manual processes: new models, updates, vendor changes, and consolidation outpace traditional review cycles. Departments need a coordinated system that standardizes evaluation and continuously monitors real-world performance within local workflows and populations. Fundamentally, this is about preventing unintended variability in patient care.
Essential features of a monitoring system include automated alerts that notify staff of performance issues, seamless integration with existing IT infrastructure, and tools for real-time analytics and reporting. These capabilities let leaders assess the current state and advocate effectively for the necessary infrastructure.
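As a sketch of the alerting piece only, assuming the site already logs whether each AI flag agreed with the final report (the window size and floor below are arbitrary placeholders, not recommendations):

```python
from collections import deque

class PerformanceAlert:
    """Rolling agreement-rate monitor that fires when performance sags.

    'Agreement' is whatever ground-truth proxy the site chooses, e.g.
    concordance between the AI flag and the final radiology report.
    """

    def __init__(self, window: int = 200, floor: float = 0.85):
        self.results = deque(maxlen=window)  # 1 = agreed, 0 = disagreed
        self.floor = floor

    def record(self, agreed: bool) -> None:
        self.results.append(1 if agreed else 0)
        if len(self.results) == self.results.maxlen:  # wait for a full window
            rate = sum(self.results) / len(self.results)
            if rate < self.floor:
                self.notify(rate)

    def notify(self, rate: float) -> None:
        # Placeholder: a real deployment would page staff or post to a dashboard.
        print(f"ALERT: rolling agreement {rate:.1%} is below the {self.floor:.0%} floor")

monitor = PerformanceAlert()
monitor.record(agreed=True)  # called once per finalized exam
```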
Go-Live Is a Moment; Clinical Performance Is Ongoing
One-time go-live validation is not enough. AI behavior changes over time as demographics shift, scanner parameters and protocols change, and algorithms are updated. Radiology already expects hardware calibration and QA; AI should be held to the same or higher standard.
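One way to make the calibration analogy concrete is a scheduled check that compares recent performance against the go-live baseline. The sketch below uses a one-sided normal-approximation test; the baseline rate, counts, and significance cutoff are illustrative assumptions:

```python
import math

def drifted_below_baseline(baseline_rate: float, recent_hits: int,
                           recent_n: int, z_crit: float = 2.58) -> bool:
    """Return True if the recent detection rate is significantly below
    the rate measured at go-live validation (~99% one-sided level)."""
    recent_rate = recent_hits / recent_n
    std_err = math.sqrt(baseline_rate * (1 - baseline_rate) / recent_n)
    z = (recent_rate - baseline_rate) / std_err
    return z < -z_crit

# Go-live validation measured 92% sensitivity; this quarter the model
# caught 168 of 200 confirmed positives (84%).
if drifted_below_baseline(0.92, 168, 200):
    print("Sensitivity has drifted below the validated baseline; investigate.")
```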
Building Trust in AI Through Transparency
Trust depends on transparency. Clinicians will not rely on tools they cannot understand, measure, or verify. Audit trails, explainable outputs, and real-world performance tracking are essential for accountability and building trust in AI.
For example, Sutter Health implemented an AI Governance Suite with audit-trail features, enabling it to track how each algorithm reached its output. That transparency increased trust among clinicians, who could verify the rationale behind treatment recommendations. It also allows the team to monitor in real time which algorithms are working and which are not, and to take action immediately when needed.
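An audit trail need not be elaborate to be useful. As a sketch of what a minimal record might capture (the fields and names here are assumptions for illustration, not Sutter Health's actual schema):

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model: str, version: str, study_uid: str,
                 inputs: dict, output: dict) -> str:
    """One append-only audit line: what ran, on what, when, and what it said.
    Inputs are hashed so a run can be verified later without storing PHI."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "version": version,  # ties the result to an exact release
        "study_uid": study_uid,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    return json.dumps(entry)

# Hypothetical usage: append a line each time an algorithm renders a result.
with open("ai_audit.log", "a") as log:
    log.write(audit_record("pe-detect", "2.4.1", "1.2.840.99999.1",
                           {"series": "CTPA"}, {"flag": True, "score": 0.97}) + "\n")
```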
Transitioning AI from Innovation to Standard of Care
RSNA 2025 made clear that the most important shift isn’t any single model; it’s that AI has become part of radiology’s core infrastructure. Radiology has always demanded reliability: PACS. RIS. Dose monitoring. Quality metrics. Structured reports. AI is no different.
The field now needs an organized method for evaluating tools before deployment, monitoring them once in use, understanding their behavior over time, and ensuring they support equitable, consistent care. Governance, in other words, not as a buzzword but as a clinical standard.