HSCC previews upcoming AI cybersecurity guidance for the health sector
Farah Amod
Dec 2, 2025 4:52:50 PM
The Health Sector Coordinating Council has released summaries of its upcoming AI cybersecurity guidance, which will offer sector-wide recommendations for managing the risks of healthcare AI.
What happened
The Health Sector Coordinating Council’s Cybersecurity Working Group released preview materials outlining its AI cybersecurity guidance scheduled for publication in early 2026. The summaries describe planned best practices for healthcare organizations adopting AI across clinical, operational, and administrative environments, with the goal of helping the sector reduce the emerging risks that increasingly complex AI systems introduce.
Going deeper
The HSCC began developing the guidance in October 2024, when it formed an AI Cybersecurity Task Force composed of representatives from more than 100 healthcare organizations. The task force examined how AI is being used across the sector, what privacy and security considerations those uses introduce, and which controls organizations need to adopt as AI systems become integrated into medical workflows. To structure the work, the task force divided AI risks into five domains: education and enablement, cyber operations and defense, governance, secure by design, and third-party AI supply chain transparency. The preview summaries show that the final guidance will explain foundational AI terminology, outline operational playbooks for AI incident response, offer governance frameworks for safe AI adoption, and set expectations for device manufacturers and vendors to build security into AI-enabled systems throughout the product lifecycle.
What was said
According to the preview documents, the education and enablement workstream focuses on standardizing terminology across organizations so that security, clinical, and administrative teams share a common understanding when evaluating AI tools. The cyber operations and defense workstream is developing guidance for detecting and responding to AI-related incidents, including the safe use of AI-driven analytics and threat intelligence. The governance workstream aims to help organizations of all sizes align AI programs with existing security and compliance frameworks, while the secure-by-design workstream stresses building protections into AI-enabled medical devices from early design through post-market maintenance. The third-party risk workstream emphasizes the need for greater visibility into AI vendors, clear procurement requirements, and consistent assessment practices for externally developed tools.
The big picture
Federal oversight bodies have warned that AI adoption introduces new layers of cyber and privacy risk in critical infrastructure sectors. In a recent analysis, the U.S. Government Accountability Office reported that organizations deploying AI must strengthen governance processes, implement lifecycle oversight for third-party tools, and ensure that AI systems are monitored for integrity, reliability, and security impacts. These findings mirror the challenges the healthcare sector faces as AI becomes more deeply embedded in patient care, operational decision-making, and administrative processes, reinforcing the value of structured, sector-wide guidance.
FAQs
Why is the HSCC creating AI-specific cybersecurity guidance?
Healthcare organizations are adopting AI rapidly, and existing security frameworks do not fully address the unique risks created by machine learning models, automated decision systems, and AI-driven analytics.
How does dividing the work into five domains help organizations?
It allows healthcare entities to evaluate AI risks based on function, making it easier to assign responsibilities, prioritize controls, and align technical and governance practices with operational needs.
What part will governance play in AI safety?
Governance provides a structured way to assess AI tools, ensure they align with organizational policies, and verify that responsibilities for oversight, monitoring, and reporting are clearly defined.
Why is third-party AI risk becoming a major focus?
Many healthcare organizations rely on AI systems developed by external vendors, and without visibility into training data, model behavior, and lifecycle management, risks can enter the environment unnoticed.
How can hospitals prepare ahead of the full guidance release?
They can begin by establishing AI oversight committees, mapping current AI uses, reviewing vendor relationships, and updating security training to include AI-related risks and controls.