HHS issues guidance to ensure fair and secure use of AI in healthcare
Tshedimoso Makhene Jan 14, 2025 3:59:49 PM
The HHS Office for Civil Rights has issued guidance to healthcare entities on responsibly using AI tools, emphasizing compliance with anti-discrimination laws and patient privacy protections to foster innovation and equity.
What happened
The HHS Office for Civil Rights (OCR) has issued a “Dear Colleague” letter outlining guidelines for the responsible use of artificial intelligence (AI) tools in healthcare. The letter emphasizes compliance with Section 1557 of the Affordable Care Act, which prohibits health care providers and insurers from discriminating against patients through AI-based patient care decision tools. This initiative aligns with HHS’s Strategic Plan for the Use of Artificial Intelligence to enhance the health and well-being of Americans.
Read also: HHS finalizes regulations on patient care decision tools, including AI
Going deeper
The Section 1557 final rule explicitly extends nondiscrimination protections to the use of AI and other emerging technologies in patient care, categorized as "patient care decision support tools." The rule mandates that covered entities avoid discrimination based on race, color, national origin, sex, age, or disability when using these tools in health programs or activities. This application of civil rights principles ensures that advancements in technology do not undermine equity in health care.
Key provisions
The rule requires covered entities to:
- Identify risks of discrimination: Covered entities must take reasonable steps to assess whether tools use input variables that could lead to discriminatory outcomes. Efforts may include:
  - Reviewing guidance in the Section 1557 final rule.
  - Researching peer-reviewed studies or industry recommendations.
  - Developing or utilizing AI safety registries.
  - Gathering input variable details from tool vendors.
- Mitigate risks of discrimination: After identifying risks, entities must implement reasonable measures to minimize them. Mitigation efforts may include:
  - Establishing policies for tool usage and monitoring outcomes.
  - Training staff to recognize and address potential bias.
  - Auditing tools in real-world applications.
  - Ensuring “human-in-the-loop” oversight to override biased outputs.
Examples of discrimination and mitigation
- Crisis standards of care flowcharts: Tools that inappropriately screen out patients with disabilities can be adjusted to include individualized assessments and modifications to ensure equitable treatment.
- Race-adjusted eGFR equation: Entities could adopt newer equations that do not factor in race or train staff to mitigate biased outcomes in kidney care referrals.
- Pulse oximeters: By recognizing that some devices may overestimate blood oxygen levels in patients of color, entities can train staff to use additional respiratory stress indicators and conduct audits to ensure fair care delivery.
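To make the kidney-care example concrete: the "newer equations that do not factor in race" refer to race-free eGFR formulas such as the 2021 CKD-EPI creatinine equation. The sketch below is illustrative only, not part of OCR's guidance; the constants are those published for the 2021 equation, and real clinical use would require validated software.

```python
import math  # not strictly needed here, but typical for clinical calculators

def egfr_ckd_epi_2021(scr_mg_dl: float, age: float, female: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) using the 2021 race-free
    CKD-EPI creatinine equation. Illustrative sketch only."""
    kappa = 0.7 if female else 0.9        # sex-specific creatinine threshold
    alpha = -0.241 if female else -0.302  # exponent applied below the threshold
    ratio = scr_mg_dl / kappa
    egfr = (142
            * min(ratio, 1.0) ** alpha    # low-creatinine term
            * max(ratio, 1.0) ** -1.200   # high-creatinine term
            * 0.9938 ** age)              # age decay factor
    if female:
        egfr *= 1.012                     # sex adjustment; no race term anywhere
    return egfr

# Example: 50-year-old woman, serum creatinine 0.8 mg/dL
print(round(egfr_ckd_epi_2021(0.8, 50, female=True), 1))  # ≈ 89.7
```

Because the equation has no race coefficient, the same inputs yield the same estimate for every patient, which is the kind of mitigation the guidance contemplates.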
Implementation timeline
The general nondiscrimination requirements of Section 1557 took effect on July 5, 2024, while the affirmative requirements to identify and mitigate discrimination risks in AI tools will be enforced starting May 1, 2025. OCR urges all covered entities to review their use of patient care decision support tools and implement measures to prevent discrimination, fostering equitable access to technological innovations.
See also: HIPAA Compliant Email: The Definitive Guide
What was said
The U.S. Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) emphasized its commitment to ensuring nondiscrimination in health care as AI tools become increasingly integrated into patient care. According to OCR, "the final rule makes clear that Section 1557’s nondiscrimination protections apply to the use of AI and other emerging technologies such as clinical algorithms and predictive analytics."
The guidance demonstrates the dual goals of leveraging AI to reduce clinician burnout and enhance care access while safeguarding fairness and accountability. OCR stated that "covered health programs and activities [must] take reasonable steps to identify and mitigate the risk of discrimination when they use AI… in patient care that use race, color, national origin, sex, age, or disability as input variables."
OCR underscored its unique regulatory role in overseeing how healthcare providers and insurers use AI tools in clinical decision-making, treatment planning, and resource allocation, ensuring trust and equity in the application of these technologies.
Why it matters
The use of AI in healthcare can transform patient outcomes and operational efficiency. However, without proper oversight, these tools can inadvertently perpetuate bias or compromise sensitive patient data. By promoting fairness and privacy in AI implementation, OCR helps ensure a healthcare system that is inclusive, secure, and innovative.
The bottom line
OCR’s guidelines balance fostering innovation with protecting patient rights. By addressing discrimination and privacy risks, the initiative lays the groundwork for a healthcare system where AI advances benefit everyone equitably, bolstering trust and ensuring ethical progress in medical technology.
FAQs
What is the purpose of Section 1557?
Section 1557 of the Affordable Care Act prohibits discrimination based on race, color, national origin, sex, age, and disability in health programs and activities that receive federal financial assistance. It ensures equitable access to health care services for all individuals.
How does the final rule apply to AI and other emerging technologies?
The final rule extends nondiscrimination protections to AI and emerging technologies used in patient care, known as "patient care decision support tools." It mandates that these tools must not discriminate based on protected characteristics when used in health care programs or activities.