
Microsoft experts address AI bias at HLTH 2024 conference

Written by Caitlin Anthoney | Oct 24, 2024 3:35:16 PM

The recent HLTH 2024 conference, held October 20-23 in Las Vegas, gathered leaders and innovators to explore the future of healthcare technology, specifically artificial intelligence (AI). As AI becomes more deeply integrated into healthcare systems, concerns about bias and inequity are growing.

 

What was said

During a moderated discussion, CNBC Senior Healthcare Reporter Bertha Coombs raised the issue of implicit bias in AI systems, stating, “We know that there's implicit bias in medicine, and I worry about that getting carried over into what these machines are learning rather than the machines trying to get that out of the way. How do we address that issue?”

Microsoft’s Chief Scientific Officer Eric Horvitz responded, "Well, it is a significant concern, and what's great in my mind is that the AI community has led on this. There are several meetings, like the FAccT conference, that look exactly at tools that can help developers in any setting, including a healthcare setting, understand inequity in their models."

He also pointed to Fairlearn, a Microsoft open-source toolkit that helps developers assess and improve the fairness of AI systems across demographic groups.
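
For readers curious what such a fairness audit looks like in practice, below is a minimal sketch using Fairlearn's MetricFrame. The labels, predictions, and demographic groups are hypothetical placeholders, not data discussed at the panel.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical toy data: true outcomes, model predictions, and a
# demographic attribute for each patient.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# MetricFrame computes each metric overall and per demographic group,
# which makes disparities in accuracy or selection rate easy to spot.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)      # metric values broken out by group
print(frame.difference())  # largest between-group gap for each metric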

However, he cautioned, “There could be a trade-off sometimes. In some cases, you give up accuracy to make your system more fair... I think that many of us would aim for fairness, but you give up accuracy only a little bit, not a lot.”
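
One way to see that trade-off concretely is Fairlearn's reductions approach, which retrains a model under a fairness constraint. The sketch below compares an unconstrained classifier with one constrained to demographic parity; the synthetic dataset is a hypothetical illustration, and the size of the accuracy gap will vary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic data where the outcome is partly correlated with group
# membership, so an unconstrained model learns a biased shortcut.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
group = rng.choice(["A", "B"], size=500)
y = ((X[:, 0] + 0.8 * (group == "A")
      + rng.normal(scale=0.5, size=500)) > 0.4).astype(int)

# Unconstrained baseline: typically the higher-accuracy model.
baseline = LogisticRegression().fit(X, y)

# Constrained model: enforces demographic parity, usually giving up
# some accuracy in exchange for more equal treatment across groups.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)

print("baseline accuracy: ", accuracy_score(y, baseline.predict(X)))
print("mitigated accuracy:", accuracy_score(y, mitigator.predict(X)))
```

In practice, teams would compare such models on held-out data and decide how much accuracy, if any, they are willing to trade for parity, which is the judgment call Horvitz describes.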

Joe Petro, Corporate Vice President of Microsoft Health & Life Sciences (HLS) Solutions and Platforms, elaborated on the implications of this data-driven bias. He stated, “If you’re in a really high-income community and serving that community versus, say, the inner city, you’re going to get a very different outcome.”

The panelists acknowledged that while AI can improve healthcare delivery and patient outcomes, gaps must be addressed. Horvitz pointed out, “I do think there are many gaps in medicine, including gaps between specialties, gaps in data, data siloing, and gaps in types of systems and how they work together.”

 

Why it matters  

Bias in AI systems can affect the quality and accessibility of patient care. When AI systems learn from biased data, they risk perpetuating disparities in healthcare outcomes and creating barriers to equitable care. Left unaddressed, such bias could mean marginalized communities receive subpar treatment, worsening existing health inequities.

 

The bottom line

Stakeholders, including AI developers, government agencies, and healthcare organizations, must collaborate to build AI systems that fairly and accurately reflect diverse patient populations.

Ultimately, building equitable AI could lead to more accurate and impactful healthcare solutions that improve health outcomes for all.


FAQs

Can AI improve personalized patient education?

Yes, providers can use AI to analyze patient data and generate customized educational materials, such as articles, videos, or interactive modules, that address specific health concerns and challenges.

 

Can AI be integrated into HIPAA compliant emails?

Yes, AI-powered features can be integrated with HIPAA compliant email platforms, such as Paubox, to automate processes like patient consent management and sending personalized emails while maintaining HIPAA compliance.

 

Are there any limitations when using AI in HIPAA compliant emails?

Yes, healthcare providers must ensure that AI-powered features comply with HIPAA regulations and industry best practices for data security and privacy. Additionally, providers should evaluate the reliability of AI algorithms to avoid potential risks or compliance issues.

Read also: HIPAA compliant email API