
The strengths and weaknesses of using ChatGPT in healthcare

Written by Caitlin Anthoney | Feb 19, 2025 2:18:26 PM

ChatGPT is an AI tool recognized for its potential applications in medical education, consultation, and research. However, its use in clinical practice remains limited, as several challenges must be addressed before it can be fully integrated into healthcare settings.

 

Strengths of ChatGPT in healthcare

Easy user interface

According to a taxonomy and systematic review of ChatGPT in healthcare, "ChatGPT's interface makes it easy to be integrated into existing clinical workflow, providing feedback in real-time." So, healthcare providers can use ChatGPT to get immediate responses to queries and explanations.

In addition, "ChatGPT also shows superior performance in healthcare compared to other general large language models, such as InstructGPT, GPT-3.5."

 

Research ideas

Medical researchers can also use ChatGPT to get new research ideas and "answers to open-ended questions [that] contain new insights and viewpoints."

 

Limitations and challenges

ChatGPT has several disadvantages that must be resolved before it can be applied clinically.

 

Visual data 

To start with, ChatGPT cannot handle visual information. "The current release of ChatGPT can only accept input and provide feedback in texts so that ChatGPT cannot process questions needing the interpretation of images."

More specifically, ChatGPT can't interpret the medical scans, X-rays, or pathology slides needed for most diagnostic processes.

 

Reasoning capabilities

According to the research, "ChatGPT is incapable of reasoning like an expert system, and the justifications provided by ChatGPT [are] merely a result of predicting the next words according to probability."

Even though ChatGPT can provide correct answers, its reasoning is often faulty and can lead to misleading or incorrect conclusions if left unverified.
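
To make the "predicting the next words according to probability" point concrete, the sketch below shows how a causal language model assigns probabilities to candidate next tokens. It uses the openly available GPT-2 model via the Hugging Face transformers library as a stand-in, since ChatGPT's own weights are not public; the model and prompt are assumptions for illustration only.

```python
# Illustration of next-token prediction: a causal language model assigns a
# probability to every possible next token, and the most likely ones are chosen.
# GPT-2 is used here as an openly available stand-in; ChatGPT's weights are not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The most common symptom of myocardial infarction is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>12}  p={prob.item():.3f}")
```

The output is simply the five most probable continuations; nothing in this process resembles the rule-based inference of an expert system.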

 

Data accuracy

The "accuracy of ChatGPT's answers depends largely on the quality of its training data, and the information ChatGPT is trained on [determines] how ChatGPT would respond to a question.”

ChatGPT can't differentiate between actual and untrue data and may generate artificial data, as evidenced by the study: "An almost worst-case problem with [the] recent release of ChatGPT, as testified by publications reviewed, is that it can 'make up' information and state it in a convincing way."

This is particularly concerning in medicine, where misinformation can have catastrophic consequences.

 

Lacking originality

ChatGPT's responses often lack originality; its answers, even if highly relevant, tend to be superficial, lacking depth and novelty.

So, for it to work as a reliable medical assistant, it would have to be optimized on medical data sets, which is not currently the case. The study warns, "Most significantly, ChatGPT is not healthcare-fine-tuned by design, and should not be used as such without specialist customization."

 

Privacy concerns

Since ChatGPT is a proprietary system, "entering sensitive patient information into its interface [to] receive a response might contravene privacy regulations," like the Health Insurance Portability and Accountability Act (HIPAA), which safeguards patients' protected health information (PHI).

Additionally, entering anonymized data into ChatGPT is still not advisable because ChatGPT does not guarantee data privacy or security.
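
As a rough illustration of why anonymization is hard to guarantee, the sketch below uses simple pattern matching to strip a few obvious identifiers before text ever leaves a local system. This is an assumption-laden example, not a HIPAA de-identification method, and it would still miss many identifiers, which is exactly why entering even "anonymized" data into ChatGPT remains inadvisable.

```python
# Rough illustration only: naive pattern-based redaction of a few obvious
# identifiers (phone numbers, emails, dates) before text leaves a local system.
# This is NOT a HIPAA-compliant de-identification method; free text can carry
# many other identifiers (names, addresses, MRNs) that patterns like these miss.
import re

REDACTION_PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace a few obvious identifier patterns with placeholder tags."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Patient seen on 03/14/2024, call back at 555-123-4567 or jdoe@example.com."
print(redact(note))
# -> "Patient seen on [DATE], call back at [PHONE] or [EMAIL]."
```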

 

The way forward

Given these limitations, further improvements are needed before ChatGPT can be used safely in clinical environments: "Overall, the out-of-the-box performance of ChatGPT in healthcare is only moderate, which does not meet the high clinical standards."

Building a professional, specialized variant of ChatGPT, trained specifically on medical information, could help healthcare organizations overcome these challenges. So, developers must create "a reliable evaluation system mirroring objectively the usefulness of ChatGPT in a clinical setting… so that the tool could be used confidently in clinical practice."

Learn more: A quick guide to using ChatGPT in a HIPAA compliant way

 

FAQs

What is HIPAA?

The Health Insurance Portability and Accountability Act (HIPAA) sets national standards for protecting the privacy and security of certain health information, known as protected health information (PHI).

HIPAA is designed to protect the privacy and security of individuals’ health information and to ensure that healthcare providers and insurers can securely exchange electronic health information. Violations of HIPAA can result in significant fines and penalties for covered entities.

 

Who does HIPAA apply to?

HIPAA applies to covered entities, which include healthcare providers, health plans, and healthcare clearinghouses. It also applies to business associates of these covered entities. These are entities that perform certain functions or activities on behalf of the covered entity.

 

Is ChatGPT HIPAA compliant?

No, ChatGPT is not HIPAA compliant because it does not offer the safeguards necessary to protect PHI.

ChatGPT also does not sign a business associate agreement (BAA) with healthcare providers, so entering patient data into ChatGPT could violate HIPAA and other privacy regulations, resulting in severe penalties.