
The use of ChatGPT in healthcare

The deployment of ChatGPT in US healthcare is not without limitations. Studies highlight concerns about the reliability of AI-generated information, particularly the risk of ‘hallucinations’: instances where ChatGPT produces inaccurate or misleading responses. A letter published in the Journal of Clinical Neurology on the ethical challenges ChatGPT presents in healthcare noted, “Hallucination can be particularly problematic in healthcare as it may cause AI to provide healthcare professionals with incorrect diagnoses, treatment recommendations, or misinformation…Incorporating human oversight and intervention in the AI-assisted decision-making process can... mitigate the hallucination risk.” These errors could have serious consequences in clinical decision-making.

While ChatGPT shows considerable potential in informational and administrative domains, its utility in direct clinical intervention remains limited. The technology is best suited to augmenting information flow, supporting documentation, and enhancing communication, rather than replacing the nuanced judgment required in hands-on clinical care. This suggests that, within the US healthcare system, ChatGPT should be viewed as a complementary tool, enhancing but not supplanting the expertise of healthcare professionals.

 

Why all ChatGPT models are not made the same

The differences between ChatGPT models, particularly ChatGPT-3.5 and ChatGPT-4, are rooted in their architecture, capabilities, and application suitability. ChatGPT-3.5 is based on the GPT-3 architecture, which has approximately 175 billion parameters, while ChatGPT-4 is a more advanced model rumored to have around 1 trillion. The increase in parameters allows GPT-4 to achieve better contextual understanding and generate more coherent responses across a wider range of topics. GPT-4's advancements also include greater processing power and the ability to handle longer inputs, making it capable of handling more complex queries and more nuanced conversation.
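In practical terms, the choice between models comes down to a single parameter in an API call. The snippet below is a minimal sketch, assuming the OpenAI Python SDK (openai>=1.0), an OPENAI_API_KEY environment variable, and model names that may vary by account and release; it is illustrative rather than a recommended configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same prompt can be sent to either model by changing the "model" field.
response = client.chat.completions.create(
    model="gpt-4",  # swap for "gpt-3.5-turbo" to compare the two models
    messages=[
        {"role": "system", "content": "You explain medical terminology in plain language."},
        {"role": "user", "content": "What does hemoglobin A1c measure?"},
    ],
    temperature=0.2,  # a lower temperature favors more conservative wording
)

print(response.choices[0].message.content)
```

The longer context windows of newer models mainly matter when the messages list carries lengthy material, such as a full discharge summary, rather than a short question.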

David Holt, owner of Holt Law LLC, provides an interesting perspective on the topic of AI in healthcare: “ChatGPT definitely has the potential to make a big difference in healthcare by speeding up administrative work, helping staff, and making patient education more engaging. there are some important limitations to keep in mind. First, it doesn’t actually “understand” medicine—it can sound confident even when it gives incorrect or misleading information, which could be risky in a clinical setting. It’s also not up to date with the latest medical guidelines or treatments if you're using versions trained on older data. Another issue is bias—since ChatGPT was trained on large sets of data from the internet, it can reflect gaps and inequalities that already exist in healthcare, especially for underrepresented communities.”

Related: Artificial Intelligence in healthcare

 

Recommendations for using ChatGPT in healthcare

According to a study published in Health Science Reports, ChatGPT can be applied to healthcare based on the following recommendations (an illustrative sketch follows the list): 

  • ChatGPT should be utilized for providing general information about medical conditions and treatments but not for diagnosing or treating patients. Consultation with qualified healthcare professionals is essential for any medical advice.
  • Users should frame their questions as specifically as possible to receive accurate and relevant responses, thereby maximizing the utility of the technology.
  • ChatGPT is intended to support healthcare providers rather than replace them. It can assist in tasks such as patient education and routine inquiries, allowing professionals to focus on more complex clinical responsibilities.
  • Given that ChatGPT generates responses based on training data patterns, it is crucial to verify the information obtained through this tool against trusted medical sources or professional guidance.
  • The technology can facilitate improved communication between healthcare providers and patients, enhancing understanding of medical information and treatment options.
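To make the first recommendation concrete, a system prompt can constrain ChatGPT to general education and append a standing reminder to consult a professional. The sketch below is purely illustrative and assumes the OpenAI Python SDK; the SYSTEM_PROMPT wording, the ask_general_info helper, and the model name are hypothetical choices, not a vetted clinical safeguard.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical guardrail: keep answers at the level of general education
# and always point the user back to a qualified professional.
SYSTEM_PROMPT = (
    "You provide general educational information about medical conditions "
    "and treatments. You do not diagnose, prescribe, or give individualized "
    "medical advice. End every answer by advising the user to consult a "
    "qualified healthcare professional."
)

def ask_general_info(question: str, model: str = "gpt-4") -> str:
    """Return a general-information answer; not a substitute for clinical advice."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

print(ask_general_info("How do type 1 and type 2 diabetes differ?"))
```

As the second recommendation notes, framing the question specifically matters as much as the guardrail itself; a narrow, well-scoped question tends to produce more accurate and verifiable answers.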

 

The limitations of ChatGPT 

  1. ChatGPT often struggles with basic reasoning and contextual awareness, leading to responses that may not align with real-world logic or expectations.
  2. The AI cannot access real-time information or current events, which means it cannot provide updates on news, weather, or stock prices.
  3. Its knowledge is restricted to the data it was trained on, which does not include recent events or developments beyond its last training cut-off, leading to potential gaps in information.
  4. ChatGPT can misinterpret nuanced queries, particularly those involving sarcasm or humor, resulting in responses that may seem superficial or irrelevant.
  5. The AI processes each request independently and does not manage multiple tasks simultaneously, making it less effective for complex or multi-part inquiries.
  6. While capable of generating coherent text, ChatGPT lacks true creativity and innovation, often producing outputs that are derivative rather than original.
  7. The AI can inadvertently reflect biases present in its training data, which may lead to biased or prejudiced outputs.
  8. It may struggle with in-depth discussions on specialized subjects due to limited training data in those areas, often providing incomplete or irrelevant answers.
  9. Users of the free version face restrictions on the number of prompts they can submit per hour and the total length of conversations, which can hinder extensive interactions.
  10. While it can handle simple calculations, ChatGPT struggles with more complex mathematical tasks and may provide incorrect answers when faced with multiple operations.

 

ChatGPT in clinical decision support 

A study conducted by researchers at Mass General Brigham demonstrated that ChatGPT achieved an overall accuracy of approximately 72% in clinical decision-making tasks. This level of accuracy suggests that ChatGPT can perform comparably to a newly graduated medical intern.

 

Why healthcare organizations shouldn’t rely on ChatGPT

ChatGPT should not be relied upon as a primary decision-making tool because of its critical limitations. Potential bias in its training data can lead to inaccurate or unfair recommendations, especially for underrepresented populations. That bias undermines the reliability of the information provided, and because ChatGPT cannot evaluate its own sources, it considerably raises the risk of misinformation.

ChatGPT is also, at its core, a statistical model, no matter the version. This means it lacks the understanding and clinical judgment that healthcare professionals possess. Reliance on AI models can also erode provider discernment in clinical cases. All of this is compounded by the fact that ChatGPT is not HIPAA compliant by default.

This is not the end-all-be-all, however. As David Holt notes, “There are also special tools out there, like BastionGPT and CompliantGPT, that act as a secure layer around ChatGPT. These tools are built with HIPAA in mind and can sign Business Associate Agreements. Some organizations are also setting up ChatGPT models directly on their own servers, which keeps everything in-house and avoids sending patient data over the internet.”
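For teams exploring the in-house route Holt describes, the general pattern is to point an OpenAI-compatible client at an endpoint hosted inside the organization's own network. The sketch below is a hypothetical illustration only: the internal URL, model name, and key handling are assumptions, and it does not by itself make a deployment HIPAA compliant.

```python
from openai import OpenAI

# Hypothetical in-house deployment: base_url points at an OpenAI-compatible
# endpoint served inside the organization's network, so prompts and responses
# never leave the internal environment.
client = OpenAI(
    base_url="https://llm.internal.example-hospital.org/v1",  # hypothetical internal endpoint
    api_key="managed-internally",  # placeholder; issued and rotated by the organization
)

response = client.chat.completions.create(
    model="in-house-model",  # whichever model the organization serves locally
    messages=[
        {"role": "user", "content": "Draft a plain-language template for discharge instructions."},
    ],
)

print(response.choices[0].message.content)
```

Even with an in-house endpoint, access controls, audit logging, and legal review remain the organization's responsibility before any patient data touches the system.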

Related: HIPAA Compliant Email: The Definitive Guide

 

FAQs

How does ChatGPT aid drug discovery? 

ChatGPT can use natural language processing to quickly sift through published research and patent databases, enabling researchers to discover disease-specific agents and relevant compounds. 

 

What are the cost savings associated with ChatGPT?

By automating data analysis and literature reviews, it reduces the time researchers spend on these tasks, which can be labor-intensive and costly. 

 

What are the anticipated future developments for the use of AI in healthcare?

As AI technology evolves, we can expect advancements in its ability to analyze complex datasets from various sources, including genomics and patient records, leading to better-targeted therapies.