The deployment of ChatGPT in US healthcare is not without limitations. Studies highlight concerns regarding the reliability of AI-generated information, particularly the risk of ‘hallucinations’: instances where ChatGPT produces inaccurate or misleading responses. A letter on the ethical challenges presented by ChatGPT in healthcare, published in the Journal of Clinical Neurology, noted, “Hallucination can be particularly problematic in healthcare as it may cause AI to provide healthcare professionals with incorrect diagnoses, treatment recommendations, or misinformation…Incorporating human oversight and intervention in the AI-assisted decision-making process can... mitigate the hallucination risk.” Such errors could have serious consequences in clinical decision-making.
While ChatGPT shows considerable potential in informational and administrative domains, its utility in direct clinical intervention remains limited. The technology is best suited to augmenting information flow, supporting documentation, and enhancing communication, rather than replacing the nuanced judgment required in hands-on clinical care. This suggests that, within the US healthcare system, ChatGPT should be viewed as a complementary tool, one that enhances but does not supplant the expertise of healthcare professionals.
The differences between ChatGPT models, particularly ChatGPT-3.5 and ChatGPT-4, are rooted in their architecture, capabilities, and application suitability. ChatGPT-3.5 is based on the GPT-3 architecture, which consists of approximately 175 billion parameters, while ChatGPT-4 is a more advanced model rumored to have around 1 trillion parameters. The increase in parameters allows GPT-4 to achieve better contextual understanding and generate more coherent responses across a wider range of topics. GPT-4's advancements also include greater processing power and the ability to handle longer inputs, making it capable of handling more complex queries and more nuanced conversations.
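For anyone calling these models programmatically, the larger context window is the most visible difference. Below is a minimal sketch, assuming the OpenAI Python SDK (v1) and the tiktoken package are installed and an OPENAI_API_KEY environment variable is set, of routing a prompt to GPT-3.5 or GPT-4 based on its token count; the context limits in the code are rough illustrative figures, not exact model specifications.

```python
# Minimal sketch, not production code. Assumptions: the openai (v1) and tiktoken
# packages are installed, OPENAI_API_KEY is set in the environment, and the
# context limits below are rough illustrative figures rather than exact specs.
import tiktoken
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Approximate context windows, for illustration only.
CONTEXT_LIMITS = {"gpt-3.5-turbo": 4_096, "gpt-4": 8_192}

def choose_model(prompt: str) -> str:
    """Route short prompts to the smaller model and longer ones to GPT-4."""
    tokens = len(tiktoken.encoding_for_model("gpt-4").encode(prompt))
    # Leave roughly half of the smaller model's window free for the reply.
    return "gpt-3.5-turbo" if tokens < CONTEXT_LIMITS["gpt-3.5-turbo"] // 2 else "gpt-4"

def ask(prompt: str) -> str:
    """Send the prompt to whichever model its length calls for."""
    response = client.chat.completions.create(
        model=choose_model(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```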
David Holt, owner of Holt Law LLC, provides an interesting perspective on the topic of AI in healthcare: “ChatGPT definitely has the potential to make a big difference in healthcare by speeding up administrative work, helping staff, and making patient education more engaging. there are some important limitations to keep in mind. First, it doesn’t actually “understand” medicine—it can sound confident even when it gives incorrect or misleading information, which could be risky in a clinical setting. It’s also not up to date with the latest medical guidelines or treatments if you're using versions trained on older data. Another issue is bias—since ChatGPT was trained on large sets of data from the internet, it can reflect gaps and inequalities that already exist in healthcare, especially for underrepresented communities.”
Related: Artificial Intelligence in healthcare
According to a study published in Health Science Reports, ChatGPT can be applied to healthcare based on the following recommendations:
A study conducted by researchers at Mass General Brigham demonstrated that ChatGPT achieved an overall accuracy of approximately 72% in clinical decision-making tasks. This accuracy level indicates that ChatGPT can perform at a level comparable to a newly graduated medical intern.
ChatGPT should not be relied upon as a primary decision-making tool because of its critical limitations. Potential bias in its training data can lead to inaccurate or unfair recommendations, especially for underrepresented populations. This bias undermines the reliability of the information provided: because the model cannot critically analyze its sources, the risk of misinformation increases.
ChatGPT is also, by nature, a statistical model, no matter the version. It lacks the understanding and clinical judgment that healthcare professionals possess, and overreliance on AI models can result in a decline in provider discernment in clinical cases. All of this is compounded by the fact that ChatGPT is not HIPAA compliant by default.
These limitations are not the end-all, be-all, however. As David Holt notes, “There are also special tools out there, like BastionGPT and CompliantGPT, that act as a secure layer around ChatGPT. These tools are built with HIPAA in mind and can sign Business Associate Agreements. Some organizations are also setting up ChatGPT models directly on their own servers, which keeps everything in-house and avoids sending patient data over the internet.”
Related: HIPAA Compliant Email: The Definitive Guide
ChatGPT can use natural language processing to quickly sift through published research and patent databases, enabling researchers to discover disease-specific agents and relevant compounds (a brief sketch of this screening workflow appears below).
By automating data analysis and literature reviews, it reduces the time researchers spend on these tasks, which can be labor-intensive and costly.
As AI technology evolves, we can expect advancements in its ability to analyze complex datasets from various sources, including genomics and patient records, leading to better-targeted therapies.
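To make the literature-screening idea above concrete, here is a minimal sketch assuming the OpenAI Python SDK (v1), an OPENAI_API_KEY environment variable, and a list of abstract texts already retrieved from a source such as PubMed. The prompt wording, model choice, and the flag_relevant_abstracts helper are illustrative assumptions, not part of any cited study, and the workflow involves only public literature, not patient data.

```python
# Minimal sketch of LLM-assisted literature screening. Assumptions: the openai
# (v1) package is installed, OPENAI_API_KEY is set, and `abstracts` holds plain
# abstract texts already retrieved from a database such as PubMed. No patient
# data is involved; this is a public-literature workflow only.
from openai import OpenAI

client = OpenAI()

def flag_relevant_abstracts(abstracts: list[str], disease: str) -> list[str]:
    """Return the abstracts the model flags as mentioning a potentially relevant compound."""
    relevant = []
    for abstract in abstracts:
        prompt = (
            f"Does the following abstract describe a compound or agent potentially "
            f"relevant to {disease}? Answer YES or NO, then name the compound if YES.\n\n"
            f"{abstract}"
        )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        if response.choices[0].message.content.strip().upper().startswith("YES"):
            relevant.append(abstract)
    return relevant

# Example with hypothetical data:
# hits = flag_relevant_abstracts(["Abstract text ..."], "amyotrophic lateral sclerosis")
```

Every flagged abstract would still need human review; the sketch only narrows the reading list, it does not replace expert appraisal of the literature.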