The rise of generative artificial intelligence has captured the interest of many industries, and healthcare is no exception. Tools like ChatGPT, with their ability to generate human-like text, hold the promise of transforming healthcare communication: streamlining administrative workflows, creating engaging patient education materials, and providing rapid support to healthcare staff.
However, the integration of such powerful technology into healthcare is not without hurdles. The potential of AI must be carefully balanced against the non-negotiable requirements of HIPAA and its stringent demands for patient privacy, data security, and clinical accuracy. The appeal of efficiency cannot overshadow the fundamental principles that govern healthcare.
Read more: A quick guide to using ChatGPT in a HIPAA compliant way
ChatGPT's capabilities that are particularly relevant to communication include drafting various types of content, summarizing lengthy texts, answering questions, and even translating languages. This potential for streamlining communication tasks could make it a valuable tool for healthcare providers.
A study published in Frontiers in Public Health states, “In clinical applications, ChatGPT has shown promise in diagnosis and decision-making. Studies have explored ChatGPT’s capabilities in clinical diagnostics, highlighting both its potential and constraints in handling diverse clinical issues. Evaluations by major medical institutions suggest that while ChatGPT can enhance decision-making and efficiency, medical professionals must be aware of its capabilities and limitations, advocating for cautious use in clinical settings.”
Healthcare providers need to be cautious when using ChatGPT because publicly available versions of it and other large language models (LLMs) are not HIPAA compliant. Here's why:
The U.S. Department of Health and Human Services (HHS) has made it clear through its guidance on similar technologies that covered entities are prohibited from using third-party online technologies in a manner that would result in impermissible disclosures of PHI without a business associate agreement (BAA) and appropriate safeguards in place. These safeguards are outlined in the HIPAA Security Rule and include administrative, technical, and physical safeguards to protect ePHI. Administrative safeguards include policies for workforce training and risk analysis; technical safeguards involve encryption, access controls, and audit logs; and physical safeguards focus on securing facilities and devices where ePHI is stored. This principle extends to the use of public AI tools like ChatGPT for handling sensitive patient information.
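To make the technical safeguards concrete, here is a minimal Python sketch of what encryption at rest and audit logging could look like, using the widely available cryptography library. The record contents, user IDs, and file names are hypothetical; a real deployment would add proper key management, access controls, and far more.

```python
from cryptography.fernet import Fernet
import logging

# Audit log: a technical safeguard that records who touched ePHI and when.
logging.basicConfig(filename="ephi_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# In production the key would live in a key management service, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_record(user_id: str, record: bytes) -> bytes:
    """Encrypt an ePHI record at rest and write an audit trail entry."""
    token = cipher.encrypt(record)  # ciphertext, safe to persist
    logging.info("user=%s action=store bytes=%d", user_id, len(record))
    return token

def read_record(user_id: str, token: bytes) -> bytes:
    """Decrypt a stored record, again leaving an audit trail."""
    logging.info("user=%s action=read", user_id)
    return cipher.decrypt(token)

# Hypothetical usage with made-up data
encrypted = store_record("dr_smith", b"Patient: J.D., dx: hypertension")
print(read_record("dr_smith", encrypted))
```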
A January 2025 incident involving AI company DeepSeek illustrates the potential risks of entrusting sensitive information to AI platforms. Researchers discovered two unsecured databases containing over one million chat records in plaintext form, along with API keys and operational metadata. The exposure occurred through publicly accessible database instances that allowed arbitrary SQL queries without authentication. This type of incident demonstrates why healthcare organizations must exercise extreme caution when considering any AI platform for clinical use. Had this been healthcare data, it would have constituted a serious HIPAA violation with potential regulatory consequences.
Related: Safeguarding PHI in ChatGPT
While the use of standard ChatGPT with patient health information is strictly off-limits, healthcare organizations can explore several practical and compliant applications by ensuring that no PHI is ever entered into or generated within the tool. These low-risk uses can offer significant benefits in terms of efficiency and resource allocation.
By focusing on tasks that avoid any involvement with PHI, healthcare organizations can realize the efficiency gains offered by LLMs while remaining compliant with HIPAA regulations. Staff can offload some of the initial drafting and summarization work, freeing up their time and expertise for higher-level tasks directly related to patient care. According to Antony Bryant, a professor of informatics at Leeds Beckett University, “Large language models (LLMs) have demonstrated significant efficiency in non-clinical healthcare tasks such as drafting administrative documents, generating educational materials, and supporting research workflows. These applications reduce the workload on healthcare professionals, allowing them to focus more on patient care while maintaining operational excellence.”
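One practical way to enforce the "no PHI in the prompt" rule is a pre-submission screen that blocks obvious identifiers before any text leaves the organization. The Python sketch below is illustrative only: the patterns are hypothetical and far from exhaustive, and no regex check can substitute for a BAA, dedicated de-identification tooling, or workforce training.

```python
import re

# Naive patterns for a few common identifiers (hypothetical, not exhaustive).
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:# ]?\d+\b", re.IGNORECASE),
}

def screen_prompt(text: str) -> str:
    """Raise if a draft prompt contains obvious identifiers."""
    hits = [label for label, pat in PHI_PATTERNS.items() if pat.search(text)]
    if hits:
        raise ValueError(f"Possible PHI detected ({', '.join(hits)}); do not send.")
    return text

# Safe, PHI-free task: drafting a generic patient education blurb.
screen_prompt("Draft a plain-language handout explaining seasonal flu shots.")

# This one would be blocked before it ever reached an external model:
# screen_prompt("Summarize the chart for MRN 483920, DOB 04/12/1961.")
```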
While the potential benefits of using tools like ChatGPT are enticing, particularly for streamlining workflows, ignoring or misunderstanding HIPAA compliance when dealing with patient health information can lead to severe consequences.
The most immediate danger lies in PHI exposure and HIPAA violations. Inputting any patient-identifiable information into a standard public platform like ChatGPT constitutes a data breach under HIPAA. This includes names, medical conditions, appointment details, or any other information that could directly or indirectly identify an individual and relates to their health or healthcare. Such actions, even seemingly harmless queries, violate the HIPAA Privacy Rule and can lead to substantial financial penalties, legal repercussions, and irreparable damage to an organization's reputation. A study titled Ethical Considerations of Using ChatGPT in Health Care states, “Privacy issues are an important aspect when using ChatGPT in health care settings. The collection, storage, and processing of sensitive patient information raise important privacy issues that need to be addressed to ensure the confidentiality and protection of personal data.” The fundamental issue is the lack of a BAA with these standard platforms, a contract legally required for any entity handling PHI on behalf of a covered entity.
Beyond the legal ramifications, there are serious accuracy and safety risks. LLMs, while impressive in their ability to generate text, are not flawless. They can produce outputs that are plausible but factually incorrect – a phenomenon often referred to as "hallucinations." In a healthcare context, this could have dire consequences. For instance, a user might ask ChatGPT for dosage information for a medication based on a specific patient's condition. If the model generates incorrect information, and this advice is taken without verification, it could lead to serious harm. The above study also states, “Ensuring the accuracy, reliability, and validity of ChatGPT-generated content requires rigorous validation and ongoing updates based on clinical practice."
Furthermore, there are significant bias and equity concerns. AI models are trained on vast datasets, and if this data contains biases (e.g., underrepresentation of certain demographics), the model's outputs can perpetuate and even amplify these biases. In healthcare, this could lead to disparities in the quality of information provided or in the way certain patient groups are addressed. According to the study on ethics, “Biased training data can lead to biased output, and overreliance on ChatGPT can reduce patient adherence and encourage self-diagnosis."
Finally, ethical considerations when using AI in healthcare raise questions about transparency – should patients be aware when AI is involved in generating communications they receive? Accountability is also a major concern – who is responsible if an AI tool provides incorrect or harmful information? Perhaps most importantly, the inappropriate use of AI can significantly erode patient trust in healthcare providers and the systems they use. The ethics study notes, “Overreliance on artificial intelligence (AI) can undermine compassion and erode trust. Transparency and disclosure of AI-generated content are critical to maintaining integrity."
Metadata is data about data: extra information about a file or piece of information, like its size, when it was created, who created it, or what type of file it is. For an email, metadata might include the sender's and recipient's email addresses, the subject line, and the time it was sent.
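As a quick illustration, most operating systems expose file metadata directly; this short Python sketch reads the size and last-modified time of a hypothetical file named report.pdf.

```python
import datetime
import os

# "report.pdf" is a hypothetical file name; os.stat returns its metadata.
info = os.stat("report.pdf")
print("size in bytes:", info.st_size)
print("last modified:", datetime.datetime.fromtimestamp(info.st_mtime))
```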
API keys are like passwords that allow different computer programs or services to talk to each other securely. When one program wants to use the features or data of another program, it often needs to provide an API key to prove it's authorized to do so.
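In practice, an API key often travels as a request header. The sketch below uses Python's requests library; the URL and key are placeholders, not a real service.

```python
import requests

# Placeholder values: a real service issues the key and documents the header.
API_KEY = "sk-example-not-a-real-key"

response = requests.get(
    "https://api.example.com/v1/data",
    headers={"Authorization": f"Bearer {API_KEY}"},  # proves the caller is authorized
    timeout=10,
)
print(response.status_code)
```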
SQL is a special language that computers use to manage and get information from databases (organized collections of data). An SQL query is like a question you ask the database using this language to find specific information, add new data, change existing data, or delete data.
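For illustration, the sketch below uses Python's built-in sqlite3 module to run a simple SQL query against a throwaway in-memory database with made-up data; in the DeepSeek exposure described above, queries like this could reportedly be run by anyone, without authentication.

```python
import sqlite3

# An in-memory database with one sample table (all data here is made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chats (id INTEGER PRIMARY KEY, message TEXT)")
conn.execute("INSERT INTO chats (message) VALUES ('hello world')")

# An SQL query: ask the database for specific rows.
for row in conn.execute("SELECT id, message FROM chats WHERE id = 1"):
    print(row)  # (1, 'hello world')
```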