
The rise of generative artificial intelligence has captured the interest of many industries, and healthcare is no exception. Tools like ChatGPT, with their ability to generate human-like text, hold the promise of improving many aspects of healthcare communication: streamlining workflows, speeding up administrative tasks, creating engaging patient education materials, and providing rapid support to healthcare staff.
However, the integration of such powerful technology into healthcare is not without hurdles. The potential of AI must be carefully balanced against the non-negotiable requirements of HIPAA and its stringent demands for patient privacy, data security, and clinical accuracy. The appeal of efficiency cannot overshadow the fundamental principles that govern healthcare.
Read more: A quick guide to using ChatGPT in a HIPAA compliant way
Why should healthcare providers be cautious when using ChatGPT?
The capabilities of ChatGPT that are particularly relevant to communication include the ability to draft various types of content, summarize lengthy texts, answer questions, and even translate languages. This potential for streamlining communication tasks could make it a valuable tool for healthcare providers.
A study published in Frontiers in Public Health states, “In clinical applications, ChatGPT has shown promise in diagnosis and decision-making. Studies have explored ChatGPT’s capabilities in clinical diagnostics, highlighting both its potential and constraints in handling diverse clinical issues. Evaluations by major medical institutions suggest that while ChatGPT can enhance decision-making and efficiency, medical professionals must be aware of its capabilities and limitations, advocating for cautious use in clinical settings.”
Healthcare providers need to be cautious when using ChatGPT because publicly available versions of it and other large language models (LLMs) are not HIPAA compliant. These are the reasons why:
- No business associate agreement (BAA): These public platforms do not typically offer a BAA to users, a legal requirement under HIPAA for any service that handles protected health information (PHI).
- Data for model training: Information inputted into standard public AI tools may be used to further train their models. Patient-specific data could potentially be stored and used in ways that violate patient privacy.
- Lack of specific security guarantees: These platforms do not provide the specific security guarantees mandated by the HIPAA Security Rule for safeguarding PHI, such as access controls and audit logs designed for healthcare data.
The U.S. Department of Health and Human Services (HHS) has made it clear through its guidance on similar technologies that covered entities are prohibited from using third-party online technologies in a manner that would result in impermissible disclosures of PHI without a BAA and appropriate safeguards in place. These safeguards are outlined in the HIPAA Security Rule and include administrative, technical, and physical safeguards to protect ePHI. Administrative safeguards include policies for workforce training and risk analysis; technical safeguards involve encryption, access controls, and audit logs; and physical safeguards focus on securing facilities and devices where ePHI is stored. This principle extends to the use of public AI tools like ChatGPT for handling sensitive patient information.
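To make the technical safeguards above a little more concrete, here is a minimal sketch, assuming Python and the cryptography library, of what encryption at rest and an audit-log entry can look like in code. It is illustrative only; the field names, file paths, and key handling are assumptions, not a certified HIPAA control.

```python
# Minimal sketch of two technical safeguards: encryption at rest and audit logging.
# Illustrative only -- field names, paths, and key handling are assumptions.
import json
import time
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in practice, keys belong in a managed secrets store
cipher = Fernet(key)

record = json.dumps({"patient_id": "12345", "note": "follow-up scheduled"}).encode()
encrypted = cipher.encrypt(record)  # ciphertext is what gets stored at rest

with open("audit.log", "a") as log:  # append-only trail of who touched what, and when
    log.write(f"{int(time.time())}\tencrypt\tuser=clerk01\trecord=12345\n")

assert cipher.decrypt(encrypted) == record  # only key holders can read the data back
```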
A January 2025 incident involving AI company DeepSeek illustrates the potential risks of entrusting sensitive information to AI platforms. Researchers discovered two unsecured databases containing over one million chat records in plaintext form, along with API keys and operational metadata. The exposure occurred through publicly accessible database instances that allowed arbitrary SQL queries without authentication. This type of incident demonstrates why healthcare organizations must exercise extreme caution when considering any AI platform for clinical use. Had this been healthcare data, it would have constituted a serious HIPAA violation with potential regulatory consequences.
Related: Safeguarding PHI in ChatGPT
Low-risk, practical, and compliant uses that do not involve PHI
While the use of standard ChatGPT with patient health information is strictly off-limits, healthcare organizations can explore several practical and compliant applications by ensuring that no PHI is ever inputted or generated within the tool. These low-risk uses can offer significant benefits in terms of efficiency and resource allocation.
- General content creation: ChatGPT can be a valuable tool for drafting templates for general patient education handouts. Think of creating the basic structure and wording for informational sheets on topics like "Understanding the Common Cold," "Tips for a Healthy Diet," or "Preparing for Your Colonoscopy." Similarly, it can assist in generating initial drafts for website FAQ sections on general health-related inquiries or even help in outlining blog posts on broad health and wellness topics for your organization's website. All content generated in this way must undergo thorough review and approval by qualified clinical staff before being used or distributed to patients.
- Administrative support: For internal purposes, ChatGPT can aid in generating initial drafts of administrative documents. This could include outlining internal policies on topics like employee conduct or scheduling procedures. It can also assist in creating staff training materials on non-patient-specific topics, such as cybersecurity awareness for general threats or guidelines for using internal communication platforms. Job descriptions for administrative roles or initial summaries of internal meetings based on generalized notes (avoiding any mention of patient cases or sensitive data) are other potential applications.
- Research and summarization of public data: ChatGPT's ability to summarize large amounts of text can be leveraged for internal briefings. For instance, staff could input the text from publicly available medical journal articles or reports from organizations like the CDC or WHO into ChatGPT to get a summarized overview of the key findings. The input here is strictly limited to information already in the public domain and does not involve any patient-specific data. URLs of publicly accessible health-related websites can also be used to generate summaries.
- Communication brainstorming: When planning public health campaigns or wellness program announcements, ChatGPT can be used as a brainstorming partner. By providing a general topic, you can ask it to generate different ideas for campaign slogans, outreach strategies, or content themes. This can help spark creativity and provide a starting point for your team's discussions.
- Language aid: If your organization needs to communicate with a diverse patient population, ChatGPT can assist in drafting multilingual versions of general health information. For example, once a patient education handout has been finalized and approved by clinical staff in English, ChatGPT could be used to generate a draft translation in another language, which would then need to be reviewed and verified for accuracy and cultural appropriateness by a qualified translator.
By focusing on tasks that avoid any involvement with PHI, healthcare organizations can explore the efficiency gains offered by LLMs while remaining compliant with HIPAA regulations. Staff can then offload some of the initial drafting and summarization work, freeing up their time and expertise for higher-level tasks directly related to patient care. According to Antony Bryant, a professor of informatics at Leeds Beckett University, “Large language models (LLMs) have demonstrated significant efficiency in non-clinical healthcare tasks such as drafting administrative documents, generating educational materials, and supporting research workflows. These applications reduce the workload on healthcare professionals, allowing them to focus more on patient care while maintaining operational excellence.”
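As an illustration of the public-data summarization use described above, the sketch below sends the text of a publicly available report to the OpenAI Python SDK and prints a draft summary. The model name, file name, and prompt wording are assumptions; the input must be limited to public-domain material with no PHI, and the output still needs staff review.

```python
# Minimal sketch: summarizing publicly available text (no PHI) with the OpenAI Python SDK.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Public-domain source only -- never patient records or anything identifying.
public_text = open("cdc_report_excerpt.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model your organization approves
    messages=[
        {"role": "system", "content": "Summarize this public health report for an internal staff briefing."},
        {"role": "user", "content": public_text},
    ],
)
print(response.choices[0].message.content)  # draft summary; still needs human review
```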
The dangers of misapplication
While the potential benefits of using tools like ChatGPT are enticing, particularly for streamlining workflows, ignoring or misunderstanding HIPAA compliance when dealing with patient health information can lead to severe consequences.
The most immediate danger lies in PHI exposure and HIPAA violations. Inputting any patient-identifiable information into a standard public platform like ChatGPT constitutes a data breach under HIPAA. This includes names, medical conditions, appointment details, or any other information that could directly or indirectly identify an individual and relates to their health or healthcare. Such actions, even seemingly harmless queries, violate the HIPAA Privacy Rule and can lead to substantial financial penalties, legal repercussions, and irreparable damage to an organization's reputation. A study titled Ethical Considerations of Using ChatGPT in Health Care states, “Privacy issues are an important aspect when using ChatGPT in health care settings. The collection, storage, and processing of sensitive patient information raise important privacy issues that need to be addressed to ensure the confidentiality and protection of personal data.” The fundamental issue is the lack of a BAA with these standard platforms, which is legally required for any entity handling PHI on behalf of a covered entity.
Beyond the legal ramifications, there are serious accuracy and safety risks. LLMs, while impressive in their ability to generate text, are not flawless. They can produce outputs that are plausible but factually incorrect – a phenomenon often referred to as "hallucinations." In a healthcare context, this could have dire consequences. For instance, a user might ask ChatGPT for dosage information for a medication based on a specific patient's condition. If the model generates incorrect information, and this advice is taken without verification, it could lead to serious harm. The above study also states, “Ensuring the accuracy, reliability, and validity of ChatGPT-generated content requires rigorous validation and ongoing updates based on clinical practice."
Furthermore, there are significant bias and equity concerns. AI models are trained on vast datasets, and if this data contains biases (e.g., underrepresentation of certain demographics), the model's outputs can perpetuate and even amplify these biases. In healthcare, this could lead to disparities in the quality of information provided or in the way certain patient groups are addressed. According to the study on ethics, “Biased training data can lead to biased output, and overreliance on ChatGPT can reduce patient adherence and encourage self-diagnosis."
Finally, the use of AI in healthcare raises ethical questions about transparency: should patients be told when AI was involved in generating the communications they receive? Accountability is also a major concern: who is responsible if an AI tool provides incorrect or harmful information? Perhaps most importantly, the inappropriate use of AI can significantly erode patient trust in healthcare providers and the systems they use. The ethics study notes, “Overreliance on artificial intelligence (AI) can undermine compassion and erode trust. Transparency and disclosure of AI-generated content are critical to maintaining integrity."
FAQs
What is metadata?
Metadata is data about data: extra information about a file or piece of information, such as its size, when it was created, who created it, or what type of file it is. For an email, metadata might include the sender's and recipient's email addresses, the subject line, and the time it was sent.
What are API keys?
API keys are like passwords that allow different computer programs or services to talk to each other securely. When one program wants to use the features or data of another program, it often needs to provide an API key to prove it's authorized to do so.
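For illustration, here is a minimal sketch, assuming Python and the requests library, of a program presenting an API key when it calls another service. The endpoint URL and bearer-token header are hypothetical; real services document their own formats.

```python
# Illustrative only: a program proving it is authorized by sending an API key.
import os
import requests  # pip install requests

api_key = os.environ["EXAMPLE_API_KEY"]  # keep keys out of source code and version control

response = requests.get(
    "https://api.example.com/v1/reports",            # hypothetical endpoint
    headers={"Authorization": f"Bearer {api_key}"},  # the key travels in a request header
)
print(response.status_code)  # 200 means the service accepted the key and returned data
```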
What are SQL queries?
SQL is a special language that computers use to manage and get information from databases (organized collections of data). An SQL query is like a question you ask the database using this language to find specific information, add new data, change existing data, or delete data.
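For illustration, here is a minimal sketch, using Python's built-in sqlite3 module, of an SQL query asked of a small database. The table and column names are made up for the example.

```python
# Illustrative only: running a simple SQL query against a local SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the example
conn.execute("CREATE TABLE visits (id INTEGER, clinic TEXT, visit_date TEXT)")
conn.execute("INSERT INTO visits VALUES (1, 'Downtown', '2025-01-15')")

# The SQL query: ask the database for every visit recorded at the Downtown clinic.
for row in conn.execute("SELECT id, visit_date FROM visits WHERE clinic = ?", ("Downtown",)):
    print(row)  # -> (1, '2025-01-15')
```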