At the 2025 Eduvos University Interdisciplinary Bootcamp in Pretoria, South Africa, I had the opportunity to speak about one of the most pressing questions facing the next generation of medical communicators: What does AI mean for the future of medical writing, and how can we safeguard the trust that patient communication depends on?
Medical writing has always been about clarity, accuracy, and responsibility. Now, we’re seeing AI tools step in to support those goals, but AI also introduces new challenges, particularly around data privacy, regulatory compliance, and ethical care.
As a writer with a background in neuroscience and a passion for simplifying complexity, I’ve witnessed firsthand how AI is reshaping the way we work, and why HIPAA compliant platforms like Paubox are needed.
For many of us, medical writing began with a love for language and a drive to make scientific information more accessible. That passion often developed through academic training; in my case, during a master's in neuroscience and years teaching health science students. The work is precise, highly regulated, and always centered on human lives.
Today, AI is helping medical writers manage increasing workloads by automating parts of the content creation process. Tools like ChatGPT, Claude, BioGPT, and PubMedGPT now support everything from early-stage literature searches to first drafts.
These tools dramatically speed up early-stage research and drafting. For instance, BioBERT and Med-PaLM offer natural language processing tailored to medical contexts, while SciSummary and Semantic Scholar help writers quickly surface new studies with relevant clinical findings.
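As a small illustration of that surfacing step, here is a minimal sketch against Semantic Scholar's public Graph API. The endpoint and fields come from its public documentation, the query string is just an example, and requests without an API key are rate limited.

```python
# Sketch: surface recent studies on a topic via Semantic Scholar's public
# Graph API. Minimal and illustrative; real use should add error handling
# and an API key for higher rate limits.
import json
import urllib.parse
import urllib.request

def search_papers(query: str, limit: int = 5) -> list[dict]:
    params = urllib.parse.urlencode({
        "query": query,
        "limit": limit,
        "fields": "title,year,venue,url",
    })
    url = f"https://api.semanticscholar.org/graph/v1/paper/search?{params}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp).get("data", [])

for paper in search_papers("AI hallucination clinical documentation"):
    print(f"{paper.get('year')}  {paper.get('title')}  {paper.get('url')}")
```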
On one hand, AI supports a more personalized and data-rich writing process. With access to electronic health records (EHRs), clinical trial databases, and medical journals, writers can generate more tailored content in less time.
On the other hand, the same technology raises major concerns:
A Multidisciplinary Review Exploring the Intersection of Artificial Intelligence and Clinical Healthcare puts it plainly: “One of the major concerns with AI in healthcare is the lack of trust and transparency in the decision-making process.”
AI lacks human compassion. It can predict outcomes but cannot replicate empathy, cultural nuance, or the physician-patient bond that builds trust.
AI doesn’t eliminate our ethical responsibilities, so when using AI for sensitive health information, we must adhere to privacy laws like HIPAA in the US, POPIA in South Africa, and GDPR in the EU.
Using a HIPAA compliant solution, like Paubox, can help meet those obligations.
As the 2018 Verizon Protected Health Information Data Breach Report states, “Healthcare is the only industry in which internal actors are the biggest threat to an organization, with 58% of incidents involving insiders.”
Therefore, we must improve our security practices, especially as AI tools become more embedded in day-to-day operations.
For AI to enhance healthcare communication, organizations must improve their internal practices:
Anyone who handles PHI, including medical writers, must adhere to HIPAA regulations. AI-generated content should therefore be reviewed and delivered using secure solutions like Paubox to prevent data breaches; a quick screening pass, sketched after these practices, can also flag obvious identifiers before a draft is pasted into a third-party tool.
Patients must know how their data is used, how it’s protected, and what AI tools may be involved. If there is a PHI breach or misuse, HIPAA-covered entities must promptly notify the affected individuals and, in some cases, the media.
Medical writers, developers, and clinicians must uphold ethical principles of beneficence, non-maleficence, and patient autonomy throughout the content lifecycle. AI tools can assist, but they should never replace human review, especially in emotionally sensitive or high-risk topics.
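To make that screening pass concrete, here is a minimal sketch in Python. The regex patterns, including the MRN format, are illustrative assumptions; HIPAA's Safe Harbor method covers 18 identifier categories, so treat this as a first-pass check, not as de-identification.

```python
# Sketch: flag obvious identifiers in a draft before it is shared with a
# general-purpose AI tool. The patterns (including the MRN format) are
# illustrative assumptions, not a complete de-identification pipeline.
import re

PHI_PATTERNS = {
    "SSN": r"\d{3}-\d{2}-\d{4}",
    "phone": r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}",
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "date": r"\b\d{1,2}/\d{1,2}/\d{4}\b",   # e.g., a date of birth
    "MRN": r"MRN[:\s]*\d{6,10}",            # assumed record-number format
}

def flag_phi(text: str) -> list[tuple[str, str]]:
    """Return (category, match) pairs for anything that looks like PHI."""
    hits = []
    for category, pattern in PHI_PATTERNS.items():
        hits.extend((category, m) for m in re.findall(pattern, text, re.IGNORECASE))
    return hits

draft = "Patient DOB 04/12/1987, MRN: 00482913, reachable at (555) 301-2211."
for category, match in flag_phi(draft):
    print(f"Possible PHI [{category}]: {match}")
```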
While AI tools are getting better at generating coherent, fact-like text, they are still prone to "hallucinations": convincing but false or misattributed statements. Writers should be alert to red flags like citations that don't resolve to real publications, statistics with no identifiable source, and confident claims no reference supports.
To catch these, writers must cross-check AI-generated content against trusted sources. Medical content writing, particularly for regulated industries, demands a level of rigor and context that generative models can't guarantee on their own.
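Part of that cross-checking can be automated. The sketch below uses the public NCBI E-utilities esummary endpoint to confirm that PubMed IDs cited in a draft resolve to real articles; the "PMID: 12345678" citation format and the placeholder draft are assumptions, and a resolving PMID only rules out the most blatant fabrication, since the attributed claim still needs a human check against the paper itself.

```python
# Sketch: confirm that PubMed IDs cited in AI-generated text resolve to real
# articles via the public NCBI E-utilities esummary endpoint. A resolving
# PMID is necessary, not sufficient: the claim attributed to it still needs
# a human check against the actual paper.
import json
import re
import urllib.request

ESUMMARY_URL = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
    "?db=pubmed&retmode=json&id={pmid}"
)

def pubmed_title(pmid: str) -> str | None:
    """Return the article title for a PMID, or None if it doesn't resolve."""
    with urllib.request.urlopen(ESUMMARY_URL.format(pmid=pmid), timeout=10) as resp:
        data = json.load(resp)
    record = data.get("result", {}).get(pmid)
    if not record or "error" in record:
        return None
    return record.get("title")

def check_citations(draft: str) -> None:
    # Assumes citations appear as "PMID: 12345678"; adjust to your house style.
    for pmid in re.findall(r"PMID:?\s*(\d{6,8})", draft):
        title = pubmed_title(pmid)
        if title:
            print(f"PMID {pmid} resolves to: {title}")
        else:
            print(f"PMID {pmid} did not resolve -- possible hallucination")

# Placeholder citation for demonstration only.
check_citations("Smith et al. report improved adherence (PMID: 12345678).")
```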
AI does not replace us as writers but rather changes how we work. It automates routine tasks, so we can focus on communicating with clarity, compassion, and ethical integrity. With this freedom also comes the responsibility of meeting regulatory requirements. Additionally, our writing must reflect diverse experiences and support the human relationships at the heart of healthcare.