Healthcare has seen a sharp rise in data breaches over the past few years, and the trend is a growing concern for security professionals and organizations. The expanding use of artificial intelligence (AI) across many fields, including cybercrime, is partly to blame. While AI can improve many aspects of healthcare, it also brings new risks, especially to data security.
Even with reporting requirements from agencies like the Department of Health and Human Services (HHS), many breaches never come to public attention. The HHS Data Breach Portal only lists incidents affecting 500 or more individuals, which is just scratching the surface.
In 2020, for example, there were 609 reported breaches involving data loss, along with more than 63,000 smaller breaches affecting fewer than 500 individuals each. The issue is bigger than it seems on paper.
Read more: Healthcare data breaches: Insights and implications
Understanding AI data breaches
Tony Fields, President of Cleartech Group, discusses the growing concern of AI data breaches. "AI is rapidly transforming industries, offering businesses innovative solutions and unparalleled automation capabilities," Fields states, adding that with this remarkable progress comes an escalating concern: "The vast amounts of data AI collects, analyzes, and utilizes make it a prime target for cybercriminals."
Fields identifies some of the reasons for the rise in AI data breaches: "AI adoption is skyrocketing, and with it, the number of potential entry points for attackers. Each new AI system or model we integrate into our operations adds another layer of complexity and, unfortunately, another potential vulnerability." He also notes that AI models are often complex and opaque, making it difficult to understand exactly how they work or where vulnerabilities might lie.
To address these concerns, Fields advises taking proactive steps, such as implementing strong data governance, integrating security into the AI development process, and using only systems your team understands thoroughly. "AI offers immense benefits, but ignoring security risks can leave your company vulnerable," he warns, reiterating the need for a trusted partner in building a strong defense against potential breaches.
How AI is changing the game for cybercriminals
AI is making cybercrime more sophisticated, giving attackers new ways to carry out their plans.
- Smarter phishing: Phishing attacks are becoming harder to spot, with AI helping scammers create emails that look more legitimate. These tools can generate messages that mimic legitimate emails and even use data to craft personalized messages that target specific people. The added level of customization makes these attacks much more convincing, so organizations need to be on their toes.
- More advanced malware: AI is also shaking up malware development, making threats more adaptable. For example, AI-powered malware can change its code on the fly to avoid detection by traditional security systems or adjust its approach based on a target's defenses. This makes it harder for healthcare providers to keep up unless they adopt new security strategies.
The human side of security gaps
Technology isn’t the only factor—people’s actions often create openings for data breaches.
- Not enough security awareness: Employees might not know how to spot phishing or other threats. Regular training can help them recognize warning signs and take the right steps. Running simulated phishing attacks is another way to gauge how well-prepared employees are.
- Exploiting psychology: Cybercriminals often use tactics like urgency and impersonation to manipulate people. They may pretend to be someone within the organization or create a sense of urgency to push individuals into making snap decisions. Knowing these tricks can make training more effective.
Why healthcare is becoming a bigger target
As healthcare relies more on technology, the number of potential weak spots is growing.
- More connected devices: The boom in internet of things (IoT) devices—like smartwatches, fitness trackers, and remote monitoring tools—has opened up new ways for attackers to get in. These devices need strong security measures to keep them from becoming entry points for hackers.
- Cloud storage risks: Moving to cloud-based solutions introduces new challenges for data protection. While cloud storage offers convenience, it also means that sensitive data could be exposed if security isn't tight. Organizations need to know their responsibilities in keeping cloud-stored data safe, especially when sharing that duty with service providers (a simple configuration check is sketched after this list).
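To make the shared-responsibility point concrete, here is a minimal sketch of auditing two settings that typically fall on the customer's side of that split: default encryption and public-access blocking on an AWS S3 bucket, using boto3. The bucket name is hypothetical, and this illustrates the idea rather than serving as a complete cloud security review.

```python
# Minimal sketch: checking two customer-side responsibilities on an S3 bucket,
# default encryption and public-access blocking, using boto3.
# The bucket name is hypothetical; credentials and region come from the environment.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-phi-archive"  # hypothetical bucket name

# 1. Is server-side encryption configured by default?
try:
    enc = s3.get_bucket_encryption(Bucket=bucket)
    rule = enc["ServerSideEncryptionConfiguration"]["Rules"][0]
    print("Default encryption:", rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])
except ClientError:
    print("WARNING: no default encryption configured")

# 2. Is all public access blocked?
try:
    pab = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    print("Public access fully blocked:" , all(pab.values()))
except ClientError:
    print("WARNING: no public access block configured")
```

Checks like these can run on a schedule so that a misconfigured bucket is flagged before it becomes a reportable breach.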
Navigating the rules around data protection
Regulatory guidelines play a big part in shaping how healthcare organizations handle data security.
- HIPAA and data protection: The Health Insurance Portability and Accountability Act (HIPAA) sets standards for safeguarding patient information, and organizations must have measures in place to protect electronic protected health information (ePHI). Reporting breaches is also a requirement, which underscores the need to follow HIPAA's rules closely.
- New regulations on the horizon: As threats change, so do the rules. Some states are introducing their own data protection laws, which makes compliance trickier. Healthcare providers with international operations face even more complex requirements. Staying updated on these changes is a must.
In the news: HHS finalizes regulations on patient care decision tools, including AI
Steps healthcare organizations can take to stay ahead
To fend off AI-driven attacks, healthcare organizations must take a more active role in defending their data.
- Using AI for defense: Just like attackers use AI, healthcare organizations can use it to boost security. Tools that analyze user behavior can detect unusual activity, while machine learning can help spot threats in real time (see the sketch after this list). Investing in AI-based defenses can give organizations an edge.
- Getting ready for incidents: A solid plan for dealing with breaches is fundamental. Conducting regular drills and involving different departments in planning can help ensure a quick, coordinated response if a breach occurs.
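As a concrete illustration of the "AI for defense" point above, here is a minimal sketch of behavior-based anomaly detection using scikit-learn's Isolation Forest. The session features (login hour, records accessed, failed logins) and the sample values are hypothetical, chosen only to show the pattern, not a production detection pipeline.

```python
# Minimal sketch: flagging unusual EHR access patterns with an Isolation Forest.
# The features and sample data below are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one user session: [login hour, records accessed, failed logins]
normal_sessions = np.array([
    [9, 12, 0], [10, 8, 0], [14, 15, 1], [11, 10, 0],
    [13, 9, 0], [15, 14, 0], [9, 11, 1], [16, 13, 0],
])

# Train on historical activity assumed to be mostly legitimate.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# Score new sessions; a prediction of -1 marks an outlier worth investigating.
new_sessions = np.array([
    [10, 11, 0],    # typical daytime access
    [3, 450, 7],    # 3 a.m. bulk access with repeated failed logins
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY - review" if label == -1 else "normal"
    print(session, status)
```

In practice, alerts from a detector like this would feed directly into the incident response plan described above, so unusual access is investigated rather than just logged.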
Go deeper:
Artificial Intelligence in healthcare
What’s next for healthcare data security?
The road ahead will likely bring more sophisticated cyber threats, and organizations need to stay flexible.
- Continuous learning: As attackers get more advanced, it’s beneficial to keep up with training programs that prepare staff for new threats. Sharing threat intelligence with other organizations can also strengthen defenses.
- Innovation is key: AI may be a double-edged sword, but it also opens up opportunities for better security tools. Investing in new technologies and research can help healthcare providers protect their data more effectively.
In the news
On February 20, 2024, U.S. House Speaker Mike Johnson and Democratic Leader Hakeem Jeffries jointly announced the creation of a bipartisan Task Force on Artificial Intelligence (AI). The task force is a strategic initiative to position America as a leader in AI innovation while addressing the complexities and potential threats posed by this transformative technology.
The task force was created in response to the nuanced challenges and opportunities highlighted by experts like James Manyika. By bringing together regulatory authorities, policymakers, and industry experts, it aims to understand how to use AI for groundbreaking advancements in healthcare, such as improving diagnostic accuracy and treatment efficiency, while also addressing concerns like privacy, bias, and the responsible use of technology.
Read more: U.S. House launches bipartisan AI task force
FAQs
Does HIPAA apply to the use of artificial intelligence and cybersecurity in healthcare?
Yes, HIPAA (Health Insurance Portability and Accountability Act) applies to the use of artificial intelligence and cybersecurity in healthcare. Any technology or process that involves the storage or transmission of protected health information (PHI) must comply with HIPAA regulations to ensure patient data privacy and security.
Do I need consent to use artificial intelligence and cybersecurity in healthcare?
In most cases, obtaining patient consent is necessary when using artificial intelligence and cybersecurity tools in healthcare, especially if these technologies involve the processing or analysis of patient data. Patient consent ensures transparency and compliance with ethical and legal standards, empowering patients to make informed decisions about their healthcare information.
What can I use to implement Artificial Intelligence and cybersecurity in healthcare?
To implement artificial intelligence and cybersecurity in healthcare, organizations can use advanced AI algorithms, machine learning models, data encryption technologies, and secure communication protocols. These tools help healthcare providers analyze large datasets, detect anomalies, and protect sensitive patient information from cyber threats, ultimately enhancing the efficiency and security of healthcare operations.
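As one concrete illustration of the encryption piece mentioned above, here is a minimal sketch of encrypting a PHI field before it is stored, assuming the Python cryptography package's Fernet recipe. The patient record is fictional, and key management (secure storage, rotation, access control) is deliberately omitted, so treat this as a starting point rather than a complete solution.

```python
# Minimal sketch: symmetric, authenticated encryption of a PHI field before storage,
# using the "cryptography" package's Fernet recipe.
# Key handling is simplified; in practice the key would live in a key management service.
from cryptography.fernet import Fernet

# Generate (or load) a data-encryption key.
key = Fernet.generate_key()
fernet = Fernet(key)

# Fictional patient note, for illustration only.
phi = b"Patient: Jane Doe | MRN 000000 | Dx: hypertension"

token = fernet.encrypt(phi)          # ciphertext safe to store at rest
restored = fernet.decrypt(token)     # requires the same key

assert restored == phi
print(token[:16], b"...")
```

In a real deployment, the key itself would be protected by a key management service and every decryption logged, so access to sensitive records can be audited after the fact.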