Fighting AI data manipulation in health apps

Health apps have revolutionized personal healthcare, helping millions manage their fitness and medical routines. Yet as these tools grow in popularity, they also become prime targets for malicious actors who exploit them.

Cybersecurity experts Sina Yazdanmehr and Lucian Ciobotaru of Aplite reveal that malicious artificial intelligence (AI) can manipulate health app data, jeopardizing user health and safety.

For their research, Yazdanmehr and Ciobotaru focused on vulnerabilities in Google Health Connect, a platform that aggregates health data from various apps and showcases them on users' Google Fit dashboards. 

The researchers created malware capable of extracting this data and sending it to a hostile AI-driven application. Next, this app would automatically generate fabricated data relevant to specific medical conditions. 

For example, it generated false blood sugar readings for a diabetes management app user, potentially leading to incorrect treatment recommendations.

The implications of this manipulation are staggering, especially if users and their healthcare providers don’t realize it. As Ciobotaru indicated, "It's very hard as a doctor to contradict something that the patient sees daily."

The trust users put in their health apps and devices, plus the complexity of AI manipulation, creates just the right environment for misinformation.

While the study focused on Google Health Connect, these risks could extend to any app or device that collects or shares health data. Yazdanmehr and Ciobotaru warned that malicious AI could exploit vulnerabilities across the digital health ecosystem, putting patients, providers, and the broader healthcare system at risk.

If a heart rate monitor sends falsified readings, it could cause unnecessary panic; if a fitness tracker manipulates calorie counts, it could lead users to adopt harmful diets. On a larger scale, compromised health apps could erode trust in digital health tools, derailing progress in a field with immense potential to improve lives.

How, then, do we mitigate such threats? 

Developers, users, and providers must each bear their share of responsibility. Yazdanmehr insisted on verifying the integrity of health data, stating, "Always check the source of data, and make sure what you receive and consume is from a trustworthy source and application."

Developers should build security into each stage of the design and deployment process. More specifically, they must encrypt data in transit and at rest, periodically audit their systems for vulnerabilities, and implement safeguards to detect and block malicious activity. Additionally, users should be well-informed about how their data is collected, processed, and shared.
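One such safeguard is to sign each health record so a receiving app can detect data that was altered in transit. The sketch below is a minimal, illustrative example using Python's standard `hmac` module; the record fields and shared-key setup are assumptions for demonstration, not part of Google Health Connect or any specific health platform's API.

```python
# Illustrative sketch: HMAC-signing health readings for tamper detection.
# The shared key and record schema are hypothetical; real deployments would
# provision keys securely (e.g., per device) rather than hard-code them.
import hashlib
import hmac
import json

SHARED_KEY = b"example-shared-secret"  # assumption: securely provisioned in practice

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 signature computed over the canonical JSON record."""
    payload = json.dumps(reading, sort_keys=True).encode()
    signed = dict(reading)
    signed["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return signed

def verify_reading(reading: dict) -> bool:
    """Recompute the signature over the record (minus the signature field)
    and compare in constant time."""
    record = {k: v for k, v in reading.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, reading.get("signature", ""))

signed = sign_reading({"metric": "blood_glucose", "value_mg_dl": 95, "ts": 1700000000})
assert verify_reading(signed)

# A manipulated reading (e.g., an AI-fabricated blood sugar value) fails verification.
tampered = dict(signed, value_mg_dl=180)
assert not verify_reading(tampered)
```

Integrity checks like this do not stop malware from fabricating data at the source, but they prevent silent modification between a trusted app and its consumers, which is exactly the class of manipulation the researchers demonstrated.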

Providers must be cautious when integrating data from health apps into clinical decision-making: cross-reference it with verified medical records and encourage patients to report discrepancies.

Patients must remain vigilant and only use apps with strong reputations for security and privacy.

Furthermore, secure communication platforms, like Paubox email, can help patients and providers securely share sensitive health information. HIPAA compliant emails use advanced encryption, mitigating the risk of unauthorized access to patient data.

More specifically, such platforms can provide an additional layer of defense against AI-driven manipulation by preserving the integrity and credibility of the data. So, while malicious AI is a sophisticated threat, combining a secure platform with vigilant users helps minimize its risks.

As Yazdanmehr aptly concluded, "We should make this environment secure and safe, so everybody can use it without any problem."

FAQs

What types of information can HIPAA compliant emails include?

Providers can use HIPAA compliant emails to send sensitive health information, like education materials, appointment reminders, treatment plans, and other medical communications.

Can AI be integrated into HIPAA compliant emails?

Yes, AI-powered features can be integrated with HIPAA compliant emailing platforms, like Paubox, to automate processes like patient consent management and sending personalized emails while maintaining HIPAA compliance.

Are there any limitations when using AI in HIPAA compliant emails?

Yes, healthcare providers must ensure that AI-powered features comply with HIPAA regulations and industry best practices for data security and privacy. Additionally, providers should evaluate the reliability of AI algorithms to avoid potential risks or compliance issues.

Read also: HIPAA compliant email API