How facial recognition technology is changing healthcare privacy
Caitlin Anthoney Nov 18, 2024 7:02:06 PM
Facial recognition technology (FRT) is transforming healthcare with applications that promise earlier diagnoses and more personalized treatments. New tools like Face2Gene can identify genetic patterns and predict health risks, holding groundbreaking potential.
However, concerns over inherent bias and patient privacy are also increasing.
Addressing bias
“Careful attention should be paid to the accuracy and validity of FRT used in health care applications,” explains an article published in the AMA Journal of Ethics on the ethical implications of using facial recognition technology in health care.
For example, when the images used to train software are not from a racially diverse sample, “the system may produce racially biased results.”
According to Nelly Matine, PhD candidate at Wits University, specializing in facial identification, “This lack of diversity has led to reduced accuracy and reliability of facial analysis technologies.”
Matine further suggests, “To mitigate this issue, [we must] prioritize the inclusion of diverse and representative samples in training datasets and adopt more inclusive data collection practices. [This will allow us to] work towards creating algorithms that perform equitably across all demographic groups, enhancing the fairness and generalisability of facial analysis technologies.”
Moreover, if we do not mitigate these biases, such tools could misdiagnose or underdiagnose conditions within minority populations, exacerbating existing healthcare inequalities.
Addressing privacy concerns
Under the Health Insurance Portability and Accountability Act (HIPAA), facial templates and comparable images are classified as biometric data and are therefore protected health information (PHI).
The law mandates securing individuals’ PHI, but this requirement applies only to covered entities and their business associates. While Face2Gene is HIPAA compliant, consumer-facing tools used outside clinical environments are not bound by the law.
Those tools, though useful in diagnostic settings, are not required to protect PHI, exposing consumers to privacy breaches, like having their data shared or monetized without their consent.
So, how can we create and distribute FRTs that are fair, accurate, and respect patient rights?
- FRTs must be trained on diverse, representative datasets, with continuous ethical oversight to address bias and potential misuse.
- Developers must apply privacy protections, like those outlined in HIPAA, to secure sensitive health data, including PHI.
- Developers should also engage patients, clinicians, and communities so FRT enhances healthcare outcomes while safeguarding patient rights.
Read also: Using AI for HIPAA compliance
FAQs
What is HIPAA?
The Health Insurance Portability and Accountability Act (HIPAA) is a federal law that establishes national standards for safeguarding protected health information (PHI). HIPAA mandates that healthcare providers, insurers, and business associates protect patients' PHI during transit and at rest.
Does HIPAA apply to all facial recognition technology?
No. HIPAA applies only to facial recognition technology used by covered entities and their business associates, not to consumer-facing tools used outside the clinical environment.
However, using HIPAA compliant facial recognition technology protects consumers’ biometric data, upholding their privacy and security.
What is a covered entity under HIPAA?
A covered entity, as defined by HIPAA, is any healthcare provider, health plan, or healthcare clearinghouse that transmits any health information in electronic form.