2 min read
Confronting racial bias in Artificial Intelligence (AI)
Caitlin Anthoney Oct 15, 2024 5:55:54 PM
Generative artificial intelligence is driving a new revolution in our world. However, Ashwini K.P., UN Special Rapporteur at the Human Rights Council, states, “I am deeply concerned about the rapid spread of the application of artificial intelligence across various fields. This is not because artificial intelligence is without potential benefits. In fact, it presents possible opportunities for innovation and inclusion.”
So, while many people still assume that technology is neutral, Ashwini challenges this perception, warning that AI “has the potential to drive increasingly seismic societal shifts in the future.”
Take predictive policing, for instance, where AI algorithms predict future crimes based on historical data. In practice, this increases police deployment in neighborhoods that have been over-policed for decades, perpetuating racial bias.
Ashwini elaborates that racial bias also operates in healthcare and education. More specifically, algorithms trained on biased data points can predict that racial minorities will lag in academic or professional performance. Ultimately, such digital profiling intensifies exclusion and deepens existing inequalities.
Additionally, UN Human Rights Chief Volker Türk urges caution in sensitive sectors, such as law enforcement: “In areas where the risk to human rights is particularly high… the only option is to pause until sufficient safeguards are introduced.”
Therefore, we need better safeguards and regulations so that AI isn’t used for discrimination but instead serves as a tool of inclusion. With regulatory frameworks grounded in human rights law, we can prevent AI from reinforcing racial discrimination and harming marginalized groups.
Ashwini calls on governments, stating, "Placing human rights at the center of how we develop, use, and regulate technology is absolutely critical to our response to these risks."
These frameworks would also extend to how organizations handle protected health information (PHI) with AI technologies. For example, healthcare providers who use AI tools to assess patients’ health risks should note that these tools can incorporate race correction factors, potentially leading to unequal patient treatment.
Furthermore, these AI tools must adhere to the regulations outlined in the Health Insurance Portability and Accountability Act (HIPAA), which safeguards patients’ PHI.
Read also: Addressing racism in mental health services with HIPAA compliant email
FAQs
Can providers use regular emails for patient communication?
No, regular email services, like Gmail and Outlook, are not secure by default. Instead, providers must use a HIPAA compliant email platform, like Paubox, to safeguard patients’ protected health information (PHI).
What makes an email HIPAA compliant?
Providers must use a HIPAA compliant email solution, like Paubox, to safeguard patients’ PHI. HIPAA compliant emails offer encryption, access controls, and other security measures, preventing unauthorized access and potential breaches.
Do providers need patient consent for HIPAA compliant emails?
Yes, providers must get explicit patient consent before sending PHI via HIPAA compliant emails.
Learn more: A HIPAA consent form template that's easy to share