Avoid Unintended Bias: How to Responsibly Use AI/Machine Learning to Address Health Disparities
Artificial intelligence (AI) and machine learning (ML) are increasingly used in healthcare settings, with applications in decision support, patient care, and disease management. If the underlying data on which AI depends is inherently biased or lacks diverse representation of populations, the resulting algorithms cannot produce accurate outputs and will further widen gaps in equitable care.
This activity discussed how AI and ML can be used to address social determinants of health, create more equitable healthcare solutions, and improve health outcomes.
- Identify approaches to create inclusive data sets that produce positive health outcomes for all patients.
- Recognize processes that can result in biased data.
- Illustrate how providers can properly use innovative technologies, such as AI and ML, without compromising patient safety or experience.
- Distinguish the collaborations necessary to ensure that future AI and ML technology integrates optimally and produces accurate data that can improve health outcomes, and describe how to initiate such collaboration within an organization.
- Moderator: Michael Currie, Chief Health Equity Officer, UnitedHealth Group
- William Gordon, MD, Senior Advisor – Data and Technology, Center for Medicare and Medicaid Innovation
- Kevin Larsen, MD, FACP, Senior VP, Clinical Innovation, Optum
- Elia Lima-Walton, MD, Director Data Science and Clinical Analytics, Elsevier