Artificial intelligence (AI) has the potential to drastically improve patient outcomes. AI uses algorithms to assess data from the world, build a representation of that data, and use that representation to draw inferences. From handling administrative tasks to actively diagnosing disease, AI could make treatment faster and more effective in clinical settings, especially as the technology continues to improve.
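To make that assess-represent-infer pipeline concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is an illustrative assumption: the data are synthetic, and the feature count and model choice stand in for whatever a real clinical system would use.

```python
# A toy sketch of the assess -> represent -> infer pipeline described above.
# All data here are synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Assess data from the world (simulated patient features and outcomes).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Build a representation of that data (the fitted model's parameters).
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3. Use the representation to make inferences about new cases.
predictions = model.predict(X_test)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```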
However, AI can suffer from bias, which has striking implications for health care. The term “algorithmic bias” names this problem. It was defined by the co-directors of the AI for Health Care: Concepts and Applications program at the Harvard T.H. Chan School of Public Health: Trishan Panch, a primary care physician, president-elect of the HSPH Alumni Association, and co-founder of the digital health company Wellframe; and Heather Mattie, a lecturer in biostatistics and co-director of the health data science master’s program.
In their 2019 paper in the Journal of Global Health, “Artificial intelligence and algorithmic bias: implications for health systems,” Panch, Mattie, and Rifat Atun define algorithmic bias as the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation and amplifies inequities in health systems.
In other words, algorithms in health care technology don’t simply reflect back social inequities but may ultimately exacerbate them. What does this mean in practice, how does it manifest, and how can it be counteracted?
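As one concrete, hypothetical sketch of how such bias can manifest (this is not an analysis from the paper), consider a simulation in which two groups have identical underlying clinical need, but the historical record captured one group’s need less often. The group labels, the 60% recording rate, and the model are all invented for illustration.

```python
# A hypothetical illustration of algorithmic bias: the model is faithful to
# its training data, but that data encodes unequal access to care, so the
# model reproduces the gap at scale. All numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups with the same distribution of true clinical need.
group = rng.integers(0, 2, size=n)
need = rng.normal(size=n)

# Historical labels reflect unequal access: high need was recorded for
# group 1 only 60% as often as for group 0 (an assumed rate).
recorded = rng.random(n) < np.where(group == 1, 0.6, 1.0)
label = ((need > 0.5) & recorded).astype(int)

# Train on the biased historical record.
X = np.column_stack([group, need])
model = LogisticRegression(max_iter=1000).fit(X, label)

# At deployment, equal need no longer yields equal predicted priority.
priority = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted priority = {priority[group == g].mean():.3f}")
```

In this toy setting, the model learns to assign group 1 lower priority despite identical need, because group membership predicts the under-recorded labels. If such outputs then determined who received care, the next round of training data would be even more skewed, which is the compounding effect the definition above describes.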