Sunday, December 10, 2023

"The Doctor will see you now" - From Emergency Medical Hologram to Algorithmovigilience


by Rachel Buchleiter, PhD student

Fair use, https://en.wikipedia.org/w/index.php?curid=4957180


The future is now?

It is easy to think that because a computer is not human, it is exempt from the limitations and constructs of human society: it should be impartial, “color-blind”, unbiased, and fair in all its computations. The reality is that computers were designed by humans, and these machines therefore inherit some of the same flaws. One example is the phenomenon of bias in computer-generated algorithms, known broadly as artificial intelligence (AI) or machine learning (ML). For more information on how ML fits into AI (and other buzzwords), check out this explanation (Brown, 2021). When the term "artificial intelligence" was coined in 1956, some speculated that it would produce autonomous robots capable of all sorts of things, including replacing humans in some capacity. In reality, at least for now, the technology is nowhere near that. Instead, AI is far more useful for automating routine tasks and decisions, reducing human workload, but without the depth or range showcased in science fiction. Along with this realistic application of AI come some serious ethical dilemmas to confront, including the bias inherent in the algorithms generated by ML (Broussard, 2023; Igoe, 2021).

 

Some of the places where this bias has become apparent:  

  • Facial recognition software - a Black student building her Mirror project, which relied on facial recognition, had to wear a white mask before the software would detect her face (Broussard, 2023)
  • Amazon hiring algorithm (Dastin, 2018)
  • Cardiovascular risk prediction models developed largely from data on Caucasian patients (Igoe, 2021)



Why is this?

As a general rule of thumb, algorithms are only as good as the datasets they are built upon, a principle often summarized as "garbage in, garbage out." The main sources of healthcare-related data are electronic health records (EHRs) and insurance claims. Lack of interoperability and information exchange, or inconsistent care, often results in incomplete health records: when a patient receives care at different locations, using different EHRs, the patient's complete record cannot always be analyzed for a holistic view of treatment. Due to systemic inequities, these datasets also typically contain more Caucasian patients, leaving minority and uninsured patients underrepresented and perpetuating the marginalization of these populations (Gervasi et al., 2022; Igoe, 2021). The toy sketch below illustrates the mechanism.
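To make "garbage in, garbage out" concrete, here is a minimal Python sketch using synthetic data and scikit-learn. The groups, sample sizes, and feature relationship are invented for illustration, not taken from any cited study: a model trained on records where one group is badly underrepresented mostly learns the majority group's pattern and performs noticeably worse on the minority group.

# A minimal sketch of "garbage in, garbage out": when one group is
# underrepresented in training data, the model mostly learns the
# majority group's pattern. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'patients': one feature; the risk threshold differs by group."""
    x = rng.normal(0.0, 1.0, size=(n, 1))
    y = (x[:, 0] + shift > 0).astype(int)  # group-specific relationship
    return x, y

# Training set: 950 records from group A, only 50 from group B
xa, ya = make_group(950, shift=0.0)
xb, yb = make_group(50, shift=1.0)
X = np.vstack([xa, xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh, equally sized samples from each group
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(xt, yt), 3))
# Typical result: high accuracy for group A, noticeably lower for group B,
# because the fitted decision boundary sits near group A's threshold.

The exact numbers vary with the random seed, but the gap between the two groups is the point: the model is not malicious, it simply never saw enough of group B to learn its relationship.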

 

Healthcare specific implications

Major tech giants not traditionally thought of as healthcare companies are popping up in healthcare discussions, including Google, Microsoft, IBM, and Apple (Powles & Hodson, 2017). This influx of resources into the long pursuit of equity in healthcare is much needed, but it should proceed with caution. The main aims of the movement are to improve the accuracy of medical diagnoses, predict diseases, and assist healthcare professionals in reaching the quadruple aim: improving quality and efficiency, reducing costs, and improving the clinician and patient experience (Parashar et al., 2021). Given the demonstrated examples of bias within diagnosis and treatment algorithms, it is imperative to design unbiased systems and promote equity in all phases of healthcare.

 

Steps to mitigate bias

Algorithmovigilance "refers to scientific methods and activities relating to the evaluation, monitoring, understanding and prevention of adverse effects of algorithms" (Gervasi et al., 2022). These principles must be driven by both industry and government agencies. The United States and the European Union have each proposed guidelines for diversity, nondiscrimination, fairness, and equity in ML. In response to the recent wildfire-like spread of AI, the White House has published the Blueprint for an AI Bill of Rights. Numerous other agencies provide guidance to improve the accuracy and fairness of algorithms, although both remain highly dependent on data quality. In any AI development effort, and especially in healthcare, bias should be addressed at every stage of the process: the population identified for data collection, the gathering of a diverse sample dataset, the format in which data is stored, all the way through to the output and analysis of data (Gervasi et al., 2022). It also means diverse representation on the development team to provide input along the way (Igoe, 2021). While not a silver bullet, an appropriately diverse dataset for algorithm training is foundational, and ongoing monitoring of a deployed model across groups, sketched below, keeps vigilance from being a one-time exercise.
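As a hedged illustration of the monitoring half of that definition, the sketch below stratifies a binary classifier's error rates by demographic group. The record format, group labels, and toy predictions are assumptions made for this example, not part of the Gervasi et al. framework.

# An illustrative algorithmovigilance activity: audit a deployed binary
# classifier's predictions stratified by demographic group, so that a
# disparity in error rates surfaces for human review.
from collections import defaultdict

def per_group_rates(records):
    """records: iterable of (group, y_true, y_pred) for a binary classifier.
    Returns {group: (false_negative_rate, false_positive_rate)}."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            c["fn"] += (y_pred == 0)  # a missed true case
        else:
            c["neg"] += 1
            c["fp"] += (y_pred == 1)  # a false alarm
    return {g: (c["fn"] / max(c["pos"], 1), c["fp"] / max(c["neg"], 1))
            for g, c in counts.items()}

# Toy audit: does the model miss true cases more often for one group?
audit = per_group_rates([
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
])
for group, (fnr, fpr) in sorted(audit.items()):
    print(f"{group}: FNR={fnr:.2f} FPR={fpr:.2f}")  # flag large gaps for review

A routine audit like this, run on fresh predictions after deployment, is the kind of check that would surface a model quietly missing true cases far more often for one group than another.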

The path toward equity does not end with the data, however; it is imperative to be judicious in selecting applications for AI. In her book More than a Glitch, Meredith Broussard argues against the use of AI in policing, for example, because predictive analytics in crime prevention has unintended consequences. The same exercise should apply to the development of ML algorithms in healthcare. What are the potential benefits of using ML for a certain diagnosis? Who or what might be overlooked by this algorithm? Is this treatment plan equitable for all patients? Is this an ethical application of AI?

The humans who designed AI in the first place must strive to end the cycle of bias and inequity in healthcare with each technological advancement. This shift, from assuming computers are objective to practicing algorithmovigilance, is critical to reaching the goal of equitable healthcare for all.

 

References:

Broussard, M. (2023). More than a glitch: confronting race, gender and ability bias in tech. MIT Press.

Brown, S. (2021, April 21). Machine learning, explained. MIT Sloan School of Management, Ideas Made to Matter. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters: Retail. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Gervasi, S. S., Chen, I. Y., Smith-McLallen, A., Obermeyer, Z., Vennera, M., & Chawla, R. (2022). The potential for bias in machine learning and opportunities for health insurers to address it. Health Affairs, 41(2). https://doi.org/10.1377/hlthaff.2021.01287

Igoe, K. J. (2021, March 12). Algorithmic bias in health care exacerbates social inequities – how to prevent it. Harvard T.H. Chan School of Public Health. https://www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care/

Parashar, G., Chaudhary, A., & Rana, A. (2021). Systematic mapping study of AI/machine learning in healthcare and future directions. SN Computer Science, 2, 461. https://doi.org/10.1007/s42979-021-00848-6

Powles, J., & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7, 351–367. https://doi.org/10.1007/s12553-017-0179-1

 

 
