Summary: Researchers have identified 29 sources of potential bias in artificial intelligence and machine learning (AI/ML) models used in medical imaging, from data collection to model deployment.
This report provides insight into the limitations of AI/ML in medical imaging, suggests possible ways to mitigate them, and opens the way to a more fair and equitable deployment of medical imaging AI/ML models in the future.
Key Points:
- AI/ML models are increasingly being used in medical imaging for diagnosis, risk assessment, and treatment outcome evaluation.
- This study identifies 29 potential sources of bias in medical imaging AI/ML development and implementation and proposes ways to mitigate them.
- Ignoring these biases can result in unequal benefit to patients and exacerbate disparities in access to care.
Source: SPIE
Artificial intelligence and machine learning (AI/ML) technologies are constantly finding new applications across multiple fields. Medicine is no exception, with AI/ML being used for diagnosis, prognosis, risk assessment, and treatment response assessment in a wide variety of diseases.
In particular, AI/ML models are finding more and more applications in the analysis of medical images. This includes X-rays, computed tomography, and magnetic resonance imaging. A key requirement for successful implementation of AI/ML models in medical imaging is ensuring proper design, training, and use.
In practice, however, it is very difficult to develop an AI/ML model that works well for all members of the population and can be generalized to all situations.
Just like humans, AI/ML models can be biased, which can lead to different treatment of medically similar cases. Whatever the factors that introduce such biases, it is important to address them and ensure fairness, impartiality, and trust in AI/ML for medical imaging.
This requires identifying sources of biases that may exist in medical imaging AI/ML and developing strategies to mitigate them. Failure to do so may result in differing patient benefits and exacerbate inequities in healthcare access.
As reported in the Journal of Medical Imaging (JMI), a multi-institutional team of experts from the Medical Imaging and Data Resource Center (MIDRC), including medical physicists, AI/ML researchers, statisticians, physicians, and regulatory agency scientists, has addressed this concern.
Their comprehensive report identifies 29 sources of potential bias that can occur along the five major steps of medical imaging AI/ML development and implementation: data collection, data preparation and annotation, model development, model evaluation, and model deployment. Many of these biases can occur in multiple steps.
Strategies to reduce bias are discussed, and further information is also available on the MIDRC website.
One of the main sources of bias is data collection. For example, sourcing images from a single hospital or single type of scanner can introduce bias in data collection.
Data collection biases can also arise due to differences in how particular social groups are treated, both during research and across the healthcare system.
Additionally, data can become outdated as medical knowledge and practice evolve. This introduces a temporal bias into AI/ML models trained on such data.
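As a concrete illustration, a simple metadata audit can surface several of these data collection issues before training begins. The sketch below is only a minimal example assuming a hypothetical pandas table; the column names (site, scanner_model, acquisition_year, patient_group) and the 80% threshold are illustrative choices, not taken from the MIDRC report.

```python
import pandas as pd

# Hypothetical metadata for an imaging collection; values are made up.
metadata = pd.DataFrame({
    "site": ["Hospital A"] * 700 + ["Hospital B"] * 300,
    "scanner_model": ["VendorX"] * 850 + ["VendorY"] * 150,
    "acquisition_year": [2019] * 400 + [2020] * 400 + [2021] * 200,
    "patient_group": ["group1"] * 800 + ["group2"] * 200,
})

# Flag any attribute where a single value dominates the collection --
# a simple proxy for the single-site / single-scanner bias described above.
DOMINANCE_THRESHOLD = 0.8
for column in metadata.columns:
    shares = metadata[column].value_counts(normalize=True)
    top_value, top_share = shares.index[0], shares.iloc[0]
    if top_share >= DOMINANCE_THRESHOLD:
        print(f"WARNING: {column} is {top_share:.0%} '{top_value}' "
              "-- consider sourcing more diverse data.")
```

An audit like this cannot prove a dataset is unbiased, but it makes the most obvious sampling imbalances visible early, when they are cheapest to correct.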
Other sources of bias lie in data preparation and annotation, which are closely related to data collection. Biases can be introduced at this step through how the data are labeled before being fed into an AI/ML model for training.
Such biases can result from the annotators' personal biases or from oversights related to how the data are presented to the users responsible for labeling.
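One common way to detect annotation inconsistencies of this kind is to have multiple readers label the same cases and measure their agreement. A minimal sketch using scikit-learn's Cohen's kappa, with made-up labels for two hypothetical annotators:

```python
from sklearn.metrics import cohen_kappa_score

# Labels assigned to the same 12 images by two hypothetical annotators
# (1 = finding present, 0 = absent); the data are illustrative only.
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]

# Cohen's kappa corrects raw agreement for chance; low values suggest
# inconsistent labeling criteria or annotator bias worth investigating
# before the labels are used for training.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```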
Biases can also arise during model development based on how the AI/ML models themselves are inferred and created. One example is inherited bias, which occurs when the output of a biased AI/ML model is used to train another model.
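The mechanism of inherited bias can be made concrete with a toy experiment, sketched below under purely synthetic assumptions: a "teacher" model trained on labels that systematically miss positives in one subgroup, and a "student" trained only on the teacher's outputs. The feature layout, group names, and use of logistic regression are all illustrative and do not reflect the actual study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: feature 0 carries the real signal; feature 1 is a proxy that
# identifies a subgroup (e.g., a scanner artifact common at one site).
n = 2000
is_group2 = np.arange(n) < 400
X = np.column_stack([rng.normal(size=n), is_group2.astype(float)])
y_true = (X[:, 0] > 0).astype(int)

# The "teacher" is trained on labels that systematically miss positives
# in group2 -- a stand-in for any biased upstream model.
y_biased = np.where(is_group2, 0, y_true)
teacher = LogisticRegression().fit(X, y_biased)

# A "student" trained on the teacher's outputs inherits that bias even
# though it never sees the corrupted labels directly.
student = LogisticRegression().fit(X, teacher.predict(X))

positives_in_group2 = is_group2 & (y_true == 1)
print("Student sensitivity in group2:",
      student.predict(X[positives_in_group2]).mean())
```

This only demonstrates the mechanism on synthetic data; in practice, the proxy features that carry subgroup information are rarely this explicit.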
Other examples of biases in model development include biases caused by unequal representation of the target population, or biases resulting from historical circumstances such as social and institutional biases that lead to discriminatory practices.
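For the unequal-representation case, one simple mitigation (among the many the report discusses) is to reweight training samples so that each subgroup contributes equally to the training loss. A minimal sketch with hypothetical group labels:

```python
import numpy as np

# Hypothetical subgroup labels for a training set in which "group2"
# is under-represented; the names and counts are illustrative.
groups = np.array(["group1"] * 800 + ["group2"] * 200)

# Weight each sample inversely to its subgroup frequency so that every
# group contributes equally to the training loss.
values, counts = np.unique(groups, return_counts=True)
weight_per_group = {g: len(groups) / (len(values) * c)
                    for g, c in zip(values, counts)}
sample_weights = np.array([weight_per_group[g] for g in groups])

# Most scikit-learn estimators accept these via fit(X, y, sample_weight=...).
print({g: round(w, 2) for g, w in weight_per_group.items()})
```

Reweighting is only one option; resampling, stratified collection, and fairness-aware objectives are alternatives with different trade-offs.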
Model evaluation can also be a potential source of bias. For example, when testing model performance, bias can be introduced by using an already biased dataset for benchmarking or by using an inappropriate statistical model.
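A common guard against such evaluation bias is to report performance stratified by subgroup rather than as a single pooled figure. The sketch below uses synthetic scores, deliberately made less informative for one group, to show the kind of gap this surfaces; all names and numbers are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic scores and labels for two hypothetical subgroups; in a real
# audit these would come from a held-out test set with subgroup metadata.
n = 500
groups = np.array(["group1"] * 400 + ["group2"] * 100)
labels = rng.integers(0, 2, size=n)

# Make the model deliberately less informative for group2 to illustrate
# the kind of performance gap a stratified evaluation is meant to surface.
signal_strength = np.where(groups == "group1", 1.5, 0.3)
scores = labels * signal_strength + rng.normal(size=n)

# Report performance per subgroup rather than as a single pooled number.
for g in np.unique(groups):
    mask = groups == g
    auc = roc_auc_score(labels[mask], scores[mask])
    print(f"{g}: AUC = {auc:.2f} (n = {mask.sum()})")
```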
Finally, biases can also creep in when deploying AI/ML models in real-world settings, mostly introduced by the users of the system. For example, bias is introduced when the model is applied outside its intended category of images, or when users become overly reliant on automation.
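The first of these failure modes can be partly guarded against in software, whereas over-reliance on automation is a human-factors problem that code alone cannot fix. Below is a minimal sketch assuming a hypothetical model intended only for chest radiographs; it uses pydicom to check standard DICOM header tags before inference, and the intended-use values are illustrative, not from the report.

```python
import pydicom

# Intended-use criteria for a hypothetical chest-radiograph model.
INTENDED_MODALITIES = {"CR", "DX"}  # computed / digital radiography
INTENDED_BODY_PART = "CHEST"

def check_intended_use(path: str) -> bool:
    """Refuse to run inference on images outside the model's intended scope."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    modality = getattr(ds, "Modality", None)
    body_part = str(getattr(ds, "BodyPartExamined", "")).upper()
    if modality not in INTENDED_MODALITIES or body_part != INTENDED_BODY_PART:
        print(f"Rejected {path}: Modality={modality}, "
              f"BodyPartExamined={body_part}")
        return False
    return True
```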
In addition to identifying and thoroughly explaining these potential sources of bias, the team suggests possible ways to mitigate them and best practices for implementing medical imaging AI/ML models.
This article therefore provides researchers, clinicians, and the general public with valuable insight into the limitations of AI/ML in medical imaging and a roadmap for remediation, which may promote a more fair and equitable deployment of medical imaging AI/ML models in the future.
About this Artificial Intelligence and Machine Learning Research News
Author: Daneet Steffens
Source: SPIE
Contact: Daneet Steffens – SPIE
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Towards fairness in artificial intelligence for medical image analysis: identifying and mitigating potential biases in the roadmap from data collection to model deployment,” K. Drukker et al. Journal of Medical Imaging
Abstract
Towards fairness in artificial intelligence for medical image analysis: identifying and mitigating potential biases in the roadmap from data collection to model deployment
Purpose
There is a growing interest in developing medical image-based machine learning methods, also known as medical imaging artificial intelligence (AI), for disease detection, diagnosis, prognosis, and risk assessment, and in moving them toward clinical implementation. These tools aim to improve on traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function and exacerbate inequities. Specifically, medical imaging AI can propagate or amplify biases introduced at many steps from model initiation to deployment, resulting in systematic differences in the treatment of different groups. Recognizing and addressing these various sources of bias is essential for algorithmic fairness and trustworthiness and contributes to a just and equitable deployment of AI in medical imaging.
Approach
Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory agencies. We identified sources of bias in medical imaging AI/ML, developed strategies to mitigate these biases, and made recommendations on best practices for medical imaging AI/ML development.
Results
We identified five main steps along the roadmap of medical imaging AI/ML: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, or bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, as can their mitigation strategies.
Conclusions
Our findings provide a valuable resource for researchers, clinicians, and the public.