The UK’s MHRA, the US Food and Drug Administration (FDA), and Health Canada are jointly working on ten guiding principles for the development of Good Machine Learning Practice (GMLP). These principles are intended to promote high-quality, safe, and effective medical devices that use artificial intelligence and machine learning (AI/ML). AI/ML has the potential to unlock useful insights from the vast amounts of data generated in everyday healthcare, using software algorithms that learn from real-world data to improve a product’s performance. Because these algorithms are data-driven and iterative in nature, they raise unique considerations arising from their complexity.
The AI/ML medical device field is continuously evolving, and so are GMLP practices. These ten guiding principles lay the foundation for good machine learning practice, addressing the particular nature of these products while facilitating future development in this rapidly growing field. They adopt good practices that have proven effective in other areas, tailor practices from other sectors so that they are applicable to medical technology and healthcare, and create new practices specific to the medical and healthcare context. Let’s now look at the principles themselves.
- Multidisciplinary needs should be kept in mind during product development and throughout the product lifecycle. A thorough understanding of the model’s intended integration into the clinical workflow, the desired benefits, and the associated patient risks helps ensure that ML-enabled medical devices are safe, effective, and address clinically meaningful needs over the device lifecycle.
- Model design should be implemented with careful attention to fundamentals such as good software engineering practices, data management, and data quality assurance. These practices include a systematic risk management and design process that can appropriately capture and document design, implementation, and risk management decisions and their rationale, and that ensures data authenticity and integrity.
- The data collected should capture the relevant characteristics of the intended patient population, and measurement inputs should be sufficiently represented in the training and test datasets so that results can reasonably be generalized. It is also important to manage bias, to promote generalizable performance across the intended patient population, and to identify circumstances in which the model may underperform.
- Training and test datasets should be selected independently of one another. All potential sources of dependence, including patient, data acquisition, and site factors, should be considered and addressed to assure independence.
- The best available methods for developing a reference dataset should be used, so that clinically relevant and well-characterized data are collected and the limitations of the reference are understood. Where available, accepted reference datasets that support and demonstrate model robustness and generalizability across the intended patient population should be used in model development and testing.
- The clinical benefits and risks of the product should be well understood and used to derive clinically meaningful performance goals for testing, supporting the claim that the product can safely and effectively achieve its intended use. Considerations should include both global and local performance, as well as the uncertainty and variability of device inputs and outputs.
- Human factors and the human interpretability of the model’s outputs should be taken into account, with a focus on the performance of the combined human-AI team rather than on the model’s output in isolation.
- A sound test plan should be devised, developed, and executed. The target patient population, important subgroups, the clinical environment and use by the human-AI team, measurement inputs, and potential confounding factors are all factors to keep in mind.
- Users should have ready access to clear, contextually relevant information appropriate to the target audience, such as the model’s performance for relevant subgroups, acceptable inputs, known limitations, how to interpret the user interface, and how the model integrates into the clinical workflow. Users should also be informed of device updates and modifications arising from real-world performance monitoring, the basis for decisions where available, and a channel for raising product concerns with the developer.
- Deployed models should be monitored in real-world use so that performance can be maintained or improved. In addition, when models are periodically or continually retrained after deployment, appropriate controls should be in place to manage the risks of overfitting, unintended bias, and model degradation.
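The independence principle above is often enforced in practice by splitting data at the patient level rather than the record level, so that no patient contributes records to both sets. A minimal sketch in Python (the record layout and the `patient_id` field name are illustrative assumptions, not part of the guidance):

```python
import random

def patient_level_split(records, test_fraction=0.2, seed=42):
    """Split records into train/test sets so that no patient appears in both.

    `records` is a list of dicts, each carrying a "patient_id" key. Splitting
    by patient (rather than by record) prevents leakage when one patient
    contributes multiple records to the dataset.
    """
    patient_ids = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patient_ids)
    n_test = max(1, int(len(patient_ids) * test_fraction))
    test_ids = set(patient_ids[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test
```

The same grouping idea extends to acquisition site or device: group by whichever factor could introduce dependence between the two sets.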
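The monitoring principle implies some mechanism for tracking real-world performance and flagging degradation. One simple illustration (a sketch only; the window size and tolerance are arbitrary assumptions, and real deployments would also watch input drift and subgroup-level performance) is a rolling-accuracy alarm:

```python
from collections import deque

class PerformanceMonitor:
    """Track rolling accuracy for a deployed model and flag degradation.

    `baseline` is the accuracy established during validation; `degraded()`
    returns True once the rolling accuracy falls more than `tolerance`
    below that baseline.
    """

    def __init__(self, baseline, window=100, tolerance=0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, actual):
        """Log one case once the ground-truth outcome becomes known."""
        self.outcomes.append(prediction == actual)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def degraded(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance
```

An alarm like this would feed the retraining controls the principle describes, rather than replace them.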
Check out the paper and reference link. All credit for this research goes to the researchers of this project. Also, don’t forget to join our Reddit page and Discord channel, where we share the latest AI research news, cool AI projects, and more.
Avanthy Yeluri is a dual degree student at IIT Kharagpur. She has a keen interest in data science because of its many applications across industries, its cutting-edge technological advances, and the ways they shape our daily lives.