This article describes the process and importance of establishing AI systems within medical devices that comply with regulations and reduce risk and harm.
SwissCognitive Guest Blogger: Layla Li, Co-founder and CEO of KOSA AI – “AI in Medical Devices: Trust and Compliance”
According to research, the number of patents issued for AI-based medical devices has increased by over 200% in the last five years. There is little doubt that AI can be seen as the future of medicine, and more and more new studies lean in this direction. Recently, the ARRS’ American Journal of Roentgenology (AJR), one of the largest communities for medical imaging, published the first study showing that radiologists saved an average of 93 seconds of interpretation time per exam when incorporating AI support into medical devices in clinical practice. Our previous article introduced the use of AI in medical devices and why it is a significant stepping stone into the future of AI in healthcare. This article describes the process and importance of establishing AI systems within medical devices that comply with regulations, reduce risk and harm, and account for the human impact of the model.
Responsibility all the way
Everyone knows Hippocrates’ doctrine, “first, do no harm,” as it is the oath that healthcare workers abide by while providing the best care and service possible. If medical devices with artificial intelligence are being developed to help healthcare professionals provide care, what is the Hippocratic Oath for AI developers and/or providers? To take advantage of machine learning algorithms in medical devices, clinicians, AI inventors, developers, executives, and patients must all understand the decision-making process and trust that the technology will do no harm. It is also important that the level of trust placed in the AI systems embedded in medical devices is appropriate. Machine learning models degrade over time, and their performance shifts as the training data changes, so the AI system will produce different results over time, which calls trust in the system into question. Equally, there is a risk of placing too much trust in the AI despite these limitations. In the previous article mentioned above, we noted that AI in healthcare must follow the principles of responsible AI to gain the necessary trust: eliminating the causes of bias, ensuring traceability of each prediction made by the AI solution, continuously monitoring deviations in model performance, and embedding ethics from training to production in all AI projects. In other words, building trust in AI should be an oath for all AI developers, not only when integrating AI into medical devices and healthcare but in general, just as the Hippocratic Oath is for the practice of medicine.
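To make the monitoring principle above more concrete, here is a minimal Python sketch of a drift check that compares recent performance against a recorded baseline. The metric, the tolerance, and the function names are illustrative assumptions, not part of any regulation or of any particular product.

```python
from typing import Sequence

def accuracy(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def check_performance_drift(
    baseline_accuracy: float,
    y_true: Sequence[int],
    y_pred: Sequence[int],
    max_drop: float = 0.05,  # hypothetical tolerance: flag a drop of more than 5 points
) -> bool:
    """Return True if the model has drifted below the accepted baseline.

    In practice such a check would run on each batch of newly labelled
    clinical data, and a True result would trigger review, retraining,
    or escalation under the device's quality-management process.
    """
    current = accuracy(y_true, y_pred)
    return (baseline_accuracy - current) > max_drop

# Example: baseline accuracy 0.92, recent batch scores 0.70 -> drift is flagged.
recent_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
recent_preds  = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]
if check_performance_drift(0.92, recent_labels, recent_preds):
    print("Performance deviation detected: route to human review.")
```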
One study states that “AI can fail (become untrustworthy) because the data were not representative or appropriate for the task to which it was applied…” The key to building trust in medical AI, therefore, is to ensure that high-quality data are available, supported by inclusive and unbiased datasets. Why is this important?
There are three types of bias that can affect the algorithms of an AI system in a medical device:
- Physical bias: a medical device can exhibit physical bias, where the physical principles it relies on, such as measurements tied to appearance or the body, are biased against certain demographics.
- Computational bias: once data are collected, computational bias must be considered; it pertains to the distribution, processing, and computation of the data used to operate a device (a minimal sketch of one such check follows this list).
- Interpretation bias: subsequent implementation in clinical settings can lead to interpretation bias, where clinical staff or other users may interpret device outputs differently based on demographics.
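One way to surface computational bias before a model is trained is to audit how demographic groups are represented in the collected data. The sketch below is a simplified illustration; the column name, the tolerance, and the equal-share baseline are assumptions made for the example, not a method prescribed by any regulator.

```python
from collections import Counter

def representation_report(records, group_key="ethnicity", tolerance=0.5):
    """Compare each demographic group's share of the dataset against an
    equal share and flag groups that are badly under-represented.

    `group_key` and `tolerance` are illustrative choices, not a standard.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    equal_share = 1.0 / len(counts)
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "under_represented": share < equal_share * tolerance,
        }
    return report

# Hypothetical training records for an imaging model.
records = [
    {"ethnicity": "group_a"}, {"ethnicity": "group_a"},
    {"ethnicity": "group_a"}, {"ethnicity": "group_a"},
    {"ethnicity": "group_b"}, {"ethnicity": "group_b"},
    {"ethnicity": "group_c"},
]
print(representation_report(records))
# group_c holds about 14% of the data versus an equal share of about 33%,
# so it is flagged as under-represented.
```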
Building trust by reducing biases is more complex than it may sound, because not every bias is harmful: some are benign, and some variables that carry bias are necessary for training datasets and/or AI algorithms. For example, race and sex are part of human biology and are sometimes essential for clinical decision making. At the same time, bias can be extremely dangerous, and there are numerous cases where it has cost patients’ lives.
A number of organizations, AI experts, ethicists, and government agencies have published papers and guidelines on how best to decide what “trustworthiness” means for AI systems and how far to trust the decisions and diagnoses of machine learning models.
AI learns from the variables and annotations in its datasets and is then tested on its ability to recognize the features it was trained on, with the expectation that it will do so with a certain degree of accuracy. To build trust, therefore, the system must be tested on diverse datasets that represent those features fairly and are free of social injustices.
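In practice, “tested on diverse datasets” usually means reporting performance per demographic group rather than as a single aggregate number. The following sketch assumes hypothetical group labels and a simple accuracy metric; it is an illustration of the idea, not the evaluation protocol of any particular device.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Report accuracy separately for each demographic group so that a
    good overall score cannot hide poor performance on one subgroup.

    `groups` is a list of group labels aligned with the predictions;
    all names here are illustrative.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Overall accuracy is 0.75, but the breakdown shows the model performs
# much worse for group_b, which a single aggregate metric would mask.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 0, 0, 1]
groups = ["group_a", "group_a", "group_a", "group_a",
          "group_b", "group_b", "group_b", "group_b"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'group_a': 1.0, 'group_b': 0.5}
```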
In compliance we trust
In addition to eliminating AI bias to build trust when using AI in medical devices, a collaborative regulatory system must be established. Several potential regulatory and standardization approaches exist to address trust in AI performance, and countries have begun to develop new regulations that AI builders must comply with to ensure effectiveness and patient safety.
Different countries’ regulations can be viewed here:
DRCF — Digital Regulation Cooperation Forum (UK)
MHRA — Medicines and Healthcare products Regulatory Agency (UK)
MDR — Medical Device Regulation (EU and UK)
FDA — Food and Drug Administration (US)
Current regulations provide guidelines that users of AI medical devices can follow to ensure that the AI system, as software, can be officially accepted as a medical device, does no harm, can be trusted, and makes decisions within acceptable risk parameters. If the risk exceeds a certain point, users also need to know the compliance steps for investigating these issues and understanding why they occur.
To date, however, there are only proposed guidelines for regulating AI in medical devices and no comprehensive approach backed by legislation. Much work remains to be done, and more regulations and guidance specifically targeting AI in healthcare are on the horizon; at the very least, companies developing and/or using AI in medical devices should begin to establish action points to ensure that AI models are properly validated and transparent enough to be robust.
Conclusion
Most AI systems used in the medical field are complex technologies whose inner workings people do not really understand, yet they are trusted because they seem mature. Newer AI systems may not be trusted for the same reason: they cannot explain the decisions they make. In either case, when using AI in medical devices, system developers and users must take care to build trust in machine learning models and comply with regulations so that the product they develop or use serves the right purpose. Despite the complexity of the relationship between healthcare and AI, companies should be able to provide continuous visibility into how AI medical devices are produced and respond to changes to improve them. How well this can be done is currently up to the AI and healthcare companies themselves. One thing is certain: trust in an AI system depends on how well the model can be explained when something eventually goes wrong.
About the Author:
Layla Li is the Co-founder and CEO of KOSA AI, an automated responsible AI system that helps enterprises put equity into their AI. She is a full-stack developer, data scientist, and data-driven global business strategist who cares about making technology inclusive and unravelling AI bias.