With the recent launch of a product to evaluate artificial intelligence (AI) models for accuracy and susceptibility to bias, Mayo Clinic Platform is accelerating adoption of data-driven innovation in clinical practice.

Copyright: eurekalert.org – “By eliminating bias in AI models and offering access to deidentified data, Mayo Clinic Platform aims to transform health care”

Mayo Clinic Platform_Validate confirms the efficacy and credibility of newly developed algorithms and ensures they are fit for their intended purpose. Validate is one of the first products in the industry to provide a bias, specificity and sensitivity report for AI models. It joins Mayo Clinic Platform_Discover, which offers AI developers access to curated, deidentified electronic health data in a secure, privacy-protected environment, along with tools for discovery, analysis and training, to help address some of patients’ most pressing needs.

In the nearly three years since it began, Mayo Clinic Platform has built an ecosystem that orchestrates multiple collaborations with health technology innovators, enabling discoveries across the health care sector.

“Our platform aims to improve patient care by fundamentally changing the nature of health care delivery,” says John Halamka, M.D., president of Mayo Clinic Platform. “By addressing the countless deficiencies and inequities built into the current system, we are ushering in a new era of democratization of health care, where accessible, compassionate and personalized care is available to everyone, everywhere.”

Mayo Clinic Platform_Validate

Much of the skepticism around AI and machine learning is focused on the poor quality of evidence supporting some algorithms. Biased algorithms can lead to incorrect diagnoses and other serious risks for patients.

With the new Mayo Clinic Platform_Validate product, algorithms are measured for model bias across categories ranging from age to ethnicity. Much as a nutrition label on foods and beverages details a product’s ingredients, Validate reports how an AI algorithm performs under different constraints, including racial, gender and socioeconomic demographics. Clinicians can trust that AI models have been evaluated by an independent third party. This transparency will help reduce bias and achieve equity in health care.[…]
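To make the idea of such a stratified performance report concrete, here is a minimal sketch of how sensitivity and specificity can be computed for each subgroup of a test set. The column names (y_true, y_pred) and the age-band grouping are hypothetical illustrations, and the code shows the general concept rather than Mayo Clinic Platform_Validate's actual methodology.

# Illustrative sketch only, NOT Mayo Clinic Platform_Validate's methodology.
# Column names (y_true, y_pred) and the subgroup attribute are hypothetical.
import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Report sensitivity and specificity of binary predictions per subgroup."""
    rows = []
    for group, g in df.groupby(group_col):
        # Confusion-matrix counts within this subgroup.
        tp = int(((g["y_true"] == 1) & (g["y_pred"] == 1)).sum())
        fn = int(((g["y_true"] == 1) & (g["y_pred"] == 0)).sum())
        tn = int(((g["y_true"] == 0) & (g["y_pred"] == 0)).sum())
        fp = int(((g["y_true"] == 0) & (g["y_pred"] == 1)).sum())
        rows.append({
            group_col: group,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Toy data: true labels, model predictions, and an age-band attribute.
data = pd.DataFrame({
    "y_true":   [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred":   [1, 0, 0, 1, 0, 1, 1, 0],
    "age_band": ["<40", "<40", "<40", "40-65", "40-65", "40-65", "65+", "65+"],
})
print(subgroup_report(data, "age_band"))

In a report of this kind, large gaps in sensitivity or specificity between subgroups are one common signal that a model performs unevenly across populations.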


Read more: www.eurekalert.org