What if your doctor could instantly test dozens of different treatments to discover the perfect one for your body, your health and your values?
Copyright: venturebeat.com – “AI is transforming medicine: Here’s how we make sure it works for everyone”
In my lab at Stanford University School of Medicine, we are working on artificial intelligence (AI) technology to create a “digital twin”: a virtual representation of you based on your medical history, genetic profile, age, ethnicity, and a host of other factors like whether you smoke and how much you exercise.
If you’re sick, the AI can test out treatment options on this computerized twin, running through countless different scenarios to predict which interventions will be most effective. Instead of choosing a treatment regimen based on what works for the average person, your doctor can develop a plan based on what works for you. And the digital twin continuously learns from your experiences, always incorporating the most up-to-date information on your health.
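The workflow described above — simulate each candidate treatment on a patient-specific model and pick the one with the best predicted outcome — can be sketched in a few lines. This is a toy illustration with invented treatment names and a made-up scoring rule, not the actual Stanford system or any real clinical model.

```python
# Hypothetical sketch of the "digital twin" idea: score each candidate
# treatment against a patient-specific model and choose the option with
# the best predicted benefit. All names and numbers are placeholders.

from dataclasses import dataclass

@dataclass
class DigitalTwin:
    age: int
    smoker: bool
    exercise_hours_per_week: float

    def predicted_benefit(self, treatment: str) -> float:
        # Toy scoring rule standing in for a learned predictive model.
        base = {"drug_a": 0.60, "drug_b": 0.55, "lifestyle": 0.40}[treatment]
        if self.smoker and treatment == "drug_a":
            base -= 0.15  # pretend drug A is less effective for smokers
        if treatment == "lifestyle":
            base += 0.02 * self.exercise_hours_per_week
        return base

def best_treatment(twin: DigitalTwin, options: list[str]) -> str:
    # "Test out" every option on the twin; return the top scorer.
    return max(options, key=twin.predicted_benefit)

patient = DigitalTwin(age=54, smoker=True, exercise_hours_per_week=4.0)
print(best_treatment(patient, ["drug_a", "drug_b", "lifestyle"]))
```

A real system would replace the hand-written scoring rule with a model trained on the patient's medical history, genetics, and lifestyle data, and would be updated as new information about the patient arrives.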
AI is personalizing medicine, but for which people?
While this futuristic idea may sound like science fiction, artificial intelligence could make personalized medicine a reality sooner than we think. The potential impact on our health is enormous, but so far, the results have been more promising for some patients than others. Because AI is built by humans using data generated by humans, it is prone to reproducing the same biases and inequalities that already exist in our healthcare system.
In 2019, researchers analyzed an algorithm used by hospitals to determine which patients should be referred to special care programs for people with complex medical needs. In theory, this is exactly the type of AI that can help patients get more targeted care. In practice, however, the researchers discovered that the model was significantly less likely to refer Black patients to these programs than white patients with similar health profiles. This biased algorithm affected not only the healthcare received by millions of Americans, but also their trust in the system.
Getting data, the building block of AI, right
Such a scenario is all too common for underrepresented minorities. The issue isn’t the technology itself. The problem starts much earlier, with the questions we ask and the data we use to train the AI. If we want AI to improve healthcare for everyone, we need to get those things right before we ever start building our models.
First up is the data, which are often skewed toward patients who use the healthcare system the most: white, educated, wealthy, cisgender U.S. citizens. These groups have better access to medical care, so they are overrepresented in health datasets and clinical research trials.[…]