Artificial intelligence (AI) is transforming health care. Right now we are using AI to assist life science companies serving rare disease patient populations.


Copyright: “Bias In Artificial Intelligence: How Can We Minimize It”


These AI-assisted products help uncover patients who are likely undiagnosed or misdiagnosed, along with their associated healthcare providers (HCPs), who can then be educated about a condition or rare disease and its life-saving therapies. By finding these patients, organizations can prioritize HCP engagement and accelerate the diagnostic and treatment journey. Yet although this technology may seem like the great equalizer for humanity’s flawed motivational and cognitive biases, the truth is that even the most advanced AI technologies carry biases tied to race, gender, socioeconomic status, and political identity.

We all have a responsibility to do everything we can to overcome and minimize that bias.

Technology developers must realize that ‘one size does not fit all’ when it comes to the health of human beings – one size fits one. This means – as much as possible – companies and teams utilizing AI must make the effort to include real-world interactions with patients and those impacted by the technology.

Organizations can address these challenges in two ways. First, we must correct the lack of diversity on data science teams. Right now, the tech industry is notoriously white and male-dominated, and that is not likely to change any time soon. Only one in five graduates of computer science programs is a woman; the share of graduates from underrepresented ethnicities is even lower. Organizations can improve this by:

  1. Gathering a diverse AI/ML team that asks diverse questions. We all bring different experiences and ideas to the workplace. People from diverse backgrounds – race, gender, age, experience, culture, etc. – will inherently ask different questions, which helps you catch problems at the task-definition stage. The utility of diverse ensembles is a well-known principle in AI, and organizational studies have likewise confirmed the strength of diverse teams.
  2. Creating an AI pipeline that uses BOTH human and data feedback. Act, assess, adjust. New projects may take unexpected twists, and new data or feedback from users may reveal that initial assumptions were erroneous or harmful.
  3. Thinking about end-users. Understand that your end-users won’t be like you or your team. Be empathetic. Avoid AI bias by learning to anticipate how people who aren’t like you will interact with your technology and what problems might arise.[…]
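One concrete way to make the "act, assess, adjust" loop in item 2 routine is to audit a model's error rate across demographic subgroups before adjusting. The sketch below is purely illustrative and not the article's actual tooling: the group names, data, and the 10% disparity tolerance are all hypothetical assumptions.

```python
# Illustrative sketch: a minimal fairness audit comparing a model's
# error rate across demographic subgroups. All groups, records, and
# the tolerance threshold are hypothetical.

def subgroup_error_rates(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns {group: fraction of predictions that were wrong}."""
    totals, errors = {}, {}
    for group, y_true, y_pred in records:
        totals[group] = totals.get(group, 0) + 1
        if y_true != y_pred:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

def flag_disparities(rates, tolerance=0.10):
    """Flag groups whose error rate exceeds the best group's
    by more than the tolerance (hypothetical threshold)."""
    best = min(rates.values())
    return sorted(g for g, r in rates.items() if r - best > tolerance)

# Hypothetical predictions for two subgroups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]
rates = subgroup_error_rates(records)   # group_a: 0.0, group_b: 0.5
print(flag_disparities(rates))          # ['group_b']
```

A flagged group is a signal to revisit the training data and task definition with human reviewers, not just to retune the model.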
