
Artificial Intelligence Can Now Explain Its Own Decision-Making

IBM’s new open-source AI Fairness 360 toolkit claims both to check for and to mitigate bias in models, allowing an algorithm to explain its own decision-making.

People are scared of the unknown. So naturally, one reason why AI hasn’t yet been widely adopted may be that the rationale behind a machine’s decision-making is still unknown. This collection of metrics may allow researchers and enterprise architects to cast the revealing light of transparency into “black box” algorithms.

The Black Box of AI

How can decisions be trusted when people don’t know where they come from? This is referred to as the black box of AI, something that needs to be cracked open. As technology continues to play an increasingly important role in day-to-day life and to change roles within the workforce, the ethics behind algorithms has become a hotly debated topic.

Medical practitioners are thought to be among the first who will benefit greatly from AI technology, which can easily scan images and analyze medical data, but whose decision-making algorithms will only be trusted once people understand how conclusions are reached.

Key thinkers warn that algorithms may reinforce programmers’ prejudice and bias, but IBM has a different view.

IBM claims to have made strides in breaking open the black box of AI with a software service that brings transparency.

Making Algorithms More Transparent

IBM is attempting to provide insight into how AI makes decisions, automatically detecting bias and explaining itself as decisions are being made. Their technology also suggests more data to include in the model, which may help neutralize future biases.
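To make the idea of automated bias detection concrete, here is a minimal sketch of two group-fairness metrics of the kind toolkits such as AI Fairness 360 report. The function names, data, and threshold commentary are illustrative assumptions, not IBM’s implementation:

```python
# Sketch of two common group-fairness metrics (hypothetical example,
# not IBM's code): statistical parity difference and disparate impact.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(privileged, unprivileged):
    """Difference in favorable-outcome rates; 0 means parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

def disparate_impact(privileged, unprivileged):
    """Ratio of favorable-outcome rates; values well below 1
    suggest the unprivileged group is disadvantaged."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical loan-approval decisions (1 = approved) for two groups.
privileged = [1, 1, 1, 0, 1, 1, 0, 1]    # approval rate 0.75
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]  # approval rate 0.375

print(statistical_parity_difference(privileged, unprivileged))  # -0.375
print(disparate_impact(privileged, unprivileged))               # 0.5
```

A monitoring service could compute such metrics on each batch of model decisions and flag the model for review when the ratio drifts far from 1.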

IBM previously deployed an AI to aid decision-making with IBM Watson, which provided clinicians with evidence-based treatment plans that incorporated automated care management and patient engagement into tailored plans.

Experts were quick to mistrust the model because it didn’t explain how its decisions were made. Watson aided medical diagnosis and reinforced doctors’ decisions, but the technology was never expected to replace the doctor. When Watson provided an analysis in line with the doctors’, it was used as a reinforcement measure; when Watson differed, it was assumed to be wrong.

But the company’s latest innovation, which is currently unnamed, appears to tackle Watson’s shortfalls. Perhaps naming it Sherlock would be fitting.

Open-Source and Ethical

It’s important to increase transparency not just in decision-making but also in record-keeping, so that records of the model’s accuracy, performance, and fairness are easily traced and recalled for customer service, regulatory, or compliance reasons, e.g. GDPR compliance. […]

read more – copyright by www.iotforall.com

2 Comments

  1. Mark Hogan

    And now we go full circle where the creation becomes the creator….what use does it have for us?

  2. Aditya Roy

    It’s an achievement that the AI field is understanding the importance of going open source and expl… https://t.co/Y8rT6497Ux
