IBM’s new open-source AI Fairness 360 toolkit claims both to check for and to mitigate bias in AI models, allowing an AI algorithm to explain its own decision-making.
copyright by www.iotforall.com
People are scared of the unknown. So naturally, one reason why artificial intelligence (AI) hasn’t yet been widely adopted may be because the rationale behind a machine’s decision-making is still unknown. This collection of metrics may allow researchers and enterprise AI architects to cast the revealing light of transparency into “black box” AI algorithms.
The Black Box of AI
How can decisions be trusted when people don’t know where they come from? This is referred to as the black box of AI—something that needs to be cracked open. As technology continues to play an increasingly important role in day-to-day life and to reshape roles within the workforce, the ethics behind algorithms have become a hotly debated topic.
Medical practitioners are thought to be among the first who will benefit greatly from AI and deep learning technology, which can easily scan images and analyze medical data, but whose decision-making algorithms will only be trusted once people understand how conclusions are reached.
Key thinkers warn that algorithms may reinforce programmers’ prejudice and bias, but IBM has a different view.
IBM claims to have made strides in breaking open the black box of AI with a software service that brings transparency to AI.
Making AI Algorithms More Transparent
IBM is attempting to provide insight into how AI makes decisions: the service automatically detects bias and explains its reasoning as decisions are being made. It also suggests additional data to include in the model, which may help neutralize future biases.
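The article’s unnamed service isn’t publicly inspectable, but the open-source AI Fairness 360 toolkit named in the lede gives a concrete sense of what “explaining itself” can look like. Below is a minimal sketch against the toolkit’s published Python API; the toy data and group definitions are illustrative assumptions, not IBM’s product.

```python
# A minimal sketch of a self-explaining fairness check using the open-source
# AI Fairness 360 toolkit. The data and group definitions are illustrative
# assumptions, not part of IBM's unnamed service.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.explainers import MetricTextExplainer

# Toy decision data: 'approved' is the outcome, 'sex' the protected attribute.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 0, 0, 0],
    "approved": [1, 1, 0, 1, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["sex"])
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=[{"sex": 1}],
                                  unprivileged_groups=[{"sex": 0}])

# MetricTextExplainer turns a raw metric value into a plain-language sentence.
explainer = MetricTextExplainer(metric)
print(explainer.disparate_impact())
```

The printed explanation spells out what the metric measures alongside its value, which is the kind of human-readable account the article says the new service aims to deliver at decision time.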
IBM previously deployed AI to support decision-making with IBM Watson, which provided clinicians with evidence-based treatment plans that incorporated automated care management and patient engagement into tailored plans.
Experts were quick to mistrust the model because it didn’t explain how its decisions were made. Watson aided in medical diagnosis and reinforced doctors’ decisions, but the promising technology was never going to replace the doctor. When Watson’s analysis was in line with the doctors’, it was used as reinforcement; when it differed, it was simply dismissed as wrong.
But the company’s latest innovation, which is currently unnamed, appears to tackle Watson’s shortfalls. Perhaps naming it Sherlock would be fitting.
Open-Source and Ethical AI
It’s important to increase transparency not just in decision-making but also in record-keeping, so that the model’s accuracy, performance and fairness can be easily traced and recalled for customer service, regulatory or compliance reasons, e.g. GDPR compliance. […]
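Because AI Fairness 360 is open source, the check-and-mitigate loop described in the lede can be sketched directly against its Python API. The example below is a minimal, hypothetical illustration: a fairness metric flags bias in toy data, and the toolkit’s Reweighing pre-processing algorithm adjusts instance weights to reduce it; the data, column names and group definitions are assumptions for illustration.

```python
# A minimal sketch of checking for and mitigating bias with the open-source
# AI Fairness 360 toolkit. Data, column names, and group definitions are
# illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical loan decisions: 'approved' is the label, 'sex' the protected attribute.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [60, 40, 80, 35, 55, 30, 45, 50],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["sex"])

privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Check: a disparate impact well below 1.0 suggests the unprivileged group
# receives favorable outcomes less often than the privileged group.
before = BinaryLabelDatasetMetric(dataset, privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Disparate impact before:", before.disparate_impact())

# Mitigate: Reweighing rebalances instance weights across groups and outcomes.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
repaired = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(repaired, privileged_groups=privileged,
                                 unprivileged_groups=unprivileged)
print("Disparate impact after:", after.disparate_impact())
```

A disparate impact of 1.0 means both groups receive favorable outcomes at the same rate; reweighing moves the instance-weighted value toward that target, and the before-and-after record is the kind of traceable fairness evidence that regulatory and compliance reviews call for.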
read more – copyright by www.iotforall.com