We’re at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens.

Google self-driving car spinoff Waymo is tapping AI to provide mobility to elderly and disabled people. But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM thinks part of the problem is a lack of standard practices.

There’s no consistent, agreed-upon way AI services should be “created, tested, trained, deployed, and evaluated,” Aleksandra Mojsilovic, head of AI foundations at IBM Research and codirector of the AI Science for Social Good program, said today in a blog post. Just as unclear is how those systems should operate, and how they should (or shouldn’t) be used.

To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets — formally called “Supplier’s Declaration of Conformity” (DoC) — that would be completed and published by companies that develop and provide AI, with the goal of “increas[ing] the transparency” of their services and “engender[ing] trust” in them.

Mojsilovic thinks such factsheets could give companies a competitive advantage in the marketplace, much as Energy Star ratings for energy efficiency do for appliance makers.

“Like nutrition labels for foods or information sheets for appliances, factsheets for AI services would provide information about the product’s important characteristics,” Mojsilovic wrote. “The issue of trust in AI is top of mind for IBM and many other technology developers and providers. AI-powered systems hold enormous potential to transform the way we live and work but also exhibit some vulnerabilities, such as exposure to bias, lack of explainability, and susceptibility to adversarial attacks. These issues must be addressed in order for AI services to be trusted.”

Several core pillars form the basis for trust in AI systems, Mojsilovic explained: fairness, robustness, and explainability. Fair AI systems can be credibly believed not to contain biased algorithms or datasets, or to contribute to the unfair treatment of certain groups. Robust AI systems are presumed safe from adversarial attacks and manipulation. And explainable AI systems aren’t a “black box” — their decisions are understandable by both researchers and developers. […]
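To make the idea concrete, here is a minimal sketch of what such a voluntary factsheet might look like in machine-readable form, organized around the pillars above. The service name, field names, and values are illustrative assumptions for this article, not IBM’s actual DoC format.

```python
# Hypothetical AI-service factsheet, loosely modeled on the pillars described
# above (fairness, robustness, explainability). All names and values are
# illustrative assumptions, not IBM's actual Supplier's Declaration of Conformity.
import json

factsheet = {
    "service": "ExampleVisionClassifier",  # hypothetical service name
    "intended_use": "Triage of retinal scans for referral, not diagnosis",
    "training_data": {
        "source": "De-identified clinical images (hypothetical)",
        "known_gaps": ["Under-representation of patients over 80"],
    },
    "fairness": {
        "evaluated_groups": ["age", "sex"],
        "bias_tests_run": ["disparate impact ratio"],
    },
    "robustness": {
        "adversarial_testing": "Evaluated against common input perturbations",
    },
    "explainability": {
        "method": "Per-prediction feature attributions shown to end users",
    },
}

# Publish the factsheet alongside the service, e.g. as JSON.
print(json.dumps(factsheet, indent=2))
```

Like a nutrition label, the point is not the exact schema but that the answers are filled in by the provider and published where customers can read them.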
