
IBM researchers propose ‘factsheets’ for AI transparency

We’re at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens.

Google spinoff Waymo is tapping it to provide mobility to elderly and disabled people. But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM thinks part of the problem is a lack of standard practices.

There’s no consistent, agreed-upon way AI services should be “created, tested, trained, deployed, and evaluated,” Aleksandra Mojsilovic, head of AI foundations at IBM Research and codirector of the Science for Social Good program, said today in a blog post. Just as unclear is how those systems should operate, and how they should (or shouldn’t) be used.

To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets — formally called “Supplier’s Declaration of Conformity” (DoC) — that would be completed and published by companies that develop and provide AI, with the goal of “increas[ing] the transparency” of their services and “engender[ing] trust” in them.
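The article doesn’t spell out exactly what a factsheet would contain, but as a rough illustration, here is a minimal sketch of how a supplier might publish one as structured metadata. Every field name below is an assumption drawn from the pillars discussed later in this article (fairness, robustness, explainability), not IBM’s actual schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIServiceFactsheet:
    """Illustrative Supplier's Declaration of Conformity for an AI service.

    All fields here are hypothetical; IBM's proposed factsheet format
    may define different or additional entries.
    """
    service_name: str
    intended_use: str          # what the service should (and shouldn't) be used for
    training_data: str         # provenance of the datasets the model was trained on
    fairness_tested: bool      # was the service checked for bias against certain groups?
    robustness_tested: bool    # was it evaluated against adversarial attacks?
    explainability: str        # how the service's decisions can be inspected
    known_limitations: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the factsheet for publication alongside the service."""
        return json.dumps(asdict(self), indent=2)

# Example: a hypothetical supplier filling out and publishing a factsheet
sheet = AIServiceFactsheet(
    service_name="loan-prescreening-v2",
    intended_use="Pre-screening consumer loan applications; not for final decisions",
    training_data="Anonymized historical applications, 2015-2018",
    fairness_tested=True,
    robustness_tested=False,
    explainability="Per-decision feature attributions exposed via the service API",
    known_limitations=["Not validated on applicants outside the training market"],
)
print(sheet.to_json())
```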

Mojsilovic thinks such factsheets could give companies a competitive advantage in the marketplace, similar to the way appliance makers get their products Energy Star-rated for power efficiency.

“Like nutrition labels for foods or information sheets for appliances, factsheets for AI services would provide information about the product’s important characteristics,” Mojsilovic wrote. “The issue of trust in AI is top of mind for IBM and many other technology developers and providers. AI-powered systems hold enormous potential to transform the way we live and work but also exhibit some vulnerabilities, such as exposure to bias, lack of explainability, and susceptibility to adversarial attacks. These issues must be addressed in order for AI services to be trusted.”

Several core pillars form the basis for trust in AI systems, Mojsilovic explained: fairness, robustness, and explainability. Impartial AI systems can be credibly believed not to contain biased algorithms or datasets, and not to contribute to the unfair treatment of certain groups. Robust systems are presumed safe from adversarial attacks and manipulation. And explainable systems aren’t a “black box” — their decisions are understandable by both researchers and developers. […]
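As a concrete (and purely illustrative) example of what fairness testing might involve, one widely used check is demographic parity: comparing the rate of favorable outcomes a model produces across groups. The sketch below is a generic implementation of that metric, not a method the article attributes to IBM:

```python
def demographic_parity_gap(predictions, groups, positive=1):
    """Absolute difference in positive-outcome rates between two groups.

    predictions: model outputs (e.g., 0 = deny, 1 = approve)
    groups: group label for each prediction; exactly two distinct labels
    """
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in members if p == positive) / len(members)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Toy example: approval decisions for two demographic groups
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# -> 0.50: group A is approved far more often, a signal worth auditing
```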

