
To Be Ethical, AI Must Become Explainable. How Do We Get There?


As over-hyped as AI is—everyone’s talking about it, few fully understand it, it might leave us all unemployed but also solve all the world’s problems—its list of accomplishments is growing.

AI can now write realistic-sounding text, give a debating champ a run for his money, diagnose illnesses, and generate fake human faces—among much more.

After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that more often than not, even the creators don’t know exactly why they’ve arrived at those conclusions or outcomes. There’s no easy way to trace a system’s rationale, so to speak. The further we go down this opaque path, the more likely we are to end up somewhere we don’t want to be—and may not be able to come back from.

In a panel at the South by Southwest interactive festival last week titled “Ethics and AI: How to plan for the unpredictable,” experts in the field shared their thoughts on building more transparent, explainable, and accountable AI systems.

Not New, but Different

Ryan Welsh, founder and director of explainable AI startup Kyndi, pointed out that having knowledge-based systems perform advanced tasks isn’t new; he cited logistical, scheduling, and tax software as examples. What’s new is the learning component, our inability to trace how that learning occurs, and the ethical implications that could result.

“Now we have these systems that are learning from data, and we’re trying to understand why they’re arriving at certain outcomes,” Welsh said. “We’ve never actually had this broad societal discussion about ethics in those scenarios.”

Rather than continuing to build AIs with opaque inner workings, engineers must start focusing on explainability, which Welsh broke down into three subcategories. Transparency and interpretability come first, and refer to being able to find the units of high influence in a network, as well as the weights of those units and how they map to specific data and outputs.
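To make the idea of “units of high influence” concrete, here is a minimal sketch (not Kyndi’s method, and with entirely made-up weights) of how one might inspect a toy one-hidden-layer network: each hidden unit’s contribution to the output is its activation times its outgoing weight, so ranking units by the size of that product identifies which ones dominate a given prediction.

```python
import numpy as np

# Toy network with made-up random weights, purely for illustration:
# 4 inputs -> 3 ReLU hidden units -> 1 linear output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 1))   # hidden -> output weights

def forward(x):
    h = np.maximum(W1.T @ x, 0.0)   # ReLU hidden activations
    return h, float(W2.T @ h)       # hidden activations, scalar output

x = np.array([1.0, -0.5, 2.0, 0.3])  # an arbitrary example input
h, y = forward(x)

# Influence of each hidden unit on this output: activation * outgoing weight.
contributions = h * W2[:, 0]

# Rank units by absolute contribution; the top ones "explain" the prediction.
for i in np.argsort(-np.abs(contributions)):
    print(f"hidden unit {i}: activation={h[i]:.3f}, "
          f"weight={W2[i, 0]:.3f}, contribution={contributions[i]:.3f}")

# Because the readout is linear, the contributions sum exactly to the output.
assert np.isclose(contributions.sum(), y)
```

This kind of exact decomposition only works because the last layer is linear; for deep nonlinear networks, practitioners fall back on approximations such as gradient-based attribution, which is part of why interpretability remains hard.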

Then there’s provenance: knowing where something comes from. In an ideal scenario, for example, OpenAI’s new text generator would be able to generate citations in its text that reference academic (and human-created) papers or studies.[…]

read more – copyright by singularityhub.com
