copyright by singularityhub.com
As over-hyped as AI is—everyone’s talking about it, few fully understand it, it might leave us all unemployed but also solve all the world’s problems—its list of accomplishments is growing. AI can now write realistic-sounding text, give a debating champ a run for their money, diagnose illnesses, and generate fake human faces, among much more.
After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that more often than not, even the creators don’t know exactly why they’ve arrived at those conclusions or outcomes. There’s no easy way to trace a system’s rationale, so to speak. The further we let
In a panel at the South by Southwest Interactive festival last week titled “Ethics and
Not New, but Different
Ryan Welsh, founder and director of explainable
“Now we have these systems that are learning from data, and we’re trying to understand why they’re arriving at certain outcomes,” Welsh said. “We’ve never actually had this broad society discussion about ethics in those scenarios.”
Rather than continuing to build AIs with opaque inner workings, engineers must start focusing on explainability, which Welsh broke down into three subcategories. Transparency and interpretability come first, and refer to being able to find the units of high influence in a
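To make the idea of "units of high influence" concrete, here is a toy sketch—entirely invented for illustration, not anything the panelists described. In a simple linear model, each input's influence on the prediction can be read directly off the magnitude of its learned weight, which is one reason such models are considered interpretable:

```python
# Hypothetical sketch: ranking input features by influence in a linear model.
# The weights and feature names below are made up for illustration; in a real
# system they would come from a trained model.

def rank_influence(weights, feature_names):
    """Return (name, weight) pairs sorted by absolute weight, largest first."""
    return sorted(
        zip(feature_names, weights),
        key=lambda pair: abs(pair[1]),
        reverse=True,
    )

# Pretend these weights came from a trained diagnostic model.
weights = [0.02, -1.4, 0.7]
names = ["age", "blood_pressure", "glucose"]

for name, w in rank_influence(weights, names):
    print(f"{name}: {w:+.2f}")
```

For deep networks the same question—which units most influenced this outcome—is far harder to answer, which is exactly the gap explainability research tries to close.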
Then there’s provenance: knowing where something comes from. In an ideal scenario, for example, Open