Alongside the excitement and hype about our growing reliance on artificial intelligence, there’s fear about the way the technology works. A recent MIT Technology Review article titled “The Dark Secret at the Heart of AI” warned: “No one really knows how the most advanced algorithms do what they do. That could be a problem.”

Thanks to this uncertainty and lack of accountability, a report by the AI Now Institute recommended that public agencies responsible for criminal justice, health care, welfare and education shouldn’t use such technology.

Given these types of concerns, the unseeable space between where data goes in and where answers come out is often referred to as a “black box” — seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of A.I., the term more broadly suggests an image of being in the “dark” about how the technology works: We provide the data, models and architectures, and then computers provide us answers while continuing to learn on their own, in a way that’s seemingly impossible — and certainly too complicated — for us to understand.

Health care and the unknown

There’s particular concern about this in health care, where A.I. is used to classify which skin lesions are cancerous, to identify very early-stage cancer from blood, to predict heart disease, to determine what compounds in people and animals could extend healthy life spans and more. But these fears about the implications of the black box are misplaced. A.I. is no less transparent than the way in which doctors have always worked — and in many cases it represents an improvement, augmenting what hospitals can do for patients and the entire health care system. After all, the black box in A.I. isn’t a new problem created by new technology: Human intelligence itself is — and always has been — a black box.

Black boxes are nothing new

Let’s take the example of a human doctor making a diagnosis. Afterward, a patient might ask that doctor how she made that diagnosis, and she would probably share some of the data she used to draw her conclusion. But could she really explain how and why she made that decision — what specific data from what studies she drew on, what observations from her training or mentors influenced her, what tacit knowledge she gleaned from her own and her colleagues’ shared experiences, and how all of this combined into that precise insight? Sure, she’d probably give a few indicators about what pointed her in a certain direction — but there would also be an element of guessing, of following hunches. And even if there weren’t, we still couldn’t be sure there weren’t other factors involved of which she wasn’t even consciously aware. […]