We are in the midst of a seismic global economic shift driven by artificial intelligence (AI). From how work is done to how we protect the public to how we build urban infrastructure, the power of AI can be seen on a massive scale. In fact, global spending on AI is predicted to reach $52.2 billion annually by 2021, with billions more to be gained in efficiencies and savings.

Consultancy PwC estimates that AI could contribute up to $15.7 trillion to the global economy in 2030, more than the combined output of China and India today. Few are blind to AI’s enormous potential. However, another dialogue is emerging around AI: whether, and to what extent, AI is subject to bias. Critics have noted that AI models are only as good as the data they are fed, and that deep learning systems are therefore not neutral.

Certainly, AI is inherently biased. However, I would also posit that all intelligent systems, including humans, are biased, because our own cognition is predicated on our personal experience and knowledge (our “training data,” in AI parlance). Amid the hyperbole surrounding AI today, bias is being cast as an evil, crippling flaw unique to AI that will limit its value and widespread adoption. I strongly disagree.

As Jonathan Vanian notes in an article for Fortune, AI is only as good as the data that humans provide. Vanian goes on to write that, as AI practitioners, we know: “the data used to train deep-learning systems isn’t neutral. It can easily reflect biases, conscious and unconscious, of the people who assemble it. Data can be slanted by history, with trends and patterns that reflect centuries-old discrimination.” Vanian points out that a sophisticated AI algorithm, or even a human statistician, could scan a historical database and conclude that white men are the most likely to succeed as CEOs, not recognizing that, until recently, people who weren’t white men were seldom afforded the opportunity to ascend to a CEO role. Blindness to bias is the fundamental challenge, not bias itself. While we speak about it in careful and diplomatic terms, it is top of mind for everyone in the AI arena. But as I have seen first-hand, bias in AI can be navigated in much the same way that we overcome bias in humans, clearing the way to realize the full potential of artificial intelligence.
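To make the mechanism concrete, here is a minimal sketch of the pattern Vanian describes, using invented data (the groups, promotion rates, and record counts are all hypothetical). When the historical record encodes exclusion, even a perfectly faithful frequency count reproduces the discrimination as a “prediction”:

```python
import random

random.seed(0)

# Hypothetical historical records of (group, became_ceo). For decades only
# members of group "A" were considered for the role, so the label reflects
# access to opportunity, not ability.
records = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    became_ceo = group == "A" and random.random() < 0.10
    records.append((group, became_ceo))

def p_ceo_given(group):
    """Naive 'model': estimate P(CEO | group) directly from biased history."""
    outcomes = [ceo for g, ceo in records if g == group]
    return sum(outcomes) / len(outcomes)

print(f"P(CEO | group A) = {p_ceo_given('A'):.3f}")  # roughly 0.10
print(f"P(CEO | group B) = {p_ceo_given('B'):.3f}")  # 0.000: exclusion, not inability
```

The model has done nothing wrong statistically; the slant was in the data before any algorithm touched it, which is exactly why blindness to that slant, not the math, is the real hazard.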

It’s instructive to examine how human bias has been handled in a historical context. The proven antidote for human bias is collective wisdom. As an article in the MIT Technology Review explains: “In 1906, the English polymath Francis Galton visited a country fair in which 800 people took part in a contest to guess the weight of a slaughtered ox. After the fair, he collected the guesses and calculated their average, which was 1,208 pounds. To Galton’s surprise, this was within 1% of the true weight of 1,198 pounds.” “Vox Populi,” the article Galton wrote about the experience, was published in a 1907 issue of Nature and is one of the earliest descriptions of the wisdom-of-the-crowd phenomenon: the observation that the collective opinion of a group of individuals can be better than a single expert’s opinion. […]
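The arithmetic behind Galton’s result is easy to reproduce. Below is a minimal sketch in Python simulating the fair: each of 800 guessers estimates the true weight with a personal systematic bias plus random noise (the error magnitudes are assumptions chosen for illustration). Averaging cancels the individual errors, so the crowd lands far closer to the truth than a typical guesser does.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198   # pounds: the ox's actual weight in Galton's account
N_GUESSERS = 800     # fairgoers in the 1906 contest

def individual_guess(true_value):
    """One fairgoer's estimate: the truth distorted by a personal systematic
    bias plus random noise. The error magnitudes are illustrative."""
    personal_bias = random.gauss(0, 150)  # each guesser's systematic skew
    noise = random.gauss(0, 100)          # moment-to-moment randomness
    return true_value + personal_bias + noise

guesses = [individual_guess(TRUE_WEIGHT) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"Crowd average:            {crowd_estimate:.0f} lb")
print(f"Crowd error:              {abs(crowd_estimate - TRUE_WEIGHT):.0f} lb")
print(f"Typical individual error: {typical_individual_error:.0f} lb")
```

Note the caveat built into the simulation: the cancellation works only because the individual biases are independent and not all skewed in the same direction. If every guesser shared the same slant, as with models trained on the same slanted history, averaging would preserve the error rather than remove it.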