Machines have no emotions. So they must be objective, right? Not so fast. A new wave of algorithmic issues has recently hit the news, bringing the bias of AI into sharper focus. The question now is not just whether we should allow AI to replace humans in industry, but how to prevent these tools from perpetuating the race and gender biases that harm society if and when they do.

First, a look at bias itself. Where do machines get it, and how can it be avoided? The answer is not as simple as it seems. Put simply, “machine bias is human bias.” And that bias can develop in a multitude of ways. For example:

Data-driven bias: If there is one objective lesson machines have learned, it’s this: garbage in, garbage out. Machines do not question the data they are given; they look for patterns within it. For instance, learning systems trained to predict recidivism rates in parolees scored Black defendants as almost twice as likely as white defendants to be high-risk reoffenders, yet white defendants were far more likely to be labeled low-risk and then go on to commit other crimes. When the data is skewed by human bias, the AI’s results will be skewed as well, in this case affecting something as serious as human freedom.
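
To make that mechanism concrete, here is a minimal sketch in Python. The numbers and features are invented for illustration (this is not real COMPAS data): the model never sees race directly, only a correlated proxy feature, yet biased historical labels teach it to score one group as higher risk anyway.

```python
# A minimal sketch of data-driven bias. All values are illustrative,
# not real recidivism data. Both groups have identical true reoffense
# rates, but the historical labels over-flag group 1.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # hidden protected attribute (0 or 1)
true_risk = rng.random(n) < 0.30     # identical real reoffense rate for both groups

# A proxy feature that correlates with group (e.g., arrest counts shaped
# by uneven policing), plus some genuinely predictive signal.
proxy = group + rng.normal(0, 0.5, n)
signal = true_risk.astype(float) + rng.normal(0, 1.0, n)
X = np.column_stack([proxy, signal])

# Historical labels encode human bias: group 1 gets extra "high risk" flags.
label = true_risk | ((group == 1) & (rng.random(n) < 0.20))

clf = LogisticRegression().fit(X, label)
scores = clf.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", scores[group == 0].mean())
print("mean predicted risk, group 1:", scores[group == 1].mean())
# Group 1 scores noticeably higher despite identical true rates:
# the bias entered through the labels, not the learning algorithm.
```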

Interactive bias: By now, we’re probably all familiar with the disaster that was Tay, Microsoft’s Twitter-based chatbot that turned into an aggressive racist after learning through interaction with its Twitter followers. When machines are taught to learn from those around them, they don’t decide which things to filter. They simply take it all in, for better or worse.
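
Tay’s actual design was never made public, but the failure mode is easy to reproduce in miniature. The entirely hypothetical toy bot below stores every message it receives and parrots back the most frequent ones; with no filter between input and memory, a handful of hostile users can take over its vocabulary.

```python
# A toy illustration of interactive bias, not Tay's real architecture.
from collections import Counter
import random

class EchoLearner:
    def __init__(self):
        self.memory = Counter()

    def learn(self, message: str) -> None:
        # No filtering step: everything users say becomes training data.
        self.memory[message] += 1

    def reply(self) -> str:
        # The bot parrots whatever it has seen most often, so a
        # coordinated group of users can steer its entire output.
        phrases, counts = zip(*self.memory.items())
        return random.choices(phrases, weights=counts, k=1)[0]

bot = EchoLearner()
for msg in ["hello!", "hello!",
            "something toxic", "something toxic", "something toxic"]:
    bot.learn(msg)
print(bot.reply())  # most likely: "something toxic"
```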

Emergent bias: Somewhat like interactive bias, emergent bias develops through a system’s interaction with its users over time. For instance, all of us on Facebook know we don’t always see the updates our friends post. That’s because Facebook runs an algorithm that decides which posts we are most likely to want to see. Unfortunately, that often means there is a lot we never even know about, just because Facebook’s math decided against it.
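
A rough sketch of that feedback loop, with invented topic names and scores: posts the system predicts you will engage with get shown and reinforced, while everything else silently drops out of view.

```python
# A schematic engagement feedback loop; topics and weights are made up.
topics = ["friends", "news", "sports", "art"]
score = {t: 1.0 for t in topics}   # the algorithm's estimate of your interest
likes = {"friends", "news"}        # what you actually click on

for _ in range(50):
    # Rank all candidate posts and show only the top 2 (the "feed").
    shown = sorted(topics, key=lambda t: score[t], reverse=True)[:2]
    for t in shown:
        # Engagement reinforces the score; no engagement decays it.
        score[t] += 0.1 if t in likes else -0.1
    # Posts that are never shown never get a chance to earn engagement.

print({t: round(s, 2) for t, s in score.items()})
# "sports" and "art" stay frozen at their initial scores and are never
# surfaced again: the filtering emerged from the loop itself, without
# anyone deciding to hide those topics.
```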

Similarity bias: As the country deals with a new round of political strife and racism, similarity bias is another huge issue. It emerges when algorithms distort the content people see when they look for news and information online. Rather than showing all of the available news options, these systems show the options a reader is most likely to agree with, a situation that further compounds political division on both sides.
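
Here is a bare-bones similarity recommender to show how that happens; the article titles and stance vectors are made up. Articles are ranked by how close they sit to the reader’s click history, so agreeable pieces float to the top and the dissenting one sinks.

```python
# A minimal similarity-based news ranker; all data is invented.
import numpy as np

# Each article: (title, stance vector on two hypothetical issues).
articles = [
    ("Policy X is working",      np.array([ 1.0,  0.2])),
    ("Policy X is failing",      np.array([-1.0, -0.1])),
    ("Both sides on Policy X",   np.array([ 0.0,  0.0])),
    ("More praise for Policy X", np.array([ 0.9,  0.3])),
]

reader_history = np.array([0.8, 0.1])   # built from past clicks

def cosine(a, b):
    # Cosine similarity; a zero vector scores 0 (no signal either way).
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

ranked = sorted(articles,
                key=lambda art: cosine(art[1], reader_history),
                reverse=True)
for title, _ in ranked:
    print(title)
# The agreeable articles rank first and the dissenting one last, so
# every click makes the reader's bubble a little tighter.
```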

The question remains: what do we do about it? As noted in the piece Artificial Intelligence: To Be Feared or Embraced?, one of the most maddening aspects of AI is that even the people developing it don’t fully understand how it works. Yet AI and machine learning seem to be on a bullet train, and most companies show no sign of slowing down. I believe that as awareness of AI bias and “math-washing” continues to grow, so will the demand for greater transparency in AI development. After all, the algorithms major companies use to feed us news and information are shaping the decisions we make in our businesses and personal lives. […]

