We have seen it too many times before. A major security or privacy breach creates a crisis for an enterprise: headlines, lawsuits, and sometimes the CEO testifying before Congress. The CIO works around the clock, only to be rewarded with a pink slip and an uncertain career.

Researchers from MIT and Stanford University tested three commercially released facial-analysis programs from major technology companies and found that the software contains clear skin-type and gender biases. The programs are good at recognizing white males, but their error rates rise sharply for women, and rise further the darker the subject's skin tone. The news broke last week; the full findings will be presented at the upcoming Conference on Fairness, Accountability, and Transparency.

Bias will damage the relationship between the enterprise and the public. It can make a firm a target for critics, who will view incidents like this as evidence that the firm does not share its customers' values. And as AI makes more and more decisions in areas such as investments, healthcare, lending, and employment, the risk of harming people, and with it the risk of financial and even criminal liability, will only grow.

When we began storing and transmitting valuable, often personal and financial data, we created the risk of the data breach. In the age of Artificial Intelligence and automation technologies, bias is the new breach. Artificial Intelligence and automation technologies are critical to your company's strategy, but they bring new sets of risks and issues that CIOs and other leaders must address.

It is critical to create the systems and processes that will prevent bias from creeping into a company's AI software, and that will detect it and mitigate the damage when it does. That, not loss of jobs or threats to personal safety from AI, will be the biggest challenge of the next few years.
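One concrete form such a detection process can take is a disaggregated accuracy audit: before a model ships, its performance is computed separately for each demographic group rather than averaged into a single headline number. The sketch below is a minimal Python illustration; the group names and prediction records are invented for the example, and a real audit would run against a properly governed, representative evaluation set.

```python
# Minimal sketch of a disaggregated accuracy audit. The group labels and
# prediction records below are synthetic, invented purely for illustration.
from collections import defaultdict

# Each record: (true_label, predicted_label, demographic_group)
predictions = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (0, 0, "group_a"),
    (1, 0, "group_b"), (0, 0, "group_b"), (1, 0, "group_b"), (0, 1, "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, predicted, group in predictions:
    total[group] += 1
    correct[group] += int(predicted == true_label)

# Report accuracy per group instead of one blended number; a wide gap is a
# signal to investigate training-data coverage before release.
for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f} "
          f"({total[group]} samples)")
```

The same loop extends naturally to per-group false-positive and false-negative rates, which is essentially the kind of disaggregated error analysis the facial-recognition researchers performed.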

Public examples

It may seem odd that a software program could have built-in bias, but the reasons for it are very simple. The experts who develop AI technology are the ones feeding data into their programs. If they use data that already includes standard human biases, their AI software will reflect those biases too. It is not done consciously, but unfortunately it has not been a major consideration when initial programming begins on systems such as Alexa, Siri, or Google Home.
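To make that mechanism concrete, here is a toy sketch (all feature values, labels, and group names are invented) of how a model fit to data dominated by one group can look accurate overall while quietly failing an underrepresented group: the decision rule that is optimal for the majority becomes the rule applied to everyone.

```python
# Toy illustration of bias inherited from skewed training data. All feature
# values, labels, and group names are synthetic.

def fit_threshold(samples):
    """Choose the decision threshold that minimizes error on the training set."""
    best_threshold, best_errors = None, len(samples) + 1
    for threshold in sorted({value for value, _, _ in samples}):
        errors = sum(
            (value >= threshold) != bool(label)
            for value, label, _ in samples
        )
        if errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

# Training data: (feature, label, group). group_a outnumbers group_b 6:1,
# and group_b's positives sit lower on this feature than group_a's.
train = [
    (7.0, 1, "group_a"), (7.5, 1, "group_a"), (8.0, 1, "group_a"),
    (3.0, 0, "group_a"), (4.8, 0, "group_a"), (5.2, 0, "group_a"),
    (4.5, 1, "group_b"),
]

threshold = fit_threshold(train)  # lands at 7.0: optimal for group_a only

test = [
    (7.2, 1, "group_a"), (3.1, 0, "group_a"),
    (4.6, 1, "group_b"), (2.0, 0, "group_b"),
]
for value, label, group in test:
    predicted = int(value >= threshold)
    flag = "ok" if predicted == label else "MISSED"
    print(f"{group}: feature={value}, label={label}, "
          f"predicted={predicted} [{flag}]")
```

Nothing in the code is malicious; the skew in the data alone produces the skew in the outcomes, which is exactly why audits like the one sketched earlier need to run on evaluation sets that represent every group the system will serve.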

Some critics would like to see AI interaction be both gender- and ethnically neutral. We might want to adopt more generic, robot-sounding voices instead of the standard female voice we have been exposed to. This might be taking it a bit far, but the point is valid. We need to be constantly vigilant against the possibility of bias as we integrate AI into business organizations. […]
