Artificial intelligence (AI) has advanced rapidly, from driverless vehicles to voice automation in our households, and is no longer just a term from sci-fi books and movies. The future of artificial intelligence is arriving sooner than the projections of the futuristic film Minority Report.

Copyright by www.forbes.com

AI will become an essential part of our lives in the next few years, approaching the level of super-intelligent computers that transcend human analytical abilities. Imagine unlocking your car simply by approaching it, or having products delivered to your door by drone; AI can make it all a reality.

However, recent discussions about algorithmic bias expose the flaws in these supposedly perfect AI systems. Algorithmic bias is the lack of fairness that emerges from the outputs of a computer system. That lack of fairness comes in different forms, but can broadly be understood as prejudice against one group of people based on a particular categorical distinction.
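
This lack of fairness can be measured. Below is a minimal sketch in Python, using entirely hypothetical loan decisions, of one common measure: the demographic parity difference, the gap in favorable-outcome rates between two groups.

```python
# Minimal sketch: demographic parity difference, the gap in
# favorable-outcome rates between groups. Data here is hypothetical.

def demographic_parity_difference(outcomes, groups, favorable=1):
    """Largest gap in favorable-outcome rates across groups."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in selected if o == favorable) / len(selected)
    vals = list(rates.values())
    return max(vals) - min(vals)

# Hypothetical model decisions (1 = loan approved) for two groups.
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
group = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, group))  # 0.4 -> a large gap
```

A value near zero means both groups receive favorable outcomes at similar rates; a large value is a warning sign worth investigating.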

Human bias is an issue that has been well researched in psychology for years. It arises from implicit associations: biases we are not conscious of that can nevertheless affect an event's outcome. Over the last few years, society has begun to grapple with exactly how far these human prejudices can find their way into AI systems, with devastating consequences. Being profoundly aware of these threats and seeking to minimize them is an urgent priority as many firms look to deploy AI solutions. Algorithmic bias in AI systems can take varied forms, such as gender bias, racial prejudice and age discrimination.

The critical question to ask is: What is the root cause of bias in AI systems, and how can it be prevented? Bias may infiltrate algorithms in numerous forms. AI systems learn to make decisions from training data, which may contain skewed human decisions or represent historical or social inequities. Even if sensitive variables such as gender, ethnicity or sexual identity are excluded, a model can pick up the same bias through correlated proxy variables, as the sketch below illustrates.
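
The following sketch demonstrates this mechanism on synthetic data: the sensitive attribute is deliberately left out of the features, yet a correlated proxy (a hypothetical zip-code column) lets a simple model reproduce the historical skew in the labels. All names and data here are illustrative assumptions, not from any real system.

```python
# Synthetic sketch of proxy bias: the sensitive attribute is excluded
# from the features, but a correlated proxy leaks it back in.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Sensitive attribute (0/1); deliberately NOT given to the model.
group = rng.integers(0, 2, n)
# Hypothetical proxy that agrees with the group ~90% of the time.
zip_code = (group ^ (rng.random(n) < 0.1)).astype(int)
# A legitimate feature.
income = rng.normal(50, 10, n)

# Historically biased labels: group 1 was approved far less often
# at the same income level.
approved = ((income > 45) & ((group == 0) | (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([zip_code, income])  # 'group' itself is excluded
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The approval rates still diverge: the model recovered the bias via the proxy.
```

Dropping the sensitive column is therefore not a sufficient defense; the correlations that encode historical inequity remain in the rest of the data.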

The role of training data is vital in introducing bias. For instance, in 2016, Microsoft released Tay, an AI-based conversational chatbot on Twitter that was supposed to interact with people through tweets and direct messages. However, it started replying with highly offensive and racist messages within a few hours of its release. The chatbot was trained on anonymous public data and had a built-in internal learning feature, which enabled a coordinated group of people to introduce racist bias into the system. Some users were able to inundate the bot with misogynistic, racist and anti-Semitic language. This incident opened a broader audience's eyes to the potential negative implications of algorithmic bias in AI systems.
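
The sketch below is a toy illustration, not Microsoft's actual design, of why an unmoderated learning feature is dangerous: a bot that adds every incoming message to its reply pool can be steered almost completely by a coordinated flood.

```python
# Toy sketch (hypothetical, not Tay's real architecture): a bot that
# learns from raw user input with no filtering can be poisoned at will.

import random

class NaiveBot:
    def __init__(self):
        self.replies = ["Hello!", "Nice to meet you."]

    def learn(self, message: str) -> None:
        # No moderation: whatever users say becomes a candidate reply.
        self.replies.append(message)

    def respond(self) -> str:
        return random.choice(self.replies)

bot = NaiveBot()
for msg in ["<offensive message>"] * 98:  # coordinated flood
    bot.learn(msg)

# After the flood, ~98% of the bot's replies echo the attackers.
hits = sum(bot.respond() == "<offensive message>" for _ in range(1000))
print(hits / 1000)
```

The lesson is that online learning from public input needs filtering, rate-limiting and human review before new material can influence the system's behavior.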

Facial recognition systems are also under scrutiny, and class imbalance is a leading issue in facial recognition software. "Labeled Faces in the Wild," a dataset considered the benchmark for testing facial recognition software, was roughly 70% male and 80% white. Although that might be good enough for testing lower-quality pictures, whether such a skewed sample represents faces "in the wild" is highly debatable. […]
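
One practical safeguard is to audit both a benchmark's demographic composition and a model's per-group error rate before trusting an overall accuracy figure. The sketch below uses hypothetical face-recognition results to show how a strong aggregate number can hide a large gap for an under-represented group; the group names and counts are illustrative only.

```python
# Minimal audit sketch: report each group's share of the test set and
# the model's per-group accuracy. All records here are hypothetical.

from collections import Counter, defaultdict

def audit(records):
    """records: list of (group, prediction_correct) pairs."""
    composition = Counter(g for g, _ in records)
    total = len(records)
    correct = defaultdict(int)
    for g, ok in records:
        correct[g] += ok
    for g in composition:
        share = composition[g] / total
        acc = correct[g] / composition[g]
        print(f"{g:>14}: {share:5.1%} of data, accuracy {acc:5.1%}")

# Hypothetical results over a skewed test set.
records = (
    [("white_male", True)] * 135 + [("white_male", False)] * 5 +
    [("darker_female", True)] * 14 + [("darker_female", False)] * 6
)
audit(records)
# Overall accuracy is ~93%, yet the under-represented group sees only
# 70% accuracy: the aggregate number hides the per-group gap.
```

Audits like this make class imbalance visible before a system is deployed, rather than after it fails for the people the benchmark under-represents.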

Read more: www.forbes.com

