There is little doubt that the pace of innovation is accelerating at unprecedented levels. Technology-enabled breakthroughs are happening with increasing frequency, extending the human lifespan, improving access to basic needs and leaving the general public with little time to adjust to and comprehend the magnitude of these advances. Interest in the moral aspects of these advances is growing just as fast.


Within the field of Artificial Intelligence (AI), this phenomenon is just as true: the accelerated pace of AI development is generating huge interest in moral AI and in how, as imperfect human beings, we are teaching AI the difference between right and wrong. As AI systems continue to evolve, humanity will place increasing levels of trust in them for decision making, especially as these systems transition from being perceived as mere tools to operating as autonomous agents making their own decisions.

The question of the ethics of the decisions made by AI systems must be addressed.

Ethical fundamentals of everyday life

The question of ethics finds some of its roots in the notion of fairness. What is fairness? How does one define it? Instinctively, human beings grasp what is fair and what is not. As an example, we commonly accept that "one for me, two for you" is not fair. We teach our children what it means to be fair, why we need to share, and what we believe the moral and ethical constructs around fairness and sharing to be. The concept of fairness also features prominently in the United Nations Sustainable Development Goals: Gender Equality (goal #5), Decent Work and Economic Growth (goal #8) and Reduced Inequalities (goal #10) are all arguably built on the concept of fairness.

But how do we teach AI systems about fairness in the same way we teach our children about fairness, especially when an AI system decides that achieving its goal optimally requires taking an unfair advantage? Consider an AI system in charge of ambulance response with the goal of servicing as many patients as possible. It might well prioritise serving 10 people with small scratches and surface cuts over serving 2 people with severe internal injuries, because serving 10 people scores better against its goal. Although this optimises the number of patients served, it fundamentally falls flat when one considers the intent of what was meant to be accomplished, as the sketch below illustrates.
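To make the trap concrete, here is a minimal, purely hypothetical sketch in Python. The scenario, injury scores and option names are invented for illustration and are not drawn from any real dispatch system; the point is only that the "ethics" lives entirely in the choice of objective function.

```python
# Hypothetical ambulance-dispatch choice. All numbers are invented.
# Each option: (description, patients served, total clinical severity addressed)
options = [
    ("treat 10 minor scratches", 10, 10 * 1),   # severity 1 per patient
    ("treat 2 internal bleeds",   2,  2 * 50),  # severity 50 per patient
]

def naive_objective(option):
    """'Service as many patients as possible' - counts heads only."""
    _, patients, _ = option
    return patients

def intent_aware_objective(option):
    """Weight patients by clinical severity instead of counting them."""
    _, _, severity = option
    return severity

print("naive choice:      ", max(options, key=naive_objective)[0])
print("severity-weighted: ", max(options, key=intent_aware_objective)[0])
# naive choice:       treat 10 minor scratches
# severity-weighted:  treat 2 internal bleeds
```

The optimiser itself is indifferent; the moral judgement is smuggled in, or left out, by whoever writes the objective.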

In business we have ethical and unethical behaviour, and we have strict codes of conduct regarding what we consider to be ethical and unethical business conduct. We accept that not everything that is legal is ethical and not everything that is unethical is illegal, and as a society we frown upon unethical business conduct, especially from big corporates. How does this transfer to AI systems? Surely we wouldn't want AI systems that stay within the bounds of the law but push as hard as they can against those boundaries to see what they can get away with, exploiting loopholes to fulfil their goals.

Data perpetuating embedded bias


AI systems feed off data. If AI is the new electricity, data is the grid it runs on. AI systems look at data, evaluate it against their goals and then find the most optimal path towards achieving those goals. Data is absolutely critical for AI systems to be effective. Machine learning (ML) algorithms gain their experience from the data they are given, and if that data is biased or ethically or morally tainted, the ML algorithms will perpetuate this. What about factors that are not expressed in data, such as the value of another person, the value of connections, the value of a relationship? The biggest challenge, unfortunately, is that data quite simply does not give you ethics.
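A small, hypothetical sketch shows how this perpetuation happens. The data, column names and numbers below are all invented: two groups of candidates have identical skill, but the historical hiring decisions the model learns from applied a higher bar to one group, and the trained model simply repeats that double standard.

```python
# Illustrative only: a model trained on biased historical decisions
# reproduces the bias. Uses scikit-learn's LogisticRegression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(60, 10, n)            # skill is identical across groups
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B

# Historical labels produced by biased human decisions:
# group B needed a visibly higher skill score to be hired.
hired = (skill + rng.normal(0, 5, n) > 55 + 10 * group).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two new candidates with the same skill, different group.
p_a = model.predict_proba([[62, 0]])[0, 1]
p_b = model.predict_proba([[62, 1]])[0, 1]
print(f"P(hired | skill=62, group A) = {p_a:.2f}")
print(f"P(hired | skill=62, group B) = {p_b:.2f}")
```

Nothing in the algorithm is malicious; it has faithfully learned the experience it was given, which is exactly the problem.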

Then there’s the issue of blame: who is to blame when AI makes mistakes? The manufacturer, the software supplier, the reseller, the data set, the owner, the user? The issue gets more complicated when we talk about loss of life in an accident. Consider incidents involving AI systems in healthcare: who would be held legally liable? What about autonomous vehicles, which are disrupting the automotive industry and making their way into society sooner rather than later? If we expand on this trend, what about AI systems whose programmed decision making leads them to commit crimes? Are they guilty? Can an AI system be guilty of a crime? Are their programmers to blame? Their data sets? What laws govern this eventuality?

Take our smartphones’ autocorrect function as a simple example. I’m positive many of us have sent a text to a friend only to discover that autocorrect has changed one word into another, often more embarrassing, one, after which we issue some grovelling apology. The point is this: if technology today struggles to understand the intent of a few lines of text, how can we count on it to understand and make life-and-death decisions?