A new set of principles—the Toronto Declaration—aims to put human rights front and centre in the development and application of machine learning technologies. One of the most significant risks with machine learning is the danger of amplifying existing bias and discrimination against certain groups.

© www.openglobalrights.org


In May 2018, Amnesty International, Access Now, and a handful of partner organizations launched the Toronto Declaration on protecting the right to equality and non-discrimination in machine learning systems. The Declaration is a landmark document that seeks to apply existing international human rights standards to the development and use of machine learning systems (or “artificial intelligence”).

Machine learning (ML) is a subset of artificial intelligence. It can be defined as “provid[ing] systems the ability to automatically learn and improve from experience without being explicitly programmed.”

How is this technology relevant to human rights? AI is a powerful technology that could have a potentially transformative effect on many aspects of life—from transportation and manufacturing to healthcare and education. Its use is increasing in all these sectors as well as in the justice system, policing, and the military. AI can increase efficiency, find new insights into diseases, and accelerate the discovery of novel drugs. But with misuse, intentional or otherwise, it can also harm people’s rights.

One of the most significant risks with machine learning is the danger of amplifying existing bias and discrimination against certain groups—often marginalized and vulnerable communities, who already struggle to be treated with dignity and respect. When historical data is used to train machine learning systems without safeguards, those systems can reinforce and even augment existing structural bias. Discriminatory harms can also occur when decisions made in the design of AI systems lead to biased outcomes, whether deliberate or not.
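The mechanism described above can be sketched in a few lines of code. The following is a minimal illustration with entirely synthetic data (the group labels, records, and decision rule are invented for this example, not drawn from the Declaration): a naive model that simply learns each group's historical approval rate will faithfully reproduce a past disparity in its future decisions.

```python
# Synthetic historical hiring records: (group, qualified, hired).
# Group "B" candidates were historically hired far less often,
# even when qualified -- the structural bias we want to expose.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", True, False), ("B", False, False),
]

def hire_rate(records, group):
    """Fraction of past candidates from `group` who were hired."""
    outcomes = [hired for g, _, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A deliberately naive "model": learn each group's historical hire rate,
# then approve new candidates from groups whose rate exceeds 0.5.
model = {g: hire_rate(history, g) for g in ("A", "B")}

def decide(group):
    return model[group] > 0.5

print(model)                      # group A's learned rate dwarfs group B's
print(decide("A"), decide("B"))  # the learned rule reproduces the old disparity
```

Real ML systems are far more complex than this lookup table, but the failure mode is the same: without safeguards, a model optimized to match historical decisions will treat past discrimination as a pattern worth learning.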

When Amnesty started examining the nexus of artificial intelligence and human rights, we were quickly struck by two things. The first was that there appeared to be a widespread and genuine interest in the ethical issues around AI, not only among academics but also among many businesses. This was encouraging—it seemed that lessons had been learned from the successive scandals that hit social media companies, and that there was a movement to proactively address the risks associated with AI. […]