A new set of principles—the Toronto Declaration—aims to put human rights front and centre in the development and application of machine learning technologies.
copyright by www.openglobalrights.org
In May 2018, Amnesty International, Access Now, and a handful of partner organizations launched the Toronto Declaration on protecting the right to equality and non-discrimination in machine learning systems. The Declaration is a landmark document that seeks to apply existing international human rights standards to the development and use of machine learning systems (or “ML systems”).
Machine learning (ML) is a subset of artificial intelligence (AI). It can be defined as “provid[ing] systems the ability to automatically learn and improve from experience without being explicitly programmed.”
How is this technology relevant to human rights? ML is a powerful technology that could have a transformative effect on many aspects of life—from transportation and manufacturing to healthcare and education. Its use is increasing in all these sectors, as well as in the justice system, policing, and the military. ML can increase efficiency, yield new insights into diseases, and accelerate the discovery of novel drugs. But with misuse, intentional or otherwise, it can also harm people’s rights.
One of the most significant risks with ML is the danger of amplifying existing bias and discrimination against certain groups—often marginalized and vulnerable communities, who already struggle to be treated with dignity and respect. When historical data is used to train ML systems without safeguards, these systems can reinforce and even augment existing structural bias. Discriminatory harms can also occur when decisions made in the design of ML systems lead to biased outcomes, whether deliberate or not.
When Amnesty started examining the nexus of ML and human rights, we were quickly struck by two things. The first was that there appeared to be a widespread and genuine interest in the ethical issues around ML, not only among academics but also among many businesses. This was encouraging—it seemed that lessons had been learned from the successive scandals that hit social media companies, and that there was a movement to proactively address the risks associated with ML. […]