
How to create space for ethics in AI

In a year that has seen decades’ worth of global shocks, bad news, and scandals squeezed into 12 excruciatingly long months, the summer already feels like a distant memory.

Copyright by venturebeat.com

In August 2020, the world was in the throes of a major social and racial justice movement, and I argued hopefully in VentureBeat that the term “ethical AI” was finally starting to mean something.

It was not the observation of a disinterested observer but an optimistic vision for coalescing the ethical AI community around notions of power, justice, and structural change. Yet in the intervening months it has proven to be, at best, an overly simplistic vision, and at worst, a naive one. The piece critiqued “second wave” ethical AI as being preoccupied with technical fixes to problems of bias and fairness in AI. It observed that focusing on technical interventions to address ethical harms skewed the conversation away from issues of structural injustice and permitted the “co-option of socially conscious computer scientists” by big tech companies.

I realize now that this argument minimized the contribution of ethical AI researchers – scientists and researchers inside tech companies, and their collaborators – to the broader justice and ethics agenda. I saw only co-option and failed to highlight the critical internal pushback and challenges to entrenched power structures that ethical AI researchers mount, and the potential their radical research has to change the shape of technologies.

Ethics researchers contribute to this movement just by showing up to work every day, taking part in the everyday practice of making technology and championing a “move slow and fix things” agenda against a tide of productivity metrics and growth KPIs. Many of these researchers are taking a principled stand as members of minoritized groups. I was arguing that a focus on technical accuracy narrows the discourse on ethics in AI. What I didn’t recognize was that such research can itself undermine the technological orthodoxy that is at the root of unethical development of tech and AI.

Google’s decision to fire Dr. Timnit Gebru is clear confirmation that ethical tech researchers represent a serious challenge to the companies where they work. Dr. Gebru is a respected Black computer scientist whose most prominent work has championed technically targeted interventions to address ethical harms. Her contract termination by Google has been the subject of much commentary and debate. It reflects an important point: that it doesn’t matter if “ethical AI” is starting to mean something to those of us working to improve how tech impacts society; it only matters if it means something to the most powerful companies in the world.

For that reason, Google’s decision to unceremoniously fire an expert, vocal, high-profile employee opens up a critical faultline in the ethical AI agenda and exposes the underbelly of big tech.

An ethical AI agenda holds that moral principles of right and wrong should shape the development of advanced technologies, even as those technologies are too embryonic, amorphous, or mercurial for existing regulatory frameworks to grasp or restrain at speed. “Ethical AI” aims to plug the gaps with a range of tools – analysis grounded in moral philosophy, critical theory, and social science; principles, frameworks, and guidelines; risk and impact assessments, bias audits, and external scrutiny. It is not positioned as a substitute for law and regulation but as a placeholder for it or a complement to it. Thinking about the ethical issues AI raises should help us identify where regulation is needed, which research should not be pursued, and whether the benefits of technology accrue equitably and sustainably. […]

Read more – venturebeat.com