FAGMA GovTech Pharma Research Solutions

Ethical algorithm design should guide technology regulation

Society expects people to respect certain social values when they are entrusted with making important decisions. They should make judgments fairly. They should respect the privacy of the people whose information they are privy to. They should be transparent about their deliberative process.

But increasingly, algorithms and automated processes are being incorporated into important decision-making pipelines. Human resources departments now routinely use statistical models trained via machine learning to guide hiring and compensation decisions. Lenders increasingly use algorithms to estimate credit risk. And a number of state and local governments now use algorithms to inform bail and parole decisions, and to guide police deployments. Society must continue to demand that important decisions be fair, private, and transparent even as they become increasingly automated.

Nearly every week, a new report of algorithmic misbehavior emerges. Recent examples include an algorithm for targeting medical interventions that systematically led to inferior outcomes for Black patients, a resume-screening tool that explicitly discounted resumes containing the word “women” (as in “women’s chess club captain”), and a set of supposedly anonymized MRI scans that could be reverse-engineered and matched to patient faces and names.

In none of these cases was the root cause malicious intent or obvious negligence on the part of the programmers and scientists who built and deployed the models. Rather, algorithmic bias was an unanticipated consequence of following the standard methodology of machine learning: specifying some objective (usually a proxy for accuracy or profit) and algorithmically searching for the model that maximizes that objective using colossal amounts of data. This methodology produces exceedingly accurate models—as measured by the narrow objective the designer chooses—but will often have unintended and undesirable side effects. The necessary solution is twofold: a way to systematically discover “bad behavior” by algorithms before it can cause harm at scale, and a rigorous methodology to correct it.
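The dynamic described above — a narrow objective optimized in good faith still producing disparate harm — can be illustrated with a small synthetic sketch. Everything here is invented for illustration (the data generator, the group labels, the noise levels are all assumptions, not any real deployed system): one group's qualification signal is simply noisier, so the single threshold that maximizes overall accuracy rejects more of that group's qualified applicants.

```python
import random

random.seed(0)

# Hypothetical synthetic applicants: a numeric score, a group label, and
# whether the applicant is truly qualified. Group B's score is a noisier
# proxy for qualification -- no malice anywhere in the pipeline.
def make_applicants(n=2000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5
        noise = 1.0 if group == "A" else 3.0  # noisier signal for group B
        score = (10 if qualified else 5) + random.gauss(0, noise)
        data.append((score, group, qualified))
    return data

data = make_applicants()

# The standard methodology: search for the decision rule (here, a single
# threshold) that maximizes the narrow objective of overall accuracy.
def accuracy(threshold):
    return sum((s >= threshold) == q for s, _, q in data) / len(data)

best_t = max((t / 10 for t in range(0, 150)), key=accuracy)

# The unintended side effect: false-negative rate (qualified applicants
# rejected) differs sharply by group under the accuracy-optimal threshold.
def fnr(group):
    hits = [s >= best_t for s, g, q in data if g == group and q]
    return 1 - sum(hits) / len(hits)

print(f"overall accuracy: {accuracy(best_t):.2f}")
print(f"FNR group A: {fnr('A'):.2f}   FNR group B: {fnr('B'):.2f}")
```

The model is "accurate" by its own objective, yet qualified members of group B are rejected far more often than those of group A — exactly the kind of side effect the narrow objective never penalized.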

Many algorithmic behaviors that we might consider “antisocial” can be detected via appropriate auditing—for example, explicitly probing the behavior of consumer-facing services such as Google search results or Facebook advertising, and quantitatively measuring outcomes like gender discrimination in a controlled experiment. But to date, such audits have been conducted primarily in an ad hoc, one-off manner, usually by academics or journalists, and often in violation of the terms of service of the companies they are auditing. […]
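A controlled-experiment audit of the kind described above can be sketched in a few lines. The `screening_service` function below is a hypothetical stand-in for the service being audited (a real audit would call the actual product); the audit logic — submit matched inputs that differ in a single attribute and test whether the difference in outcomes is statistically significant — is the point of the sketch.

```python
import math
import random

random.seed(1)

# Hypothetical service under audit (a stand-in, NOT a real API): it
# secretly penalizes resumes containing the word "women", echoing the
# resume-screening example above.
def screening_service(resume_text):
    score = random.gauss(0.6, 0.1)
    if "women" in resume_text:
        score -= 0.15
    return score >= 0.5  # True = applicant advances to interview

# Controlled experiment: matched resume pairs, identical except one term.
base = "Captain, {} chess club; BSc Computer Science; 3 years Python."
n = 500
pass_control = sum(screening_service(base.format("university")) for _ in range(n))
pass_treated = sum(screening_service(base.format("women's")) for _ in range(n))
p1, p2 = pass_control / n, pass_treated / n

# Two-proportion z-test: is the gap in advancement rates significant?
p_pool = (pass_control + pass_treated) / (2 * n)
se = math.sqrt(p_pool * (1 - p_pool) * 2 / n)
z = (p1 - p2) / se
print(f"advance rate: control {p1:.2f}, treated {p2:.2f}, z = {z:.1f}")
```

Because the two resume pools are identical except for the probed term, a large z-statistic is direct evidence of discrimination on that term — the quantitative measurement the audit exists to produce.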

 
