Society expects people to respect certain social values when they are entrusted with making important decisions. They should make judgments fairly. They should respect the privacy of the people whose information they are privy to. They should be transparent about their deliberative process.
But increasingly, algorithms and the automation of certain processes are being incorporated into important decision-making pipelines. Human resources departments now routinely use statistical models trained via machine learning to guide hiring and compensation decisions. Lenders increasingly use algorithms to estimate credit risk. And a number of state and local governments now use algorithmic risk assessments to inform decisions such as bail and parole.
Nearly every week, a new report of algorithmic misbehavior emerges. Recent examples include an algorithm for targeting medical interventions that systematically led to inferior outcomes for black patients, a resume-screening tool that explicitly discounted resumes containing the word “women” (as in “women’s chess club captain”), and a set of supposedly anonymized MRI scans that could be reverse-engineered and matched to patients’ faces and names.
In none of these cases was the root cause malice or obvious negligence on the part of the programmers and scientists who built and deployed these models. Rather, algorithmic bias was an unanticipated consequence of following the standard methodology of machine learning.
Many algorithmic behaviors that we might consider “antisocial” can be detected via appropriate auditing—for example, explicitly probing the behavior of consumer-facing services such as Google search results or Facebook advertising, and quantitatively measuring outcomes like gender discrimination in a controlled experiment. But to date, such audits have been conducted primarily in an ad-hoc, one-off manner, usually by academics or journalists, and often in violation of the terms of service of the companies they are auditing. […]
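To make this kind of audit concrete, the sketch below illustrates one common design, a paired-profile (correspondence) audit, in which matched profiles that differ only in a protected attribute are submitted to a black-box service and positive-outcome rates are compared. This is a minimal illustration, not any real service’s API: the `query_service` stub and all profile data here are hypothetical placeholders that an actual audit would replace with calls to the system being probed.

```python
import random

def query_service(profile):
    """Stand-in for the black-box system under audit.

    In a real audit this would submit the profile to the actual
    consumer-facing service (a job board, an ad platform, etc.)
    and record whether the outcome was positive. Here we simulate
    a response that depends only on experience, not gender.
    """
    base = 0.5 + 0.1 * profile["years_experience"] / 10
    return random.random() < base

def audit(group_a, group_b):
    """Compare positive-outcome rates between two matched groups."""
    rate_a = sum(query_service(p) for p in group_a) / len(group_a)
    rate_b = sum(query_service(p) for p in group_b) / len(group_b)
    return rate_a, rate_b, rate_a - rate_b

if __name__ == "__main__":
    random.seed(0)
    # Matched pairs: identical profiles except for the attribute under test.
    group_f = [{"gender": "F", "years_experience": y} for y in range(1, 101)]
    group_m = [{"gender": "M", "years_experience": y} for y in range(1, 101)]
    rate_f, rate_m, gap = audit(group_f, group_m)
    print(f"rate(F)={rate_f:.2f}  rate(M)={rate_m:.2f}  gap={gap:+.2f}")
```

In this design, a gap persistently far from zero across many matched pairs is evidence of disparate treatment by the audited system; a serious audit would additionally apply a statistical significance test and, as noted above, contend with the audited company’s terms of service.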