Increasingly powerful and inexpensive computers, advanced machine-learning algorithms, and the explosive growth of big data have enabled us to extract insights from all that data and turn them into valuable predictions.
But the prominence of algorithmic methods has led to concerns regarding their overall fairness in the treatment of those whose behavior they’re predicting, such as whether the algorithms systematically discriminate against individuals with a common ethnicity or religion.
Concerns like these have always accompanied important decisions. What's new is the much larger scale at which we now rely on algorithms to help us decide. Human errors that were once idiosyncratic may now become systematic.
“Artificial intelligence is the pursuit of machines that are able to act purposefully to make decisions towards the pursuit of goals,” wrote Harvard University Professor David Parkes in “A Responsibility to Judge Carefully in the Era of Decision Machines,” an essay recently published as part of Harvard’s Digital Initiative.
“Machines need to be able to predict to decide, but decision making requires much more,” he wrote. “Decision making requires bringing together and reconciling multiple points of view. Decision making requires leadership in advocating and explaining a path forward. Decision making requires dialogue.”
Given the widespread role of predictions in business, government and everyday life, AI is already having a major impact on many human activities. Just as was previously the case with arithmetic, communications and access to information, we will find ways to use predictions in all kinds of new applications. Over time, we’ll discover that many tasks can be reframed as prediction problems.
But, “[it’s] decisions, not predictions, that have consequences,” Mr. Parkes wrote. “If the narrative of the present is one of managers who are valued for showing judgment in decision making…then the narrative of the future will be one in which we are valued for our ability to judge and shape the decision-making capabilities of machines.”
The academic community is starting to pay attention to these important and difficult questions underlying the shift from predictions to decisions. Last year Mr. Parkes co-organized a workshop on Algorithmic and Economic Perspectives on Fairness, which brought researchers in algorithmic decision making, machine learning and data science together with policy makers, legal experts, economists and business leaders.
Workshop participants were asked to identify and frame what they felt were the most pressing issues to ensure fairness in an increasingly data- and algorithmic-driven world. Let me summarize some of the key issues they came up with as well as questions to be further investigated.
Decision Making and Algorithms. It’s not enough to focus on the fairness of algorithms because their output is just one of the inputs to a human decision maker. This raises a number of important questions: How do human decision makers interpret and integrate the output of algorithms? When they deviate from the algorithmic recommendation, is it in a systematic way? And which aspects of a decision process should be handled by an algorithm and which by a human to achieve fair outcomes?
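One way to make “deviating in a systematic way” measurable is to log each algorithmic recommendation alongside the human’s final decision and compare override rates across groups. The Python sketch below is purely illustrative: the record fields, sample data and group labels are all invented, and a real analysis would need far more data and proper statistical testing.

```python
# Hypothetical sketch: are human overrides of an algorithmic recommendation
# systematic across groups? All field names and records below are invented.
from collections import defaultdict

decisions = [
    # each record: the algorithm's recommendation, the human's final decision,
    # and a (hypothetical) group attribute of the affected individual
    {"recommended": "approve", "decided": "approve", "group": "A"},
    {"recommended": "approve", "decided": "deny",    "group": "B"},
    {"recommended": "deny",    "decided": "deny",    "group": "A"},
    {"recommended": "approve", "decided": "deny",    "group": "B"},
]

overrides = defaultdict(lambda: [0, 0])  # group -> [override count, total]
for d in decisions:
    overrides[d["group"]][1] += 1
    if d["decided"] != d["recommended"]:
        overrides[d["group"]][0] += 1

for group, (num, total) in sorted(overrides.items()):
    print(f"group {group}: override rate {num / total:.0%} ({num}/{total})")
```

If override rates differ sharply by group, that is at least a signal worth investigating, though on its own it says nothing about which party, human or algorithm, is closer to a fair outcome.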
Assessing Outcomes. It’s very difficult to measure the impact of an algorithm on a decision because of indirect effects and feedback loops. Therefore, it’s very important to monitor and evaluate actual outcomes. Can we properly understand the reasons behind an algorithmic recommendation? How can we design automated systems that will do appropriate exploration in order to provide robust performance in changing environments?
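The “appropriate exploration” question is often framed as a bandit-style trade-off: occasionally trying options the current model disprefers so the system keeps collecting outcome data as the environment drifts. Here is a minimal epsilon-greedy sketch; the actions, reward values and epsilon setting are invented for illustration and are not drawn from the workshop.

```python
# Hypothetical sketch of exploration vs. exploitation: an epsilon-greedy policy
# that occasionally tries non-preferred options so outcome data keeps covering
# the whole action space. Rewards here are simulated; in practice they would
# come from observed real-world outcomes.
import random

actions = ["option_a", "option_b", "option_c"]
estimates = {a: 0.0 for a in actions}   # running estimate of each action's value
counts = {a: 0 for a in actions}
epsilon = 0.1                            # fraction of decisions used for exploration

def observed_reward(action):
    # stand-in for the real, possibly drifting, environment
    return {"option_a": 0.3, "option_b": 0.5, "option_c": 0.4}[action] + random.gauss(0, 0.05)

for step in range(1000):
    if random.random() < epsilon:
        action = random.choice(actions)               # explore
    else:
        action = max(estimates, key=estimates.get)    # exploit current belief
    r = observed_reward(action)
    counts[action] += 1
    estimates[action] += (r - estimates[action]) / counts[action]  # incremental mean

print({a: round(v, 3) for a, v in estimates.items()})
```

An epsilon of 0.1 means roughly one decision in ten probes an alternative; whether that is acceptable depends entirely on the stakes of the decisions being made.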
Regulation and Monitoring. Poorly designed regulations may be harmful to the individuals they’re intended to protect as well as being costly to implement for firms. That means it’s important to specify the precise way in which compliance will be monitored. How should recommendation systems be designed to provide users with more control? Could the regulation of algorithms lead to firms abandoning algorithms in favor of less inspectable forms of decision-making?
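As one concrete illustration of what precisely specified compliance monitoring could look like, a monitor might compute a fairness metric such as the demographic parity gap on logged decisions and flag breaches of an agreed tolerance. The metric choice, threshold, groups and records below are hypothetical; nothing here is a claim about what any regulator actually requires.

```python
# Hypothetical compliance-monitoring sketch: compute the demographic parity gap
# (difference in positive-outcome rates across groups) on logged decisions and
# flag it if it exceeds a tolerance. Threshold, groups and records are invented.
records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 0},
    {"group": "A", "outcome": 1}, {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0},
]

def positive_rate(group):
    outcomes = [r["outcome"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

gap = abs(positive_rate("A") - positive_rate("B"))
THRESHOLD = 0.2  # illustrative tolerance only
print(f"demographic parity gap: {gap:.2f}", "FLAG" if gap > THRESHOLD else "ok")
```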
Educational and Workforce Implications. The study of fairness considerations as they relate to algorithmic systems is a fairly new area. It’s thus important to understand the effect of different kinds of training on how well people will interact with AI-based decisions, as well as the management and governance structure for AI-based decisions. Are managers (or judges) who have some technical training more likely to use machine-learning-based recommendations? What should software engineers learn about the ethical implications of their technologies? What’s the relationship between domain and technical expertise in thinking about these issues?
Algorithm Research. Algorithm design is a well-established area of research within computer science. At the same time, fairness questions are inherently complex and multifaceted and incredibly important to get right. How can we promote cross-field collaborations between researchers with domain expertise (moral philosophy, economics, sociology, legal scholarship) and those with technical expertise?
Read more – blogs.wsj.com