As companies increasingly apply artificial intelligence, they must address concerns about trust.
Copyright by venturebeat.com
Here are 10 practical interventions for companies to employ to ensure AI fairness. They include creating an AI fairness charter and implementing training and testing.
Data-driven technologies and artificial intelligence (AI) are powering our world today — from predicting where the next COVID-19 variant will arise, to helping us travel on the most efficient route. In many domains, the general public places a high degree of trust in the assumption that the algorithms powering these experiences are being developed fairly.
However, this trust can be easily broken. Consider, for example, recruiting software that, due to unrepresentative training data, penalizes applications containing the word “women”, or a credit-scoring system that misses real-world evidence of creditworthiness, with the result that certain groups receive lower credit limits or are denied loans.
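Failures like these can often be surfaced by simple audits of model outcomes. As an illustrative sketch only — the records, group labels, and the 0.8 threshold (the common “four-fifths rule”) are assumptions, not data from any real system — a disparate-impact check might look like:

```python
# Minimal disparate-impact audit: compare selection rates across groups.
# The records below and the 0.8 threshold (the "four-fifths rule") are
# illustrative assumptions, not data from any real system.

def selection_rate(records, group):
    """Fraction of applicants in `group` that the model selected."""
    members = [r for r in records if r["group"] == group]
    return sum(r["selected"] for r in members) / len(members)

def disparate_impact(records, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(records, protected) / selection_rate(records, reference)

# Hypothetical model outcomes for two applicant groups.
records = (
    [{"group": "A", "selected": 1}] * 60 + [{"group": "A", "selected": 0}] * 40 +
    [{"group": "B", "selected": 1}] * 30 + [{"group": "B", "selected": 0}] * 70
)

ratio = disparate_impact(records, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
if ratio < 0.8:
    print("flag: selection rates differ beyond the four-fifths rule")
```

A check like this is only a first screen — it says nothing about why the rates differ — but routinely running it before deployment is exactly the kind of testing discipline the interventions below aim to institutionalize.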
The reality is that the technology is moving faster than education and training on AI fairness. The people who train, develop, implement and market these data-driven experiences are often unaware of the second- or third-order implications of their hard work.
As part of the World Economic Forum’s Global Future Council on Artificial Intelligence for Humanity, a collective of AI practitioners, researchers and corporate advisors, we propose 10 practical interventions for companies to employ to ensure AI fairness.
1. Assign responsibility for AI education
Appoint a chief AI ethics officer (CAIO) who, along with a cross-functional ethics board (including representatives from data science, regulatory, public relations, communications and HR), is responsible for designing and implementing AI education activities. The CAIO should also serve as the “ombudsman” whom staff can reach out to with fairness concerns, and as a spokesperson to non-technical staff. Ideally, this role should report directly to the CEO for visibility and implementation.
2. Define fairness for your organization
Develop an AI fairness charter template and then ask all departments that are actively using AI to complete it in their context. This is particularly relevant for business line managers and product and service owners.
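To make this concrete, a charter template can be sketched as structured data that each department fills in for its own context. The field names below are assumptions for illustration, not a prescribed standard:

```python
# Illustrative AI fairness charter template, sketched as structured data.
# Field names are assumptions; adapt them to your organization's context.

CHARTER_TEMPLATE = {
    "department": "",                 # e.g. "Consumer Lending"
    "ai_use_cases": [],               # systems or models in active use
    "fairness_definition": "",        # what "fair" means in this context
    "protected_attributes": [],       # e.g. gender, age, postcode
    "metrics": [],                    # e.g. demographic parity, equal opportunity
    "review_cadence": "quarterly",    # how often the charter is revisited
    "escalation_contact": "CAIO",     # who to notify on fairness concerns
}

def completed(charter):
    """A charter counts as complete once the core fields are filled in."""
    return all(charter[k] for k in ("department", "ai_use_cases",
                                    "fairness_definition", "metrics"))
```

Keeping the charter machine-readable lets the ethics board automatically track which departments have defined fairness for their use cases and which have not.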
Read more: venturebeat.com