• AI is an immense opportunity for humankind and many organizations.

  • A clear AI governance framework, grounded in well-defined principles, will ensure that AI is used responsibly and to the organization's best advantage.

 

Copyright: weforum.org – “5 ways to avoid artificial intelligence bias with ‘responsible AI’”


Over the last few years, responsible AI has gone from a niche concept to a constant headline. Responsible, trustworthy AI is the subject of several documentaries, books, and conferences. The more we make responsible AI an expectation and a known commodity, the more likely we are to make it our reality and to flourish with more accessible AI. This is our shared goal through the Responsible AI Badge certification programme for senior executives. Similarly, in our podcast, In AI We Trust?, we identify the key ingredients for responsible AI from which all organizations can learn and benefit.

We work with academics, organizations, and leading thinkers to understand best practices and processes for responsible AI. Cathy O’Neil has offered insight on the hazards of misplaced reliance on AI, while Renée Cummings, founder of Urban AI, has shared thoughts on the impact of AI on our civil rights. Keith Sonderling, Commissioner of the US Equal Employment Opportunity Commission (EEOC), has shared guidance for employers on building, buying, and deploying AI in HR systems. Rep. Don Beyer (D-VA) has shared his enthusiasm for AI and the opportunities it offers for policy development.

From these and other discussions, we’ve identified five best practices that are critical to achieving responsible AI governance:

1) Establish AI Principles

The management team must be aligned around an established set of AI principles. Leaders must meet to discuss how AI is used and how that use aligns with the organization’s values. Too often, tech is relegated to the IT team. With AI we must reset this approach – work cultures must be upgraded to match new working models. In our recent podcast with Richard Benjamins, he shares how Telefónica implemented an ambitious set of AI principles that the company strives to uphold.

2) Establish an AI governance framework

Organizational values and technology are linked and must be handled together. Management’s agenda should include time for innovation heads to explain how they develop and use AI in key functions. HR heads can share where AI is used in their processes. General counsel can weigh in on potential liabilities, from damaging headlines to lawsuits. These discussions should lead to the establishment of a framework that guides future AI use and shapes the organization’s AI culture. This checklist will offer guidance on how to do so.

An increasing number of frameworks can guide management’s efforts, including the BSA Framework, the GAO Risk Framework, and the NIST AI Risk Management Framework. The EqualAI Framework provides five pillars for responsible AI governance, and the World Economic Forum produced the AI C-Suite Toolkit for AI leadership.[…]



Read more: www.weforum.org