A continuation of the virtual conference held on March 30, 2022, on setting up an AI Centre of Excellence (CoE).

 

SwissCognitive Guest Blogger: Andeed Ma, Leader of AI Risk Chapter, Cognitive Technologies, Thought-Leader, Risk and Insurance Management Association of Singapore – “AI Centre of Excellence – AI Governance series, the internal governance structures, and measures”


 

The centre of excellence has always been a powerful strategy: it brings leadership and expertise from different disciplines throughout the organization, regardless of location or business unit, into defining and developing standards, best practices, research, support, and training for a focus area – in this case, AI.

This article discusses one such discipline, AI governance – specifically, the internal governance structures and measures of a company.

Internal governance structures and measures spell out the need to adapt existing internal governance structures, or to set up new ones, so as to incorporate the risks, values, roles, and responsibilities relating to algorithmic decision-making.

What is AI Governance?

AI governance is the concept that a legislative framework should be in place to ensure that machine learning (ML) technologies are thoroughly explored and developed, with the purpose of helping mankind adopt AI systems equitably.

As you can see from the definition, this is a huge concept. In this article, therefore, we would like to guide organisations in first developing appropriate internal governance structures that give them proper oversight over how cognitive technologies are brought into their operations and/or products and services.


 

Internal Governance Structures and Measures

Internal governance mechanisms and methods help ensure that an organization’s use of AI is overseen effectively. The organization’s internal governance structures can be changed, and/or new structures can be developed as necessary. Risks associated with the use of AI, for example, may be handled through an enterprise risk management system, whilst ethical issues can be incorporated as corporate principles and monitored by ethics review boards or other structures.

Organizations may also want to think about what characteristics to include in their internal governance frameworks. When depending on a centralised governance system alone is not enough, a decentralized one might be explored to bring ethical concerns into day-to-day operational decision-making. Sponsorship, support, and engagement from senior management and the board of directors are critical to the organization’s AI governance.

Next, we will discuss two considerations that are relevant to the development of an internal governance structure.

The ethical deployment of AI requires clear roles and duties.

The right persons and/or departments should be assigned responsibility for and oversight of the different phases and activities involved in AI deployment. Consider forming a coordinating body, with appropriate knowledge and sufficient representation from throughout the organization, if necessary and appropriate.

Personnel and/or departments with internal AI governance obligations should be fully aware of their roles and responsibilities, receive sufficient training, and be given the resources and assistance they need to carry out their duties.

When allocating key roles and responsibilities, it is best to use an existing risk management framework to assess and manage the risks associated with adopting AI, including any possible negative consequences for persons (e.g. who is most vulnerable, how they are impacted, how to assess the scale of the impact, how to get feedback from those impacted, etc.).
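To make this concrete, here is a minimal sketch of how such an impact assessment might be captured as an entry in a risk register. The AIRiskRecord structure, its field names, and the example use case are illustrative assumptions, not part of any specific risk management framework:

    from dataclasses import dataclass, field

    @dataclass
    class AIRiskRecord:
        """One illustrative entry in an AI risk register (hypothetical)."""
        use_case: str                  # the AI-augmented decision being assessed
        affected_groups: list          # who is most vulnerable
        impact_description: str        # how those groups are impacted
        impact_scale: str              # e.g. "low", "medium", "high"
        feedback_channel: str          # how impacted persons can raise concerns
        owner: str                     # person or department accountable
        mitigations: list = field(default_factory=list)

    # Example: a hypothetical AI-assisted loan approval use case
    record = AIRiskRecord(
        use_case="AI-assisted loan approval",
        affected_groups=["applicants with thin credit histories"],
        impact_description="eligible applications may be wrongly declined",
        impact_scale="high",
        feedback_channel="appeals mailbox reviewed by a human officer",
        owner="Credit Risk Department",
    )
    print(record)

Keeping each register entry tied to a named owner makes the allocation of roles and responsibilities explicit rather than implied.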

With this information, one can better determine an appropriate level of human involvement in AI-augmented decision-making. The organisation can then manage the AI model training and selection process.

In managing the AI model, one must perform maintenance, monitoring, documentation, and assessment of the AI models that have been deployed, with the goal of taking corrective action where necessary. This includes examining communication channels and interactions with stakeholders in order to provide transparency and effective feedback.
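As a simple illustration of the documentation side, a deployed model’s record could track when it was last assessed so that overdue reviews are flagged automatically. The fields and the review interval below are assumptions made for this sketch, not prescribed values:

    from datetime import date, timedelta

    # Hypothetical documentation record for one deployed AI model
    model_doc = {
        "name": "churn-predictor",
        "version": "1.3.0",
        "owner": "Customer Analytics",
        "last_assessed": date(2021, 9, 15),
        "known_limitations": ["weak on customers with under 3 months of history"],
    }

    REVIEW_INTERVAL = timedelta(days=182)  # roughly six months (assumed cadence)

    def review_due(doc: dict, today: date) -> bool:
        """Flag a model whose assessment is overdue for corrective action."""
        return today - doc["last_assessed"] > REVIEW_INTERVAL

    if review_due(model_doc, date.today()):
        print(f"{model_doc['name']} v{model_doc['version']} is due for reassessment")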

Finally, ensure that the personnel who will be working with AI systems are adequately trained. Staff who work and engage directly with AI models may need to be taught to evaluate AI model outputs and decisions, as well as to detect and mitigate bias in data, where applicable and necessary.

Other employees who work with the AI system (for example, a customer relationship officer who answers customer questions about the AI system or a salesperson who makes a recommendation using an AI-enabled product) should be trained to be at least aware of and sensitive to the benefits, risks, and limitations of AI so that they can alert subject-matter experts within their organizations.

Internal controls and risk management

Several measures can be considered and put in place via a solid risk management and internal control structure that explicitly tackles the risks associated with deploying the chosen AI model.

First, take reasonable steps to verify that the datasets used for AI model training are suitable for the intended purpose, assess and manage the risks of inaccuracy or bias, and examine exceptions discovered during model training.

Hardly any dataset is completely unbiased. Organizations should work to understand how their datasets might be skewed and account for this in their data security and deployment policies; a simple sketch of such a skew check follows.
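One lightweight way to surface skew is to compare outcome rates across groups in the training data. The records, group labels, and tolerance threshold below are illustrative assumptions; this is a first-pass check, not a complete fairness audit:

    from collections import defaultdict

    # Toy training records: (group label, positive outcome?) - assumed data
    rows = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in rows:
        totals[group] += 1
        positives[group] += outcome

    rates = {g: positives[g] / totals[g] for g in totals}
    print("Positive-outcome rate by group:", rates)

    # Escalate for human review if group rates diverge beyond a set tolerance.
    THRESHOLD = 0.2  # assumed value; set per use case and policy
    if max(rates.values()) - min(rates.values()) > THRESHOLD:
        print("Warning: dataset may be skewed; escalate for review")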

Next, develop monitoring and reporting systems, as well as supporting processes, to ensure that the appropriate level of management is made aware of the deployed AI model’s performance and of any concerns.

To scale human oversight efficiently, monitoring might incorporate automated checks where appropriate, and explainability features may focus on why the AI model reports a given degree of confidence.
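For instance, an automated check could route only low-confidence predictions to a human reviewer, so that oversight effort is concentrated where risk is highest. This sketch assumes a model that reports a confidence score with each prediction; the function name and the 0.8 threshold are assumptions, not a standard:

    def handle_prediction(prediction: str, confidence: float,
                          threshold: float = 0.8) -> str:
        """Auto-accept confident predictions; queue the rest for humans."""
        if confidence >= threshold:
            return f"auto: {prediction}"
        # Low confidence: record it for the monitoring report and hand the
        # case over to a human decision-maker.
        return f"human review needed (confidence={confidence:.2f})"

    print(handle_prediction("approve", 0.93))  # accepted automatically
    print(handle_prediction("approve", 0.55))  # routed to a human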

Ensure adequate knowledge transfer whenever the key people participating in AI activities change. This will lessen the possibility of internal governance gaps arising as a result of worker turnover. One recommendation is to maintain a regular knowledge base or data room that allows this transfer to happen when a change occurs.

When there are substantial changes to the organizational structure or to the important persons engaged, the internal governance system should be reviewed and corrective steps taken.

Internal governance structures and measures should be reviewed on a regular basis to ensure that they remain relevant and effective. “Regular basis” here means every six months, or whenever an unexpected event occurs within your industry value chain, such as COVID-19.

In Conclusion

It takes a village to raise a child. In an organisation, every function is like a household, and the village is the AI Centre of Excellence, which brings leadership and expertise from different disciplines throughout the organization, regardless of location or business unit, to raise the AI.

Look out for the next article in the AI governance series, where we will determine the level of human involvement in AI-augmented decision-making.

In case you missed our “How to set up an AI Centre of Excellence” event, you can watch the recording here.


About the Author:

Andeed Ma is the President and AI Risk Leader of RIMAS (Risk and Insurance Management Association of Singapore). He has been a cloud business and risk management leader for more than 14 years in major technology companies such as ServiceNow, Ivanti, and ByteDance. He is also a lecturer at the Singapore University of Social Sciences (SUSS) teaching Hyperautomation and Artificial Intelligence. He strongly believes that cognitive technologies are here to enrich our lives, not to replace them. It is humans who need to manage the way we think, so that we can manage the way we use these technologies.