With AI rapidly evolving and taking up more room in the business landscape, it is understandable that the European Commission is eager to draft regulation to help prevent the misuse of AI, but how can we effectively govern the bots?
Copyright: https://www.globalbankingandfinance.com/
Artificial intelligence (AI) has swept across almost every industry with the purpose of automating processes, increasing efficiency and improving our personal lives and businesses.
It is widely believed that AI promises to be objective, helping us avoid human bias, opinion and ideology. However, there have been many instances where the opposite has proved true and the technology has failed to behave impartially. One of many examples comes from Amazon’s AI recruiting tool, which was found to be biased against hiring women because it largely recommended only male CVs; Amazon scrapped the technology to avoid further scrutiny.
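The kind of audit that caught Amazon out can be sketched in a few lines: compare the rate at which a model recommends candidates from each group. The figures below are made up purely for illustration, and the "four-fifths rule" threshold is one common heuristic, not a legal standard.

```python
from collections import Counter

def selection_rates(decisions):
    """Compute the recommendation rate per group from (group, recommended) pairs."""
    totals, selected = Counter(), Counter()
    for group, recommended in decisions:
        totals[group] += 1
        if recommended:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, unprivileged):
    """Ratio of selection rates; values far below 1.0 suggest bias
    (the 'four-fifths rule' heuristic flags ratios under 0.8)."""
    return rates[unprivileged] / rates[privileged]

# Hypothetical audit data: (group, was_recommended)
decisions = ([("m", True)] * 80 + [("m", False)] * 20
             + [("f", True)] * 30 + [("f", False)] * 70)
rates = selection_rates(decisions)        # m: 0.8, f: 0.3
ratio = disparate_impact(rates, privileged="m", unprivileged="f")  # 0.375
```

A ratio of 0.375 is far below 0.8, so this toy tool would be flagged for review long before deployment.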
A challenging question for a complicated process
On 19 February, at a press conference in Brussels, the European Commission took on the unenviable task of trying to regulate a technology that is constantly changing. Rules that work to govern AI one day may fail to stretch far enough a few weeks later, and could be completely irrelevant within a month of being introduced.
The need for policies is not in doubt: a KPMG study found that 80 per cent of risk professionals are not confident in the governance currently in place around AI. What concerns technology leaders, however, is that tighter regulation could stifle AI innovation and hinder its enormous potential benefits for the world.
For example, CheXNet, an AI algorithm from Stanford, can detect pneumonia in older patients from chest X-rays, but for technologies like this to work, they need creative and scientific freedom.
Although AI and its innovation hold great power to be used for good, the technology’s accelerating adoption across industries comes with numerous ethical concerns that governance needs to address.
Navigate evolving AI with forward-looking risk management
While the EU works hard to try and set policies in place, organisations should take the time to consider their own governance, risk and compliance (GRC) processes to ensure they are not caught out with their use of AI when legislation does finally arrive.
One way organisations can guard against unforeseen exposure to risk from evolving AI technology, and from the ever-changing business landscape, is to implement a governance framework around AI both within and outside the organisation. Just as internal controls and regulators in the financial services industry require companies to validate and ‘manage’ models on a regular basis, controls around AI models are already being put in place.
This reflects the proliferation of AI in enterprises and the need for organisations to monitor where models are being used for business decisions, guarding against inherent bias and against underlying datasets too weak for the models to operate accurately. Regulators are not far behind, demanding proof that the right controls are in place.
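In practice, monitoring where models drive business decisions usually starts with a model inventory. A minimal sketch of one is below; the record fields, the 180-day validation interval and the model names are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ModelRecord:
    name: str
    owner: str
    used_for_decisions: bool              # does this model drive business decisions?
    last_validated: Optional[date] = None

class ModelInventory:
    """Minimal inventory: register AI models and flag those overdue for validation."""
    def __init__(self, validation_interval_days=180):
        self.interval = timedelta(days=validation_interval_days)
        self.records = []

    def register(self, record):
        self.records.append(record)

    def overdue(self, today):
        """Decision-driving models never validated, or validated too long ago."""
        return [r.name for r in self.records
                if r.used_for_decisions
                and (r.last_validated is None
                     or today - r.last_validated > self.interval)]

inventory = ModelInventory(validation_interval_days=180)
inventory.register(ModelRecord("credit-risk-scorer", "risk team", True, None))
inventory.register(ModelRecord("churn-predictor", "marketing", True, date(2024, 3, 1)))
inventory.register(ModelRecord("research-sandbox", "lab", False, None))
overdue_models = inventory.overdue(today=date(2024, 4, 1))  # ["credit-risk-scorer"]
```

The point is not the code itself but the discipline: every decision-driving model is on a list, owned by someone, and periodically revalidated.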
The other element is to set up a forward-looking risk management program around AI. Such a program improves an organisation’s ability to manage both existing and emerging risks by analysing past trends, predicting future scenarios – both positive and negative – and proactively preparing for and monitoring them.
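The “analyse past trends, predict future scenarios” step can be made concrete with even very simple tooling. The sketch below fits a least-squares slope to each risk’s incident history and flags risks projected to breach a threshold; the risk names, counts and threshold are invented for illustration.

```python
def linear_trend(series):
    """Least-squares slope of a time series (index = time step)."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def flag_emerging_risks(incident_history, horizon=3, threshold=10):
    """Project each risk's incident count `horizon` periods ahead and
    flag those expected to exceed `threshold`."""
    flags = {}
    for risk, series in incident_history.items():
        slope = linear_trend(series)
        projected = series[-1] + slope * horizon
        flags[risk] = projected > threshold
    return flags

# Hypothetical quarterly incident counts per risk category
history = {
    "model_drift":  [1, 2, 4, 5, 7],   # rising: slope 1.5 per quarter
    "data_quality": [3, 3, 2, 3, 3],   # flat
}
flags = flag_emerging_risks(history)
```

Here the rising drift trend projects past the threshold within three quarters and gets flagged, while the flat data-quality series does not – a crude but genuinely forward-looking signal.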
Once an organisation is set up in this way, it should be better prepared for any new regulation that may be introduced to govern AI and stop its misuse or bias.