Machine Learning is a revolutionary technology that has started to fundamentally disrupt the way companies operate. It is therefore not surprising that businesses are rushing to integrate it into their processes, as reported by the McKinsey & Company Global AI Survey.
Copyright by www.weforum.org
At the same time, only a tiny percentage of these companies have managed to deploy Artificial Intelligence (AI) at scale – a goal made harder by regular reports of unethical uses of AI and growing public concern about its potential adverse impacts.
These difficulties are likely to persist until companies engage in a fundamental change to become ‘responsible AI’-driven organizations. In practice, this requires addressing the governance challenges associated with AI, and then designing and executing a sound strategy. To help companies deploy responsible AI at scale, we offer a five-step guide.
AI creates unique governance challenges
We live in a world filled with uncertainty, and the ability to build learning systems that can cope with this basic reality to a certain extent, by discovering patterns and relationships in data without being explicitly programmed, represents an immense opportunity.
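To make that idea concrete, here is a minimal, purely illustrative sketch (not from the article; the data and the underlying 3x + 2 rule are assumptions): the relationship between input and output is never written into the program, it is recovered from the data alone.

```python
# Illustrative sketch: a model "discovers" a relationship from noisy data
# without that relationship ever being explicitly programmed.

import numpy as np

rng = np.random.default_rng(1)

# Noisy observations generated by a rule the fitting step never sees: y ~ 3x + 2.
x = rng.uniform(0, 10, size=200)
y = 3 * x + 2 + rng.normal(scale=1.0, size=200)

# A least-squares fit recovers the slope and intercept from the data alone.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned relationship: y = {slope:.2f} * x + {intercept:.2f}")
```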
However, there remain reasons for concern, because Machine Learning also creates unique governance challenges. For one thing, these systems are heavily reliant on data, which incentivises companies to collect personal data on a massive scale, creating potential privacy issues in the process.
Second, collecting, cleaning and processing high-quality data is a costly and complex task. Consequently, business datasets often don’t accurately reflect the “real world”. Even when they do, they may simply replicate or exacerbate human bias and lead to discriminatory outcomes. That’s because the feedback loops in AI systems are likely to amplify any bias already embedded in the data.
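As a purely illustrative sketch (not from the article; the “skill” score, the postcode-like proxy feature and all coefficients are assumptions), the snippet below shows one way this can happen: a model trained on historically biased hiring labels reproduces the disparity even when the protected attribute is excluded, because a correlated proxy feature carries it.

```python
# Hypothetical sketch: labels encode a historical hiring bias against group B.
# Even with the group column dropped, a correlated proxy lets the model
# reproduce the disparity. Requires numpy and scikit-learn.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (0 = group A, 1 = group B) and a genuinely relevant skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Proxy feature correlated with group membership (e.g. a postcode-like signal).
proxy = group + rng.normal(scale=0.5, size=n)

# Historically biased labels: identical skill, but group B was hired less often.
logits = 1.5 * skill - 1.2 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train without the protected attribute -- only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model's positive-prediction rates still differ by group.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {'AB'[g]}: predicted hire rate = {pred[group == g].mean():.1%}")
```

The exact numbers are arbitrary; the point is only that dropping the sensitive column is not enough once the labels themselves encode the bias, and if such predictions were later fed back as training labels, the skew would simply be carried forward.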
Lastly, the power of massive computational systems with limitless storage capabilities eliminates the option of anonymity, as detailed personal behavioural information is used to enable individual targeting at a previously unseen level of granularity. […]
Read more: www.weforum.org