Artificial intelligence can be transformative for businesses, but increased use of the technology inevitably leads to a higher rate of AI system failures.
Companies should therefore invest first in responsible AI, which also yields benefits by accelerating innovation and helping them become more competitive.
A prioritization approach that begins with low-effort, high-impact areas of responsible AI can minimize risk while maximizing return on investment.
Copyright: weforum.org – “Scaling AI: Here’s why you should first invest in responsible AI”
Artificial intelligence (AI) systems are creating transformative business value for companies that integrate them into their operations, products and services, and corporate strategy. But, unfortunately, increased use of the technology inevitably leads to a higher rate of AI system failures.
If left unmitigated, these failures could harm individuals and society and will diminish the returns on AI investments. So, what can organizations do?
Responsible AI – the practice of designing, developing and deploying AI with good intentions and a fair impact on society – is not just the morally right thing to do; it also yields tangible benefits, accelerating innovation and helping organizations use AI to become more competitive.
Yet current approaches emerge predominantly from AI-native firms and may fail to meet the needs of non-AI-native organizations, which have different contexts, constraints, cultures and levels of AI maturity.
Companies need purpose-fit, tailored approaches to achieve sustained success in practice. As more organizations begin their AI journeys, they are at the cusp of choosing whether to invest scarce resources in scaling their AI efforts or to channel those investments into scaling responsible AI beforehand.
We believe they should do the latter to achieve sustained success and better returns on investment.
Scaling AI leads to more failures
Modern AI systems are inherently stochastic – they make use of randomness – and black-box in nature, meaning the system's inputs and internal operations are not visible to the user or other interested parties.
In addition, they are built on top of complex technical pipelines that ingest, transform and feed data into downstream machine learning models to achieve business goals such as automated content moderation or enabling self-driving. […]
Read more: www.weforum.org