With all promising innovations, we are intrigued by their potential to revolutionize our lives and society, while simultaneously maintaining a healthy skepticism: they are different, they create significant change, or they simply seem too good to be true.

Author & Guest Blogger: Ashley Casovan, Executive Director of the Responsible AI Institute

Artificial Intelligence (AI) belongs in a class with technologies that have reshaped our society, such as the steam engine, telecommunications, and the Internet. And while these tools have demonstrated the capacity to advance life as we know it, even decades later we are living with their consequences, intended or unintended, some of which have had devastating impacts on our environment, privacy, and democracy.

While many consider AI a new technology, the concept of Artificial Intelligence dates back to 1956, when John McCarthy hosted the first conference on the subject. Since then, there has been a long history of both real and imagined horrors that could result from the advancement of AI.

AI is quickly becoming an integral part of our daily lives, with varying degrees of visibility. From the music we are recommended, the predictive text we write, the information we see and are prompted to share, and the heating of our homes and businesses, to the services and credit we can access and the determination of whether or not we get hired, AI is there.

While we wholeheartedly believe in the numerous benefits of these systems, and of many more not listed, we have been keeping track of cases where AI has gone wrong. As such, we are part of a burgeoning community of policymakers, technologists, engineers, investors, researchers, and business leaders who are raising questions about what type of oversight is needed to ensure that, as these technologies advance, human rights are respected, individuals and organizations are safe, and the planet is left better off, not worse off.

The Organisation for Economic Co-operation and Development (OECD) has compiled a comprehensive collection of these calls to action through its AI Policy Observatory. While many governments and regulators have started to take stronger stands on the oversight of AI technologies, the most notable is the European Union's (EU) recent Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (the Artificial Intelligence Act). The EU is not alone: in the same week this proposal was released, the US Federal Trade Commission warned in a blog post, "Aiming for truth, fairness, and equity in your company's use of AI": "Hold yourself accountable – or be ready for the FTC to do it for you."

In response and alignment with these demands, the Responsible AI Institute (RAI), in partnership with the World Economic Forum (WEF)'s Global AI Action Alliance and the Schwartz Reisman Institute for Technology and Society at the University of Toronto (SRI), has been leading the development of a community-driven, measurable, and independent certification mark for the responsible and trusted use of AI systems.

Our objective

We believe that, for the most part, those who build, acquire, and use AI systems do so with the best intentions in mind. However, even good intentions can lead to serious consequences. Whether we are prioritizing speed of innovation over ethics, responding directly to market and customer demands without considering the global ramifications, or lacking clear guidance, regulations, and standards to follow, we are all susceptible to forging ahead and applying emerging research and science with a limited understanding of the potential consequences of AI systems.


We don't want AI oversight to be guesswork; we think it should be as simple and straightforward as possible. This is why we are dedicated to building a comprehensive, independent certification program grounded in human-rights-respecting principles: one that is practical and measurable, internationally recognized, and built with trust and transparency.

Lastly, as noble as we believe many individuals and organizations to be, we all question those who mark their own homework. If increased trust and transparency are what the public is advocating for, then an independent certification is the way to address these concerns.

We know that these are lofty objectives, but we have a concrete plan to make them possible. Most important is recognizing that AI is everywhere and means many different things, which is why we don't attempt to define it too narrowly. We think all technology should be used responsibly, but especially technologies that have the capacity to adapt and learn.

To learn more about RAI's Certification System and the work being done to facilitate it, read the full whitepaper.


About the Author:

Ashley Casovan is an engaged and innovative leader with a deep interest in advancing the public good. Having recently left a long-standing career in the public service, where she was most recently Director of Data and Digital for the Government of Canada, she has taken on the role of Executive Director of the Responsible AI Institute, a non-profit dedicated to creating practical tools to ensure the responsible use of AI.