Last Friday, the World Economic Forum (WEF) sent out a press announcement about an artificial intelligence (AI) toolkit for corporate boards.

The release pointed to a section of their website titled Empowering AI Leadership. For some reason, at this writing, there is no obvious link to the toolkit, but the press team was quick to provide the link. It is well laid out in linked web pages, and some well-produced PDFs are available for download. For purposes of this article, I have only looked at the overview and the ethics section, so here are my initial impressions.

As would be expected from an organization focused on a select few in the world, the AI toolkit is high level. Boards of directors have broad but shallow oversight of companies, so there is no need to focus on details. Still, I wish a bit more accuracy had been involved.

The description of AI is very nice. There are many definitions and, as I’ve repeatedly pointed out, the meanings of AI and of machine learning (ML) continue to change and to differ from person to person. The problem in the setup is one that many people miss about ML. In the introductory module, the WEF claims, “The breakthrough came in recent years, when computer scientists adopted a practical way to build systems that can learn.” They support that with a link to an article that gets it wrong. The breakthrough mentioned in the article, the level of accuracy in an ML system, is driven far more by a non-AI breakthrough than by any specific ML model.

When we studied AI in the 1980s, deep learning was known and models existed. What we could not do was run them. Hardware and operating systems did not support the needed algorithms or the data volumes required to train them. Cloud computing is the real AI breakthrough. The ability to link multiple processors and computers into an efficient, larger virtual machine is what has powered the last decade’s growth of AI.

I was also amused by the list of “core AI techniques,” where deep learning and neural networks are listed at the same level as the learning methods used to train them. I’m only amused, not upset, because boards don’t need to know the difference at the start, but it’s important to introduce them to the terms. I did glance at the glossary, and it’s a very nice set of high-level definitions of some of these terms, so interested board members can get some clarification.
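To make the blurred distinction concrete, here is a minimal, purely illustrative sketch (not from the toolkit, and deliberately toy-sized): the network is the model, while gradient descent is just one of several learning methods that could train that same model.

```python
# Illustrative only: separating the *model* from the *learning method*.

def neuron(w, x):
    # The model: a one-weight "network" that maps an input to an output.
    return w * x

def train(data, lr=0.1, steps=100):
    # The learning method: plain gradient descent on squared error.
    # A different method (e.g., an evolutionary search) could train
    # the very same model, which is why the two are not the same level.
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (neuron(w, x) - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# Learn the relationship y = 2x from two examples.
w = train([(1.0, 2.0), (2.0, 4.0)])
```

After training, `w` converges to roughly 2.0; the point of the sketch is only that swapping out `train` leaves `neuron` untouched.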

On a quick tangent, their definition of bias is well done: a few short sentences reference both the real-world issue of bias and the danger of bias within an AI system.

Ethics are an important component (in theory…) of the management of companies. The WEF points out at the beginning of that module that “technology companies, professional associations, government agencies, NGOs and academic groups have already developed many AI codes of ethics and professional conduct.” The statement reminds me of the saying that standards are so important that everyone wants one of their own. The module then goes on to discuss a few of the issues with the different standards. […]
