Alphabet, the parent company of Google, is a leading tech company that has decided to invest substantial resources and funding in artificial intelligence. So much so that the WSJ recently reported that AI is central to Google’s future.

Copyright by SwissCognitive

Not surprisingly, Google has been dealing with various challenges concerning its top AI executives and researchers. Activist shareholders are also showing interest in this area: recently, there has been a rise in shareholder proposals calling on boards to ensure proper AI governance.

We live in a new technological era, one in which board members have to be prepared for situations where artificial intelligence (AI) affects, and perhaps even disrupts, their deliberations with regard to both shareholders and stakeholder groups.

To illustrate, the latest controversy involving Google and ethical AI concerns the departure of one of its leading stars, Stanford Professor Timnit Gebru, who left (or was let go) after her research exposed vulnerabilities in the company’s approach to AI and to its diversity efforts.

To explore how tech companies like Google should incorporate AI into their decision-making processes, I decided to interview my long-time friend Sergio Alberto Gramitto Ricci. Sergio and I have known each other since I was a doctoral student at Cornell Law School; he is a Lecturer at Monash University in Australia and previously held a Visiting Assistant Professor of Law position at Cornell Law School. I reached out to him to discuss his research on the use of artificial intelligence in the boardroom.

Looking to the future, we considered forms of artificial intelligence that can develop their own “views” on given matters, along with three different scenarios: AI that assists directors’ decision-making, hybrid boards, and the outright replacement of directors.

The following are some questions I asked him about his latest Cornell Law Review article, Artificial Agents in Corporate Boardrooms.

Q: What would you answer to those who think that artificial intelligence could improve how corporations are run?

A: With respect to accountability, human directors’ decision-making should not be replaced or influenced by the decision-making of unaccountable artificial intelligence. I warn that using artificial intelligence to make decisions in boardrooms could lead to a void of accountability. The use of artificial intelligence in boardrooms could raise other issues as well. For example, I caution about the risk that directors could be captured by the artificial intelligence’s “views.”

Q: Do you expect directors to feel uncomfortable disregarding the “views” of AI, or deviating from those views, because they are provided by uber-intelligent machines?

A: I believe there is an ever-present risk that directors would prefer to avoid disagreeing with uber-intelligent machines.

Q: Can AI machines serve as directors? 

A: At least in Delaware, this would not be workable, because Delaware corporate law, arguably the corporate law the rest of the world looks to, requires directors to be natural persons, human beings. […]
