Interview with Ryan Carrier, Executive Director of ForHumanity, by Luca Flurin Brunner, Managing Director of CognitiveValley.

ForHumanity is a non-profit founded to examine the specific and existential downside risks associated with AI and Automation, developing both corporate-specific solutions and policy-level programs. It is dedicated to raising awareness of, and examining, the risks that accompany the growth of AI & Automation.

CognitiveValley is a non-profit, Switzerland-based foundation which aims to position the country as the most trusted and vibrant cognitive technology environment globally. CognitiveValley wants to ensure a reliable, healthy and trustworthy AI ecosystem and strives to create new perspectives.


Brunner: First of all, who are you? And what is «AI audit»? 

Carrier: The mission is simple – if we can make safe and responsible AI profitable, whilst making dangerous and irresponsible AI costly, then all of humanity wins! 

Independent Audit of AI Systems is a system and process designed to replicate the inherent societal trust generated by the largely private and self-regulated financial audit and accounting processes, which have resulted in the GAAP and IAS accounting standards. Back in 1973, when the industry came together and normalized its rules and procedures, this collaboration was both necessary and sufficient to allow regulators and lawmakers around the world to mandate GAAP or IAS accounting as law for their corporations. We seek a similar success.

Independent Audit of AI Systems will be coordinated by an unaffiliated, unbiased, good-faith team who will operate a global, transparent, inclusive and iterative process of sourcing and filtering «best practices» into audit rules. The team will facilitate this dialogue in the areas of ethics, bias, privacy, trust and cybersecurity. Audit rules will have the following characteristics: implementable, binary (compliant/non-compliant), iterated, consensus-driven, measurable, unambiguous and open-source. An Independent Board of Governance will approve all rules. All will be welcome to participate in the dialogue, from companies to academics to individuals to regulators and lawmakers.
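
To make these characteristics concrete, the sketch below shows one hypothetical way an audit rule could be represented in code, with a binary compliant/non-compliant check. The names, fields and example rule are illustrative assumptions, not a ForHumanity specification.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch only: one way an audit rule with the characteristics
# above could be represented. The names, fields and the example rule are
# illustrative assumptions, not a published ForHumanity schema.

@dataclass
class AuditRule:
    rule_id: str                    # stable identifier, so iterations are traceable
    category: str                   # ethics, bias, privacy, trust or cybersecurity
    text: str                       # the unambiguous, open-source rule statement
    version: int                    # incremented each time the rule is iterated
    check: Callable[[dict], bool]   # binary test: compliant / non-compliant

def evaluate(rule: AuditRule, evidence: dict) -> str:
    """Apply one rule to audit evidence and return a binary verdict."""
    return "compliant" if rule.check(evidence) else "non-compliant"

# Example: a measurable, binary privacy rule (hypothetical).
retention_rule = AuditRule(
    rule_id="PRIV-001",
    category="privacy",
    text="Personal data is deleted within 30 days of a verified request.",
    version=1,
    check=lambda e: e.get("max_deletion_days", float("inf")) <= 30,
)

print(evaluate(retention_rule, {"max_deletion_days": 12}))  # -> compliant
```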

More than four years ago, I looked at the ubiquitous advance of AI and Automation and thought: «If we could mitigate the downside risks associated with these technologies, then we could achieve a better result for all of humanity.» I formed my non-profit, ForHumanity, to tackle that mission.


«The mission is simple – if we can make safe and responsible AI profitable, whilst making dangerous and irresponsible AI costly, then all of humanity wins!» (Carrier) 

 

Brunner: Why this approach?  How do you know this will work out? 

Carrier: When I considered the idea of trust in our existing systems, I reflected on where that trust comes from. The numbers (financial accounting) upon which so much of the corporate world runs are the epitome of embedded and comprehensive trust. We never question whether the numbers are right. We might quibble with nuances around the edges of the numbers, but these are minutiae and genuinely considered gray areas; 99.9% of the numbers are accepted without question. We want to achieve the same level of trust in our autonomous systems. With that mission of embedded trust in mind, it became about understanding how that trust came about: what are the elements of financial accounting which create this infrastructure of trust?

First, we have a set of rules which were agreed upon by the whole industry (GAAP or IAS). Second, we have a robust accreditation and education process: we know our auditors are well trained, tested and certified to do their jobs. Third, we have a liability regime, whereby auditors have little or no upside to falsely attest to compliance and everything to lose. When an auditor attests that a company is compliant with the audit rules, the world accepts that the company is compliant. That is the true value of an Independent Audit.

«We have done this before and that is why I felt certain it would work.» (Carrier) 

 

Carrier: We have done this before and that is why I felt certain it would work.  In 1973, the accounting industry came together to agree on a set of rules.  The world wasn’t quite so global then, so we got two systems (GAAP and IAS), but this comprehensive set of rules allowed lawmakers all around the world to insist that their companies comply with these audit standards.  In 1975, the SEC mandated annual audits for all publicly traded companies.  I believe we can have the same success, but this time with a single set of global rules, because we do have the capability now to reach all corners of the globe, efficiently and comprehensively. 

Brunner: What areas does Independent Audit of AI Systems cover? 

Carrier: We cover 5 categories: Privacy, Bias, Trust, Ethics and Cybersecurity. 

All of these areas overlap with one another, and we will work together with the community, allowing it to define the boundaries of each category.

«We cover 5 categories: Privacy, Bias, Trust, Ethics and Cybersecurity.» (Carrier) 

 

Brunner: What is your intrinsic motivation to create a global standard for AI audit? And why did you name your initiative «ForHumanity» in the first place?

Carrier: ForHumanity, the name, highlights the group of people it is intended to serve. It is an ambitious «client base». But the point is that these technologies should serve all members of the human race, without excluding, minimizing or disadvantaging any of them. I wanted the name to reflect who we serve.

Corporations exist beyond nation-state borders. I wanted a standard that reflected that global nature. Further, it is obvious that the best ideas and best elements of trust, safety, control and ethics are not specific to one nation or region. Thus, I felt we needed to draw in experts from all around the world!

«Thus, I felt we needed to draw in experts from all around the world!» (Carrier) 

 

Brunner: Why do we need a certification for «safe AI algorithms»? 

Carrier: Independent Audit of AI Systems is more than a certification. Auditors, people who work on behalf of the audited client, will investigate, based on the transparent rules, whether a company has complied with each rule or not. When they attest that a company has complied with all of the audit rules, they attach their own liability to the claim that their client has met and complied with those rules. This audit is conducted annually.

When these audits are done across most companies around the world, we will create an infrastructure of trust that underlies all of our autonomous systems: one we can count on to have basic ethics, trust, privacy by design, tools to avoid bias and robust cybersecurity embedded in its design and implementation.
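
As a rough illustration of the annual attestation described above, the sketch below models an attestation record as data, showing what it means for an auditor to attach their name (and liability) to a verdict. The record type, fields and example values are assumptions for illustration only, not a published ForHumanity format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative sketch only: a minimal record of an annual attestation.
# The type, fields and example values are assumptions, not a published
# ForHumanity format.

@dataclass
class Attestation:
    company: str
    auditor: str              # the accredited auditor taking on liability
    audit_year: int
    rules_checked: List[str]  # identifiers of every audit rule evaluated
    verdict: str              # "compliant" only if every rule passed
    signed_on: date = field(default_factory=date.today)

attestation = Attestation(
    company="ExampleCorp",
    auditor="Jane Doe, accredited auditor",
    audit_year=2024,
    rules_checked=["PRIV-001", "BIAS-004", "CYBER-017"],  # hypothetical IDs
    verdict="compliant",
)
print(f"{attestation.auditor} attests that {attestation.company} "
      f"is {attestation.verdict} for {attestation.audit_year}.")
```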

Brunner: How do you make sure that the rulemaking process will be inclusive? 

Carrier: Great question. Current initiatives are either closed-door, one firm’s opinion, or voluntary, using a «hope people come» approach. We are trying for something greater. Our funded model assigns a dedicated, full-time staff person to curate the dialogue. They are tasked with holding public office hours across global time zones at fixed times throughout the week. The dialogue will be held in Slack, so someone can join the conversation and get caught up on it whenever it is convenient for them.

The curator also has the mission to explore their field and draw in experts from all around the world: from academia and the corporate, legal and legislative fields, regardless of location. All will be welcomed and encouraged to contribute. Diversity of input and perspective is a key criterion on which curators will be held accountable.

Brunner: Which best practices in terms of standards can we learn from?

Carrier: Audit rules have seven hallmarks: they are measurable, implementable, binary (compliant/non-compliant), iterated, unambiguous, open-source and consensus-driven.

The rules will be crafted out of the dialogue with the community. Much has been debated in these areas already, but the results are more often high-minded ideals or even platitudes.

The mission of Independent Audit of AI Systems is to find ways to shape these ideas into rules which meet the criteria listed above. But this isn’t done on our own. Once rules are suggested, they are put back out to the community, which will opine on the specific rules: ask questions, clarify, and build case studies in support of or in dissent against the rules, until the curator feels that consensus is reached. On a quarterly basis, the curator will prepare rules for the Board of Governance. A rule will be presented using the consensus arguments, but the Board will also see the tracked dissent, so that it has an informed summary of the discussion and can adjudicate in an informed manner.

«Once rules are suggested, they are put back out to the community, which will opine on the specific rules.» (Carrier) 

 

Brunner: How are rules made? 

Carrier: 

  • We start with an open-source dialogue on the broad topic. 
  • The community or the curator will try to derive a rule based upon the criteria we discussed above. 
  • Once a rule is crafted, it is proposed back to the community. Anyone can propose a rule, but it must meet the criteria above. 
  • The community will then iterate on the rule, helping to ensure that it is well-defined and understood, sometimes using case studies or specific definitions, so that the audit rule matches the criteria listed above. 
  • Once the rule is settled upon, it will be debated specifically, and both the consensus arguments and the dissent will be tracked. A core audit silo head will then propose the rule to the Governing Board. 
  • The Governing Board will adjudicate the rule based on the consensus and dissenting arguments. They then vote to approve the rule or not. 
  • The curator must then continue to mitigate dissent as the process is iterated again. Rules may be tweaked or new rules created (a sketch of this lifecycle follows below).
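
Read as a process, the lifecycle above resembles a simple state machine. The sketch below paraphrases the steps from the interview; the state names and transitions are illustrative assumptions, not an official process definition.

```python
from enum import Enum, auto

# Illustrative sketch: the rule lifecycle above as a simple state machine.
# State names and transitions paraphrase the interview and are assumptions,
# not an official ForHumanity process definition.

class RuleState(Enum):
    OPEN_DIALOGUE = auto()   # open-source dialogue on the broad topic
    PROPOSED = auto()        # a drafted rule meeting the criteria
    ITERATING = auto()       # community refines with case studies, definitions
    UNDER_REVIEW = auto()    # silo head presents the rule to the Governing Board
    APPROVED = auto()
    REJECTED = auto()

TRANSITIONS = {
    RuleState.OPEN_DIALOGUE: {RuleState.PROPOSED},
    RuleState.PROPOSED: {RuleState.ITERATING},
    RuleState.ITERATING: {RuleState.UNDER_REVIEW},
    RuleState.UNDER_REVIEW: {RuleState.APPROVED, RuleState.REJECTED},
    RuleState.REJECTED: {RuleState.ITERATING},   # dissent mitigated, iterate again
    RuleState.APPROVED: {RuleState.ITERATING},   # approved rules may still be tweaked
}

def advance(state: RuleState, target: RuleState) -> RuleState:
    """Move a rule to the next state, enforcing the allowed transitions."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state.name} to {target.name}")
    return target

# Walk one rule through the happy path.
state = RuleState.OPEN_DIALOGUE
for nxt in (RuleState.PROPOSED, RuleState.ITERATING,
            RuleState.UNDER_REVIEW, RuleState.APPROVED):
    state = advance(state, nxt)
print(state.name)  # -> APPROVED
```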

Brunner: Where do you see the role of endorsing organizations (e.g. the CognitiveValley) in the development of the audit standards? 

Carrier: There are numerous roles that organizations such as yours can play, for example: 

  • Endorser: someone who simply spreads the word in a positive way about the efficacy and progress of the project, encouraging others to lend their expertise, time and energy to it. 
  • Partner: we will have some firms that want to be directly involved in making sure Independent Audit of AI Systems is a success. These partners may help curate dialogues, develop business or lend their expertise to the overall initiative, making sure it achieves maximum market adoption.
  • Contributor: anyone can join in the discussions and lend their opinion or expertise to a dialogue. They can provide insight, definitions, use cases and even dissent if they think the process is headed in the wrong direction. 
  • Auditor: we expect that as the rules are built, many will want to become auditors. Our intent is to partner with existing accreditation agents in the field of audit. Testing and certification will be created so that the world may know who is qualified to be an Independent Audit of AI Systems auditor. 
  • Service Provider: many firms will want to create products designed to help implement the rules of Independent Audit of AI Systems. These products will have to comply with the rules and be audited themselves to ensure compliance. 

«We have no interest in reinventing the wheel. Where others have gone before, we want to leverage their hard work.» (Carrier) 

 

Brunner: Which other standards do you want to incorporate? 

Carrier: We have no interest in reinventing the wheel. Where others have gone before, we want to leverage their hard work.  We will look to collaborate and even partner with organizations such as: 

  • IEEE on ECPAIS and EAD 
  • NIST/MITRE on cybersecurity guidelines 
  • GDPR on privacy 
  • ISO/UL on trust, control, safety

«Ideally, 10 years from now we will have a robust and rigorous process with wide inclusion and a comprehensive set of well-understood rules.» (Carrier) 

 

Brunner: What is your vision for AI audit in 2030? 

Carrier: Ideally, 10 years from now we will have a robust and rigorous process with wide inclusion and a comprehensive set of well-understood rules. We would have an entirely new industry filled with expert auditors all around the world who systematically ensure and attest that companies continue to comply with best-practice rules year in and year out.

We would have achieved a level of trust similar to the way we treat audited financials: implicit trust, where we don’t even question the quality of the numbers; we assume they are correct. I hope we achieve the same level of trust in our autonomous systems: a trust that knows we have embedded procedures and checks designed to ensure that bias is avoided; confidence that our privacy is maintained consistent with the laws and rights afforded each citizen; assurance that ethics and ethical decision-making are current practice for all designers and developers.

Where companies choose to consider the ethical impacts of their design, implementation and production. Where these systems have the hallmarks of trust: accessibility, explainability, control and safety. And where all of those elements of good design are wrapped in a robust wall of protection from outside malfeasance.