Artificial Intelligence (AI) is already transforming the way businesses operate and the way we work. Industries from financial services to education are using AI to gain competitive advantage, drive efficiencies, improve service delivery, create or enhance products, and solve long-standing issues within their industries.

While AI can take many forms, including chatbots and driverless cars, at its core it refers to technology that mimics characteristics of human intelligence in performing tasks. This includes machine learning software that learns from the data it receives and keeps refining its outputs over time.
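To make that idea concrete, here is a minimal, hypothetical sketch (not drawn from any particular product) of a system that adjusts its output each time it receives new data:

```python
class OnlineModel:
    """A toy learning system: each new data point nudges its internal
    estimate, so its output keeps refining over time."""

    def __init__(self, learning_rate=0.2):
        self.estimate = 0.0
        self.learning_rate = learning_rate

    def predict(self):
        return self.estimate

    def learn(self, observation):
        # Move the estimate a fraction of the way toward the new data point.
        self.estimate += self.learning_rate * (observation - self.estimate)


model = OnlineModel()
for reading in [10.0, 12.0, 11.0, 13.0]:  # incoming stream of data
    model.learn(reading)
    print(f"after {reading}: prediction = {model.predict():.2f}")
```

Each call to learn() pulls the prediction closer to the observed data, which is the essence of a model 'refining its outputs over time'.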

But the increased adoption of AI raises some critical business and legal questions, including how we assess business risk and liability. After all, what happens when AI is involved in an accident or causes damage? Who should be responsible, or how should responsibility be shared, among the following parties: the business, the manufacturer, the retailer, the AI software developer, the consumer or the person controlling it, the different data providers, or someone else (such as the AI system itself)?

How AI will affect liability

One of the challenges of assessing business liability and risk when AI fails is that courts assess liability and damages based on prior legal precedent. This means that AI-based systems will inevitably be judged by applying legal concepts and assumptions rooted in human involvement and outdated case law. For example, common law claims of negligence turn on traditional human concepts of fault, knowledge, causation, reasonableness and foreseeability. So what are the issues when human judgment is replaced with an AI program?

The first challenge is that one of the key benefits of AI lies in predictive analytics: the ability of certain AI software to analyse vast quantities of data and make predictions based on that data. However, the sheer scale of the data sets that AI can process, compared to what humans can, means that arguably far more things are now ‘reasonably foreseeable’ to the growing number of companies that use AI to make strategic decisions. This potentially means a dramatic increase in the scope of what a company may be liable for. […]

The second challenge is that the appropriate standard of ‘reasonable foreseeability’ will become even harder for humans to judge because of the nature of AI. In the past, ‘reasonable foreseeability’ was judged against the objective standard of the ‘reasonable (human) person’. However, the increasing use of AI promises to shift this standard to what a company in the same industry, with similar experience, expertise and technology, would reasonably foresee. This raises two problems. Firstly, predictive analytics relies heavily on the breadth and size of data sets too large for humans to process, which makes it difficult for humans to judge what is ‘reasonably foreseeable’ for a given piece of AI software. Secondly, AI predictions depend entirely on the data the software receives. This means that, unless two companies obtain exactly the same AI software and feed it exactly the same data, even competitors with the same AI technology and markets may be acting on wildly different information.
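That second problem can be seen in a simple hypothetical sketch: two companies run identical forecasting code, but because each feeds it a different proprietary data set (the figures below are invented for illustration), the predictions they act on diverge.

```python
import statistics

def train_forecaster(history):
    """Stand-in for a real predictive-analytics pipeline: 'train' a model
    that forecasts the mean of whatever data it was given."""
    forecast = statistics.mean(history)
    return lambda: forecast

# Identical software, different data feeds (illustrative numbers only).
company_a = train_forecaster([100, 102, 98, 101])   # stable demand data
company_b = train_forecaster([100, 140, 60, 180])   # volatile demand data

print(f"Company A acts on a forecast of {company_a():.1f}")
print(f"Company B acts on a forecast of {company_b():.1f}")
```

Even though the code is identical, what each company could ‘reasonably foresee’ depends entirely on the data it happened to hold.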