
New legal frameworks are needed

Reuters news agency reported on 16 February 2017 that "European lawmakers called […] for EU-wide legislation to regulate the rise of robots, including an ethical framework for their development and deployment and the establishment of liability for the actions of robots including self-driving cars." The question of determining "liability" for decisions made by robots or artificial intelligence is an interesting and important subject as the implementation of this technology increases in industry and starts to more directly impact our day-to-day lives. Indeed, as the application of Artificial Intelligence and machine learning technology grows, we are likely to witness how it changes the nature of work, businesses, industries and society. And yet, although it has the power to disrupt and drive greater efficiencies, AI has its obstacles: the issue of "who is liable when something goes awry" being one of them.

Common standards

Like many stakeholders in industry, Members of the European Parliament (MEPs) are trying to tackle this liability question. Many of them are calling for new laws on artificial intelligence and robotics to address the legal and insurance liability issues. They also want researchers to adopt common ethical standards in order to "respect human dignity." Therese Comodini Cachia MEP, of the Maltese centre-right Nationalist Party and Parliament's rapporteur for robotics, believes that "for the purposes of the liability for damages caused by robots, the various legal possibilities need to be explored […] How will any legal solution affect the development of robotics, those who own them and victims of the damage?" To answer these questions she has invited the European Commission to consider the impact of different solutions, to ensure that any unintentional harm caused by robots can be properly addressed.

Route to adoption

The idea of legislation that brings more transparency to the liability issue is a much-needed step. In essence, the European Parliament's report does a good job, and I agree that the liability issues need to be tackled. The report also argues that robots and artificial intelligence should not replace humans but complement them. This is something we agree with, and an approach we are already starting to see across sectors. One example is in the legal sector, with the law firms Linklaters and Kemp Little. Andrew Joint, Partner and part of the Commercial Technology Team at Kemp Little, explains: "I see artificial intelligence being used more and more in the people-heavy, junior lawyer tasks where a huge amount of the fees are being spent on each transaction. I don't see it replacing the lawyer as the trusted advisor because I have the ability to think around problems and solutions." Through our own work with a global law firm, we have seen how technology can be used to streamline work that requires many man-hours, whilst also informing lawyers about specific details relating to projects prior to discussions with clients […]