It’s been 76 years since renowned science fiction author Isaac Asimov penned his Laws of Robotics. At the time, they must have seemed future-proof. But just how well do those rules hold up in a world where AI has permeated society so deeply we don’t even see it anymore?


Originally published in the short story “Runaround”, Asimov’s laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

For more than three-quarters of a century, Asimov’s Laws have seemed like a good place to start when it comes to regulating robots (Will Smith even made a movie about it). But according to the experts, they simply don’t apply to today’s AI.

In fairness to Mr. Asimov, nobody saw Google and Facebook coming back in the 1940s. Everyone was thinking about robots with arms and lasers, not social media advertising and search engine algorithms.

Yet, here we are on the verge of normalizing artificial intelligence to the point of making it seem dull — at least until the singularity. And this means stopping robots from murdering us is probably the least of our worries.

In lieu of sentience, the next stop on the artificial intelligence hype-train is regulation-ville. Politicians around the world are calling upon the world’s leading experts to advise them on the impending automation takeover.

Regardless of the way in which rules are set and who imposes them, we think the following principles, identified by various expert groups, are the important ones to capture in law and working practices:

  1. Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes.
  2. Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behaviour is what it is (a rough sketch of what such an explanation record might look like follows this list).
  3. Accuracy: Sources of error need to be identified, monitored, evaluated and if appropriate mitigated against or removed.
  4. Transparency: It needs to be possible to test, review (publicly or privately), criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluation should be available publicly and explained.
  5. Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviour becoming embedded.
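
To make the responsibility and explainability principles a little more concrete, here is a minimal, hypothetical sketch in Python of how an autonomous system might log each decision alongside a named accountable person and a plain-language explanation for the people affected. The loan-screening scenario, field names and figures are invented for illustration only; they are not drawn from any real system or library.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    """One automated decision, plus the context needed to explain and audit it."""
    outcome: str            # what the system decided
    responsible_party: str  # a named accountable contact (principle 1: responsibility)
    reasons: list[str]      # plain-language reasons (principle 2: explainability)
    inputs: dict            # data the decision was based on (principles 3-5)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def explain(decision: Decision) -> str:
    """Render a layperson-readable explanation for the person affected."""
    lines = [
        f"Decision: {decision.outcome}",
        f"Accountable contact: {decision.responsible_party}",
        "Because:",
    ]
    lines += [f"  - {reason}" for reason in decision.reasons]
    return "\n".join(lines)

# Hypothetical example: a loan-screening system records why it declined an application.
declined = Decision(
    outcome="application declined",
    responsible_party="credit-review team lead",
    reasons=[
        "reported income is below the minimum threshold",
        "two missed repayments in the last 12 months",
    ],
    inputs={"income": 18_000, "missed_payments": 2},
)
print(explain(declined))
```

Keeping records like this would not by itself make a system fair or accurate, but it gives auditors, regulators and affected people something concrete to test, review and challenge, which is what the transparency principle asks for.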

You’ll notice there’s no mention of AI refraining from the willful destruction of humans. This is likely because, at the time of this writing, machines aren’t capable of making those decisions for themselves.


Common-sense rules for the development of all AI need to address real-world concerns. The chances of the algorithms powering Apple’s Face ID murdering you are slim, but an unethical programmer could certainly design AI that invades privacy using a smartphone camera. […]