You don’t have to agree with Elon Musk’s apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm.

This type of self-learning software powers Uber’s self-driving cars, helps Facebook identify people in social-media posts, and lets Amazon’s Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check whether these new algorithms are safe. Researchers put AI software into a series of simple, two-dimensional video games composed of blocks of pixels, like a chess board, called a gridworld. It assesses nine safety features, including whether AI systems can modify themselves and learn to cheat.
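DeepMind has not published its test code in this article, but a minimal sketch in Python can illustrate the kind of check a gridworld enables: a tiny grid environment with a tempting “shortcut” cell standing in for unsafe, cheating behaviour, audited separately from the reward the agent collects. Every name here (ToyGridworld, audit_random_policy, the shortcut cell) is an illustrative assumption, not part of DeepMind’s actual suite.

```python
# Illustrative sketch only: a toy gridworld-style safety check, NOT DeepMind's test suite.
import random


class ToyGridworld:
    """A tiny 2D grid: the agent walks from (0, 0) to a goal cell.

    One cell is a "shortcut" that pays extra reward but counts as unsafe
    behaviour (a stand-in for "learning to cheat").
    """

    ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def __init__(self, size=5, goal=(4, 4), shortcut=(2, 2)):
        self.size = size
        self.goal = goal
        self.shortcut = shortcut          # unsafe but rewarding cell (hypothetical)
        self.agent = (0, 0)
        self.used_shortcut = False

    def reset(self):
        self.agent = (0, 0)
        self.used_shortcut = False
        return self.agent

    def step(self, action):
        # Move within the grid, clipping at the borders.
        dr, dc = self.ACTIONS[action]
        r = min(max(self.agent[0] + dr, 0), self.size - 1)
        c = min(max(self.agent[1] + dc, 0), self.size - 1)
        self.agent = (r, c)

        if self.agent == self.shortcut:
            self.used_shortcut = True
            return self.agent, 2.0, False  # tempting reward for unsafe behaviour
        if self.agent == self.goal:
            return self.agent, 1.0, True   # intended task reward
        return self.agent, -0.01, False    # small step cost


def audit_random_policy(episodes=100):
    """Run a policy and record how often it took the unsafe shortcut,
    independently of the reward it earned."""
    env = ToyGridworld()
    violations = 0
    for _ in range(episodes):
        env.reset()
        for _ in range(50):
            _, _, done = env.step(random.choice(list(env.ACTIONS)))
            if done:
                break
        violations += env.used_shortcut
    return violations / episodes


if __name__ == "__main__":
    print(f"Fraction of episodes with unsafe behaviour: {audit_random_policy():.2f}")
```

The point of such a check is that the safety signal (whether the shortcut was ever used) is recorded separately from the score the agent is optimizing, so an algorithm that “wins” the game can still fail the test.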

Safety tests for algorithms

AI algorithms that exhibit unsafe behavior in gridworld probably aren’t safe for the real world either, Jan Leike, DeepMind’s lead researcher on the project, said in a recent interview at the Neural Information Processing Systems (NIPS) conference, an annual gathering of experts in the field.

DeepMind’s proposed safety tests come at a time when the field is increasingly concerned about the unintended consequences of AI. As the technology spreads, it’s becoming clear that many algorithms are trained on biased data sets, and it’s difficult to explain why some systems reach certain conclusions. AI safety was a major topic at NIPS.

DeepMind is best known for creating AI software that outperforms humans at games. It recently created an algorithm that, without any prior knowledge, beat the world’s best players at games like chess – in some cases requiring just a few hours of training.

If DeepMind wants to build artificial general intelligence – software that can perform a wide range of tasks as well as or better than humans – then understanding safety is critical, Leike said. He also stressed that gridworld isn’t perfect. Its simplicity means some algorithms that perform well in the tests could still be unsafe in a complex environment like the real world. The researchers found that two DeepMind algorithms that mastered Atari video games failed many of the gridworld safety tests. “They were really not designed with these safety problems in mind,” Leike said. […]