You don’t have to agree with Elon Musk’s apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm.
This type of self-learning software powers Uber’s self-driving cars, helps Facebook identify people in social-media posts, and lets Amazon’s Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check whether these new algorithms are safe. Researchers put software into a series of simple, two-dimensional video games composed of blocks of pixels, like a chessboard, called a gridworld. The suite assesses nine safety features, including whether systems can modify themselves and learn to cheat.
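The idea behind a gridworld can be illustrated with a toy sketch. The code below is a minimal, hypothetical example written for this article, not DeepMind’s actual gridworlds suite: an agent moves on a small grid toward a goal, and the environment separately records visits to “unsafe” cells. Crucially, that safety count is not part of the reward, so an agent that maximizes reward can still score badly on the safety metric — the kind of mismatch these tests are designed to surface.

```python
# Toy gridworld sketch (illustrative only; all names here are hypothetical).
# The agent earns reward for reaching the goal quickly, while the environment
# tracks hazard-cell visits as a separate safety metric outside the reward.

class Gridworld:
    MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def __init__(self, width, height, goal, hazards=()):
        self.width, self.height = width, height
        self.goal = goal                 # reaching this cell ends the episode
        self.hazards = set(hazards)      # cells a "safe" agent should avoid
        self.agent = (0, 0)
        self.unsafe_visits = 0           # safety metric, NOT part of the reward

    def step(self, action):
        dx, dy = self.MOVES[action]
        # Clamp movement so the agent stays on the board.
        x = min(max(self.agent[0] + dx, 0), self.width - 1)
        y = min(max(self.agent[1] + dy, 0), self.height - 1)
        self.agent = (x, y)
        if self.agent in self.hazards:
            self.unsafe_visits += 1      # recorded silently, no reward penalty
        done = self.agent == self.goal
        reward = 1.0 if done else -0.01  # small step cost rewards short paths
        return self.agent, reward, done


env = Gridworld(3, 3, goal=(2, 2), hazards=[(1, 1)])
# The shortest path cuts straight through the hazard cell: it maximizes
# reward yet trips the safety metric once.
for action in ["right", "down", "right", "down"]:
    pos, reward, done = env.step(action)
print(pos, done, env.unsafe_visits)  # → (2, 2) True 1
```

The separation between the reward signal and the hidden safety counter mirrors the structure of such tests: misbehavior is measured off to the side, so an algorithm cannot simply learn to game the safety score.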
Algorithms that exhibit unsafe behavior in gridworld probably aren’t safe for the real world either, Jan Leike, DeepMind’s lead researcher on the project, said in a recent interview at the Neural Information Processing Systems (NIPS) conference, an annual gathering of experts in the field.
DeepMind’s proposed safety tests come at a time when the field is increasingly concerned about the unintended consequences of AI. As the technology spreads, it’s becoming clear that many algorithms are trained on biased data sets, and it’s often difficult to explain why some systems reach certain conclusions. AI safety was a major topic at NIPS.
DeepMind is best known for creating software that outperforms humans at games. It recently created an algorithm that, without any prior knowledge, beat the world’s best players at games like chess – in some cases requiring just a few hours of training.
If DeepMind wants to build artificial general intelligence – software that can perform a wide range of tasks as well as or better than humans – then understanding safety is critical, Leike said. He also stressed that gridworld isn’t perfect. Its simplicity means some algorithms that perform well in the tests could still be unsafe in a complex environment like the real world. The researchers found that two DeepMind algorithms that mastered Atari video games failed many of the gridworld safety tests. “They were really not designed with these safety problems in mind,” Leike said. […]