“Deep learning” is the new buzzword in the field of artificial intelligence. As Natalie Wolchover reported in a recent Quanta Magazine article, “‘deep neural networks’ have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries.” With such successes, one would expect deep learning to be a revolutionary new technique. But one would be quite wrong.
The basis of deep learning stretches back more than half a century to the dawn of AI and the creation of two things: artificial neural networks, which have layers of connected neuron-like units, and the “backpropagation algorithm,” a technique for applying error corrections to the strengths of the connections between neurons on different layers. Over the decades, the popularity of these two innovations has fluctuated in tandem, in response not just to advances and failures, but also to support or disparagement from major figures in the field.
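Wolchover’s one-line description of backpropagation is easier to see in code. Here is a minimal NumPy sketch of the idea: a tiny two-layer network learns XOR by propagating its output error backward through the layers and nudging each connection strength accordingly. The task, layer sizes, and learning rate are illustrative choices for this sketch, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, the classic task a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connection strengths (weights), initialized randomly.
W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0  # learning rate
for step in range(10000):
    # Forward pass: activations flow layer by layer toward the output.
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: the output error is propagated back through the
    # layers via the chain rule, giving each weight its correction.
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Apply the error corrections to the connection strengths.
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]]
```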
Backpropagation was invented in the 1960s, around the same time that Frank Rosenblatt’s “perceptron” learning algorithm called attention to the promise of artificial neural networks. Backpropagation was first applied to these networks in the 1970s, but the field suffered after Marvin Minsky and Seymour Papert’s criticism of one-layer perceptrons. It made a comeback in the 1980s and 1990s after David Rumelhart, Geoffrey Hinton and Ronald Williams once again combined the two ideas, then lost favor in the 2000s when it fell short of expectations. Finally, deep learning began conquering the world in the 2010s with the string of successes described above.
What changed? Only brute computing power, which made it possible for artificial neural networks trained with backpropagation to have far more layers than before (hence the “deep” in “deep learning”). This, in turn, allowed deep learning machines to train on massive amounts of data. It also allowed networks to be trained on a layer-by-layer basis, using a procedure first suggested by Hinton. […]