
How to Win at Deep Learning

“Deep learning” is the new buzzword in the field of artificial intelligence. As Natalie Wolchover reported in a recent Quanta Magazine article, “‘deep neural networks’ have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries.” With such successes, one would expect deep learning to be a revolutionary new technique. But one would be quite wrong.

The basis of deep learning stretches back more than half a century to the dawn of artificial intelligence and the creation of both artificial neural networks having layers of connected neuronlike units and the “back propagation algorithm” — a technique of applying error corrections to the strengths of the connections between neurons on different layers. Over the decades, the popularity of these two innovations has fluctuated in tandem, in response not just to advances and failures, but also to support or disparagement from major figures in the field.
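To make those two ideas concrete, here is a minimal sketch, in plain Python/NumPy, of a two-layer network trained with back propagation. This is not any historical implementation; the XOR task, layer sizes, learning rate and step count are illustrative assumptions. The point is only the shape of the algorithm: a forward pass through the layers of neuronlike units, then error corrections propagated backward along the same connections.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a tiny illustrative task that a one-layer perceptron famously cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connected units with random initial connection strengths
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (illustrative choice)
for step in range(5000):
    # Forward pass: each layer computes on its input and hands the result on
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error from the output layer back to earlier layers
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Apply error corrections to the connection strengths
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [0, 1, 1, 0]
```

With one hidden layer and back propagation, the network learns XOR, which is precisely the kind of problem that lay beyond a single layer of perceptron units.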

Back propagation was invented in the 1960s, around the same time that Frank Rosenblatt’s “perceptron” algorithm called attention to the promise of artificial neural networks. Back propagation was first applied to these networks in the 1970s, but the field suffered after Marvin Minsky and Seymour Papert’s criticism of one-layer perceptrons. It made a comeback in the 1980s and 1990s after David Rumelhart, Geoffrey Hinton and Ronald Williams once again combined the two ideas, then lost favor in the 2000s when it fell short of expectations. Finally, deep learning began conquering the world in the 2010s with the string of successes described above.

What changed? Only brute computing power, which made it possible for back-propagation-using artificial neural networks to have far more layers than before (hence the “deep” in “deep learning”). This, in turn, allowed machines to train on massive amounts of data. It also allowed networks to be trained on a layer-by-layer basis, using a procedure first suggested by Hinton. […]
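As an illustration of that layer-by-layer idea: Hinton’s procedure was built on restricted Boltzmann machines, but the same greedy scheme can be sketched with simple tied-weight autoencoders, each layer trained on its own before the next is stacked on top of it. Everything below (the stand-in data, layer sizes and hyperparameters) is an illustrative assumption, not the original recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(data, n_hidden, lr=0.5, steps=2000):
    """Train one layer, in isolation, to reconstruct its own input."""
    n_in = data.shape[1]
    W = rng.normal(scale=0.5, size=(n_in, n_hidden))
    b = np.zeros((1, n_hidden))  # hidden bias
    c = np.zeros((1, n_in))      # reconstruction bias
    for _ in range(steps):
        h = sigmoid(data @ W + b)      # encode
        recon = sigmoid(h @ W.T + c)   # decode with tied weights
        d_out = (recon - data) * recon * (1 - recon)
        d_h = (d_out @ W) * h * (1 - h)
        # W appears in both encode and decode, so its correction has two parts
        W -= lr * (data.T @ d_h + (h.T @ d_out).T)
        b -= lr * d_h.sum(axis=0, keepdims=True)
        c -= lr * d_out.sum(axis=0, keepdims=True)
    return W, b

# Greedy layer-by-layer training: fit layer 1 on the raw data, freeze it,
# fit layer 2 on layer 1's outputs, and so on up the stack.
X = rng.random((64, 16))     # stand-in data (illustrative)
layers, data = [], X
for n_hidden in (12, 8, 4):  # layer sizes are illustrative
    W, b = train_autoencoder(data, n_hidden)
    layers.append((W, b))
    data = sigmoid(data @ W + b)  # this layer's output is the next layer's input
```

Because each layer is trained only against its own input, the error signal never has to travel through the whole deep stack at once, which is what made deep networks trainable before end-to-end back propagation at scale became practical.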

