New Theory Cracks Open the Black Box of Deep Neural Networks

Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well. No underlying principle has guided the design of these systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either).

Like a brain, a deep neural network has layers of neurons—artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data—the pixels of a photo of a dog, for instance—up through the layers to neurons associated with the right high-level concepts, such as “dog.” After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” Experts wonder what it is about deep learning that enables generalization—and to what extent brains apprehend reality in the same way.
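To make the "strengthening and weakening of connections" concrete, here is a minimal sketch (not any model from the article): a tiny two-layer network trained by gradient descent on the XOR function, a classic toy problem that cannot be solved without a hidden layer. All names and sizes here are illustrative choices, not taken from the research described.

```python
# A minimal sketch: a tiny two-layer network trained on XOR, showing how
# "connections are strengthened or weakened" during learning.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-bit inputs; the label is XOR of the two bits.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "connections": weight matrices between layers, started at random.
W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2):
    h = sigmoid(X @ W1)             # each layer computes on its input...
    return h, sigmoid(h @ W2)       # ...and passes the result upward

def loss(p):
    # Cross-entropy between the network's outputs p and the labels y.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

initial_loss = loss(forward(W1, W2)[1])
lr = 1.0
for _ in range(10_000):
    h, p = forward(W1, W2)
    # Backpropagation: measure how each connection contributed to the
    # error, then strengthen or weaken it accordingly.
    g_out = (p - y) / len(X)        # gradient at the output layer
    g_W2 = h.T @ g_out
    g_h = (g_out @ W2.T) * h * (1 - h)
    g_W1 = X.T @ g_h
    W2 -= lr * g_W2
    W1 -= lr * g_W1

final_loss = loss(forward(W1, W2)[1])
```

After training, the outputs for the four inputs typically land near [0, 1, 1, 0]; the point is only that nothing beyond these local weight adjustments produces that behavior.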

New Theory, New Insight

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
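In the 1999 formulation, the bottleneck is an explicit optimization: find a compressed representation $T$ of the input $X$ that keeps as much information as possible about the target $Y$. In standard notation, with $I(\cdot\,;\cdot)$ denoting mutual information and $\beta$ a trade-off parameter, the objective is to choose the mapping $p(t \mid x)$ that minimizes

```latex
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
```

A small $\beta$ favors aggressive compression (a narrow bottleneck), while a large $\beta$ favors retaining predictive detail about $Y$.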

Tishby’s findings have the community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said. […]
