Over the past several years, the electronics industry has made tremendous strides toward creating artificial intelligence of the kind Alan Turing imagined in the 1940s.
The convergence of three trends has enabled a renaissance in software neural network modeling techniques commonly referred to as "deep learning," or "DL": algorithmic advances in multilayer neural networks, the evolution of PC graphics processing units (GPUs) into massively parallel processing accelerators, and the availability of massive data sets fueled by the Internet and widely deployed sensors. Neural networks are simplified abstract models of the human brain, typically organized as multiple layers of many nodes. Each layer receives input, carries out simple computations on it, and passes the result to the next layer; the final layer produces the answer to the problem at hand. The data sets involved are often described as Big Data: collections so large that humans cannot sift through them in a timely manner, yet with the help of algorithms it is usually possible to find patterns in them that remain hidden from human analysts.
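The layer-by-layer computation described above can be sketched in a few lines of NumPy. This is a minimal illustrative example, not any particular production network; the layer sizes and random weights are assumptions for the sake of the sketch.

```python
import numpy as np

def layer(x, weights, bias):
    # Each layer multiplies its input by a weight matrix, adds a bias,
    # and applies a simple nonlinearity before passing the result on.
    return np.maximum(0.0, x @ weights + bias)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                           # input vector (hypothetical features)
w1, b1 = rng.standard_normal((4, 8)), np.zeros(8)    # hidden layer parameters
w2, b2 = rng.standard_normal((8, 3)), np.zeros(3)    # output layer parameters

hidden = layer(x, w1, b1)        # first layer's simple computation
logits = hidden @ w2 + b2        # final layer produces the "answer"
answer = int(np.argmax(logits))  # e.g., the index of the predicted class
```

Training a real network consists of adjusting the weight matrices, over many examples, so that the final layer's answer matches the desired one.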
In addition, the evolution of 3D graphics shader pipelines into general-purpose compute accelerators has drastically reduced the time required to train models. Training time for a wide range of applications has dropped from months to days, and in some cases to hours or even minutes.
These solutions have enabled new applications ranging from computational science to voice-based digital assistants such as Siri. However, as far as we have come in such a short period of time, we still have much further to go to realize the true benefits of AI.
The Eyes Have It
AI often is compared to the human brain, because our brain is one of the most complex neural networks on the planet. However, we don't completely understand how the human brain functions, and medical researchers are still studying what many of the major structures in our brains actually do and how they do it. AI researchers started out by modeling the neural networks in the human eye. They were early adopters of GPUs to accelerate deep learning, so it is no surprise that many of the early applications of AI are in vision systems.
As we learn more about how our brains work, that new knowledge will drive even more model complexity. For example, researchers are still exploring the impact of numerical precision on training and inference tasks and have arrived at widely divergent views, ranging from 64- to 128-bit training precision at the high end to 8-, 4-, 2- and even 1-bit precision in some low-end inference cases. "Good enough" precision turns out to be context-driven and is therefore highly application dependent. This rapid advancement in knowledge and technology has no end in sight. […]
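The precision trade-off above can be made concrete with a toy experiment: quantize a set of full-precision weights to progressively fewer bits and measure the error introduced. This is a deliberately crude uniform post-training quantizer, sketched as an assumption for illustration rather than any specific framework's method.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal(1000).astype(np.float32)  # full-precision weights

def quantize(w, bits):
    # Map values onto a uniform grid of 2**bits levels spanning the
    # observed range, then map back to floats.
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels
    return np.round((w - lo) / scale) * scale + lo

for bits in (8, 4, 2, 1):
    err = np.abs(weights - quantize(weights, bits)).mean()
    print(f"{bits}-bit mean absolute error: {err:.4f}")
```

The error grows as the bit width shrinks; whether a given error is "good enough" depends on how much accuracy the downstream task can tolerate, which is exactly why acceptable precision is application dependent.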