The electronics industry over the past several years has made tremendous strides in creating artificial intelligence in a manner imagined by Alan Turing in the 1940s.
copyright by www.ecommercetimes.com
The convergence of algorithmic advances in multilayer neural networks, the evolution of PC graphics processing units as massively parallel processing accelerators, and the availability of massive data sets fueled by the Internet and widely deployed sensors — big data — has enabled a renaissance in software neural network modeling techniques commonly referred to as “deep learning,” or “DL.”
In addition, the evolution of 3D graphics shader pipelines into general-purpose compute accelerators drastically reduced the time required to train DL models. Training time for applications as diverse as image recognition and natural language processing has been reduced from months to days — and in some cases, hours or even minutes.
These solutions have enabled new AI applications ranging from computational science to voice-based digital assistants like Alexa and Siri. However, as far as we have come in such a short period of time, we still have much further to go to realize the true benefits of AI.
The Eyes Have It
AI is often compared to the human brain, because our brain is one of the most complex neural networks on our planet. However, we don’t completely understand how the human brain functions, and medical researchers are still studying what many of the major structures in our brains actually do and how they do it. AI researchers started out by modeling the neural networks in human eyes. They were early adopters of GPUs to accelerate DL workloads, so it is no surprise that many of the early applications of DL are in vision systems.
As we learn more about how our brains work, that new knowledge will drive even more DL model complexity. For example, DL researchers are still exploring the impact of numerical precision on training and inference tasks and have arrived at widely divergent views, ranging from 64- to 128-bit training precision at the high end to 8-, 4-, 2- and even 1-bit precision in some low-end inference cases. “Good enough” precision turns out to be context-driven and is therefore highly application dependent. This rapid advancement in knowledge and technology has no end in sight. […]
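To make the precision trade-off concrete, here is a minimal sketch (using NumPy rather than any particular DL framework) that symmetrically quantizes a float32 weight tensor to a chosen bit width and measures the reconstruction error. The function names, the toy weight data, and the 1-bit handling are illustrative assumptions, not code from the article.

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int):
    """Symmetrically quantize float32 weights to signed integer codes of the given bit width.

    Returns the integer codes and the scale needed to dequantize them.
    (Sketch only; real frameworks add per-channel scales, calibration, etc.)
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit, 7 for 4-bit, 1 for 2-bit
    scale = np.abs(weights).max() / qmax  # map the largest magnitude onto qmax
    codes = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
    return codes, scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float32 weights from the integer codes."""
    return codes.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.05, size=10_000).astype(np.float32)  # toy weight tensor

    for bits in (8, 4, 2, 1):
        if bits == 1:
            # 1-bit (binary) case: keep only the sign, scaled by the mean magnitude.
            scale = float(np.abs(weights).mean())
            approx = np.sign(weights) * scale
        else:
            codes, scale = quantize_symmetric(weights, bits)
            approx = dequantize(codes, scale)
        err = np.abs(weights - approx).mean()
        print(f"{bits}-bit: mean absolute reconstruction error = {err:.6f}")
```

Running the sketch shows the reconstruction error shrinking as the bit width grows; whether 8-, 4-, 2- or 1-bit codes are “good enough” depends on how sensitive a given model and task are to that error, which is exactly the application dependence the article describes.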
read more – copyright by www.ecommercetimes.com