Until now, most firms have been using the Graphics Processing Unit (GPU) architecture, originally developed for video games by firms such as Nvidia, to build out their Artificial Intelligence (AI) programmes. The GPU is far better suited to the parallel number-crunching that voluminous data demands than the humble Central Processing Unit (CPU) at the heart of most computers that you and I are familiar with.
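To make that difference concrete, here is a minimal, hypothetical sketch in Python with JAX (not from the original column) contrasting element-by-element, CPU-style work with the single data-parallel operation a GPU is designed to spread across thousands of cores:

```python
# A minimal sketch: the same computation expressed serially (CPU-style)
# and as one vectorized operation (the form GPUs accelerate).
import jax
import jax.numpy as jnp

x = jnp.arange(1_000_000, dtype=jnp.float32)

# CPU-style: one element at a time (illustrative only; very slow).
def serial_scale(values):
    out = []
    for v in values[:10]:  # truncated to ten items for illustration
        out.append(v * 2.0)
    return out

# GPU-style: one data-parallel operation over the whole array.
# jax.jit compiles it for whatever accelerator is available
# (CPU, GPU, or TPU), so the same code scales across cores.
parallel_scale = jax.jit(lambda values: values * 2.0)

print(serial_scale(x))         # element by element
print(parallel_scale(x)[:10])  # the whole array at once
```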
A couple of weeks ago, this column wrote about a new hardware chip design for AI, and referenced a start-up firm called AlphaICs, which counts the renowned Vinod Dham among its founders. AlphaICs is trying to redefine the type of chip used for AI applications by designing one of a new class of processors called Tensor Processing Units (TPUs), which allow many more pieces of data to be processed simultaneously on a single chip.
Data-hungry AI programmes need to crunch through enormous data stores in order to “learn” continuously, and the hope is that this new class of TPU chips, themselves an extension of GPUs, will be able to handle the vast amount of data flying in from the many devices that connect to the Internet.
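As an illustration of what “tensor processing” means in practice, the sketch below (an assumption-laden example, not AlphaICs’ or Google’s actual code) uses JAX to apply one weight matrix to a whole batch of inputs in a single batched operation, the kind of workload these chips’ matrix units are built around:

```python
# A minimal sketch of batched tensor math. jnp.einsum expresses a whole
# batch of matrix multiplies as one operation that accelerator hardware
# can schedule across its matrix units.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
batch = jax.random.normal(k1, (64, 128, 256))   # 64 inputs at once
weights = jax.random.normal(k2, (256, 512))     # one shared weight matrix

@jax.jit
def layer(x, w):
    # One dense layer applied to every item in the batch simultaneously.
    return jnp.einsum("bij,jk->bik", x, w)

out = layer(batch, weights)
print(out.shape)  # (64, 128, 512)
```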
Not just data, but processing power
The realization that the war in AI is not just about data, but also about the ability to process it effectively through new hardware, has not been lost on the tech giants. Microsoft, Amazon, Google and Facebook are huge buyers of hardware, and each has experimented with start-ups such as AlphaICs to see whether a new class of chip will be required to handle AI tasks.
Facebook has said in the past that it might try to design new types of chips for its own use. Google realized some years ago that without these advanced chips, it would need to significantly expand its already humongous computer farms. It has hired a number of engineers to design its own TPUs to process the ever-increasing amount of data coming at it from Android, its mobile phone operating system. Google also rents these chips out to its cloud customers.
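For a flavour of what renting these chips looks like from the customer’s side, here is a minimal sketch, assuming a Cloud TPU virtual machine with JAX installed; `jax.devices()` reports whatever accelerators are attached, and the same code falls back to CPU on an ordinary machine:

```python
# A minimal sketch, assuming it runs on a rented Cloud TPU VM with JAX
# installed. On hardware without a TPU, jax.devices() simply lists CPU
# devices instead, and the code still runs.
import jax
import jax.numpy as jnp

devices = jax.devices()
print(devices)  # e.g. a list of TPU cores on a Cloud TPU VM

# Place an array on the first available device and run a compute-heavy
# operation there; the code is unchanged across CPU, GPU, and TPU.
x = jax.device_put(jnp.ones((1024, 1024)), devices[0])
y = jnp.dot(x, x)
print(float(y[0, 0]))  # 1024.0: each entry is a sum of 1024 ones
```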
These large firms have meanwhile also been using new chip architectures from more traditional CPU makers such as Intel, built around Field Programmable Gate Arrays (FPGAs).
Just this past week, Google gave previews at its developer conference of what its Google Assistant can do when powered by even more AI capability. Its new programme, called Duplex, can believably mimic a human being while making automated phone calls to complete mundane tasks such as scheduling appointments at a spa or making reservations at a restaurant. […]
read more – copyright by www.livemint.com