Supercomputers usually fill entire rooms. But the one on the fifth floor of an office building in the centre of Bristol fits in an average-sized drawer. Its 16 processors crunch more than 1,600 teraflops, a measure of computer performance.
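A teraflop is a trillion floating-point operations per second. As a quick sanity check on those figures, here is the arithmetic in Python (the totals are the article's; the per-chip breakdown is simple division):

```python
# Arithmetic check on the figures above: 1,600 teraflops across 16 processors.
TOTAL_TERAFLOPS = 1_600
NUM_PROCESSORS = 16

per_chip_teraflops = TOTAL_TERAFLOPS / NUM_PROCESSORS  # 100 teraflops per chip
total_ops_per_second = TOTAL_TERAFLOPS * 1e12          # 1 teraflop = 1e12 FLOP/s

print(f"{per_chip_teraflops:.0f} teraflops per processor")
print(f"{total_ops_per_second:.1e} floating-point operations per second in total")
```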
Such performance puts the machine among the world’s 100 fastest, at least when running certain artificial-intelligence (AI) applications, such as recognising speech and images. The computer’s processors, developed by Graphcore, a startup, are tangible proof that AI has made chipmaking exciting again. After decades in which big firms such as America’s Intel and Britain’s ARM ruled the semiconductor industry, the insatiable demand for computing generated by AI has created an opening for newcomers. And it may even be big enough to allow some startups to establish themselves as big, independent firms.
New Street, a research firm, estimates that the market for AI chips could reach $30bn by 2022. That would exceed the $22bn of revenue that Intel is expected to earn this year from selling processors for server computers. It could swell further, argue the authors of a recent report by UBS, an investment bank. AI processors, they believe, will create their own demand; they allow firms to develop cleverer services and devices, which will collect even more data, generating a need for even brainier chips.
To understand what is going on, it helps to make a short detour into zoology. Broadly speaking, the world of processors is populated by two kinds of animal, explains Andrew Feldman, chief executive of Cerebras, an American competitor to Graphcore. One sort of chip is like the hyena: a generalist, designed to tackle all kinds of computing problems, much as hyenas eat all kinds of prey. The other is like the cheetah: a specialist that does one thing very well, such as hunting a certain kind of gazelle.
For much of computing history, hyenas called “central processing units” (CPUs) have dominated the chip savannah. Becoming ever more powerful according to Moore’s law, the rule of thumb that the performance of processors doubles every 18 months, they were able to gobble up computing tasks, or “workloads”, in the jargon. This is largely why Intel, for instance, became the world’s biggest chipmaker in the early 1990s and stayed that way for decades.
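Strictly, Moore’s original observation concerned transistor counts doubling roughly every two years; the 18-month performance doubling is a popular corollary. A rough illustration in Python of what that rule of thumb compounds to (the ten-year horizon is an arbitrary example, not a figure from the article):

```python
# Illustrative compounding of "performance doubles every 18 months".
def growth_factor(years: float, doubling_months: float = 18.0) -> float:
    """Multiple by which performance grows over `years`, given a doubling period."""
    return 2 ** (years * 12 / doubling_months)

# Over a decade, an 18-month doubling compounds to roughly a hundredfold gain.
print(f"Growth over 10 years: {growth_factor(10):.0f}x")  # ~102x
```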
But in recent years the world of number-crunching has changed radically. Moore’s law has started to peter out because making ever-denser chips has hit physical limits. More importantly, cloud computing has made it extremely cheap to amass huge amounts of data. Now more and more firms want to turn this asset into money with the help of AI, which means distilling data to create offerings such as recognising faces, translating speech or predicting when machinery will break down.
Such trends have altered the chip-design habitat. First to benefit were “graphics processing units” (GPUs), a kind of hyena mainly made by Nvidia. Originally developed to speed up the graphics in video games, they turn out to be good at digesting reams of data, a similar computational problem. But because they are insufficiently specialised, GPUs have been hitting the buffers, too. Demand for “compute”, as geeks call processing power, from the largest AI projects has been doubling every 3.5 months since 2012, according to OpenAI, a non-profit research organisation. “Hardware has become the bottleneck,” says Nigel Toon, the chief executive of Graphcore. […]
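Mr Toon’s point becomes clear when the two growth rates quoted in this article are set side by side: an 18-month doubling for chip performance against a 3.5-month doubling for AI compute demand. A rough comparison in Python (illustrative arithmetic, not a forecast):

```python
# Compare the two doubling periods quoted above (illustrative arithmetic only).
def annual_multiple(doubling_months: float) -> float:
    """How many times bigger a quantity gets in a year, given its doubling period."""
    return 2 ** (12 / doubling_months)

supply = annual_multiple(18)    # chip performance under Moore's law: ~1.6x per year
demand = annual_multiple(3.5)   # AI compute demand, per OpenAI: ~11x per year

print(f"Hardware supply growth: {supply:.1f}x per year")
print(f"AI compute demand growth: {demand:.1f}x per year")
print(f"The gap widens by roughly {demand / supply:.0f}x each year")
```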