
Prepare for Artificial Intelligence to Produce Less Wizardry


Early last year, a large European supermarket chain deployed artificial intelligence to predict what customers would buy each day at different stores, helping keep shelves stocked while reducing costly spoilage of goods.

The company already used purchasing data and a simple statistical method to predict sales. With machine learning, a technique that has helped produce spectacular advances in recent years—as well as additional data including local weather, traffic conditions, and competitors’ actions—the company cut the number of errors by three-quarters.

It was precisely the kind of high-impact, cost-saving effect that people expect from artificial intelligence. But there was a huge catch: The new algorithm required so much computation that the company chose not to use it.

“They were like, ‘well, it is not worth it to us to roll it out in a big way,’ unless costs come down or the algorithms become more efficient,” says Neil Thompson, a research scientist at MIT, who is assembling a case study on the project. (He declined to name the company involved.)

The story highlights a looming problem for AI and its users, Thompson says. Progress has been both rapid and dazzling in recent years, giving us clever game-playing programs, attentive personal assistants, and cars that navigate busy roads for themselves. But such advances have hinged on throwing ever more computing resources at the problems.

In a new research paper, Thompson and colleagues argue that it is, or will soon be, impossible to increase computing power at the same rate in order to continue these advances. This could jeopardize further progress in areas including computer vision, translation, and language understanding.

AI’s appetite for computation has risen remarkably over the past decade. In 2012, at the beginning of the boom, a team at the University of Toronto created a breakthrough image-recognition algorithm using two GPUs (a specialized kind of computer chip) over five days. Fast-forward to 2019, and it took six days and roughly 1,000 special chips (each many times more powerful than the earlier GPUs) for researchers at Google and Carnegie Mellon to develop a more modern image-recognition algorithm. A translation algorithm, developed last year by a team at Google, required the rough equivalent of 12,000 specialized chips running for a week. By some estimates, it would cost up to $3 million to rent this much computer power through the cloud.
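The $3 million figure above is a back-of-the-envelope estimate, and it is easy to reproduce. The sketch below assumes a per-chip-hour cloud rental price of $1.50 (a hypothetical figure for illustration; actual accelerator pricing varies by provider and contract); the chip count and one-week duration come from the article.

```python
# Rough estimate of cloud rental cost for the translation model
# described above: 12,000 specialized chips running for one week.
chips = 12_000              # specialized chips, per the article
hours = 7 * 24              # one week of continuous running
price_per_chip_hour = 1.50  # assumed USD per chip-hour (hypothetical)

cost = chips * hours * price_per_chip_hour
print(f"Estimated rental cost: ${cost:,.0f}")  # → Estimated rental cost: $3,024,000
```

Under that assumed price, the total lands right around the "up to $3 million" the article cites; a different per-hour rate scales the result linearly.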


“Deep neural networks are very computationally expensive,” says Song Han, an assistant professor at MIT who specializes in developing more efficient forms of AI and is not an author on Thompson’s paper. “This is a critical issue.”

Han’s group has created more efficient versions of popular algorithms using novel neural network architectures and specialized chip architectures, among other things. But he says there is “still a long way to go” to make AI less compute-hungry. […]

read more – copyright by www.wired.com
