Are you stressed out about the singularity? Living in fear of the day when computers decide that humans are no longer necessary? Not to worry, say some leading experts in artificial intelligence: Research in the field might have actually hit a wall.
No doubt, artificial intelligence is everywhere. Computers assess financial news, identify viruses and even act as physics theorists, analyzing flows of fluid and heat. So-called deep-learning algorithms allow services such as Google Translate and Apple’s Siri to outperform people on many basic tasks. With big tech companies such as Google and Facebook pushing the technology further, some people believe that human-level intelligence is just around the corner.
Yet we’ve been here before. In 1970, the cognitive scientist Marvin Minsky confidently claimed that “a machine with the general intelligence of an average human being” would exist within a decade. The history of artificial intelligence is littered with episodes of wild optimism that have, ultimately, given way to disappointment and gloom — and that could happen again, as Google software engineer Francois Chollet recently warned in a popular textbook on deep learning. Research progress, Chollet notes, has been slowing for several years.
Now, psychologist Gary Marcus of New York University — formerly director of Uber’s AI labs — argues that the lack of progress isn’t surprising, as researchers are running up against a host of fundamental challenges.
One challenge Marcus identifies is making the technology more flexible. Today’s algorithms work only on a narrow range of problems: the goal must be extremely well defined and unchanging, and huge amounts of data must be available for training. Examples include translating text or recognizing faces in a photo. The algorithm has one job, and researchers supply it with the masses of perfectly organized data it needs to learn how to do that job.
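The narrow recipe described above can be sketched in a few lines of code. This is a toy illustration with made-up data, not any system mentioned in the article: a single fixed task (labeling 2-D points) learned purely from labeled examples, here with a simple nearest-neighbor rule.

```python
# Toy sketch of the "narrow" machine-learning recipe: one well-defined task,
# learned entirely from labeled examples. The data and labels are hypothetical.

def nearest_neighbor_classify(training_data, point):
    """Label `point` with the label of its closest training example."""
    def squared_distance(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(training_data, key=lambda item: squared_distance(item[0], point))
    return label

# The "masses of perfectly organized data": every example is a
# (features, label) pair for the one task the model will ever perform.
training_data = [
    ((0.1, 0.2), "cat"),
    ((0.2, 0.1), "cat"),
    ((0.9, 0.8), "dog"),
    ((0.8, 0.9), "dog"),
]

print(nearest_neighbor_classify(training_data, (0.15, 0.15)))  # -> cat
print(nearest_neighbor_classify(training_data, (0.85, 0.85)))  # -> dog
```

The point of the sketch is the shape of the setup, not the particular rule: the task never changes, and everything the system "knows" comes from the labeled pairs it was given.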
Humans regularly perform many tasks that are not so clearly delineated, where the nature of an answer, or what information might be needed to reach it, is not given in advance. Tangle up some rope in a bicycle wheel, and any five-year-old can easily work out how to extract it — not because they have trained on thousands of wheels, but because they understand the spatial relationships involved. Through abstract reasoning, people can solve problems and gain insight using almost no data at all.
Algorithms also can’t engage in what Marcus calls “open-ended inference,” which entails bringing background knowledge to bear on a question. We all know the difference between “John promised Mary to leave” and “John promised to leave Mary.” We make the distinction using information that isn’t explicitly included in either phrase. Researchers haven’t made much progress in getting computers to do the same. […]
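A small sketch makes the John-and-Mary point concrete. The two sentences contain exactly the same words, so any representation that ignores word order, such as the classic "bag of words" used in simple text models, treats them as identical, even though their meanings differ. (This is an illustrative toy, not a claim about any specific system.)

```python
# The two sentences use identical words; a bag-of-words representation,
# which discards word order, cannot tell them apart.

from collections import Counter

def bag_of_words(sentence):
    """Represent a sentence as word counts, discarding order entirely."""
    return Counter(sentence.lower().split())

a = bag_of_words("John promised Mary to leave")
b = bag_of_words("John promised to leave Mary")

print(a == b)  # -> True: indistinguishable once word order is thrown away
```

Telling the sentences apart requires exactly what the bag discards — structure — plus background knowledge about what promising to leave someone implies, which is the open-ended inference Marcus describes.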