The history of AI has been marked by repeated cycles of extreme optimism and promise followed by disillusionment and disappointment.
Copyright by bdtechtalks.com
Today’s AI systems can perform complicated tasks in a wide range of areas, such as mathematics, games, and photorealistic image generation. But some of the early goals of AI, such as housekeeper robots and self-driving cars, continue to recede as we approach them.
Part of this continued cycle of missed goals stems from incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans.
In a new paper titled “Why AI Is Harder Than We Think,” Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts. These fallacies give a false sense of confidence about how close we are to achieving artificial general intelligence (AGI): systems that can match the cognitive and general problem-solving skills of humans.
Narrow AI and general AI are not on the same scale
The kind of AI we have today can be very good at solving narrowly defined problems. These systems can outmatch humans at Go and chess, find cancerous patterns in x-ray images with remarkable accuracy, and convert audio data to text. But designing systems that solve single problems does not necessarily get us closer to solving more complicated ones. Mitchell describes the first fallacy as “Narrow intelligence is on a continuum with general intelligence.”
“If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” Mitchell writes in her paper.
For instance, today’s AI systems have come a long way on many different problems, such as translation, text generation, and question-answering in specific domains. At the same time, we have systems that can convert voice data to text in real time. Behind each of these achievements are thousands of hours of research and development (and millions of dollars spent on computing and data). But the AI community still hasn’t solved the problem of creating agents that can engage in open-ended conversations without losing coherence over long stretches. Such a system requires more than solving smaller problems; it requires common sense, one of the key unsolved challenges of AI. […]