The history of artificial intelligence has been marked by repeated cycles of extreme optimism and promise followed by disillusionment and disappointment.


Today’s AI systems can perform complicated tasks in a wide range of areas, such as mathematics, games, and photorealistic image generation. But some of the early goals of AI, like housekeeper robots and self-driving cars, continue to recede as we approach them.

This continued cycle of missed goals is partly due to incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans.

In a new paper titled “Why AI is Harder Than We Think,” Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts. These fallacies give a false sense of confidence about how close we are to achieving artificial general intelligence: AI systems that can match the cognitive and general problem-solving skills of humans.

Narrow AI and general AI are not on the same scale

The kind of AI we have today can be very good at solving narrowly defined problems. Such systems can outmatch humans at Go and chess, find cancerous patterns in X-ray images with remarkable accuracy, and convert audio data to text. But designing systems that can solve single problems does not necessarily get us closer to solving more complicated ones. Mitchell describes the first fallacy as “Narrow intelligence is on a continuum with general intelligence.”

“If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” Mitchell writes in her paper.

For instance, today’s natural language processing systems have come a long way toward solving many different tasks, such as translation, text generation, and question answering on specific topics. At the same time, we have deep learning systems that can convert voice data to text in real time. Behind each of these achievements are thousands of hours of research and development (and millions of dollars spent on computing and data). But the AI community still hasn’t solved the problem of creating agents that can engage in open-ended conversations without losing coherence over long stretches. Such a system requires more than just solving smaller problems; it requires common sense, one of the key unsolved challenges of AI. […]

Read more: bdtechtalks.com

