How far are we from artificial general intelligence? And if we ever see true Artificial General Intelligence (AGI), will it operate similarly to the human brain, or could there be a better path to building intelligent machines?
Copyright by searchenterpriseai.techtarget.com
Since the earliest days of artificial intelligence — and computing more generally — theorists have assumed that intelligent machines would think in much the same ways as humans. After all, we know of no greater cognitive power than the human brain. In many ways, it makes sense to try to replicate it if the goal is to create a high level of cognitive processing.
However, there is a debate today over the best way of reaching true general AI. In particular, recent years' advancements in deep learning — which is itself inspired by the human brain, though it diverges from the brain in some important ways — have shown developers that there may be other paths.
What is artificial general intelligence?
For many, AGI is the ultimate goal of artificial intelligence development. Since the dawn of AI in the 1950s, engineers have envisioned intelligent robots that can complete all kinds of tasks — easily switching from one job to the next. AGI would be able to learn, reason, plan, understand natural human language and exhibit common sense.
In short, AGI would be a machine capable of thinking and learning much in the same way that a human does. It would understand situational context and be able to apply things it learned while completing one task to other tasks.
What is the current state of AGI?
We’re still a long way from realizing AGI. Today’s smartest machines fail completely when asked to perform new tasks. Even young children are easily able to apply things they learn in one setting to new tasks in ways that the most complex AI-powered machines can’t.
Researchers are working on the problem. There are a host of approaches, mainly focused on deep learning, that aim to replicate some element of intelligence. Neural networks are generally considered state-of-the-art when it comes to learning correlations in sets of training data. Reinforcement learning is a powerful tool for teaching machines to independently figure out how to complete a task that has clearly prescribed rules. Generative adversarial networks allow computers to take more creative approaches to problem solving.
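To make the reinforcement learning idea concrete, here is a minimal, hypothetical sketch (not drawn from any system mentioned in the article): tabular Q-learning in a five-cell corridor, where the "clearly prescribed rules" are the corridor's transitions and a reward for reaching the goal cell.

```python
import random

# Toy environment: cells 0..4; cell 4 is the goal.
N_STATES = 5
ACTIONS = [-1, +1]            # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the environment's rules: move, clip to bounds, reward at goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(500):          # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy points right in every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

The agent is told nothing about the corridor in advance; it discovers the walk-right policy purely from trial, error and reward. That is exactly the setting where reinforcement learning shines — and, as the article notes, it does not transfer that knowledge to any other task.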
But there are few approaches that combine some or all of these techniques. This means today’s AI applications can only solve narrow tasks, and that leaves us far from artificial general intelligence.
How a more human-like approach to AGI might look
Gary Marcus, founder and CEO of Robust.ai, a company based in Palo Alto, Calif., that is trying to build a cognitive platform for a range of bots, argues that AGI will have to work more like the human mind. Speaking at the MIT Technology Review's virtual EmTech Digital conference, he said today's deep learning algorithms lack the ability to contextualize and generalize information, which are some of the biggest advantages of human-like thinking.
Marcus said he doesn’t specifically think machines need to replicate the human brain, neuron for neuron. But there are some aspects of human thought, like using symbolic representation of information to extrapolate knowledge to a broader set of problems, that would help achieve more general intelligence.
“[Deep learning] doesn’t work for reasoning or language understanding, which we desperately need right now,” Marcus said. “We can train a bunch of algorithms with labelled data, but what we need is deeper understanding.”
The reason deep learning struggles to reason or generalize is that algorithms only know what they've been shown. It takes thousands or even millions of labelled photos to train an image recognition model. And even after all that, the model is unable to perform a different task, such as natural language understanding.
In spite of its limitations, Marcus doesn't advocate moving away from deep learning. Instead, he says, developers should look for ways to combine deep learning with classical approaches to AI. These include more symbolic representations of information, like knowledge graphs. Knowledge graphs contextualize data by connecting pieces of information that are semantically related, while deep learning models layered on top can learn how people interact with that information and improve the system over time.
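As a rough illustration of why symbolic structures help with the kind of reasoning Marcus describes, here is a toy knowledge graph (a hypothetical example, not Robust.ai's system): facts are stored as subject-relation-object triples, and simple traversal answers questions that no single fact states directly.

```python
from collections import defaultdict

# Facts as (subject, relation, object) triples.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Louvre", "located_in", "Paris"),
]

# Index triples by subject for quick traversal.
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def reachable(start, target):
    """Follow relations outward from `start`; True if `target` is connected."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        frontier.extend(obj for _, obj in graph[node])
    return False
```

No triple says the Louvre is in Europe, yet `reachable("Louvre", "Europe")` returns `True` by chaining Louvre → Paris → France → Europe. That chaining of explicit facts is the kind of extrapolation a purely statistical model has no direct mechanism for.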
“We need to stop building AI for ad tech and news feeds, and start building AI that can make a real difference,” Marcus said. “To get to that place you have to build systems that have deep understanding, not just deep learning.” […]