
How far are we from artificial general intelligence?

If we ever see true artificial general intelligence (AGI), will it operate similarly to the human brain, or could there be a better path to building intelligent machines?

Copyright by searchenterpriseai.techtarget.com

Since the earliest days of AI — and computing more generally — theorists have assumed that intelligent machines would think in much the same ways as humans. After all, we know of no greater cognitive power than the human brain. In many ways, it makes sense to try to replicate it if the goal is to create a high level of cognitive processing.

However, there is a debate today over the best way of reaching true general AI. In particular, recent years’ advancements in deep learning — which is itself inspired by the human brain, though it diverges from it in some important ways — have shown developers that there may be other paths.

What is artificial general intelligence?

For many, AGI is the ultimate goal of AI development. Since the dawn of AI in the 1950s, engineers have envisioned intelligent robots that can complete all kinds of tasks — easily switching from one job to the next. AGI would be able to learn, reason, plan, understand natural human language and exhibit common sense.

In short, AGI would be a machine capable of thinking and learning in much the same way that a human does. It would understand situational context and be able to apply what it learned while completing one task to other tasks.

What is the current state of AGI?

We’re still a long way from realizing AGI. Today’s smartest machines fail completely when asked to perform new tasks. Even young children are easily able to apply things they learn in one setting to new tasks in ways that the most complex AI-powered machines can’t.

Researchers are working on the problem. There are a host of approaches, mainly focused on machine learning, that aim to replicate some element of intelligence. Neural networks are generally considered state of the art when it comes to learning correlations in sets of training data. Reinforcement learning is a powerful tool for teaching machines to independently figure out how to complete a task that has clearly prescribed rules. Generative adversarial networks allow computers to take more creative approaches to problem solving.
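
As an illustration of how reinforcement learning handles a task with clearly prescribed rules, here is a minimal sketch of tabular Q-learning. The five-cell world, the reward of 1 at the goal and the hyperparameters are all invented for illustration, not taken from the article.

```python
import random

# A minimal tabular Q-learning sketch: an agent learns, by trial and error,
# to walk from cell 0 to cell 4 on a five-cell line. The environment,
# rewards and hyperparameters are invented for illustration.

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the expected future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Standard Q-learning update.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy steps right from every cell.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The point is that nothing task-specific was programmed in: the agent extracts its policy from the reward signal alone, which is also why the technique only works where rules and rewards are clearly prescribed.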

But there are few approaches that combine some or all of these techniques. This means today’s AI applications can only solve narrow tasks, which leaves us far from artificial general intelligence.

How a more human-like approach to AGI might look

Gary Marcus, founder and CEO of Robust.ai, a company based in Palo Alto, Calif., that is trying to build a cognitive platform for a range of bots, argues that AGI will have to work more like a human mind. Speaking at the MIT Technology Review’s virtual EmTech Digital conference, he said today’s AI algorithms lack the ability to contextualize and generalize information, which are some of the biggest advantages of human-like thinking.

Marcus said he doesn’t specifically think machines need to replicate the human brain, neuron for neuron. But there are some aspects of human thought, like using symbolic representation of information to extrapolate knowledge to a broader set of problems, that would help achieve more general intelligence.

“[Deep learning] doesn’t work for reasoning or language understanding, which we desperately need right now,” Marcus said. “We can train a bunch of algorithms with labelled data, but what we need is deeper understanding.”

The reason deep learning struggles to reason or generalize information is that its algorithms only know what they’ve been shown. It takes thousands or even millions of labelled photos to train an image recognition model. And even after all that, the model is unable to perform tasks other than the one it was trained for.
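
A toy sketch makes this limitation concrete. The nearest-centroid classifier below stands in for a full deep learning model, and the "photos" (2-D feature vectors) and labels are invented for illustration.

```python
# Illustrative sketch: a supervised model's entire "knowledge" is the
# labelled examples it was shown. The toy feature vectors and labels
# below are invented for illustration.

def centroid(points):
    """Mean of a list of 2-D points."""
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

# Labelled training data: every example had to be tagged in advance.
training = {
    "cat": [(0.9, 0.1), (0.8, 0.2), (0.95, 0.15)],
    "dog": [(0.1, 0.9), (0.2, 0.8), (0.15, 0.95)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(x):
    # Nearest-centroid rule: any input, however unfamiliar, is forced into
    # one of the labels seen during training; the model has no way to answer
    # with a concept it was never shown.
    dist = lambda c: (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2
    return min(centroids, key=lambda label: dist(centroids[label]))

print(classify((0.85, 0.1)))  # "cat" -- close to the training data
print(classify((0.5, 0.5)))   # still answers "cat" or "dog", even though
                              # this input resembles neither class
```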

In spite of its limitations, Marcus doesn’t advocate moving away from deep learning. Instead, he says, developers should look for ways to combine deep learning with classical approaches to AI. These include more symbolic interpretations of information, like knowledge graphs. Knowledge graphs contextualize data — connecting pieces of information that are semantically related — while also using machine learning models to understand how people interact with information and make improvements over time.
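
As a rough sketch of the idea, a knowledge graph can be stored as subject-relation-object triples and queried symbolically. The entities and relations below are invented for illustration; production systems (RDF triple stores, property graphs) are far richer, but the principle is the same.

```python
# Illustrative knowledge graph: facts as (subject, relation, object) triples.
# Because related facts are linked symbolically, new conclusions can be read
# off the structure instead of being relearned from labelled examples.

triples = [
    ("cat", "is_a", "mammal"),
    ("dog", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("mammal", "has", "fur"),
]

def query(subject, relation):
    """All objects directly linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

def inherits(subject, relation):
    """Follow is_a links, so properties of mammals also apply to cats."""
    found = query(subject, relation)
    for parent in query(subject, "is_a"):
        found += inherits(parent, relation)
    return found

print(inherits("cat", "has"))  # ['fur'] -- never stated about cats directly
```

This is the kind of symbolic representation Marcus points to: a single fact about mammals extrapolates to every entity linked into the graph, with no retraining.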

“We need to stop building [AI] for ad tech and news feeds, and start building [AI] that can make a real difference,” Marcus said. “To get to that place you have to build systems that have deep understanding, not just [deep learning].” […]

Read more – searchenterpriseai.techtarget.com
