Artificial intelligence isn’t very intelligent and won’t be any time soon

For all of the recent advances in artificial intelligence, machines still struggle with common sense.

Copyright by massivesci.com

Many think we’ll see human-level artificial intelligence in the next 10 years. Industry continues to boast smarter tech like personalized assistants and self-driving cars. And in computer science, new and powerful tools embolden researchers to assert that we are nearing the goal in the quest for human-level AI.

But history and current limitations should temper these expectations. Despite the hype, despite progress, we are far from machines that think like you and me.

Last year Google unveiled Duplex — a Pixel smartphone assistant that can call and make reservations for you. When asked to schedule an appointment, say at a hair salon, Duplex makes the phone call. What follows is a terse but realistic conversation, including scheduling and service negotiation. Duplex is just a drop in the ocean of new tech. Self-driving cars, drone delivery systems, and intelligent personal assistants are products of a recent shift in research that has revolutionized how machines learn from data.

The shift comes from the resurgence of “deep learning,” a method for training machines with hundreds, thousands, or even millions of artificial neurons. These artificial neurons are crudely inspired by those in our brains. Think of them as knobs. If each knob is turned in just the right way, the machine can do different things. With enough data, we can learn how to adjust each knob so that the machine can recognize objects, use language, or perhaps do anything else a human could do.
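To make the knob metaphor concrete, here is a minimal sketch (not from the article) of a single artificial neuron with two knobs, a weight and a bias, nudged by gradient descent until it fits a toy pattern. The data and learning rate are illustrative assumptions.

```python
# One artificial neuron with two "knobs" (w and b), tuned from data
# by gradient descent. Purely illustrative toy example.
import random

# Toy data: examples of the pattern y = 2*x + 1.
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]

w, b = random.random(), random.random()  # the knobs, set randomly at first
lr = 0.05                                # how far each update turns a knob

for step in range(5000):
    x, y = random.choice(data)
    pred = w * x + b    # the neuron's guess
    err = pred - y      # how wrong the guess was
    w -= lr * err * x   # turn each knob slightly toward less error
    b -= lr * err

print(f"learned knobs: w={w:.2f}, b={b:.2f}")  # approaches w=2.00, b=1.00
```

With millions of such knobs stacked into layers, this same turn-the-knobs-toward-less-error loop is what lets deep learning systems fit far richer patterns than this one.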

Previously, a clever programmer would “teach” the machine these skills instead of it learning them on its own. Infamously, this approach figured in both the success and the limits of IBM’s chess-playing machine Deep Blue, which beat the chess grandmaster and then world champion Garry Kasparov in 1997. Deep Blue’s programmers gathered insights from expert chess players and programmed them into Deep Blue. This strategy worked well enough to beat a grandmaster, but failed as a general approach to building intelligence beyond chess. Chess has clear rules. It’s simple enough that you can encode the knowledge you want the machine to have. But most problems aren’t like this.
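As an illustration of what “encoding knowledge” looks like, here is a hand-coded position score in the spirit of that approach. The piece values are textbook material counts, a deliberate simplification; this is not Deep Blue’s actual evaluation function.

```python
# Hand-coded expert knowledge: score a chess position by material,
# using standard piece values. Uppercase = White, lowercase = Black.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # kings not scored

def material_score(pieces: str) -> int:
    """Positive scores favor White, negative favor Black."""
    score = 0
    for ch in pieces:
        value = PIECE_VALUES.get(ch.upper(), 0)
        score += value if ch.isupper() else -value
    return score

# White is up a rook in this toy position, so the score is +5.
print(material_score("KQRRkqr"))
```

Every line of that rule came from a human, not from data, and that is exactly why the approach stops working once a problem’s rules aren’t crisp enough to write down.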

Take vision, for example. For a self-driving car to work, it needs to “see” what’s around it. If the car sees a person in its path, it should stop. A programmer could provide the car a hint to look for faces. Whenever it sees a face, the car stops. This sounds sensible but is a recipe for disaster. For example, if someone’s face is covered, the car won’t know to stop. The programmer could amend this by adding another hint, like looking for legs. But imagine someone whose face is covered crossing the street with groceries covering their legs. Many real-world problems suffer from this sort of complexity. For every hint you provide the machine, there always seems to be a situation not covered by the hints. […]
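The brittleness of that hint-stacking strategy can be sketched in a few lines. The scene representation below is a made-up stand-in (real perception systems do not work on sets of labels); it exists only to show how a rule set silently misses the case no rule anticipated.

```python
# Hypothetical rule-based pedestrian check built from hand-written hints.
def should_stop(visible_features: set) -> bool:
    if "face" in visible_features:   # hint 1: a face means a person
        return True
    if "legs" in visible_features:   # hint 2: the patch for covered faces
        return True
    return False                     # no hint matched, so: keep driving

# A person with a covered face, legs hidden behind grocery bags:
scene = {"torso", "scarf", "grocery_bags"}
print(should_stop(scene))  # False -- the case the hints never anticipated
```

A learned system, by contrast, adjusts its knobs from many examples of people in many configurations, rather than relying on a programmer to enumerate every case in advance.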

Read more – massivesci.com
