For all of the recent advances in artificial intelligence, machines still struggle with common sense.


Many think we’ll see human-level artificial intelligence in the next 10 years. Industry continues to tout smarter tech such as personalized assistants and self-driving cars. And in computer science, new and powerful tools embolden researchers to assert that we are nearing the goal in the quest for human-level artificial intelligence.

But history and current limitations should temper these expectations. Despite the hype, despite progress, we are far from machines that think like you and me.

Last year Google unveiled Duplex, a Pixel smartphone assistant that can call and make reservations for you. When asked to schedule an appointment, say at a hair salon, Duplex makes the phone call. What follows is a terse but realistic conversation that covers scheduling and negotiating the service. Duplex is just a drop in the ocean of new tech. Self-driving cars, drone delivery systems, and intelligent personal assistants are products of a recent shift in artificial intelligence research that has revolutionized how machines learn from data.

The shift comes from the rise of “deep learning,” a method for training machines with hundreds, thousands, or even millions of artificial neurons. These artificial neurons are crudely inspired by those in our brains. Think of them as knobs. If each knob is turned in just the right way, the machine can do different things. With enough data, we can learn how to adjust each knob so the machine can recognize objects, use language, or perhaps do anything else a human could do.
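To make the knob analogy concrete, here is a minimal, purely illustrative sketch (the article itself contains no code): a single artificial neuron in Python whose weights are the “knobs,” nudged a little at a time by data until it settles on a useful setting. The toy task, learning to output 1 only when both inputs are 1, is an assumption chosen for brevity.

```python
# Illustrative sketch only: one artificial "neuron" whose weights act as knobs.
# Repeated small adjustments, driven by training data, turn the knobs toward
# settings that produce the desired outputs.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # desired outputs

weights = rng.normal(size=2)   # two knobs, one per input
bias = 0.0                     # one more knob

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    pred = sigmoid(X @ weights + bias)   # what the neuron currently says
    error = pred - y                     # how wrong each prediction is
    grad = error * pred * (1 - pred)     # direction to turn each knob
    weights -= 0.5 * X.T @ grad          # small adjustment to the weights
    bias -= 0.5 * np.sum(grad)           # small adjustment to the bias

print(np.round(sigmoid(X @ weights + bias), 2))  # close to [0, 0, 0, 1]
```

Deep learning stacks many layers of such neurons, but the principle is the same: the data, not a programmer, decides where the knobs end up.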

Previously, a clever programmer would “teach” the machine these skills instead of the machine learning them on its own. Infamously, this approach was behind both the success and the demise of IBM’s chess-playing machine Deep Blue, which beat grandmaster and then-world champion Garry Kasparov in 1997. Deep Blue’s programmers gathered insights from expert chess players and encoded them into the machine. This strategy worked well enough to beat a grandmaster, but it failed as a general approach to building intelligence beyond chess. Chess has clear rules. It’s simple enough that you can encode the knowledge you want the machine to have. But most problems aren’t like this.

Take vision, for example. For a self-driving car to work, it needs to “see” what’s around it. If the car sees a person in its path, it should stop. A programmer could give the car a hint to look for faces: whenever it sees a face, the car stops. This is sensible, but it’s a recipe for disaster. For example, if someone’s face is covered, the car won’t know to stop. The programmer could amend this by adding another hint, like looking for legs. But imagine someone whose face is covered crossing the street with groceries covering their legs. Many real-world problems suffer from this sort of complexity. For every hint you provide the machine, there always seems to be a situation not covered by the hints, as the sketch below illustrates. […]
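A toy sketch of the hint-stacking approach (an assumption for illustration only: the “scene” is just a set of labels, and each hint is a literal if-statement; the article shows no code) makes the brittleness easy to see.

```python
# Illustrative sketch only: a hand-coded "pedestrian detector" built from hints.
# Every rule covers one case, and every new edge case demands yet another rule.
def should_stop(scene: set[str]) -> bool:
    if "face" in scene:
        return True   # hint 1: a visible face means a person is there
    if "legs" in scene:
        return True   # hint 2: visible legs mean a person is there
    return False      # ...and everything the hints never anticipated

# A person whose face is covered by a scarf and whose legs are hidden
# behind grocery bags: neither hint fires, so the car would not stop.
print(should_stop({"scarf", "grocery bags"}))  # False
```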

Read more – massivesci.com

