Why Hasn’t AI Mastered Language Translation?

In the myth about the Tower of Babel, people conspired to build a city and tower that would reach heaven. Their creator observed, “And now nothing will be restrained from them, which they have imagined to do.” According to the myth, God thwarted this effort by creating diverse languages so that they could no longer collaborate.

In our modern times, we’re experiencing a state of unprecedented connectivity thanks to technology. However, we’re still living under the shadow of the Tower of Babel. Language remains a barrier in business and marketing. Even though technological devices can quickly and easily connect, humans from different parts of the world often can’t.

Translation agencies step in, making presentations, contracts, outsourcing instructions, and advertisements comprehensible to all intended recipients. Some agencies also offer “localization” expertise. For instance, if a company is marketing in Quebec, the advertisements need to be in Québécois French, not European French. Risk-averse companies may be reluctant to invest in these translations. Consequently, these ventures haven’t achieved full market penetration.

Global markets are waiting, but AI-powered language translation isn’t ready yet, despite recent advances in natural language processing and sentiment analysis. AI still has difficulty processing requests in one language, even without the additional complications of translation. In November 2016, Google added a neural network to its translation tool. However, some of its translations are still socially and grammatically odd. I spoke to technologists and a language professor to find out why.

“To Google’s credit, they made a pretty massive improvement that appeared almost overnight. You know, I don’t use it as much. I will say this. Language is hard,” said Michael Housman, chief data science officer at RapportBoost.AI and faculty member of Singularity University.

He explained that the ideal scenario for machine learning and artificial intelligence is something with fixed rules and a clear-cut measure of success or failure. He named chess as an obvious example, and noted that machines were also able to beat the best human Go player. This happened faster than anyone anticipated because of the game’s very clear rules and limited set of moves.

Housman elaborated, “Language is almost the opposite of that. There aren’t as clearly-cut and defined rules. The conversation can go in an infinite number of different directions. And then of course, you need labeled data. You need to tell the machine to do it right or wrong.”

Housman noted that it’s inherently difficult to assign these informative labels. “Two translators won’t even agree on whether it was translated properly or not,” he said. “Language is kind of the wild west, in terms of data.” […]
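Housman’s point about disagreeing labelers can be made concrete. As a hedged sketch (the two “translators” and their quality labels are invented for illustration), here is Cohen’s kappa — a standard chance-corrected agreement measure that the article itself does not name — applied to two raters judging the same ten translations as good (1) or bad (0):

```python
# Toy illustration: two translators rate the same 10 translations.
# Labels are invented; Cohen's kappa corrects raw agreement for chance.

def cohens_kappa(a, b):
    n = len(a)
    # Observed agreement: fraction of items both raters labeled the same.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    p1a, p1b = sum(a) / n, sum(b) / n
    expected = p1a * p1b + (1 - p1a) * (1 - p1b)
    return (observed - expected) / (1 - expected)

translator_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
translator_b = [1, 0, 0, 1, 1, 1, 0, 0, 1, 0]

print(round(cohens_kappa(translator_a, translator_b), 2))  # → 0.2
```

The raters agree on 6 of 10 items, yet after discounting chance the kappa is only 0.2 — weak agreement, which is exactly the “wild west” labeling problem Housman describes.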

  1. Dr Peter Lozo

    @SwissCognitive AI isn’t good at context sensitive processing (interpretation and hence translation…

  3. AI hasn’t mastered natural language because AI developers have overlooked many of its most complex aspects. Namely, that humans don’t speak in isolated words: they use prefabs most of the time (collocations, patterns, lexical bundles, formulaic sequences, clichés, etc.) with some syntactic glue to hold everything together. The amount of regular syntax varies according to topic, domain, register and of course speaker. But just look at any text on the web and start marking prefabs, named entities, addresses and dates. You’ll probably find these units are overwhelmingly present in any natural text.
    NLU systems are still essentially word-centered, implementing the same processing chain as compilers do when transforming programming-language instructions into machine code. They compose individual units (tokens) into the “appropriate” structure (ideally a single unambiguous one), and then derive the “semantics” (meaning, in the case of natural language) from the syntactic structure. This simply doesn’t work. Add to this the need to integrate elements of the (social, linguistic) context, as well as machine-tractable theories about the outside world (common sense, knowledge derived from embodied perception), and it is easy to understand why AI hasn’t tackled NLU yet.
    Switching to deep neural networks for machine translation has brought a major improvement in translation quality. But I’m not sure this gives us more insight into how humans process natural languages (apart from the fact that they clearly do not rely on symbolic rules, à la generative grammar). The same holds for IBM’s Deep Blue or Google’s AlphaGo beating humans at chess or Go, respectively. It shows that, given time and enough money, very smart developers can make a machine that performs better than humans at a given task. It doesn’t tell us anything about how humans play chess, or Go. And playing chess or Go is just a tiny portion of what we like to think of as higher cognitive processes.

  4. David Marete

    @SwissCognitive This tweet was one day early.
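The word-centered processing chain criticized in the third comment can be illustrated with a toy sketch (the dictionaries and the specific lookup strategy are invented for illustration): a word-by-word lookup mangles the French idiom “il pleut des cordes” (roughly “it is pouring down”), while treating the prefab as a single unit preserves the meaning:

```python
# Toy translator contrasting word-centered lookup with prefab lookup.
# Both dictionaries are invented for illustration only.

WORD_DICT = {
    "il": "it", "pleut": "rains", "des": "some", "cordes": "ropes",
}

# Phrase-level lexicon treating the idiom as one unit (a "prefab").
PHRASE_DICT = {
    "il pleut des cordes": "it is pouring down",
}

def translate_word_by_word(sentence):
    """Compose meaning token by token, compiler-style."""
    return " ".join(WORD_DICT.get(w, w) for w in sentence.split())

def translate_with_prefabs(sentence):
    """Look up the whole prefab first, fall back to words."""
    return PHRASE_DICT.get(sentence, translate_word_by_word(sentence))

print(translate_word_by_word("il pleut des cordes"))  # → it rains some ropes
print(translate_with_prefabs("il pleut des cordes"))  # → it is pouring down
```

The token-by-token path produces grammatical nonsense precisely because the idiom’s meaning does not live in its individual words — the commenter’s point in miniature.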
