Natural language understanding (NLU) is a technical concept within the larger field of natural language processing (NLP). NLU is the process responsible for translating natural, human language into a format that a computer can interpret. Essentially, before a computer can process language data, it must understand the data.
Copyright by www.unite.ai
Techniques for NLU include the use of common syntax and grammatical rules to enable a computer to understand the meaning and context of natural human language. The ultimate goal of these techniques is that a computer will come to have an “intuitive” understanding of language, able to write and understand language just the way a human does, without constantly referring to the definitions of words.
There are numerous techniques that computer scientists and experts use to enable computers to understand human language. Most of the techniques fall into the category of “syntactic analysis”. Syntactic analytic techniques include:
- word segmentation
- morphological segmentation
- sentence breaking
- part-of-speech tagging
These syntactic analytic techniques apply grammatical rules to groups of words and attempt to use these rules to derive meaning. In contrast, NLU operates by using “semantic analysis” techniques.
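Two of the syntactic techniques listed above, sentence breaking and word segmentation, can be sketched in a few lines. This is a deliberately naive illustration using regular expressions (the function names and the splitting rules are simplifications for this example, not a production tokenizer):

```python
import re

def sentence_break(text):
    # Naive sentence breaking: split after ., !, or ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def word_segment(sentence):
    # Naive word segmentation: keep only alphabetic word tokens.
    return re.findall(r"[A-Za-z']+", sentence)

text = "NLU is hard. Computers must learn context!"
sentences = sentence_break(text)
tokens = [word_segment(s) for s in sentences]
```

Real tokenizers must also handle abbreviations ("Dr."), numbers, and punctuation that a simple regex like this one gets wrong.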
Semantic analysis applies computer algorithms to text, attempting to understand the meaning of words in their natural context, instead of relying on rules-based approaches. A phrase's grammatical correctness doesn't necessarily correlate with its validity: there are phrases that are grammatically correct yet meaningless, and phrases that are grammatically incorrect yet meaningful. In order to distinguish the most meaningful aspects of words, NLU applies a variety of techniques intended to pick up on the meaning of a group of words with less reliance on grammatical structure and rules.
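One common way to capture meaning without grammar rules is to represent words as vectors and compare them by cosine similarity. The sketch below uses tiny hand-made vectors (the values and the `bank_river`/`bank_money` names are invented for illustration; real systems learn such embeddings from large corpora):

```python
import math

# Hypothetical 3-dimensional "embeddings" for illustration only.
vectors = {
    "bank_river": [0.9, 0.1, 0.0],
    "bank_money": [0.1, 0.0, 0.9],
    "finance":    [0.2, 0.1, 0.8],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The "money" sense of bank sits closer to "finance" than the "river" sense.
money_sim = cosine(vectors["bank_money"], vectors["finance"])
river_sim = cosine(vectors["bank_river"], vectors["finance"])
```

The point is that similarity of meaning falls out of geometry, not grammatical rules.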
NLU is an evolving and changing field, and it's considered one of the hard problems of artificial intelligence. Various techniques and tools are being developed to give machines an understanding of human language. Most NLU systems have certain core components in common. A lexicon for the language is required, as is some type of text parser and grammar rules to guide the creation of text representations. The system also requires a theory of semantics to enable comprehension of the representations. There are various semantic theories used to interpret language, like stochastic semantic analysis or naive semantics.
Common NLU techniques include:
Named Entity Recognition is the process of recognizing “named entities”, such as people, organizations, and important places or things. Named Entity Recognition operates by distinguishing fundamental concepts and references in a body of text, identifying named entities and placing them in categories like locations, dates, organizations, people, works, etc. Supervised models, often supplemented with grammar rules, are typically used to carry out NER tasks.
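A minimal sketch of the categorization step in NER is a gazetteer lookup: scan the text for known names and attach a label. This is a toy baseline (the names and labels below are invented for the example), not the supervised approach real NER systems use:

```python
# Toy gazetteer mapping known entity strings to categories.
GAZETTEER = {
    "Ada Lovelace": "PERSON",
    "Paris": "LOCATION",
    "Google": "ORGANIZATION",
}

def tag_entities(text):
    # Return every gazetteer entry found in the text, with its category.
    found = [(name, label) for name, label in GAZETTEER.items() if name in text]
    return sorted(found)

entities = tag_entities("Ada Lovelace never worked at Google in Paris.")
```

A trained model generalizes to names it has never seen; a gazetteer, by construction, cannot, which is why supervised methods dominate in practice.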
Word-Sense Disambiguation is the process of determining the meaning, or sense, of a word based on the context that the word appears in. Word-sense disambiguation often makes use of part-of-speech taggers in order to contextualize the target word. Supervised methods of word-sense disambiguation include the use of support vector machines and memory-based learning. However, most word-sense disambiguation models are semi-supervised models that employ both labeled and unlabeled data. […]
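A classic, knowledge-based baseline for word-sense disambiguation is the simplified Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the target word's context. The glosses below are shortened paraphrases written for this sketch, not taken from an actual dictionary:

```python
# Shortened, illustrative glosses for two senses of "bank".
SENSES = {
    "bank": {
        "financial": "institution that accepts deposits and lends money",
        "river": "sloping land beside a body of water",
    }
}

def lesk(word, context_words):
    # Simplified Lesk: choose the sense with the largest gloss/context overlap.
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(set(gloss.split()) & set(context_words))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

sense = lesk("bank", ["money", "loan"])  # overlaps with the financial gloss
```

Note how brittle exact word overlap is ("deposits" in the gloss would not match "deposit" in the context), which motivates the supervised and semi-supervised methods described above.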