Researchers at the University of Waterloo have developed an AI system that can detect the stance of an article with respect to a claim, an important step toward fighting fake news.
Fighting fake news has become a growing problem in the past few years, and one that begs for a solution involving artificial intelligence. Verifying the near-infinite amount of content being generated on news websites, video streaming services, blogs, social media, and other platforms is virtually impossible.
There has been a push to use machine learning in the moderation of online content, but those efforts have had only modest success at finding spam and removing adult content, and to a much lesser extent at detecting hate speech.
Fighting fake news is a much more complicated challenge. Fact-checking websites such as Snopes, FactCheck.org, and PolitiFact do a decent job of impartially verifying rumors, news, and remarks made by politicians. But they have limited reach.
It would be unreasonable to expect current AI technology to fully automate the fact-checking process.
In a paper presented at the 2019 NeurIPS conference, the University of Waterloo researchers introduce an AI system that tackles one part of this challenge: detecting the stance of articles with respect to claims.
The automated fake-news detection pipeline
Before creating an AI system that can fight fake news, we must first understand what verifying a claim requires. In their paper, the researchers break down the process into the following steps (a code sketch follows the list):
- Retrieving documents that are relevant to the claim
- Detecting the stance or position of those documents with respect to the claim
- Calculating a reputation score for the document, based on its source and language quality
- Verifying the claim based on the information obtained from the relevant documents
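Taken together, these steps describe a modular pipeline rather than a single model. The sketch below is a minimal illustration of how such a pipeline could be wired up; the function names and the weighting logic are hypothetical stand-ins, not the researchers' implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Document:
    source: str
    text: str


def retrieve_documents(claim: str) -> List[Document]:
    """Step 1: fetch documents relevant to the claim (e.g., from a search index)."""
    raise NotImplementedError  # hypothetical retrieval backend


def detect_stance(claim: str, doc: Document) -> str:
    """Step 2: classify the document's stance toward the claim,
    e.g., 'agree', 'disagree', 'discuss', or 'unrelated'."""
    raise NotImplementedError  # the step the paper focuses on


def reputation_score(doc: Document) -> float:
    """Step 3: score the document's trustworthiness from its source and language quality."""
    raise NotImplementedError  # hypothetical scoring heuristic


def verify_claim(claim: str) -> float:
    """Step 4: combine stances, weighted by reputation, into one veracity score."""
    stance_weights = {"agree": 1.0, "disagree": -1.0, "discuss": 0.0, "unrelated": 0.0}
    total, mass = 0.0, 0.0
    for doc in retrieve_documents(claim):
        w = reputation_score(doc)
        total += w * stance_weights[detect_stance(claim, doc)]
        mass += w
    # Positive scores lean toward the claim being true, negative toward false.
    return total / mass if mass else 0.0
```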
Instead of going for an end-to-end AI system that handles all of these steps at once, the researchers focused on the second one: detecting the stance of documents with respect to a claim.
Using transformers to detect stance
This is not the first effort to use AI for stance detection; previous work has applied a variety of machine learning and deep learning techniques to the problem.
However, while those techniques have been effective for some tasks, such as machine translation, they have had limited success at stance detection. “Previous approaches to stance detection were typically earmarked by hand-designed features or word embeddings, both of which had limited expressiveness to represent the complexities of language,” says Alex Wong, co-founder and chief scientist at DarwinAI.
The new technique uses a transformer, a type of deep learning architecture that has come to dominate natural language processing in recent years.
Transformers use attention mechanisms to find the relevant bits of information anywhere in a sequence, instead of processing it strictly one element at a time. This enables them to handle long stretches of text much more effectively than other deep learning architectures, such as recurrent neural networks.
“The beauty of bidirectional transformer language models is that they allow very large text corpuses to be used to obtain a rich, deep understanding of language,” Wong says. “This understanding can then be leveraged to facilitate better decision-making when it comes to the problem of stance detection.” […]
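As a rough illustration of this idea, the sketch below frames stance detection as sentence-pair classification on top of a pretrained bidirectional transformer, using the Hugging Face `transformers` library. The choice of BERT, the four stance labels (borrowed from the Fake News Challenge), and the setup are assumptions for illustration; the article does not specify which model or dataset the researchers used.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed setup: a pretrained bidirectional transformer with a 4-way
# classification head for stances (agree / disagree / discuss / unrelated).
STANCES = ["agree", "disagree", "discuss", "unrelated"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(STANCES)
)


def predict_stance(claim: str, article: str) -> str:
    """Encode the claim and article as a sentence pair and classify the stance.
    The pretrained language model supplies the general understanding of language;
    only the small classification head is specific to stance detection and would
    be fine-tuned on labeled claim-article pairs."""
    inputs = tokenizer(
        claim, article, truncation=True, max_length=512, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    return STANCES[int(logits.argmax(dim=-1))]


# Example usage (the untrained head gives arbitrary output until fine-tuned):
print(predict_stance(
    "The moon landing was staged.",
    "NASA released thousands of photos documenting the Apollo missions.",
))
```

Framing the task as a claim-article pair lets the model attend jointly over both texts, which is what makes the rich pretrained language representation useful for the stance decision.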