Deep learning won’t detect fake news, but it will give fact-checkers a boost

Researchers at the University of Waterloo have developed an AI system that can detect the stance of an article toward a claim, an important step in fighting fake news.

Copyright by bdtechtalks.com

Fighting fake news has become a growing problem in the past few years, and one that begs for a solution involving AI. Verifying the near-infinite amount of content being generated on news websites, video streaming services, blogs, social media, and elsewhere by hand is virtually impossible.

There has been a push to use AI in the moderation of online content, but those efforts have had only modest success in filtering spam and removing adult content, and even less in detecting hate speech.

Fighting fake news is a much more complicated challenge. Fact-checking websites such as Snopes, FactCheck.org, and PolitiFact do a decent job of impartially verifying rumors, news, and remarks made by politicians. But they have limited reach.

It would be unreasonable to expect current technologies to fully automate the fight against fake news. But there’s hope that the use of AI can help automate some of the steps of the fake news detection pipeline and augment the capabilities of human fact-checkers.

In a paper presented at the 2019 NeurIPS conference, researchers at DarwinAI and Canada’s University of Waterloo presented an AI system that uses advanced language models to automate stance detection, an important first step toward identifying disinformation.

The automated fake-news detection pipeline

Before creating an AI system that can fight fake news, we must first understand what it takes to verify the veracity of a claim. In their paper, the researchers break down the process into the following steps:

  • Retrieving documents that are relevant to the claim
  • Detecting the stance or position of those documents with respect to the claim
  • Calculating a reputation score for the document, based on its source and language quality
  • Verifying the claim based on the information obtained from the relevant documents
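The four steps above can be sketched as a simple pipeline. Everything in this sketch is illustrative: the function names, the keyword retrieval, the stance heuristic, and the reputation score are toy placeholders standing in for the real retrieval systems and models a production fact-checker would use, and none of it comes from the paper.

```python
# Illustrative sketch of the four-step fact-checking pipeline described above.
# All function bodies are toy placeholders, not the paper's actual models.

STANCES = ("agree", "disagree", "discuss")  # possible stances toward a claim

def retrieve_documents(claim, corpus):
    """Step 1: naive keyword retrieval (placeholder for a real search engine)."""
    terms = set(claim.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def detect_stance(claim, document):
    """Step 2: placeholder stance detector; this is where the paper's
    transformer-based model would go. Returns one of STANCES."""
    if "not" in document.lower().split() or "false" in document.lower():
        return "disagree"
    return "agree"

def reputation_score(document):
    """Step 3: toy reputation heuristic standing in for source/language scoring."""
    words = document.split()
    return min(1.0, len(words) / 10)  # fuller sentences score higher, capped at 1

def verify_claim(claim, corpus):
    """Step 4: aggregate stances of relevant documents, weighted by reputation."""
    score = 0.0
    for doc in retrieve_documents(claim, corpus):
        stance = detect_stance(claim, doc)
        weight = reputation_score(doc)
        if stance == "agree":
            score += weight
        elif stance == "disagree":
            score -= weight
    return "likely true" if score > 0 else "likely false"

corpus = [
    "The earth orbits the sun once every year, as astronomers have long confirmed.",
    "It is false that the earth orbits the sun.",
]
print(verify_claim("the earth orbits the sun", corpus))  # likely true
```

The point of the sketch is the decomposition itself: each stage can be improved or swapped out independently, which is why the researchers could focus on step 2 in isolation.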

Instead of going for an end-to-end AI-powered fake-news detector that takes a piece of news as input and outputs “fake” or “real,” the researchers focused on the second step of the pipeline. They created an algorithm that determines whether a given document agrees, disagrees, or takes no stance on a specific claim.

Using transformers to detect stance

This is not the first effort to use deep learning for stance detection. Previous research has used various artificial neural network (ANN) architectures, including recurrent neural networks (RNNs), long short-term memory (LSTM) models, and multi-layer perceptrons. Those efforts have also leveraged other research in the field, such as work on “word embeddings”: numerical vector representations that capture the relationships between words and make them understandable for neural networks.
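The idea behind word embeddings can be shown with a toy example. The three-dimensional vectors below are hand-picked for illustration only; real embeddings such as word2vec or GloVe have hundreds of dimensions and are learned from large corpora.

```python
import math

# Toy word embeddings: hand-picked 3-d vectors for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Similarity of direction between two vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related words point in similar directions; unrelated words don't:
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.99
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.30
```

Because relationships between words become geometry, a neural network can operate on these vectors instead of raw strings — which is exactly what makes them useful, and also what limits them: each word gets a single fixed vector regardless of context.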

However, while those techniques have proven effective for some tasks, such as machine translation, they have had limited success with stance detection. “Previous approaches to stance detection were typically earmarked by hand-designed features or word embeddings, both of which had limited expressiveness to represent the complexities of language,” says Alex Wong, co-founder and chief scientist at DarwinAI.

The new technique uses a transformer, a deep learning architecture that has become popular in the past couple of years. Transformers power state-of-the-art language models such as GPT-2 and Meena. Though transformers still suffer from the fundamental flaws of deep learning, they are much better than their predecessors at handling large corpora of text.

Transformers use special techniques, most notably the attention mechanism, to find the relevant bits of information in a sequence of tokens instead of processing it element by element. This makes them much more memory-efficient than earlier algorithms when handling long sequences. Transformers are also pretrained in a self-supervised fashion, which means they don’t require the time- and labor-intensive data-labeling work that goes into most contemporary machine learning projects.
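The mechanism that lets a transformer "find the relevant bits" is attention. Here is a minimal, single-query sketch of scaled dot-product attention in plain Python — a simplification of what runs inside a real transformer, which applies this in parallel across many queries, heads, and layers:

```python
import math

def softmax(xs):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.
    The query attends most strongly to the keys it is most similar to,
    and the output is the corresponding weighted mix of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# The query matches the first key far better than the second, so the
# output is dominated by the first value vector:
keys   = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([1.0, 0.0], keys, values)
print(out)
```

Because every position can attend directly to every other position, relevant information does not have to be squeezed through a fixed-size hidden state the way it does in an RNN or LSTM.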

“The beauty of bidirectional transformer language models is that they allow very large text corpuses to be used to obtain a rich, deep understanding of language,” Wong says. “This understanding can then be leveraged to facilitate better decision-making when it comes to the problem of stance detection.” […]

