Fact-checking charity Full Fact has recommended that Facebook be more open about how it intends to use artificial intelligence (AI) to flag fake content on its platform.

UK-based Full Fact is one of a number of independent fact-checkers hired by Facebook to help tackle disinformation and fake news on its platform, from vaccine misinformation to whether a tampon can help someone who has been stabbed (it doesn’t).

The AI recommendation is one of ten that Full Fact made to the social media giant in its first report, published after six months of taking part, which judged Facebook’s fact-checking programme to be “worthwhile” overall. Among them, Full Fact recommended that Facebook “be explicit about plans for machine learning”.

Machine learning, a subset of AI, involves giving a computer program labelled examples so that it can recognise similar patterns in new data on its own, ‘learning’ and improving its accuracy over time.
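As a rough illustration only – not anything Facebook or Full Fact actually uses – the sketch below shows this kind of supervised learning with the scikit-learn library; the example posts and labels are invented, and far too few for a real system.

```python
# A minimal sketch of supervised machine learning on text, assuming the
# scikit-learn library. The posts and labels below are invented for
# illustration; a real system would need vast amounts of labelled data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled examples: the program is shown posts alongside the correct label.
posts = [
    "Vaccines cause autism and doctors are hiding it",
    "A tampon can treat a stab wound",
    "The council has approved a new cycle lane",
    "The museum opens an extra hour on Saturdays",
]
labels = ["false", "false", "true", "true"]

# Turn the text into numerical features and fit a classifier on the examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The fitted model can now label unseen posts. With four examples the output
# is meaningless, but accuracy would improve as more labelled data is added.
print(model.predict(["Doctors are hiding the truth about vaccines"]))
```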

Technology companies with large user bases – Facebook has 2.41 billion monthly users – face the challenge of implementing fact-checking at scale, something that artificial intelligence is well positioned to address.

Facebook CEO Mark Zuckerberg has publicly said that he wants to see content flagged by AI in the future.

However, machine learning is not considered to be advanced enough yet to understand the nuances of human social media posts.

What did Full Fact say about machine learning?

In its 45-page report, Full Fact said:


“These systems do not yet exist in any general sense. Creating these technologies involves solving some very hard problems, including ethical as well as technological problems. And attempts to do so need to be carefully scrutinised, which is one role Full Fact plays in this area.”

However, Full Fact recognises that AI can play a key role in “identifying content and patterns of inaccurate content that may lead to specific harms”.

“Effective and ethical technology could in time help to make human efforts to tackle specific harmful inaccurate information more effective by identifying and classifying it at scale,” the report states.

Currently, Facebook identifies potentially false content – through a combination of its own algorithms and reports from users – and adds it to a queue for fact-checkers such as Full Fact to investigate.

Third-party fact-checkers then work their way through the queue, researching potentially false stories and tagging each post with one of nine categories, such as ‘false’, ‘true’ and ‘satire’.

Full Fact then attaches a link to its own article explaining the veracity of the Facebook post, and users see a disclaimer explaining why the post has been flagged.
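The sketch below illustrates that flag-and-review workflow in simplified form; the names, fields, ratings and URL are hypothetical and do not reflect Facebook’s or Full Fact’s actual systems.

```python
# A hypothetical sketch of the flag-and-review queue described above. The
# names, fields and ratings are illustrative only and do not reflect
# Facebook's or Full Fact's actual systems.
from collections import deque
from dataclasses import dataclass
from typing import Optional

RATINGS = {"false", "partly false", "true", "satire"}  # the real list has nine

@dataclass
class FlaggedPost:
    post_id: str
    text: str
    flagged_by: str                       # "algorithm" or "user report"
    rating: Optional[str] = None          # set by a third-party fact-checker
    factcheck_url: Optional[str] = None   # link to the explanatory article

review_queue = deque()  # FlaggedPost items awaiting review

def flag(post_id: str, text: str, flagged_by: str) -> None:
    """Add a post to the queue awaiting third-party fact-checkers."""
    review_queue.append(FlaggedPost(post_id, text, flagged_by))

def review(rating: str, factcheck_url: str) -> FlaggedPost:
    """A fact-checker takes the next post, rates it and attaches an article."""
    if rating not in RATINGS:
        raise ValueError(f"unknown rating: {rating}")
    post = review_queue.popleft()
    post.rating = rating
    post.factcheck_url = factcheck_url
    return post  # downstream, users would see a disclaimer on the post

flag("123", "A tampon can treat a stab wound", "user report")
print(review("false", "https://example.org/factcheck-article"))  # placeholder URL
```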

For a number of posts, the Full Fact team struggled to find the right category for flagged content, something that a computer would struggle with even more.

Avoiding the “serious negative side effects” of AI

Full Fact warned that the current categories used to flag content are too broad for AI to learn from accurately “without serious negative side effects”, given that machine learning depends on a large amount of good-quality data to improve.[…]

Copyright by www.verdict.co.uk