The UK fact-checking organisation Full Fact also said that while the initiative “is not perfect”, other internet platforms should be “running similar programmes”.
In its report on bad information, published today, Full Fact said other sites creating fact-checking schemes “would not only help tackle misinformation, but also give fact checkers and academics greater insights into what methods do and don’t work for different platforms and audiences.”
“We have also called for more data to be shared with us and other fact checkers on the programme, so that we can understand more about the reach of the content that appears in the queue and what benefits it has for users.
“This would allow us to be better informed when choosing what to fact check, and when, and would help us assess our impact,” the report reads.
Bad information ruins lives. We’re calling for more action to tackle it.
Read the Full Fact Report 2020: https://t.co/3XVqtCPzJd
— Full Fact (@FullFact) April 30, 2020
Full Fact joined Facebook’s initiative in January 2019. Under the scheme, fact checkers are provided with a ‘queue’ of content flagged by users, by “disbelief comments” and by the site’s own algorithms.
“Full Fact then chooses what content to check, during which we must assign a rating to the piece of content: False, Partly False, False headline, True, Not eligible, Not Rated and Prank Generator.
“Posts rated ‘false’ or ‘partly false’ are downrated by Facebook’s algorithm and if it contains an image, a grey overlay will appear with a notice to users and a link to the fact check,” the report reads.
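For illustration, here is a minimal Python sketch of the rating-to-action logic the report describes, where only ‘False’ and ‘Partly False’ trigger downranking and, for posts with an image, a grey overlay linking to the fact check. The rating names come from the report; the function and data structures are assumptions for clarity, not Facebook’s actual code.

```python
# Hypothetical sketch of how a fact-check rating might map to the actions
# Full Fact describes (downranking plus a grey overlay on images).
# Rating names are from the report; everything else is assumed.
from enum import Enum

class Rating(Enum):
    FALSE = "False"
    PARTLY_FALSE = "Partly False"
    FALSE_HEADLINE = "False headline"
    TRUE = "True"
    NOT_ELIGIBLE = "Not eligible"
    NOT_RATED = "Not Rated"
    PRANK_GENERATOR = "Prank Generator"

# Per the report, only these two ratings are downranked by the algorithm.
DOWNRANKED = {Rating.FALSE, Rating.PARTLY_FALSE}

def apply_rating(rating: Rating, has_image: bool) -> dict:
    """Return the actions a platform might take for a rated post."""
    downrank = rating in DOWNRANKED
    return {
        "downrank": downrank,                    # shown to fewer people
        "grey_overlay": downrank and has_image,  # overlay with a fact-check link
    }

print(apply_rating(Rating.PARTLY_FALSE, has_image=True))
# {'downrank': True, 'grey_overlay': True}
```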
The charity added that the ratings “have obvious implications on freedom of speech”, noting that it had previously been able to flag posts as ‘opinion’ to stop the algorithm downranking information in news feeds, an option Facebook removed last year.
“Now, the limited number of ratings means we sometimes have no option but to rate something as Partly False.
“To help us deal with this, we want to see Facebook introduce a ‘more context needed’ rating that would not downrank posts but would flag the potential problems to other users,” they said.
Elsewhere, the charity said private messaging apps like WhatsApp “pose a particular challenge to fact checkers”, as the organisation relies on posts being shared directly with it in order to investigate them.
The app is known for using end-to-end encryption for its messages, meaning content can be read only by the sender and the recipient.
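As a rough illustration of what that guarantee means, the sketch below uses the public-key ‘Box’ construction from the PyNaCl library. WhatsApp itself uses the more elaborate Signal protocol; this example only demonstrates the core property that a server relaying the message cannot read it.

```python
# Minimal sketch of end-to-end encryption using PyNaCl's public-key Box.
# WhatsApp actually uses the Signal protocol; this only illustrates the
# core property that just the two endpoints can decrypt the message.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()
receiver_key = PrivateKey.generate()

# The sender encrypts with their private key and the receiver's public key.
sending_box = Box(sender_key, receiver_key.public_key)
ciphertext = sending_box.encrypt(b"Have you seen this forwarded claim?")

# A server relaying `ciphertext` cannot read it: decryption requires a
# private key, and private keys never leave the two devices.
receiving_box = Box(receiver_key, sender_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext)  # b'Have you seen this forwarded claim?'
```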
“In the UK, there were fewer reports of misinformation circulating on WhatsApp compared with other countries – despite concerns that it would be problematic during the 2019 general election.
“However, that appeared to shift as the country faced the novel coronavirus pandemic – which was ongoing as this report was published – and posts of bad advice spread widely on WhatsApp.”
The publication of the report, titled Fighting the causes and consequences of bad information, comes on the same day Facebook’s UK Public Policy Manager gave evidence to Parliament’s Digital, Culture, Media and Sport Sub-committee on Online Harms and Disinformation.
Speaking to the group of MPs, Richard Earley said: “When it comes to general misinformation, our usual approach is that we tend to agree with those who say that it shouldn’t be for companies like Facebook to decide what is and isn’t true.
“Therefore we have, since 2017, built a network of third-party fact-checking partners around the world. We have three in the UK: Full Fact, FactCheckNI and just earlier this month, we added Reuters to that list.
“They’re able to act on instances of content which they feel are misleading, they issue a rating on that where they choose to do so, and then we take action to show it to fewer people.”
Earley also said that when people encounter a piece of misinformation covered by a warning screen on the platform, 95% of them do not go on to click through to it.
On WhatsApp, he said “there’s a difference between the way that people are using text messaging and email” to spread misinformation and how that can be done on the messaging platform.
“We take significant action on WhatsApp to prevent people from sending fake messages, and we ban millions of fake accounts a day on WhatsApp who are engaged in that kind of behaviour.
“Just in the last week, we’ve extended the limitations which we place on the ability of people to forward messages multiple times in WhatsApp.
“We’ve dropped the limit of how many people they can forward [a message they’ve received] from five to one, and our initial research suggests that that has actually reduced the number of those messages being sent by 70%,” he said.
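As a toy illustration of the kind of forwarding cap Earley describes, the sketch below restricts messages that have already been forwarded many times to a single chat per forward. The thresholds and names are assumptions for illustration, not WhatsApp’s implementation.

```python
# Toy sketch of a forward cap like the one described above: messages that
# have already been forwarded many times can only be sent on to one chat
# at a time. All names and thresholds here are assumptions.
HIGHLY_FORWARDED_AFTER = 5   # forwards before a message counts as "highly forwarded"
NORMAL_FORWARD_CAP = 5       # chats per forward for ordinary messages
HIGHLY_FORWARDED_CAP = 1     # the limit dropped "from five to one"

def allowed_forwards(times_already_forwarded: int) -> int:
    """How many chats a single forward action may target."""
    if times_already_forwarded >= HIGHLY_FORWARDED_AFTER:
        return HIGHLY_FORWARDED_CAP
    return NORMAL_FORWARD_CAP

print(allowed_forwards(2))   # 5 - an ordinary message
print(allowed_forwards(7))   # 1 - a highly forwarded message
```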
More information about Full Fact’s report can be found on the organisation’s website.
Want More?
Read our report on what YouTube is doing around fact-checking on its platform, or find out more about how you can soon get Google Meet for free.
For updates follow @TenEightyUK on Twitter or like TenEighty UK on Facebook.