Fake news has shaped social media for the better part of a decade, and the debate over how we deal with it isn’t going anywhere soon. TenEighty takes a look at how misinformation is shaping our lives in 2020, and why addressing it is more important – and more difficult – than ever.
The COVID-19 pandemic has pushed online misinformation, among a plethora of other issues, to the front of the public consciousness. As the pandemic started to take hold around the world and people struggled to make sense of their new reality, conspiracy theories started appearing.
With the first cases of COVID-19 appearing in Wuhan, a city in China’s Hubei province, a number of theories emerged linking the virus to the sanitary conditions of ‘wet markets’ in the area, as well as to a number of animals, including bats and pangolins. The origin of the virus has yet to be definitively identified, but these often unfounded theories have still had real-world consequences.
Speaking to a government committee, the Minister of State for Countering Extremism Susan Williams said, “I’ve been speaking to our hate crime lead, and there’s been a 21% uptick in hate incidents against the IC4 and IC5 community.” IC codes are used in police radio communications to describe the apparent ethnicity of suspects or victims – IC4 and IC5 refer to South Asian and East Asian people, respectively.
These aren’t the only theories that have seen violent repercussions – 5G technology has also been targeted. Fuelled by high-profile rollouts in a number of countries coinciding with the global spread of the virus, these conspiracies variously claim that 5G is the direct cause of the virus, that it weakens the immune system, or a range of other baseless ideas. There have been numerous examples of 5G conspiracies in the UK alone, including a USB stick sold as a ‘5G Bio Shield’ and a number of arrests for vandalism and arson attacks on 5G masts.
This is not solely a UK-based problem, however, and Twitter has made a concerted effort to crack down on 5G fake news worldwide, putting tags on tweets spreading misinformation and suspending accounts. It is an inelegant solution, though, with the tag applied inconsistently to tweets that include the words ‘5G’ and ‘Corona’ with little consideration for the context in which they are used.
The extreme anti 5G activist Mark Steele who’s repeatedly claimed that COVID-19 is a hoax designed to kill millions has had his Twitter account suspended pic.twitter.com/lQi4UXBgIU
— Rory Cellan-Jones (@ruskin147) June 12, 2020
As more is learnt about the virus and its effects, information about treatments and safety has also come under scrutiny. Social distancing and wearing masks in public have become standard policy in recent weeks, particularly as countries start easing out of lockdown, but as the virus first took hold, information was often disparate and inaccurate.
The UK communications regulator Ofcom has been monitoring misinformation throughout the pandemic, with week one of the lockdown revealing that “35% of online adults” saw the false claim that drinking more water can flush out the infection, with 24% seeing claims that avoiding cold food and gargling salt water can help you avoid the virus.
But how do we deal with fake news when we don’t know it’s fake?
The same Ofcom research shows that 40% of people are struggling to discern what is accurate and what isn’t, rising to 52% among 18-24 year olds, and this issue is at the heart of why addressing misinformation about COVID-19 is so difficult. While unscientific cures or USB sticks that supposedly protect you from 5G radiation are relatively easy to dismiss, what happens when the lines are blurred, or a definitive answer has yet to be determined? For example, a common claim at the beginning of the pandemic was that the virus was only a risk to older people or people with pre-existing conditions. While children make up only 2% of global cases, they can still be affected by the virus, with the lower case numbers partially attributed to lower levels of testing.
Misinformation and content moderation are taking up more space than ever in the priorities of social media companies, but dealing with these issues in a situation this fluid and changeable, all while misinformation is shaping public behaviour and government policy, is an unprecedented challenge.
All these issues are compounded by a lack of consensus on what addressing a given piece of misinformation might entail. This lack of certainty can help falsehoods gain traction to a degree that makes them far harder to refute further down the line. If you repeat something enough times (or at least if the algorithm does), particularly when people are scared and looking to make sense of an uncertain world, it’s not hard for it to become reality, and social media has yet to find an adequate solution to the issue.
There may never be a solution good enough. Platforms have been slowly moving towards solutions like decentralised content moderation or directing users towards neutral information in an effort to balance the safety of users against rights to free expression, but these solutions are ill-suited to move as quickly as they need to in a pandemic. At a time when the role of social media in society is in flux, platforms will need to demonstrate they can be trusted. As with many things in this pandemic, whether governments, businesses or users deem this to be a good enough answer remains to be seen.