From the initial tweet to the signing of the executive order, TenEighty chronicles US President Donald Trump’s current dispute with social media companies.
“There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent,” tweets Donald Trump on Tuesday. “Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed.”
These claims were unsubstantiated, and Twitter affixed a label to the tweet reading “Get the facts about mail-in ballots”.
Clicking on the notice takes users to a Twitter Moment, which states: “These claims are unsubstantiated, according to CNN, Washington Post and others. Experts say mail-in ballots are very rarely linked to voter fraud.”
We added a label to two @realDonaldTrump Tweets about California’s vote-by-mail plans as part of our efforts to enforce our civic integrity policy. We believe those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process.
— Twitter Safety (@TwitterSafety) May 28, 2020
In tweets detailing the decision, the Twitter Safety account writes that in some instances they add labels which link to further context.
“We added a label to two @realDonaldTrump Tweets about California’s vote-by-mail plans as part of our efforts to enforce our civic integrity policy,” they explain. “We believe those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process.”
President Trump responded by accusing Twitter of interfering in the presidential election. “They are saying my statement on Mail-In Ballots, which will lead to massive corruption and fraud, is incorrect, based on fact-checking by Fake News CNN and the Amazon Washington Post,” he writes. “Twitter is completely stifling FREE SPEECH, and I, as President, will not allow it to happen!”
More tweets from Trump followed. The next day he tweeted to say that “Republicans feel that Social Media Platforms totally silence conservatives [sic] voices.
“We will strongly regulate, or close them down, before we can ever allow this to happen.”
He concludes by telling social media companies to “clean up your act, NOW!!!!”
Yet, a few hours later, he posted: “Twitter has now shown that everything we have been saying about them (and their other compatriots) is correct.” He adds that ‘big action’ is to follow.
Twitter has now shown that everything we have been saying about them (and their other compatriots) is correct. Big action to follow!
— Donald J. Trump (@realDonaldTrump) May 27, 2020
The promised ‘action’ came the next day, on 28 May, when President Trump signed an executive order on ‘preventing online censorship’.
“We cannot allow a limited number of online platforms to hand pick the speech that Americans may access and convey on the internet,” it reads. “This practice is fundamentally un-American and anti-democratic.
“When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power. They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.”
It goes on to add: “Twitter, Facebook, Instagram, and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see.”
Legally, the order focusses on Section 230 of the Communications Decency Act 1996. The piece of legislation gives protection from civil liability for ‘interactive computer service’ providers when restricting access, in good faith, to content considered “obscene, lewd, lascivious […] or otherwise objectionable”.
“Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike,” argues President Trump. “When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.”
The order continues: “It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.”
Twitter described the executive order as a ‘reactionary and politicized approach to a landmark law’. “#Section230 protects American innovation and freedom of expression, and it’s underpinned by democratic values,” tweets their Public Policy account. “Attempts to unilaterally erode it threaten the future of online speech and Internet freedoms.”
Facebook also issued a statement opposing the order. “Facebook is a platform for diverse views,” says spokesperson Liz Bourgeois. “We believe in protecting freedom of expression on our services, while protecting our community from harmful content including content designed to stop voters from exercising their right to vote. Those rules apply to everybody.
“Repealing or limiting Section 230 will have the opposite effect,” she continues. “It will restrict more speech online, not less. By exposing companies to potential liability for everything that billions of people around the world say, this would penalize companies that choose to allow controversial speech and encourage platforms to censor anything that might offend anyone.”
Elsewhere, ahead of the order being signed, YouTube CEO Susan Wojcicki said in an interview with David Rubenstein that they hadn’t yet seen the executive order, but that they “take all concerns very seriously”.
“We have worked extraordinarily hard to make sure that all of our policies and systems are built in a fair and neutral and consistent way,” says Wojcicki, before going on to talk about the ability to appeal decisions taken by the platform.
Wojcicki goes on to say that YouTube and social platforms have enabled new voices to join the conversation. “We’ve been really proud that across the spectrum we see a lot of new voices and a lot of new opinions,” she says.
However, the initial two notices wouldn’t be the last issued by Twitter on tweets posted by Trump.
After George Floyd, a black man, was killed by a police officer pressing down on his neck with his knee, Americans took to the streets to protest. On 29 May, Trump commented on the situation.
“I can’t stand back & watch this happen to a great American City, Minneapolis,” he tweets. “A total lack of leadership. Either the very weak Radical Left Mayor, Jacob Frey, get his act together and bring the City under control, or I will send in the National Guard & get the job done right.”
He followed his remarks up with a second tweet. “These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen,” he continues. “Just spoke to Governor Tim Walz and told him that the Military is with him all the way.
“Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!”
It was this second message which was flagged by Twitter, and another notice was issued. “This Tweet violated the Twitter Rules about glorifying violence. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain accessible,” it reads.
Yet what was initially a conflict involving just President Trump and Twitter soon concerned Facebook’s platforms as well. The above text was also shared on his personal Facebook and Instagram pages, but was not taken down by the platform.
An explanation from CEO Mark Zuckerberg followed: “I know many people are upset that we’ve left the President’s posts up, but our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies.”
He continues: “We looked very closely at the post that discussed the protests in Minnesota to evaluate whether it violated our policies. Although the post had a troubling historical reference, we decided to leave it up because the National Guard references meant we read it as a warning about state action, and we think people need to know if the government is planning to deploy force.”
Zuckerberg also explains that Facebook’s policy concerning “incitement of violence” allows for a “discussion around state use of force”, but adds that “today’s situation raises important questions about what potential limits of that discussion should be”.
In addition to commenting on President Trump’s initial message, the note says that a follow-up post on Facebook, shared by the politician later that same day, “explicitly discouraged violence” and as such “does not violate our policies and is important for people to see”.
“Unlike Twitter, we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician,” Zuckerberg goes on to add.
“People can agree or disagree on where we should draw the line, but I hope they understand our overall philosophy is that it is better to have this discussion out in the open, especially when the stakes are so high.”
It was a statement met with criticism from Facebook employees. Staff took to Twitter to voice their concerns, with some taking part in “virtual walkouts” against Zuckerberg’s approach to the issue.
On 1 June, two days after the CEO’s note was published, one software engineer at the company announced his resignation.
“For years, President Trump has enjoyed an exception to Facebook’s Community Standards,” writes Timothy Aveni. “Over and over he posts abhorrent, targeted messages that would get any other Facebook user suspended from the platform. He’s permitted to break the rules, since his political speech is ‘newsworthy’.”
He adds that he cannot continue to excuse the organisation’s behaviour. “Facebook is providing a platform that enables politicians to radicalize individuals and glorify violence, and we are watching the United States succumb to the same kind of social media-fueled division that has gotten people killed in the Philippines, Myanmar, and Sri Lanka,” he says. “I’m scared for my country and I’m done trying to justify this.”
Later in the week, Zuckerberg published another post, sharing a note sent to employees about his recent decision.
Alongside acknowledging that his stance left many of his employees “angry, disappointed and hurt”, the message goes on to detail seven areas the platform will explore going forward. Split into three categories, these include “ideas related to specific policies, ideas related to decision-making, and proactive initiatives to advance racial justice and voter engagement”.
The list starts with a review of policies “allowing discussion and threats of state use of force”, focussing on two specific situations.
“The first is around instances of excessive use of police or state force. Given the sensitive history in the US, this deserves special consideration,” Zuckerberg writes. “The second case is around when a country has ongoing civil unrest or violent conflicts.
“We already have precedents for imposing greater restrictions during emergencies and when countries are in ongoing states of conflict, so there may be additional policies or integrity measures to consider around discussion or threats of state use of force when a country is in this state.”
The CEO goes on to confirm a review of future options for dealing with violating content, “aside from the binary leave-it-up or take-it-down decisions”.
“I know many of you think we should have labeled the President’s posts in some way last week,” he says. “Our current policy is that if content is actually inciting violence, then the right mitigation is to take that content down — not let people continue seeing it behind a flag.”
He adds that the policy has no exceptions “for politicians or newsworthiness”, and that the company needs to proceed carefully in this area, saying that it “has a risk of leading us to editorialize on content we don’t like even if it doesn’t violate our policies”.
Other areas Zuckerberg mentions include establishing a ‘voter hub’, creating a “clearer and more transparent decision-making process”, launching a work stream “for building products to advance racial justice” and reviewing voter suppression policies.
“For example, as politicians debate what the vote-by-mail policies should be in different states, what should be the line between a legitimate debate about the voting policies and attempts to confuse or suppress individuals about how, when or where to vote?” asks Zuckerberg. “If a newspaper publishes articles claiming that going to polls will be dangerous given Covid, how should we determine whether that is health information or voter suppression?”
The Facebook boss also says the company will look at whether structural changes need to be made “to make sure the right voices are at the table”.
“Not only when decisions affecting a certain group are being made, but when other decisions that may set precedents are being made as well,” he adds. “I’m committed to elevating the representation of diversity, inclusion and human rights in our processes and management team discussions, and I will follow up soon with specific thoughts on how we can structurally improve this.”
The next day, President Trump criticised Twitter once more, after a video posted on the platform by his campaign team, Team Trump, was disabled on Wednesday due to a copyright complaint.
Sharing a news article on the issue, Trump tweets: “They are fighting hard for the Radical Left Democrats. A one sided battle. Illegal. Section 230!”
Twitter co-founder and CEO Jack Dorsey responded later, saying the President’s comments were “not true” and the takedown was “not illegal”.
“This was pulled because we got a DMCA [Digital Millennium Copyright Act] complaint from copyright holder,” he explains.
The video takedown is the latest development in the ongoing dispute, during which the US President has made repeated calls for Section 230 to be repealed.
For updates follow @TenEightyUK on Twitter or like TenEighty UK on Facebook.