Twitter said Thursday that starting next week it will label or remove misleading claims that try to undermine public confidence in elections.
The policy will apply to tweets that attempt to undermine people’s faith in the electoral process itself, such as false claims about election rigging or ballot tampering, or about the outcome of the vote, Twitter said.
The policy goes into effect Sept. 17, a few weeks before the Nov. 3 US presidential election. Many Americans are expected to vote by mail due to the COVID-19 pandemic, which is likely to delay election results. Social media companies have been working to strengthen their policies to prevent misinformation, but it’s not clear if their efforts will be enough.
Facebook said last week it will restrict new political ads in the week before the election and remove posts that convey misinformation about COVID-19 and voting. It will also attach links to official results to posts by candidates and campaigns that prematurely declare victory.
Twitter has had more aggressive policies than Facebook. It banned political ads altogether and in May began labeling President Donald Trump’s tweets with fact checks, earning his ire.
San Francisco-based Twitter said its policy of labeling, rather than removing, violating tweets from world leaders will still apply under its newest rules. This means that even if a candidate posts misleading claims about the election outcome, the post would likely stay up because Twitter deems it in the “public interest.” That said, the post’s visibility would be reduced and people would not be able to retweet it.
“We will not permit our service to be abused around civic processes, most importantly elections,” Twitter said in a blog post Thursday. “Any attempt to do so — both foreign and domestic — will be met with strict enforcement of our rules, which are applied equally and judiciously for everyone.”
Though the policy comes just weeks before the US election, it will apply globally; more than 80 percent of Twitter’s users are outside the US.