An illustration of a phone with misinformation warning labels and notifications.

Between Sept. 30 and Oct. 8, an advertisement on Facebook promoted false stories about the election, including one about “hundreds of piles” of Republican ballots being dumped as part of “a massive underground mail-in ballot fraud coordinated by Democrats.”

Facebook displayed the advertisement 30,000 to 40,000 times, mostly to people over 65, at a total cost of less than $200. Then, after being notified about the falsehoods by the Election Integrity Partnership (EIP), a nonpartisan coalition of researchers, Facebook removed the ad.

But the untrue stories remained on the platform. 

Like many ads on Facebook, this one appeared in a regular post that a person managing a Facebook page paid to promote. The page belongs to a shadowy operation called Plain Truth Now that is run out of Nigeria, according to the EIP. It seems to be coordinating its messages with a related website to spread false claims about the election and social issues.

Facebook stopped promoting the post as an ad but didn’t take it down. The same post appears on the Plain Truth Now page and on other pages where it's been reposted, including one Facebook Group with tens of thousands of members.

The incident shows the surprising ways these rules can play out as social media companies try to slow the spread of election-related falsehoods and misinformation. Blocking misleading ads is just one of several tactics Facebook, YouTube, Twitter, and other companies are using. If you use these sites regularly, you may find the platforms making it harder to share certain posts, adding warning labels to many others, and employing an evolving set of other tactics.


“There’s no silver bullet solution to fighting misinformation, which is why our strategy attacks it from all angles,” says Andrea Vallone, a Facebook spokesperson.

The social platforms CR contacted say their efforts will continue past Nov. 3, as vote-counting and potential court challenges take place.

But these companies are running into criticism from both political parties and nonpartisan organizations. “To be fair, moderating content at scale is a really challenging job,” says Spandana Singh, a policy analyst with New America's Open Technology Institute. “I don't even think they necessarily know the exact limits of where the lines are, or how they're drawing them.”

Social media platforms are using several major strategies. If you're on any of the platforms these days, you're likely to see them in action.

Links and Labels on False Claims

One tactic you'll probably encounter is the labels and links social media companies are adding to some posts, including those they identify as misleading.

Facebook says it added such labels to 150 million pieces of content between March and September of this year. Twitter and YouTube use similar labels.

A screenshot of Facebook's voting information center.
Facebook (shown here), along with Twitter, YouTube, and other platforms, is promoting voting-information hubs and labeling election-related misinformation.

Facebook, Snapchat, TikTok, Twitter, and YouTube are maintaining voting-information hubs on their platforms. And the companies are placing links to information on or near a variety of posts that discuss the election. The links appear on many posts, not just those that may contain misleading information. 

Some people who study misinformation and content moderation say that labeling dubious—but not harmful—content and providing links to better information can be better than deleting a post. “Taking down misinformation doesn't necessarily mean people will be directed to true information,” says Evelyn Douek, a lecturer at Harvard Law School who studies social media. “I think we need to get out of the ‘take down, leave up’ false dichotomy. There are so many more things that platforms can do." 

On the other hand, critics of the technique say there’s some evidence that adding a label may draw more attention to a misleading post that might have faded into the background otherwise.

The 'Virality Circuit Breaker'

Social media companies have also been making it harder for people to find and share some kinds of posts, without labeling them. 

For instance, in July Twitter started testing an alert box that asks users if they’d like to stop and read an article before they retweet a link to it. Twitter says early results show the feature encourages people to open articles 40 percent more often. Starting Oct. 20 and continuing through the election, Twitter says it's adding “some extra friction” to retweets by prompting people to add a comment before they retweet a post. 

The techniques are sometimes called “virality circuit breakers,” and the idea is to slow problematic posts before they go viral.

In a similar vein, YouTube sometimes keeps certain problematic videos out of users' recommendation lists. Such videos might promote phony miracle cures for a serious illness or make blatantly false claims about historic events such as 9/11. According to the company, the tactic is effective.

A screenshot of Twitter's new pop-up prompting users to read an article before they retweet it.
A new feature on Twitter prompts users to read articles before they retweet them. It's meant to improve the quality of discussions on the platform.
Photo: Twitter

But it's hard to know just how well these techniques work, according to Kathy Qian, co-founder of Code for Democracy, a nonprofit organization that uses public data to study misinformation and dark money in politics. "There isn't really a systematic way for someone who isn't a partner with these companies to get access to the right data to make any kind of judgment about how effective these efforts are," Qian says. "You basically have to take the platforms at their word."

People can also disagree sharply over what kinds of posts should be suppressed. Making those calls can sometimes be more difficult than putting the techniques into action.

Just look at what happened on Oct. 14, when The New York Post used social media to post links to an article claiming that documents allegedly taken from a laptop by a computer repair shop implicated Joe Biden in a corruption scandal.

Twitter displayed a warning that said the “link may be unsafe” if users tried to retweet a post that linked to the article, and stopped users from tweeting fresh links to it. The company said it was enforcing policies against tweets based on hacked or stolen material, and against images of personal information such as email addresses. Facebook temporarily made the article less likely to appear in users' feeds while, it said, it checked the article's accuracy.

But many people accused both companies of suppressing a legitimate news story. This week Twitter's CEO, Jack Dorsey, was grilled on the issue at a Senate hearing.

Banning Political Ads

You'll probably see fewer political ads on social media right now, especially ones that refer to breaking news.

Facebook barred advertisers from purchasing new political ads starting a week before the election, and says the ban will continue for an unspecified period afterward.

Google says it will place a moratorium on ads on its platforms related to the election while votes are being counted. Political ads are allowed on Snapchat, but a spokesperson told CR that they must go through a human review process before posting and that they must be factual. 

In addition, all political ads are banned on Twitter, TikTok, and Pinterest.

Experts on social media and misinformation say that ads are just a small part of how conspiracy theories, deliberately misleading information, and lies about the voting process spread on social media platforms. During the 2016 election, Facebook says, approximately 126 million people—about 1 in 3 Americans—may have been exposed to about 80,000 pieces of content created by Russian state actors and their proxies and spread by regular accounts, not ads.

But that doesn't mean these rules won't help, says Harvard’s Douek. “Advertising policies are the lowest hanging fruit," she says. "I'll take it."

Blocking QAnon and Similar Content

The most extreme step social media companies take is deleting content or canceling entire user accounts.

The clearest example is how Facebook, Twitter, YouTube, and other platforms have been trying to tamp down QAnon messages. The wide-ranging conspiracy theory alleges that President Donald Trump is waging a secret war against prominent Democrats and deep-state actors who worship Satan and run an international child sex-trafficking ring.

Reddit banned a number of QAnon-related groups in 2018 and expanded its takedowns this summer in response to harassment and incitements to violence among Q followers. Other platforms have recently started deleting accounts linked to QAnon, as well as groups devoted to the Boogaloo, a far-right, antigovernment movement. And Facebook, reversing a longstanding policy, said it would start removing Holocaust-denial content.

Some critics of these companies say they should have acted more forcefully earlier, before QAnon, in particular, exploded in popularity on the platforms. 

Questions about where social media companies should draw the lines around free speech are complicated, says Brendan Nyhan, a political science professor at Dartmouth College who studies media and misinformation.

But, he says, they should explain their decisions better. "There needs to be more accountability and more discretion about their decisions, and the effects those decisions have," Nyhan says.