A screenshot showing Facebook's pop-up notification when users share outdated content, part of efforts to fight misinformation.
Photo: Facebook

Facebook announced it will warn users before they share content that is more than 90 days old, saying the effort will help fight misinformation on the platform.

The pop-up feature will allow users to cancel the post or continue to share it if they feel the information is still relevant. The move is similar to a new feature Twitter is testing, which asks users if they've read an article before retweeting a link they haven't clicked on.  

The Facebook warning will make it easier for people to identify content that’s timely and reliable, says John Hegeman, Facebook’s vice president of feed and stories, in a press release.

“Over the past several months, our internal research found that the timeliness of an article is an important piece of context that helps people decide what to read, trust and share,” Hegeman says.  

Discussions about misinformation often center on content that is outright false or intentionally misleading. But experts agree that posting old news articles, videos, and other content out of context is part of the problem.


“Something we regularly see around controversial moments in time, such as elections or demonstrations, is people repurposing historical content to make it look as if events from years ago on a different continent are actually happening here, today,” says Ben Nimmo, director of investigations at Graphika, a social-media monitoring company. “It's one of the simplest ways people have of spreading false information, because they don't need to create the content, they simply present it out of context.”

Outdated information can lead to negative consequences even when users have good intentions.

The coronavirus pandemic provides a clear example. The CDC and the World Health Organization now say that wearing a mask is critical to stopping the spread of the virus, but in the early days of the outbreak, both organizations gave the opposite advice. Sharing an article with outdated guidance from March could sow confusion about what public health experts currently recommend.

News outlets have recognized the problem for years, and some publishers have taken steps to address it on their own. The Guardian, for example, displays a prominent yellow label on older articles noting when a story was first published.

Facebook Needs to Do More, Experts Say

“Old content gets posted to Facebook all the time, and it can be part of a cycle of this sort of renewed outrage,” says Melissa Ryan, CEO of CARD Strategies, an organization that works to curb the threats posed by online toxicity and extremism. “Part of the way misinformation works is it generates an emotional response, good or bad, and when it resonates with people they often don't take the time to check if it's true or relevant.”

Facebook says it will test similar methods on users' posts in the near future. A notification screen will soon appear on mentions of the coronavirus pandemic, providing information about the sources of links and directing people to Facebook’s COVID-19 information center.

The company’s new notifications on users’ posts could have a positive effect. “Even if this only works point-one percent of the time, it’s going to have a significant impact because there are so many people on Facebook,” says Bill Fitzgerald, a privacy and misinformation researcher at Consumer Reports.

However, experts reached by Consumer Reports say the move won't address the broader problem of misinformation on the platform.

Facebook and other social media companies have policies in place that forbid users from posting hate speech, manipulated videos, and misleading information about issues such as health or voting.

But many experts agree that companies like Facebook, Twitter, and Google don’t dedicate the resources necessary to police posts that violate the rules.

In April, Consumer Reports submitted advertisements containing deliberate misinformation about the coronavirus to test Facebook's review process. Facebook approved the ads, though CR canceled them before they ran. When informed about the test, Facebook said it removes millions of misleading ads and is constantly working to improve its review system.

CR's Fitzgerald, for one, says Facebook needs to do more.

“These notifications Facebook is rolling out should be seen as a distraction from the fact that they're still not staffing appropriately for detecting misinformation and disinformation,” Fitzgerald says.

Facebook did not respond to CR’s request for comment.

A Growing Trend

Misinformation on social media was the subject of a recent hearing by the House Intelligence Committee. Representatives from Facebook, Google, and Twitter testified about their efforts to curb foreign interference in the American democratic process. Several Democratic representatives argued the companies aren’t going far enough.

Still, some of Facebook's efforts may be working. A recent study found that media literacy tips Facebook posted in 2017 to help users “spot false news” can help consumers separate trustworthy sources from misinformation. However, Facebook hasn’t released data on how many people saw those posts or what effect they had on users’ behavior.

Whether or not interventions like these are effective, they’re part of a growing trend: social media companies are experimenting with users' posts in ways they say are meant to enforce their policies and improve the quality of online conversations.

Facebook, Twitter, YouTube, and other social media platforms now add labels to users' posts about COVID-19, with links to authoritative sources of information. In recent months, both Facebook and Twitter have also fact-checked, hidden, or added warning labels to posts by President Trump in cases where the companies said the posts violated their policies.

Twitter's experiment, which began in early June, prompts people to read an article before they retweet a link they haven't opened. Last year, Instagram introduced a feature that asks users to reconsider before sharing comments that might be offensive.

“I certainly don't think outdated posts are the biggest issue that Facebook has to tackle by a long shot. But I think it could be helpful,” Ryan says. “The tech firms have to recenter their policies and practices on the people who are being harmed.”