Facebook CEO Mark Zuckerberg answers questions about hate groups' use of the social network during testimony in October 2019.

An independent audit of Facebook’s track record on civil rights finds the company still allows misinformation and hate speech on the platform to flourish, years after Facebook committed to correct the issue and do more to protect its users and the public.

The authors say they worry that the platform could be used to suppress voting in the upcoming presidential election, despite official Facebook policies to prevent it.

“Many in the civil rights community have become disheartened, frustrated and angry after years of engagement where they implored the company to do more to advance equality and fight discrimination, while also safeguarding free expression,” said Laura W. Murphy, a noted expert on civil rights who led the official audit, in a sprawling, 89-page report released Wednesday (PDF).

However, the report said the company has done a good job in some areas, for instance in blocking misinformation apparently aimed at discouraging participation in the U.S. Census this spring. The audit was suggested by civil rights leaders and several members of Congress, and conducted with Facebook's cooperation.

The audit arrives in the midst of a national debate over how social media companies should police the content that users post online. More than 1,000 advertisers, including major corporations such as Coca-Cola, Ford, and Unilever, have pulled advertisements from the company in a boycott organized by #StopHateForProfit, a coalition spearheaded by civil rights organizations. (Consumer Reports has also paused its purchase of ads on Facebook and Instagram.)


The audit’s release “couldn’t come at a more important time,” said Sheryl Sandberg, Facebook’s chief operating officer, in a statement. “As hard as it has been to have our shortcomings exposed by experts, it has undoubtedly been a really important process for our company.”

On Tuesday, Facebook executives met with organizers of the #StopHateForProfit campaign, including representatives of the NAACP, the Anti-Defamation League, Color of Change, and Free Press, but civil rights advocates reportedly left unconvinced that Facebook would do enough to combat hate speech and harassment on the platform.

“I don't want to say there has been zero progress, but we aren’t seeing a commitment to excellence from Mark Zuckerberg on removing hate and disinformation from the site,” says Jessica González, co-CEO of Free Press and one of the organizers of #StopHateForProfit who met the executives. “There hasn’t been enough meaningful action on the issues that matter to communities that are often targeted on Facebook.”

A 'Seesaw of Progress and Setbacks'

The audit says Facebook has taken some positive steps in the two years since it was commissioned, but that overall the company's efforts come up short.

Among other problems, Facebook doesn’t do enough to combat widespread anti-Muslim hate speech, fails to effectively prevent the promotion of white nationalism, and allows its platform to continue “driving people toward self-reinforcing echo chambers of extremism,” the audit says. The report points out that Facebook lets politicians and other influential figures violate content moderation rules that are strictly enforced when it comes to many other users.

The audit called out instances where Facebook left up posts that could lead to voter suppression. In a series of posts in May 2020, President Trump called official state-issued mail-in ballots or ballot applications “illegal” and gave false information about how to obtain a ballot. For instance, he posted, falsely, that "California is sending Ballots to millions of people, anyone living in the state, no matter who they are or how they got there."

The state was actually sending ballots only to registered voters.

Among other problems, the report says, Californians reading the posts could have falsely concluded they didn't need to register to get a ballot, and therefore could miss their opportunity to vote. 

“We are concerned that politicians, and any other user for that matter, will capitalize on the policy gaps made apparent by the president’s posts and target particular communities to suppress the votes of groups based on their race or other characteristics,” the audit says.

At the same time, the report also found progress, including Facebook's efforts to comply with a 2019 civil rights settlement in which the company reworked its advertising platform to prevent discrimination in ads for housing, credit, and employment.

The Free Speech Debate

For years, Facebook has argued that the less the company moderates user content, the better it is for society. While hate speech is officially banned on the platform, “We err on the side of free expression because, ultimately, the best way to counter hurtful, divisive, offensive speech, is more speech,” Nick Clegg, Facebook’s vice president of global affairs and communications, wrote on July 1.

Some groups support Facebook's restrained approach to content moderation.

“Companies like Facebook have let people move past the traditional gatekeepers that historically shut out people who have minority views,” says Neil Chilson, a senior research fellow at Stand Together, a libertarian think tank. “Social media companies have a role in making sure civil conversations happen on their platforms. But if we end up with a world where everybody is trying to lobby Facebook to quiet speech rather than spending the time to engage our opponents, I don't think it's good for democracy or for civil rights.”

But others argue that Facebook's practices end up suppressing speech.

“Elevating free expression is a good thing, but it should apply to everyone,” Murphy writes in the audit. While Facebook allows posts by prominent politicians to stay up when they appear to violate content policies, Black Lives Matter activists, in particular, have complained that their posts are frequently taken down if they discuss racism. 

At times, the audit found, anti-racism activists and other users were subject to coordinated attacks in which large numbers of their posts were reported for violating Facebook's rules, resulting in posts being taken down or accounts suspended. 

In addition, activists and members of marginalized groups say they are often harassed on the platform. “Speech is not free for members of historically oppressed groups when they have to self-censor in order to avoid death threats, hate, harassment, and defamation,” Free Press' González says.

This debate isn’t happening in a vacuum. Nearly every social media company has taken unprecedented steps to moderate content over the past few months, with players such as Twitter and YouTube labeling content with false information about voting and healthcare, and even banning users for promoting hate speech and white nationalism.

Facebook recently banned hate groups associated with the "boogaloo" movement after some members allegedly used the platform to plan the murder of security officers outside a federal courthouse. However, the company has taken a more hands-off approach to offensive content than most of its competitors, according to a number of experts.

“The world is at a tipping point, and I think we are seeing companies take these issues more seriously,” González says. “Facebook has the potential to do great harm, and right now, it is doing great harm. It also has the potential to shift course and to ensure that it is on the right side of history.”