Fight Against Coronavirus Misinformation Shows What Big Tech Can Do When It Really Tries
But technology companies' unprecedented efforts reveal the limitations of policing their vast platforms
Big tech companies are being confronted with the swift spread of online misinformation about the coronavirus—from dangerous health advice to racist conspiracies to scammy products—and the industry has launched what looks like all-out war to fight it. It’s a high-stakes test case for defense operations at companies including Amazon, Facebook, Google, and Twitter, and experts say their efforts appear more aggressive than any previous crackdown on false and misleading information.
The push shows how much the platforms can do when they pull out all the stops, according to scholars who study the subject—going far beyond their efforts leading up to the 2016 election, when political misinformation became a prominent issue, and in the years since. But it also reveals some inherent limitations to fighting bad information, even with Big Tech’s vast resources.
“They’ve definitely been more aggressive in responding to the coronavirus crisis than they have been in going after political misinformation,” says Paul Barrett, a New York University professor who studies online misinformation.
The companies are up against a buffet of misleading and potentially dangerous info, such as hoaxes alleging that the Chinese government or the pharmaceutical industry cooked up the coronavirus in a lab, false claims that it’s a stolen bioweapon, or a widely promoted “cure” that the FDA has likened to drinking bleach.
The response may be more forceful than ever before, but these aren’t some newly developed break-in-case-of-emergency superweapons, experts say. “All of a sudden, they’re doing some things that are actually quite effective. And they’re not magical, either—they didn’t require years and years of research,” says Jevin West, director of the Center for an Informed Public at the University of Washington.
The Limits of Misinformation Defenses
The companies’ efforts, while beyond what they’ve done in years past, are nowhere near shutting down the online coronavirus “infodemic.” Misinformation still thrives on these sites. In one example, Consumer Reports’ Ryan Felton reported on fraudulent virus-related products on Amazon, finding that some remained available even after the site’s purge; others have reported that price-gouging remains rampant on the platform, too, and that pernicious lies persist on Facebook and Twitter.
“I don’t think we’ve ever seen the social media world come together on an issue like this—and yet still it’s falling short,” says UW’s West.
That’s in part because the platforms’ misinformation defenses have never been tested by a crisis this big and fast-moving. Election-related skullduggery tends to target one country or region at a time; other health- and science-related misinformation operates at a constant hum rather than inundating the internet all at once in the span of a few months. “There’s always been health misinformation on Facebook,” says Renee DiResta, research manager at the Stanford Internet Observatory. “But now the entire world is posting about the same thing.”
Even in an all-hands moment like this one, some efforts are controversial. For instance, Barrett says he supports removing “provably false content”—especially when health and safety are at stake. But takedowns can also backfire, DiResta says. “That then creates the perception that the information is being censored, and there’s a little bit of concern that that creates or feeds a conspiracy that the platform is trying to prevent you from knowing the truth.”
In interviews with the press, Facebook CEO Mark Zuckerberg has promised that better artificial intelligence tools are under development that could match the sheer scale of the misinformation problem. But an automated solution that works across languages and at scale is unlikely to arrive anytime soon, experts say. For now, Facebook uses AI to surface claims that need a closer look and passes them to fact-checkers, who are often overwhelmed. “This is not something AI does well,” West says. “There’s too much context and too many ways to subvert and adapt to the system.”
A Google spokesman contacted by CR pointed to Google-owned YouTube’s work to stanch misinformation as a sign of the company’s progress in this area. “In 2019 alone, we launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation, including climate change misinformation and other types of conspiracy videos,” said Farshad Shadloo. “Thanks to this change, watch time this type of content gets from nonsubscribed recommendations has dropped by over 70 percent in the U.S.”
Facebook and Twitter did not respond to CR’s requests for comment on the issue.
The companies haven’t exhausted all their options. But there’s likely a ceiling to their ability to keep bad information away from their users, especially during a sudden global crisis.
“They could do more—but they can’t do everything,” says Justin Brookman, CR’s advocacy director for consumer privacy and technology. “They can’t solve for human nature; they can’t police that racist or confusing or crazy email forward from Grandma.”
Individuals can also use the “SIFT technique” to investigate questionable content. The acronym stands for Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context. Developed by Mike Caulfield, a digital information literacy expert at Washington State University, the method can help readers separate reliable information from sketchy posts online.