
Facebook’s plan to prevent election misinformation: Allowing it, mostly

Mark Zuckerberg speaking at Facebook’s F8 developer summit in 2018.

Although it may feel like the campaigns have been going on forever and will continue forever, linear time inexorably marches on and we are, at last, exactly two months away from the 2020 US presidential election. The logistics alone are more complicated than ever this year, thanks to the COVID-19 pandemic, and voters around the nation are likely to encounter complications of one kind or another.

Into this milieu we now add Facebook. The company has a bad track record of being used as a tool of misinformation and manipulation when it comes to elections. In a Facebook post today, company CEO Mark Zuckerberg outlined a whole bunch of new steps the company will be taking to “protect our democracy” this year. Some of those measures, alas, feel like shutting the barn door when the horse left so long ago you forgot you ever even had one.

“This election is not going to be business as usual,” Zuckerberg began, accurately. Misinformation about voting, the election, and both candidates for the presidency is already rampant on Facebook and every other media platform, and it’s being spread by actors both foreign and domestic. So what is Facebook going to do about it? “Helping people register and vote, clearing up confusion about how this election will work, and taking steps to reduce the chances of violence and unrest,” Zuckerberg promised.

Voting (mis)information

Facebook has for several weeks been plugging its voter information center on both Facebook and Instagram. That landing page takes a user’s location—approximate or specific, if enabled—to display voter-registration information, vote-by-mail eligibility, voter ID requirements, and other state and local information. It also contains a voting-related fact-checking section, covering topics such as “Both voting in person and voting by mail have a long history of trustworthiness in the US.”
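Purely to illustrate the mechanics, here is a minimal sketch of how a location-keyed lookup like this could work. Everything in it is a hypothetical stand-in, not Facebook’s actual implementation: the table, the sample values, and the function name are invented, and the location is assumed to have already been resolved to a state code.

# Illustrative sketch only. VOTING_RULES, its sample values, and
# info_for_location() are invented for this example.

VOTING_RULES = {
    # state code -> a few of the state-specific facts the page surfaces
    "PA": {"registration_deadline": "2020-10-19",   # sample value
           "vote_by_mail": "no excuse required",
           "voter_id": "first-time voters only"},
    "TX": {"registration_deadline": "2020-10-05",   # sample value
           "vote_by_mail": "excuse required",
           "voter_id": "photo ID required"},
}

def info_for_location(state_code):
    """Return state-level voting rules for a location. An approximate
    location suffices, since these rules vary by state rather than by
    street address."""
    return VOTING_RULES.get(state_code, {"error": "state not in sample table"})

print(info_for_location("PA"))

The point of the sketch is that only coarse location is needed: the rules the page surfaces are set at the state and local level, which is why the feature works with approximate location data.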

That will continue, Zuckerberg said, and the company will not only continue to enforce its policies against voter suppression, but expand them:

We already remove explicit misrepresentations about how or when to vote that could cause someone to lose their opportunity to vote—for example, saying things like “you can send in your mail ballot up to 3 days after election day”, which is obviously not true. (In most states, mail-in ballots have to be *received* by election day, not just mailed, in order to be counted.) We’re now expanding this policy to include implicit misrepresentations about voting too, like “I hear anybody with a driver’s license gets a ballot this year”, because it might mislead you about what you need to do to get a ballot, even if that wouldn’t necessarily invalidate your vote by itself.

Facebook is expanding its fact-checking in general to add labels to content that “seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods, for example, by claiming that lawful methods of voting will lead to fraud.”

Unfortunately, a primary source of exactly that kind of misinformation is the sitting US president, Donald Trump. Facebook has been slow to fact-check claims made by Trump or his campaign, but it does seem to be making good on appending labels to posts explicitly relating to voter misinformation. For example, the social media giant labeled a post Trump shared today around 10:45am EDT.

The labels, however, require a user to click through to find out what, exactly, the facts might be.

The homestretch

Zuckerberg also announced that Facebook will not accept any new “political or issue ads” during the final week of the campaign—so, beginning around October 27.

“In the final days of an election there may not be enough time to contest new claims,” he wrote, explaining the suspension. However: “Advertisers will be able to continue running ads they started running before the final week and adjust the targeting for those ads, but those ads will already be published transparently in our Ads Library so anyone, including fact-checkers and journalists, can scrutinize them.”
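The rule as described reduces to a pair of date checks: new political or issue ads are rejected once the blackout begins, while ads already running may continue and adjust their targeting. The sketch below is only an illustration of that logic; the cutoff dates come from the article, but the function names and the surrounding ad system are assumptions, not Facebook’s code.

# Hedged sketch of the final-week ad blackout as described in the
# article. Function names and structure are invented for illustration.
from datetime import date

BLACKOUT_START = date(2020, 10, 27)   # one week before November 3
ELECTION_DAY = date(2020, 11, 3)

def may_submit_new_ad(submission_date):
    """New political/issue ads are blocked once the blackout begins."""
    return submission_date < BLACKOUT_START

def may_edit_targeting(ad_start_date, today):
    """Ads that began running before the blackout may continue and
    adjust targeting through Election Day."""
    return ad_start_date < BLACKOUT_START and today <= ELECTION_DAY

print(may_submit_new_ad(date(2020, 10, 28)))                       # False
print(may_edit_targeting(date(2020, 10, 20), date(2020, 10, 30)))  # True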

Unfortunately, the ads that political advertisers place before the deadline are allowed to be full of lies, and Facebook will not fact-check them. That’s been the company’s long-standing policy, and it evidently has no intention of reversing course for the rest of this campaign season. Fact-checkers and journalists will be scrutinizing the ads, as quickly as they can, but they may not have the same reach as a paid-for ad campaign that could be actively spreading falsehoods.

The final week before November 3 will no doubt be a valuable homestretch for campaigns—but it may prove less important than in previous years, as voters nationwide are expected to take advantage of early voting, mail-in voting, and absentee voting (by mail or in person) to avoid crowds amid the continuing pandemic.

The Trump campaign vigorously objected to Facebook’s decision. “In the last seven days of the most important election in our history, President Trump will be banned from defending himself on the largest platform in America,” the campaign’s deputy national press secretary, Samantha Zager, said in a statement, adding that the president will be “silenced by the Silicon Valley Mafia.”

This is, of course, also false; Facebook’s prohibition only applies to paid advertising, not posts created or shared by any individual, page, or campaign. Neither will it prevent Trump (or his rival, Democratic candidate Joe Biden) from issuing new ads on any other broadcast, print, or online platform or stop him from holding press conferences, delivering interviews, hiring a skywriter, or getting his message out in any and every other way.

The big day—and the aftermath

Facebook’s approach to content has, by and large, tended to be reactive rather than proactive. The company is, however, proactively looking at ways to defuse the high tensions all but certain to surround election night itself.

Polls begin closing on the East Coast at 7pm ET and continue to close in waves over the following several hours. Typically, millions of Americans wait with eyes glued to their favorite cable or broadcast network for returns to come in and each state to be called. As we learned in 2000, however, results come in at their own pace—not when viewers desperate for news want them to.

While the outcome in a state such as California or Alabama is likely to be a foregone conclusion, we are just as likely to have to wait hours or perhaps even days to learn the final outcomes in key swing states, thanks to the projected increase in mail-in and absentee voting. Facebook is now working to teach its more than 200 million US users that such a wait is not a sign of fraud but, rather, a sign the system is working properly.

“We’ll use the Voting Information Center to prepare people for the possibility that it may take a while to get official results,” Zuckerberg said. “This information will help people understand that there is nothing illegitimate about not having a result on election night.”

Facebook will also work on and after Election Day to fact-check any claim of victory that is not backed up by its partners, Reuters and the National Election Pool. The election module will show results as they come in, and Facebook will “notify people proactively” as those results become available. Most importantly, Zuckerberg added, “If any candidate or campaign tries to declare victory before the results are in, we’ll add a label to their post educating that official results are not yet in and directing people to the official results.”
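In outline, the described behavior is a simple gate on an official-results feed: a post claiming victory in a race gets a label unless the partners have called that race. The sketch below is illustrative only; the feed format and function names are invented, and the real system presumably involves classifiers and human review rather than a lookup like this.

# Hedged sketch of the premature-victory labeling rule described above.
# The results feed, label text handling, and function names are
# assumptions made for this example.

def official_result(race, results_feed):
    """results_feed maps race -> winner, or None if not yet called
    by Facebook's partners (Reuters and the National Election Pool)."""
    return results_feed.get(race)

def label_for_post(claims_victory_in, results_feed):
    """Return an informational label if the post declares victory in a
    race with no official result yet; otherwise return None."""
    if claims_victory_in and official_result(claims_victory_in, results_feed) is None:
        return ("Official results are not yet in for this race. "
                "See the Voting Information Center for official results.")
    return None

feed = {"US President": None}  # partners have not yet called the race
print(label_for_post("US President", feed))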

The platform is also trying to strengthen enforcement against “militias, conspiracy networks like QAnon, and other groups that could be used to organize violence or civil unrest in the period after the elections,” Zuckerberg said.

The FBI determined extremist movements, including QAnon, to be threats more than a year ago, but the content was allowed to proliferate on social networks until very recently. Twitter banned QAnon activity in July, calling it “coordinated harmful activity” that spread offline into real-world injury. The conspiracy movement has been linked to several violent episodes, including a kidnapping and a murder.

Facebook, however, has a record of failing to act against violent threats until it’s too late. The company cracked down on QAnon about two weeks ago, removing thousands of Facebook and Instagram groups, pages, and accounts.

As The Verge’s Casey Newton pointed out, that’s probably already too little, too late.

“On its face, [QAnon] seems no less a joke than the idea that Tide Pods secretly taste delicious,” Newton wrote. “But as with the laundry detergent, thousands of Americans have now been poisoned by QAnon, and the consequences seem likely to be far more dire, and long-lasting.”

Zuckerberg promises future change as Facebook advertiser boycott grows

Facebook CEO Mark Zuckerberg speaking about Facebook News in New York, Oct. 25, 2019.

Facebook CEO Mark Zuckerberg said the company will change the way it handles rule-breaking speech from high-profile politicians in the future amid an advertising boycott that has drawn participation from large firms across several sectors.

Several nonprofits, including the Anti-Defamation League, the NAACP, and Color of Change, launched the Stop Hate for Profit campaign about two weeks ago. The boycott accuses Facebook of a “long history of allowing racist, violent, and verifiably false content to run rampant on its platform” and asks advertisers to “show they will not support a company that puts profit over safety.”

The boycott drew early support from outdoor apparel retailers Patagonia, The North Face, and REI. By Friday, the movement seemed to hit critical mass as food and personal care behemoth Unilever said it would suspend US ad campaigns on both Facebook and Twitter for the rest of the year. Telecom giant Verizon also said Friday it would suspend Facebook advertising for the time being.

The additions to the pile have come swiftly in the days since, especially from the food and beverage sector. Coca-Cola, Hershey, PepsiCo, Starbucks, and Denny’s have agreed to pause advertising either on Facebook properties or all social media outright for at least 30 days, as have Levi’s, Eileen Fisher, Honda, and liquor producer Diageo. All told, more than 160 entities have signed on to the boycott, and the organizers have now started pressuring advertisers to suspend their ad spending outside of the US as well.

Are the times a-changin’?

Zuckerberg on Friday said that his company will make a small shift in the way it handles rule-breaking posts from politicians.

“A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm,” Zuckerberg said in a Facebook Live video and accompanying post, repeating his usual argument that everyone should be able to read whatever a politician chooses to say on the platform.

But it seems that extreme leeway given to newsworthy figures has finally found its limit, and Facebook will start letting people know when content breaks the rules:

We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society—but we’ll add a prompt to tell people that the content they’re sharing may violate our policies.

Zuckerberg did not give a timeline for when this feature might be added to the platform.

Civil rights advocacy groups have been pressuring Facebook to change its policies and actions for several years, but the matter seems finally to have come to a head during the widespread protest movement sweeping the nation since the footage of Minnesota man George Floyd’s death at the hands of police went public in late May.

As protests spread, Twitter began labeling tweets from President Donald Trump and the White House that called for violence against protesters, in violation of the platform’s rules. Those tweets remained visible under a newsworthiness exception, with a warning on them reading, “This Tweet violated the Twitter Rules about glorifying violence. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain accessible.”

The president made an identical post on Facebook, but that platform refused to take any action, leading to sustained public outcry not only from civil rights advocates but also from company employees.