Politics

Judge: Trump Admin’s TikTok ban would cause “irreparable harm” to creators

Visitors visit the booth of Douyin (TikTok) at the 2019 smart expo in Hangzhou, China, Oct. 18, 2019.

A federal judge in Pennsylvania has blocked a Trump administration order that would have banned TikTok from operating inside the United States as of November 12, finding that content creators who use the short-form video platform to make a living would suffer “irreparable harm” if the ban were to go through.

The “significant and unrecoverable economic loss caused by the shutdown of the TikTok platform” was grounds for granting an injunction, Judge Wendy Beetlestone of the US District Court for the Eastern District of Pennsylvania wrote in a ruling (PDF) today.

President Donald Trump in August issued an executive order declaring TikTok (as well as another China-based app, WeChat) a threat to national security. That order gave the Department of Commerce 45 days to put a list of prohibited transactions into place. Commerce did so, prohibiting new TikTok downloads after September 20 and banning nearly every other TikTok feature after November 12.

The case is not the same one in which TikTok received a reprieve from a federal judge over the September deadline. In that case, TikTok itself was suing the federal government. Judge Carl Nichols of the US District Court for DC issued an injunction prohibiting the September 27 ban from going into effect, finding that the Trump administration probably overstepped its legal authority in enacting the ban.

In this case, three TikTok content creators—Douglas Marland, Cosette Rinab, and Alec Chambers—filed suit. Between them, the three have more than 6.5 million followers, and they argued they would lose access to “tens of thousands of potential viewers and creators every month” if the ban went through.

Beetlestone rejected the same plaintiffs’ request for a preliminary injunction in September, finding at the time that such a ban would “undoubtedly [pose] an inconvenience” but that the plaintiffs had not proven they would suffer “immediate, irreparable harm,” since existing TikTok users would still be able to access the platform. This time around, however, Beetlestone found that a ban on using the platform in any meaningful way would indeed be harmful.

TikTok, meanwhile, is still in talks with Oracle over a not-actually-an-acquisition deal that would theoretically put control of the company into US hands and alleviate the White House’s stated national security concerns.

Senate hauls Zuckerberg, Dorsey into hearing to yell at them about tweets

Twitter CEO Jack Dorsey (and his COVID beard?) testifying remotely before the Senate Commerce, Science, and Transportation Committee on October 28, 2020.

The Senate Commerce Committee met Wednesday for a hearing meant to probe some of the seemingly most intractable tech questions of our time: Is the liability shield granted to tech firms under Section 230 of the Communications Decency Act helpful or harmful, and does it need amending?

Section 230 is a little slice of law with enormously broad implications for the entire Internet and all the communication we do online. At a basic level, it means that if you use an Internet service such as Facebook or YouTube to say something obscene or unlawful, then you, not the Internet service, are the one responsible for having said it. The Internet service, meanwhile, has legal immunity for whatever you said. The law also allows space for Internet services to moderate user content however they wish—heavily, lightly, or not at all.

Since Section 230 became law in 1996, the Internet has scaled up from something that perhaps 15 percent of US households could access to something that almost every teenager and adult has in their pocket. That scale and ubiquity have changed our media and communications landscape, and Democrats and Republicans alike have questioned what Section 230 should look like going forward. What we do with the law—and where we go from here—is a matter of major import not just for big social media firms such as Facebook, Google, and Twitter but for the future of every other platform from Reddit to Ars to your favorite cooking blog—and every nascent site, app, and platform yet to come.

Unfortunately, instead of dealing substantively with any of those questions, Wednesday’s hearing began as an act of political theater divorced from reality, and it got worse from there.

Following the script

The hearing featured testimony from three of tech’s biggest names: Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai, who all showed up (remotely) to the hearing after the committee issued them subpoenas earlier this month. To a man, their opening remarks (Dorsey PDF, Pichai PDF, Zuckerberg PDF) went exactly as you would expect. Each thanked the committee for holding the hearing, talked about the Internet (and their own platforms) as a force for good, and explained why Section 230 helped launch their business.

After Dorsey, Pichai, and Zuckerberg kicked off the hearing by following the script we all know so well by now, so, too, did the politicians who asked them to come.

As expected, Republican members of the committee, beginning but not ending with chairman Roger Wicker (R-Miss.), largely spent their time yelling at the three CEOs for “censoring” conservative content. According to a New York Times analysis, 85 percent of Republicans’ questions to the witnesses focused on the platforms’ alleged anti-conservative bias. This supposed suppression of conservative viewpoints has been a particularly potent rallying call among US right-wing politicians and personalities for more than a year, and it is the driving force behind Republican calls to amend or abolish Section 230.

Reality, however, does not generally bear this out. On Twitter, President Donald Trump’s account is among the 10 most-followed, with approximately 87 million followers. On Facebook, meanwhile, most or all of the 10 most heavily engaged-with posts on any given day come from conservative commentators or right-leaning websites. Both Facebook’s own commissioned audit—led by a former Republican US senator—and studies by third-party groups have found no evidence that conservative voices are suppressed on the platform.

“Censorship”

Sen. Mike Lee (R-Utah) had a pointed line of questioning that laid bare the conservative point of view by conflating any level of content moderation with “censorship.”

“You take censorship-related actions against the President, members of his administration,” and a slew of conservative media outlets and groups, Lee said. “In fact I think the trend is clear, that you almost always censor—and when I use the word ‘censor’ here, I mean block content, fact-check, or label content, or demonetize websites—of conservative, Republican, or pro-life individuals, groups, or companies.”

Appending a fact-check or “read-more” type of label to existing content, however, is not censorship—and a private company is fully permitted to do so under all current law.

Democrats, too, followed a predictable script. Theirs similarly had nothing to do with Section 230, instead focusing on the actions of their Republican colleagues. Sen. Brian Schatz (D-Hawaii) gave the most succinct summary of the Democrats’ perspective when he refused to use his allotted time for questions.

“We have to call this hearing what it is: It’s a sham,” Schatz said. “This is nonsense, and it’s not going to work this time.”

Schatz’s sentiment was echoed by several others, including Sens. Amy Klobuchar (D-Minn.), Tammy Duckworth (D-Ill.), and Richard Blumenthal (D-Conn.), all of whom called into question their colleagues’ motives for calling such a hearing less than a week before the presidential election ends. Blumenthal in particular described the hearing as an opportunity for Republican Senators to “bully and browbeat the platforms here to try to tilt them towards President Trump” between now and November.

What about Section 230, though?

Section 230 did come up in the hearing nominally dedicated to it, though only as a rare afterthought. Pichai and Dorsey both cautioned the Senate to be extremely thoughtful and careful with any potential changes. Zuckerberg, however, seemed much more amenable to throwing wide the gates of reform.

“People of all political persuasions are unhappy with the status quo,” Zuckerberg noted in his written testimony. “Changing [Section 230] is a significant decision. However, I believe Congress should update the law to make sure it’s working as intended.”

It’s not surprising that Facebook would be the most amenable to changing the very law that let it become the behemoth it is today. Facebook has more than 2 billion daily active users across its platforms (Facebook, Instagram, Messenger, and WhatsApp) and, as such, has become the poster child for failures to moderate content at scale. Its failures to communicate clearly and to update policies proactively, rather than reactively, are well-documented. So, too, is its wildly inconsistent moderation of frequently deceptive content.

In short, the company seems to be flailing as it tries to keep up with it all, and current state, federal, and international investigations into Facebook highlight how badly it is failing to keep anyone happy in those efforts. This is not the first time Zuckerberg has indicated that he would like Congress to solve some of his problems for him. Regulation—which Facebook’s government relations team would of course helpfully co-write—would create guardrails that Facebook could point to when justifying every action or inaction it takes, without getting dragged to Capitol Hill every time.

Last year Zuckerberg wrote in a Washington Post op-ed that he would like expanded US regulation relating to harmful content, election integrity, privacy, and data portability. He echoed the call for regulation earlier this year in another op-ed, this time in the Financial Times. And in July, Facebook published a white paper saying it would be happy to “co-create” new privacy regulation hand in hand with Congress.

Obviously Congress isn’t doing anything this year, with the election less than a week away and what is likely to be an awkward and acrimonious lame-duck session to follow. But the furor over Section 230 is not going to end after November 3, no matter whose presidential term starts on January 20. Democratic candidate Joe Biden has called for abolishing Section 230 entirely, and House Speaker Nancy Pelosi (D-Calif.) has indicated she is open to an overhaul of the law.

Facebook’s plan to prevent election misinformation: Allowing it, mostly

Mark Zuckerberg speaking at Facebook’s F8 developer summit in 2018.

Although it may feel like the campaigns have been going on forever and will continue forever, linear time inexorably marches on and we are, at last, exactly two months away from the 2020 US presidential election. The logistics alone are more complicated than ever this year, thanks to the COVID-19 pandemic, and voters around the nation are likely to encounter complications of one kind or another.

Into this milieu we now add Facebook. The company has a bad track record of being used as a tool of misinformation and manipulation when it comes to elections. In a Facebook post today, company CEO Mark Zuckerberg outlined a whole bunch of new steps the company will be taking to “protect our democracy” this year. Some of those measures, alas, feel like shutting the barn door when the horse left so long ago you forgot you ever even had one.

“This election is not going to be business as usual,” Zuckerberg began, accurately. Misinformation about voting, the election, and both candidates for the presidency is already rampant on Facebook and every other media platform, and it’s being spread by actors both foreign and domestic. So what is Facebook going to do about it? “Helping people register and vote, clearing up confusion about how this election will work, and taking steps to reduce the chances of violence and unrest,” Zuckerberg promised.

Voting (mis)information

Facebook has for several weeks been plugging its voter information center on both Facebook and Instagram. That landing page takes a user’s location—approximate or specific, if enabled—to display voter-registration information, vote-by-mail eligibility, voter ID requirements, and other state and local information. It also contains a voting-related fact-checking section, with entries such as, “Both voting in person and voting by mail have a long history of trustworthiness in the US.”

That will continue, Zuckerberg said, and the company will not only continue to enforce its policies against voter suppression, but expand them:

We already remove explicit misrepresentations about how or when to vote that could cause someone to lose their opportunity to vote—for example, saying things like “you can send in your mail ballot up to 3 days after election day”, which is obviously not true. (In most states, mail-in ballots have to be *received* by election day, not just mailed, in order to be counted.) We’re now expanding this policy to include implicit misrepresentations about voting too, like “I hear anybody with a driver’s license gets a ballot this year”, because it might mislead you about what you need to do to get a ballot, even if that wouldn’t necessarily invalidate your vote by itself.

Facebook is expanding its fact-checking in general to add labels to content that “seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods, for example, by claiming that lawful methods of voting will lead to fraud.”

Unfortunately, a primary source of exactly that kind of misinformation is the sitting US president, Donald Trump. Facebook has been slow to fact-check claims made by Trump or his campaign, but it does seem to be making good on appending labels to posts explicitly related to voter misinformation. For example, the social media giant labeled a post Trump shared today around 10:45am EDT.

The labels, however, require a user to click through to find out what, exactly, the facts might be.

The homestretch

Zuckerberg also announced that Facebook will not accept any new “political or issue ads” during the final week of the campaign—so, beginning around October 27.

“In the final days of an election there may not be enough time to contest new claims,” he wrote, explaining the suspension. However: “Advertisers will be able to continue running ads they started running before the final week and adjust the targeting for those ads, but those ads will already be published transparently in our Ads Library so anyone, including fact-checkers and journalists, can scrutinize them.”

Unfortunately, the ads that political advertisers place before the deadline are allowed to be full of lies, and Facebook will not fact-check them. That’s been the company’s long-standing policy, and it evidently has no intention of reversing course for the rest of this campaign season. Fact-checkers and journalists will be scrutinizing the ads, as quickly as they can, but they may not have the same reach as a paid-for ad campaign that could be actively spreading falsehoods.

The final week before November 3 will no doubt be a valuable homestretch for campaigns—but it may prove less important than in previous years, as voters nationwide are expected to take advantage of early voting, mail-in voting, and mailed or in-person absentee voting to avoid exposure to groups and crowding in the continuing pandemic.

The Trump campaign vigorously objected to Facebook’s decision. “In the last seven days of the most important election in our history, President Trump will be banned from defending himself on the largest platform in America,” the campaign’s deputy national press secretary, Samantha Zager, said in a statement, adding that the president will be “silenced by the Silicon Valley Mafia.”

This is, of course, also false; Facebook’s prohibition only applies to paid advertising, not posts created or shared by any individual, page, or campaign. Neither will it prevent Trump (or his rival, Democratic candidate Joe Biden) from issuing new ads on any other broadcast, print, or online platform or stop him from holding press conferences, delivering interviews, hiring a skywriter, or getting his message out in any and every other way.

The big day—and the aftermath

Facebook’s approach to content tends, by and large, to be reactive, not proactive. The company is, however, proactively looking at ways to mitigate the tensions that are extremely likely to run high on election night itself.

Polls begin closing on the East Coast at 7pm ET and continue to close in waves over the following several hours. Typically, millions of Americans sit with their eyes glued to their favorite cable or broadcast network, waiting for returns to come in and each state to be called. As we learned in 2000, however, results come in at their own pace—not when viewers desperate for news want them to.

While the outcome in a state such as California or Alabama is likely to be a foregone conclusion, we are just as likely to have to wait hours or perhaps even days to learn the final outcomes in key swing states, thanks to the projected increase in mail-in and absentee voting. Facebook is working now to try to teach its more than 200 million US users that such a wait is not a sign of fraud but, instead, a sign the system is working properly.

“We’ll use the Voting Information Center to prepare people for the possibility that it may take a while to get official results,” Zuckerberg said. “This information will help people understand that there is nothing illegitimate about not having a result on election night.”

Facebook will also work on and after Election Day to fact-check any claim of victory that is not backed up by its partners, Reuters and the National Election Pool. The election module will show results as they come in, and Facebook will “notify people proactively” as those results become available. Most importantly, Zuckerberg added, “If any candidate or campaign tries to declare victory before the results are in, we’ll add a label to their post educating that official results are not yet in and directing people to the official results.”

The platform is also trying to strengthen enforcement against “militias, conspiracy networks like QAnon, and other groups that could be used to organize violence or civil unrest in the period after the elections,” Zuckerberg said.

The FBI determined extremist movements, including QAnon, to be threats more than a year ago, but the content was allowed to proliferate on social networks until very recently. Twitter banned QAnon activity in July, calling it “coordinated harmful activity” that spread offline into real-world injury. The conspiracy movement has been linked to several violent episodes, including a kidnapping and a murder.

Facebook, however, has a record of failing to act against violent threats until it’s too late. The company cracked down on QAnon about two weeks ago, removing thousands of Facebook and Instagram groups, pages, and accounts.

As The Verge’s Casey Newton pointed out, that’s probably already too little, too late.

“On its face, [QAnon] seems no less a joke than the idea that TidePods secretly taste delicious,” Newton wrote. “But as with the laundry detergent, thousands of Americans have now been poisoned by QAnon, and the consequences seem likely to be far more dire, and long-lasting.”