Banning TikTok is a terrible idea


Measures are needed to ensure data security. But here’s a more sensible approach.

Illustration by John Oquist

It’s been a rough spell for TikTok. Secretary of State Mike Pompeo said the U.S. would ban the app because it “puts your private information in the hands of the Chinese Communist Party.” In case any observers dismissed his threat as more bark than bite, White House advisor Peter Navarro followed up with a warning that “strong action” against TikTok is coming. The onslaught is not just coming from U.S. policymakers, either. In June, the Indian government announced it would block TikTok along with nearly 60 other Chinese mobile apps for “stealing and surreptitiously transmitting users’ data in an unauthorized manner to servers which have locations outside India.”

There are several options in play for what so-called “strong action” could look like. The Committee on Foreign Investment in the United States (CFIUS) has been reviewing the national security implications of ByteDance’s 2017 acquisition of Musical.ly (TikTok’s predecessor), a review that could end in forced divestment. The Trump administration could also invoke the International Emergency Economic Powers Act (IEEPA), declaring a national emergency to ban the app, or it could place TikTok on the Entity List, which would mean that Google and Apple could no longer offer it in their app stores.

When it comes to TikTok, there are legitimate questions related to data security and censorship that need to be addressed, but shutting down the app altogether is a terrible idea. It sets a dangerous precedent in which the U.S. government can blacklist companies based on country of origin, using blanket claims of national security as justification. Going down this path means taking a page from Beijing’s playbook and asserting a U.S. version of cyber sovereignty to control how Americans use the internet.

Instead, we need to identify the risks created by TikTok and find effective ways to respond to them. Doing so means pinpointing specific potential for harm, not simply moving to block the app for being Chinese.

Let’s first look at the risk of data access by the Chinese government. TikTok states that it has never shared data with the Chinese government, a claim backed up by the transparency report it released on July 9. However, its terms of service state that even if data collected overseas is stored on servers in the U.S. and Singapore, it may be shared with its parent company ByteDance and its affiliates, potentially opening the door to access by Beijing even if TikTok’s operations are firewalled off from Douyin, the domestic version of the app. Indeed, a lawsuit filed last December alleged that TikTok “vacuumed up and transferred to servers in China vast quantities of private and personally-identifiable user data.” In the filing, plaintiff Misty Hong alleges that the company transferred information about her device and the websites she had visited to servers in China, and claims that source code from Baidu, which security researchers discovered in the app in 2017, was being used to install spyware on users’ phones. But the “legal documents did not provide evidence of the data transfers or the existence of Baidu or Igexin source code in the app,” Reuters reports.

The way to deal with this problem is to develop a country-agnostic set of criteria with robust rules, not just for TikTok, but for how all companies collect, retain, and share data. Instead of playing a game of whack-a-mole against a rotating cast of Chinese tech companies, the U.S. would be wise to spend more time developing legislation and standards for how all companies, regardless of country of origin, protect online privacy and secure data. No company should be collecting and retaining sensitive data in the first place that could later be transmitted to a government able to use it to do harm, or stolen by state-sponsored hackers.

With such criteria in place, TikTok could be audited by an outside party to verify how it manages its data, including whether a genuine firewall separates it from its parent company in China. According to “internal sources,” ByteDance has put restrictions in place on access to source code and sensitive data for domestic employees in China. This is exactly what needs to happen, but unless these steps are independently verified against a trusted set of criteria, they will not assuage the concerns of U.S. policymakers.


The mere fact that a Chinese company handles U.S. citizen data may not in itself warrant banning investment under CFIUS or blacklisting a specific company for use in the U.S. The national security risks should be evaluated through an investigation, with regular audits, to determine (a) what kind of U.S. citizen data is being accessed (for example, metadata, images, geographic data, critical infrastructure data), (b) how that data is being used and what data protection measures are in place to protect the rights and interests of U.S. consumers, and (c) with whom that data is being shared and through what mechanisms. If, based on the outcome of such an evaluation, the U.S. government cannot verify that the interests and rights of U.S. consumers will be protected, then that company should be prohibited from storing and sharing U.S. personal data.

Such an assessment also must consider what intelligence value the data collected on TikTok’s platform would provide to Beijing. Videos of lip syncing and dancing are of limited strategic use even to an “adversary government” (as the Trump administration increasingly calls China), whether for targeting individuals for coercion or in aggregate as part of a mass collection effort. In this way, the data security risk posed by TikTok is different from that of Grindr, the gay dating app whose acquisition by a Chinese gaming company was deemed a national security threat by CFIUS. In the case of Grindr, there was potential for blackmail if data on intimate relationships could be triangulated with, say, the national security personnel data presumed to have been obtained by the Chinese government in a security breach of the Office of Personnel Management. That risk is still hypothetical, but unlike the TikTok case, there is at least some clear articulation of the harm that could result.

There is another reason why we need stricter rules for data security and privacy for all companies, not just Chinese ones: U.S. citizen data held by unregulated private companies of any nationality is also more vulnerable to breaches by state-sponsored hackers. For example, Equifax’s many security issues are well documented, such as the company’s failure to patch known vulnerabilities, which ultimately left the data of 145 million Americans exposed. But the hack itself was carried out by a Chinese government entity with sophisticated hacking capabilities and access to considerable state resources. Setting minimum standards on what data all companies can collect and retain will help protect U.S. personal data, whether the risk comes from a state-sponsored hacker, a data broker, or a private company transferring the data to China.

The second risk with TikTok is that the Chinese government could influence the kind of content promoted or taken down on the platform. The concern is that TikTok’s algorithm could boost or hide content in ways that would, in effect, serve as an extraterritorial extension of Beijing’s censorship and propaganda machine.

Here again, the answer is not to play whack-a-mole with the Chinese tech company threat of the day, but to spend more time developing legislation and standards for how all companies, regardless of country of origin, manage online content in an era of misinformation. U.S. and Chinese tech platforms alike hold tremendous power to shape the way we consume information, and the stakes, from public health to election security, could not be higher.

Again, TikTok’s transparency report shows that the company did not receive or comply with any requests for content removal from the Chinese government. But we need to look under the hood to verify the company’s independence from ByteDance in its content removal decisions and the algorithms it uses to push content. One important data point is an investigation by BuzzFeed last year that found no evidence TikTok had taken down content related to the Hong Kong protests; more of this kind of assessment, with specific examples, would be helpful. Auditing for content moderation or outright censorship may be tricky, however. It is also worth keeping in mind that TikTok is user-driven, and users have plenty of other outlets for speech, so they can simply opt out.

The irony is that shutting down TikTok (or taking other actions short of that, like an Entity List designation) would contribute to the erosion of online freedom and openness while doing little to address the underlying security issues. What’s more, TikTok is one of Facebook’s most important competitors in the U.S., and more competition is a good thing at a moment when the concentration of power in the hands of U.S. tech platforms is under scrutiny.

The better path is for U.S. policymakers to offer an alternative to Chinese cyber sovereignty: a vision for internet governance that better secures data online and prevents the spread of disinformation. These are problems that are bigger than TikTok and must be dealt with in a separate lane from the U.S.-China conflict. Let’s leave the creation of national walls in cyberspace to Beijing.