China sets restrictions on generative AI, but leaves room for innovation
Earlier this month, China released a national regulation on generative artificial intelligence. The primary purpose is to assert control, but revisions from a previous draft suggest Beijing understands the importance of leaving space for development.
In June, China announced its plan to initiate an artificial intelligence law, with an initial draft expected to come out in late 2024. Regulations, standards, and the reality of AI ecosystems will all have a bearing on the eventual shape of China’s AI law.
In the meantime, intermediate steps have been taken: In April, a draft regulation on generative AI — artificial intelligence that can generate images or text based on prompts, such as ChatGPT and Baidu’s Ernie Bot — was published for public comment. Earlier this month, China released the finalized version of that draft — a binding national regulation, not a law, but a document nonetheless worth looking at in more detail.
The Interim Measures for the Management of Generative AI Services — henceforth, the “Interim Measures” — was published on July 13 after an unusually quick turnaround of three months from the initial draft. It makes several major clarifications and changes relative to April’s Draft Regulation. These changes signal a policy environment that encourages AI innovation and adoption while allowing the party to retain control over access to information.
Background
At the moment, China regulates tech firms through the triad of security, privacy, and competition, with each pillar anchored by its own laws and set of regulations. The Interim Measures provides clarity on how existing laws and regulations apply to generative AI.
One of the primary goals of the Interim Measures is to assert state control over how AI generates and disseminates information. Generative AI services are forbidden to generate information that goes against core socialist values or promotes extremism and hatred.
For the most part, China has taken a nimble, problem-driven approach to AI. The Interim Measures shows continuity with two earlier regulations on platform algorithms and deep synthesis, respectively. All of them are vertical regulations, born as a response to the regulatory vacuum left by fast-changing technologies. Together they form the bedrock of substantive regulation of AI services in the absence of a general law such as the forthcoming AI Law.
The Internet Information Service Algorithmic Recommendation Management Provisions established a mandatory registration system for algorithms with public opinion properties or social mobilization capabilities, with the intention of addressing public discontent over price discrimination by e-commerce platforms and algorithmic exploitation of delivery workers. That registration system is carried over into the Interim Measures. Similar to the Internet Information Service Deep Synthesis Management Provisions published in December 2022, the Interim Measures also requires AI service providers to label AI-generated content as such and to assume responsibility for protecting personal information in training data.
There are four major changes from the draft to the Interim Measures, which we will now unpack.
1. More targeted scope
Last December’s deep synthesis regulation had an ambiguously defined scope, whereas the Interim Measures clarifies that it is aimed at generative AI services offered to the public in mainland China.
The words “to the public” are an important addition. Research activities and services developed for use within an organization or industry are now exempted from the regulation. The Interim Measures further clarifies that generative AI used for news, publishing, film production, and literary and artistic creation will be subject to separate rules. These areas could see tighter control in the form of licensing and ex-ante assessment.
The more targeted scope reflects the expectation that generative AI will be widely deployed within organizations and that treating in-house applications in the same way as public-facing ones only hinders the adoption of generative AI.
2. Leaving room for innovation
The Interim Measures added the Science and Technology Progress Act as a new legal basis for its drafting and enforcement (a reminder that the Interim Measures is a regulation, not a law), alongside the previously enacted trio of the Cybersecurity Law, Data Security Law, and Personal Information Protection Law. The drafting process also involved more ministries. The Cyberspace Administration of China (CAC) unilaterally released the draft regulation, but the Interim Measures enlisted other government entities that want a say in AI regulation, including the Ministry of Industry and Information Technology (MIIT) and the Ministry of Public Security (MPS).
The inclusion of MIIT and MPS should come as no surprise, since they are responsible for implementing the data security and privacy laws. But what makes the Interim Measures interesting is the inclusion of the National Development and Reform Commission (NDRC) and the Ministry of Science and Technology (MOST), which are listed above MIIT and MPS among the ministerial-level drafting entities. This shows a clear emphasis on innovation.
Notably, the NDRC hosts the National Data Administration, which designs high-level strategies for managing data resources. MOST, empowered by its role as the administrative office of the Science and Technology Commission chaired by President Xí Jìnpíng 习近平, is poised to provide a counterweight to the CAC and temper the CAC’s exclusive focus on security and information control.
Several articles were added to reiterate commitment to pro-innovation strategies that China has been attempting to operationalize. In particular, Article 6 promotes the establishment of generative AI infrastructure and platforms for publicly available training data, the sharing of computing resources, and open access to high-quality public data.
Each of these provisions speaks to China’s ambition to stay at the state of the art. First, there is a growing consensus in China that open-source data, algorithms, and supporting tools create more positive externalities for society than proprietary ones. To that end, the government has sponsored the development of open-source technology stacks for commercial use, such as FlagOpen, developed by the Beijing Academy of Artificial Intelligence. Second, sharing computing power maximizes the utilization of available GPUs, which matters amid U.S. export controls on advanced GPUs (and a potential further ban on circumventing those restrictions through public cloud services). Third, open-source datasets help generative AI developers overcome the scarcity of Chinese-language data on the internet and comply with the Interim Measures’ requirement that all training data be legally obtained.
3. Lighter compliance burden
The Interim Measures softened the language in several places to lessen the compliance burden on generative AI service providers, likely as a result of consultation with the industry.
For example, the earlier draft required providers to fine-tune models within three months of learning that certain content had been outlawed. This would have imposed financial costs on providers, who would need to fine-tune models regularly beyond normal development cycles, including for legacy models. The compliance burden would be particularly high for the open-source community and could lead to an undersupply of open-source models. Now the three-month deadline has been lifted, and fine-tuning is one, but not the only, way to permanently tune out such content.
The Interim Measures is also more consistent with the characteristics of large language models. The phrase “prevent fake information” was changed to “enhance transparency and reduce fake information that is harmful.” The change acknowledges that even the most advanced generative AI model is not immune to hallucination, a phenomenon in which a model fabricates facts when it does not know the answer.
4. Classification of AI systems
The idea of categorized and graded regulatory oversight has been a common theme in Chinese tech governance, already applied to data, algorithms, and internet platforms. A classification system for AI was originally proposed in Shenzhen’s AI Industry Promotion Measures, which the local government issued to experiment with AI governance in its jurisdiction. Now the Interim Measures formally adopts this risk-based framework at the national level, setting in motion a multi-year effort to delineate the risks of AI systems.
A classification system allows regulators to direct limited resources to high-risk applications. Low-risk applications could be allowed to roll out with minimal ex-ante obligations.
Informed by the classification of algorithms and internet platforms, a comprehensive classification of AI systems can be expected to take into account the generality of AI models, the impact of end uses on human safety and rights, the sensitivity of training data, and the scale of AI models and their user bases. The development of classification standards may be left to sectoral regulators. Industries that carry national security risks and stand to benefit from the penetration of general-purpose AI could become pioneers in designing sectoral rules.
In the realm of data classification, it was the automobile industry that made the most progress in creating detailed rules for data collection and cross-border transfer, because of national security concerns with data collected by cars and industry demand for policy clarity.
Drawing closer to a national AI law
For China, the Interim Measures marks a crucial step toward formulating a national law on artificial intelligence. The revised text incorporated input from industry and government stakeholders, as is evident in its softening of extreme requirements and creation of more fertile ground for AI innovation.
Still, political elements threaten to create bifurcated AI ecosystems within and outside China — if, for instance, foreign providers unwilling to bend to rules on removing content “against socialist values” ultimately leave the Chinese market to be served by domestic providers. But the impact could be contained to public-facing generative AI services. AI applications for enterprise use are exempted from the regulation, which opens up an addressable market for foreign providers.
It is also clear from the changes to the earlier draft that China wants its foundational technology and downstream ecosystem to evolve, rather than reining them in with heavy-handed regulation. The revised text shows a more mature understanding of the technology and a lower compliance burden for low-risk applications.
There remain known unknowns — notably the exact allocation of responsibilities between upstream AI infrastructure providers and downstream application developers. As the ecosystems of general-purpose AI take shape, China can be expected to continue filling major policy vacuums with vertical regulations and gradually building out technical standards to operationalize regulatory requirements. All of these efforts will eventually inform the shape of China’s artificial intelligence law.