Digital developments in focus

Marathon or sprint: China's approach to generative AI

Generative AI’s recent and explosive launch into the public consciousness has, unsurprisingly, landed it firmly at the top of global regulatory agendas. Although the large language models (LLMs) that underlie offerings such as OpenAI’s ChatGPT, Google Bard and—in China—Baidu’s ERNIE Bot (文心一言) and Alibaba’s Tongyi Qianwen (通义千问) have captured the public imagination, their use brings a range of novel risks (some of which we discuss here).

Against this backdrop of heightened regulatory scrutiny, the Chinese government made headlines when, in April this year, it published a draft of the ‘Administrative Measures for Generative Artificial Intelligence Services’ (the Regulation), one of the world’s first regulations specifically addressing generative AI.

In this blog post, I explore how China proposes to regulate generative AI and unpack some of the potential implications for businesses providing or planning to provide generative AI products and services to the Chinese market.  

Out of the starting blocks: China takes the lead

China has demonstrated its regulatory agility in the AI space by rapidly publishing draft measures to promote the “sound development and standardised application of generative artificial intelligence technologies”. The consultation closed in May 2023, and a revised draft of the Regulation is expected later this year.

Key takeaways from the proposed Regulation

The draft Regulation applies to the research, development and use of generative AI products that provide services to the Chinese public. The measures therefore seek to regulate both the inputs to, and outputs of, generative AI. Headline points include:

  • Extraterritorial effect: the Regulation is triggered by user location and may apply extraterritorially to foreign companies offering generative AI-related products and services in Mainland China. Providers must also make a security assessment application to the Cyberspace Administration of China (CAC).
  • Downstream responsibility for content: significantly, generative AI and related service providers (including API providers, such as OpenAI and Google) bear downstream responsibility for the content created by users of their generative AI products.
  • Substantive content requirements: AI-generated content must (i) reflect socialist core values; (ii) not contain subversive, discriminatory, or false material; and (iii) not infringe IP rights or personal privacy rights. In addition, providers must have regard to user welfare.
  • Detailed technical requirements: generative AI providers are responsible for the accuracy and legitimacy of training data, and must ensure that data is authentic, accurate, objective and diverse. Where non-compliant content is identified, providers must impose measures, such as content filtering or model optimisation training, to prevent further generation of such content.
  • Varied enforcement routes: whilst the maximum fine under the draft Regulation is limited to RMB 100,000 (approximately GBP 10,915), generative AI providers who breach its provisions may also have their services terminated or become subject to further liability where other relevant laws—such as the Cybersecurity Law, Data Security Law or Personal Information Protection Law—have been breached. 

Regulatory hurdles: what’s next for generative AI in China?

Once it comes into force, the Regulation is likely to have major implications for businesses developing and/or offering generative AI products and services in China. 

Whilst the Regulation is yet to be finalised, the current draft seeks to impose extensive responsibilities on generative AI providers to comply with substantive and technical obligations. The inclusion of exacting technical requirements is characteristic of China’s emerging approach to AI regulation, which focuses on developing tailored, technology-specific regulations to regulate different applications of AI (see, for example, earlier measures addressing “deep synthesis” or deepfake technologies and algorithmic recommendation technologies).

Given the speed with which AI technology is evolving, some have argued that the draft Regulation’s provisions will slow innovation and commercialisation of generative AI in China. Hurdles include the fact that providers may, for example, have limited visibility over the contents, quality and validity of training data—and grey areas remain regarding the issue of potential IP infringement where LLMs are trained on licensed materials. Similarly, it may be challenging for providers of generalised, public-facing generative AI products to achieve the granular level of output control required under the Regulation (to, for example, block or filter out discussion of certain topics).
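To illustrate why granular output control is difficult at scale, consider how a post-generation content filter might work in its most basic form. The sketch below is a deliberately simplified, hypothetical example (the topic list and function names are placeholders, not any provider's actual implementation); production systems typically rely on trained classifiers and layered moderation pipelines rather than keyword matching, which is easily evaded and prone to false positives.

```python
# Hypothetical sketch of post-generation output filtering.
# Real-world systems use trained classifiers and multi-stage review,
# not simple keyword blocklists like this one.

# Placeholder topic terms; a real deployment would maintain a far
# larger, continually updated set of restricted topics.
BLOCKED_TOPICS = {"example_banned_topic", "another_banned_topic"}

REFUSAL = "This response was withheld by the provider's content filter."

def filter_output(generated_text: str) -> str:
    """Return the model's output unchanged, or a refusal message
    if the text touches any blocked topic."""
    lowered = generated_text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return generated_text
```

Even this trivial example hints at the compliance challenge: deciding what belongs on the blocklist, handling paraphrase and indirect references, and doing so across open-ended user prompts is far harder than the code suggests.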

The Regulation may not, however, be as restrictive as some imagine:  

  • Specialised generative AI models: there is growing interest in specialised generative AI models trained on defined datasets, which may comply more easily with the Regulation’s requirements. Bloomberg, for example, recently announced BloombergGPT, an in-house LLM trained partly on proprietary data to focus on financial analysis. In China, Baidu indicated that its ERNIE Bot “is already a very localised AI foundation model for the China market”, suggesting the company has already taken steps to adapt its LLM for the Chinese context.
  • Regulation as an innovation catalyst: regulation can help to clarify the ‘rules of engagement’ for a particular jurisdiction, allowing businesses to make informed decisions about how they allocate R&D and other resources. Where businesses have strong incentives to access the lucrative Chinese AI market, the Regulation’s technical requirements may encourage further investment into presently under-prioritised aspects of generative AI technologies.
    Progress is already being made in relation to certain technical features required by the Regulation. NVIDIA, for example, recently announced its NeMo Guardrails toolkit, which assists developers in setting topical and safety boundaries on LLMs.

Assessing the field: the global race to regulate heats up

China’s proactive approach to developing its AI regulatory framework signals the socio- and geo-political significance of AI to the Chinese national agenda. Contrast this with the development timeline of China’s Personal Information Protection Law, which came into effect in late 2021 (some three years after the European General Data Protection Regulation). In the AI space, China is clearly positioning itself as a “maker”, rather than a “taker”, of regulation, and businesses should expect it to continue to take a proactive approach to digital regulation.

As key jurisdictions like the EU and UK take differing approaches to AI regulation, the picture for multinational companies seeking to comply with regulatory frameworks in different jurisdictions will become increasingly complex. Jurisdictional differences should not, however, be overstated.

Whilst the EU’s proposed AI Act focuses on risk-based regulation of different applications of AI technologies, and prohibits certain high-risk activities, the latest draft also requires foundation model providers to: (i) reduce and mitigate risks relating to fundamental rights, democracy and the rule of law; (ii) process only datasets subject to appropriate governance measures; and (iii) develop their models to achieve appropriate levels of predictability, interpretability, corrigibility, safety and cybersecurity.

Similarly, the UK Government’s White Paper on AI contains significant emphasis on safety, transparency and accountability, and fairness as core cross-sectoral principles. In particular, the White Paper highlights the importance of technical standards to address transparency and explainability—noting that such standards “would act as tools for industry to operationalise compliance” with regulatory principles.

Closer inspection of these emerging AI regulatory frameworks therefore demonstrates a shared regulatory focus on: (i) the quality of input data; (ii) the ‘normative’ risks posed by AI-generated content; and (iii) the importance of safety, transparency and accountability to further development of generative AI.

China’s generative AI Regulation should not, therefore, necessarily be seen as an ultra-prescriptive outlier in the global regulatory landscape. Whilst much is being made of the ‘race to regulate’, it seems that key regulators are headed towards similar finish lines.

As the AI regulatory landscape continues to change at speed, regular horizon-scanning will become increasingly important for businesses. We have a range of blogs and resources (which can be accessed here) to help you stay ahead of the curve.


