Innovate & Regulate: A Comparative Study of AI Governance in the UK and China

April 24, 2025

About the author

Tian Kun

Fellow of Taihe Institute



Introduction: The Rise of AI and the Digital Economy

In recent years, global artificial intelligence (AI) technology has developed rapidly, and the transformation of the digital economy has become a key arena of competition among nations. Against this backdrop, AI has not only changed modes of production and daily life, but also raised new demands in areas such as national security, data privacy, and ethical governance. As major global economies, both the United Kingdom and China attach great importance to the development of AI, though they emphasize different aspects in their governance approaches and strategic planning.


UK AI Governance: An Innovation-Driven, Risk-Aware Approach

The UK government adheres to a principle of innovation-driven development with risk mitigation. Through policy documents such as the AI Opportunities Action Plan and the AI Cyber Security Code of Practice, it has sought to build an open ecosystem that both stimulates technological breakthroughs and effectively mitigates risks. In contrast, the Chinese government relies on national strategic planning and a robust regulatory system to achieve technological self-reliance through independent innovation and industrial upgrading, while continuously refining its governance framework in areas such as data security, privacy protection, and algorithmic ethics.


The AI Opportunities Action Plan

The overall strategy of the UK government in AI can be summarized in three main aspects: first, vigorously promoting technological innovation and infrastructure construction; second, establishing a flexible and forward-looking regulatory framework; and third, enhancing talent recruitment and international cooperation. In January 2025, UK Prime Minister Keir Starmer launched the AI Opportunities Action Plan, which put forward 50 policy recommendations with the goal of expanding publicly controlled AI computing capacity twentyfold by 2030. The plan gives particular support to the accelerated deployment of the Isambard-AI supercomputer in Bristol, aiming to boost the nation's international competitiveness in core technological areas such as large-scale data processing and deep learning model training. At the same time, it emphasizes the promotion of AI applications in key sectors such as healthcare, transportation, and public services, with the goal of achieving 50% technological integration by 2027 in order to drive government digital transformation and improve the efficiency of public services.
Although the plan is ambitious, it has also sparked extensive discussions on issues such as data privacy and copyright protection. For example, while the proposal to establish a national data library would provide abundant data support for AI training, it might involve privacy risks concerning National Health Service (NHS) patient data as well as copyright disputes, issues that will need to be properly addressed in subsequent policy implementations.


Securing the Future: The AI Cyber Security Code of Practice

Complementing the AI Opportunities Action Plan is the UK AI Cyber Security Code of Practice, issued on January 31, 2025. Set within a structured yet voluntary framework, the code puts forward 13 security management principles covering the entire lifecycle of AI systems - from design and development to deployment and decommissioning - with the aim of protecting AI systems against emerging cyber threats such as data poisoning, model inversion, and adversarial attacks.

The code requires companies to incorporate security design at the early stages of product development, implement secure Application Programming Interface (API) management, and strengthen employee training to raise overall awareness of AI risks. Companies must also establish and refine data pipeline protection mechanisms, conduct regular vulnerability assessments, and carry out incident recovery drills to ensure a swift response and prompt repairs in the event of a security threat. The code clearly delineates the responsibilities of developers, operators, and data managers, helping to promote a comprehensive security management system. More importantly, it encourages companies to align with forthcoming global standards from the European Telecommunications Standards Institute (ETSI), thereby providing technical support and trust assurances for the competitiveness of British AI products in the international market.

Nevertheless, since the code is voluntary, adoption levels may vary significantly across industries and companies. Resource-limited small and medium-sized enterprises in particular may face high compliance costs early on - an area the UK government will need to address and refine in the future.


Sector-Specific Initiatives and Regulatory Flexibility

The UK's regulatory approach - characterized by industry-specific guidance and non-binding principles - stands in stark contrast to the EU's comprehensive and binding AI Act. British regulatory agencies can flexibly develop risk assessment plans based on the specific circumstances of each industry and, where necessary, impose stricter regulatory measures on high-risk AI systems. The UK government has already initiated a series of pilot projects in sectors such as finance, public safety, and data governance. For example, the Financial Services Survey launched in February 2025 aims to evaluate the specific impacts of AI on bank stability, consumer protection, and cybersecurity, thereby informing the future formulation of specialized industry rules. This regulatory model, grounded in actual needs and risk assessments, enables the UK to stimulate technological innovation while identifying and mitigating potential risks in a timely manner, but it also requires close coordination among regulatory agencies to avoid both oversight gaps and redundant regulation.


Media outlets and think tanks have offered multifaceted interpretations of the UK government's AI governance strategy. A Guardian report on January 12, 2025, detailed the government's strategic layout in enhancing public computing capacity, promoting the digitization of public services, and strengthening international cooperation, while also noting that the national data library and relaxed data usage policies might pose challenges to personal privacy and copyright protection.1 The report argued that only after establishing comprehensive legal and technical safeguard mechanisms could the strategy truly achieve its intended goals. In an analysis report released in January 2025, RAND Corporation affirmed the UK government's measures to promote AI technological innovation through flexible regulation and market incentives from a strategic perspective, predicting that future regulation might gradually shift from voluntary guidelines to binding regulations.2 Meanwhile, institutions such as Clifford Chance and the Financial Times have also praised the regulatory flexibility gained by the UK post-Brexit,3 while warning that an overly lenient regulatory environment might weaken Britain's technological influence in international competition - particularly in the global digital content market, where copyright protection issues require due attention.


China's Centralized Strategy: State-Led Innovation and Regulation

In contrast, the Chinese government's approach to AI governance is characterized by state-led, centrally planned strategies. The Chinese government attaches great importance to the development of AI technology; as early as July 2017, in the Next Generation Artificial Intelligence Development Plan, China elevated AI to a national strategic priority, clearly outlining goals in core technological research and development, industrial upgrading, and market application. China emphasizes independent innovation and technological self-reliance, striving to break its dependence on external core technologies and critical equipment, and on this basis, promotes the healthy development of the AI industry. In recent years, China has continuously improved its Cybersecurity Law and Data Security Law to strengthen governance over data security, privacy protection, and algorithmic ethics, and has subsequently introduced a series of regulatory requirements specifically for AI applications. The Chinese government not only promotes the implementation of relevant laws and regulations domestically, but also actively participates in international rule-making, seeking to secure greater influence through multilateral mechanisms and international dialogue.


Comparative Insights: Balancing Innovation and Risk

While the UK and China have their own emphases in AI governance, they also face many common challenges. First, whether under the UK's flexible regulatory model or China's centralized management, both countries must balance the stimulation of technological innovation against the effective mitigation of security risks. Technological advancement is often accompanied by new risks such as data breaches and cyberattacks, and ensuring that citizen privacy, national security, and intellectual property are not compromised while pushing for technological breakthroughs is a challenge that all countries must address. Second, international standards and cross-border regulatory coordination are pressing issues that need resolution. The UK relies on cooperation with international standards organizations such as ETSI to enhance the competitiveness of its AI products in the global market, while China is actively participating in international rule-making, seeking to exert greater influence in global data governance and algorithmic ethics. Each model has its advantages and provides valuable practical experience for international cooperation.


Furthermore, the UK government has recently introduced a series of strategic investments and policy measures to consolidate its leading position in the field of AI. The Financial Services Survey launched in February 2025 is a typical example: through investigative research, the government aims to lay the groundwork for more precise industry regulations in the future. In terms of copyright reform, the government's proposed "opt-out" mechanism is designed to provide greater flexibility for the use of data in AI training. Although the proposal has met with opposition from some publishers and content creators, who believe it might weaken intellectual property protection, its original intention is to promote data circulation and technological innovation. At the same time, the government plans to allocate GBP 300 million to establish the AI Security Institute, focusing on research and development of cutting-edge models and security technologies; it also intends to set up regulatory sandboxes to provide experimental platforms for AI applications in high-risk areas such as autonomous driving and healthcare. In addition, the launch of an AI Talent Visa program aims to attract 5,000 international experts by 2026, further enhancing the UK's position in the global competition for technological talent.


In summary, both the UK and Chinese governments have their respective merits in AI governance. The UK focuses on achieving a balance between technological innovation and risk mitigation through market-driven approaches and flexible regulation, with its policy system emphasizing autonomous adjustments and alignment with international standards to continually refine its security and risk control systems in an open environment. In contrast, China, through state-led centralized planning and robust regulation, promotes technological self-reliance and industrial upgrading, accelerating the establishment of unified standards across the industry while ensuring national security and social stability. The different choices in governance models reflect differences in national conditions and policy traditions and provide rich case studies and experience for global AI governance.


Looking Ahead: International Cooperation and the Future of AI Governance

Looking ahead, as AI technology continues to evolve, international cooperation in data governance, privacy protection, ethical review, and cross-border regulation will become increasingly important. Only through international dialogue and cooperation can countries complement each other's strengths amid fierce technological competition and promote the sustainable development of the global digital economy. Whether it is the UK's "flexible regulation plus market incentives" model or China's "centralized planning plus strong regulation" path, both will play important roles in shaping the future global AI governance landscape. Policymakers need to learn fully from the successes and challenges of different countries to establish a governance system that both stimulates technological innovation and effectively mitigates security risks, providing a solid safeguard for meeting the challenges of the AI era.


In conclusion, the different strategies adopted by the UK and China in AI governance reflect their differing perspectives on technological prospects, risk mitigation, and international competition. The governance experiences of both countries offer diverse pathways and valuable lessons for global AI development, and future international cooperation and multilateral dialogue will be key to promoting the stable development of the global digital economy. Only by continuously balancing the drive for innovation with security risk prevention can countries around the world jointly construct a new, open, transparent, and sustainable order of AI governance that brings genuine progress to society.


1. Robert Booth, "'Mainlined into UK's Veins': Labour Announces Huge Public Rollout of AI," The Guardian, January 12, 2025, https://www.theguardian.com/politics/2025/jan/12/mainlined-into-uks-veins-labour-announces-huge-public-rollout-of-ai.

2. Paul Khullar and Sana Zakaria, "UK Government's AI Plan Gives a Glimpse of How It Plans to Regulate the Technology," RAND, January 27, 2025, https://www.rand.org/pubs/commentary/2025/01/uk-governments-ai-plan-gives-a-glimpse-of-how-it-plans.html.

3. Jonathan Kewley, "Unpacking the UK's AI Action Plan," Clifford Chance, January 17, 2025, https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2025/01/unpacking-the-uk-ai-action-plan.html.


This article is from the March issue of TI Observer (TIO), which explores the AI-powered digital economy, analyzing how nations navigate the balance between development and governance, while examining the impact of technological advancements on global competition and the broader international order. If you are interested in knowing more about the March issue, please click here:

http://en.taiheinstitute.org/UpLoadFile/files/2025/3/31/14372768c45e0ae0-6.pdf

