
Abstract
Artificial Intelligence (AI) in China constitutes a strategic project that transcends technological innovation to become an instrument of economic, geopolitical, and social transformation. This study provides a comprehensive and multidisciplinary analysis of China’s AI ecosystem, tracing its historical origins, the interplay between public and private actors, financial and investment mechanisms, and the ambitions projected in the economic, governance, and foreign policy spheres. It highlights the critical intersection with semiconductors, identifying vulnerabilities stemming from dependence on advanced foreign nodes as well as resilience strategies through domestic substitution and industrial diplomacy. Furthermore, it examines the ideological framework of China’s algorithmic governance, anchored in a synthesis of Confucian traditions and Marxist principles, which legitimises the use of AI to guarantee stability and social harmony. The research contrasts China’s trajectory with that of the United States and the European Union, analysing their respective strengths and weaknesses, and proposes a roadmap for Europe to achieve shared technological sovereignty by 2030. The study concludes that AI in China is a vector of modernity with global implications, and that the future international order will depend on the capacity of the major powers to align innovation, governance, and political legitimacy.
Introduction
Artificial intelligence has emerged as a central vector in the development strategy of the People’s Republic of China, operating simultaneously as a catalyst of technological innovation and as an instrument for consolidating economic power, national security, and social control. Since the mid-2010s—and most explicitly with the publication of the New Generation Artificial Intelligence Development Plan (AIDP) in 2017—AI has occupied a priority position in state planning, with implications that extend well beyond technology narrowly understood. In the Chinese context, AI is not conceived as a neutral tool of progress; it is deeply entangled with industrial policy, security objectives, geopolitical projection, and the governance of society.
This study offers an exhaustive, multidisciplinary examination of China’s AI ecosystem. It covers its historical origins and trajectories of development (including generative and “open-weights” AI), the main public and private actors, the strategic plans and financing mechanisms, as well as the ambitions projected in economic, geopolitical, governance, and resource-management dimensions. A core axis is the relationship between AI and semiconductors—arguably China’s most significant structural vulnerability—where strengths and weaknesses are revealed in a global technological contest marked by rivalry with the United States and its allies.
The theoretical framework draws on a growing literature on industrial policy, technological governance, and the geopolitics of innovation. Hauge (2023, 2025) argues that contemporary industrial policy must adapt to the megatrend of digitalisation, highlighting how China has transformed its historic dependence on manufacturing into a structural advantage by integrating AI across global value chains. This perspective dovetails with Mazzucato’s (2013) concept of the “entrepreneurial state”, which underscores the centrality of public investment in steering innovation, and with Rodrik’s (2015) emphasis on externalities and market-failure correction.
At the geopolitical level, Allison’s (2017) “Thucydides’s Trap” has framed the structural tensions between a rising and a ruling power, with AI and semiconductors intensifying the rivalry between China and the United States. In this context, Chan (2025) and colleagues at RAND contend that Western sanctions have accelerated, rather than arrested, China’s quest for technological self-reliance, particularly in semiconductors, catalysing what they describe as “industrial diplomacy” aimed at reorganising supply chains and deepening ties across the Global South.
On the sociopolitical plane, Creemers (2024) shows how AI in China is embedded within a governance model that synthesises Confucian and Marxist-Leninist traditions. The result is a form of “digital nationalism” that elevates social harmony and stability above individual privacy, and that legitimises algorithmic surveillance and credit-scoring mechanisms. Institutions such as MERICS, CSIS and the McKinsey Global Institute have documented both advances and limits: the integration of AI into core industrial sectors; the proliferation of “smart industrial parks”; and, conversely, persistent vulnerabilities in advanced microelectronics (MERICS, 2025; CSIS, 2025).
In short, AI in China must be analysed as a multidimensional construction where economy, politics, society and culture converge. This work adopts an interdisciplinary approach—economic history, innovation studies, international relations, and political theory—to illuminate both the ambitions and the constraints of China’s AI project.
Origins and Development of AI in China
The contemporary development of AI in China is the cumulative result of industrial policy decisions dating back to the late 20th century, although the first experiments in cybernetics during the Maoist period bore a distinctly Soviet imprint, an influence that merits a brief digression.
In cybernetics, the Soviets were initially ahead of the United States, but indecision and confusion among those in decision-making positions prevented the realisation of a fascinating project conceived by the computer scientist Viktor Glushkov: a civilian network interconnecting the entire USSR in the early 1970s, almost twenty years before the World Wide Web.
Glushkov went further still, imagining Cybertonia, a kind of extraordinary socialist Silicon Valley. At the time, the Soviets knew that the United States had just launched Arpanet, the computer network created by the Department of Defense to link state and academic institutions. Unlike the United States, however, the USSR had operated large-scale military defence communication networks since the 1950s, and it therefore had the experience to attempt an unprecedented civilian network project. To this end, on 1 October 1970, Glushkov formally proposed cybersocialism.
Glushkov was an engineer and the first director of the Kiev Cybernetic Centre. He had an extraordinary scientific background, spoke fluent German and English, and was well versed in the sources of socialism, Marx among them, as Benjamin Peters explains in How Not to Network a Nation: The Uneasy History of the Soviet Internet. Glushkov built on the earlier work of Anatoly Kitov, who in 1959 had proposed to Premier Khrushchev a computer network that would unite the USSR, bringing together specialists from the civilian sphere and the Red Army.
But Glushkov's ambitions were greater still. Behind the acronym OGAS ('Automated System for the Collection and Processing of Information for Accounting, Planning and Governance of the National Economy') lay a plan to build a central network, based in Moscow, connecting every corner of the country through 200 centres scattered across the Union of Soviet Socialist Republics and requiring some 20,000 terminals in total.
The project also envisaged an electronic payment system that would end the circulation of banknotes and coins, vending machines, a paperless office, and a language for communication between humans and computers. Alongside it ran Cybertonia, a kind of proto-social network launched in 1960, which issued passports and marriage certificates and even drafted a constitution for the platform. It was governed by a committee of robots accountable to a central, saxophone-playing robot, and it had its own currency (the cyberton), its own newspaper (Evening Cyber) and a cybersauna as a leisure area.
Against this background, the decisive change in China came with the launch of the 863 Programme in 1986, designed to accelerate high-technology fields and narrow the gap with the industrial powers. This initiative, together with the Torch Programme (1988), aimed at industrialisation through technology parks such as Zhongguancun in Beijing, and the 973 Programme (1997), focused on basic research with long-term national objectives, laid the institutional, scientific and business foundations of China's subsequent AI ecosystem (State Council, 1986; Zhu, 2024).
In the 2000s, the state adopted the principle of zizhu chuangxin—indigenous innovation—through the Medium- and Long-Term Plan for Science and Technology (2006–2020). The aim was not merely to absorb foreign technologies but to create endogenous capabilities that would support an “innovative society” by 2020 and scientific leadership by mid-century. This reorientation consolidated a state-coordinated model of innovation in which universities, the Chinese Academy of Sciences and industrial clusters operate as interlinked cogs in a national strategy (Zhu, 2024; Hauge, 2025).
A further acceleration occurred with Made in China 2025 (MIC2025), launched in 2015, which placed AI at the heart of a broader agenda of industrial modernisation and import substitution across strategic sectors such as robotics, biomedicine and electric vehicles. Although initially read in the West as a declaration of manufacturing intent, MIC2025 in practice framed AI as a productivity engine and a tool for reinforcing global competitiveness. MERICS (2025) notes significant advances in facial recognition, big data and industrial robotics, alongside enduring vulnerabilities in advanced semiconductors, where self-reliance remains well below official targets.
China’s definitive commitment came with the New Generation Artificial Intelligence Development Plan (AIDP, 2017), which set milestones: to establish foundations and a leading position in patents by 2020; to achieve substantive advances in industrial applications by 2025; and to become the world leader in AI by 2030 (State Council, 2017). In contrast to earlier plans, the AIDP explicitly integrated economic objectives with national security, social governance and international positioning.
The global surge of generative AI after 2022 presented both a challenge and an opportunity. In response, China enacted the Interim Measures for the Administration of Generative AI Services (2023), a pioneering regulatory instrument that set criteria for safety, data quality and traceability in publicly offered systems while granting greater leeway to non-public research and development (Ding, 2024). This hybrid design allowed the state to exercise tight control over commercial deployment while laboratories and universities continued to experiment.
A distinctive feature of China’s strategy has been the embrace of open-weights models and tools, a pragmatic response to hardware constraints created by US export controls. Firms and laboratories such as DeepSeek and the Shanghai AI Lab have focused on efficient, low-cost training regimes, with results that have surprised the international community. The launch of DeepSeek-R1 in 2025—demonstrating reasoning capabilities at markedly lower compute cost—signalled a doctrinal shift: when access to frontier chips is restricted, the priority becomes optimising the performance-to-cost ratio (DeepSeek, 2025; UC Berkeley, 2025).
Geographically, development concentrates in specialised poles. Beijing—Zhongguancun and top universities—functions as a hub of policy and R&D; Shanghai, via WAIC and institutions such as the Shanghai AI Lab, is an international showcase and a centre for generative AI; Shenzhen and the Pearl River Delta, driven by Huawei and an extensive hardware base, lead hardware-AI integration; and Hefei, anchored by iFLYTEK and “China Speech Valley”, specialises in speech and language technologies (Arcesati, 2025; Lee, 2018).
Finally, sanctions have been pivotal. Restrictions on access to leading-edge chips and lithography equipment have imposed immediate constraints on frontier model training. Yet, as Chan (2025) and RAND (2025) show, these pressures have also catalysed domestic innovation: more efficient models, development of indigenous accelerators (e.g., Huawei’s Ascend), and expansion of industrial diplomacy to secure strategic minerals. The trajectory is thus characterised less by the absence of obstacles than by an attempt to convert them into engines of industrial policy.
Types and Basic Definitions of AI
Artificial intelligence (AI) has transformed the way we interact with technology, encompassing a wide range of approaches and applications. Although generative AI, which creates content such as text, images or music from training data, has attracted most of the attention, other types of AI are equally relevant, each with distinctive characteristics and applications. The main types, beyond generative AI, are described below.
The most basic type is reactive AI. These systems respond to specific stimuli in the environment without the ability to learn or store past experiences. They work using predefined rules that map inputs to outputs, making them fast but limited. An iconic example is Deep Blue, the IBM programme that defeated chess champion Garry Kasparov in 1997 by making decisions based solely on the current position on the board (Russell & Norvig, 2021). Applications of reactive AI include games such as chess and Go, and simple control systems such as basic thermostats.
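The defining property of a reactive system, a stateless mapping from the current percept to an action, can be sketched in a few lines. This is a minimal illustration with invented thresholds, not the logic of any real device:

```python
# A reactive agent: no memory, no learning, only a fixed mapping from
# the current input to an output. Thresholds are illustrative assumptions.

def reactive_thermostat(temperature_c: float) -> str:
    """Choose an action from the current reading alone."""
    if temperature_c < 18.0:
        return "heat"
    if temperature_c > 24.0:
        return "cool"
    return "idle"
```

Calling `reactive_thermostat(15.0)` yields `"heat"`: the agent has no record of previous readings, so the same input always produces the same output.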
Unlike reactive AI, limited-memory AI can store recent information to improve its decisions. These systems process historical data within a limited time frame, but do not build a permanent model of the world. For example, an autonomous vehicle uses data from the last few seconds (e.g., the position of other vehicles) to adjust its driving (Russell & Norvig, 2021). This type of AI is common in applications such as navigation systems, voice assistants that interpret recent commands, and robots that need to adapt to dynamic environments.
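The contrast with reactive AI can be made concrete with a sketch that keeps only a short sliding window of recent observations. The scenario (distances to a leading vehicle) and the window size are illustrative assumptions:

```python
from collections import deque

class LimitedMemoryAgent:
    """Stores a bounded window of recent observations (here, distances
    to a vehicle ahead) and reacts to the trend within that window.
    Anything older than the window is discarded: there is no permanent
    model of the world."""

    def __init__(self, window: int = 3):
        # deque with maxlen silently drops the oldest entry when full.
        self.history = deque(maxlen=window)

    def observe(self, distance_m: float) -> str:
        self.history.append(distance_m)
        if len(self.history) < 2:
            return "hold"
        # Compare the newest reading with the oldest still remembered.
        if self.history[-1] < self.history[0]:
            return "brake"  # the gap is closing
        return "hold"
```

After three observations the first one is forgotten, which is precisely the "limited time frame" the paragraph above describes.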
AI based on theory of mind represents a more advanced level, although it is still under development. These theoretical systems could understand the intentions, emotions, and beliefs of other agents, whether human or other AIs, enabling more sophisticated social interactions. For example, such an AI could negotiate a contract by interpreting the motivations of the parties involved (Copeland, 1993). Although not yet fully implemented, this AI is expected to be key in advanced personal assistants or human-machine interaction systems that require artificial empathy.
Self-aware AI is a hypothetical category involving systems with self-awareness, capable of reflecting on their own existence, goals and decisions. This level of AI does not currently exist and raises profound philosophical and ethical questions. A self-aware AI could, for example, decide to change its purpose after assessing its impact on the world (Copeland, 1993). Although this is a matter of speculation, its future development could revolutionise areas such as global governance or autonomous ethics.
Expert systems or rule-based AI operate using a set of predefined logical rules (‘if-then’) designed by human experts to solve problems in specific domains. They do not learn from data, but rather apply explicitly encoded knowledge. For example, a medical diagnostic system can suggest diseases based on entered symptoms (Luger, 2009). These systems are highly specialised but lack flexibility outside their domain. They are used in medical diagnostics, technical support, and financial planning.
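The 'if-then' structure of an expert system can be sketched as a small rule base scanned for rules whose conditions all hold. The symptoms and conclusions below are invented for illustration and carry no medical meaning:

```python
# A toy rule-based diagnostic sketch: explicit if-then rules encoded by
# hand, with no learning from data. All rules and labels are illustrative.

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
    ({"fever", "stiff neck"}, "seek urgent care"),
]

def diagnose(symptoms: set) -> list:
    """Fire every rule whose conditions are all present in the input."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]  # <= tests set inclusion
```

The knowledge lives entirely in `RULES`: extending the system means writing more rules by hand, which is exactly the inflexibility outside the encoded domain noted above.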
Machine learning-based AI allows systems to learn patterns from data without explicit programming. This type includes several approaches:
- Supervised learning: Uses labelled data to predict outcomes, such as classifying emails as spam or not spam.
- Unsupervised learning: Finds patterns in unlabelled data, such as segmenting customers by behaviour.
- Reinforcement learning: Learns through trial and error to maximise a reward, such as AI that plays video games (Sutton & Barto, 2018). This type of AI is fundamental in applications such as fraud detection, recommendation systems and robot control.
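The supervised case above can be illustrated with one of the simplest learning algorithms, a nearest-centroid classifier: the "training" step averages labelled examples, and prediction assigns a new point to the closest average. Data and feature names are invented for illustration:

```python
# A minimal supervised-learning sketch: patterns are extracted from
# labelled data (per-class centroids) rather than written as rules.

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for x, y in samples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the label of the closest centroid (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist2(centroids[y]))
```

Trained on a handful of labelled points (say, link and capital-letter counts for spam versus legitimate email), the classifier generalises to unseen inputs without any hand-written rule, which is the defining difference from the expert systems discussed earlier.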
Deep learning is a subfield of machine learning that uses multi-layered artificial neural networks to process complex data. Although often associated with generative AI, it is also applied to non-generative tasks such as facial recognition and machine translation (Goodfellow et al., 2016).
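The multi-layer idea can be sketched as a forward pass through a tiny two-layer network in pure Python. The weights here are fixed, illustrative values; in real deep learning they are learned from data via backpropagation, which is omitted:

```python
import math

# A minimal two-layer neural network forward pass. Each hidden unit
# computes a weighted sum of the inputs followed by a non-linearity;
# the output unit does the same over the hidden activations.

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    """inputs: list of floats; w_hidden: one weight row per hidden unit;
    w_out: weights over the hidden activations."""
    hidden = [relu(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
```

Stacking many such layers, with millions of learned weights, is what gives deep learning both its representational power and its heavy compute requirements.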
Deep learning requires significant computing power and is key in applications such as image processing, speech recognition, and autonomous driving.
Symbolic AI represents knowledge using symbols and logical rules, allowing reasoning about facts and solving complex problems.
Unlike machine learning, which relies on data, symbolic AI uses explicit representations, such as ontologies or knowledge bases (Luger, 2009). It is more interpretable but less adaptable to dynamic environments. Examples include logical reasoning systems and rule-based chatbots, used in planning and knowledge assistants.
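The explicit-representation style of symbolic AI can be sketched with forward chaining: rules derive new facts from known ones until nothing changes. Facts and rules below are invented for illustration:

```python
# A toy forward-chaining sketch over an explicit knowledge base.
# Every inference step is inspectable, which is the interpretability
# advantage noted above.

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (premises, conclusion).
    Repeatedly fire rules whose premises all hold until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Given the facts `{"bird"}` and rules "bird implies has_wings" and "has_wings implies can_fly", the engine derives `can_fly`, and the chain of rules that produced it can be read directly from the knowledge base.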
Hybrid AI combines multiple approaches, such as machine learning, rule-based systems, and symbolic techniques, to leverage the strengths of each. For example, a hybrid medical system could integrate expert rules with data-based predictions to diagnose diseases more accurately (Russell & Norvig, 2021). This approach seeks greater robustness and is applied in complex systems, such as advanced robotics, personalised recommendation, and resource management.
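A minimal sketch of the hybrid idea: a hand-written expert rule takes precedence, and a simple data-driven score handles the remaining cases. The features, weights and thresholds are illustrative assumptions, not a real diagnostic pipeline:

```python
# A hybrid sketch combining a symbolic layer (an explicit rule) with a
# statistical layer (a stand-in for a trained model). All values are
# invented for illustration.

def learned_risk_score(features):
    """Stand-in for a trained model: a weighted sum of numeric features."""
    weights = {"age": 0.01, "blood_pressure": 0.005}
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

def hybrid_decision(features, symptoms):
    # Symbolic layer: an expert rule that always takes precedence.
    if "chest_pain" in symptoms and "shortness_of_breath" in symptoms:
        return "urgent referral"
    # Statistical layer: fall back to the learned score.
    return "follow-up" if learned_risk_score(features) > 0.9 else "routine"
```

The rule guarantees predictable behaviour on critical cases, while the learned score adapts to data elsewhere, which is the robustness argument made for hybrid systems above.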
Each type of AI has strengths and limitations, and its application depends on the problem to be solved. While reactive AI and expert systems dominated the early decades of AI, machine learning and deep learning have driven recent advances in areas such as autonomous driving and natural language processing. The concepts of theory of mind and self-awareness, although futuristic, open debates about the potential and ethical risks of AI. Hybrid AI, meanwhile, represents an effort to integrate the best of each approach, seeking more versatile and reliable systems.
Public and Private Actors
The development of artificial intelligence in China cannot be understood without considering the network of public and private actors that, in a coordinated manner, form a unique ecosystem. Unlike the Western model, where innovation tends to arise from private initiatives with subsequent state support, in China the state maintains a guiding role that directs resources, defines priorities and regulates the pace of technological expansion, while companies—from technology conglomerates to emerging start-ups—act as executors and multipliers of that strategy.
In the public sphere, state agencies play a key role in defining policies and channelling financial resources. The Ministry of Science and Technology (MOST) leads the implementation of the New Generation Artificial Intelligence Development Plan (AIDP), while the National Development and Reform Commission (NDRC) integrates AI into five-year plans, linking it to industrial transformation and infrastructure modernisation (Zhu, 2024). Similarly, the Chinese Academy of Sciences and elite universities such as Tsinghua, Peking University and the University of Science and Technology of China have established themselves as leading research centres. Tsinghua, in particular, has promoted models such as GLM-4, capable of competing in reasoning with Western systems such as GPT-4 (Lee, 2018). This academic role is not autonomous, but rather responds to a design in which higher education institutions are strongly integrated into national innovation priorities.
In the private sector, the so-called ‘national champions’ stand out, conglomerates that combine vast financial resources with massive deployment capabilities. Alibaba and Tencent lead the integration of AI into cloud computing services and digital platforms, respectively; Baidu, a pioneer in the field of computer vision and voice recognition, has repositioned itself in the generative AI race; SenseTime has emerged as a world leader in computer vision and surveillance applications; and Huawei has played a dual role, both in the design of chips for AI and in the development of its own models, in a context marked by international sanctions that have encouraged technological self-sufficiency (Chan, 2025g).
Alongside these giants, the Chinese ecosystem has seen a multitude of startups with specialised profiles flourish. DeepSeek is the most paradigmatic case: in 2025, it presented the R1 model, released with open weights and capable of significantly reducing inference costs, positioning itself as a global benchmark in algorithmic efficiency (DeepSeek, 2025). Other startups such as iFLYTEK in Hefei have achieved international recognition in the field of voice processing, while ByteDance has diversified its capabilities beyond TikTok to explore AI applied to entertainment, algorithmic recommendation and, more recently, synthetic content generation.
The relationship between public and private actors is structured around a model of close collaboration. The state subsidises access to data and computing infrastructure, while encouraging the creation of joint ventures and strategic partnerships. Within this framework, companies enjoy a margin for innovation that is nevertheless conditioned by the broader political objectives of the Chinese Communist Party. As Allen (2019) has shown, AI industrial policy is based on the idea that the positive externalities derived from cooperation—such as technological spillovers—are essential for building a robust ecosystem. Thus, the boundary between the public and private spheres is deliberately blurred: companies act as the executive arm of a state vision, but at the same time they feed back into the government agenda with innovations that are then standardised or scaled up at the national level.
The international impact of this configuration is equally relevant. Through the Belt and Road Initiative (BRI), China has begun to export AI solutions, especially to countries in the Global South, thereby projecting not only technological capabilities but also an alternative model of governance. Chan (2025f) has characterised this phenomenon as ‘industrial diplomacy’, in which AI functions as a foreign policy instrument to consolidate alliances and reconfigure value chains. Reports by RAND (2025) and CSIS (2025) emphasise that this strategy is not limited to the economic sphere, but also incorporates security and control dimensions, given that many of the exported technologies, such as facial recognition systems, have dual civilian and military applications.
In short, the ecosystem of public and private actors in Chinese AI reflects a characteristic hybridisation: a planning state with a long-term vision and the capacity to mobilise resources on a massive scale, combined with a dynamic private sector that adapts to strategic guidelines and finds opportunities for global expansion in them. This symbiosis, far from being accidental, is one of the central pillars of the Chinese innovation model and explains both its successes in certain areas of AI and external criticism regarding its political and surveillance uses.
Public and Private Plans, and Financing Mechanisms
The deployment of artificial intelligence in China has been accompanied by a complex network of strategic plans and financial instruments that reinforce its long-term viability. These mechanisms combine centralised planning with initiatives from local governments, state financial institutions and private capital in a coordinated effort that illustrates China's ability to mobilise resources on a large scale.
At the public level, the New Generation Artificial Intelligence Development Plan (AIDP, 2017) and the AI Plus strategy are the pillars of national planning. Both documents go beyond setting general goals and establish measurable objectives with specific deadlines: consolidating a domestic AI market worth $126 billion by 2025, achieving global leadership in industrial applications within the next decade, and training a talent pool of at least five million qualified professionals by 2030 (State Council, 2017; OpenEdition, 2025). These goals are part of a broader system of five-year plans that articulate sectoral investments, tax incentives and human capital training programmes.
Public funding is supported by large-scale instruments such as the National Integrated Circuit Investment Fund (‘Big Fund’), with a budget of more than $40 billion, and preferential credit mechanisms granted by state-owned banks such as the Bank of China, which in 2025 announced lines of credit worth one trillion yuan for the expansion of green computing (Wang, 2025). In addition, there are specific funds such as the $138 billion venture capital Guidance Fund, which is intended to support the creation of start-ups and stimulate applied research projects (Chan, 2025i). These financial instruments, in many cases jointly managed by ministries and local governments, make it possible to sustain both experimental initiatives and large-scale deployments, reducing the uncertainty inherent in disruptive innovation.
In parallel, private companies have rolled out complementary strategic plans that reinforce the state agenda. Giants such as Alibaba and Tencent have invested heavily in cloud infrastructure and data platforms, while Huawei has redirected part of its resources towards designing chips for AI, seeking to reduce dependence on foreign suppliers. Baidu, for its part, has promoted an ‘open AI’ strategy with the development of models such as Ernie Bot, which not only compete with Western alternatives but also align with the Chinese government's regulatory priorities. These private initiatives are supported by a growing venture capital environment: according to data from the World Intellectual Property Organisation (WIPO, 2025), China has led the way in recent years in the number of AI-related patents, thanks to annual investment exceeding $100 billion.
The synergy between public and private plans can also be seen in the field of education. Reports in French (OpenEdition, 2025) highlight that Chinese universities have implemented specialised AI training programmes to respond to the growing demand for talent, with the aim of reaching the aforementioned five million trained professionals by 2030. This investment in human capital seeks to address one of the structural shortcomings most often pointed out by Western analysts: the lack of engineers with international experience and the shortage of profiles capable of integrating knowledge in hardware, software and data management.
However, the rollout of these plans has not been without difficulties. CSIS reports (2025) highlight that, while targets for AI applications have far exceeded expectations — with a jump from just 9.6% of industrial companies using AI in 2024 to over 47% in 2025 — advances in semiconductors have fallen far short of official targets, with self-sufficiency at around 30% compared to the projected 70%. Cases such as the failure of Hongxin Semiconductor, attributed to corruption and management problems, show that the mobilisation of massive resources alone does not guarantee technological success (CSIS, 2025; MERICS, 2025).
Despite these setbacks, the overall picture reveals a highly resilient financing and investment model. As Hauge (2025j) points out, long-term planning allows for the absorption of specific failures without destabilising the whole, while the coexistence of public and private capital diversifies risks and multiplies opportunities for innovation. In the words of Rodrik (2015), Chinese industrial policy acts as a corrector of externalities, transforming investments that could be considered failures into sources of collective learning and experiences that fuel future cycles of innovation.
In short, the combination of state plans, sovereign wealth funds, development banks, venture capital and private business strategies forms a unique financial framework on the global stage. Unlike the predominantly private models of Silicon Valley or the more regulatory approach of the European Union, China has opted for a hybrid formula that maximises scale and minimises dispersion. This scheme, which reflects the logic of Mazzucato's (2013) ‘entrepreneurial state’, is one of the most solid pillars on which rests the ambition to make artificial intelligence a backbone of the Chinese economy in the 21st century.
Economic, Geopolitical, Governance and Resource Ambitions
Artificial intelligence in China is not conceived solely as a tool for technological innovation, but as a strategic component for achieving economic, geopolitical and internal governance objectives. The deployment of AI is based on a long-term strategy that combines economic growth, the consolidation of state power, the reconfiguration of international alliances and the management of human and material resources.
In the economic sphere, AI is projected as an essential driver of productivity and competitiveness. Reports by McKinsey (2018) already estimated that the adoption of these technologies could contribute more than $600 billion annually to China's GDP by 2030. Subsequent projections, such as those by Forbes (2024), raise this figure, predicting that the AI sector will account for up to 26% of GDP in 2030. This ambition is part of a broader strategy to transform China into a high-income economy based not on the comparative advantage of low labour costs, but on innovation and added value. As Hauge (2025f) has pointed out, this commitment is based on the conviction that large-scale public investment generates productivity spillovers that justify state intervention, in line with Mazzucato's (2013) theoretical framework of the ‘entrepreneurial state’.
The geopolitical dimension of AI is marked by rivalry with the United States and, more broadly, by the struggle for global technological hegemony. Chan (2025j) interprets artificial intelligence as a ‘diplomatic weapon’ capable of reconfiguring value chains and excluding strategic rivals through what he calls ‘industrial diplomacy’. In this sense, AI becomes an instrument of the Belt and Road Initiative (BRI), through which China exports digital models, platforms and solutions to countries in the Global South. These transfers not only strengthen economic ties, but also promote an alternative regulatory and technological framework to Western liberalism. Tobin (2024) has emphasised that this industrial diplomacy creates structural dependencies that extend Chinese influence, while weakening the ability of third countries to maintain technological autonomy.
In terms of governance, artificial intelligence is an extension of the Chinese Communist Party's political project. Rogier Creemers (2024) has argued that AI embodies a fusion of Confucian traditions and Marxist frameworks, giving rise to a model of “digital nationalism” that prioritises social harmony over individual privacy, in line with Confucian tradition and Marxist materialist philosophy. The deployment of the social scoring system illustrates this logic: far from being a mere control mechanism, it is presented as a tool for optimising governance, strengthening trust in economic transactions and ensuring political stability. From this perspective, AI is not only a means of managing social complexity, but also a legitimising device that reinforces the authority of the state.
Finally, the management of human and material resources is a critical dimension of China's strategy. On the human level, the emphasis on talent training seeks to overcome dependence on experts trained abroad and consolidate a domestic corps of AI engineers and scientists. In parallel, the issue of material resources, particularly rare earths and industrial magnets, is intertwined with AI and semiconductor policy. Chan (2025b) has documented how China's control of more than 90% of global gallium and a significant proportion of rare earths has been used as a bargaining chip in the face of Western restrictions on chip trade. Beijing's ability to manage these strategic resources constitutes a comparative advantage in the reconfiguration of supply chains, thereby strengthening the resilience of its technological ecosystem.
In short, China's ambitions in artificial intelligence transcend the purely technological. Economically, AI is seen as an engine of growth and productivity; in geopolitics, as a weapon of industrial diplomacy and a vector for power projection; in governance, as a tool for social control and political legitimisation, resting on a deliberate cultivation of mass politics, without which no hegemony can be sustained; and in resource management, as a mechanism for resilience against external vulnerabilities. This multidimensional convergence explains why AI occupies a central place in China's development strategy and why its evolution will have profound implications for the international order in the coming decades.
Confucian Traditions and Marxist Frameworks in China’s Algorithmic Governance
The claim that AI governance in China merges Confucian traditions with Marxist frameworks invokes two long arcs of legitimacy and organisation. The Confucian canon—Analects (Lúnyǔ), The Great Learning (Dàxué), Doctrine of the Mean (Zhōngyōng), The Book of Rites (Lǐjì), as well as Mencius (Mèngzǐ) and Xunzi—posits that political stability derives from the virtue of the ruler (dé), observance of appropriate rituals (lǐ), benevolence (rén), and the "rectification of names" (zhèngmíng), aligning categories with reality. The "golden mean" (zhōngyōng) prescribes moderation and emotional balance as prerequisites for social harmony (hé). These notions, which historically underpinned meritocratic selection through imperial examinations, reappear in contemporary official semantics—"harmony", "sincerity", "civility"—and help interpret the regulation of algorithms and content as a modern extension of moral-administrative rectification.
This classical repertoire is not a relic. Under Hu Jintao, the slogan of the "harmonious society" explicitly revived Confucian language; under Xi Jinping, fundamental socialist values combine moral vocabulary with legality and security. In regulatory practice, this synthesis materialises in rules that subject recommendation systems and generative AI to requirements of security, traceability and "socialist values", while exempting non-public R&D. The result is an ex ante model of governance that combines moral guardianship with raison d'état, and it is better understood once one recognises the continuity with lǐ (form/procedure) and with the ideal of governing by virtue.
The Marxist-Leninist pillar, reworked by Mao and Xi, provides a theory of history, classes and the state. From Lenin come the vanguard party and the transitional state; from Mao, two principles that today structure the relationship between data, politics and society: the primacy of practice as the source of truth ("knowing by transforming") and contradiction as the engine of change, together with the mass line ("from the masses, to the masses") as a method of iterative governance. Xi's codification of "socialism with Chinese characteristics for a new era" (an aspect of Xi Jinping's thinking to which I will devote a specific study) reframes this legacy around "comprehensive national security", data sovereignty and "rule by law and virtue", legitimising the extensive use of AI for stability and development.
This dual genealogy, Confucian and Marxist-Leninist, through Mao and Xi's reworking, explains why AI in China is conceived simultaneously as a technology of efficiency and as a device for moralising order.
These logics have earlier analogues: imperial examinations as moralised meritocracy; the Maoist mass line as feedback and control; and, in the 21st century, the social credit system and algorithmic recommendation rules as the digitisation of long-standing aspirations for a “sincere” and trustworthy order. The novelty lies less in the spirit than in the scale and granularity: AI puts Confucian (harmony, rectification) and Marxist (planning, mobilisation) principles into practice with large-scale data, feedback loops and real-time risk assessment.
Compared to other experiences—the Soviet cybernetic tradition, which I discussed above, or Project Cybersyn (also known as Proyecto Synco), implemented in Chile under Salvador Allende's government and designed by the British scientist Stafford Beer, whose cybernetic principles are set out in his book Brain of the Firm: Managerial Cybernetics of Organisation—the Chinese fusion is distinguished by its moral and civilisational anchoring. All share the ambition to govern through information; China adds a Confucian vocabulary of order and a Leninist party-state as a vector of coordination, now complemented by dense regulatory and algorithmic infrastructures. Proponents claim that this model "improves" governance by providing regulatory coherence, mobilisation capacity and strategic alignment. Critics highlight the costs in terms of privacy and pluralism, as well as the risk of path dependency when moralised technique becomes orthodoxy (Peters, 2017; Gerovitch, 2004; Medina, 2014; Beer, 1972).
How does this model improve on other approaches? Fundamentally, I would put forward three key arguments:
- First, normative coherence: the combination of Confucian values and socialist objectives provides a comprehensive legitimising framework for regulating AI from the outset, avoiding—according to its promoters—the short circuit between innovation and order.
- Second, mobilisation capacity: the “masses to masses” method, translated into data and platforms, favours rapid pilot-evaluation-standardisation cycles that fuel the expansion of “smart” public policies.
- Third, strategic alignment: the discourse of digital sovereignty and comprehensive security integrates AI, industry and geopolitics into a state policy with a horizon of 2035–2050.
These strengths coexist, however, with regulatory costs (tension with privacy and freedoms) and path-dependency risks when the moralisation of technology hardens into orthodoxy.
Finally, this Confucian-Marxist synthesis underpins concrete policies linked to AI. Recent documents and regulations—from algorithmic recommendation management (2022) to Provisional Measures for Generative AI Services (2023)—incorporate clauses on socialist values, security and responsibility, while think tanks and academics have shown how the ‘smart governance’ agenda draws on both moral tradition and party-state strategy. Read together, the works of Confucius, Mencius and Xunzi, Mao's essays and Xi's programmatic texts are not mere references: they provide the lexicon with which the regulatory and technical architecture of AI in China is justified.
Current and Future Capabilities
The current landscape of artificial intelligence in China shows a remarkable degree of maturity in areas ranging from foundational models to AI applied to industrial and service processes. On the generative models front, laboratories and companies have converged towards an efficiency strategy that combines mixture-of-experts architectures, distillation, quantisation and training with carefully curated data. The case of DeepSeek-R1, which in 2025 demonstrated competitive results in reasoning tasks at significantly lower computing cost, symbolises a doctrinal shift: in the face of hardware constraints, the priority is no longer to replicate the training scale of frontier players, but to optimise the relationship between performance and cost. Along these lines, families such as Qwen2.5 and GLM have consolidated a repertoire of versatile models that perform well in Chinese and English, which, together with the proliferation of open weights, has accelerated the diffusion of capabilities in the productive fabric (DeepSeek, 2025; UC Berkeley, 2025; Lee, 2018).
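The efficiency levers mentioned above can be made concrete with a minimal sketch. The following illustrates symmetric int8 post-training quantisation, one of the simplest of these techniques: it cuts per-weight memory by a factor of four while keeping the reconstruction error bounded. This is an illustrative toy, not the actual pipeline of any of the laboratories cited; the matrix size and distribution are invented for the example.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantisation: 8-bit weights plus one float scale."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from the quantised tensor."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for one layer of a model.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(1024, 1024)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

compression = w.nbytes / q.nbytes            # float32 (4 bytes) -> int8 (1 byte)
max_err = float(np.max(np.abs(w - w_hat)))   # worst-case rounding error

print(compression)  # → 4.0
```

The trade-off is visible in the bound: the worst-case error per weight is half the scale, which is why, in production systems, such quantisation is usually combined with calibration data or quantisation-aware training rather than applied naively.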
This technical advance translates into applications with a measurable impact on productivity. In the manufacturing industry, AI is integrated into digital twins, predictive maintenance, visual quality control and robotic line orchestration, shortening design cycles and reducing process deviations. Smart industrial parks promoted by local governments and anchor companies have served as testing grounds and standardisation sites for solutions that, once stabilised, are scaled up to other industrial hubs. In logistics, the combination of computer vision, optimisation and decision agents has improved warehouse efficiency and last-mile delivery. In financial services, language models are applied to fraud prevention, customer service and document analysis, while in healthcare their use is becoming established in support of diagnostic imaging, triage and record management, always within regulatory frameworks that emphasise traceability and accountability (CSIS, 2025; Merics, 2025).
The medium-term growth potential is not limited to text models. The convergence of vision-language, audio and tabular data feeds multimodal systems with applications in robotics and natural interfaces. China has accumulated advantages in speech recognition and synthesis—as illustrated by the Hefei cluster around iFLYTEK—and in industrial vision, fields in which the availability of local data and proximity to the end user accelerate improvement cycles. The transition from large generalist models to agents capable of planning, calling external tools and executing sequences of actions opens up a path for automating administrative, engineering and supervisory tasks in complex environments. At the same time, integration with ‘edge AI’ will bring perception and decision-making capabilities closer to devices and machinery, reducing latency and transmission costs and thereby expanding adoption in extensive supply chains.
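The shift from generalist models to agents that call external tools, described above, rests on a simple control loop. The sketch below is a deliberately minimal, hypothetical illustration: the `lookup` and `calculate` tools and the registry are invented for this example, and a real agent would have a language model choose each next step from prior observations rather than execute a fixed plan.

```python
from typing import Callable, Dict, List, Tuple

# Registry mapping tool names to callables (both tools below are invented examples).
TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("lookup")
def lookup(query: str) -> str:
    """Toy knowledge-base lookup standing in for a search or database tool."""
    kb = {"capital of France": "Paris"}
    return kb.get(query, "unknown")

@tool("calculate")
def calculate(expr: str) -> str:
    """Toy arithmetic tool; eval is acceptable only for this trusted demo input."""
    return str(eval(expr, {"__builtins__": {}}))

def run_agent(plan: List[Tuple[str, str]]) -> List[str]:
    """Execute a pre-computed plan of (tool_name, argument) steps and
    collect observations; an LLM planner would replace the fixed plan."""
    observations = []
    for name, arg in plan:
        observations.append(TOOLS[name](arg))
    return observations

print(run_agent([("lookup", "capital of France"), ("calculate", "6*7")]))
# → ['Paris', '42']
```

The design point the prose makes is precisely this separation: perception and reasoning stay in the model, while actions are delegated to auditable tools, which is what makes such agents deployable in administrative and engineering workflows.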
An additional source of potential lies in algorithmic governance and the use of public and corporate data under trusted frameworks. The accumulated experience in standardisation and licensing — from recommendation system management to measures for generative AI — has generated a regulatory infrastructure that, although demanding, reduces regulatory uncertainty for large-scale deployments. In economic terms, this regulatory design allows for a transition from pilot projects to mass adoption without disrupting business continuity or eroding political legitimacy. For companies, the result is an environment in which investment in AI can be counted as a strategic asset with multi-year return horizons; for the state, it is a mechanism for directing externalities towards productivity and security goals (State Council, 2017; Ding, 2024).
However, two structural limitations remain.
- Hardware: limited access to cutting-edge chips and tooling raises the cost of frontier training and forces austerity, which, paradoxically, fosters an efficiency advantage and an open-source culture that benefits SMEs and local governments.
- Private capital: despite the dynamism of the domestic venture capital sector, the gap with the United States in private investment remains considerable, as shown by the latest figures from Stanford's AI Index, which place the United States well ahead in capital mobilised, especially at advanced stages (Stanford HAI, 2025). These gaps do not negate the potential, but they do shape it: they favour leadership in applied AI—manufacturing, logistics, public services, finance—and delay, for now, full convergence in frontier research.
Looking ahead to 2030, three plausible trajectories can be anticipated:
- First, sectoral leadership in domains where proximity to the manufacturing base, availability of operational data, and regulatory alignment produce cumulative advantages: industrial robotics, quality control, energy optimisation, and urban network management.
- Second, qualitative convergence in general-purpose generative models through intensive exploitation of technique: agents with long-term memory, systematic integration of retrieval-augmented generation, and curriculum training that improves reasoning ability without multiplying computation.
- Third, selective international projection through the export of turnkey solutions to the Global South, in which AI, data infrastructure and cloud services are packaged with financing and support, reinforcing the links created by industrial diplomacy (Chan, 2025; RAND, 2025).
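The second trajectory leans on retrieval before generation: grounding a model's answer in documents fetched at query time rather than in ever-larger training runs. As a minimal, assumption-laden sketch (bag-of-words cosine similarity stands in for the learned embeddings a production system would use, and the corpus is invented), the core retrieval step can be written as:

```python
import math
from collections import Counter
from typing import List

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: List[str], k: int = 1) -> List[str]:
    """Return the k corpus documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda d: cosine(qv, Counter(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

corpus = [
    "industrial robots need predictive maintenance schedules",
    "gallium exports fell after new controls",
    "smart parks pilot visual quality control",
]
# The retrieved passage is prepended to the prompt that a generator would receive.
context = retrieve("predictive maintenance for robots", corpus)
prompt = f"Context: {context[0]}\nQuestion: predictive maintenance for robots"
```

The point of the pattern, in line with the trajectory above, is that fresh, local knowledge enters through retrieval, so reasoning quality can improve without multiplying training computation.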
This horizon of potential should not obscure the ethical and institutional design dilemmas that accompany the expansion of AI. The tension between efficiency and rights—privacy, non-discrimination, due process—will require auditing and verification capabilities that are commensurate with deployment. Future research should address reproducible evaluation standards, the mitigation of biases in multimodal data, and the security of models in sensitive contexts. At the same time, it will be important to closely monitor the fit between innovation and sustainability: the shift toward energy-efficient data centres, the orchestration of loads in the ‘Data East, Computing West’ network, and green computing initiatives will be critical to ensuring that the expansion of AI does not exacerbate energy and environmental tensions (Merics, 2025).
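The load orchestration behind "Data East, Computing West" can be caricatured as a cost-aware placement problem: route batch AI workloads to regions where energy is cheap, subject to capacity. The following is a toy sketch only; the regions, prices and capacities are invented numbers, and real scheduling must also weigh latency, data-residency rules and grid constraints.

```python
from typing import Dict, List, Tuple

def schedule(jobs: List[Tuple[str, int]],
             regions: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    """Greedily place each job (name, compute_units) in the cheapest
    region with spare capacity; returns {job_name: region_name}."""
    placement: Dict[str, str] = {}
    free = {r: spec["capacity"] for r, spec in regions.items()}
    by_price = sorted(regions, key=lambda r: regions[r]["price"])
    for name, units in jobs:
        for r in by_price:
            if free[r] >= units:
                free[r] -= units
                placement[name] = r
                break
    return placement

# Invented figures for illustration.
regions = {
    "east-hub": {"price": 1.0, "capacity": 4},   # low latency, expensive energy
    "west-hub": {"price": 0.4, "capacity": 10},  # cheap renewables, higher latency
}
jobs = [("train-A", 6), ("train-B", 4), ("infer-C", 2)]
print(schedule(jobs, regions))
# → {'train-A': 'west-hub', 'train-B': 'west-hub', 'infer-C': 'east-hub'}
```

Even this caricature shows the policy logic: latency-tolerant training migrates west to cheap energy, while latency-sensitive inference stays east, which is precisely the division of labour the national network is meant to institutionalise.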
In short, the potential of AI in China is real and substantial, but not homogeneous. Where a strong industrial base, stable regulatory frameworks and a growing culture of efficiency and openness converge, the chances of leadership are high. Where progress depends on access to cutting-edge computing or deep private capital, gaps remain. China's strategy seems to have accepted this reality: consolidate advantages in applied AI while working to reduce, through targeted investment and public-private cooperation, the gaps that still separate it from the global frontier in basic research and hardware.
AI and Semiconductors: Dependence, Corrective Strategies, and Self-Reliance
AI's advance is inseparable from the development of the semiconductor sector, whose importance goes beyond the technical realm to become a central element of industrial policy and contemporary geopolitics. Semiconductors are the material infrastructure essential for training and deploying AI models, and at the same time represent one of the most vulnerable areas of China's strategy.
China's structural dependence on the United States, Taiwan, South Korea and the Netherlands is most evident in advanced nodes below 7 nanometres, where access to extreme ultraviolet lithography equipment and finely tuned global supply chains is crucial. The Information Technology and Innovation Foundation (ITIF) estimates in its 2024 report that there is a gap of several years in advanced logic, although China has consolidated its position in mature nodes (≥28 nm), where it already accounts for more than 30% of global capacity. US export controls, in place since 2019 and tightened in 2022, have had a direct impact on the domestic AI industry: by restricting access to high-performance chips such as Nvidia's A100 and H100, they have raised training costs and limited access to cutting-edge hardware (CSIS, 2025).
The restrictions have also catalysed a reorientation. Huawei and SMIC have intensified their work on alternative process flows and have reportedly approached 5 nm using multi-patterning, albeit at higher cost and with scale limitations; the launch of Huawei's Ascend 910C processor illustrates a logic of resilience aimed at ensuring a sufficient base for applied AI, while reducing the price and performance gaps that matter for the "real economy" (RAND, 2025; Chan, 2025). Ultimately, it is not a question of competing immediately with the most advanced GPUs, but of securing a minimum of strategic autonomy that guarantees continuity of research and deployment, and then scaling up from there.
Mineral resources are part of the plan. China controls around 70% of global rare earth production and more than 90% of the global supply of gallium, a critical material for the manufacture of next-generation semiconductors; export restrictions on gallium in 2023-2024 reduced global availability by 50% and served as leverage against Western controls on chips (Chan, 2025). This ‘mineral counterweight’ strategy is not new: since the 2010s, Beijing has used its dominant position in the rare earth chain as a tool for diplomatic pressure, but in the current context it takes on a broader meaning as it is integrated into an AI industrial policy that seeks to ensure resilience in the face of sanctions.
The industrial dimension of this effort can be seen in the continuation of the Made in China 2025 plan and in the allocation of massive funds, such as the Big Fund, specifically earmarked for building domestic capabilities in chip design and manufacturing. In 2024, new rounds of investment were announced, bringing the cumulative figure to more than $1 trillion, earmarked for projects ranging from mature nodes to research into new materials such as silicon carbide. According to MERICS (2025), the much-cited goal of 70% self-sufficiency by 2025 is unachievable, but it sets the direction: import substitution, national champions and an autonomous base for AI expansion.
Internationally, the semiconductor strategy is aligned with industrial diplomacy: the export of AI systems to the Global South is often accompanied by cooperation on hardware and technology transfer (Chan, 2025; Tobin, 2024). The intersection between AI and chips thus crystallises China's duality: scale, integration and mobilisation versus border dependence and exposure to sanctions. This approach reinforces the logic of ‘competitive exclusion’ described by Tobin (2024), in which Beijing aspires to weave resilient networks that reduce vulnerability to external shocks and expand geopolitical room for manoeuvre.
In short, the intersection between AI and semiconductors reveals the structural duality of China's strategy: strength in scale, in the integration of hardware and software, and in the mobilisation of resources; vulnerability at the technological frontier and in dependence on critical nodes dominated by rival powers. The state response combines domestic resilience, industrial diplomacy and the strategic use of mineral resources, in a gamble that reflects the principle, inherited from both the Confucian tradition and Sinicised Marxism, of transforming constraints into drivers of innovation. In the short term, the technological gap persists; in the medium and long term, the accumulation of capabilities and the building of alliances could redefine the global architecture of artificial intelligence and the semiconductor industry.
China 2030 in Chan’s Anticipation: Vectors of Catch-Up
Chan's proposition (2025), according to which China could ‘catch up’ with the United States by 2030, does not predict scientific supremacy, but rather a practical convergence across the entire AI stack that aims to close gaps through accumulation, economies of scale and application orientation. Chan identifies three fundamental vectors.
- First, industrial diplomacy: the relocation and "friend-shoring" of supply chains through foreign investment, transfer agreements and the pull of the domestic market. In Chan's reading, the "turn to the Global South" is not rhetorical: it is a vector of reverse de-risking through which China mitigates sanctions, expands demand and secures inputs (from strategic minerals to assembly and back-end), while deploying turnkey AI solutions (models, cloud, services) that build customer and government loyalty (Chan, 2025a; 2025f; RAND, 2025).
- Second, layered support for R&D and adoption: talent pools, subsidised computing, pilot zones and a national computing network that prioritises availability and energy. In RAND's analysis, the government is willing to overinvest in computing and energy—including rapid expansion of grid capacity—to compensate for the high-end chip bottleneck, pushing laboratories toward an efficiency paradigm (smaller models, distillation, mixture-of-experts, weight sharing) that has dramatically reduced training costs and brought performance closer to the frontier in reasoning tasks. DeepSeek's trajectory and the regulatory response to generative AI fit this pattern: ex ante controls for public services, scope for non-public R&D, and an ecosystem that exploits open weights as an adoption multiplier (RAND, 2025; CSIS, 2025).
- Third, the resilience of semiconductors: import substitution where possible (mature nodes, national accelerators, advanced packaging) and tactical use of minerals as a bargaining chip. The result is not an immediate leap to 3-2 nm, but an adequate foundation to sustain large-scale applied AI, with reduced price and performance gaps.
RAND's warnings remain valid: inefficiencies in chip allocation, talent bottlenecks and higher costs for national replicas. The notion of a five-year ‘catch-up’ should be interpreted as a close competition in useful performance and adoption, and as leading sectoral adoption where proximity to manufacturing, energy and local data generate scale and learning effects, but not as a total displacement of the frontier.
East Asia in Comparative Perspective: Japan, South Korea, Taiwan
China's trajectory is best understood when set against neighbouring models.
In Japan, the AI Strategy 2022 consolidates a vision that combines a drive for R&D with guidelines for responsible use and standardisation. In 2024, the government published the AI Guidelines for Business v1.0, with practical criteria for the private sector, and in 2024–2025 it launched the Japan AI Safety Institute (J-AISI) as a hub for technical assessment and international cooperation (G7/Hiroshima). This three-pronged approach—strategy, guidelines, and public evaluation—outlines a gradualist and coordinated model that prioritises regulatory interoperability and dialogue with industry and academia. Unlike China's emphasis on ex ante governance with a strong socialist and national-security imprint, Japan institutionalises safety through sectoral guidelines and a network of safety institutes with an international standardisation focus.
South Korea has moved forward through a legal architecture: the Basic AI Act (2024; effective in 2026) establishes a national control tower, a safety institute and incentives for reliable AI, based on ethical guidelines and a government Guide to Reliable AI. The model is conducive to innovation, with enforceable trust requirements, closer to OECD/G7 practice than to content regulation.
Taiwan links AI to industrial sovereignty. The AI Action Plan 2.0 (2023-2026) and a draft Basic AI Act (2024) propose risk-based governance, aligned with international standards and coordinated by the Ministry of Digital Affairs. With an industrial ecosystem focused on semiconductors and hardware, Taiwan emphasises openness, regulatory compatibility and manufacturing depth, in contrast to mainland China's emphasis on stability.
Taken together, these trajectories not only offer points of comparison; they also limit China's room for manoeuvre. The institutionalisation of safety and assurance in Japan, South Korea's pro-trust juridification and Taiwan's risk-based approach are pushing towards technical convergences (evaluation metrics, traceability, synthetic content labelling) that China has already begun to develop on its own with standards on algorithmic recommendation (2022), deep synthesis (2023) and generative AI (2023). The crucial difference is that, in China, these standards are part of a framework of sovereignty and comprehensive security with strong content control; in its neighbours, they are part of a pro-market co-regulation regime oriented towards international standards.
The European Union towards 2030: Weaknesses, Potentials, and a Roadmap
Against a 2030 horizon, the EU must align norms, infrastructure, capital and industry. The AI Act (in force since 2024, with obligations phased through 2026 and specific rules for general-purpose AI from 2025) offers legal certainty but must be coupled with accessible compute and patient capital lest compliance costs stall experimentation. In infrastructure, Europe’s EuroHPC Joint Undertaking (JUPITER, LUMI, Leonardo, MareNostrum 5) and the nascent AI Factories provide a public supercomputing base that, if well governed, can become general-purpose AI compute for science, industry and the public sector. Sectoral data spaces (health, mobility, energy, manufacturing) complement this with trusted data.
In semiconductors, the European Chips Act (2023) set three pillars—R&D and pilot lines through the Chips JU; incentives for first-of-a-kind facilities; and a Chips Fund to bridge the equity gap—aiming at 20% of global production by 2030. IPCEI projects in microelectronics mobilise significant public support; ventures such as the Dresden joint foundry project (with production planned in mature nodes) exemplify the approach. The EIB Group (EIB+EIF) has expanded financing with debt and equity windows under InvestEU. Still, the European Court of Auditors has warned of fragmentation risks absent stronger governance.
Weaknesses remain: fragmented capital markets, late-stage financing gaps, high energy costs and permitting timelines, limited leading-edge logic, and under-developed OSAT/advanced packaging. Strengths include world-class firms in power electronics (SiC, GaN), sensors and microcontrollers; leadership in manufacturing equipment; and institutional capacity for multi-level coordination (e.g., NextGenerationEU, IPCEI). Europe’s near-term window is less about replicating 2–3 nm than about dominating the system: design, materials, packaging, high-volume mature nodes, and their integration with applied AI across sectors of strength (automotive, Industry 4.0, energy). Discussions of a possible “Chips Act 2.0” oriented also to foundational nodes are aligned with this logic.
A feasible roadmap (2025–2030) combines selective federalisation, financial programming and industrial reform. Governance: federalise critical resources by upgrading EuroHPC to a true “EuroCompute” mission—reserved quotas for SMEs, standard tooling and regulated pricing—and integrate the AI Act with pan-European sandboxes and a public model-evaluation capability. Finance: the EIB should act as Europe’s compute-and-chips bank, issuing Compute/Chip Bonds with InvestEU guarantees to fund energy-efficient data centres, AI Factories, first-of-a-kind foundries and OSAT; the EIF should scale a thematic Chips Fund-of-Funds to close the deep-tech equity gap. Industry: pursue concentration with competition—fewer national micro-projects, more cross-border integrators; build an EU OSAT backbone; and launch industrial doctorates in AI+chips funded by Horizon Europe/Digital Europe with national top-ups. Conditionality—transparency, open access to pilot lines, non-discrimination for SMEs—should be enforced by the Commission and competition authorities, with parliamentary scrutiny and audits by the Court of Auditors.
Europe thus need not “be the US” or “be China”; it can be Europe: a large market with competitive public compute, clear rules, patient capital and industrial clusters that connect AI and semiconductors where it enjoys comparative advantage.
Conclusion
China's AI project is a civilisational gamble at the intersection of technology, tradition and power. From the fundamental programmes of the 1980s to the current wave of generative models, AI has been conceived as a means of structural transformation, not an end in itself. The party-state has mobilised resources on a large scale, designed long-term plans and orchestrated an ecosystem in which public and private actors operate in close collaboration. This hybrid configuration—planning plus entrepreneurial dynamism—explains both its resilience and its speed.
Economically, AI is projected as an engine of productivity and a lever for transition to an innovation- and high value-added economy. The projected figures—from a domestic market worth $126 billion in 2025 to $600 billion annually in 2030—show a level of ambition that is difficult to match elsewhere (McKinsey, 2018; Forbes, 2024).
In the geopolitical arena, artificial intelligence functions as an instrument of industrial diplomacy which, in line with the Belt and Road Initiative, strengthens China's presence in the Global South and challenges US technological hegemony (Chan, 2025; Tobin, 2024).
Algorithmic governance, for its part, embodies the Confucian-Marxist synthesis highlighted by Creemers (2024) and other analysts: social harmony, moral rectification and virtue, combined with socialist planning, mass mobilisation and digital sovereignty. The deployment of the social credit system and of ex ante regulations on algorithms and generative models are not mere technical policies, but contemporary extensions of a tradition in which social order is conceived as the result of the moralisation of power. Unlike liberal approaches, which focus on the protection of individual rights, the Chinese model emphasises collective stability and the role of the state as guarantor of harmony: Confucian thinking reinforced with Marxist materialist philosophical principles.
The semiconductor sector clearly exposes the dialectic between vulnerability and resilience. Dependence on advanced nodes dominated by external actors limits China's ability to compete at the technological frontier, but international sanctions have encouraged a dynamic of efficiency-oriented innovation and revalued strategic resources such as gallium and rare earths. This mineral counterweight strategy, combined with sovereign wealth funds, venture capital and industrial diplomacy, demonstrates China's ability to turn constraints into vectors for accelerating its industrial policy. Ultimately, the semiconductor sector is today a field in which vulnerabilities are offset by domestic substitution, ingenuity in packaging and the exploitation of strategic resources.
However, partial success does not hide the remaining dilemmas. The private investment gap with the United States remains significant; dependence on advanced lithography equipment persists; and the ethical dilemmas arising from the deployment of AI—from privacy to algorithmic bias—require responses that go beyond mere efficiency. In this regard, critical voices within China itself have warned of the risks of ‘goal-oriented’ innovation without sufficient public debate. Researchers such as Yuan (2025), writing in Mandarin, have highlighted the energy and ethical dilemmas of hybrid models that seek efficiency at the expense of transparency, while academics linked to the Chinese Academy of Social Sciences have raised the need for a theory of ‘socialist digital civilisation’ that articulates technological development with environmental sustainability and social equity. These internal debates show that, far from being monolithic, the Chinese AI ecosystem is riven by tensions and genuine discussions about the way forward.
Comparatively, the Chinese model offers both strengths and challenges. Compared to the dynamism of the US private sector or the European regulatory framework, China's commitment to a hybrid formula maximises scale, strategic direction and mobilisation capacity. However, the price of this coherence is the limitation of pluralism in the definition of ends, which raises questions about the system's adaptability to unforeseen scenarios. The contrast with other historical experiences—Soviet cybernetics or the Cybersyn project in Chile—shows that China's uniqueness lies in having endowed algorithmic governance with a moral and civilisational foundation, anchored in Confucius and Marx, which gives it its own legitimacy vis-à-vis its citizens and the world.
Artificial intelligence in China therefore cannot be analysed in technological terms alone: it is a comprehensive project of modernity, with historical, philosophical and political roots that distinguish it from any other contemporary experiment, and one that mirrors the hegemonic ambitions driving the United States' own phase of empire building. This is a further reason why Europe must confront the issue, and do so immediately. China's achievements—in industrial applications, efficient models and the export of solutions—are already undeniable, as are its vulnerabilities in hardware and private investment. Looking ahead, three areas deserve special attention: the ability to close the semiconductor gap, the evolution of algorithmic governance in the face of demands for individual rights, and the international projection of an alternative model of digital modernity. Future research should closely monitor how China resolves these tensions, for not only its technological leadership but also the balance of the global order in the twenty-first century will depend on it.
For the international order, the key issue is whether the major powers can align innovation with governance and legitimacy. China's trajectory suggests that applied leadership may be as important as cutting-edge advances. The United States retains deep strengths in science and capital; the European Union can, if it chooses, turn regulatory clarity, public computing, and industrial depth into shared technological sovereignty.
Epilogue: Europe 2030 as a Digital and Industrial Superpower
The decisive issue is not whether Europe can become a first-rank actor in the digital order, but whether it wills it.
Artificial intelligence and semiconductors are the new battleground for global power. China and the United States have moved quickly, each with its own model of innovation, regulation and international outreach. Europe, on the other hand, has vacillated between its regulatory strength and its industrial weakness.
The material capacity exists: a market of 450 million consumers, world-class science, a diversified industrial base, and institutions capable of mobilising common resources when political will coalesces. The task is to transform this potential into sovereign technological power, and that can only be achieved through federalism.
Becoming a European superpower in AI and semiconductors entails three strategic moves.
- First, selectively federalise critical resources without reopening the treaties, making full use of existing legal bases in industry, the internal market and R&D to expand tried-and-tested European joint undertakings: converting EuroHPC (computing for AI) and the pilot lines and packaging facilities of the Chips Joint Undertaking into genuine federal champions with clear missions, transparent governance and binding multi-annual funding. The Commission should propose, and the Parliament and Council approve, a strengthened mandate for EuroHPC as a “EuroCompute” for industrial use, with quotas reserved for start-ups and SMEs, standard tooling (frameworks and benchmarks) and regulated prices to avoid rationing by cost. In parallel, the AI Act framework (GPAI obligations from August 2025) can be integrated with pan-European sandboxes and a model-assessment system hosted by EuroHPC/AI Factories, so that regulation becomes an accelerator rather than a bottleneck (DG CNECT; EuroHPC JU).
- Second, create a single market for technological capital, with the EIB (European Investment Bank) and the EIF (European Investment Fund) acting as Europe's bank for computing and chips. The EIB could issue thematic instruments such as ‘Compute Bonds’ and ‘Chip Bonds’ backed by an InvestEU guarantee, channelling debt and quasi-equity towards efficient data centres, AI Factories, first-of-a-kind (FOAK) foundries and OSAT capacity on European territory, while the EIF could expand the Chips Fund as a thematic fund-of-funds, co-investing with pan-European VCs to close the deep-tech equity gap. Investments should be aligned with the priorities of the Parliament and the Commission: both institutions should set annual gap-closing targets (packaging capacity, mature nodes, gigawatts of green computing) and request performance audits from the European Court of Auditors, so that scale does not lead to capture or waste (EIB/EIF, InvestEU; ECA, 2025). This EIB/EIF pillar should also serve as the basis for a full monetary, fiscal and banking union for a federal Europe, not only in AI and semiconductors.
- Third, consolidate a pan-European industrial ecosystem, reducing fragmentation and promoting cross-border integrators that produce continental leaders without sacrificing pluralism and competition. Europe needs concentration with competition: less national dispersion and more cross-border projects with anchor integrators; a European OSAT value chain connecting design, wafer fabrication, assembly and testing; and talent programmes linked to clusters (e.g., year-in-industry placements in AI and chips, industrial doctorates) funded by Horizon Europe/Digital Europe with top-ups from Member States. In semiconductors, the rule of thumb is 80/20: 80% bet globally on segments where Europe is already strong (power, sensors, microcontrollers, advanced packaging, photonics), 20% on leading-edge logic through joint ventures with non-EU partners under ecosystem conditionality (design centres, local suppliers, training). The approval of the ESMC project in Dresden and the IPCEIs show that this path, though complex, is operational; the challenge is implementation and speed. Europe faces more than enough competition from rival global hubs. Have we really not understood, although it has grown more evident every six months for many years and our position is now unsustainable, that we are defeating ourselves and sowing the seeds of the populism and hyper-nationalism that destroy rights along the way?
Such federalism must be matched by responsibility: Court of Auditors scrutiny, parliamentary control, and open-access clauses for SMEs and start-ups. In this way, scale becomes a guarantee against capture rather than a risk of it. Europe need not mimic the US or China. It can pursue a path truer to its strengths—intelligent regulation, multinational cooperation with Japan, South Korea and Taiwan as first-priority development partners, and leadership in advanced industry—to be not merely a regulatory market, but a sovereign power capable of shaping the international order of the twenty-first century.
References
Allen, G. C. (2019). Understanding China’s AI strategy. Center for a New American Security.
Allison, G. (2017). Destined for war: Can America and China escape Thucydides’s trap? Houghton Mifflin Harcourt.
Arcesati, R. (2024). China’s advanced AI research. CSIS.
Arcesati, R. (2025). China’s AI ecosystem. MERICS.
Beer, S. (1972). Brain of the Firm: Managerial Cybernetics of Organization. Allen Lane.
Cairn.info. (2025). La stratégie chinoise en matière d’IA [China’s AI strategy]. Revue française d’études chinoises, 45(2), 112–130.
Chan, K. (2025a). Testimony on Made in China 2025. U.S.–China Economic and Security Review Commission.
Chan, K. (2025b). China is trying to reshape global supply chains. High Capacity (Substack).
Chan, K., et al. (2025). Full Stack: China’s evolving industrial policy for AI. RAND Corporation.
Confucius. (2019). Analectas [Analects] (S. Leys, Trans.). Edaf.
Copeland, J. (1993). Artificial intelligence: A philosophical introduction. Blackwell.
Creemers, R. (2024). China’s digital nationalism. Oxford University Press.
CSIS. (2025). Wins and losses: Chinese industrial policy’s uneven success. Center for Strategic and International Studies.
Dai De, Dai Sheng, Ma Rong, & Zheng Xuan. (2013). El Libro de los Ritos [The Book of Rites: The Confucian classic of ethics and values] (A. Araujo, Trans.). Quadrata.
DeepSeek. (2025). DeepSeek-R1 technical report. Shanghai AI Laboratory.
Ding, J. (2024). China’s open-source AI ecosystem. Brookings Institution.
EuroHPC Joint Undertaking. (2024–2025). AI Factories and EuroHPC supercomputing.
Executive Yuan (Taiwan). (2023). Taiwan AI Action Plan 2.0 (2023–2026).
Gerovitch, S. (2004). From Newspeak to Cyberspeak: A History of Soviet Cybernetics. MIT Press.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
Hauge, J. (2023). The future of the factory. Oxford University Press.
Hauge, J. (2025). Articles and podcast contributions on industrial policy and innovation.
Hauge, J., & Chan, K. (2024). Industrial diplomacy in US–China relations. RAND Corporation.
ISPI. (2025). Innovazione IA con caratteristiche cinesi [AI innovation with Chinese characteristics]. Istituto per gli Studi di Politica Internazionale.
ITIF. (2024). How innovative is China in semiconductors? Information Technology and Innovation Foundation.
Japan Cabinet Office (CSTI). (2022). AI Strategy 2022.
Japan METI. (2024). AI Guidelines for Business v1.0.
Japan AI Safety Institute (J-AISI). (2024–2025). Safety and evaluation reports.
Juhász, R., & Lane, N. (2024). Industrial policies in comparative perspective. NBER Working Paper.
Lee, K. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton Mifflin Harcourt.
Lenin, V. I. (2012). El Estado y la revolución [The state and revolution]. Alianza Editorial.
Luger, G. F. (2009). Artificial intelligence: Structures and strategies for complex problem solving (6th ed.). Pearson.
Mao, Z. (1968). Obras Escogidas de Mao Tse-Tung [Selected works of Mao Tse-Tung: On Practice and On Contradiction]. Ediciones en Lenguas Extranjeras.
Marx, K. (2017). Crítica del programa de Gotha [Critique of the Gotha Programme].
Marx, K., & Engels, F. (2019). El manifiesto comunista [The Communist Manifesto] (L. Cortés Fernández, Trans.). Austral.
Mazzucato, M. (2013). The entrepreneurial state. Anthem Press.
Medina, E. (2014). Cybernetic Revolutionaries: Technology and Politics in Allende's Chile. The MIT Press.
McKinsey Global Institute. (2018). Notes from the AI frontier.
MERICS. (2019). Made in China 2025—Study.
MERICS. (2025). Evolving Made in China 2025. Mercator Institute for China Studies.
Ministry of Science and ICT (Republic of Korea). (2024). AI Basic Act.
National Science and Technology Council (Taiwan). (2024). Draft AI Basic Act.
NISTEP (Japan). (2025). Comparative analysis of AI competition in Asia.
OpenEdition. (2025). L’intelligence artificielle dans l’enseignement supérieur en Chine [Artificial intelligence in higher education in China]. Revue d’études chinoises, 12(1), 45–67.
Peters, B. (2017). How Not to Network a Nation. The Uneasy History of the Soviet Internet. MIT Press.
Rodrik, D. (2015). Economics rules. W. W. Norton.
Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
State Council of the PRC. (1986). Programme 863.
State Council of the PRC. (2015). Made in China 2025.
State Council of the PRC. (2017). New Generation Artificial Intelligence Development Plan.
Stanford HAI. (2025). AI Index 2025.
Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (2nd ed.). MIT Press.
TechWire Asia. (2025). China’s open AI models.
Tobin, L. (2024). China’s industrial diplomacy. Harvard University Press.
UC Berkeley. (2025). AI benchmarks report.
Wang, H. (2025). 中国AI地缘政治雄心 [China’s AI geopolitical ambitions]. Chinese Journal of International Politics, 18(2), 210–235.
WIPO. (2025). World Intellectual Property Report. World Intellectual Property Organization.
Yuan, L. (2025). 中国人工智能发展论文 [On China’s AI development]. Journal of Chinese AI Studies, 20(4), 567–589.
Zengzi, & Zisi. (2024). Gran Saber y Doctrina de la medianía [The Great Learning and The Doctrine of the Mean] (F. Hayes, Trans.). Clásicos Confucianos Chinos.
Zhu, J. (2024). China’s science and technology policy. MIT Press.