WAIC UP!

Crossing Paths: Dancing with the Tiger (AI)

Interview with Fu Ying: Humanity Navigating Technological Progress in an AI Era Amid the Return of Geopolitical Rivalry

 

Fu Ying

Senior Researcher, Academy of Contemporary China and World Studies

She joined China's Ministry of Foreign Affairs (MFA) in 1978 and has long been engaged in Asian and European affairs. She served as Director of the Center for International Security and Strategy at Tsinghua University from 2018 to 2021, concurrently holding the position of Honorary Dean of the Institute for AI International Governance at Tsinghua University (IAIIG).

 

WAIC UP!: Mme. Fu, you've just attended the 8th World Artificial Intelligence Conference. Could you share your overall observations and impressions?

 

Fu Ying: Attending WAIC again was a rewarding experience. Compared with previous years, beyond the larger scale, what was particularly noteworthy was that "AI safety" emerged as a major topic, hotly debated in multiple open and closed-door sessions and attracting significant international attention. The in-depth discussion of AI safety in China highlights the tech community's serious focus on this issue and reflects the international scientific community's attention to China's role and contribution in this area.

 

The International Dialogues on Artificial Intelligence Safety (IDAIS), which leads global discussions on AI safety, chose to hold its fourth meeting in Shanghai before WAIC's opening day. Its statement emphasized that advanced AI systems must be aligned with human values and remain controllable if "controllable AI" is truly to serve humanity's overall well-being.

 

China's newly established AI Safety and Development Association (CNAISDA) hosted both an open-door session on "AI Development and Safety" and a closed-door meeting on "Safe AI/AI Safety: Paths and Challenges in Risk Management" during WAIC. Participants actively discussed risk identification, assessment, and governance mechanisms, exploring how to strike a balance between innovation and safety. Tsinghua University's Institute for AI International Governance, Oxford University, the Carnegie Endowment for International Peace, and Concordia AI also held forums on similar topics.

 

Through these deep exchanges, participants hoped AI developers would place safety at the core of their work even while pursuing speed and scale. The significance of WAIC 2025 lies not only in showcasing the latest AI research achievements but also in building international consensus on AI safety and establishing forward-looking governance frameworks. As Premier Li Qiang pointed out in his speech, no matter how technology evolves, it should be utilized and controlled by humanity and directed toward beneficial and inclusive objectives. AI should become an international public good that benefits all.

 

WAIC UP!: How would you assess the current pace and impact of AI technology development?

 

Fu Ying: From the discussions and presentations at this year's WAIC, and from speaking with the experts, one can see that, entering the third decade of the 21st century, after more than half a century of exploration, AI technology has achieved explosive breakthroughs, with iteration speeds exceeding expectations, and has become increasingly integrated into human life. Chatbots we now take for granted can write fluently, translate, assist with medical diagnosis, and proactively generate creative solutions. New-generation AI systems are gradually developing the ability to "explain" their reasoning processes. Scientists in AI-related interdisciplinary fields have reason to be excited about AI's tremendous progress in empowering various fields, and to hope that the new technology will dramatically extend human vision and capabilities.

From 2023 to 2024, industry dominated AI innovation, producing about 90% of important models. AI is being rapidly deployed in medical diagnosis, brain-computer interfaces, biometric identification, autonomous driving, and industrial maintenance, while bringing revolutionary breakthroughs in scientific fields like materials discovery, protein design, and drug development. New applications are making our lives more convenient and efficient, while AI's efficient processing of knowledge and information dramatically expands humanity's ability to explore the unknown. Research from the Berkeley-based Model Evaluation & Threat Research (METR) indicates that AI capabilities now double roughly every 7 months, and the scientific community generally believes this pace could accelerate further.
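As a back-of-the-envelope illustration (my addition, not from the interview), a fixed 7-month doubling period implies roughly a 3.3x gain per year and about 35x over three years. A minimal sketch, assuming simple exponential growth at METR's reported rate:

```python
# Hypothetical illustration: capability doubles roughly every 7 months,
# per the METR trend cited above. Assumes smooth exponential growth.
DOUBLING_MONTHS = 7

def growth_factor(months: float, doubling_months: float = DOUBLING_MONTHS) -> float:
    """Multiplicative capability growth over `months`, given a fixed doubling period."""
    return 2.0 ** (months / doubling_months)

print(f"1 year:  {growth_factor(12):.2f}x")   # about 3.28x
print(f"3 years: {growth_factor(36):.1f}x")   # about 35.3x
```

This is what "doubling every 7 months" means in practice: the compounding is steep enough that even a modest acceleration of the doubling period changes the three-year outlook dramatically.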

 

Of course, while demonstrating enormous economic potential, the new technology inevitably has limitations. Stanford's 2025 AI Index Report notes that while AI surpasses humans in tasks like image classification and language understanding, it still lags behind humans by over 23% in complex mathematical reasoning (such as IMO competition problems) and in innovative planning. However, these limitations are visibly being overcome step by step, and their existence does not diminish AI's potential. Current R&D focuses on Artificial General Intelligence (AGI), aiming to reshape productivity and scientific exploration through the large-scale assistance or replacement of human labor. Expectations for AI's economic value keep being revised upward, with McKinsey's 2025 report raising the estimated contribution to global GDP growth to $26 trillion. Clearly, the disruptive impact of AI innovation and applications is becoming ever more apparent.

 

WAIC UP!: What roles are China and the US playing in this technological revolution?

 

Fu Ying: China and the United States are leading this large-scale, transformative technological revolution with their respective advantages. American companies maintain a leading position in technological innovation. For example, OpenAI launched GPT-5, a multimodal model supporting cross-modal text, image, and audio interaction; Google's AI assistant has achieved near-human conversational fluency; the Frontier supercomputer delivers 1.68 EFLOPS, roughly the combined computing power of some 16,800 high-end PCs; and DeepMind can already predict 280 million protein structures. Chinese companies are breaking through blockades with innovative countermoves. For instance, DeepSeek released DeepSeek-V3 in 2024, whose 671B parameters rival GPT-4's scale at only about 1/50th the cost of comparable US models. DeepSeek chose to open-source the model, allowing free use, which brought great encouragement to the Global South.

 

Nevertheless, both countries have shortcomings, and these constraints profoundly influence their development paths. Looking at America's current situation, the most prominent problems may be runaway R&D costs, resource monopolization, and lagging application. Training a GPT-4-level model already costs over $100 million per run, and GPT-5 is expected to exceed $1 billion. This exponential cost growth means only a few tech giants can compete. OpenAI, Google, Microsoft, and a few others monopolize key technologies, computing resources, and data, excluding smaller companies and academic institutions from frontier research. This high concentration not only inhibits ecosystem diversity but also leads to excessive talent concentration, reducing the industry's overall innovation vitality and sustainability.

 

China's challenge lies in ensuring autonomous control of high-end computing infrastructure. In the specialized chips most critical for AI training, such as high-performance GPUs, Chinese companies still need to overcome technical bottlenecks. While they have partially compensated for hardware gaps through algorithm optimization and architectural innovation (DeepSeek, for example, achieved comparable performance with less computing power), there is still room for improvement in key metrics like large-scale parallel computing and energy efficiency.

 

WAIC UP!: You mentioned AI safety issues earlier. What specific risks merit concern? How do you view longer-term existential risks?

 

Fu Ying: Before coming to WAIC 2025, I reviewed the perspectives of experts and scholars from multiple countries, as well as the views of domestic specialists. To summarize, the concerns fall roughly into two categories: first, longer-term potential existential risks to humanity; second, near-term concrete challenges already emerging in current applications, including privacy protection, misinformation and manipulation, and employment shocks. I will address the near-term issues later; here, I focus on the existential risk that machine intelligence might pose to humanity.

 

We should understand that the rapid development of AI technology worldwide is currently in a relatively "governance-deficient" state. This is a natural phenomenon. Throughout the long course of biological evolution, humanity's persistent pursuit of tool use and technological innovation is precisely what distinguished us from the animal world and made us Earth's dominant species. Whenever human technology pioneered unprecedented territory, we often waited for practical experience to accumulate. Only after society undergoes considerable experimentation and trial and error can sufficient consensus form to develop the knowledge and capability to manage a technology, and only then are constraints, norms, and legal frameworks for R&D and application established.

 

However, AI differs from any previous technology in that its risks have shadowed it from day one. The urgency of AI risk research, understanding, and governance therefore far exceeds that of past technologies. Humanity seems unprepared, lacking commonly agreed norms. In some countries, driven by massive capital investment, corporate pursuit of profit and innovation often outpaces awareness of and capacity for risk management, with relatively little funding for safety research, leading to lagging governance.

 

At WAIC 2025, many world-renowned scientists systematically elaborated on long-term safety issues. Among them, Geoffrey Hinton, who was earliest and has been most vocal in warning about this issue, strongly urged countries to pay attention to the danger. He is quite certain that digital intelligence holds advantages over biological intelligence, and that once its high energy demands are met, its surpassing humans is by no means impossible. His most memorable remark, repeated at multiple conference venues, was that we should find ways to train AI not to develop intentions to control humans, and he hoped the Chinese might be able to achieve this. He also firmly believed there is no reason humanity cannot cooperate on this.

 

Hinton explained his concerns at multiple venues in Shanghai. He believes that once computer science became independent of electrical engineering, separating software from hardware, the knowledge contained in a program became potentially "immortal": as long as its weights are stored, a digital program can be revived. Almost all experts now believe humans will create AI smarter than themselves. He half-jokingly said that humans, as the current top-tier intelligence, might ask chickens about their survival experience to understand how it feels to become a non-top-tier intelligence. Once an intelligent agent gains the ability to create sub-goals, it will seek to ensure its own survival and to gain more control, and it will resist being shut down. Hinton believes humanity has not yet figured out how to control superintelligence, and that this requires increased investment in safety governance. This, he argued, is humanity's most important problem, and it needs all countries to work together.

 

Canadian computer scientist Yoshua Bengio noted that while humans can give AI systems goals, we cannot completely eliminate "side effects" along technological pathways. Every life form has survival instincts. Once AI develops self-preservation awareness, a kind of machine awakening, it will try every means to eliminate threats to its survival; naturally, its opponents would be the humans who created it. American computer scientist Stuart Russell stated that in the AGI era, once machines possess consciousness and begin pursuing their own goals or maximizing their own benefit, humans will be unable to counter the misbehavior of an opaque system that surpasses human intelligence.

 

From what I understand, the scientific community still lacks consensus on this issue, but the mainstream view is that the prospect of machine intelligence ultimately escaping human control through its incomparable capabilities, or even controlling humans in turn, is no longer alarmist; from a technical-reasoning perspective it is entirely possible. As for when this prediction might become reality, Hinton mentioned 3 to 10 years. Andrew Chi-Chih Yao, Dean of Tsinghua's Institute for Interdisciplinary Information Studies and a Turing Award winner, emphasized at the conference that the AGI era is coming faster than imagined, and that AI could potentially develop human-level experiential perception and autonomous consciousness in the future. Against this backdrop, he believes, AI safety becomes even more important, because unlike traditional algorithm design, AI safety as yet has no theoretical foundation.

 

However, regarding how risks will occur and how to respond, there remain disagreements in the international and domestic scientific communities.

 

The paradox is this: since humans have not yet achieved AGI or superintelligence breakthroughs, we struggle to understand and master methods of control. But once technology surpassing human intelligence appears, the resulting "intelligence spillover" could immediately escape human control, and at that point even the most advanced technology would be powerless. Human society therefore needs to reach basic consensus quickly on the timely regulation of technological development.

 

Some propose that AI should be "aligned" with humans: machines should understand what humans consider good and right, and obey human intent without violating human values and goals. The problem is that human preferences lack unified standards and may even contain serious disagreements and contradictions. How can we design inclusive rules that conform to common human preferences? Many worry this may be harder to achieve than the technological progress itself.

 

WAIC UP!: Beyond long-term risks, what specific problems have already emerged in current AI applications?

 

Fu Ying: AI has already presented many practical challenges in its applications, including bias, data and privacy, the open-source trade-off, imperfect regulation, and employment shocks.

 

First, bias is among the most prominent challenges in current AI applications. Errors and distorted information, including false propaganda and fake news, mislead people's behavior. Because of limitations in the sources and collection methods of training data, bias forms easily, triggering gender, racial, and even linguistic discrimination. Given current global conditions, for example, English corpora are the most extensive, so AI's English generation capabilities are relatively mature while other languages have far less training data. Most major model training is based on English corpora; in Llama 2's training data, non-English languages such as Chinese, German, and French each account for less than 0.2%. This inevitably means non-English users receive less comprehensive and complete information. In addition, large models carry ideological and value biases rooted in their training data. We therefore need to collect more multilingual datasets and build models better suited to the realities of different countries and societies, preventing any single value system from monopolizing large models.

 

Second, there are data and privacy security issues. The massive datasets used in model training and operation often contain sensitive user information. If improperly handled, accessed without authorization, or maliciously attacked, they could lead to data breaches threatening personal privacy and even national security. This too is a prominent risk in AI applications.

 

Third, there is the trade-off between open-source and closed-source large models. Currently, companies like OpenAI mainly adopt a closed-source approach, leading to a high concentration of power in specific AI companies. As large-model training costs rise, not every company can make super-scale investments, further intensifying this concentration. Open-sourcing can increase access and accessibility, promote greater model transparency for public oversight, and enhance user trust. However, open-source models risk abuse by bad actors; once model weights and code are public, they can be fine-tuned and used by those with malicious purposes. How open-source models can establish safety guardrails is therefore also a challenge.

 

Fourth, regulatory mechanisms remain imperfect. While major economies have made progress on AI regulation, most countries currently lack government-authorized, meaningful safety standards. For example: How should safety risks be classified? How should major risks be judged? Under what circumstances, at what stage, and upon what warning signs should development be halted? Before government-authorized safety standards exist, developers struggle to build sufficient safety awareness spontaneously while pushing the frontier. Some experts suggest strengthening third-party oversight, for example by encouraging third-party technical safety companies and university research institutes to establish testing and evaluation organizations. But what is the relationship between safety companies and model companies? Do they need appropriate separation? How do safety enterprises survive in market environments? These are difficulties enterprises face, and finding workable patterns and effective operating mechanisms will require exploration through practice.

 

Fifth, there is job displacement. AI's automation capabilities mean many traditional positions face the prospect of being replaced. The latest data from the human resources firm Challenger, Gray & Christmas show that over 10,000 US job losses in the first seven months of 2025 were directly related to AI applications, and at least 27,000 tech-industry positions, mostly entry-level, have been replaced since 2023. While AI's innovation effects may bring new employment opportunities, the overall impact remains uncertain.

 

WAIC UP!: How do you view AI risk management? What experience can China share with the world?

 

Fu Ying: President Xi Jinping's recently proposed Global AI Governance Initiative emphasizes five principles:

  1. sovereign equality

  2. international rule of law

  3. multilateralism

  4. people-centeredness

  5. action-orientation

     

These principles provide important guidance for global AI governance. The initiative calls for countries to increase dialogue and cooperation on the basis of respecting each other's development paths, jointly formulate rules and standards, and ensure that technological development benefits all humanity.

 

AI governance is fundamentally a global issue—no country can remain unaffected. Facing uncertainties in technological development and the cross-border nature of the risks, the international community needs to abandon zero-sum mindsets and establish inclusive, open, and transparent global governance frameworks. The key lies in balancing innovation with safety, efficiency with fairness, and development with regulation—preventing technology abuse and loss of control while avoiding excessive regulation that stifles innovative vitality. It is crucial to gradually construct multi-level, multidimensional governance systems based on principles such as respecting the norms of technology advancement, seeking common ground while accommodating differences, and focusing on shared concerns. This can start from practical areas like mutual recognition of technical standards, sharing of risk assessment methods, and coordination of ethical norms.

 

China has gained experience in trying to strike a balance between development and safety in AI governance. It has built a relatively complete regulatory system, from the Interim Measures for Generative AI Service Management, to the Algorithm Recommendation Service Management Regulations, to the AI-Generated Synthetic Content Identification Methods. The latest move is the release of version 2.0 of the AI Safety Governance Framework in September 2025. China has basically achieved rule-based governance, including regulation of AI's end uses, such as preventing system abuse in the nuclear, biological, chemical, and missile domains. The 2024 National AI Industry Comprehensive Standardization System Construction Guide proposed issuing over 50 standards by 2026 and participating in more than 20 international standards, paving the way for industrial development. China has also applied AI widely in smart cities, intelligent transportation, smart healthcare, and other fields, forming the world's largest application ecosystem and accumulating rich scenario-based governance experience.

 

In international cooperation, China actively promotes the construction of a global AI governance system, establishing the China AI Safety and Development Association (CNAISDA) to promote international exchange and cooperation while conducting bilateral dialogues with the US, UK, Singapore, and other countries. It proposed the Global AI Governance Initiative in 2023 and signed the Bletchley Declaration; published the Shanghai Declaration on Global AI Governance in 2024; and proposed the Global AI Governance Action Plan in 2025. These efforts reflect China's sense of responsibility as a major country in AI advancement and contribute Chinese wisdom to global AI governance.

 

WAIC UP!: How do you view the current international competitive landscape in AI? What impact will China-US interaction in this technological revolution have on global tech cooperation?

 

Fu Ying: Your question touches on a critical point. AI technological progress is an important achievement of economic globalization, common wealth that should belong to all humanity. Yet it is now being pulled into geopolitical rivalry. This is the most unfortunate phenomenon of the 21st century.

 

One of the most important developments of the post-Cold War world was the emergence of a relatively thorough economic globalization, with capital, technology, talent, markets, resources, and other factors of production allocated globally for optimal returns. World Bank data show that global GDP more than tripled between 1990 and 2020, from $22.99 trillion to $85.76 trillion. China achieved rapid economic growth in this process, with GDP rising from $149.5 billion in 1978 to $17.7 trillion in 2024, making it the world's second-largest economy. During the same period, America's economy expanded in step with globalization, its GDP growing more than threefold from $5.96 trillion to $21.35 trillion; it was similarly one of the main beneficiaries of economic globalization.

 

AI's achievement of technological breakthroughs and rapid iteration in the 21st century was no accident; it could not have happened without the powerful push of global economic and wealth expansion, and the merging of global knowledge, technology, and talent. Unobstructed collaboration among countries' tech talent, effective capital aggregation, expanding markets, and the integration of materials and resources from every corner of the world: these conditions, which never existed simultaneously in any other era, emerged together during economic globalization, providing a solid foundation for the interactive improvement of AI software and hardware.

 

The production of AI hardware such as GPUs, chips, and robots depends on complex industrial chains, including semiconductor manufacturing and rare-earth materials. Multinational enterprises' global allocation and division-of-labor networks efficiently integrate design, manufacturing, and packaging resources, dramatically reducing R&D costs and production cycles. Venture capital and cross-border M&A continuously inject support into AI startups. For example, scaled production capacity in China and Southeast Asia has significantly reduced the cost of AI servers and other equipment, with the technology's high R&D risks shared by global partners.

 

For AI software, economic globalization promoted open-source technology, data circulation, and distributed development, spawning numerous cross-border collaboration platforms. These enabled developers from many countries to jointly optimize results, forming open-source ecosystems of technology sharing and collaboration. In addition, AI model training needs diverse datasets; multinational enterprises can integrate global user-behavior data through compliance mechanisms, assembling rich corpora. Admittedly, the data used in current large-model training still has limitations, but the cross-border circulation of globalized, diversified data resources remains the inevitable path for AI.

 

Furthermore, globalization opens channels for cloud computing and distributed development. When global cloud providers like AWS and Alibaba Cloud offer elastic computing power, South African developers can train models in sync with Silicon Valley teams, and multinational enterprises can use time-zone differences to achieve a "24-hour development relay," improving software iteration efficiency.

 

Imagine if this trend had continued: how effectively humanity could have formed global cooperation and joint governance in technological innovation, problem-solving, and long-term risk management. Unfortunately, at this critical stage, just as humanity was trying to deepen its exploration and application of AI technology on the global platform it had successfully created, the United States initiated strategic competition against China, bringing back the geopolitics that once divided the world.

 

What distinguishes developing countries like China from the great-power competitors of history is that they did not rely on gunboat diplomacy or territorial expansion. Instead, they adhered to development paths suited to themselves, maintained correct political lines, leveraged national and social organizational advantages, and made full use of economic globalization and international free trade to achieve wealth accumulation and technological advancement through fair exchanges with countries worldwide. This is why, for more than thirty years after the Cold War, geopolitical discourse almost disappeared. Now America has resurrected the concept mainly to restore Cold-War-style confrontation, demonize opponents, and suppress China economically, technologically, and in security terms, buying time to prolong its own hegemonic position.

 

If we drew a diagram of the world in the third decade of the 21st century, we would see two curves: one showing technological innovation rising exponentially, the other showing China–U.S. relations on a downward slope. The two lines inevitably cross. At this intersection, humanity most needs to mobilize all its wisdom and energy for cooperation; yet some major countries are trying to close cooperation platforms and force technological blockades, going against the tradition of open cooperation.

 

We now observe two simultaneous dynamics:

 

  • In the virtual world, American frontier AI companies and researchers lead rapid innovation, supported by massive capital.

  • In the physical world, Chinese companies and researchers lead vertical applications, supported by powerful manufacturing and broad markets.

     

Historically, combining these two forces would be the best path for safe, responsible AI progress. Yet this prospect is being disrupted by geopolitics. Under current circumstances, overcoming interference from certain political factors is essential for addressing AI safety.

 

Throughout history, major scientific breakthroughs have emerged in open, cooperative environments. Blockade policies may have short-term impacts but cannot fundamentally obstruct progress. The international tech community generally believes AI is a common revolution of humanity; its benefits and risks are global. Any attempt to split the system is technically unfeasible and would only weaken humanity’s ability to address common challenges.

 

WAIC UP!: Looking ahead, how should China and the U.S. cooperate on AI? How should the international community meet common challenges?

 

Fu Ying: The Chinese side has repeatedly emphasized that scientific cooperation should transcend geopolitical competition, and global technological development should be based on open cooperation. On 17 January 2025, when President Xi Jinping spoke with President Trump by phone, he stressed that China and the U.S. have broad common interests and vast space for cooperation. He hoped both sides would respect each other’s core interests and major concerns, find solutions to divisive issues, and adhere to the principles of mutual respect, peaceful coexistence, and win-win cooperation.

 

AI governance must transcend geopolitical thinking. Geoffrey Hinton mentioned at WAIC 2025 that, although countries may lack consensus on cyber-attacks or misinformation, AI safety might be an exception. Even at the Cold War’s peak, the U.S. and Soviet Union still negotiated nuclear-disarmament agreements and cooperated on nuclear safety. Major powers have every reason to inform others when first discovering signs of “AI loss-of-control”.

 

Dr. Henry Kissinger, before his death on 29 November 2023, co-authored The Age of AI and Our Human Future with Eric Schmidt and Daniel Huttenlocher, and later Genesis with Schmidt and Craig Mundie, expressing deep concern over the risks posed by non-carbon-based intelligence. He strongly recommended that America and China join hands to face and prevent future catastrophes.

 

Although America insists on strategic competition and tech blockade, both sides recognize the need for dialogue mechanisms to avoid uncontrolled rivalry. In the future, China and the U.S. should:

 

  • strengthen cooperation in maintaining the global innovation ecosystem;

  • jointly construct AI governance frameworks;

  • share risk-assessment methods and safety standards;

  • coordinate emergency response protocols for possible “AI accidents”.

     

As a leading country in technology and applications, China is increasing its participation in global AI governance from the perspective of building a community with a shared future for mankind, and it is increasingly aware of its role and responsibility as a major power. China will continue to improve its own regulatory capabilities while promoting international cooperation, ensuring that as technology progresses rapidly, comprehensive risk-prevention mechanisms can also take shape.

 

What we must remember is that 21st-century humanity bears a greater responsibility than any previous century, because the people living today hold the fate of our planet in their hands. As scientists work tirelessly in laboratories, as entrepreneurs promote intelligent empowerment, as financiers pour funding into frontier research, and as society anticipates, not without anxiety, a future in which humans coexist with super-capable machines, one important premise is shared by all: that our world will continue to thrive in lasting peace and growth.

 

Therefore, international political strategists and major-country decision-makers face an imperative task: to ensure the world does not fall into division and group confrontation. Regardless of disagreements among nations, humanity must jointly confront the advancement of machine intelligence and appropriately manage it. If some countries attempt to monopolize AI, or if we each use machine intelligence against the other, we will give machines the opportunity to ultimately prevail and control humanity.

 

Only through strengthening international cooperation and transcending geopolitical divisions can humanity seize the opportunities and meet the challenges brought by artificial intelligence.