When an international consortium of government AI safety institutes met for the first time in San Francisco last November, China was conspicuously absent. Despite spending billions to become an AI superpower, the nation had yet to officially launch an equivalent national safety body. While key Chinese figures were invited as individual experts, they were shut out of formal meetings reserved for member institutes. That struck Tsinghua University’s Xue Lan as a problem. “If you wanted to be safe,” Xue says, “you wanted to be sure that indeed all the major players are in the network.”
The episode underscored a problem Xue and his colleagues were already working to solve. They announced their solution, the China AI Safety and Development Association, ahead of the Paris AI Action Summit in February. Unlike its U.S. and U.K. counterparts, which were formed as newly created bodies, the association coordinates existing institutions with “full support” from the government, Xue says.
The group has no single director. While Turing Award laureate Andrew Yao is its “spiritual leader,” much of the coordination falls to Xue, who is also chair of China’s national expert committee for AI governance. He describes his role as a “bridge” between technical experts like Yao and policymakers.
Will China be invited to future meetings as a full member? “We certainly hope so,” Xue says.