There are risks involved in the contrasting approaches to artificial intelligence taken by the US and China
China and the United States are taking opposite approaches to governing artificial intelligence, and the contrast has big implications for both their global competition and the safety of their citizens.
China has built a robust domestic AI regulatory system for public and commercial spaces but does not regulate AI use in the military, the opposite of the American approach. The U.S. has published robust rules for AI-driven military systems but done nothing to regulate the tech industry’s hasty release of generative AI models like OpenAI’s GPT-4, the model behind ChatGPT, to the public.
China’s approach to generative AI elevates political stability over innovation, with strict regulation of the private/commercial sector. On April 11, the Cyberspace Administration of China (CAC) issued "Measures for the Management of Generative Artificial Intelligence Services." These draft principles cover "deep synthesis" technologies, including machine-generated text, images, audio and visual content, especially deepfakes.
PRC regulations prohibit AI-driven discrimination, hold Chinese companies liable for any harm, and mandate security assessments before AI models are released. These types of measures would also benefit citizens in democratic countries.
But the Chinese government’s top priority is political alignment with the Chinese Communist Party. Chinese companies like Alibaba, Tencent and Baidu are held responsible for strict content moderation: their products must follow "socialist core values" and not challenge state authority.
AI models must either be trained on censored data or, through what’s called reinforcement learning from human feedback, rely on large numbers of people to catch and correct missteps. As the field of artificial intelligence advances, China is consolidating political censorship and safeguarding state control.
On the other hand, China does not regulate AI military use by the People’s Liberation Army (PLA). The PLA’s top priority is to rapidly apply AI to its missions and achieve what the leadership calls "intelligentization" of warfare. There is no visible framework of trustworthy, transparent or ethical Chinese military restraint to match the caution and control exercised over commercial companies developing AI in China.
By contrast, this year the U.S. published robust regulations on military AI, even as OpenAI (backed by Microsoft) released GPT-4 with no U.S. government regulation of the private sector. In January 2023, the Pentagon published an updated directive, "Autonomy in Weapon Systems," stipulating that such systems be under human control, transparent and explainable, protected by top cyber defenses, equipped with clear feedback loops, and able to be turned off.
U.S. military AI systems must also meet ethical requirements, meaning they are responsible, equitable, traceable, reliable and governable. There is nothing like this to govern AI use by the PLA.
Yet, when ChatGPT was released upon the world, the U.S. government had nothing new in place to reduce the harms of such a powerful AI model.
Because expensive computing power is the secret to AI innovation, cutting-edge models are in the hands of relatively few commercial actors like Google and Microsoft (through OpenAI). They compete for market advantage, heedless of the dangers to the public, and the U.S. government is years behind them. The European Union has had an AI Act ready to go for two years, but the EU legislation lags recent developments and will not even take effect until 2025.
What risks are China and the U.S. taking in their contrasting approaches? China is slowing AI innovation in its domestic sphere. Pressured to answer ChatGPT (which is not available in China), Baidu rushed out its own large language model, Ernie Bot, in March 2023.
But Ernie Bot is years behind OpenAI’s ChatGPT, makes frequent mistakes, is available only to select Chinese companies, and disappointed most Chinese observers. Those who fear that U.S. domestic regulation would let China charge ahead have not paid attention to China’s own stringent domestic regulations.
At the same time, China is not gaining an unmitigated military advantage through its flat-out pursuit of AI. China is moving fast to integrate AI capabilities into every aspect of its military operations, including information warfare, intelligence analysis, targeting, and fully autonomous weapons. But so is the United States. Smart guardrails on military AI help the United States make choices that are safer, more trustworthy, and better integrated with human decision-making in warfare. The obstacles to rapid U.S. military AI innovation are bureaucratic and cultural, not regulatory.
Failing to put effective domestic AI regulations in place only hurts Americans. Accessible large language models can generate disinformation at enormous scale, code software for nefarious purposes, produce novel biological toxins, rapidly dislocate millions of workers, and destabilize democratic systems.
Unless we act to protect Americans from the dangerous effects of untested AI models, determining who is winning the U.S.-China military competition may be irrelevant.
Audrey Kurth Cronin is a trustees professor of security and technology and the director of the Institute for Politics and Strategy at Carnegie Mellon University. Her career incorporates experience in both academic and policy positions, in the U.S. and abroad.