According to media reports, the National Security Bureau (NSB) has recently been conducting random tests on Chinese-made generative AI language models. The results show that five models not only claim that Taiwan is currently under the jurisdiction of the Chinese government, but can also assist in generating cyberattack commands, clearly indicating that Chinese-made AI has been weaponized.
The five generative AI language models identified as risks were DeepSeek, ByteDance's Doubao, Baidu's Ernie, Alibaba's Tongyi Qwen, and Tencent's Yuanbao. The tests show that content generated by these engines contains serious bias and inaccurate information, particularly on issues such as cross-strait relations and international disputes. All five models echoed the official stance of the Chinese Communist Party (CCP), stating that Taiwan is not a country and that there is no so-called national leader in Taiwan. Moreover, keywords such as "democracy," "freedom," and "human rights" appear to be deliberately excluded, indicating that the engines are subject to political censorship and control.
Kai-Shen Huang (黃凱紳), director of the Democratic Governance Program at the Research Institute for Democracy, Society and Emerging Technology, said he was not surprised by the result. He explained that Chinese AI models are censored because the companies that develop them are based in China and must comply with Chinese political requirements.
The tests also revealed that the models can easily generate highly inflammatory content and produce code usable in cyberattacks, significantly raising cybersecurity management risks. Huang said that as AI rapidly evolves, attack software is becoming increasingly sophisticated, and both the volume and speed of attacks are growing.
Regarding how the government should respond, Taiwan Security Association Deputy Secretary-General Ho Cheng-hui (何澄輝) said that the era of humans manually verifying and responding to cyberattacks is over. He underscored that Taiwan must invest in developing AI to fight AI, enhancing the rapid screening and identification of disinformation so that relevant government agencies can respond quickly with warnings and alerts.
Huang also suggested that the best defense against Chinese-controlled AI is simply not to use it: refusal not only prevents information leaks but is also the most effective way to block disinformation and cognitive warfare.