Taiwan has officially prohibited government agencies from using DeepSeek AI, a Chinese artificial intelligence platform, citing national security risks and data leakage concerns.
According to a statement from Taiwan's Ministry of Digital Affairs, reported by Radio Free Asia, "Government agencies and critical infrastructure should not use DeepSeek, as it endangers national information security. DeepSeek AI is a Chinese product involving cross-border data transmission, raising information security concerns."
This move aligns Taiwan with other countries scrutinizing DeepSeek's data handling practices. Italy recently blocked the AI platform due to a lack of transparency regarding data privacy. Multiple companies have also restricted access to DeepSeek AI, fearing potential security threats.
DeepSeek AI: A Controversial Yet Cost-Effective AI Model
DeepSeek AI has gained significant attention for being open-source and delivering capabilities comparable to top-tier large language models (LLMs) at a fraction of the cost. However, concerns persist regarding its vulnerability to jailbreak techniques and its alignment with Chinese government censorship policies.
Adding to its challenges, DeepSeek AI has been the target of large-scale cyberattacks. Cybersecurity firm NSFOCUS reported that between January 25 and 27, 2025, the platform faced three waves of distributed denial-of-service (DDoS) attacks against its API interface. The attacks, originating mainly from the U.S., U.K., and Australia, lasted an average of 35 minutes and employed methods such as NTP reflection and memcached reflection attacks.
On January 20 and January 25, additional DDoS attacks lasting around an hour were detected. Analysts describe these as "well-planned and organized" operations.
Cybercriminals Exploit DeepSeek's Popularity
Hackers have leveraged the AI's rising popularity to distribute malware. Fake DeepSeek-related Python packages, deepseeek and deepseekai, appeared on the Python Package Index (PyPI), harvesting sensitive data from compromised systems. These packages, downloaded over 222 times before removal on January 29, primarily targeted users in the U.S., China, Russia, Hong Kong, and Germany.
According to Russian cybersecurity firm Positive Technologies, "These packages were designed to steal user and system data, leveraging Pipedream as a command-and-control server. The irony? The malicious Python script was likely created with the help of AI."
Global AI Regulations Tighten Amid Rising Security Risks
This development coincides with the European Union's Artificial Intelligence Act, effective February 2, 2025. The act bans AI systems posing unacceptable risks and enforces legal requirements on high-risk applications.
Meanwhile, the U.K. government has introduced an AI Code of Practice, focusing on security threats such as data poisoning, model obfuscation, and indirect prompt injection.
Big Tech Responds to AI Threats
Tech giants are taking proactive steps to curb AI-related cybersecurity threats:
Meta has unveiled the Frontier AI Framework, pledging to halt AI models that reach critical risk thresholds. The company highlighted scenarios such as:
AI-driven end-to-end corporate compromises, even in secure environments.
Automated zero-day vulnerability discovery and exploitation before security patches are applied.
Large-scale AI-powered scam operations, such as romance scams (pig butchering frauds).
Google's Threat Intelligence Group (GTIG) disclosed that over 57 threat actors linked to China, Iran, North Korea, and Russia have attempted to manipulate AI models like Gemini for malicious purposes.
Anthropic, a leading AI safety research company, has introduced Constitutional Classifiers to defend against jailbreak attacks, minimizing unauthorized AI manipulations while maintaining efficiency.
The Growing AI Security Debate
As nations and corporations grapple with the risks of generative AI, Taiwan’s ban on DeepSeek AI underscores the growing concern over AI security and data privacy. With increasing cyber threats and regulatory measures, the future of AI will likely be shaped by stringent oversight and robust defensive strategies.