Taiwan has become the latest country to prohibit its government agencies from using the Chinese startup DeepSeek's artificial intelligence (AI) platform. Taiwan's Ministry of Digital Affairs cited national security concerns, particularly the risk of information leakage through cross-border data transmission. These concerns are echoed by other nations, including Italy, which recently blocked the AI service after questioning its data handling practices. Owing to its Chinese origins, DeepSeek has drawn scrutiny over its use of personal data, and many organizations have restricted access to it for similar reasons.
DeepSeek has gained attention for its open-source chatbot, which is highly capable yet far cheaper to build than its competitors. However, it has also faced security challenges, including susceptibility to jailbreak techniques. These weaknesses, coupled with its censorship of sensitive topics, have raised alarms about potential misuse of the platform. DeepSeek has also been the target of several cyberattacks: malicious actors launched distributed denial-of-service (DDoS) attacks against its systems in late January 2025. The attack traffic originated primarily from the U.S., U.K., and Australia, suggesting a coordinated effort to disrupt its operations.
In addition to the DDoS attacks, malicious actors have exploited DeepSeek's growing popularity by distributing harmful packages through the Python Package Index (PyPI). The packages, masquerading as legitimate API clients for DeepSeek, were designed to steal sensitive data from developers and were downloaded hundreds of times before being removed in late January 2025. The cybercriminals behind them used a command-and-control server to collect the stolen information, compounding the platform's security troubles.
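For developers worried about this kind of lookalike-package attack, a simple local audit can help. The following is a minimal, illustrative sketch, not taken from any incident report: the trusted package name and the similarity threshold are assumptions chosen purely for illustration. It flags installed Python distributions whose names closely resemble, but do not exactly match, a package name you trust, a common typosquatting pattern.

```python
# Illustrative sketch: flag installed distributions whose names closely
# resemble a trusted package name. Uses only the standard library.
# TRUSTED and THRESHOLD are assumptions for demonstration purposes.
from difflib import SequenceMatcher
from importlib.metadata import distributions

TRUSTED = "deepseek"   # hypothetical legitimate name an attacker might imitate
THRESHOLD = 0.8        # names this similar, but not identical, look suspicious

def suspicious_lookalikes(trusted: str = TRUSTED) -> list[str]:
    """Return installed package names that nearly match the trusted name."""
    hits = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if not name or name == trusted:
            continue  # skip unnamed metadata and the genuine package itself
        ratio = SequenceMatcher(None, name, trusted).ratio()
        if ratio >= THRESHOLD:
            hits.append(f"{name} (similarity {ratio:.2f})")
    return hits

if __name__ == "__main__":
    for hit in suspicious_lookalikes():
        print("Possible typosquat:", hit)
```

A check like this is no substitute for verifying a package's publisher and source repository before installing it, but it can surface near-miss names that slipped into an environment unnoticed.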
Incidents like these highlight the security risks that accompany any widely used AI system.
The rise of AI technologies, including DeepSeek, has sparked a global conversation about the risks these tools pose. The European Union's Artificial Intelligence Act, whose first provisions took effect in February 2025, prohibits AI applications that pose unacceptable risks. In the U.K., a new AI Code of Practice seeks to secure AI systems against threats such as data poisoning and model obfuscation. Meanwhile, Meta has committed to halting the development of AI models it deems too risky. As AI tools continue to advance, their potential to be weaponized by malicious actors remains a significant concern, prompting both the public and private sectors to strengthen their cybersecurity measures.