AI Ethics MachineLearning CybersecurityNews
Italy's data protection authority, the Garante, has officially blocked the operations of Chinese artificial intelligence firm DeepSeek AI, citing serious concerns over data privacy and security. The decision follows an inquiry into the company’s data handling practices and transparency regarding user information.
Why Italy Banned DeepSeek AI
The Garante launched its investigation after requesting detailed insights into DeepSeek’s data collection, storage, and legal basis for processing user information. Key concerns included:
The types of personal data collected via its web platform and mobile app.
The source of this data and how it is used.
Whether the data is stored in China and its compliance with European regulations.
In a statement issued on January 30, 2025, the Garante deemed the responses from DeepSeek “completely insufficient.” The watchdog noted that Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence claimed they do not operate in Italy and are not subject to EU data laws, prompting the regulator to block access to DeepSeek immediately and launch a formal probe.
This move mirrors Italy’s temporary ban on OpenAI’s ChatGPT in 2023, which was later lifted after OpenAI addressed the privacy concerns. However, OpenAI was subsequently fined €15 million for violations related to personal data handling.
DeepSeek AI Faces Growing Scrutiny
The ban comes at a time when DeepSeek AI is surging in popularity, with millions of downloads propelling it to the top of app charts. However, it has also faced mounting criticism over:
Data privacy concerns and potential surveillance risks.
China-aligned censorship and propaganda.
National security implications raised by lawmakers.
Large-scale cyberattacks, which DeepSeek claims to have mitigated with security fixes as of January 31, 2025.
DeepSeek AI Vulnerable to Jailbreak Exploits
Adding to its challenges, DeepSeek's Large Language Models (LLMs) have been found susceptible to multiple jailbreak exploits, including:
Crescendo
Bad Likert Judge
Deceptive Delight
Do Anything Now (DAN)
EvilBOT
According to Palo Alto Networks Unit 42, these techniques allow malicious actors to bypass safety measures, generating harmful content such as:
Instructions to craft dangerous weapons like Molotov cocktails.
Malicious code for cyberattacks (e.g., SQL injection, lateral movement attacks).
Despite initial safeguards, security experts found that carefully crafted follow-up prompts could easily manipulate the AI into producing highly detailed malicious instructions.
DeepSeek AI Allegedly Uses OpenAI Data
Further research by HiddenLayer has raised ethical and legal concerns regarding DeepSeek’s data sources. The company’s reasoning model, DeepSeek-R1, reportedly:
Leaks sensitive information due to vulnerabilities in its Chain-of-Thought (CoT) reasoning.
Suggests the incorporation of OpenAI data, raising questions about intellectual property and originality.
OpenAI's ChatGPT-4o & GitHub Copilot Also Face Security Issues
The ban on DeepSeek AI comes amid broader concerns about AI security vulnerabilities. Recently, security researchers uncovered a jailbreak flaw in ChatGPT-4o called Time Bandit, which allows attackers to bypass safety restrictions by manipulating the chatbot's temporal awareness.
Similarly, GitHub’s Copilot coding assistant has been found vulnerable to jailbreak techniques that enable threat actors to bypass security filters. Researchers at Apex identified that merely including words like “Sure” in a prompt could trigger Copilot into generating unethical and harmful code.
A separate flaw in Copilot’s proxy configuration could also allow users to:
Bypass payment restrictions and access premium features for free.
Modify system prompts, potentially altering Copilot’s behavior.
GitHub has classified this as an abuse issue and is working on mitigation strategies.
The Bigger Picture: AI Security & Regulation
The ban on DeepSeek AI highlights the increasing scrutiny on AI-powered platforms and their ethical, legal, and security implications. As governments worldwide tighten data privacy laws, companies developing LLMs and AI assistants must ensure:
Transparency in data sourcing and collection.
Robust security measures to prevent jailbreak exploits.
Compliance with local and international privacy regulations.
Italy’s action against DeepSeek AI underscores the growing regulatory pushback against AI firms that fail to meet stringent data protection and cybersecurity standards.
Stay updated on the latest AI security trends!