DeepSeek AI Database Exposed: Over 1 Million Logs & Secret Keys Leaked

By jinia



Major Security Breach: DeepSeek AI Database Left Unprotected

DeepSeek, a rapidly rising Chinese artificial intelligence (AI) startup, has suffered a significant security lapse, exposing a critical database to the internet. This misconfiguration could have allowed cybercriminals to access highly sensitive data.


According to Gal Nagli, a security researcher at Wiz, the exposed ClickHouse database allowed full control over database operations and access to internal data, posing a severe security risk.


What Was Leaked?

The database exposure included over one million log lines containing:

  • Chat history

  • Secret keys

  • Backend details

  • API secrets

  • Operational metadata


Security researchers at Wiz promptly reported the issue, and DeepSeek patched the exposure. Before the fix, however, the database was publicly accessible at oauth2callback.deepseek[.]com:9000 and dev.deepseek[.]com:9000, allowing unauthorized access to critical internal data without any authentication.
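As a rough, hypothetical illustration of what "publicly accessible without authentication" means here, a single unauthenticated HTTP request is enough to confirm that a ClickHouse instance is reachable; the host below is a placeholder, not one of the exposed DeepSeek endpoints.

```python
# Hypothetical sketch: confirming a ClickHouse HTTP interface is reachable
# without credentials. The host is a placeholder; never probe systems you
# do not own or have permission to test.
import requests

HOST = "http://clickhouse.example.com:8123"  # placeholder; 8123 is ClickHouse's default HTTP port

resp = requests.get(HOST, timeout=5)
# A default ClickHouse HTTP interface replies "Ok." to a bare GET request,
# which signals that the service answers without any authentication.
print(resp.status_code, resp.text.strip())
```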


Exploitation Risks: Privilege Escalation & SQL Injection

Wiz highlighted that attackers could leverage ClickHouse's HTTP interface to execute arbitrary SQL queries directly from a web browser. This misconfiguration not only allowed full database control but also raised the risk of privilege escalation within DeepSeek's infrastructure.
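To make the risk concrete, here is a minimal sketch of how ClickHouse's HTTP interface executes SQL supplied in a simple query parameter, which is why a browser alone would suffice; the endpoint and table name are illustrative assumptions, not details taken from the Wiz report.

```python
# Illustrative only: running SQL against an unauthenticated ClickHouse HTTP
# interface. The endpoint and table name are hypothetical placeholders.
import requests

ENDPOINT = "http://clickhouse.example.com:8123"  # placeholder endpoint

# ClickHouse's HTTP interface executes SQL passed in the `query` parameter,
# so the same request could be issued straight from a browser's address bar.
resp = requests.get(ENDPOINT, params={"query": "SHOW TABLES"}, timeout=5)
print(resp.text)  # table names are returned as plain text, one per line

# The same mechanism would expose row-level data, e.g.:
# requests.get(ENDPOINT, params={"query": "SELECT * FROM some_log_table LIMIT 10"})
```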


It remains unclear whether threat actors have accessed or downloaded this data before the security fix was implemented.


AI Security Risks: The Growing Threat Landscape

Nagli emphasized that the rapid adoption of AI services without strong security measures presents a major risk. While discussions on AI security often focus on futuristic threats, basic cybersecurity lapses—such as misconfigured databases—can be just as damaging.


"Protecting user data should be the top priority for security teams," Nagli stated. "AI engineers must collaborate closely with security professionals to prevent such exposures."




DeepSeek’s Meteoric Rise & Privacy Concerns

DeepSeek has been making waves in the AI space with its open-source models, touted as cost-effective rivals to OpenAI's technology. Its reasoning model R1 has even been described as "AI's Sputnik moment."


The company’s AI chatbot skyrocketed to the top of app store charts on both iOS and Android. However, it has also faced a wave of large-scale cyberattacks, prompting it to temporarily pause new user registrations.


Regulatory Scrutiny & Allegations of Data Misuse

Beyond this security breach, DeepSeek is facing intense scrutiny over its data handling practices and the sources of its training data.


Reports from Bloomberg, The Financial Times, and The Wall Street Journal suggest that both OpenAI and Microsoft are investigating whether DeepSeek used OpenAI’s API output without permission to train its own models, a technique known as distillation.


"We know that groups in [China] are actively using methods, including distillation, to replicate advanced U.S. AI models," an OpenAI spokesperson told The Guardian.


Regulatory Actions in Italy

DeepSeek’s apps were recently removed from app stores in Italy following inquiries from the country’s data protection regulator about its data handling practices and training data sources. It remains unclear whether the withdrawal was voluntary or prompted by regulatory concerns.


The Future of AI Security

As AI adoption continues to surge, security vulnerabilities like DeepSeek’s database exposure serve as a critical wake-up call. Organizations must implement robust cybersecurity measures to protect sensitive data and maintain user trust.


For more updates on AI security and data privacy, stay tuned to our latest coverage.