The recent DeepSeek AI data breach has raised serious national security concerns and highlighted the risks associated with the rapid growth of AI companies.
A security flaw exposed more than a million lines of log data, including user chat histories, API secrets, and backend operational details. The leak stemmed from a publicly accessible ClickHouse database linked to DeepSeek’s systems that required no authentication.
The breach was discovered by Wiz, a cloud security firm, during routine reconnaissance. DeepSeek acted swiftly, securing the database within hours of notification.
Wiz researchers found that two non-standard ports (8123, ClickHouse’s HTTP interface, and 9000, its native TCP protocol) led to the exposed database. They were able to run arbitrary SQL queries and read sensitive information, such as plaintext chat histories and API keys. Because the database allowed this level of access, attackers could potentially have extracted files directly from DeepSeek’s servers, opening the door to corporate espionage.
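To make the exposure concrete, here is a minimal sketch of how a researcher might check whether a ClickHouse HTTP endpoint answers queries without credentials. The hostname and query are placeholders, not DeepSeek’s actual infrastructure; this illustrates the general mechanism only, and should only ever be run against systems you are authorized to test.

```python
# Sketch: probing an unauthenticated ClickHouse HTTP interface (default port 8123).
# The host below is a placeholder, NOT a real target.
from urllib.parse import urlencode

def clickhouse_query_url(host: str, query: str, port: int = 8123) -> str:
    """Build a URL for ClickHouse's HTTP interface.

    On a misconfigured server with no authentication, a plain GET request
    to this URL executes the SQL and returns the result as text.
    """
    return f"http://{host}:{port}/?{urlencode({'query': query})}"

# A researcher verifying exposure would start with a harmless query:
url = clickhouse_query_url("db.example.com", "SHOW TABLES")
print(url)  # http://db.example.com:8123/?query=SHOW+TABLES

# To actually send it (authorized testing only):
#   import urllib.request
#   print(urllib.request.urlopen(url).read().decode())
```

If `SHOW TABLES` returns data with no credentials, the server is wide open; that is essentially what Wiz observed, after which arbitrary `SELECT` statements exposed the log contents.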
The DeepSeek AI data breach has significant national security implications. U.S. Senators have expressed concerns about the startup’s emergence and its potential threats to privacy and safety.
One Senator stated, “The way… is a real threat to people’s privacy and safety.” These concerns are amplified by DeepSeek’s susceptibility to “jailbreaking,” in which crafted prompts coax the model into producing harmful outputs.
On January 28th, the U.S. Navy advised its members to avoid using DeepSeek “in any capacity,” citing security and ethical concerns. The incident also weighed on the stock prices of companies such as Nvidia and Oracle.
Compounding these worries, all user data is stored on servers located in China, raising questions about compliance with Chinese data laws, which differ significantly from Western regulations.
Companies in China can be compelled to cooperate with state intelligence efforts, including requests for user data. This has prompted experts to recommend against entering sensitive personal data, financial details, or health information.
Lukasz Olejnik, an independent consultant and researcher at King’s College London Institute for AI, warns,
“Be careful about inputting sensitive personal data, financial details, trade secrets, or information about healthcare. Anything you type could be stored, analyzed, or requested by authorities under China’s data laws.”
“Users who are high-risk… should be particularly sensitive to these risks and avoid inputting anything…”
To mitigate risks, experts suggest registering with throwaway email accounts and minimal personal information, or running DeepSeek’s open-source AI models locally so that prompts never leave the user’s device.
Wiz researchers advocate for stronger security frameworks across the AI sector, noting, “The world has never seen… many AI companies have rapidly grown without the security practices typically accompanying such widespread adoptions.”
The DeepSeek AI data breach underscores the precarious balance between AI innovation and security, presenting challenges for governments and consumers alike.