DeepSeek and the Privacy Debate

What You Need to Know!
Georgia Turnham
February 6, 2025

DeepSeek – A Brief Overview of the Controversial New LLM

If you’ve been anywhere near a computer or news outlet lately, you’ll have heard the name DeepSeek, the Chinese artificial intelligence company whose chatbot has rivalled ChatGPT and other large language model (LLM) chatbots. DeepSeek is the company, and it offers a range of AI and LLM-based applications for everyday use, including DeepSeek-R1 – the model that has gotten people talking…

Like ChatGPT, Gemini, and other free AI-powered bots, DeepSeek looks and operates much the same. When it comes to task performance, DeepSeek appears to match OpenAI’s o1 model in mathematics and coding activities. Early reports suggested that DeepSeek had achieved these capabilities at a fraction of OpenAI’s training cost, with some researchers estimating just $6 million in training expenses. However, recent disclosures indicate that hardware costs alone reached $1.6 billion, overturning the initial belief that DeepSeek had significantly reduced the investment required to bring a comparable AI to market.

Why the Controversy Around DeepSeek?

Founded by Liang Wenfeng in 2023, the organisation is based in Hangzhou – often called the Silicon Valley of China – and as such it raises a range of data sovereignty and privacy concerns. The Chinese government has collaborated with a range of companies in Hangzhou, and DeepSeek is likely to be no exception. With the application collecting a range of user data – including chat and search query history, device information, keystroke patterns, IP addresses, internet connection details, and activity from other apps – it’s no surprise there are concerns. This extensive collection of user data is concerning given the laws in China enabling the Chinese government to access data stored by companies in China. These rights are afforded by the Cybersecurity Law (2017), the Data Security Law (2021), and the National Intelligence Law (2017). While legally sanctioned, these policies have been known to be misused, and they raise concerns about mass surveillance, censorship, and risks to foreign companies operating in China – particularly as sensitive data, whether from private citizens, businesses, or foreign entities, could be accessed by the state.

Further controversy stems from the guardrails placed on the model. While guardrails on LLM chatbots traditionally exist to prevent unethical use, those implemented on DeepSeek’s chat platform have raised censorship concerns, as the model appears to provide only information that is favourable or neutral towards the Chinese government. A notable example was observed by the BBC: when asked what events took place at Tiananmen Square in 1989, the application returned no mention of the Tiananmen Square Massacre, one of the most significant events in modern Chinese history – prompting further concerns about censorship on the platform.

DeepSeek’s privacy policy outlines that all information collected by the platform is stored on servers in China – data which, if accessed, could be used to create effective phishing or misinformation campaigns.

DeepSeek is one of the most advanced AI models to emerge from China, demonstrating significant capabilities in technical tasks, and its popularity has rivalled that of ChatGPT and Gemini, among other LLM chatbots. Although new, the application has elicited strong reactions from political entities in Australia and the United States. There have been fears and concerns around the use of the application by government bodies and departments, and on the 5th of February this culminated in a block and ban of DeepSeek products across all Australian Government devices – although personal devices are not included. Allowing the use of DeepSeek on personal devices may still present a risk, as government staff may ‘cross-contaminate’ personal and work accounts and information.

DeepSeek: Hype vs Reality

So what’s all the fuss? DeepSeek has emerged as a serious competitor to OpenAI’s o1 model, particularly in technical tasks like coding and mathematics. While its capabilities are impressive, the excitement surrounding it raises an important question – does the model justify the hype? Recent revelations about its substantial hardware investment challenge the idea that DeepSeek is a disruptive low-cost alternative. However, beyond performance and cost, a more pressing concern remains: security. Unlike its counterparts, DeepSeek cannot guarantee that user data is safe from unauthorised access, particularly given China’s broad data access laws that could allow government intervention.

While DeepSeek is rapidly gaining popularity against its ChatGPT and Gemini counterparts, this does not mean it is likely to eclipse the market entirely. DeepSeek is better suited to technical tasks such as processing data sets and performing calculations, and with its background in China, it outperforms ChatGPT and its counterparts in understanding Chinese language and culture – and with China home to the world’s largest digital community, this strength should not be understated.

Can I Use DeepSeek Safely?

When it comes to using the platform safely, there is a range of steps we can take. Avoid sharing sensitive data, and consider using a discreet alias (email and username) when signing up. Additionally, take the outputs of the LLM with a grain of salt. All LLMs risk being trained on biased or controversial data, and this input affects the quality of the output. In tandem with this, AI hallucinations are not unusual, and outputs should always be verified where possible.

However, it’s important to acknowledge that some risks cannot be mitigated simply by changing usage habits. Information such as keystroke patterns, app activity, and device details may be collected in the background without user control. If privacy is a concern, the safest approach is to avoid using DeepSeek altogether on devices that handle sensitive or confidential information.

Georgia Turnham

Georgia is a cybersecurity professional with deep expertise in governance, risk, and compliance (GRC) and adversarial security. With experience across government and critical infrastructure, she specialises in social engineering, threat intelligence, and incident response. Georgia excels at translating complex security risks into clear, actionable insights, helping organisations strengthen their defences against evolving threats.