TechRadar
Ellen Jennings-Trace

US Navy bans use of DeepSeek “in any capacity” due to “potential security and ethical concerns”

[Image: DeepSeek on an iPhone]

  • The US Navy has banned the use of the new DeepSeek chatbot
  • DeepSeek is a Chinese-owned AI
  • The chatbot has emerged as a ChatGPT competitor

New AI chatbot DeepSeek has caused a stir recently, disrupting the market after its open-source large language model appeared to severely undercut existing models on cost.

But DeepSeek is a Chinese firm, owned and operated by a hedge fund in Hangzhou, and that has spooked US tech firms and government institutions alike. The US Navy has instructed all members to avoid using the technology in ‘any capacity’ due to “potential security and ethical concerns associated with the model’s origin and usage.”

The move is reportedly part of the Department of the Navy’s Chief Information Officer’s generative AI policy, and email recipients were asked to “refrain from downloading, installing, or using the DeepSeek model.”

AI’s privacy problems

DeepSeek’s privacy policy would probably unsettle the privacy-conscious among us, given that the chatbot collects users’ personal information and stores it on servers in China.

However, it's worth noting this is not specific to DeepSeek, and ChatGPT is also a privacy nightmare. Most of us have probably grown accustomed to claims of tech companies harvesting our data, but that doesn’t mean we should forget it's happening, especially with big and familiar industry names.

But the privacy policy isn’t the only concern, as DeepSeek has also suffered from its own success in the form of ‘large-scale malicious attacks’ against the platform. The incident, most likely a distributed denial-of-service (DDoS) attack, forced the platform to temporarily pause new signups.

"Open-source AI models like DeepSeek, while offering accessibility and innovation, are increasingly vulnerable to supply chain attacks triggered during large-scale cyberattacks” said Aditya Sood, VP of Security Engineering and AI Strategy at Aryaka.

“These attacks, where adversaries exploit the reliance on third-party dependencies, pre-trained models, or public repositories, can have severe consequences. Adversaries may tamper with pre-trained models by embedding malicious code, backdoors, or poisoned data, which can compromise downstream applications.”

Via CNBC
