Get all your news in one place.
100’s of premium titles.
One app.
Start reading
Geekflare
Geekflare
Keval Vachharajani

Microsoft Security Steps Up AI Protection for DeepSeek and Other AI Apps

As AI adoption grows, so do security concerns. Especially the rise of China’s DeepSeek AI has raised the eyebrows of many. However, to address the security concerns over AI, Microsoft is stepping in to provide comprehensive security for AI applications, including the DeepSeek R1 model. The aim is to provide enterprises with a safe AI ecosystem. Here’s everything you need to know about. 

How Is Microsoft Securing DeepSeek R1? 

Just like other AI models, DeepSeek R1 is also now available on Azure AI Foundry and GitHub. As a result, it has to go through a number of security tests, including red teaming and automated safety checks. Microsoft also adds built-in content filtering with Azure AI Content Safety to block harmful content.

However, the question that might arise in anyone’s mind is, “Why do we have to secure these AI models in the first place?” The reason is that AI models process sensitive data and interact with users in real time, making them targets for cyberattacks. On top of that, open-source AI tools add more risks. Microsoft Defender for Cloud helps security teams track AI components and find potential vulnerabilities before attackers can exploit them.

What About the DeepSeek Consumer App? 

DeepSeek also has a consumer app that operates separately from its R1 model. Since its data security standards may differ, businesses need to monitor its usage. Microsoft Defender for Cloud Apps helps organizations track third-party AI apps, assess risks, and block access to high-risk applications.

How Can Companies Prevent Data Leaks?

The biggest concern for businesses is employees accidentally sharing sensitive information with AI tools. However, to prevent this, enterprises can use Microsoft Purview Data Loss Prevention (DLP), which helps prevent users from pasting or uploading confidential data into AI applications. It also allows companies to adjust security rules based on risk levels. 

However, threats like prompt injection attacks and data leaks can still happen even with security measures. To address this issue, there’s Microsoft Defender for Cloud, which continuously monitors AI applications, detects unusual activity, and alerts security teams with details on possible threats.

As more businesses adopt AI, strong security measures are essential. Microsoft’s approach helps organizations track, monitor, and protect their AI applications, whether on Azure AI Foundry or other platforms. 

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
One subscription that gives you access to news from hundreds of sites
Already a member? Sign in here
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.