Windows Central
Technology
Zachary Boddy

Microsoft wants to put an end to its cybersecurity woes with AI and automation

(Image: Microsoft Azure)

What you need to know

  • A new report reveals that Microsoft has announced the Secure Future Initiative (SFI) to improve its cybersecurity efforts.
  • This is in direct response to growing cybersecurity threats and more frequent attacks and exploits levied against Microsoft services in recent years.
  • The initiative will see the company use AI and automation to improve the security and stability of its software development.
  • The company will also build an AI-powered cyber shield to detect new threats faster than current methods can.
  • Finally, Microsoft plans to improve security for its customers with more thorough encryption and superior out-of-the-box security options.

Cybersecurity is a real and pervasive threat in our digital world, and Microsoft is a constant target thanks to its influence in cloud, AI, and software. The company has been hit by a myriad of cyberattacks in recent months and years, with multiple security flaws and exploits discovered in Microsoft Azure and other cloud products. These dangerous weaknesses, combined with criticism levied against Microsoft's security efforts, have clearly driven the company to take substantial action, and that action has been revealed today.

According to a new report from The Verge, Microsoft has announced the Secure Future Initiative (SFI), a three-tier program to improve the company's cybersecurity across all of its products and for all of its customers. The initiative leans heavily on AI and automation, and will hopefully bring major advancements in cybersecurity for Microsoft (and the industry as a whole).

How is Microsoft using AI in security?

To begin, Microsoft intends to apply AI and automation to how its software is built, specifically through CodeQL, the code analysis engine developed by GitHub and integrated with Copilot. The AI-powered engine will help with static and dynamic code analysis as part of what Microsoft is calling the dynamic security development lifecycle, and should aid Microsoft's developers in finding and fixing bugs in software and AI development. With this tool, security exploits and flaws should be found and dealt with more reliably before they ever reach customers.
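To get a feel for what this kind of automated analysis looks like in practice, here is a minimal, hypothetical sketch of a CI step that builds a CodeQL database and runs GitHub's standard Python query pack against it; the file names and pack choice are illustrative assumptions, not details of Microsoft's internal pipeline.

```python
# Hypothetical CI step: run CodeQL static analysis over a Python codebase.
# Assumes the CodeQL CLI is installed and on PATH; names and paths are placeholders.
import subprocess
import sys

DB_PATH = "codeql-db"                 # where the CodeQL database is written
QUERY_PACK = "codeql/python-queries"  # GitHub's standard query pack for Python
RESULTS = "codeql-results.sarif"      # SARIF report that CI tooling can gate on

def run(cmd: list[str]) -> None:
    """Run a CLI command, echoing it, and fail the build if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> int:
    # 1. Snapshot the source tree into a CodeQL database.
    run(["codeql", "database", "create", DB_PATH,
         "--language=python", "--source-root=.", "--overwrite"])

    # 2. Run the query pack and emit SARIF results for review or CI gating.
    run(["codeql", "database", "analyze", DB_PATH, QUERY_PACK,
         "--format=sarif-latest", f"--output={RESULTS}"])

    print(f"Analysis complete; findings written to {RESULTS}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

In a pipeline like this, any finding from the queries can block a merge, which is the essence of catching flaws in the build phase rather than after release.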

We have carefully considered what we see across Microsoft and what we have heard from customers, governments, and partners to identify our greatest opportunities to impact the future of security. As a result, we have committed to three specific areas of engineering advancement we will add to our journey of continually improving the built-in security of our products and platforms. We will focus on 1. transforming software development, 2. implementing new identity protections, and 3. driving faster vulnerability response.

Charlie Bell, Executive Vice President

How is Microsoft using AI to transform software development?

Microsoft is looking to use AI in three specific ways. First, the team at Microsoft wants to transform the way software is developed using automation and AI. Most cybersecurity engineers and good software developers know that security has to be baked into software as it is developed and can't be tacked on afterward. Microsoft coined the term Security Development Lifecycle (SDL) in 2004, and is now evolving that idea into the dynamic SDL (dSDL), with the hope that AI will enable better continuous integration and continuous delivery across all phases of the lifecycle.

By promising to deploy CodeQL for code analysis across 100 percent of its commercial products, Microsoft hopes to use AI to eliminate software vulnerabilities in the build phase, before software is ever pushed to the public. Microsoft also discusses the need to meet customers where they are and work with legacy infrastructure by offering better security controls, such as multi-factor authentication, across all of its products.
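As a rough illustration of why multi-factor authentication blunts credential theft, here is a generic sketch of a time-based one-time password (TOTP) check using the third-party pyotp library; it is a conceptual example only, not a description of any Microsoft product.

```python
# Generic TOTP second factor, assuming the pyotp library (pip install pyotp).
# Illustrates the MFA concept only; not tied to any Microsoft implementation.
import pyotp

# Enrolment: generate a per-user secret and a provisioning URI the user can
# load into an authenticator app (usually rendered as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleApp"))

def login(password_ok: bool, submitted_code: str) -> bool:
    """A correct password alone is not enough; the time-based code must also match."""
    return password_ok and totp.verify(submitted_code)

# A stolen password without the current code fails the check.
print(login(password_ok=True, submitted_code="000000"))    # almost certainly False
print(login(password_ok=True, submitted_code=totp.now()))  # True within the time window
```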

How is Microsoft using AI to help with identity protection?

Microsoft isn't just working to protect itself; it wants to help protect everyone who uses its services. (Image credit: Fly:D @ Unsplash)

The second area being upgraded is the identity protection Microsoft uses. One of the most common causes of breaches and security incidents is the compromise of a user's credentials. Microsoft wants to make it harder for a malicious actor or criminal operator to log in as a user, even if they somehow obtain the username and password. The company plans to do this by moving identity signing keys to Azure hardware security modules (HSMs), which keep the keys encrypted at rest, in transit, and while they are being used in computation, with automated key rotation on top. Microsoft also plans to enforce standard identity libraries across the whole company for better security.
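For a sense of what HSM-backed keys and automated rotation look like from a developer's point of view, here is a hedged sketch using the public azure-identity and azure-keyvault-keys SDKs against a placeholder vault; Microsoft has not published how its own identity signing keys are handled, so this only illustrates the building blocks.

```python
# Hedged sketch: create an HSM-backed RSA key and set an automatic rotation policy
# with the public Azure SDK (pip install azure-identity azure-keyvault-keys).
# The vault URL and key name are placeholders, not Microsoft's internal setup.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import (
    KeyClient,
    KeyRotationLifetimeAction,
    KeyRotationPolicy,
    KeyRotationPolicyAction,
)

VAULT_URL = "https://example-vault.vault.azure.net/"  # placeholder endpoint

client = KeyClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

# Request an RSA key whose private material is generated and held in an HSM.
key = client.create_rsa_key("example-signing-key", size=3072, hardware_protected=True)
print(f"Created {key.name}, enabled={key.properties.enabled}")

# Ask the service to rotate the key automatically 90 days after creation.
policy = KeyRotationPolicy(
    lifetime_actions=[
        KeyRotationLifetimeAction(KeyRotationPolicyAction.rotate, time_after_create="P90D")
    ],
    expires_in="P180D",
)
client.update_key_rotation_policy("example-signing-key", policy)
```

Signing and decryption against such a key happen inside the HSM boundary, which is the property being leaned on here: even a compromised application never holds the raw key material.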

Most of these changes are not only internal to Microsoft but will apply to all of its customers, both personal and enterprise; as long as the techniques used are sound and built with both security and convenience in mind, they should be great improvements.

How is Microsoft using AI to respond faster to vulnerabilities?

The final way Microsoft plans to use AI in its security is through incident and vulnerability response, with rapid cloud updates to remediate vulnerabilities. Microsoft promises that, with AI, it will be able to cut the time it takes to mitigate cloud vulnerabilities by 50 percent.

Two of the biggest issues facing cybersecurity and corporations today are long detection times and long recovery times. Detection time is how long it takes for a company to realize it has been compromised or breached.

With a global shortage of more than three million people, organizations need all the productivity they can muster from their cybersecurity workforce. Additionally, the speed, scale, and sophistication of attacks creates an asymmetry where it’s hard for organizations to prevent and disrupt attacks at scale.

Brad Smith, Vice Chair & President

Recovery time is how long it takes to get a network and its devices back to a pre-compromise state. As we saw with the recent Boeing breach, it is often the ransomware group itself that has to tell a company it has been breached. Unfortunately, detection time can run to several months. Recovery, on the other hand, can take even longer, usually at significant business cost from lost revenue and the need to hire third-party incident response specialists.

Microsoft promises that Microsoft Security Copilot will allow incident responders to act at "machine speed" as they battle threat actors and attempt to repel attacks.

Why is Microsoft using AI for security?

AI is imperfect in its current iteration, but still provides key advantages versus traditional security measures. (Image credit: Kevin Okemwa | Bing Image Creator)

As we enter the age of AI, it has never been more important for us to innovate, not only with respect to today’s cyber threats but also in anticipation of those to come. We are confident making these changes will improve the security, availability, and resilience of our systems as well as increase our speed of innovation.

Charlie Bell, Executive Vice President

Microsoft has decided to integrate AI so completely into the company that, at its core, its software, security protections, and even incident response will be saturated with AI logic, and potentially with its fallacies. However, Microsoft was stuck between a rock and a hard place, and AI was one way it thought it could escape.

This is a needed evolution for Microsoft, as the company has been the target of, and at the center of, several high-profile security incidents and breaches over the last few years. Microsoft has been attacked by Chinese hackers, Russian hackers were able to compromise Microsoft Teams, DDoS attacks have been used to disrupt Office 365, and a Microsoft breach affected 65,000 people in 111 countries. These are just some of the issues it has had to deal with in recent memory.

Do you think Microsoft should be trusting AI with its security? Do you think AI can deliver on all of these promises made by Microsoft? Let us know in the comments.
