TechRadar
Fabien Rech

The growing threat of data breaches in the age of AI and data privacy


Artificial intelligence (AI) has contributed significantly to innovation within the cybersecurity industry. AI enhances existing security infrastructure through its ability to automate tasks, detect threats, and analyze vast swathes of data. Specifically, it can be used to identify trends in phishing attacks and is effective at spotting critical coding errors missed through human oversight. The technology can also simplify complex technical concepts, and even develop scripts and resilient code, ultimately helping to keep cybercriminals at bay.
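To make that phishing-trend point concrete, here is a minimal sketch of the kind of text classifier such automated detection builds on. The toy corpus, labels, and model choice are illustrative assumptions, not any vendor’s actual pipeline.

```python
# A minimal, hypothetical sketch of automated phishing-trend detection:
# a text classifier trained on labeled email bodies. The corpus below is
# a toy placeholder; a real deployment would train on a large mail archive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical (email_body, label) pairs; label 1 = phishing.
emails = [
    ("Your account has been suspended, verify your password here", 1),
    ("Quarterly report attached for review before Friday's meeting", 0),
    ("Urgent: confirm your one-time passcode to avoid lockout", 1),
    ("Lunch is moved to 1pm, see you in the usual room", 0),
]
texts, labels = zip(*emails)

# TF-IDF turns each email into a weighted word-frequency vector, letting
# the model surface recurring phishing vocabulary ("urgent", "verify",
# "password") across large volumes of mail.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)

model = LogisticRegression()
model.fit(X, labels)

# Score incoming mail: probability that each message is phishing.
incoming = ["Please verify your password immediately to avoid suspension"]
print(model.predict_proba(vectorizer.transform(incoming))[:, 1])
```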

However, AI is a double-edged sword: bad actors also have access to these tools and can leverage them for malicious purposes. Following the recent AI Safety Summit in the UK, questions around AI and its impact on data privacy have become more pressing than ever. As the technology evolves in real time, fear around it grows too, because it is difficult to predict how AI will continue to develop.

AI-powered systems rely largely on personal data to learn and make predictions, raising concerns about the collection, processing, and storage of such data. With easy access to AI tools like chatbots and image generators, tactics such as the use of deepfake technology to bypass data privacy regulations are becoming more prevalent. In fact, recent research from Trellix has shown that AI, and the integration of large language models (LLMs), is drastically changing the social engineering strategies used by bad actors.

With cybercriminals adopting AI at speed, organizations need to stay ahead of the curve to avoid falling victim to these attacks and losing vital data. So how can organizations do this?

Malicious use of Generative AI

The internet is filled with tools that use AI to make people’s lives easier. Tools such as ChatGPT, Bard, and Perplexity AI come with security mechanisms designed to prevent the chatbot from writing malicious code.

However, this is not the case for all tools, especially those being developed on the dark web. The availability of these tools has led to the rise of ‘script kiddies’: individuals with little to no technical expertise who use pre-existing automated tools or scripts to launch cyberattacks. It is important that they are not dismissed as unskilled amateurs, as the rise of AI will only make it easier for them to execute sophisticated attacks.

It’s clear that today’s AI applications offer potent and cost-effective tools for hackers, eliminating the need for extensive expertise, time, and other resources. Recent developments in AI have led to the emergence of large language models (LLMs), which can generate human-like text. Cybercriminals can leverage LLM tools to improve the key stages of a successful phishing campaign, gathering background information and extracting data to craft tailored content. This makes it easy for threat actors to generate phishing emails quickly, at scale, and at low marginal cost.

Infiltrating businesses through AI voice scams

Our recent research found that, according to 45% of UK CISOs, social engineering tactics are the number one cause of major cyberattacks. Cybercriminals are increasingly turning to technology to automate social engineering, making use of bots to gather data and deceive victims into sharing sensitive information such as one-time passwords. AI-generated voices play a huge role in this.

These AI voice scams mimic human speech patterns, making it difficult to differentiate between real and fake voices. This approach reduces the need for extensive human involvement and minimizes post-attack traces.

Scammers use these voices alongside psychological manipulation techniques to deceive individuals, instilling trust and urgency in their victims to make them susceptible to manipulation. In our November Threat Report, we found that AI-generated voices can also be programmed to speak multiple languages, allowing scammers to target victims across diverse geographic regions and linguistic backgrounds.

As a result, phishing and vishing attacks continue to rise as threat actors leverage these tactics on live phone calls to manipulate businesses into sharing company data. Amid ever-evolving phishing threats, organizations need to remain one step ahead of cybercriminals or risk exposing their systems, employees, and valuable data to attack.

Security teams must build more resilient threat environments

Organizations need to use AI themselves, not only to protect against the AI-driven tactics used by cybercriminals but also to reap the benefits it brings to day-to-day processes. AI can enhance operational efficiency and productivity, can be integrated into decision-making processes, and helps organizations stay competitive in a rapidly evolving landscape.

Given that AI-based cyberattacks are increasingly difficult for organizations to detect, it is all the more important that they implement technology that can anticipate and respond to these threats. This is where an Extended Detection and Response (XDR) solution is key.

XDR, at a fundamental level, revolutionizes threat detection and response. Our research found that, when there is a data breach, 44% of CISOs believe XDR can help to identify and prioritize critical alerts, 37% find that it accelerates threat investigations, and 33% report fewer false-positive threat alerts. It also gives security teams improved visibility over the wider attack surface and a clearer prioritization of risks. As a result, 76% of the global CISOs who have dealt with a data breach agree that, had they had XDR in place, the major cybersecurity incident would have had a lesser impact.
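As a hedged illustration of what that alert prioritization can look like, the sketch below correlates alerts from separate telemetry sources on a shared entity and boosts the priority of corroborated clusters. The field names and scoring weights are assumptions for illustration, not any XDR vendor’s actual schema.

```python
# A minimal sketch of the alert triage XDR platforms automate: correlating
# alerts from separate telemetry sources (email, endpoint, network) on a
# shared entity, then ranking clusters for the SecOps queue. All field
# names and weights here are hypothetical.
from collections import defaultdict

alerts = [
    {"source": "email",    "entity": "alice@corp.example", "severity": 3},
    {"source": "endpoint", "entity": "alice@corp.example", "severity": 5},
    {"source": "network",  "entity": "bob@corp.example",   "severity": 2},
]

# Group alerts by the entity they concern.
by_entity = defaultdict(list)
for alert in alerts:
    by_entity[alert["entity"]].append(alert)

def priority(cluster):
    """Base severity plus a boost when independent sources corroborate:
    the same user flagged by both email and endpoint telemetry is more
    likely a real incident than either alert alone."""
    base = max(a["severity"] for a in cluster)
    distinct_sources = len({a["source"] for a in cluster})
    return base + 2 * (distinct_sources - 1)

# Highest-priority entities surface first for investigation.
for entity, cluster in sorted(by_entity.items(),
                              key=lambda kv: priority(kv[1]),
                              reverse=True):
    print(entity, priority(cluster), [a["source"] for a in cluster])
```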

Staying on top of AI in the age of data privacy

Overall, organizations need to approach AI with caution. It is no secret that its benefits have revolutionized the way people work today, simplifying complex technical concepts; however, given the technology’s dual nature, care should be taken when implementing it. Cybercriminals are no strangers to AI and have been using it to their advantage, manipulating and creating fake data to sow confusion or impersonating officials, as in the recent Booking.com attacks.

To avoid falling victim to these AI-driven attacks, businesses need to embrace the evolving cybersecurity landscape and invest more in methods that defend against sophisticated cyberattacks. With the right combination of technologies, talent, and tactics, security operations (SecOps) teams will be well equipped to mitigate cyber threats to their organization.

We've featured the best encryption software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
