Darren Thomson

Balancing innovation and security in an era of intensifying global competition


The global AI race is intensifying, with new developments arriving every week; take China's launch of DeepSeek, for example, which caught the industry completely by surprise. Despite the attempt at a united approach with the publication of the International AI Safety Report, countries across the globe continue to strive for leadership.

The new US administration was quick to announce $500 billion of private sector investment in Project Stargate to build advanced AI infrastructure, a landmark collaboration with backers including OpenAI, Oracle, and SoftBank. This came hot on the heels of the UK's launch of the AI Opportunities Action Plan, supported by £14 billion of funding from leading tech firms.

The widening regulatory disconnect

However, while both the UK and US set out aggressive plans for growth, the gap between their regulatory approaches is widening. The US government swiftly revoked its earlier Executive Order intended to guard against the risks AI posed to consumers and national security, which had required important safety disclosures during development. The about-turn underlines the new administration's commitment to prioritize AI innovation above anything it deems a barrier to progress, even safeguards relating to security, privacy, and bias.

Following suit to some degree, the UK is maintaining a lighter touch on governance than the EU. Its AI Action Plan sets out a commendable vision for the future but, arguably, with insufficient regulatory oversight, potentially leaving the UK exposed to cyber threats and undermining public trust in AI systems.

The proposal to create a new National Data Library to unlock the value of high-impact public data for AI development also raises more security questions than it answers. How will the datasets be assembled? Who is responsible for their protection? And how can their integrity be guaranteed years down the line, when they are embedded in AI models integral to businesses, public sector services, and the supply chain?
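Dataset integrity, at least, is a tractable engineering question. As a purely illustrative sketch (the directory and file names below are hypothetical, and this is not part of any actual National Data Library design), publishing a cryptographic manifest alongside a public dataset would let downstream teams verify, years later, that the files they train on remain byte-for-byte what was originally released:

```python
# Illustrative only: pin a public dataset's integrity by publishing a SHA-256
# manifest alongside it, so downstream AI teams can re-check the files before
# every training run. The "public_dataset" directory name is hypothetical.
import hashlib
import json
from pathlib import Path

def build_manifest(dataset_dir: str) -> dict:
    """Hash every file in the dataset directory and return a name -> digest map."""
    manifest = {}
    for path in sorted(Path(dataset_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(dataset_dir))] = digest
    return manifest

def verify_manifest(dataset_dir: str, manifest_file: str) -> list:
    """Return the files whose current hash no longer matches the published manifest."""
    expected = json.loads(Path(manifest_file).read_text())
    current = build_manifest(dataset_dir)
    return [name for name, digest in expected.items() if current.get(name) != digest]

if __name__ == "__main__":
    # Publish "manifest.json" with the dataset, then re-verify before training.
    Path("manifest.json").write_text(json.dumps(build_manifest("public_dataset")))
    print(verify_manifest("public_dataset", "manifest.json"))  # [] means untampered
```

A manifest only proves the data has not changed since publication, of course; it says nothing about whether the original collection was trustworthy, which is why the questions of assembly and ownership still matter.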

In sharp contrast, the EU is moving forward with its AI Act, a comprehensive, legally binding framework that clearly prioritizes regulation of AI, transparency, and prevention of harm. It sets out unequivocal obligations for AI development and deployment, including mandatory risk assessments and significant fines for non-compliance.

Adapting security principles to AI

This regulatory divergence is creating a complex landscape for organizations building and implementing AI systems. The lack of cohesion makes for an uneven playing field and, conceivably, a riskier AI-powered future.

Organizations will need to determine a way forward that balances innovation with risk mitigation, adopting robust cybersecurity measures and adapting them specifically for the emerging demands of AI. Areas already raising concerns include data poisoning and the data supply chain.

Poisoning data models

Data poisoning, where bad actors deliberately manipulate training data to alter the performance of models, will be a major risk for AI. The changes could be subtle and difficult to identify, perhaps slight modifications that generate errors and incorrect outcomes. Or attackers could alter code so they remain hidden inside a model and retain ongoing control over its behavior. Such imperceptible tampering could slowly compromise a business over time, leading to bad decision-making and even financial ruin. If politically motivated, it could also be used to promote biases and influence attitudes.

The stealthy nature of these attacks makes them hard to detect until the damage is too late to reverse, as bad data can blend seamlessly with legitimate data. Combating data poisoning requires robust data validation, anomaly detection, and continuous monitoring of datasets to identify and remove malicious entries, because poisoning can be perpetrated at any stage: it may occur during initial data collection, be injected later into data repositories, or be introduced inadvertently from other infected sources.
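As one illustration of the anomaly-detection step, the sketch below screens a synthetic training set with scikit-learn's IsolationForest and flags statistically unusual rows for human review. It is an assumed, simplified example rather than a complete defense: well-crafted poisoning is designed to sit inside normal-looking ranges, so screens like this need to be paired with provenance checks and continuous monitoring.

```python
# A minimal sketch of screening training data for suspicious entries, assuming
# NumPy and scikit-learn are available. Flagged rows go to human review before
# they ever reach the model; the synthetic data here stands in for a real feed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))    # legitimate-looking samples
poisoned = rng.normal(loc=4.0, scale=0.5, size=(20, 8))   # deliberately shifted entries
training_data = np.vstack([clean, poisoned])

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(training_data)               # -1 marks suspected outliers

suspect_rows = np.where(labels == -1)[0]
print(f"{len(suspect_rows)} rows flagged for review before training")
```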

Protecting the data supply chain

The government’s proposal to create a National Data Library highlights the risk of apparently reliable models becoming compromised and flowing rapidly through the supply chain. Within a couple of years, many organizations are likely to depend on such models to run their businesses and daily operations. With criminals already exploiting AI’s capabilities to supercharge their attacks, the consequences of rogue AI entering the supplier ecosystem could be catastrophic and widespread.

Business leaders will need strong protection and defenses to ensure resilience throughout their supply chain, along with tried-and-tested disaster recovery plans. In effect, this means prioritizing the applications that really matter and defining what constitutes a minimum viable business and an acceptable risk posture. Only then can they be confident that critical backups can be restored quickly and completely in the event of compromise.
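What that prioritization can look like in practice is sketched below; the application names, recovery time objectives, and restore-test cadences are entirely hypothetical, but the exercise of ranking systems by how quickly they must be brought back is the point.

```python
# Purely illustrative: express the "minimum viable business" as explicit recovery
# tiers so backup and restore testing can be prioritized and rehearsed.
from dataclasses import dataclass

@dataclass
class RecoveryTier:
    name: str
    rto_hours: int          # maximum tolerable time to restore service
    restore_test_days: int  # how often a full restore is rehearsed

# Hypothetical applications mapped to hypothetical targets.
TIERS = {
    "payments-platform":   RecoveryTier("critical",   rto_hours=4,  restore_test_days=30),
    "customer-support-ai": RecoveryTier("important",  rto_hours=24, restore_test_days=90),
    "internal-analytics":  RecoveryTier("deferrable", rto_hours=72, restore_test_days=180),
}

def restore_order() -> list:
    """Applications in the order they should be brought back after a compromise."""
    return sorted(TIERS, key=lambda app: TIERS[app].rto_hours)

print(restore_order())
```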

Staying mindful of the risks

While AI offers immense potential for innovation, it's crucial to approach its implementation with caution. The vast capabilities of AI bring equally substantial risks, particularly in terms of cybersecurity, privacy, and ethics. As AI models become more ingrained in organizational infrastructures, the scope for security breaches and abuse will escalate dramatically.

Maintaining reliable safeguards, transparent development processes, and ethical standards is vital to mitigating these risks. Only by balancing innovation with zero tolerance for misuse can businesses safely reap the benefits of AI and protect against its dangerous downsides. In tandem, although it is looking unlikely, coordinated government-led regulation remains essential to establish enforceable frameworks for AI safety and security worldwide.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
