Evening Standard
Technology
Andrew Williams

UK’s AI checks and safeguards to target computing power and chip sales

Rishi Sunak will reveal UK plans for AI regulation and development at a summit later this year.

This event was announced in June, but firm details on the plan have effectively been non-existent to date.

According to Bloomberg, Bletchley Park is being considered as the venue for the event. It has an obvious resonance as the place where the Enigma code was cracked during World War II.

It is reported that world leaders and AI pioneers will be invited, among them Joe Biden and the other G7 leaders, as well as OpenAI boss Sam Altman, DeepMind CEO Demis Hassabis, Microsoft CEO Satya Nadella, and Dario Amodei, co-founder of San Francisco AI company Anthropic.

Bloomberg cites “people familiar with the plans” as the source for these fairly predictable invitees, and suggests there’s an ongoing debate as to whether China should be invited to the summit.

The other question is whether the event will lean towards an investigation of how AI should be controlled and regulated, or ways in which it should be used and exploited.

Is AI regulated in the UK?

To date, the UK has taken a more relaxed approach to AI regulation than the EU. In a white paper published in March, the Government outlined a “pro-innovation” stance.

The paper suggests the role of regulation is to establish the UK as “a global leader in artificial intelligence”, rather than to protect society from the technology’s more harmful potential effects.

Sunak has previously pitched the concept of the UK as the “geographical home” of AI regulation. ChatGPT creator OpenAI opened a London office in June 2023, as its “first international expansion,” but its headquarters remain in San Francisco.

London as the second holiday home of western AI development might be a somewhat more realistic aim.

However, the UK Government is said to be considering regulation based on the chipsets used to power AI systems.

This would rely on a “threshold” of AI chip power to determine whether such hardware would be subject to regulation.

There is also talk of monitoring who buys chips from the largest AI chip-producing companies, including Nvidia. However, many of these chips are actually manufactured in Taiwan by TSMC.

The UK Government announced a domestic semiconductor investment plan worth £1 billion “over the next decade” in May. This figure is dwarfed by investments elsewhere, though. For example, TSMC plans to spend £2.26 billion to increase its manufacturing capacity, with an AI chip plant due to finish construction by the end of 2026.

The state of AI regulation

While the UK Government established the Office for AI, now within the Department for Science, Innovation and Technology, the EU is far ahead of the UK in establishing a robust regulatory framework, with the EU AI Act.

It seeks to regulate not just the hardware used to run AI software, but AI implementations themselves. Those labelled “high risk” are subject to strict limitations and transparency stipulations.

The AI Act itself lists what would qualify an AI model as high risk, and the relatively amorphous nature of spotlight-stealing large language model-based AIs means they could be labelled as such. Recently, it was revealed a US school board used ChatGPT to judge whether books should be banned from libraries. Sounds pretty high risk, don’t you think?
