The Guardian - UK
Comment
Georg Riekeles and Max von Thun

Rishi Sunak’s AI plan has no teeth – and once again, big tech is ready to exploit that

Rishi Sunak with Tesla and SpaceX CEO Elon Musk in London, 2 November 2023. Photograph: Reuters

This month, the British prime minister, Rishi Sunak, convened government representatives, AI companies and experts at Bletchley Park – the historic home of Allied code-breaking during the second world war – to discuss how the much-hyped technology can be deployed safely.

The summit has been rightly criticised on a number of grounds, including prioritising input from big tech over civil society voices, and fixating on far-fetched existential risks rather than tangible everyday harms. But the summit’s biggest failure – itself a direct result of those biases – was that it had nothing meaningful to say about reining in the dominant corporations that pose the biggest threat to our safety.

The summit’s key “achievements” consisted of a vague joint communiqué warning of the risks from so-called frontier AI models and calling for “inclusive global dialogue”, plus an (entirely voluntary) agreement between governments and large AI companies on safety testing. Yet neither of these measures has any real teeth, and what’s worse, they give powerful corporations a privileged seat at the table when it comes to shaping the debate on AI regulation.

Big tech is currently promoting the idea that its exclusive control over AI models is the only path to protecting society from major harms. In the words of an open letter signed by 1,500 civil society actors, accepting this premise is “naive at best, dangerous at worst”.

Governments truly serious about ensuring that AI is used in the public interest would pursue a very different approach. Instead of noble-sounding statements of intent and backroom deals with industry, what is really needed are tough measures targeting that corporate power itself. Aggressive enforcement of competition policy and tough regulatory obligations for dominant gatekeepers are key.

As it stands, a handful of tech giants have used their collective monopoly over computing power, data and technical expertise to seize the advantage when it comes to large-scale AI foundation models. These are models trained on large swathes of data that can be adapted to a wide range of tasks, as opposed to AI applications designed with a specific purpose in mind. Smaller companies without access to these scarce resources find themselves signing one-sided deals with (or being acquired by) larger players to gain access to them. Google’s takeover of DeepMind and OpenAI’s $13bn “partnership” with Microsoft are the best-known examples, but not the only ones.

The tech giants are also dominant in many other markets (including search engines, cloud computing and browsers), which they can exploit to lock users into their own AI models and services. As ever more people gravitate towards, and provide data to, a handful of AI models and services, network effects and economies of scale are set to magnify this considerable initial advantage further.

In the jargon of economists, this is a market that is prone to tipping. A concentrated market for foundation models would allow a handful of dominant corporations to steer the direction and speed of AI innovation, and enable them to exploit, extort and manipulate the many businesses, creators, workers and consumers dependent on their services and infrastructure.

In short: the tech winners of yesterday are now extending and reinforcing their monopolistic hold through AI. Governments should not be complicit in that. Antitrust authorities may have failed to prevent digital technologies from being monopolised in the past, but competition policy – if enforced effectively – has a major role to play in tackling AI concentration.

Competition authorities must use their existing powers to police takeovers, cartels and monopolistic conduct to prevent a handful of digital gatekeepers from running amok in their quest for ever-greater profits. This now requires investigating and, where necessary, breaking up anti-competitive deals between big tech and AI startups, and preventing digital gatekeepers from leveraging their control over dominant platforms, such as in search and cloud computing, to entrench their hold on AI.

Yet it is also important to acknowledge that even in a competitive market, the number of large-scale model providers will be limited, given the resources required to train and deploy such models. This is where regulation must step in to impose unprecedented responsibilities on dominant companies.

As AI gains a bigger role in decision-making across society, safety, reliability, fairness and accountability are critical. AI systems can perpetuate biases from their underlying datasets or training, and generate plausible-sounding but false responses known as “hallucinations”. If deployed by those with malicious intent, they can also be used to create convincing propaganda, spy on workers and manipulate or discriminate against individuals.

These harms are particularly grievous when they stem from a foundation model because of the cascading effects into downstream uses. Biases, errors, discrimination, manipulation or arbitrary decisions therefore present singular risks at the foundation level.

The European Union is currently vying to become the first global authority to put forward binding AI rules imposing obligations on different uses of AI according to risk. However, the EU’s AI Act is struggling to measure up to the foundation model threat.

Currently, EU legislators are considering imposing a new set of tiered obligations on foundation model providers. Among other things, these providers could be required to share information with regulators on training processes (including the use of sensitive and copyrighted data) and submit to auditing of systemic risks.

This will go some way towards mitigating the risks of AI. But above and beyond this, given their central role in the AI ecosystem, the dominant corporations providing large-scale models must be given strict overarching responsibilities to behave fairly and in the public interest.

One way of achieving this, building on ideas developed by Jack Balkin at Yale Law School and Luigi Zingales at the University of Chicago, would be to impose a certain number of fiduciary duties on general-purpose AI (GPAI) providers. A fiduciary duty is the highest standard of care in law and implies being bound both legally and ethically to act in others’ interests.

An alternative or complementary approach would entail designating digital gatekeepers as public utilities (or “common carriers” in US terminology), mandated to treat their customers fairly and ensure operational safety. This legal status could conceivably be applied to both the foundation models upon which AI applications are built and the cloud computing infrastructure hosting them. The EU’s Digital Markets Act, which imposes pro-competition obligations on dominant tech firms, is one potential avenue for pursuing this.

What is abundantly clear is that we cannot trust self-regulation by individual companies, let alone big tech, to guarantee safety and openness in AI. Only by tackling monopoly power, and ensuring that power comes with responsibility, can we realise the promise – and mitigate the risks – of this emerging technology.

  • Georg Riekeles is associate director of the European Policy Centre, an independent thinktank based in Brussels. Max von Thun is director of Europe and transatlantic partnerships at the Open Markets Institute, an anti-monopoly thinktank
