The Guardian - UK
Technology

The Y2K bug should teach us to be wary of AI

Queues to withdraw money in Hong Kong on 30 December 1999, due to fears about the millennium bug’s effects on technology. Photograph: Robyn Beck/AFP/Getty Images

Re the existential threat from AI (Letters, 2 June), Phyl Hyde says the concerns over Y2K were “a panic” about an “overblown future cause”. Like many IT specialists across the world, I am fed up with this misinterpretation of what happened. Organisations put serious money into employing thousands of people to inspect their systems and amend them so that the issue was avoided.

Because of this, systems continued to function normally over the century change and lives were not affected. The result was that people thought it was a fuss about nothing. It took several years of planning, resourcing and working to achieve the desired result. It’s probably going to take a lot more to understand and cope with the unintended consequences of AI.
John Thow
Basingstoke, Hampshire

• Regarding Isaac Asimov’s three laws of robotics, many of his stories show how impractical they are – such as Little Lost Robot, in which harm-anticipating robots keep dragging researchers out of the potentially hazardous environment they are working in, forcing the first law to be suspended – or how robots bend and evade the laws they are ostensibly programmed to obey. In another short story, The Evitable Conflict, the three laws ironically create the very situation they were supposed to prevent: robots invoke the first law’s stipulation that a robot “may not through inaction allow a human being to come to harm” to justify replacing human government with an AI-controlled dictatorship. Humans cannot rule without harming themselves, so the law requires robots to rule in our place.

Asimov deliberately designed his three laws of robotics to be broken. They are not a guide to follow, but a warning to avoid: AI will always follow its programming to the letter – but only the letter.
Robert Frazer
Salford

