We've already seen AI images used in this year's NZ campaign advertising. All the warning signs are that AI can produce disinformation on an industrial scale – we must demand transparency and accountability
Opinion: I was in Edinburgh a couple of weeks ago spending time with a friend and her son, who has just completed a degree in philosophy.
As part of his degree, he did a dissertation on the ethics of artificial intelligence. This led to a fascinating discussion around the dinner table.
I have been exploring ChatGPT of late, so I shared the answer it had given to a question about the shortcomings of AI in decision-making:
‘AI's limitations in decision-making arise from its lack of common sense, emotional understanding, and creativity. It can perpetuate biases and discrimination due to flawed training data and struggles to handle unforeseen situations. Additionally, AI lacks ethical reasoning and may make decisions that are technically correct but ethically questionable. These limitations highlight the need for human oversight and intervention to ensure responsible and unbiased decision-making in AI systems.’
This was the short version I asked for; the original answer would have taken up my entire column. The longer version also talked about AI’s inability to grasp context; its lack of intuition, creativity, innovation, adaptability, flexibility and empathy; and, of course, its lack of accountability.
This points to broader questions about the range of uses to which AI can be put, and the ethical considerations those uses raise.
When it comes to elections, however, it seems there are even more risks. According to ChatGPT, we will need to be alert to disinformation, fake news, deep fakes, targeted propaganda, microtargeting, manipulation of social media, algorithmic bias, and a lack of accountability.
That’s quite a list of challenges, so how are we set up to address them, given this is an election year?
I remember seeing a talk on deep fakes a few years ago, and feeling concerned about what this might mean for democracy. It’s hard to put aside the influence of something you have literally seen with your own eyes, even when you are told afterwards that it isn’t true.
This takes me back to the 2010 Christchurch City Council election, when the late Jim Anderton ran for the mayoralty and was interviewed for local television. After the earthquake, some mischief-maker spliced his answer to one question in the interview onto a different question. This made it sound as if he thought only something of ‘seismic proportions’ could prevent him from winning the mayoralty, when in fact that was his answer to a question about leaving the Labour Party. The doctored video was released on YouTube and received considerable publicity – far more than the subsequent news that it was a hoax.
That was with 2010 technology, at a time when ‘going viral’ was not yet the everyday phrase it has become, and yet to this day people repeat the false narrative even though it was publicly exposed as a lie.
It is clear that we are on a whole new level now.
This has led me to think about what protections are needed in a world where it will become harder to distinguish fact from fiction.
I remember visiting the University of Washington in Seattle when I was a relatively new MP. I met one of the researchers on the Human Genome Project, which was unravelling the mysteries of DNA, and asked him about the ethical issues this raised. Shouldn’t we ask ourselves how we would deal with these issues before we developed the capacity? For example, if testing our DNA could identify that we were more vulnerable to certain conditions, would we want insurance companies to be able to require such a test before granting insurance?
Ethical issues are hard. But they must be addressed.
Philosophy was the discipline that led the development of medical ethics, so my friend’s son is at the vanguard of a debate that is much more than academic when it comes to AI.
And the topic has started to dominate the headlines everywhere.
Anything said to pose an ‘existential threat’ is guaranteed to generate headlines, and that is the phrase that has been picked up by UK Prime Minister Rishi Sunak. He is calling for ‘guardrails’ to be put in place internationally.
These concerns have also prompted a call for a six-month pause in the development of these AI systems.
Over a thousand experts have signed an open letter essentially saying we are building increasingly powerful technology but haven't developed the regulation to go along with it. The development of law is slow, whereas the development of technology is exponential. They want the regulation to catch up before going further.
It still isn’t clear to me, however, what form this regulation is intended to take. All the warning signs are that AI can produce disinformation on an industrial scale – something that is already sounding alarm bells for the forthcoming elections in the UK and the United States, even though those elections are more than a year away.
And that is why I am at a loss to understand where New Zealand stands in all of this. Our election is this year. Surely some steps are required: even mandatory disclosure of the use of AI, its source, and the amplification mechanisms being used would help voters assess the information they are receiving.
Transparency and accountability, two of the bedrocks of democracy, demand that we at least take these steps.