The discussions between experts at the AI Safety Summit have avoided killer robot concerns and been more “measured” than expected, one delegate has said.
Poppy Gustafsson, chief executive of AI cyber security firm Darktrace, said she had been concerned that discussions at the summit would focus too much on “hypothetical risks of the future” – a concern raised by several experts before the summit.
But Ms Gustafsson said debate in the closed meetings at the summit had been more focused on the “daily reality”.
“I was a little worried that we were all going to be chatting about hypothetical risks of the future and robots are going to kill us all, and not talking enough about AI like we are using it now,” she told the PA news agency.
“Coming in this morning, I heard the opening plenary from Michelle (Donelan, Technology Secretary) and she made a comment that really stuck with me that artificial intelligence isn’t a natural phenomenon, it’s not something happening to us, it is something we are creating.
“And in the first breakout session this morning we were talking about the risk of loss of control and the idea that AI is not wresting control from us. We are giving AI control and we have a choice to what extent we hand over the keys, if you like.
“I think that was my big, resonant point this morning – that this is our choice, we are the drivers of this, it is not being done to us.”
She said the discussions were “measured and much more real time and much more here and now than perhaps I was worried about”.
Ms Gustafsson said getting the safety aspect of AI development right was crucial as “people embrace things much quicker when they know it’s safe”.
“I don’t think safety and innovation are at loggerheads,” she added.
The Darktrace chief executive said she did not expect to see a “selection of specific regulation” coming out of the summit, which continues on Thursday, and hoped there would also be a focus on how to “go after the opportunity (around AI), but having put in the safeguards against those risks”.