Newcastle Herald
Damon Cronshaw

Cyber trust between human and machine

Cybersecurity: Combat systems operator Michael Curry monitors his console onboard HMAS Toowoomba. Picture: Department of Defence

Newcastle researchers are examining how humans can better trust machines in the murky world of cyberwarfare.

The $750,000 project, funded by the Australian and US defence departments, will combine psychology and computer science to improve cybersecurity.

It is part of a three-year, $8 million program involving Australian and US academics.

University of Newcastle Professor Scott Brown said the research was important to the defence establishment for security reasons.

It's also crucial to corporations and governments, with cybercrime a pervasive and difficult problem.

Detecting a cyberattack among millions of events on a big network can be like finding a needle in a haystack. While artificial intelligence [AI] does much of the detection work, humans play a vital role.

AI detects the possible threats, but it's humans who must decide whether these threats are legitimate.
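The article stays at a high level, but the division of labour it describes - the machine triages, the human decides - can be sketched in a few lines of Python. Everything below (the event fields, the scoring heuristic, the threshold) is a hypothetical illustration, not a detail of the actual defence systems.

```python
# A minimal sketch of the machine-triage, human-verdict workflow described
# above. All names here (Event fields, anomaly_score, REVIEW_THRESHOLD) are
# hypothetical illustrations, not details of the Newcastle research.

from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    bytes_out: int       # outbound traffic volume
    failed_logins: int   # recent failed login attempts

def anomaly_score(event: Event) -> float:
    """Toy heuristic: more outbound data and failed logins look more suspicious."""
    return min(1.0, event.bytes_out / 1e9 + 0.2 * event.failed_logins)

REVIEW_THRESHOLD = 0.5  # the bot escalates only events scoring above this

def triage(events: list[Event]) -> list[tuple[Event, float]]:
    """The bot's job: filter a huge event stream down to a short review queue."""
    flagged = [(e, anomaly_score(e)) for e in events]
    return sorted((f for f in flagged if f[1] >= REVIEW_THRESHOLD),
                  key=lambda f: f[1], reverse=True)

# The human's job: the final call on each escalated recommendation.
queue = triage([
    Event("10.0.0.5", bytes_out=2_000_000_000, failed_logins=0),  # big data copy
    Event("10.0.0.9", bytes_out=1_000, failed_logins=7),          # login spray
    Event("10.0.0.2", bytes_out=50_000, failed_logins=0),         # routine traffic
])
for event, score in queue:
    print(f"Review: {event} scored {score:.2f} - analyst decides threat or not")
```

The key design point, matching Professor Brown's description, is that the bot never acts on its own: it only ranks and recommends, and the human verdict sits outside the automated loop.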

The research will examine interactions between humans and bots [software programs that perform automated and repetitive tasks].

Associate Professor Ami Eidels, of the University of Newcastle, said an understanding of context was needed to make those judgments.

"Even the smartest bot does not have this, yet," A/Prof Eidels said.

Professor Brown said the bots "don't make any decisions on their own".

"It's not Terminator. Bots just make recommendations to the humans. The humans have to make the decision and that's where the psychology is," he said.

He said bots filter network events "in very quick time and make recommendations to human cybersecurity managers".

The recommendations are similar to those made by Netflix and Amazon.

"A bot can look at things you have previously liked and recommend new ideas. You can choose to accept or reject these ideas," he said.

While making a bad choice on Netflix can be quickly rectified, human errors in cybersecurity can be costly.

"Cybersecurity in large settings is managed by human-machine teams, but they don't always work as well as they should," Professor Brown said.

"This is sometimes because the humans don't trust the machine recommendations. Or, the machines do not provide their recommendations in a way that humans can best interpret them.

"We aim to improve that. Human cybersecurity managers need to know when to trust bot recommendations and when not to."

A key problem is that hackers can be deceptive. They can make attacks look like harmless events.

Hackers seek to exploit human weaknesses such as greed, curiosity, fear and familiarity.

"People might think it's black and white and obvious when there's a hacker trying to infiltrate a system," Professor Brown said.

"The trouble is, hackers are smarter than that."

For example, a Russian hacker might use a VPN [virtual private network] to make their traffic appear to come from a friendly country.

"When a real hacker gets into your system, they might copy data like bank account passwords so they can sell it."

The bots watch networks around the clock, but there are grey areas.

"If a dumb hacker tries to get in the system in an obvious way, the AI will spot it in a microsecond," he said.

But other situations may simply involve someone "doing some slightly unusual internet activity".

"Some users are doing slightly dodgy stuff, but they're not hackers and you don't want to shut them all down."

A human might be copying data for legitimate purposes, but a bot might perceive them as a hacker.

"They might be doing it because they had to make an unscheduled backup."

These grey areas mean cybersecurity can't be left to bots alone.

However, the bots are crucial because there's too much traffic in a network for a human to track.

The researchers will examine psychological factors in a bid to improve human-bot interactions. They will run experiments to measure the trust that people place in the bots, and examine how bots can be set up to change this trust.

"It's all about human decision-making, which we study at the cognition lab at University of Newcastle," Professor Brown said.

A/Prof Eidels said the lab specialises in cognitive modelling.

"We describe how people process information and make decisions using mathematical equations and computational models," he said.
