ABC News
Exclusive by political reporter Jake Evans

Home Affairs experimenting with ChatGPT in refugee and cyber divisions

ChatGPT is being used in supervised settings in the refugee and cyber divisions of Home Affairs, among others. (AP Photo: Michael Dwyer)

The Home Affairs department is experimenting with using the ChatGPT artificial intelligence tool under supervision in several of its divisions, including its refugee, humanitarian and settlement team, and in cyber security.

The federal government has not issued a directive to the public service on the use of tools like ChatGPT, which can generate text and images.

Instead, it is leaving it up to departments to decide whether to allow the technology or limit its use.

Home Affairs boss Mike Pezzullo said yesterday that he was hopeful a whole-of-government response would emerge.

In the meantime, the department responsible for immigration and national security has blocked ChatGPT, but allows exceptions for teams that make a case to use it.

"Parts of the department can seek a business case to access that capability, and have done so to this point in time, noting there's certainly some value in exploring the capabilities as a tool for experimentation and learning and looking at utility for innovation and the like," chief operating officer Justine Saunders said.

Home Affairs has since confirmed to Greens senator David Shoebridge that its Information Computer Technology Division, Refugee Humanitarian & Settlement Division, Data & Economic Analysis Centre and Cyber and Critical Technology Coordination Centre had all been identified as using ChatGPT for "experimentation and learning purposes".

Home Affairs said the use of ChatGPT was "coordinated and monitored", and that it was not aware of any employees using the tool as part of their ordinary jobs.

Senator Shoebridge said a precautionary policy should be applied across all of government; instead, individual agencies were "making this up as they go".

"This is the first clear acknowledgement that the government is using ChatGPT and it seems to be happening without any clear guardrails or protections," senator Shoebridge said.

"What is especially concerning is that Home Affairs is using this to experiment within the Refugee Humanitarian & Settlement Division, where a leak of personal information could literally cost lives.

"Given all the public discussion about the risks of AI it is disturbing that there is no government-wide guidance or policy to limit the use of this technology."

While Home Affairs is allowing limited use, other national security and law enforcement agencies are taking a stricter approach.

The Australian Federal Police (AFP) has blocked ChatGPT. The Australian Criminal Intelligence Commission (ACIC) has also prohibited its use, and is in the process of restricting access to the tool within its "protected" environment.

ACIC employees have been advised not to enter any work-related information into AI services from their personal devices.

Policy vacuum a 'recipe for disaster'

The government has come under fire in recent months after a succession of risky technologies were found to be in use by government departments without any whole-of-government advice on their use.

MPs have questioned the government on Chinese-made Hikvision cameras installed in several departments, since banned; the use of TikTok on government devices, since banned; and the use of DJI camera drones, which the AFP confirmed yesterday it was "transitioning" away from.

Whole-of-government advice on each of those technologies has either never been issued or has taken months to arrive.

In the UK, the National Cyber Security Centre within Britain's signals intelligence agency has formally warned that generative AI programs are an emerging security risk, in part because they store the information typed into them and use it to train the tools.

Senator Shoebridge said that design feature should throw up a "red flag" to any government agency.

"When you have a policy vacuum it hands these critical decisions over to agency staff who don't have the framework, resources or skills needed to properly assess the risks, and that is a recipe for disaster," the senator said.

"These technologies are not going away and there are many potential benefits from applying them, but the risks are also very real."

Answering on behalf of the social services minister, Special Minister of State Don Farrell told Senator Shoebridge that the government was "continuously evaluating emerging technologies to assess their potential use in the public sector".

"All technology, Artificial Intelligence (AI) or otherwise, should be reviewed and assessed for suitability before adoption," Senator Farrell said.

"There is no current Commonwealth policy on the use of generative AI technologies, such as ChatGPT. Therefore, their use for delivery of services or making decisions is not supported."

The government is considering "enhancement to the existing guardrails" to provide greater clarity on AI use by government.
