Services Australia lays ethical AI footing

Services Australia has set interim guardrails for the development and use of artificial intelligence at the agency, including technology that is used for public-facing government services.

As work continues on a broader policy response to the use of generative AI in government, the services agency responsible for Centrelink and Medicare has developed the first cut of its own AI strategy, which it will now put to a group of external experts for review.

The interim AI strategy helps to ensure AI solutions are developed and used in a “safe, responsible and ethical manner”, the agency said in answers to questions on notice from the recent round of Senate Estimates.

A spokesperson told InnovationAus.com the strategy – which will be released publicly once finalised later this year – will help the agency “assess and mitigate the risks and capitalise on the potential benefits of using AI”.

“Services Australia is laying the foundations for how AI can help us deliver value to our customers, business and staff, using a human-centred design approach,” the spokesperson said.

It is unclear when the agency began developing the strategy, or whether it was developed directly in response to ChatGPT, which prompted the Digital Transformation Agency to release interim guidance to all agencies on the use of generative AI.

The advice, which was last updated in November, permits agencies to experiment with generative AI tools, but warns that they must not be the “final decision-maker on government advice or services”.

Services Australia’s efforts to consider the ethics of AI come almost a decade after the agency first began using natural language processing, both in public-facing government services and as an internal tool for staff.

An internal chatbot called Roxy was the first digital assistant to be rolled out in 2016, followed by two more digital assistants, Sam and Oliver, in 2017. In the years since, it has deployed several others, including a myGov assistant called Charles.

It also uses optical character recognition to automatically check whether information contained in forms lodged with Centrelink is accurate and complete, having first trialled the technology during the pandemic.

But the agency is yet to dip its toes into generative AI, blocking access to ChatGPT and turning down an opportunity to participate in the federal government’s Microsoft Copilot trial in order to focus on staff onboarding.

It is also applying caution to the use of automation in social security and welfare claims processing where discretionary decision-making is involved, while it reviews its automation capabilities.

The federal government is contemplating new laws to ensure “automation in government services can operate ethically, without bias and with appropriate safeguards” in response to the Robodebt Royal Commission.

It set aside $5.6 million in December to introduce a consistent legal framework for automated decision-making across government, which is expected to allow impacted persons to seek a review of decisions.
