InnovationAus
Politics
Joseph Brookes

Law reform for high-risk AI on the way

Australia will have mandatory safeguards for high-risk artificial intelligence uses like autonomous vehicles and health care, but little to no intervention for low-risk uses like filtering spam emails, under a new risk-based regulatory approach to be announced by the federal government on Wednesday.

The commitment to mandatory regulation follows crackdowns in the EU and Canada and is expected to require organisations developing and deploying high-risk AI in Australia to have it independently tested before release, disclose to end users when AI is in use, and designate responsibility for AI safety to specific individuals.

An interim advisory panel will be appointed to explore options for the mandatory guardrails, including a potential dedicated AI legislative framework and reforms to existing areas like privacy, copyright and online safety laws.

A voluntary standard will also be developed with industry for less risky uses of the technology, while the lowest risk use will continue largely unimpeded by regulation.

Industry and Science Minister Ed Husic will announce the approach on Wednesday while releasing the government’s interim response to consultation on AI safety, which last year attracted more than 500 submissions.


“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” he said.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”

There was near consensus among stakeholders that voluntary guardrails for AI are proving insufficient as the technology is increasingly deployed in sensitive areas, raising issues with online safety, copyright and misinformation.

But industry and consumer groups were split on the level of outside intervention needed to minimise the risks without jeopardising the benefits of AI.

Tech firms argued for a light-touch, technology-neutral approach based on strengthening existing laws, while consumer groups and some academics pushed for new AI-specific laws.

Australia’s risk-based approach will look for balance, but new mandatory safeguards for high-risk but legitimate AI uses – where harms are difficult to reverse – have been flagged as potentially requiring a dedicated legislative framework.

The government’s interim response says this is on the table and commits to establishing a temporary expert advisory group to explore the form of new mandatory guardrails. No date has been set for establishing the group, which is expected to contain a mix of independent experts and industry leaders and will report to the Industry department.

These new rules will include requirements for independent testing of AI before release and ongoing audits, transparency requirements like watermarks and disclosures so people know when AI has been used, and accountability measures like designating AI safety roles and requiring training for developers.

Exactly what constitutes a high-risk use of AI and attracts the mandatory safeguards will be determined in the next phase of consultations.

The initial discussion paper released in March had proposed determining high risk by considering whether the impacts of the AI are “systemic, irreversible or perpetual”, with AI-enabled robots performing surgery and autonomous vehicles given as examples.

The European Union’s upcoming AI Act is more prescriptive, listing specific use cases considered high-risk like biometric identification, medical devices and law enforcement systems, while Canada’s approach allows use cases to be prescribed by regulation. Both jurisdictions combine this with voluntary standards.

As options for the mandatory rules are explored in Australia, the Albanese government will also develop a voluntary AI Safety Standard for implementing other risk-based guardrails for industry through the National AI Centre, and consult on options for voluntary labelling and watermarking of AI-generated materials.

“These immediate steps will start building the trust and transparency in AI that Australians expect,” Mr Husic said.

“We want safe and responsible thinking baked in early as AI is designed, developed and deployed.”

Peak ICT group the Australian Information Industry Association welcomed the risk-based approach and said its application, along with investment, will help grow the local sector.

“The regulation of AI will be seen as a success by industry if it builds not only societal trust in the adoption and use of AI by citizens and businesses, but also that it fosters investment and growth in the Australian AI sector,” AIIA chief executive Simon Bush said.

The Albanese government last year repurposed unspent funding from the Morrison government’s AI Action Plan into around $76 million in dedicated AI investments, including new centres to foster business adoption, a graduates program and an expansion of the National AI Centre.

The government’s interim response to AI safety consultations released on Wednesday also flags more government spending on the adoption and development of the technology through a new “AI Investment Plan”.

According to a 2021 Tech Council of Australia report, Australia is home to just over 1 per cent of the AI and machine learning startups globally.
