Creating and sharing fake sexual images could soon become much harder, with Australia's online safety regulator moving against apps that overwhelmingly target women.
The generation of "deepfakes" is rising rapidly alongside advances in artificial intelligence.
Deepfakes are digitally altered images of a person or their body; AI can be used to generate an image from a single photo or to superimpose a face onto pornographic material.
Several experts detailed the potential harms during a parliamentary hearing on Tuesday.
Nicole Lambert, from the National Association of Services Against Sexual Violence, said young people had taken their lives after becoming victims of deepfake material.
Rachel Burgin, from Rape and Sexual Assault Research and Advocacy, outlined a survey of perpetrators.
Many respondents admitted the biggest deterrent to committing abuse would have been criminal penalties.
"What we're doing for prevention in Australia doesn't work, that's why we've had more than 50 women killed at the hands of men this year," she told the committee.
Deepfakes were often accompanied by doxxing, where a victim's personal information was published online.
This made people fear for their safety because sexual violence was a precursor to homicide, she said.
Abuse victim Noelle Martin chastised tech companies and social media platforms for failing to remove such material or de-index it from search results, noting billions of people access the top 40 non-consensual nude sites.
She called for severe fines and possible criminal liability.
The eSafety commissioner said she would welcome powers to enable her to take down apps that primarily exist to "nudify" women or create synthetic child sexual abuse material.
"Some might wonder why apps like this are allowed to exist at all, given their primary purpose is to sexualise, humiliate, demoralise, denigrate and create child sexual abuse material," Julie Inman Grant told the hearing.
"These apps make it simple and cost free for the perpetrator, while the cost to the target is one of lingering and incalculable devastation."
The use of deepfakes to control women in abusive relationships was also explored.
"People will and do create images of their partners as a mechanism of exerting control over them in a family violence context," Ms Burgin said.
More than 96 per cent of deepfakes targeted women, the inquiry heard.
In one 2020 incident, nude images of more than 680,000 women were generated and shared by an AI chatbot, law professor Rebecca Delfino said.
A cybersecurity company that has tracked deepfake videos since December 2018 found 90 to 95 per cent were non-consensual pornography, she said.
The Albanese government wants to criminalise the transmission of sexual material relating to adults without their consent.
The offences would capture unaltered material as well as content produced using "deepfake" technology.
The legislative changes should also capture the creation of images and threats to produce such material, the committee heard.
Attorney-General Mark Dreyfus argued there were legal limits to what the Commonwealth could tackle, but Marque Lawyers managing partner Michael Bradley believed the government had the power to broaden its bill.
"In the terrorism realm, there are some pretty broad offences that criminalise accessing and creating content so I don't think it's that much of a stretch," Mr Bradley said.
The committee will report by August 8.
Lifeline 13 11 14
beyondblue 1300 22 4636
Kids Helpline 1800 55 1800 (for people aged 5 to 25)