A one-stop portal for victims to report AI deepfakes to police should be established, the federal police union has said, lamenting that police were forced to “cobble together” laws to charge the first person to face prosecution for spreading deepfake images of women last year.
The attorney general, Mark Dreyfus, introduced legislation in parliament in June that will create a new criminal offence of sharing, without consent, sexually explicit images that have been digitally created using artificial intelligence or other forms of technology.
The Australian Federal Police Association (Afpa) supports the bill, arguing in a submission to a parliamentary inquiry that the current law is too difficult for officers to use.
The association pointed to the case of a man who was arrested and charged in October last year for allegedly sending deepfake imagery to Brisbane schools and sporting associations. The eSafety commissioner separately launched proceedings against the man last year over his failure to remove “intimate images” of several prominent Australians from a deepfake pornography website.
The man was fined $15,000 for contempt of court as part of the civil case. His criminal and civil cases are otherwise ongoing, with the civil matter returning to court in August.
“Due to limited resources and a lack of dedicated and direct relevant legislation relating to deepfake sexually explicit material, investigators were forced to ‘cobble’ together offences to prosecute,” Afpa said. “Six further charges relating to ‘obscene publications and shows’ were brought against [the man].”
Non-profit internet liberties organisation Electronic Frontiers Australia told Guardian Australia in May that the parliament should not rush to give police new powers until this case has determined whether current powers are adequate.
Afpa said the eSafety commissioner’s approach of filing a civil lawsuit also had drawbacks because it is expensive and there was a “good chance” the offender is a “low-income, asset-light” individual who is “therefore, effectively impervious to civil proceedings”. That is, if investigators can figure out who created the image in the first place.
“It is frequently impossible to determine who distributed the images; typically, the offenders are very tech-savvy and adept at covering their tracks to avoid prosecution.”
Afpa said it is often difficult for police to determine who the victim is, whether they’re a real person, and where they are located, which means deepfake investigations can take “countless hours”.
“With the creation of deepfake child exploitation material increasing, the role of law enforcement in identifying a victim is becoming exponentially more difficult,” the union said. “How long do investigators spend trying to find a child who potentially doesn’t even exist or who had their likeness stolen but has ultimately not been abused themselves?”
It is also difficult to determine where the image was first created, Afpa said, noting people often use virtual private network (VPN) connections to mask their location.
While victims can currently report to the eSafety commissioner when an image – real or not – of them has been shared online without their consent, Afpa said this model should be overhauled to allow victims to report directly to law enforcement.
Afpa proposed the AFP-led Australian Centre to Counter Child Exploitation could assess initial reports and then share them with the relevant state or territory police force for further investigation.
This would also make it easier for victims to report cases, the union added, because many find it traumatic and difficult to walk into a police station with the sexually explicit images to report to police.
In addition to the reporting portal, Afpa argued the legislation should be coupled with an education campaign to reduce the stigma around reporting and to educate the public about deepfakes.
The committee will hold its first hearing on the legislation next week.