Australian adults who are being bullied online will be able to report incidents to the eSafety commissioner from Sunday.
New powers were given to the eSafety commissioner, Julie Inman Grant, as part of the wide-ranging Online Safety Act, which passed in 2021. Under the new laws, social media companies and other websites will be compelled to remove content deemed to be bullying within 24 hours or face fines of up to 500 penalty units (up to $111,000 for individuals and up to $555,000 for companies).
The laws also modify an existing scheme for reporting the online bullying of children, broadening which services it applies to beyond social media platforms and shortening the removal timeframe from 48 hours to 24 hours.
Here’s what we know about how the system will work.
If I am being bullied online, where do I go first?
Under the scheme, you first need to ask for the bullying content to be removed by the company that is hosting it, such as Facebook or Twitter.
You should also report it to the police. It remains a crime to harass people online, and the eSafety commissioner’s powers only extend to the removal of content, not legal action.
What if they refuse or don’t respond?
If the platform or website fails to remove the content, you can report it to the commissioner, who can launch an investigation and issue notices for the content to be removed within 24 hours, or the host risks a fine.
What sort of content counts as bullying?
The government has sought to downplay concerns the scheme would amount to censorship of speech by setting the bar “deliberately high”, Inman Grant said.
Content is considered adult cyber abuse if it is intended to cause serious harm, is menacing, harassing or offensive, and is targeted at an individual. You cannot, for example, report offensive comments made about minority groups in general.
“Serious harm could include material which sets out realistic threats, places people in real danger, is excessively malicious or is unrelenting,” Inman Grant said.
Content that people find offensive or disagreeable would not, on its own, be enough.
“The scheme is not intended to regulate hurt feelings, purely reputational damage, bad online reviews, strong opinions or banter,” she said.
It won’t cover defamatory comments, either.
In a submission to the parliamentary inquiry into social media and online safety, Inman Grant said examples of content that would meet the threshold include publishing private or identifying information about someone with malicious intent, encouraging violence against an Australian adult based on their religion, race or sexuality, and threats of violence.
Is there a review process?
If a notice is issued to remove content, the legislation requires a process for the decision to be reviewed by the office of the eSafety commissioner. The decision can also be appealed to the Administrative Appeals Tribunal.
Is this different to the anti-trolling legislation?
At the end of last year, Scott Morrison announced draft anti-trolling legislation, which is separate from the Online Safety Act. The anti-trolling legislation is largely focused on making it easier for people to unmask anonymous online commenters who they want to sue for defamation.
Under the draft legislation, social media companies can be fined, and potentially held liable for defamatory comments posted by users, if they do not assist in unmasking the anonymous commenter.
The government is still seeking comment on the legislation and it is not entirely clear whether it will be introduced or passed in parliament before the federal election.
What about the other parts of the Online Safety Act?
The Online Safety Act also gives the commissioner significant powers over the regulation of other content online – including adult content.
The commissioner is currently working with industry on developing codes to enforce the new rules around adult content, with a view to developing a roadmap to require age verification for adult content online by the end of this year.