The Free Speech Union's early and vocal opposition to new measures to protect children and vulnerable people from harmful content online does a disservice to the importance of the issue, Marc Daalder writes
Comment: In the aftermath of the March 15 terror attack – and the wave of online hatred and harassment it sparked, as well as the long history of abuse it spotlighted in retrospect – we should be under no illusions about the importance of online safety.
Most of us understand this is no trifling matter.
In 2022, the Classification Office found 83 percent of New Zealanders were concerned about harmful or inappropriate content online. This year, 65 percent of respondents told the office it was hard to protect children from this content and 41 percent said it was hard to avoid it themselves.
InternetNZ, in its annual survey reported in March, found that 58 percent of those polled worried about the internet being a forum for extremist material and hate speech, and nearly three-quarters were concerned about children accessing inappropriate content.
As it stands, the government only directly regulates illegal content – the worst of the worst. That includes child sexual abuse material and graphic terrorist content like the March 15 footage.
It's up to platforms to handle the rest. They were already failing in 2019, when the Christchurch terrorist live-streamed his attack for 17 minutes on Facebook. While they've improved at handling the most extreme content, the Covid-19 pandemic has laid bare social networks' inability to deal with abuse, inauthentic harassment campaigns and other discriminatory content.
In its 2022 survey, the Classification Office found just 33 percent of respondents agreed that online platforms provided what people needed to keep themselves safe.
That's why the Department of Internal Affairs (DIA) released a consultation document on Thursday with proposals for a major revamp of content regulation in New Zealand. It came nearly two years after Newsroom first reported the Government was looking to regulate the likes of Facebook. Under the proposal, existing bodies like the Broadcasting Standards Authority, the Classification Office and a number of others would be rolled into a new independent regulator.
This entity would work with different sectors to establish codes of practice for handling harmful content – what's often called "lawful but awful". One code of practice might govern professional media entities, for example, while another would cover social media and a third online gaming. The regulator wouldn't police individual posts or actions, but would ensure more broadly that platforms were complying with and enforcing their codes of practice.
Importantly, no changes would be made to what counts as illegal or criminal content. That's still reserved for the worst of the worst. But DIA's proposal to regulate big tech firms is a recognition that there's another category of legal content that private companies aren't stepping up to handle. The current regulatory framework mostly pre-dates the web, so the government has little ability to intervene now.
There are important questions to ask here, about whether the proposals go too far in granting the government the ability to regulate our online lives. Over the two-month consultation period and the months of further policy work to come, these questions should be asked, debated and answered. That's what we'd expect from a thoughtful democratic society, determined both to safeguard civil liberties and to deal with the very important issue of online safety.
Unfortunately, we probably won't get to have that robust conversation.
Instead, the online safety proposals are likely to be hijacked for yet another round of culture wars, with attacks posing as legitimate defences of free speech rights.
Freedom of expression is critical in a democracy. But when we know online harms are curtailing people's lives – and sometimes ending them – free speech shouldn't be used as a red card to halt all discussion of how to deal with the issue.
On Thursday morning, the Free Speech Union released a slanted take on the online safety proposals on the back of documents it said had been leaked to it.
"Silencing Kiwis online is not the way to promote social cohesion or build trust. Kiwis will see this work as nothing more than online hate speech laws, and will resist this overreach also," Jonathan Ayling, the union's chief executive, said.
The Act Party jumped in as well, denouncing the proposals as "hate speech laws 2.0".
Putting aside that the proposals won't render illegal anything that is currently legal, we are poorly served by a reflexive retort of 'unfair censorship'. There are almost certainly issues with the proposals, as with any policy, but to rubbish them without offering an alternative beyond "counter-speech" is unserious.
What is the counter-speech to a death threat? What is the counter-speech to people encouraging a child to take their life or develop an eating disorder? What is the counter-speech to a coordinated inauthentic campaign of vicious abuse on the basis of someone's gender identity or sexual orientation or religion?
These are all situations that unfold regularly on social media. They often do not violate the law. They often are not adequately handled by platforms' content moderation teams.
These are also the issues our most thoughtful experts and affected communities are grappling with. They are still processing the detail and implications of the 90-page document rather than firing off hot takes and press releases.
Hopefully the first day of consultation was an aberration, and the serious, nuanced and even heated debate we deserve will play out over the next two months.
I'm not holding my breath.