Facebook, YouTube and other platforms are under pressure to disclose how content is moderated.
A report commissioned by the NZ Super Fund recommends investors demand greater transparency from social media companies following the Christchurch mosque terror attacks.
In response to the March 2019 shootings, the fund led a coalition of 105 global investors representing NZ$13.5 trillion to urge the owners of Facebook, Google, YouTube and Twitter to strengthen controls to prevent the live-streaming and dissemination of objectionable content.
The news comes as the tech-heavy Nasdaq index and new media companies like Facebook endure severe pressure from investors, spooked by changing community needs and expectations. Facebook shares are down 43 percent from last year's high, alongside other companies that rode the tech-friendly work-from-home wave.
The Super Fund commissioned the tech consultancy Brainbox Institute to analyse whether the platforms had made adequate changes.
Brainbox’s research found the measures put in place by the companies were likely to reduce the scale at which objectionable material was spread online, but were unlikely to prevent its spread entirely.
Curtis Barnes, research director at Brainbox, told Newsroom pressure still needed to be placed on social media companies to be transparent about their content moderation processes.
Barnes said the best way forward was for regulation to require internet companies to publicly release data and information on their content moderation processes, so that external researchers could analyse and scrutinise it.
“The best course of regulation is around creating a system that demands the platforms to provide this,” he said.
He cautioned against heavy-handed approaches that punished platforms for displaying objectionable content, or moderation processes that overly relied on artificial intelligence automation.
There was a trade-off between the speed at which objectionable content could be removed from a platform and the accuracy with which such content was identified and classified.
Barnes said the platforms “drip-fed” information about their processes, such as Facebook employing more moderators to spot offensive content, but context was needed to gauge how effective these measures were at accurately identifying objectionable material.
The information shared could include data on the amount of content identified as potentially in breach of standards, and how much of this was accurately identified as objectionable and removed. He also wanted to know the number of false positives, where acceptable content was mistakenly flagged and removed by the platform.
Barnes said this was the best way to regulate the companies without taking an aggressive and punitive approach that could violate human rights by removing content and impeding freedom of expression.
The chief executive of InternetNZ, Jordan Carter, agreed that more transparency would help governments and independent researchers get a true picture of the inner workings of the companies, rather than relying on the information the companies voluntarily shared.
Regulators would want to avoid a situation where platforms were pushed to overreach and use broad content removal tools to keep on the right side of the law, he said.
This could lead to scenarios where media outlets had stories mentioning a topic taken down, or discussion of a topic on social media was removed.
An NZ Super Fund spokesperson, Conor Roberts, said its initiative focused on restricting the dissemination of objectionable material, such as the video of the Christchurch terror attack, by engaging with social media companies.
“The outcome was that while substantial progress has been made on this specific issue, the companies still have work to do to ensure they are operating in a socially responsible manner,” he said.
While the initiative focused on objectionable material and did not look at wider issues relating to content and privacy, the NZ Super Fund pointed out that failure to meet social expectations could result in regulatory action and reputational damage.
"We remain an active shareholder on responsible investment issues and will continue to advocate for improvements in the practices of social media companies," Roberts said.
NZ Super Fund’s responsible investment framework was in line with international standards, he added, including the OECD Guidance on Responsible Business Conduct for Institutional Investors and UN Principles for Responsible Investment.
Facebook’s parent company, Meta, said it publishes regular transparency reports sharing some information on content removal. The platform said it was working on further voluntary transparency reporting in New Zealand, but did not specify what this would involve.
Facebook's most recent Community Standards Enforcement Report, covering July to September 2021, said more than 95 percent of the 13.6 million pieces of content removed for violating its violence and incitement policy were detected before they were reported by users.
YouTube said it started releasing some data in 2018 and regularly shared updates on content removals, including the total number of videos removed, how the content was first identified, and why it was removed.
In its latest data, covering July to September 2021, the platform said it removed 4.8 million channels, 6.2 million videos, and 1.1 billion comments that violated its community guidelines. Of the videos removed, about 251,000 (four percent) promoted violence or violent extremism.
Newsroom has also approached Twitter for comment on this story.