Volunteers from marginalised communities put hours into combating online hate speech but are mostly untrained. Now there's a free resource for them from the University of Otago
Opinion: Social media platforms are a key site of community for many people. Along with grouping people around interests or hobbies, most platforms host a range of ‘identity-based’ communities, including groups or pages related to sexuality, gender identity, shared experiences of disability or health problems, and/or neuro-identities. These spaces can be safe havens for people who are often targets in the more public parts of the internet.
However, the social media admins and moderators who do the key work of combating harmful content in these spaces are unpaid and mostly untrained.
They are often on the frontline of responding to dangerous speech and hate speech, but aren’t acknowledged, even by the organisations and ministries that usually support volunteers or community workers.
Research in Aotearoa New Zealand indicates that a significant number of people experience or witness harmful speech online. In 2019, 15 percent of New Zealand adults reported having been personally targeted with hate speech in the previous year, and 30 percent had seen online hate speech that targeted someone else. Online hate is more prevalent for people from ethnic minorities, and a Netsafe report confirms that disabled people and members of the rainbow community are also targeted at higher rates.
Legal protections and their limits
Hate speech laws vary between countries. In New Zealand, the law only prohibits harmful, threatening or insulting speech likely to “excite hostility against” or “bring into contempt” people based on their race, skin colour, or national origin.
Last year, the Government proposed expanding these legal protections to cover rainbow/LGBTQIA+ people, disabled people, women, and religious groups. But under the recently announced amendment, only religious groups will be added to the list of protected groups. This has drawn disappointed responses, even though the Government has asked the Law Commission to continue reviewing the case for extending protections to further groups.
Some other forms of harmful speech are covered by a different framework, the Harmful Digital Communications Act (HDCA). However, the HDCA is limited to digital communications sent with the intent to cause harm to a specific individual. This means many instances of harmful speech fall outside our legislative frameworks.
Beyond these legal frameworks, the concept of ‘dangerous speech’ focuses on identifying the hallmarks of speech that has historically led to violence. This year, InternetNZ put out a round of small grants focused on this area. Our research team used one of these grants to analyse the role of social media admins in responding to hate speech, dangerous speech, and other forms of harmful speech online.
Working with admins from online rainbow and disability communities, we set out to explore the skills and strategies they already use for dealing with the murky side of the internet. We called it the Tagging In Project.
What do social media admins do?
Most social media platforms require that a group or page have one or more people nominated as ‘admins’ or ‘moderators’. This role is different from the job of ‘content moderators’, who are trained and employed by platforms or news agencies to detect and remove offensive or inappropriate posts and comments. Rather, these are unpaid, untrained individuals who volunteer to do the general upkeep of a group (admitting new members, approving posts, setting its purpose and rules, and so on) and, in doing so, often take on a broader leadership role in their communities.
Social media admins and moderators regularly deal with hate speech, dangerous speech, misinformation, interpersonal conflict, mental health crises, and more. To do this well, they develop knowledge of a spectrum of common issues in online spaces, and of communication and online behaviour more generally. We tracked some of the problems they regularly deal with in their spaces.
There are also issues specific to their communities that admins must develop a nuanced feel for. For example, ableism appears in slippery ways in discussions of Covid-19 and vaccines, harmful narratives promote ‘cures’ or therapies for autism, and anti-trans rhetoric sometimes circulates in otherwise feminist spaces. Even the basics can be tricky, with language shifting and evolving rapidly: some terms used freely by older people are seen as offensive by younger ones, and vice versa.
There are also difficult decisions about when to leave problematic posts, threads, or comments up to allow for discussion and education, and when to delete them. There are similar difficulties in assessing whether someone wants to engage with a tricky topic in good faith, or is trolling or sealioning.
Admins have to learn platform features, and more general social strategies, to deal with these challenges, typically without any training or support. They often face burnout, exhaustion, and overwhelm as they try to set personal boundaries while doing their best to serve their communities.
The Tagging In Project
There is no measure of the amount of unpaid labour that goes into this work, nationally or worldwide. It is work that typically goes unacknowledged, even by organisations that work with and support community volunteers. There are no courses, conferences, or publications designed to train or support social media admins and mods.
With this in mind, our team has gathered the knowledge and experiences of the admins we interviewed and met with, and curated them into a new, free online resource.
The resource includes definitions of hate speech and dangerous speech, and suggested pathways for responding to the more obvious or extreme issues online. But it covers many other common issues and considerations too, making clear that the legal frameworks for hate speech and harmful digital communications only cover so much, and that hate speech law protects only some groups of people.
It is important to take legal reforms seriously, as a chance to make the law more useful for more people. But it is also important to acknowledge the breadth of slippery issues these communities face, online and off, that cannot be legislated against in straightforward ways. And it is therefore important to support the volunteers who are already using considerable skill and knowledge to respond to these challenges.