Susan Wardell

The intriguing social media response to the Christchurch mosque attacks

A message left at the Wellington Islamic Centre and mosque in Kilbirnie after the Christchurch terrorist attack in 2019. Photo: Lynn Grieveson

A new research report on social media and the Christchurch attacks finds a ‘kinder’ online space in the aftermath was short-lived, but notes the power of everyday users to help shape it 

Comment: As the world reels from yet another racially motivated mass shooting, this time in Buffalo, New York, swift action is being taken to stop the gunman’s messages of hate being spread far and wide on social media.

The livestream of his horrific act on May 14 and his online manifesto have quickly been banned in New Zealand as objectionable material after thousands of interactions on social media.

This harks back to the nation’s own experience on the afternoon of March 15, 2019, when the livestream of the Christchurch mosque shootings first appeared on Facebook. This was viewed just 200 times during the live broadcast, but another 4,000 times before it was removed from Facebook. Over the following 24 hours, there were 1.5 million attempts to re-upload it, along with copies of the gunman's manifesto, also classified as ‘objectionable’.

This month it was announced that the coroner's inquest into the Christchurch attack would investigate the role of social media in the radicalisation of the terrorist in the years leading up to the attack. But as this week reminds us, social media also shaped New Zealanders' experiences and responses in the aftermath.

It was a scary, ugly, beautiful, and fragile time on the internet as, hour by hour, social media users grappled with the question of what was the right and good response to an unprecedented event.

My recent research report focused on a series of in-depth interviews with Aotearoa-based social media users, along with online observations and analysis, and raised questions about how much responsibility social media platforms should take for what goes on there, and how much power everyday users have to shape these spaces.

Many people first became aware of the mosque shootings on social media: receiving a message from a friend online, or seeing something pop up in their feeds, rupturing an otherwise normal Friday afternoon.

An unlucky few tumbled into the livestream.

This was at the same time many New Zealanders were rushing to social media to stay up to date with news, to record their shock and grief, to check in on friends and family, and to reach out to those most affected - through messages and visual tributes, donations to the Givealittle crowdfunding campaign for the victims, and participation in memorial events.

But as people turned to the internet as a tool, there remained a sense of fear and contamination, because there was a lack of control over what could pop up in their feeds - things they could never unsee.

Over a matter of hours, some of the people I interviewed became overwhelmed or felt physically sick from passive exposure to snippets or screenshots of the livestream, or other explicit images, and had to take a break from the internet. Others took more extreme measures to protect themselves, and left one or more of their usual social media platforms for days or even weeks. Some left permanently.

Many of these people also tried to engage with the systems the platforms had in place for reporting objectionable content; an act of care not only for people in their immediate networks, but also for the overall social terrain of the internet.

But they often received minimal or delayed feedback from platforms about their complaint - or none at all - and were left feeling abandoned, disheartened by the “pathetic excuses” platforms were making. As one person said, the platforms were “interested in money and not interested in keeping us safe”.

At the same time, many people observed an unusual shift in the ‘feel’ of social media, as everyday users adjusted their normal patterns of online communication, in recognition of something like a shared period of mourning. One person described social media as becoming quieter, another described it as slower or more “tentative”, and another simply as kinder. This lasted between a few days and a week, depending on who you asked. After that, discussions became more heated and politicised again, although many people recognised the New Zealand response as distinct from typical responses to mass violence in other nations.

Muslim New Zealanders were also paying close attention to all of this: watching not only the news stories about the attack, but also the comments and ‘reacts’ below them, using the metrics of social media to gauge public sentiment, and thus their own relative safety in a nation where this was suddenly no longer a given.

Those I interviewed had a variety of experiences online. One person, whose immediate family were badly injured in the attack, stayed up late with her laptop in the hospital every night for weeks, reporting, one by one, the gruesome images of her loved ones lying injured on the mosque floor that kept recurring online.

Another, in a nearby city, felt fearful to leave her flat wearing her hijab but invested hours into responding individually to thank the non-Muslim Kiwis participating in the #scarfday social media campaign. Another spent time recording the names of people applauding or ‘laughing’ at the attacks on Facebook as the news broke - things that stood out painfully, even amid the tide of care and well-wishes.

This is indicative of a wider trend: that it is often the people most at risk who are also most heavily engaged in managing explicit, hateful, or harmful content online. This includes large amounts of work done by untrained, unpaid individuals who have put up their hands to be admins or moderators of different social media groups or pages, to create safer spaces for others in their communities.

Social media is an important locus of everyday life for huge swathes of the population. Many people want platforms to be held more accountable for what goes on in the spaces they make money from, especially as much of what shapes these spaces - commercial ownership structures and algorithmic architectures - is beyond the control of everyday users.

But the legal frameworks to hold platforms accountable, or intervene in content being circulated, can be slippery and difficult to apply.

The Christchurch Call is continuing to try to bring government and tech companies into conversation about this. The coroner’s inquest may test what is possible in a new way too.

Meanwhile, findings from my report suggest there is a ‘middle layer’ to acknowledge, in which individuals are already taking an active part in shaping online spaces, including in times of crisis.

We should work to understand the power of everyday users, and to recognise and support the work of admins and others, while continuing to ask why platforms are leaving so many gaps for them to fill in the first place.
