The Guardian - US
Technology
Kari Paul

High-profile women on Instagram face ‘epidemic of misogynist abuse’, study finds

Researchers examined thousands of direct messages to women and found a wave of misogynistic abuse. Photograph: Rafael Henrique/SOPA Images/REX/Shutterstock

A new report analyzing thousands of direct messages sent to high-profile women on Instagram has uncovered what researchers describe as “systemic” failures to protect women in the public eye from “misogynist harassment”.

The report, released by the Center for Countering Digital Hate (CCDH), analyzed thousands of messages sent to five well-known Instagram users: actor Amber Heard, UK television presenter Rachel Riley, activist Jamie Klingler, journalist Bryony Gordon, and magazine founder Sharan Dhaliwal.

Researchers examined a cache of 8,717 messages from the women who participated and say they uncovered a wave of misogynistic abuse, pointing to various ways in which Instagram, owned by Meta, has been “negligent” in addressing the problem.

Concerns include the fact that users are unable to report abusive voice notes sent directly to them, that users cannot report messages sent in “vanish mode” – in which an image is shown only briefly before disappearing – without viewing them, and that users struggle to download evidence of abusive messages. Instagram also allows strangers to place voice calls over direct message to women they don’t know. Faith Eischen, a Meta spokeswoman, said that when a user receives a voice call on Instagram from someone they do not follow, the call is displayed as a request that must be accepted before it can be heard.

“You will only receive calls from people you don’t follow after you’ve accepted their DM request,” she said.

“There is an epidemic of misogynist abuse taking place in women’s DMs,” said Imran Ahmed, CCDH chief executive director. “Meta and Instagram must put the rights of women before profit.”

The messages analyzed in the study were obtained via Instagram’s data download feature, which allows users to access a record of the messages sent by strangers requesting to message them directly. However, in some cases participants who had previously blocked abusive users could not access the full “requests” dataset. As a result, only Rachel Riley, Jamie Klingler, and Sharan Dhaliwal were able to see a full analysis of their direct message history.

Sharan Dhaliwal, founder of Burnt Roti magazine. Photograph: Alecsandra Raluca Drăgoi/The Guardian

CCDH said such difficulties in accessing data are not limited to the study’s participants and point to a broader problem in addressing widespread harassment online. Heard, who received “error” messages when trying to download her own data, said Instagram’s reporting function is “not user friendly, not intuitive, not common sense based.”

Although the study surveyed the DMs of just a handful of high-profile users, researchers say the findings underscore a wider reality for women on the platform, where death threats, harassment, and unsolicited nude videos and images are sent without recourse.

Cindy Southworth, Meta’s head of women’s safety, said that while the company disagrees with “many of the CCDH’s conclusions”, it does agree that “the harassment of women is unacceptable”.

“That’s why we don’t allow gender-based hate or any threat of sexual violence, and last year we announced stronger protections for female public figures,” she said.

Southworth highlighted measures including a separate request inbox for messages from unknown senders and tools to filter messages with common abusive words and phrases.

Despite those measures, the CCDH study chronicled dozens of examples of users sending women pornography, nude images of themselves, and pornography edited to feature other people’s faces, known as “deepfakes”, without consent.

Aside from the “image-based” sexual abuse, women also received countless violent messages and more specific death threats. Heard said she frequently receives messages imploring her to kill herself, or threatening her baby – to the extent that she no longer feels safe reading any of her messages.

Instagram’s community guidelines ban violence and attacks on anyone “based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases”. Such messages are supposed to be taken down immediately, but the CCDH study found this often was not the case.

The study showed one in every 15 of the thousands of DMs analyzed violated such rules. It also showed that Instagram failed to act on nine in 10 accounts sending violent threats over DM.

It is possible that a larger volume of abuse against famous women goes unaddressed, as Instagram’s guidelines note that the platform will “generally allow stronger conversation around people who are featured in the news or have a large public audience”.

Amber Heard, a study participant, in Santa Monica. Photograph: Lucas Jackson/Reuters

But for many users, harassment on Instagram can have a chilling effect. In 2020, the largest ever global survey on online violence found that one in five girls (19%) have left or significantly reduced use of a social media platform after being harassed, while another one in 10 (12%) have changed the way they express themselves. Several women in the study said they no longer feel safe using Instagram.

That aversion can be particularly damaging for the many women who rely on the platform for their livelihoods, including for financial and professional connections and sponsorships.

“Digital spaces provide increasingly important ways to maintain relationships, communicate and build personal brands,” Ahmed said. “For women, however, the cost of admission to social media is misogynist abuse and threats sent by abusers with impunity.”

Meta has in the past taken steps toward removing the accounts of users who send abusive messages. In February 2021 it began allowing users to block abusive accounts from sending further messages for a set period, disabling accounts that repeatedly send abusive DMs, and disabling new accounts created to get around those measures.

But the study’s participants say this is not enough. Gordon, who has been open about her struggles with eating disorders, said she gets ceaseless abusive messages about her weight.

“When we look back on this period of our completely unboundaried use of social media we’ll look back with the same horror as we do adverts with the Marlboro man saying smoking is good for you,” she said.
