The Street
Samuel O'Brient

Dating app company makes shocking change to enhance user safety

Almost every woman who has ever used a dating app seems to have walked away with at least one horror story.

One anonymous BuzzFeed (BZFD) user recalls the time a man she had matched with told her he had multiple other dates scheduled for the same day before claiming he could communicate with dolphins in a “special language.” Another reports that a man she had recently matched with attempted to scale her fire escape and break into her apartment.

Examples of strange, abusive, or genuinely frightening behavior by men on dating apps abound across almost every social media channel. Entire Reddit (RDDT) forums are dedicated to horror stories from specific dating apps, and a popular Facebook (META) group invites women to find out if they are dating the same guy.

After years of persistent problems, the company that dominates the industry has finally announced plans to address them.

Dating apps owned by Match Group have come under fire recently as female users report abusive behavior from men (Photo by Alicia Windzio/picture alliance via Getty Images)

Dating app leader thinks technology can solve its safety problems

If you’ve ever used a dating app, it’s probably owned by Match Group (MTCH). The company’s holdings include Match.com, Tinder, Hinge, OkCupid, and Plenty of Fish, to name just a few.

According to 2024 data from Statista, roughly 60 million people in the United States use dating apps, and Match Group’s platforms are among the most popular. But while these platforms have helped revolutionize dating, they have also created significant problems, as women consistently report abusive behavior from the men they match with.

One study indexed by the National Library of Medicine reveals that “sexual harassment when using dating apps is prevalent and ranges between 57 and 88.8%, with two populations being at higher risk: women and individuals who identify as a sexual minority.”

Match Group seems to believe it has found a way to curb toxic male behavior. The company has announced that it is using artificial intelligence (AI) to detect potentially inappropriate messages sent on its platforms.

In this case, “inappropriate” refers to messages containing language that is either overly sexual or demeaning. Match Group describes the feature as part of an initiative to prompt male users to behave more politely, which will ideally help female users feel more at ease.

According to the Financial Times, “when a user types an ‘off-colour’ message, Match’s apps will generate an automated prompt asking them if they are sure they want to send it.” The company claims that one-fifth of users who receive these prompts reconsider what they were about to send.
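In practical terms, what the FT describes amounts to a simple flag-and-confirm gate: score the outgoing message, and if it crosses a threshold, ask the sender to confirm before delivering it. Here is a minimal Python sketch of that pattern; the threshold, keyword check, and function names are illustrative assumptions, not Match Group’s actual system, which would presumably rely on a trained model rather than a word list.

```python
# Minimal sketch of a flag-and-confirm message gate. All names and
# values here are hypothetical; Match Group has not published its code.

FLAG_THRESHOLD = 0.5  # assumed cutoff for an "off-colour" message


def toxicity_score(message: str) -> float:
    """Stand-in for a trained classifier that scores a message 0.0-1.0."""
    # A production system would use a language model; this placeholder
    # checks a tiny keyword list just so the sketch runs on its own.
    flagged_terms = {"ugly", "stupid"}  # illustrative only
    words = [w.strip(".,!?") for w in message.lower().split()]
    return min(1.0, 0.5 * sum(w in flagged_terms for w in words))


def try_send(message: str, confirm) -> bool:
    """Deliver a message, but ask the sender to confirm if it is flagged."""
    if toxicity_score(message) >= FLAG_THRESHOLD:
        if not confirm("Are you sure you want to send this?"):
            return False  # the sender reconsidered
    print(f"delivered: {message}")
    return True


if __name__ == "__main__":
    # One sender who backs out when prompted, and one clean message.
    try_send("you are ugly", confirm=lambda prompt: False)
    try_send("hey, how was your weekend?", confirm=lambda prompt: True)
```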

Even if that figure is accurate, it means the other four-fifths of users proceed with messages flagged as inappropriate. And that isn’t the only concerning element of Match Group’s plan.

An 18-month investigation by the Pulitzer Center’s AI Accountability Network and The Markup recently revealed that the company has a history of failing to act when a male user is accused of sexual misconduct. It highlighted the story of a Denver, Colo., cardiologist who remained on Hinge despite multiple reports of sexual abuse from women he had met on the app.

AI may not be able to solve problems that go as deep as Match Group’s

Match Group could not be reached for comment on this story or to explain how its leaders expect AI to address these problems.

However, a lack of technology may not be the primary factor holding the company back from implementing effective solutions.

“While Match Group has long possessed the tools, financial resources, and investigative procedures necessary to make it harder for bad actors to resurface, internal documents show the company has resisted efforts to spread them across its apps, in part because safety protocols could stall corporate growth,” the report states.

Granted, this isn’t the first time digital dating platforms have attempted to use AI to enhance user safety. Rival dating app Bumble (BMBL) has been doing so for years, with tools that can automatically blur nude photos and flag profiles that may be engaging in inappropriate behavior.
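That kind of photo screening follows a similar gate pattern: score the incoming image, blur it when flagged, and let the recipient decide whether to reveal it. The sketch below assumes a hypothetical classifier and data model; it is not Bumble’s actual Private Detector code.

```python
# Simplified sketch of a photo-screening gate. The classifier and the
# threshold are hypothetical stand-ins, not Bumble's production model.

from dataclasses import dataclass

NUDITY_THRESHOLD = 0.9  # assumed confidence cutoff


@dataclass
class IncomingPhoto:
    data: bytes
    blurred: bool = False


def nudity_score(data: bytes) -> float:
    """Stand-in for an image classifier; a real one would be a trained CNN."""
    return 0.0  # placeholder score so the sketch runs on its own


def screen_photo(photo: IncomingPhoto) -> IncomingPhoto:
    """Blur the photo on arrival if the classifier flags it."""
    if nudity_score(photo.data) >= NUDITY_THRESHOLD:
        photo.blurred = True  # recipient must explicitly tap to reveal
    return photo


if __name__ == "__main__":
    photo = screen_photo(IncomingPhoto(data=b"fake-image-bytes"))
    print("blurred:", photo.blurred)
```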

Tools that can stop dating app users from receiving unwanted photos could certainly help enhance overall safety on any platform. But there are still plenty of unknowns when it comes to leveraging AI to make dating apps safer.

CNET recently analyzed this topic, highlighting “the distinct possibility that all these AI tools eventually end up paywalled, accessible only to premium subscribers,” thereby creating a new profit mechanism while leaving some users exposed to abusive behavior.
