
AI bots will soon be rolled out on dating apps to flirt with people, craft messages on users’ behalf and write their profiles for them.
But depending on artificial intelligence to foster budding relationships risks eroding what little human authenticity is left on dating platforms, experts have warned.
Match Group, the technology company with the world’s largest portfolio of dating platforms, including Tinder and Hinge, has announced it is increasing investment in AI, with new products due this month. AI bots will be used to help users choose which photographs will be most popular, write messages to people and provide “effective coaching for struggling users”.
But those “struggling users”, who may lack social skills and come to rely on AI assistants to craft conversations for them, may find themselves at a loss on real-life dates, without a phone to help them converse. This could lead to anxiety and a further retreat into the comfort of the digital space, a group of academics has claimed. It could also erode users’ trust in the authenticity of others on the app. Who is using AI, and who is a genuine, flesh-and-blood human tapping away behind the screen?
Dr Luke Brunning, a lecturer in applied ethics at the University of Leeds, has coordinated an open letter calling for regulatory protections against AI on dating apps. He believes that trying to solve the social problems caused by technology with yet more unregulated technology will only make things worse, and that automated profile enhancement entrenches a dating app culture in which people feel they must constantly outperform others to win.
“Many of these companies have correctly identified these social problems,” he said. “But they’re reaching for technology as a way of solving them, rather than trying to do things that really de-escalate the competitiveness, [like] make it more easy for people to be vulnerable, more easy for people to be imperfect, more accepting of each other as ordinary people that aren’t all over 6ft [tall] with a fantastic, interesting career, well written bio, and constant sense of witty banter. Most of us just aren’t like that all the time.”
He is one of dozens of academics from across the UK, as well as the US, Canada and Europe, who have warned that the hasty adoption of generative AI “may degrade an already precarious online environment”. AI on dating platforms risks multiple harms, they say, including worsening the loneliness and youth mental health crises, exacerbating biases and inequality, and further eroding people’s real-life social skills. They believe the explosion of AI features on dating apps must be regulated quickly.
In the UK alone, 4.9 million people use dating apps, with at least 60.5 million users in the US. Around three-quarters of dating app users are aged 18-34.
Many single people say that it has never been more difficult to find a loving relationship. Yet the letter warns that dating app AI risks degrading the landscape even further: making manipulation and deception easier, reinforcing algorithmic biases around race and disability, and homogenising profiles and conversations even more than they currently are.
But proponents of dating app AI say that assistants and “dating wingmen”, as they’re known, could help reduce dating app fatigue, burnout and the admin of trying to set up dates. Last year, product manager Aleksandr Zhadan programmed ChatGPT to swipe through and chat to more than 5,000 women on his behalf on Tinder. Eventually, he met the woman who is now his fiancée.
Brunning says he isn’t anti-app, but believes the apps currently work for corporations rather than for the people on them. He is frustrated that the digital dating sector receives so little scrutiny compared with other areas of online life, such as social media.
“Regulators are waking up to the need to think about social media, and they’re worrying about the social impact of social media, its effect on mental health. I’m just surprised that dating apps haven’t been folded into that conversation.
“In many respects, [dating apps] are very similar to social media”, he said. “In many other respects, they’re explicitly targeting our most intimate emotions, our strongest romantic desires. They should be drawing the attention of regulators.”
A Match Group spokesperson said: “At Match Group, we are committed to using AI ethically and responsibly, placing user safety and well-being at the heart of our strategy... Our teams are dedicated to designing AI experiences that respect user trust and align with Match Group’s mission to drive meaningful connections ethically, inclusively and efficiently.”

A spokesperson for Bumble said: “We see opportunities for AI to help enhance safety, optimise user experiences, and empower people to better represent their most authentic selves online while remaining focused on its ethical and responsible use. Our goal with AI is not to replace love or dating with technology, it’s to make human connection better, more compatible, and safer.”
Ofcom highlighted that the Online Safety Act does apply to harmful generative AI chatbots. An Ofcom spokesperson said: “When in force, the UK’s Online Safety Act will put new duties on platforms to protect their users from illegal content and activity. We’ve been clear on how the Act applies to GenAI, and we’ve set out what platforms can do to safeguard their users from the harm it poses by testing AI models for vulnerabilities.”