We’re more inclined to trust ads featuring celebrities, whether it’s billionaire Elon Musk promoting crypto investments or Taylor Swift endorsing elite cookware. Scammers know this and actively exploit it. And thanks to deepfake videos and voice cloning, they’re good at it.
According to a report by the Global Anti-Scam Alliance, international fraudsters stole more than a trillion dollars in 2024. Thanks to artificial intelligence (AI), their “income” is growing, with celebrity deepfakes becoming a go-to trick for fooling unsuspecting people.
For those who are new to the term, “deepfake” is a blend of “deep learning,” which refers to a specific type of AI-based technology, and the word “fake.” The term first surfaced in 2017, when a user posting under the name “deepfakes” shared AI-generated videos with celebrities’ faces superimposed on other people’s bodies. Today, the term is applied broadly to any AI-manipulated media, whether video, audio, or text.
Research reveals that Donald Trump, Elon Musk, and Taylor Swift were among the celebrities with the most deepfake videos in 2024, with more than 12,000, 9,000, and 8,000 fake media items, respectively.
That’s why the HeyLocate team emphasizes the necessity of checking information online and using reverse image search to check the media origin. But first, let’s try to understand how scammers do their job.
Fake Elon Musk Selling Crypto
Amy Nofziger, director of the AARP Fraud Watch Network, reported that many people had lost their hard-earned money by trusting illegal investments or products backed by Elon Musk.
Deepfake detection company Sensity issued a report in 2024 explaining how scammers use deepfake videos of Elon Musk to market bogus investments for a crypto-based platform, Quantum AI.
Scammers took a real 2015 interview with Elon Musk and used AI-powered deepfake technology to change his facial expressions and sync his lips to a fake audio recording. They swapped out his actual words for a scripted endorsement of a bogus investment platform, making it look like Musk was backing a high-return cryptocurrency trading scheme.
It was a highly realistic but entirely fabricated video that was then spread across social media and fraudulent websites. The goal was to persuade victims to invest or transfer their money blindly, getting nothing in return.
Pig Butchering with Fake Brad Pitt and Johnny Depp
You may have heard recently that scammers pulled off an elaborate scheme by posing as a “sick Brad Pitt” to deceive a French woman named Anne, a 53-year-old interior designer. Over 18 months, they gained her trust and ultimately swindled her out of around $865,000.
Meanwhile, a 60-year-old woman from Minnesota was also targeted. Believing she was speaking with Johnny Depp, she ended up losing $1,700 to the fraudsters.
This kind of deepfake scam, called pig butchering, is all about emotional manipulation. Scammers build trust for weeks or even months, using fake identities or AI-generated images to create a convincing relationship. They shower their victims with attention and affection, only to exploit that trust and trick them into handing over large amounts of money.
Fake Taylor Swift and Le Creuset Ad
Some devoted Taylor Swift fans also became victims of celebrity deepfake videos. The scam combined an AI-generated clip with the singer’s cloned voice to promote Le Creuset cookware. Fans were told they could claim the sets of pots by answering a few questions and paying a small shipping fee of $9.96.
However, Le Creuset disavowed the advertisement and denied any partnership with the superstar.
How Does Deepfake Work?
Deepfake clips use AI and machine learning (ML) technologies to clone a person’s face, body, and voice. Fraudsters feed in people’s original voice recordings to generate videos in which they say things they never actually said, and they mimic the original mouth movements and facial expressions so the result looks real. Sometimes, scammers need only a single picture and a few voice recordings to create a deepfake video.
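At the heart of many face-swap deepfakes is an autoencoder trick: a shared encoder compresses any face into a compact code capturing pose and expression, while a separate decoder per person reconstructs that person’s face from the code. Swapping means encoding one person’s frame and decoding it with someone else’s decoder. The toy sketch below illustrates only the structure of that pipeline; the encoder, decoders, and the “identity offset” are stand-in placeholders, since real systems learn these mappings with deep neural networks trained on thousands of frames.

```python
# Toy sketch of the shared-encoder / per-person-decoder idea behind
# face-swap deepfakes. Illustrative only: these functions are stand-ins
# for learned neural networks.

def encoder(face_pixels):
    # Stand-in for a learned encoder: compress the "image" into a few
    # coarse features (here, simple block averages).
    block = len(face_pixels) // 4
    return [sum(face_pixels[i * block:(i + 1) * block]) / block
            for i in range(4)]

def make_decoder(identity_offset):
    # Stand-in for a learned per-person decoder: re-expand the latent
    # code and stamp on that person's "identity" (a constant shift here).
    def decoder(latent):
        return [v + identity_offset for v in latent for _ in range(4)]
    return decoder

decode_as_A = make_decoder(identity_offset=10.0)   # "trained" on person A
decode_as_B = make_decoder(identity_offset=-10.0)  # "trained" on person B

frame_of_B = [float(i % 7) for i in range(16)]     # a fake 16-pixel frame

latent = encoder(frame_of_B)   # B's pose/expression, stripped of identity
swapped = decode_as_A(latent)  # the same pose, rendered "as" person A
```

Because the encoder is shared, it learns only what the faces have in common (pose, lighting, expression), while each decoder memorizes one identity, which is exactly why the swapped output keeps the victim’s movements but wears the celebrity’s face.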
Such material can spread rumors, illegitimate ads, and misleading information. Because a deepfake video looks real, it can sway a person’s decisions, such as how they vote, where they invest, or what products they buy.
The real danger isn’t just financial loss; deepfakes can destroy someone’s life, especially in the case of deepfake pornography. Kristen Bell and Scarlett Johansson fell victim to this, and one of the fake videos gained over 1.5 million views.
Even more shocking, as the HeyLocate research on sextortion revealed, is that scammers use deepfake pornography against young people to blackmail them and extort money. Unlike celebrities, teens don’t have personal lawyers or the emotional resilience to fight back against such scammers. Tragically, this has even led to teen suicides.
How to Spot a Deepfake?
Realizing the potential danger of AI-backed deepfake technology, one must know the strategies to protect oneself from falling prey to such scams.
1. Check the official websites of celebs.
With the ongoing technological revolution, it is important not to believe everything you hear or see. Instead, do a cross-check. Almost every celebrity has an official social media account or website. If you see a famous personality promoting an investment scheme, opportunity, or anything similar, always verify it through their official channels.
2. Look up any contacts from the ad.
If the ad lists any phone numbers, look them up to check whether they are legitimate. This way, you can protect yourself and your loved ones from being scammed. You can run a free scammer phone number lookup through various apps and tools available on the market.
3. Do a reverse image search to check the origin of a photo with a celeb.
For suspicious images or videos, utilize reverse image search tools to trace the origin of the media. This can reveal if the content has been manipulated or is being used elsewhere without authorization.
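Under the hood, reverse image search engines match near-duplicate images using perceptual fingerprints rather than exact file comparison, so a lightly edited or recompressed photo still matches its original. The sketch below shows one of the simplest such fingerprints, an “average hash”: shrink the image to a tiny grayscale grid, turn each cell into a bit (brighter than average = 1), and call two images a match when their bit strings differ in only a few places. Real services use far more robust fingerprints, but the matching idea is the same.

```python
# Minimal "average hash" sketch: a perceptual fingerprint that survives
# small edits, the core idea behind near-duplicate image matching.

def average_hash(gray_grid):
    """gray_grid: 2D list of grayscale pixel values (e.g. a shrunk 8x8)."""
    pixels = [p for row in gray_grid for p in row]
    mean = sum(pixels) / len(pixels)
    # One bit per pixel: 1 if brighter than the image's own average.
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    # Number of differing bits; small distance = likely the same image.
    return sum(a != b for a, b in zip(h1, h2))

original  = [[10, 200], [200, 10]]   # a tiny 2x2 "photo"
tweaked   = [[12, 198], [205, 10]]   # same photo, slightly edited
unrelated = [[200, 10], [10, 200]]   # a different image

dist_edit = hamming(average_hash(original), average_hash(tweaked))
dist_other = hamming(average_hash(original), average_hash(unrelated))
```

Here the lightly edited copy hashes identically to the original while the unrelated image does not, which is why a reverse image search can surface the source photo even after a scammer crops, recompresses, or re-uploads it.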
4. Be cautious.
Ultimately, just be cautious of unsolicited messages, especially those requesting personal information or money. If something seems too good to be true, it likely is.
What’s Next?
Currently, there is an acute dearth of legislation concerning the governance and regulation of AI use and its development. However, more than 120 AI bills covering a wide range of topics are pending approval in the US Congress.
To tackle deepfakes specifically, a bill called the “No Fakes Act” has been proposed, but it has yet to be passed. This bill would create a federal intellectual property right to a person’s voice and likeness.
Meanwhile, the fight against deepfake scams falls to tech companies, law enforcement, and the public. Social media platforms and AI developers should introduce watermarking, better detection tools, and stricter content policies. But for now, we must stay sharp and question what we see online before taking it at face value.