Artificial intelligence (AI) is slowly gaining traction and becoming a part of our everyday lives. But the world has mixed feelings about it. Some see it as a positive driving force for innovation and creativity, while others worry about it being used to spread fake news or misinformation.
One of the biggest concerns about AI is the creation and use of deepfakes. Although they can be made for entertainment, they’re sometimes used for more sinister purposes, and this woman became the victim of one such deepfake ad.
More info: TikTok
Woman shares worrying account of how her likeness was used without her consent in a deepfake ad to sell pills to cure impotence
Michel Janse, a 27-year-old YouTuber and TikToker, shared the harrowing account of finding a deepfake ad of herself online
She told her followers that the incident happened while she was on her honeymoon, when concerned friends and family came across the ad and shared it with her. The company promoting the pills had pulled her image from one of the videos on her YouTube channel. She also said, “This ad was me in my bedroom, in my old apartment in Austin, wearing my clothes, talking about their pill. The only thing is, it wasn’t my voice.”
The idea behind showing the clip of the AI video was to warn her followers about the dangers of deepfakes and how “we need to question everything we see.” As Michel pointed out, “someone that you know could be in a video saying something to you, looks exactly like them, and it could be completely fabricated.” In other words, content we come across online can no longer be taken at face value.
“The internet is changing fast, I guess, trust no one, believe nothing on the internet, it’s just a motto to live by for a while”
It’s difficult to know how to deal with unethical deepfakes like this one, especially if they’ve used your image without your consent. To get an expert opinion, Bored Panda reached out to Professor Siwei Lyu, Ph.D. He is a SUNY Empire Innovation Professor in the Department of Computer Science and Engineering at the University at Buffalo. Dr. Lyu’s areas of interest include digital media forensics, computer vision, and machine learning.
He explained the steps Michel could take to deal with the ad. Dr. Lyu said, first, one must “ensure that all evidence is thoroughly documented, including time stamps, and securely stored. Notify social media platforms and request the removal of the deepfake content. If [it] is of a criminal nature (e.g., involving explicit content or causing financial harm), consider pursuing legal action and report the incident to federal or local law enforcement agencies.”
Hear the full story from Michel’s perspective
@michel.c.janse: “storytime: AI stole my likeness and created a deepfake of me ✌🏼😅 believe nothing 🫡” (♬ original sound – Michel Janse)
AI has reached a whole new level of sophistication, and it’s become harder to figure out what’s real
Deepfakes are created with artificial intelligence and deep learning algorithms, and typically take the form of videos, audio, or images. These algorithms are now so advanced that they can alter or replace existing content seamlessly. In fact, one study found that only 46% of adults could tell the difference between real and AI-generated content.
Research even states that we can expect to see around 8 million digitally manipulated videos like this posted online by 2025. If Michel had not shared her story, people would have believed that she had actually starred in that ad and endorsed those pills. But luckily, as soon as she learned about the video, she decided to inform people about it.
Professor Lyu explained how someone could figure out if a video they came across was AI-generated. He said, “the person could know whether it was them who made the specific video recording. If they can find the original video that was used to make the deepfake video, then it is [stronger] evidence. For someone who is not the subject or knows the subject, it is generally difficult to spot the deepfake.”
“One can notice some artifacts in deepfake videos, especially those made with software and did not undergo manual cleanup operations. This may include blurry mouth regions or lack of synchronization between mouth movement and voices (signs of the video being a lip-syncing one). One can also use available deepfake detection tools (Reality Defender, Deepware scanner, etc),” he mentioned.
As Michel stated, there isn’t exactly a guidebook for us to figure out what’s fact and what’s fiction. In 2018, researchers found that subjects in deepfakes don’t normally blink. But soon after that research was published, digitally altered videos began appearing in which the subjects did blink. It shows just how quickly deepfake creators adapt once a telltale flaw is exposed.
Poor-quality AI videos are easier to spot. Just as Dr. Lyu mentioned, there might be bad lip-syncing, blurriness around the person’s face, or general inconsistencies in their movement. Sometimes, the person’s skin tone may also appear patchy, their teeth or hands might be badly rendered, and even weird lighting effects can be observed.
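One of the telltale signs mentioned above, blurriness around the face, can be roughly quantified in software. As a purely illustrative sketch (not any tool the article or Dr. Lyu describes), the variance of a Laplacian filter is a classic, crude sharpness measure: blurred regions suppress high frequencies, so their Laplacian variance drops. Here's a minimal NumPy version on synthetic data:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian over the image interior.
    Low values suggest a blurry (low-detail) region."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))  # stand-in for a detailed image patch

# Crude 3x3 box blur built from shifted copies of the patch
blurred = sum(np.roll(np.roll(sharp, i, axis=0), j, axis=1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

# The blurred patch scores far lower on the sharpness measure
assert laplacian_variance(blurred) < laplacian_variance(sharp)
```

Real detectors are far more sophisticated, but the same intuition applies: a face region that is consistently softer than the rest of the frame is worth a second look.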
The problem is that people consume massive amounts of content every day. Not everyone has the patience to stop and figure out whether a video has been digitally altered; many will simply believe whatever they’re shown. That’s why regulations should be placed on companies to protect people from such media.
Dr. Lyu stated: “First, companies offering generative AI technology should integrate provenance features, such as watermarking, into their tools. This will allow media created using their technology to be reliably traced back to the source.”
“Second, these companies must ensure that deepfakes are not created using a person’s likeness or voice without their explicit consent. The original subject must agree to both the use of their identity and the specific message the deepfake is intended to convey,” he added.
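To make the provenance idea concrete: one simple (and, on its own, fragile) family of techniques embeds a hidden identifier in the pixels themselves. The sketch below is a hypothetical least-significant-bit watermark in NumPy, shown only to illustrate the concept; production systems like those Dr. Lyu alludes to use far more robust, tamper-resistant schemes:

```python
import numpy as np

def embed_watermark(pixels, bits):
    """Write watermark bits into the least significant bit of the
    first len(bits) pixels. Changes each pixel value by at most 1,
    so the edit is visually imperceptible."""
    out = pixels.copy()
    flat = out.ravel()  # view into `out`, so writes stick
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return out

def extract_watermark(pixels, n):
    """Read back the first n watermark bits."""
    return pixels.ravel()[:n] & 1

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # toy image
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

stamped = embed_watermark(img, mark)
assert (extract_watermark(stamped, 8) == mark).all()
```

The catch, and the reason real provenance systems go well beyond this, is that an LSB mark is destroyed by recompression or resizing; robust watermarks and signed metadata standards are designed to survive exactly that.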
Until proper laws are created to protect people from digitally manipulated videos like this, we must take matters into our own hands. Raise the alarm if you come across unethical deepfakes, and always remember… constant vigilance!
Netizens were worried after listening to Michel’s story, and many urged her to sue the company that made the video
Image credits: ThisIsEngineering (not the actual photo)