Windows Central
Kevin Okemwa

Microsoft wants Congress to pass "a comprehensive deepfake fraud statute" to prevent AI-generated scams and abuse


What you need to know

  • As AI becomes more advanced and sophisticated, deepfakes continue to flood the internet, spreading misinformation.
  • Microsoft calls on the US Congress to pass a comprehensive deepfake fraud statute to prevent cybercriminals from leveraging AI capabilities to cause harm.
  • If passed, the new legal framework would give law enforcement a basis to prosecute AI-generated scams and fraud.

As generative AI tools like Microsoft Copilot and OpenAI's ChatGPT become more advanced and sophisticated, cases of deepfake AI-generated content flooding the internet continue to rise (see Elon Musk this week). Beyond the security and privacy issues dogging the technology's progress, the prevalence of deepfakes continues to erode the authenticity of content online, making it difficult for users to tell what's real.

Bad actors use AI to generate deepfakes for fraud, abuse, and manipulation. A lack of robust regulations and guardrails has contributed to deepfakes becoming widespread. However, Microsoft Vice Chair and President Brad Smith recently outlined new measures the company intends to use to protect the public from deepfakes.

Smith says Microsoft and other key players in the industry have been focused on ensuring that AI-generated deepfakes aren't used to spread misinformation about the forthcoming US Presidential election. 

While the company seemingly has a firm grasp on this front, the top exec says more can be done to prevent the widespread use of deepfakes in crime. "One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans," added Smith. 

Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content. This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.

Microsoft President, Brad Smith

In the same breath, Smith wants policymakers to ensure federal and state laws designed to protect children from sexual exploitation, abuse, and non-consensual intimate imagery include AI-generated content as the technology becomes more prevalent and advanced.

It's all a work in progress

A Terminator-like robot overseeing AI (Image credit: Windows Central | Image Creator by Designer)

Previously, Microsoft CEO Satya Nadella pointed out that there's enough technology to protect the forthcoming US presidential elections from AI deepfakes and misinformation. This is despite several reports highlighting Copilot AI's shortcomings after the tool was spotted generating false information about the forthcoming elections.

Following explicit AI-generated images of pop star Taylor Swift surfacing online, the Senate passed a bill that addresses the issue, giving people depicted in AI-generated explicit content grounds to sue for damages.

Meanwhile, OpenAI rolled out a new strategy designed to help users identify AI-generated content. ChatGPT and DALL-E 3 images are now watermarked, though the startup admits this is "not a silver bullet to address issues of provenance." OpenAI also announced that it was working on a tool to help identify AI-generated images, promising 99.9% accuracy.

