TechRadar
Ellen Jennings-Trace

Microsoft names cybercriminals who created explicit deepfakes


  • A lawsuit against criminal gang Storm-2139 has been updated
  • Four defendants have been named by Microsoft
  • The group is allegedly responsible for creating illegal deepfakes

A lawsuit has partially named a group of criminals who allegedly used leaked API keys from “multiple” Microsoft customers to access the firm’s Azure OpenAI service and generate explicit celebrity deepfakes. The gang reportedly developed and used malicious tools that allowed threat actors to bypass generative AI guardrails to generate harmful and illegal content.

The group, dubbed the “Azure Abuse Enterprise”, is said to comprise key members of a global cybercriminal gang tracked by Microsoft as Storm-2139. The individuals were identified as: Arian Yadegarnia aka “Fiz” of Iran, Alan Krysiak aka “Drago” of the United Kingdom, Ricky Yuen aka “cg-dot” of Hong Kong, China, and Phát Phùng Tấn aka “Asakuri” of Vietnam.

Microsoft’s Digital Crimes Unit (DCU) originally filed a lawsuit against 10 “John Does” for violating US law and the acceptable use policy and code of conduct for its generative AI services; the complaint has now been amended to name and identify the individuals.

A global network

This is an update to the previously filed lawsuit, in which Microsoft outlined its discovery of the abuse of Azure OpenAI Service API keys. As part of that action, a GitHub repository was pulled offline, and the court allowed the firm to seize a domain related to the operation.

“As part of our initial filing, the Court issued a temporary restraining order and preliminary injunction enabling Microsoft to seize a website instrumental to the criminal operation, effectively disrupting the group’s ability to operationalize their services.”

The group is organized into creators, providers, and users. The named defendants reportedly used customer credentials scraped from public sources (most likely exposed in data leaks) to unlawfully access accounts with generative AI services.

“They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content,” said Steven Masada, Assistant General Counsel at Microsoft’s DCU.
