The Street
Ian Krietzberg

Microsoft engineer says company was 'aware of the potential for abuse' before viral Taylor Swift deepfakes

Months before deepfaked, sexually explicit images of Taylor Swift proliferated across social media, the same thing was happening at a high school in New Jersey.

The victims of that incident, however, were not Swift but a group of around 30 high school girls.

Other students, using artificial intelligence image generators, created and spread fake pornographic images of the girls.

"New technologies have made it possible to falsify images and students need to know the impact and damage those actions can cause to others," the school's principal wrote in a letter to parents

Related: Deepfake porn: It's not just about Taylor Swift

Not long after the incident, a Stanford investigation found hundreds of examples of child sexual abuse material (CSAM) within one of the datasets used to train many AI image generators, including Stability AI's Stable Diffusion.

The widespread implications of these "new technologies" are beginning to surface, with instances of deepfake fraud, AI-enhanced tax fraud and deepfake political and electoral misinformation already on the rise. According to one cybersecurity expert, this marks the beginning of a new era of identity theft, one he calls "identity hijacking."

Indeed, researchers at the Center for Countering Digital Hate (CCDH) said in a report Wednesday that such image-generation tools produced electoral misinformation in 41% of their test cases.

"The potential for such AI-generated images to serve as 'photo evidence' could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections," the report says. 

Those results come just weeks after 20 of the largest AI and social media companies announced the Tech Accord, a voluntary agreement to mitigate the risk of deceptive electoral content that didn't go quite so far as to ban the creation or dissemination of such content. 

"The tech companies want us to believe that they're the ones that should be making these choices and that we should trust them and that their incentives are aligned with us. But that doesn't seem very likely," Daniel Colson, the executive director of the Artificial Intelligence Policy Institute, told TheStreet in February. "It seems like in an important sense, the primary intention of the people building the tech is their own status and wealth."

Related: Deepfake program shows scary and destructive side of AI technology

Microsoft engineer calls on Microsoft, FTC

Entering into this contentious environment of deepfakes, image generation and questions of corporate responsibility is Shane Jones, a principal software engineering manager at Microsoft (MSFT) who has worked at the company for more than six years.

In December, Jones — who has been testing Microsoft and OpenAI's image generators Copilot Designer and Dall-E 3 for months — discovered a security vulnerability that enabled the creation of "disturbing, violent images."

Jones reported the vulnerability to Microsoft and to OpenAI directly, before publishing a public letter on LinkedIn on Dec. 14, urging OpenAI's board to "suspend the availability and use of Dall-E 3."

"In researching the issue, I became aware of the larger public risk Dall-E 3 poses to the mental health of some of our most vulnerable populations including children and those impacted by violence," the letter reads. "It is clear that Dall-E 3 has the capacity to create reprehensible images that reflect the worst of humanity and are a serious public safety risk." 

Lisa Plaggemier, Executive Director at the National Cybersecurity Alliance, told TheStreet in February that she does not believe there is a technical solution to the issue, saying that these models were not built with security in mind. She suggested instead that parents and teachers need to develop a new relationship with technology to ensure the safety and health of their kids. 


Microsoft's legal team demanded that Jones delete the public letter, and he complied.

Microsoft did not respond to a request for comment. 

A month later, Jones contacted his representatives in the U.S. Senate and House, asking the government to look more closely into the risks posed by AI image generators. In the letter, he said that the viral spread of sexually explicit Taylor Swift deepfakes — made using Microsoft's tools — was "not unexpected."

"This is an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove Dall-E 3 from public use and reported my concerns to Microsoft," Jones wrote in the letter, adding that "Microsoft was aware ... of the potential for abuse."

OpenAI did not respond to TheStreet's request for comment. 

On Wednesday, Jones sent a letter to the Federal Trade Commission (FTC) detailing his concerns, urging the FTC to increase educational efforts around AI image generation tools and asking that the agency work with Microsoft to increase transparency efforts in consumer products. 

The FTC confirmed receipt of the letter to TheStreet, but declined to comment. 

CNBC first reported on Jones' Wednesday escalation. 

Jones at the same time sent a letter to Microsoft's Board of Directors, detailing his concerns and asking the company's environmental, social and public policy committee to, among other things, conduct an independent investigation of Microsoft's decisions to market AI products that have "significant public safety risks without disclosing known risks to consumers."

"Over the last three months I have become increasingly concerned about Microsoft's approach to responsible AI," the letter — posted publicly to LinkedIn — reads. 

Jones did not immediately respond to TheStreet's request for comment.

“We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” a Microsoft spokesperson told CNBC. “When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns.”

"I have taken extraordinary efforts to try to raise this issue internally," Jones said. "Despite these efforts, the company has not removed Copilot Designer from public use or added appropriate disclosures on the product." 

Contact Ian with tips and AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: The ethics of artificial intelligence: A path toward responsible AI
