Fortune
Sharon Goldman

Canva plans to acquire an AI startup criticized for generating deepfake porn. Its systems are ‘tightened,’ but can it avoid controversy?

(Credit: Jaap Arriens/NurPhoto via Getty Images)

Hello and welcome to Eye on AI!

Canva, the Australian graphic design platform with over 190 million monthly users, announced Monday that it plans to acquire Leonardo.AI, a popular Australian image generation startup that has raised nearly $39 million in funding, and that it will integrate the startup’s models into Canva’s generative AI tools.

Upon hearing the news, I immediately recalled that the startup had gotten some unwanted attention back in March, after 404 Media reported on its lack of guardrails against users generating nonconsensual deepfake porn—and compared it to Civitai, a similarly criticized image generation community backed by Andreessen Horowitz.

I asked Canva cofounder Cameron Adams about that in an interview yesterday. He responded by saying that Leonardo has “definitely done a ton of work to tighten up their systems and has a stronger focus on trust and safety.” 

I’m sure he’s right about that—in fact, Reddit threads are filled with Leonardo users complaining that the filters are now too restrictive. “Your Content Filters Are Ridiculous!” wrote one Reddit user last week, bemoaning the fact that seemingly innocuous phrases like “Black speckled markings on his lips and nose” were blocked.

Balancing creativity and safety is an issue that is always evolving, said Adams. “You need to keep up with all the types of content that people are going to try and create,” he explained. “You need to be constantly monitoring them, adjusting them, and making sure that they meet your values.”

But is Leonardo’s evolution—which also includes its own recently released AI model, called Phoenix—enough to put its unfiltered past to rest? And can it avoid future controversies, such as copyright lawsuits over how its models were trained? To be clear, those questions aren’t just for Leonardo—they also apply to other AI startups rumored to be seeking buyers, such as Character AI, founded by a prominent former Google researcher, and Stability AI, whose Stable Diffusion model Leonardo used to launch its platform and which has been challenged in several copyright-focused lawsuits.

Still, it’s interesting to see a company like Leonardo pivot towards an acquisition by the increasingly B2B-focused Canva, which offers brands the opportunity to create assets for marketing and advertising campaigns. Canva, which was founded in 2013, was also early to the generative AI game, launching its AI-generated Magic Write tool in December 2022, just weeks after OpenAI’s ChatGPT launched. 

But in many ways, the deal is a good fit for the two Australian companies: Leonardo, which boasts a community of over 19 million users (Canva said it would continue to offer Leonardo as a standalone tool), also targets creative professionals and teams looking to create graphic design, concept art, marketing, and fashion imagery. One of its key features is the ability to train small models on its platform using specific data—a set of photos, for example—so that generated images consistently feature the same character.

Adams mostly waved away concerns about Leonardo’s past controversies. The startup’s recently released Phoenix model was trained on “publicly available” data and open data from Creative Commons, he said—though he offered no proof that this excludes copyrighted data scraped from the web.

In any case, Canva’s enterprise clients don’t have to worry about those issues, because Canva has long offered indemnification for those customers. Meanwhile, Canva’s strict Terms of Service place liability for any problems with image output on its customers. And in general, as more large companies integrate AI models into their platforms and tools, issues such as copyright are abstracted further and further from the original training data—something only future legal rulings and regulations will be able to tackle.

There is no doubt, though, that text and image models have proved fairly easy to hack with the right prompts—and keeping them fully protected will remain difficult. One popular hacker, for example, who goes by Pliny the Prompter on X, has posted about quickly generating deepfakes and NSFW (Not Safe For Work) content on image generators like Midjourney and Stability AI’s Stable Diffusion. If Phoenix is anything like the others, he told Fortune, it would not be hard to get it to output deepfake and pornographic images, though he admitted that image models can be more difficult to “jailbreak” than text models. “There’s a whole lot of randomness, so it can take a bit of luck and a lot of retries,” he said.

But he did not buy the claim that Leonardo’s tightened filters could keep hackers from getting around its guardrails. “If I had a nickel for every time I’ve heard that,” he said.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
