Get all your news in one place.
100’s of premium titles.
One app.
Start reading
Fortune
Fortune
Sharon Goldman

Exclusive: Lakera snags $20 million to prevent business Gen AI apps from going haywire and revealing sensitive data

(Credit: Photo courtesy of Lakera)

It’s the potential nightmare that haunts Fortune 500 company leaders working to develop chatbots and other generative AI applications: Hackers figure out how to trick their AI into revealing sensitive corporate or customer data.

Lakera, a startup based in Zurich, Switzerland, announced today it has raised $20 million to help those leaders sleep peacefully. European VC Atomico led the funding round, with participation from Citi Ventures, Dropbox Ventures, and existing investors including Redalpine, bringing Lakera's total funding to $30 million. The company did not disclose its valuation in the latest fundraising.

Lakera's platform, which is used by Dropbox, Citi, and a number of Fortune 100 tech and finance companies, lets companies set their own guardrails and boundaries around how a generative AI application can respond to prompts featuring text, images, or video. The technology is supposed to protect against the most widely used method of hacking into generative AI models, known as “prompt injection attacks,” in which hackers manipulate generative AI to access a company’s systems, steal confidential data, take unauthorized actions, and generate harmful content. 

Most Fortune 500 companies hope to put generative AI to work over the next two years, said Lakera CEO David Haber. Those businesses typically use off-the-shelf models like the one powering OpenAI's ChatGPT. Then, they build applications on top of that model—a customer service chatbot, for example, or a research assistant—that is connected to a company’s sensitive data and integrated into business-critical functions. Safety and security must therefore be a top priority. 

“Existing security teams are facing completely new challenges in securing these Gen AI applications,” Haber said. “We are processing everything that goes in and everything that comes out, and what we ultimately make sure is that these highly-capable generative AI applications do not take any unintended actions.” He added that Lakera's platform is built on the company’s own internal AI models—not off-the-shelf options. “You can't be using ChatGPT to secure ChatGPT—terrible idea.” 

But the most important thing, Haber emphasized, is that customers can specify the context of what the Gen AI applications can and can’t do, and assess any possible security issues, in real time. Customers can also implement concrete policies around what a chatbot can talk about, he said. For example, a company might not want it to discuss competitors or reveal any financial data. 

Haber said Lakera has one unique advantage in tracking AI threats: Gandalf, its online AI security game that has millions of users worldwide, including Microsoft (which uses it for security training). As users test their prompt injection skills with Gandalf’s AI ‘jailbreaking’ game, the tool generates a real-time database of AI threats, which the company says is growing by tens of thousands of “uniquely new attacks every day,” and helps keep Lakera’s software up to date. 

Lakera plays in a competitive Gen AI security landscape along with other startups like HackerOne and BugCrowd. But Matt Carbonara, of Citi Ventures, said the Lakera team “has the background to build and evolve this product the market needs,” adding that he liked its focus on prompt injection attacks.  

“When you have new attack surfaces, you need new countermeasures,” he said. “The prompt injection attack approach is the first place people will be focused.” 

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.