The Street
Ross Kohan

How to keep AI on a leash and defend against the biggest threats

As AI continues to make progress and becomes more integrated in business, Cisco (CSCO) is implementing measures to make sure the technology can be used safely.

Transcript: 

Conway Gittens: So tell me, as AI continues to make progress and becomes more integrated in business, how can companies make sure the technology they're using is safe?

Jeetu Patel: Yeah, that's a great question, Conway. The way that we think about it, there are going to be two classes of companies in the world as we move forward. There are going to be ones that are great at the use of AI, and then ones that are going to struggle for relevance. And the great ones, what we're finding is they want to move fast, but they oftentimes get held back because of safety and security. And so that's an area that really needs to get focused on, because by definition, the models that AI applications are built on tend to be non-deterministic, and they tend to be rather unpredictable. And so you need to make sure that you've got the right level of safety and security guardrails so that they do, in fact, behave the way that we want them to behave.

Related: Experts sound the alarm on controversial company’s new AI model

Conway Gittens: So what is your number one safety concern when using artificial intelligence in a business setting?

Jeetu Patel: If you think about the big areas of concern that organizations have, on the safety side you might have things like toxicity or prompt injection attacks, where the behavior of the model is not quite what you want it to be. So that's what we need to make sure there are guardrails for, so these models, which are inherently unpredictable, can behave in a way that's far more predictable for the context of the application.

Conway Gittens: So how do you combat those unpredictable models?

Jeetu Patel: So that's exactly where Cisco comes in. We just launched a product called AI Defense. And AI Defense is essentially a common safety and security solution for the market. Because if you think about it, we're going to be living in a multi-model world. You'll have many, many models that applications are built on. And what we want to do is make sure that there's a common layer or substrate of security across all of these different models, across all the clouds and across all applications. And so what we do is provide the enforcement of guardrails for both the model itself, as well as any external attacks that might happen on the model from a threat actor. We want to make sure that both the safety concerns of the model behaving the way you want it to behave, and the security attacks that might happen on the model to change its behavior, can be compensated for. And that's what AI Defense does: it allows organizations to innovate fearlessly, where they don't have to worry about safety and security, because we can take care of that for them.

Conway Gittens: One of the things that comes to my mind, of course, is the movies that we've seen in Hollywood about AI and how it could take over. So what are the guardrails that are put in place to avoid a kind of doomsday situation, where companies integrate some kind of AI in order to protect themselves against AI, and then they lose control?

Jeetu Patel: So let me take a step back, because what's happening right now is the composition of our workforce is going to change quite a bit. Today, 100% of our workforce is human. Tomorrow you're going to have an augmenting of that workforce with AI agents. You might have robots, you might have humanoids. And we need to make sure that these different AI augmentations can actually work the way that we want them to work. So what we do is, before a model goes to production for a specific application, let's say it's a loan processing application, we want to make sure that model is behaving exactly the way that you want it to behave. So we have an algorithmic way of going out and doing a level of validation on the model to make sure of that. And typically for an organization, Conway, it takes 7 to 10 weeks to go out and validate a model. With AI Defense, you can now do it within 30 seconds. And so that level of compression of time, and not having to worry about the details, makes a huge difference in not just the velocity, but also the safety and security, where you can enforce guardrails on this, so that if there's a model that's behaving in a different way, you can actually provide a compensating control on it so that it behaves the way that it should.

