Fortune
John Kell

AI is about to face many more legal risks. Here’s how businesses can prepare

[Illustration of one working lightbulb and one broken lightbulb with a copyright symbol. Credit: Nick Little]

When clients approach Bennett Borden to ask questions about generative artificial intelligence and the impact on their businesses, the chief data scientist at law firm DLA Piper turns to fiction. He believes AI is more like Iron Man than the Terminator. 

For those not attuned to science fiction, Iron Man is the high-tech suit of armor that helps its inventor, billionaire Tony Stark, save the world in the Marvel universe. In the original Terminator film, an AI-powered assassin is sent back in time to murder the heroine, Sarah Connor. The two represent the range of emotions AI can evoke, from exhilaration about the technology’s potential to fear of how it could harm humanity.

Borden says humans and AI can complement each other by pairing different strengths. People are better at judgment, inference, compassion, and persuasion, while AI is really good at collecting, summarizing, and analyzing information. 

“Is AI going to replace lawyers?” Borden asks. “No, but lawyers who use AI are going to replace lawyers who don’t.” 

Humans are worried. Nearly four out of 10 U.S. employees say they are concerned that AI might take some, or all, of their job duties in the future, according to a survey by the American Psychological Association.

These fears are leading to litigation. More than 100 AI-related lawsuits are working their way through the legal system, experts say, covering issues from intellectual property to the amplification of dangerous content to discrimination. The pace of AI litigation is expected to accelerate in the years to come.

Pending cases include copyright infringement lawsuits by scores of writers against OpenAI and lawsuits by a group of visual artists against AI companies including Stability AI, Midjourney, and DeviantArt. Legal experts say intellectual property litigation is just the first wave of AI litigation to reach the courts; cases centered on data accuracy, safety, and employment tools are likely next. This spring, a joint statement from the U.S. Equal Employment Opportunity Commission and other agencies warned that public and private organizations would be expected to use AI responsibly in employment-related decisions.

“If you think about how these models have been trained, they’ve been trained on trillions of examples of how human beings have expressed themselves in text in the case of large language models and whatever has been digitalized over the last 60 years,” says Borden. “Well, some of that stuff is factually accurate, fair, balanced, and nontoxic. And some of it just isn’t.”

Companies, multinationals in particular, will have to navigate an uncertain AI regulatory landscape for at least a few more years. Sweeping federal AI legislation is unlikely until after the 2024 presidential election, putting pressure on states, federal agencies, and regulators to fill the gap. The European Union’s AI Act, meanwhile, is a risk-based proposal that has faced criticism from major employers including Heineken and Airbus.

“If you can understand one sentence of it, you will understand one more sentence than I, and I think you will understand one more sentence than the people who wrote it,” says Tom Siebel, CEO and founder of enterprise AI company C3.ai.

Regardless, the EU’s AI Act likely won’t go into effect until 2025 at the earliest.

“We’re more likely to get guidance from the courts than we are for a big pronouncement from a legislator,” says Jordan Jaffe, a partner at Wilson Sonsini Goodrich & Rosati’s San Francisco office.

“It is a hot mess,” says Brad Newman, a litigation partner resident in Baker McKenzie’s Palo Alto, Calif., office. “What I’m a big advocate for is a rational, pro-innovation, pro-protection omnibus federal AI legislation. I think Congress is starting to get there.”

States and cities across America are already starting to act, which adds complexity for any business operating nationwide. An Illinois law requires job candidates to be informed if AI is used to analyze their interviews; New York City has a similar law on the books. “The state to watch in particular is California,” says Duane Pozza, partner at law firm Wiley Rein LLP. Under the Golden State’s privacy laws, the state can create rules that regulate automated decision-making.

On October 30, the Biden administration announced a sweeping executive order that set new standards for AI safety and security. The order laid out guidance on privacy, equity, and principles to support workers, and promoted U.S. competitiveness in AI innovation. “The White House has helped to secure voluntary commitments from a number of industry players around the way AI is managed,” says Pozza.

“I think overregulation is as bad as underregulation,” says Danny Tobey, chair of DLA Piper’s AI practice. “Companies should be really focused on the differences between generative AI and traditional AI.”

Traditional AI, such as predictive models, is easier to corral and serves narrower purposes, while generative AI can do just about anything and exposes companies to greater risks. In the absence of clear regulatory guidance, and with litigation still working its way through the courts, companies are left to their own devices to sort out how to protect their reputations while also steering billions of dollars toward AI investments.

“There has to be C-suite responsibility for material use of AI in the corporation,” says Newman. He advises companies to designate a chief AI officer who understands how AI is being used, to set a policy that is both transparent and lawful, and not to simply delegate AI oversight to data scientists.

Newman says companies must also test constantly, both before deployment and after release, to make sure the AI is fair, does not violate privacy laws, and is not discriminatory. Firms should also notify employees and customers when AI is being used.

Generative AI, which is used more as a creative tool, presents particular risks around privacy, trade secrets, intellectual property, and quality.

“You can imagine a situation where you want to use generative AI and it’s not really being rigorously reviewed by a human being before it goes out,” says Jaffe. “You want to have safeguards around that.” 

Pozza says businesses must have clear policies for how generative AI is used. “The importance of a policy is in large part because employees and contractors want to use this technology for new and interesting things, but management needs to have a handle on what it’s being used for, so that they can run the sort of legal and compliance risks down,” says Pozza. “Anybody who’s involved in the life cycle of AI has to be aware that there could be potential liability from using it.”

Leaders, including a chief privacy officer or chief information officer, should enact firm AI governance, while other committees or internal structures should be in place to vet new uses of AI and manage compliance, experts say. The board and CEO also have a critical role to play. 

Some companies are trying to integrate AI into existing governance structures, though experts warn generative AI is a vast technological advancement that requires unique governance frameworks.

One popular AI risk management framework was released at the beginning of the year by the National Institute of Standards and Technology. It calls for weighing both the opportunities and the risks AI presents, using a risk matrix that rates deployed AI as low, medium, or high risk. AI that comes closer to touching decisions about customers and employees would be deemed a higher-risk use case.

And while legal experts naturally fret about AI’s risks, many say companies and regulators shouldn’t lose sight of the ways AI can improve lives, when used ethically and responsibly. 

“It is very important to me that lawyers are not seen as impediments to AI,” says Tobey. “Our view is very much that legal should be seen as business enablement.”

His colleague at DLA Piper, Borden, agrees. “This is like the second industrial revolution, where companies that understood the transformative effect of the new technology and really leaned into it, those are the companies that took off and became the rulers of the world.”
