Arnab Chakraborty, a senior managing director at consulting giant Accenture, recently recalled a conversation he had with a chief financial officer at a multinational restaurant chain. The executive admitted that leadership still wasn't fully aware of all the ways artificial intelligence was being used by its workforce.
“For them to scale AI, and generative AI more importantly, across the enterprise, we need to address these risks,” says Chakraborty. Lately, his conversations with chief executives, board members, and chief security officers focus on understanding the risks of AI and figuring out how to comply with evolving regulations.
The gap in AI awareness at that restaurant company may help explain why, over the past five years, 17 states have enacted 29 bills regulating AI. Of those, 12 focused on ensuring data privacy and accountability. These laws generally aim to protect consumers from the unauthorized use of their data and to give them clear insight into how AI systems collect and use the data they willingly share. Hundreds more proposed bills target deepfakes, discrimination, or how the tools can be used in hiring.
“Name your favorite state,” says Intuit chief privacy officer Elise Houlik. “I bet they have an AI regulation proposed.”
Millions of people use Intuit’s financial products, including TurboTax, QuickBooks, and Credit Karma, giving the company direct access to a wealth of highly sensitive data. “It is a powerful responsibility,” says Houlik. “When someone makes the decision to give us their data, we have to take that super seriously.”
With new generative AI capabilities emerging, Intuit takes a cross-functional approach to product development, drawing on perspectives from technologists, security professionals, and compliance and legal teams.
“There has to be that balancing act between the commercial value of the technology and doing it the right way,” says Houlik.
“I think the legislative surface is becoming more and more stringent,” says Vijay Sankaran, chief technology officer at Johnson Controls, whose industrial products help make buildings more sustainable. “We try to be very proactive about that.”
Johnson Controls has put “significant” controls around its data to protect against different aspects of risk. It follows the National Institute of Standards and Technology, or NIST, framework for managing cybersecurity risks. The company also practices DevSecOps, short for “development, security, and operations,” which integrates security practices into every phase of the software development life cycle, from design to testing to deployment.
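In practice, DevSecOps means automating security checks inside the build pipeline itself rather than bolting them on at the end. The sketch below is a minimal, hypothetical illustration of one such gate, a dependency check that fails a build when a known-vulnerable package is found; the package names and advisory entry are invented for the example and are not drawn from Johnson Controls’ actual tooling.

```python
# Hypothetical DevSecOps-style gate: block the build if any dependency
# matches a known-vulnerable (name, version) pair. The entries below are
# placeholders invented for illustration.

KNOWN_VULNERABLE = {
    ("acme-http", "1.2.0"): "EXAMPLE-ADVISORY-0001 (placeholder)",
}

def security_gate(dependencies: dict[str, str]) -> list[str]:
    """Return a list of findings; an empty list means the gate passes."""
    findings = []
    for name, version in dependencies.items():
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings

if __name__ == "__main__":
    # In a real pipeline this dictionary would be parsed from a lockfile.
    deps = {"acme-http": "1.2.0", "acme-json": "2.0.1"}
    problems = security_gate(deps)
    if problems:
        raise SystemExit("Security gate failed:\n" + "\n".join(problems))
    print("Security gate passed")
```

A check like this would typically run on every commit, so a vulnerable dependency is caught at the design-and-build stage rather than after deployment.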
In 2022, Johnson Controls launched a privacy center to proactively share with customers what data it used, how it kept that data secure, and the frameworks it followed to stay compliant with applicable laws in every jurisdiction.
Johnson Controls also set up an AI council that established principles around data security and explained the company’s use of AI in a digestible manner. “I think it is really important for us to make sure that our customers have confidence in the types of AI models that we're producing,” says Sankaran.
Each year, Accenture helps clients work through roughly 50,000 AI projects. Before any project begins, it goes through an assessment process that determines its level of risk. For example, a client might be developing an AI loan origination system whose models could introduce bias based on age, race, or gender and thus affect final loan decisions.
Because such a project is deemed high-risk at the start, “it will then initiate a whole series of testing that has to happen in terms of what datasets we are actually looking to use for this particular solution,” says Chakraborty. That testing could help the client avoid running afoul of emerging laws regulating bias in AI; lawmakers in at least seven states are considering proposals on the issue.
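To make that kind of testing concrete: one common screen compares a model’s approval rates across protected groups. The sketch below is purely illustrative; the data, group labels, and the use of a “four-fifths” threshold are assumptions for the example, not a description of Accenture’s actual test suite.

```python
# Illustrative bias screen: compare approval rates across groups
# (demographic parity) and flag disparate impact using a four-fifths-style
# threshold. All data below is invented for the example.

from collections import defaultdict

def approval_rates(records: list[dict]) -> dict[str, float]:
    """records: each has a 'group' label and a boolean 'approved' decision."""
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += r["approved"]
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag disparate impact if any group's approval rate falls below
    `threshold` times the highest group's rate."""
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates = approval_rates(decisions)
print(rates, "passes:", passes_four_fifths(rates))
# Group A approves at ~0.67, group B at ~0.33, so this toy dataset fails
# the screen and would trigger further review of the training data.
```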
When Jeremy Barnes thinks about data privacy at cloud-based software provider ServiceNow, he says consistency is key.
“Our principles, because we've been thinking about them for a long time for the most part, they stayed fairly stable,” says Barnes, ServiceNow’s vice president of AI product and platform. That steadiness helps employees stay educated on keeping the data they touch safe, especially where it intersects with artificial intelligence.
But what continues to evolve is how ServiceNow applies those principles. How do they turn them into policies employees can follow? What are the standard operating procedures for customer data? When a product is built, what are the proper AI guidelines and data privacy requirements that need to be considered?
“There is no one solution you can turn on, tick the box, and it’s done. You have to be able to adapt,” says Barnes. “The world is changing so fast with AI. Anyone who says they know the full solution is probably not looking in the right places.”
Data breaches like the ongoing saga at UnitedHealth Group highlight the elevated risks companies face, especially as AI hands bad actors new tools for creating deepfake images, audio, and video that can more easily trip up employees. A survey from consulting firm Ernst & Young found that one-third of workers worry they may be responsible for causing a cybersecurity breach, with Gen Z and millennial workers less likely than older colleagues to feel equipped to respond to those threats.
“You have to have a level of adaptability,” says Ibrahim Gokcen, chief data and analytics officer at management consulting firm Aon. “How do you then quickly adapt the organization—the right training and upskilling of the employees—because ultimately, a lot of these data protection, data privacy, and cyber-related things sit with an individual employee? The weakest point is usually an individual employee.”
Aon closely monitors data protection laws that are emerging in various markets, including the United Kingdom, Canada, and the state of California. As both a practitioner of AI and a consumer of Microsoft’s enterprise offerings, Aon says it acts as a “fast follower” on the technology. “That gives us ways of understanding the regulations and implications and making sure once they are law that we comply with them, because we have the right processes and infrastructure to operate in that environment,” says Gokcen.