Good morning,
“I think finance is in danger of becoming real laggards in the area of A.I. and automation and even traditional analytics,” said Tom Davenport, Babson College professor and coauthor of the book All-in On A.I. For example, a survey he was once involved with found that “HR people were well ahead of finance in terms of using predictive analytics and machine learning,” he said.
During Fortune’s Emerging CFO virtual event in partnership with Workday (a sponsor of CFO Daily) on Wednesday, Davenport was joined by Katie Rooney, CFO at Alight, and Vanessa Kanu, CFO at TELUS International. The leaders talked tech and the human element.
Regarding tech in finance, “I think the bread and butter is going to be those traditional technologies like machine learning, predictive analytics, and RPA [robotic process automation],” said Davenport, who is also a fellow of the MIT Initiative for the Digital Economy and a senior advisor to Deloitte Analytics.
He includes RPA in the A.I. category, although “some people don’t,” he said. “But it uses rule-based decision-making to gather information from various systems and make a decision on it, and then take an action. [It’s] all automated and relatively easy to do. Doesn’t require a lot of investment.” Davenport agreed that RPA could be used in finance wherever the work is repetitive. But “I don’t think there’s been much evidence of large-scale automation from RPA yet,” he said.
Generative A.I. has potential use cases in finance, Davenport said. However, “right now, the only obvious applications of generative tools are for very analysis-oriented companies like hedge funds and investment organizations that do tons of spreadsheets,” he said. “Generative A.I. systems can create spreadsheet formulas just by telling it what you want. So, that can save a huge amount of labor-intensive work and I think ultimately require fewer lower-level analysts in these organizations.”
Data and accountability
"A.I. is only as good, and the insights are only as good, as the underlying data,” said Rooney of Alight, a provider of benefits administration and cloud-based HR and financial solutions. “Our first focus has really been on streamlining the data infrastructure.” All of the company’s systems, including finance and HR, are on Workday, she said.
“We leverage payroll, health, and wealth data across 36 million people,” Rooney explained. The “power of A.I.” can inform customers where they’re going to get the best service and quality within their health plan based on location or claims history, for example, she said. “But there’s still that human element in how we help people make decisions,” Rooney said. If it’s a complex issue, they may want to speak with a doctor, she said.
Alight’s finance organization has used predictive models to do scenario planning around the risk of potential recessionary environments, Rooney said. “But at the end of the day, if I look at those models and think, I want to look at a downside case, and it tells me to take a certain action, there’s still a human element,” she explained. “There are cultural implications. There are customer implications.”
“I fully agree that for anything that requires ethics, somebody’s got to make the decision at the end of the day,” Kanu said. “So, I don’t believe A.I. will ever disintermediate the need for smart humans.”
Kanu said that at TELUS, automation, such as using RPA to free employees from repetitive tasks, is a key part of the company’s strategy. So, getting “consistently defined clean data” across all aspects of the organization is vital, she said. Another example of tech used at the company is the metaverse. “The ability to do recruiting in the metaverse has been huge for us,” Kanu said. “It has allowed us to actually cut down the time from application to onboarding by well over half.”
One area that can’t be automated? Accountability. “We still need to hold someone accountable for financial performance, the CFO and the external auditors and so on,” Davenport said. “So, reviewing everything that the automated systems have come up with and saying, ‘Yes, that makes sense’—I think we’re never going to ask an A.I. system to do that because we can’t sue it or put it in jail if it screws up.”
Sheryl Estrada
sheryl.estrada@fortune.com