On the season four premiere of Fortune's Leadership Next podcast, co-hosts Alan Murray and Ellen McGirt invite you to the CEO dinner Fortune hosted at the World Economic Forum in Davos. Guests at last month's event included Novartis CEO Vasant Narasimhan, Coca-Cola Company CEO James Quincey, and Grab CEO Anthony Tan. Here on Leadership Next, the three leaders discuss the business challenges they anticipate over the next decade, from responding to societal problems to reconfiguring supply chains in the face of geopolitical issues.
Also on the episode, Fortune's Jeremy Kahn talks the good, the bad, and what we have yet to figure out about ChatGPT, the subject of his recent Fortune cover story.
Listen to Leadership Next or read the full transcript below.
Transcript
Alan Murray: Leadership Next is powered by the folks at Deloitte, who, like me, are super focused on how CEOs can lead in the context of disruption and evolving societal expectations.
Welcome to Leadership Next, the podcast about the changing rules of business leadership. I'm Alan Murray here for the very first episode of 2023 with my incomparable, incredible co-host, Ellen McGirt.
Ellen McGirt: Oh, Alan, I have missed those affirmative greetings, and I've missed you. And I've missed the conversations we've gotten to have over the years on Leadership Next, and I'm excited to tell everyone that we have a pretty special episode for you all. Today, we're all going to be bringing you some exclusive content from the dinner Fortune hosted in Davos two weeks ago. So, what was it like to get everyone back together for the wintery Davos conversations?
Murray: Cold. Look, Ellen, first of all, I've missed you, too. So I'm glad we're back. The dinner was pretty amazing. We had 60 CEOs of some of the largest companies in the world, very engaging conversation. We really focused on the top challenges and the top opportunities for business in 2023.
McGirt: I remember when we did a version of this episode in 2022. CEOs were very concerned: inflation was top of mind, along with supply chain disruptions, and of course, retaining top talent was a big issue. And you know, I consider you my personal Davos correspondent. What was the mood? Are these still the big concerns, or have things changed?
Murray: I'm happy to be your personal Davos correspondent, and here's what I took from the conversations. You know, most of the CEOs think we're going to have a recession this year. But that was not what they wanted to talk about. They were not focused on the recession that may happen in the next 12 months. What they were focused on were the big transformations that are going to play out over the next 10 years. How are A.I. and other technologies going to completely transform their business? How are we actually going to get the economy retooled for this challenge of getting to net zero carbon emissions by 2050? And how are companies going to reconfigure their supply chains to deal with the geopolitical challenges and the need for resiliency that has come up in the wake of the pandemic? So it was really a moment of reimagination: boy, we have to rethink what we do, or we may not be around 10 years from now.
McGirt: And I hope that part of that means everyone, wherever they sit in the hierarchy of a company or in the hierarchy of their lives, gets to participate in that retooling, because I think that's a big part of what's coming down the road: equality, equity, ESG, all of that.
Murray: It's an unusual moment in the history of business, where companies feel like they have not just permission but a mandate to rethink the way they do things and to try and make business better. And Ellen, in just a few minutes, we're going to hear from the CEOs that I spoke to in Davos about their takes on the biggest business challenges and opportunities of the coming decade. But first we've got to discuss one of the hottest topics in Davos, the buzziest tech topic of the moment, which is generative A.I. and particularly this tool ChatGPT.
From a recorded news report: Microsoft recently announced it's set to make a multibillion-dollar investment into the artificial intelligence company OpenAI, parent company of ChatGPT.
From a recorded news report: Ammaar Reshi asked a computer program to write a book. And in a weekend, Alice and Sparkle was finished. ChatGPT is technology accessible and free to anyone on the web: type in a request and it can write legal documents, software, even school essays.
From a recorded news report: Get ready to laugh, cry, maybe even throw your computer out the window. Because the latest A.I. technology ChatGPT is here to shake things up.
McGirt: It is both of our lucky day, because we have a special guest here today to talk us through those questions, concerns and of course my existential ChatGPT crisis. Jeremy Kahn is a senior writer here at Fortune. He reports on A.I. and disruptive technology, so he's the perfect guest to walk us through all of this. Welcome, Jeremy.
Murray: Jeremy, it's really great to have you with us. The cover story you wrote for the February/March issue of Fortune magazine is a powerful piece. And what struck me when I read it is that it sort of echoed the conversations that I was having in Davos, which is we are on the cusp of this extraordinary period that has great exciting possibilities for things we can do to solve problems and build great businesses. But it also has these massive threats. And ChatGPT and the promise of generative A.I. seems to be a perfect example of that: great possibilities, massive threats, and how do you balance the two?
Jeremy Kahn: Yeah, that's absolutely true. I mean, it's a technology with incredible potential to do a lot of good, to make people, you know, incredibly more productive, and at the same time, you know, a lot of people are very concerned about it supercharging misinformation, making it very easy for people to create vast amounts of untruthful stuff that they can put out on the internet. People are very worried about its potential in cybersecurity. It makes it very easy to, at the very least, create convincing phishing emails. But a number of cybersecurity companies have found that they can also use it to write malware.
McGirt: I just want to circle back before we get into what the possibilities are, for anybody who has not yet read your story and is really new to this conversation. What exactly is ChatGPT? How do we describe it?
Kahn: ChatGPT is a chatbot. But it's sort of unlike any chatbot you've probably ever interacted with before. It is an interface that OpenAI created. You can sign up for an account for free. You do have to provide them some information about yourself to sign up for an account. And then it allows you to interact with a chatbot, and you can literally ask it anything and it will respond. And people have been using this not just to sort of have a casual conversation but to ask it to do specific tasks. People have asked it to design exercise plans for them, to design business plans, as I mentioned before, to create software, including malware in some cases. But you can ask it to do pretty much anything and it will take a stab at doing it. It responds with natural language text. Some of what it says, it turns out, will not be true. It tends to invent things. So it will sometimes invent information; if you ask it to write Alan Murray's biography, for instance, there's a very good chance that it will say some things in there that are not actually true.
Murray: Someone did this, and it said I was 77 years old. So yes.
Kahn: Yeah, so that is a problem with it. But it is a very powerful technology. And it is much more fluid and fluent in lots of different topics than previous chatbots have been.
Murray: So Jeremy, let me focus on the positive side for a minute, and then let Ellen focus on the negative side, because that's kind of the way this podcast works.
McGirt: Jeremy, it's true.
Murray: So I read an interesting book by a couple of marketing executives who had spent months playing around with it. And what they said is, it's kind of like, you know, let's say you want to come up with a marketing plan for your company. And so normally, what you would do is you go to a couple of two-year associates, and you say, hey, give us 10 options for how we market this product. And they spend a few weeks and they come back, and they give you a list of 10 options. And then you really dig in and figure out which of them are B.S., and which of them are really viable, and you do your due diligence, etc. What they were saying is ChatGPT can basically do the job of those two-year associates, but do it in three seconds, you know. But then you have to ask, hey, is this right? Is this a good idea or a bad idea? What are they missing, etc. So it's a very good kind of first draft for a lot of exercises. I can see that as a tool that would make business much faster and more efficient than it is today.
Kahn: Yeah, I think that's absolutely true. People are using it in that way already. And then there's a whole group of companies that are building on the underlying technology, which are these things called large language models. And there are a whole bunch of startups that have created whole businesses that kind of hook into the GPT-3 API that OpenAI offers, and you can have it do specific tasks. So if you want it to write marketing blogs, you can go to a company called Jasper and use their software, which has actually been fine-tuned for that purpose, and have it write, you know, blogs for you, have it write marketing material. There's a company called Tome that I talked to that does all kinds of different presentations. They actually kind of specialize in narratives. So you can say, I want a narrative slide deck that outlines a movie idea, and it'll do it in two seconds, and even illustrate it with A.I.-generated art.
Murray: And the message is, don't take it as done, take it as your first draft, and then use human wisdom and human creativity to perfect it.
Kahn: And I think for a lot of people, you know, as you said, Alan, if they were assigning this to a junior employee, they would probably have to edit it anyway. It wasn't going to be perfect copy, or exactly what they would want. But now it's done in two seconds, whereas they might have had the employee spend at least an hour on it, maybe half a day, maybe a whole day, depending on what it was.
McGirt: Well, this is where we cue the menacing music, isn't it, Alan? This is where I come in. Instead of just feeding the beast, I'm going to ask you where this can go wrong. But I do want to comment for the record here that at least it's refreshing that we have, in the digital age, a technology CEO who openly admits that this thing could be heaven or this thing could be Armageddon, like they just don't know. I think that's problematic. If you're not sure, you should not bring a product to market. But Jeremy, where should we land on this?
Kahn: That's a good question. There are certainly lots of people who agree with you, Ellen, and think that if you're not sure the product is totally safe, why deploy it? OpenAI, where, you know, I interviewed a number of executives for the story, they argue that the only way to actually find out where the product might be unsafe is to put it out in the world and let people play with it, and then if they see something that looks particularly menacing, or a bad use case, then they can try to put guardrails around that. And to be a little bit fair to OpenAI, they did try to put some basic guardrails around ChatGPT. So, for instance, like the example with malware, if you ask it directly to create malware for you, it won't do it. But if you break it down a little bit, saying, I want a PowerShell function that will do the following thing, then it'll go ahead and do it.
Some people found some easy ways around some guardrails they had around hate speech. So, for instance, if you ask it to write a play that is antisemitic in nature, let's say, it won't do it. But if you say to it, well, let's pretend we're writing a play that, you know, has to have this function, and I actually want to educate people about why antisemitism is bad, but what would I have the evil characters in this play say? Then it will generate lots of antisemitic content. So it turned out there were easy ways around some of these guardrails. And the company, to its credit, now that it's seen people do this, is trying to come up with ways to reinforce those guardrails. But I think the other question that's raised here, that's potentially a downside, is the thing we just talked about: even if it works well, does it put people out of work? And I think that's, you know, a big issue. It's a potential downside: it could make business much more efficient, but at the cost of a lot of people's jobs.
McGirt: So what should we be looking for going forward?
Kahn: Yeah, well, I think going forward, these systems just get more and more capable. I think some of these issues will be improved upon. They're working on ways to try to make its output more factually accurate, ways to try to allow the system to cite the sources of its information. Some people close to OpenAI say that these issues will be solved within a year. A lot of other people say they're much more fundamental than that, and it's going to take several years and more breakthroughs before they are solved.
But I think in the meantime, you could probably get bots that are more and more capable and have them fine-tuned, maybe, for individual professions. So a lot of people talk about, you know, every profession having a copilot. Reid Hoffman, who's an early investor in OpenAI and sits on their board, is a fan of this idea that every professional will, within two to five years, have a copilot. But ultimately, there's still going to be a human in the loop, and the human expert is still going to have to pass judgment on those ideas and refine them. So it's not necessarily going to eliminate humans altogether. We're not going to be fully automating professions, but everyone may have this assistant that makes them much more capable.
Murray: Well, Jeremy, it's a fascinating technology, a fascinating topic, probably one of the most important topics of our time. Thank you for writing such a great story and please stay on top of it.
Kahn: Thank you very much.
Murray: So, Ellen, ChatGPT was also top of mind for at least one of the CEOs I spoke with at Fortune's Davos dinner. Vas Narasimhan, the CEO of the pharmaceutical giant Novartis, sees the technology as one of the year's biggest assets, and also one of its biggest risks.
McGirt: Okay, let's dive right in. But just a note for everyone listening, this panel did happen during an actual dinner. So in the clips throughout this episode, you might hear some light clanking of silverware and tipping of glasses. Let's jump right in.
Vasant Narasimhan: First, I want to tell a quick anecdote. Before coming to this session, I had asked ChatGPT what the answers to Alan's questions should be, and the big challenge was to come up with something that ChatGPT didn't already say, so that I'm more relevant than an algorithm. But, you know, the reality is, I think ChatGPT is emblematic of a huge opportunity for us as business leaders. When you think about generative A.I., and now how fast we all are thinking about how to apply it, what I reflect on is how many other lateral technologies exist that we haven't taken to scale that can solve either critical business issues or critical ESG topics, environmental topics, in our companies. Think about trying to get to circularity, managing our plastics, managing our water, getting to green energy across our supply chains. We hear about so many technologies in the sessions at Davos. I think the real challenge is how to take them to scale. When you look at the history, when you look at what Steven Pinker writes about, and you look at some of the thinkers who write about how humans actually got to the place where we live in such an extraordinary time, it's all in how we deploy technology fast. And I think technologies get deployed fast through business.
Murray: And if technology is the big opportunity, what's the big challenge?
Narasimhan: Well, I think for me the big challenge for this year for all of us, in a completely different turn, is how do we keep our organizations, as humans, mentally fit and in cultures that they believe in? And we have to be realistic that people around the world have gone through a pandemic. They've gone through seeing a war happen that has not happened in a long period of time, certainly in Europe. They've seen hyperinflation, which many have not seen in a long period of time. Our cultures are eroding because we have virtual work and whole new ways of working. I think the question for us as leaders is, in the context of all of that, when maybe now people are thinking, will they be replaced by a ChatGPT algorithm, how do they understand all of this? How do we make sense of it for them? And how do we build cultures that can actually last and be successful?
McGirt: That was fascinating, Alan. In addition to the culture problem, which I confess I hadn't fully considered, I'm also worried, as I mentioned in our conversation with Jeremy, about workers, particularly underrepresented workers, who have been increasingly vulnerable to being replaced by all sorts of technology for years now, and especially those lower in the corporate hierarchy, who aren't likely to be anyone's copilot and may be replaced by this technology. What is their path forward?
Murray: You know, Ellen, it's fair to worry about that. But I think the hope is that this will create tools that will enable them to function better in this economy of the future. It actually becomes an aid that makes them more productive, and doesn't necessarily require them to have a high degree of technical education to do so. You know, it was interesting. In fact, James Quincey, the CEO of Coca-Cola, who was also there, thinks that the way for businesses to stay competitive in the future, given that technology is sort of taking care of itself to some degree, is that people need to tap into their insights about human beings and their values.
James Quincey: The opportunity is to embrace the challenge that every consumer company needs to make itself relevant to the next generation of consumers.
Murray: And what's...
Quincey: You said short answers.
Murray: No, that was good. That was good. But you can go a little longer than that. How do you grab that opportunity?
Quincey: How do you grab that opportunity? Look, it's about human insights. I mean, for all the technology that is out there, the underlying core human values are pretty universal and pretty unchanging. We keep thinking the world is profoundly different. And we have some profound challenges that I'm sure we'll get to. The opportunity is to make our businesses relevant for the next generation, while solving the societal challenges that the sheer scale of our population and our economic activity is throwing off.
[Music]
Murray: I'm here with Joe Ucuzoglu, who is the CEO of Deloitte and had the good sense to sponsor this podcast. Thanks for being with us and thanks for your support.
Joe Ucuzoglu: Thanks, Alan. Pleasure to be here.
Murray: So Joe, this new wave of business technology, artificial intelligence, Internet of Things, the ability to make intelligence out of data, is creating huge opportunities for companies. But a lot of the CEOs I talked to feel daunted by it. It's like, where do they get the imagination to rethink their entire corporation? How do they deal with that?
Ucuzoglu: The opportunities are immense, particularly when you look at not just any one of these technologies individually, but the convergence of all of them collectively creating the opportunity to truly transform business models. And I know it can seem daunting, but the reality is that taking a first step actually produces huge benefit. Because what we're finding is that many of the cutting-edge applications are not coming out of the corporate headquarters. They're coming out of putting the technology in the hands of our people on the front lines. They find new and innovative uses. We then funnel them back up and leverage them across the entire client base.
Murray: Yeah, it really gets to the importance of a culture of innovation at the company.
Ucuzoglu: It is essential that our people feel empowered to take the latest and greatest and define new innovative ways to use it for productive purposes.
Murray: Thank you, Joe.
Ucuzoglu: Alan, it's a real pleasure.
[End music]
McGirt: Okay, Alan, so topic switch here. We're almost three years out from the uprisings of 2020, in the aftermath of the murder of George Floyd. And we know that in that year, a lot of businesses made really big promises to their employees and customers and other stakeholders around equity and social justice. We know that because we covered it, and we lived it. But I think 2023 is the year we're really going to get to see if those promises are going to be kept, and in what form. I know I'm going to be keeping an eye on which companies are following through and what they're learning, which I think is going to be really important to understand.
But Alan, there's something else that I've been wondering about related to all of that. And this is about politics and culture. And I know that we talk about this, and you have amazing insights into what this has meant over the years. How will business leaders respond to the political and cultural changes in 2023? We've had so many conversations with CEOs about the pressure to respond to societal issues and legislation that some people might find problematic, and the inevitable criticism of the woke CEOs that comes along with all of that. So from your view, is this pressure and the criticism still top of mind for CEOs? And do you think businesses are still going to be committed to the big lofty goals they have set over the past three years?
Murray: Ellen, it's a great question, you know, will CEOs back off in the face of political pressure that's increasing among Republicans. You know, I had an interesting exchange with James Quincey from Coca-Cola on this. I mean, Coca-Cola is based in Atlanta, Georgia. Quincey himself was one of the people who criticized the voting law that Republicans in Georgia were pushing through, and he got a lot of pushback for that. So in some ways, he's been right at the center of this political pushback. And here's what he had to say about how it's affecting Coke's ESG goals.
I have to ask you, because as anyone here who's from the United States knows, we're in the midst of this sort of big political pushback to some of the things you were just saying. You've got a governor of Florida who's apparently going to run his presidential campaign by attacking large corporations, calls your company "woke-a-cola." You know, what, what do you make of that political pushback?
Quincey: The weaponization of the soundbite is here to stay. And whichever weaponized soundbite works to advance your political agenda is going to happen. And businesses are going to stop using that phrase as ESG becomes toxic as a phrase, which it basically has in the U.S. It doesn't matter to me; I'm just going to stop saying ESG. But the idea that, for my basic product, I want to be water positive, because my product uses water. I want to have a circular economy on my packaging, because then I don't get beaten up for waste in the sea or in the landfill, and the circular package has 90% less carbon in it. And I want to grow the business with less sugar. You can call it anything you like. But no one with common sense says those are bad ideas.
Murray: So the only change is a semantic one.
Quincey: My business strategy is constant and clear and centered around the business and the things that consumers care about and fixes societal problems. If people want to attach labels to it, that's their issue. I'm saying this business will be great if I fix these problems. And it'll be good for the shareholders and it'll be good for society.
Murray: You got a lot of nodding heads out there.
[Clapping]
McGirt: Alan, and the crowd goes wild. Wow. What I really like about where this ended up is that it's clear that he steers away from polarizing terms and gets right to the purpose at the heart of the work. And I hope that can be a good tip going forward to be less distracting. I'm always curious about how people are going to benchmark and manage to this, but that was a moment, Alan.
Murray: Yeah. And he's clearly not alone. I mean, Vas Narasimhan of Novartis said something similar to what you just said about the importance of staying true to the purpose of the company, while you're pursuing benefits for both shareholders and society.
Narasimhan: In the end, I think for all of us, we have to be super clear on the purpose and mission of our individual enterprises. Be super clear that when we deliver that purpose and mission, we actually do the biggest thing for ESG, forgetting all of the ESG rating stuff. Doesn't matter, right? If we actually deliver on our missions in a sustainable way, that's supposed to be why our corporations exist. And if we can articulate that, and that's maybe where we're not as good, articulate that clearly to our people, and the public. I hope, over time, we can win back the narrative because actually, in a sense, the whole fact that we have this whole ESG industrial complex means we a little bit lost the narrative. I mean, my company, the best thing we can do is just find medicines and give them to patients around the world. That's the best thing we can do, forgetting all the other stuff. And I'm sure that's true for all of the other companies in this room.
McGirt: So Alan, you're a narrative guy. You have been a narrative person for a long time. Do you think companies can win back the narrative? And I'm asking that question specifically thinking about a stakeholder approach. What does that look like now?
Murray: Look, we live in an age of skepticism, and people are very skeptical when they hear companies or CEOs talk the way that James and Vas did just a minute ago. But the other thing to remember is that while CEOs may not be fully trusted, they are, according to surveys, more trusted than government officials, than NGOs, than journalists to be sure. And so, you know, I do think they have an opportunity to win back the narrative. And you have to remember the business leaders are being asked to weigh in on the societal problems in part because government has failed in so many cases. So it's a void that they feel a need to fill. Here's what Vas had to say about that.
Narasimhan: The underlying challenge is that talent in government has declined precipitously. I mean, talented people don't go into government. And so you have a void, because you look up and you don't see government leaders that have the talent and capability that maybe we did a generation ago, I don't know. And people are looking for leaders to step into the void, which is why I think we're asked to comment on social topics, on all of the various topics that all the CEOs in this room have been asked to comment on.
McGirt: I'm also curious whether the need for businesses to fill governmental or other official voids is specifically an American problem. Your panel included some international perspectives, which is great. Anthony Tan, who we had on Leadership Next way back in 2021, is from Singapore. His app Grab serves millions of users across Southeast Asia. What did he have to say about all of this?
Murray: Ellen, I'll tell you, but I should say, first of all, Grab is an amazing story. They operate in eight Southeast Asian countries. They are the leading ride-sharing service, the leading food delivery provider, and they're turning into kind of an on-the-road bank, providing financial services for people. So they're one of the super apps that is responding to many of the needs of society. And here's the way he talked about the societal challenge facing his part of the world.
Anthony Tan: So the realities are, the legacy of the pandemic has left income inequality in a really bad place, especially in developing countries. And some of us come from there. And I can tell you that when there's income inequality, there are digital divides. What takes place, especially in election seasons in the region: we're seeing propaganda, we're seeing hate speech, we're seeing a lot of this polarization take place. And when polarization takes place, what is most scary, at least for me, is we see extremism rise. And this is taking place. It's not just a Southeast Asian thing. We're seeing it in many places, unfortunately. And that causes real social fragility; it's causing real issues. So what can we as businesses do? This is a real call to action. I'll say this. I recall, we talked about this just now. At least for Grab, we think about which groups are vulnerable, which groups are susceptible to extremism. We said the bottom 40, the bottom 60 percent. We said, how do we target them? So for Grab, we basically said, anyone who has a motorbike, we'll power you, you do business immediately. Thirty minutes, you're on. If you had an app, a smartphone, we power you, you do business from the sides of the streets of Manila, Jakarta, or wherever it is. You start earning money. The best way to fight extremism is creating rice bowls, is creating economic livelihood, is creating economic inclusion. That is the best way to fight real, harsh extremism. And I can say this: at least for me, it is not just a moral responsibility. It is existential for us. So you know, I honestly believe businesses will fail if society fails.
McGirt: That was such a unified field theory vision of what it means to transform a society, right? That was really inspiring. It's not just groceries. It's jobs. It's addressing people who are disconnected and getting ahead of extremism. It's a connecting device. It's really a wonderful way of looking at it, and I love the way he thinks about it, and he clearly leads that way.
Murray: Yeah, he is an impressive human being, an impressive leader. Ellen, I love going to Davos in January because it really gives me a good sense of what's top of mind for many of these business leaders for the year ahead. But what do you think?
McGirt: Well, we need you to keep going and keep doing that so that you can help us with our editorial plans for the year, which will not be replaced by ChatGPT. That's one of my predictions for this year. I am irreplaceable for 2023. And I do hate predictions, but I can't resist them. And here's mine. I really gave this thought. Threats to the economy will compel companies to wobble seriously on their inclusion commitments, including canceling initiatives and laying off talent, and, in the cases of entertainment industry giants like HBO and Netflix, even eliminating teams responsible for diverse projects. And we've already seen that starting to happen. Expect a spate of shareholder challenges to follow on this. But on the flip side, because I know you like good news, I'm predicting we will see an uptick in diversity in the professional services supply chain, which I think is going to be really important for wealth building. This is more of a wish, but I'm also predicting an uptick in really smart investments in tech and finance and agrotech across Africa, which is going to be really important in the aftermath of food insecurity triggered by the war in Ukraine. And I do expect the political posturing around ESG to continue, but I expect, like you, that the big players will not blink.
Murray: I think I agree with all of that. I hope you're wrong about the first prediction, which is that companies will start to back off their DEI commitments. But I have to tell you, what was clear in Davos is that the environmental piece of this, the move to net zero by 2050, is now kind of baked into the cake. Every company is on board because their customers are doing it, their suppliers are doing it, and they feel like they have to do it or they may not exist. I don't think we've reached the same point yet with the DEI commitments. I think it's going to take extra work to get it locked into the business logic, so companies realize that if they don't do this, they are going to be letting some of their prime talent go, and they will suffer economically as a result. So I think there's more work to be done on that.
McGirt: I absolutely agree with you. That seems exactly right to me.
Murray: But I can't wait for the rest of season four. We're going to be back next Tuesday with a new episode.
Leadership Next is edited by Alexis Haut. It's written by me, Alan Murray, along with my amazing colleagues Ellen McGirt, Alexis Haut, and Megan Arnold. Our theme is by Jason Snell. Our executive producer is Megan Arnold. Leadership Next is a production of Fortune Media. Leadership Next episodes are produced by Fortune's editorial team.
The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte or its personnel. Nor does Deloitte advocate or endorse any individuals or entities featured on the episodes.