Fortune
Sharon Goldman

Reid Hoffman’s new book touts humanity’s ‘superagency’ over AI—but he says DeepSeek proves why the U.S. must stay ahead


LinkedIn cofounder, PayPal mafia member, and early OpenAI investor Reid Hoffman’s new book about AI, Superagency, carries a subtitle that highlights his optimistic view: What Could Possibly Go Right With Our AI Future.

In a conversation with Fortune, Hoffman—who earned a master’s degree in philosophy at Oxford University—shared his views on some of the thorny societal questions surrounding AI, as well as his own experiences as a founder, and funder, of AI startups. 

And at a moment when surprise developments like China’s DeepSeek have upended some common assumptions about AI, Hoffman stressed that the U.S. must double down on staying ahead in the AI race. 

Fortune: Your new book is called Superagency, with an optimistic message around AI. It seems like a purposeful counterpoint to what is often described as a “doomer” mindset. You call yourself a “bloomer.” 

Hoffman: I hold very deeply the conviction that people should have a curious and mostly optimistic view about what happens in the creation of technology, versus a fearful, pessimistic view. 

I was thinking about what kinds of things lead to fear and pessimism about AI—the idea that Terminator robots may be coming for our lives, robots are taking away our jobs, tech companies are taking away our privacy. All of this centers on the notion of human agency. 

Every time we have general purpose technology—even back to the printing press and the car and the power loom and the smartphone—we always have this discussion. It doesn’t mean there aren’t reasons to take this discussion seriously. But we’ve actually grown our agency a lot. The agency to ride my horse down the street gets diminished, but the agency to go further and have a car gets increased.

You say in Superagency that we can’t avoid the pain, though. That doesn’t sound good. 

I worry that we fumble it in bad ways. The transition is going to be painful. If you look at any of these general purpose technologies, it always is. So figuring out how to navigate the transition with as much grace, as little pain, as much opportunity as possible is important, but it’s not going to be possible to do without stress and difficulty.

But in the book you say that while as individuals we might be able to avoid the pain, as a society we cannot. That’s where some people might push back and say humanity gets to decide how much pain they want to take on as a society. What do you say to that?

In a sense, yes, we all as human beings decide X. But I’ve never seen it happen. What happens is we have a bunch of different other human rights—the right to build a new product or service and offer it to people, and then the right for people to adopt that product or service. If I’m saying, “Here, have cigarettes or opioids,” it’s more challenging, but broadly, the market can choose. 

The notion of “we all decide” doesn’t work that way. Unless large swaths of people within multiple industries and multiple systems of government say no, this stuff is going to end up being built. Part of my message is that it’s going to be built in good ways, and we can steer it to even better ways.

[Photo: Sam Altman, chief executive officer and co-founder of Loopt, left, speaks with Reid Hoffman, chairman and co-founder of LinkedIn, as they arrive at the annual Allen & Co. Media and Technology Conference in Sun Valley, Idaho, July 7, 2009. Matthew Staver/Bloomberg via Getty Images]

What do you think about all the buzz surrounding DeepSeek? Are you concerned about China developing such a sophisticated AI model to rival OpenAI’s o1? Do you think open-sourcing such a model is a good thing? 

DeepSeek demonstrates how immediate and strong the competitive talent from China is and why it’s crucial for America to continue to be at the forefront of AI development. There are many misunderstandings, of course, surrounding DeepSeek. One of the key things to know is that, to the best of my understanding, DeepSeek required access to ChatGPT to train the way it did. This is a pattern we will see intensely in the future—the large models will be deployable to train really effective smaller models.

Open-sourcing models has a bunch of benefits for research institutions and other kinds of development, so that’s good. The primary concern is whether a model is easily deployable and retrainable in ways that enable rogue states, terrorists, or criminals. And as yet, there’s an absence of testing data.

Are you disappointed in how things played out with Inflection, as far as Microsoft hiring Mustafa Suleyman and the deal that was made with them? Are you still very optimistic there? 

The original thesis of Inflection was that you could create the trusted agent. That is, it’s as important to have high EQ as it is high IQ to be a kind of trusted companion. We think we’re right about that thesis. And then we concluded, as we were training these large-scale models, that startups would not be able to keep doing so, at least in the space we were in. 

So we said, “Okay, the team that wants to continue to try to create those large-scale models, they should go with Microsoft. Then we can take the technology and use it.” It’s disappointing that we weren’t able to realize the first vision, but that was partially market realities. This happens a lot in startup plans. 

But in this AI landscape, when ChatGPT first came out, I don’t think it was clear that startups wouldn’t be able to take on the biggest visions. Are you surprised by that? 

There’s a reason why we have this term “pivot.” PayPal started as encryption on mobile devices, then turned to cash on Palm Pilots, then turned to cash on Palm Pilots synced with an email payment service, then turned into an email payment service and got rid of the Palm Pilot. 

Speaking of massive pivots—as one of the earliest investors in OpenAI as a nonprofit, what are your thoughts on its efforts to become a for-profit? 

We started OpenAI as a nonprofit because the mission—which still remains true for the organization—was “how do you maximally benefit humanity with AI?” Then what OpenAI discovered is it needed massive capital. That capital was not available philanthropically. So what you’ve been seeing is a set of micro pivots. 

The first micro pivot was a commercialization LP deal, then a for-profit subsidiary that’s run by the nonprofit. Now, we still need even more capital. So let’s change to a public benefit corporation, which is the process happening right now. In each of these cases, OpenAI is doing its best to keep the governance centered around this AGI-for-humanity mission, while realizing the capital and talent and deals that are necessary to do that. 

I think their plan is to have the public benefit corporation, that’s already essentially controlled by the nonprofit, give a bunch of equity to the nonprofit and then be more independent as a public benefit corporation, versus controlled by the 501(c)(3), so that you can raise a lot more capital in that new structure.

[Photo: Reid Hoffman, chairman and co-founder of LinkedIn, second from left, and Jeffrey Weiner, chief executive officer of LinkedIn, center, applaud during the opening bell ceremony at the New York Stock Exchange on May 19, 2011, when LinkedIn more than doubled in its first day of trading after its initial public offering. Michael Nagle/Bloomberg via Getty Images]

But this is no easy task. Folks like Mark Zuckerberg are writing letters to the California attorney general saying, “Hold on. It sounds like other companies could just start as nonprofits, and then become for-profits and benefit from the nonprofit structure early on.”

If you asked the OpenAI people, “Would you rather have started as a for-profit and not had, like, literally tens of thousands of hours of difficulties as a nonprofit?”

So you’re saying that no company would really want to start as a nonprofit and then use all that notoriety and research for profit.

No, the only real structural advantage you have with a nonprofit is the taxes. You’re not paying taxes. But by the way, of course, OpenAI has net operating losses. They’re not gonna be paying taxes for years. They’ve never paid any taxes. 

I, as a founder or a funder, can contribute money with a tax deduction, but I also don’t get shares for it. That’s part of the reason why Elon [Musk] is grumpy, because he’s like, “I want shares,” right? Well, that’s not the deal that you get with a 501(c)(3). 

So you, as an investor in the nonprofit, you just got a tax deduction. You didn’t get any shares, right? 

Yes, exactly. Literally, if we had a lens on the future, we would have started OpenAI as for-profit. It’s a far simpler, far better way to do it now. [A spokesperson noted that Hoffman later invested in OpenAI’s for-profit subsidiary through his nonprofit foundation. Any upside from the shares owned by his foundation will go to supporting his philanthropic work.]

What is your opinion on the new Stargate Project? I don’t really understand how OpenAI can afford to chip in $18 billion.

Stargate is an amazing project for America that OpenAI has been instrumental in facilitating. As OpenAI has been leading the world in AI development on several important vectors, numerous massive investors maintain a strong interest in continuing to fund it. As such, OpenAI raising money, both for itself and to invest in Stargate, doesn’t surprise me and strikes me as a natural way to align OpenAI’s future in business with Stargate and the investors.

Do you think Elon Musk could have any influence on the new Trump administration to do things that would not benefit OpenAI?

My hope for the incoming administration is that they will care about all American industry, not any particular retaliatory things. And as such, I hope that there won’t be any challenges, but I think it’s definitely something that we should all be paying attention to.

There were rumors that you were planning to abandon the U.S. after Trump won the election. Are you sticking around? 

I am. My residence is still outside Seattle, Washington. 
