Your voice has been heard. OpenAI has ‘heard from the Attorneys General’ of Delaware and California, and as a result the OpenAI nonprofit will retain control of OpenAI under their new plan, and both entities will retain the original mission.
Technically they are not admitting that their original plan was illegal and one of the biggest thefts in human history, but that is how you should in practice interpret the line ‘we made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.’
Another possibility is that the nonprofit board finally woke up and looked at what was being proposed and how people were reacting, and realized what was going on.
The ‘Not For Private Gain’ letter that was recently sent to those Attorneys General plausibly was a major causal factor in any or all of those conversations.
The question is, what exactly is the new plan? The fight is far from over.
As previously intended, OpenAI will transition their for-profit arm, currently an LLC, into a Public Benefit Corporation (PBC). They will also be getting rid of the capped-profit structure.
However they will be retaining the nonprofit’s control over the new PBC, and the nonprofit will (supposedly) get fair compensation for its previous financial interests in the form of a major (but suspiciously unspecified, other than ‘a large shareholder’) stake in the new PBC.
Bret Taylor (Chairman of the Board, OpenAI): The OpenAI Board has an updated plan for evolving OpenAI’s structure.
OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit. Going forward, it will continue to be overseen and controlled by that nonprofit.
Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC), a purpose-driven company structure that has to consider the interests of both shareholders and the mission.
The nonprofit will control and also be a large shareholder of the PBC, giving the nonprofit better resources to support many benefits.
Our mission remains the same, and the PBC will have the same mission.
We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.
We thank both offices and we look forward to continuing these important conversations to make sure OpenAI can continue to effectively pursue its mission of ensuring AGI benefits all of humanity. Sam wrote the letter below to our employees and stakeholders about why we are so excited for this new direction.
The rest of the post is a letter from Sam Altman, and it sounds like one; you are encouraged to read the whole thing.
Sam Altman (CEO OpenAI): The for-profit LLC under the nonprofit will transition to a Public Benefit Corporation (PBC) with the same mission. PBCs have become the standard for-profit structure for other AGI labs like Anthropic and X.ai, as well as many purpose-driven companies like Patagonia. We think it makes sense for us, too.
Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission.
Joshua Achiam (OpenAI, Head of Mission Alignment): OpenAI is, and always will be, a mission-first organization. Today’s update is an affirmation of our continuing commitment to ensure that AGI benefits all of humanity.
I find the structure of this solution not ideal but ultimately acceptable.
The current OpenAI structure is bizarre and complex. It does important good things, some of which this new arrangement will break. But the current structure also made OpenAI far less investable, which means giving away more of the company to profit maximizers, and causes a lot of real problems.
Thus, I see the structural changes, in particular the move to a normal profit distribution, as potentially a fair compromise to enable better access to capital, provided it is implemented fairly, and isn’t a backdoor to further shifts.
The devil is in the details. How is all this going to work?
What form will the nonprofit’s control take? Is it only that they will be a large shareholder? Will they have a special class of supervoting shares? Something else?
This deal is acceptable if and only if the nonprofit:

1. Has truly robust control going forward, control that is ironclad and that allows it to guide AI development in practice, not only in theory. Is this going to be only via voting shares? That would be a massive downgrade from the current power of the board, which already wasn’t so great. In practice, the ability to win a shareholder vote will mean little during potentially crucial fights, like a decision whether to release a potentially dangerous model.
   - What this definitely still does is give cover to management to do the right thing, if they actively want to do that. I’ll discuss this more later.
2. Gets a fair share of the profits, one that matches the value of its previous profit interests. I am very worried they will still get massively stolen from on this. As a reminder, right now most of the net present value of OpenAI’s future profits belongs to the nonprofit.
3. Uses those profits to advance its original mission, rather than turning into a de facto marketing arm, or doing generic philanthropy that doesn’t matter, or both.
   - There are still clear signs that OpenAI is largely planning to have the nonprofit buy AI services on behalf of other charities, or otherwise do things that are irrelevant to the mission. That would make it an ‘ordinary foundation’ combined with a marketing arm, effectively making its funds useless, although it could still act meaningfully via its control mechanisms.
Remember that in these situations, the ratchet only goes one way. The commercial interests will constantly try to wrest greater control and ownership of the profits away from us. They will constantly cite necessity and expedience to justify this. You’re playing defense, forever. Every compromise improves their position, and this one definitely will compared to doing nothing.
Or: This deal is getting worse and worse all the time.
Quintin Pope: Common mistake. They forgot to paint “Do Not Open” on the box.
There’s also the issue of the extent to which Altman controls the nonprofit board.
The reason the nonprofit needs control is to impact key decisions in real time. It needs control of a form that lets it do that. Because that kind of lever is not ‘standard,’ there will constantly be pressure to get rid of that ability, with threats of mild social awkwardness if these pressures are resisted.
So, with love, now that we have established what you are, it’s time to haggle over the price.
Rob Wiblin had an excellent thread explaining the attempted conversion, and he has another good explainer on what this new announcement means, as well as an emergency 80,000 Hours podcast on the topic that should come out tomorrow.
Consider this the highly informed and maximally skeptical and cynical take. Which, given the track records here, seems like a highly reasonable place to start.
The central things to know about the new plan are indeed:
1. The transition to a PBC and removal of the profit cap will still shift priorities, legal obligations and incentives towards profit maximization.
2. The nonprofit’s ‘control’ is at best weakened, and potentially fake.
3. The nonprofit’s mission might effectively be fake.
4. The nonprofit’s current financial interests could largely still be stolen.
It’s an improvement, but it might not effectively be all that much of one?
We need to stay vigilant. The fight is far from over.
Rob Wiblin: So OpenAI just said it’s no longer going for-profit and the non-profit will ‘retain control’. But don’t declare victory yet. OpenAI may actually be continuing with almost the same plan & hoping they can trick us into thinking they’ve stopped!
Or perhaps not. I’ll explain:
The core issue is control of OpenAI’s behaviour, decisions, and any AGI it produces.
Will the entity that builds AGI still have a legally enforceable obligation to make sure AGI benefits all humanity?
Will the non-profit still be able to step in if OpenAI is doing something appalling and contrary to that mission?
Will the non-profit still own an AGI if OpenAI develops it? It’s kinda important!
The new announcement doesn’t answer these questions and despite containing a lot of nice words the answers may still be: no.
(Though we can’t know and they might not even know themselves yet.)
The reason to worry is they’re still planning to convert the existing for-profit into a Public Benefit Corporation (PBC). That means the profit caps we were promised would be gone. But worse… the nonprofit could still lose true control. Right now, the nonprofit owns and directly controls the for-profit’s day-to-day operations. If the nonprofit’s “control” over the PBC is just extra voting shares, that would be a massive downgrade as I’ll explain.
(The reason to think that’s the plan is that today’s announcement sounded very similar to a proposal they floated in Feb in which the nonprofit gets special voting shares in a new PBC.)
Special voting shares in a new PBC are simply very different and much weaker than the control they currently have! First, in practical terms, voting power doesn’t directly translate to the power to manage OpenAI’s day-to-day operations – which the non-profit currently has.
If it doesn’t fight to retain that real power, the non-profit could lose the ability to directly manage the development and deployment of OpenAI’s technology. That includes the ability to decide whether to deploy a model (!) or license it to another company.
Second, PBCs have a legal obligation to balance the public interest against shareholder profits. If the nonprofit is just a big shareholder with super-voting shares, other investors in the PBC could sue, claiming OpenAI isn’t doing enough to pursue their interests (more profits)! Crazy sounding, but true.
And who do you think will be more vociferous in pursuing such a case through the courts… numerous for-profit investors with hundreds of billions on the line, or a non-profit operated by 9 very busy volunteers? Hmmm.
In fact in 2019, OpenAI President Greg Brockman said one of the reasons they chose their current structure and not a PBC was exactly because it allowed them to custom-write binding rules, including giving full control to the nonprofit! So they know this issue — and now want to be a PBC. See here.
If this is the plan it could mean OpenAI transitioning from:
• A structure where they must prioritise the nonprofit mission over shareholders
To:
• A new structure where they don’t have to — and may not even be legally permitted to do so.
(Note how it seems like the non-profit is giving up a lot here. What exactly is it getting in return that makes giving up both the profit caps and true control of the business and AGI the best way to pursue its mission? It seems like nothing to me.)
So, strange as it sounds, this could turn out to be an even more clever way for Sam and profit-motivated investors to get what they wanted. Profit caps would be gone and profit-motivated investors would have much more influence.
And all the while Sam and OpenAI would be able to frame it as if nothing is changing and the non-profit has retained the same control today they had yesterday!
(As an aside it looks like the SoftBank funding round that was reported as requiring a loss of nonprofit control would still go through. Their press release indicates that actually all they were insisting on was that the profit caps are removed and they’re granted shares in a new PBC.
So it sounds like investors think this new plan would transfer them enough additional profits, and sufficiently neuter the non-profit, for them to feel satisfied.)
Now, to be clear, the above might be wrongheaded.
I’m looking at the announcement cynically, assuming that some staff at OpenAI, and some investors, want to wriggle out of non-profit control however they can — because I think we have ample evidence that that’s the case!
The phrase “nonprofit control” is actually very vague, and those folks might be trying to ram a truck through that hole.
At the same time maybe / hopefully there are people involved in this process who are sincere and trying to push things in the right direction.
On that we’ll just have to wait and see and judge on the results.
Bottom line: The announcement might turn out to be a step in the right direction, but it might also just be a new approach to achieve the same bad outcome less visibly.
So do not relax.
And if it turns out they’re trying to fool you, don’t be fooled.
Gretchen Krueger: The nonprofit will retain control of OpenAI. We still need stronger oversight and broader input on whether and how AI is pursued at OpenAI and all the AI companies, but this is an important bar to see upheld, and I’m proud to have helped push for it!
Now it is time to make sure that control is real—and to guard against any changes that make it harder than it already is to strengthen public accountability. The devil is in the details we don’t know yet, so the work continues.
Roon says the quiet part out loud. We used to think it was possible to do the right thing and care about whether AI killed everyone. Now, those with power say, we can’t even imagine how we could have been so naive, let’s walk that back as quickly as we can so we can finally do some maximizing of the profits.
Roon: the idea of openai having a charter is interesting to me. A relic from a bygone era, belief that governance innovation for important institutions is even possible. Interested parties are tasked with performing exegesis of the founding documents.
Seems clear that the “capped profit” mechanism is from a time in which people assumed agi development would be more singular than it actually is. There are many points on the intelligence curve and many players. We should be discussing when Nvidia will require profit caps.
I do not think that the capped-profit structure requires strong assumptions about a singleton to make sense. It only requires that there be an oligopoly where the players are individually meaningful. If you have close to perfect competition, and the players have no market power and their products are fully fungible, then yes, of course being a capped-profit company makes no sense. Although it also does no real harm, as your profits were already rather capped in that scenario.
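To make the mechanics concrete, here is a minimal sketch, assuming the widely reported 100x cap on OpenAI’s earliest investors (the actual multiples varied by round and are not specified here): investor payout = min(realized return, 100 × investment), with the residual, max(0, realized return − 100 × investment), flowing to the nonprofit. Under near-perfect competition, realized returns never approach 100x, so the min never binds and the cap is inert.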
More than that, we have largely lost our ability to actually ask what problems humanity will face, and then ask what would actually solve those problems, and then try to do that thing. We are no longer trying to backward chain from a win. Which means we are no longer playing to win.
At best, we are creating institutions that might allow the people involved to choose to do the right thing, when the time comes, if they make that decision.
For several reasons, recent developments do still give me hope, even if we get a not-so-great version of the implementation details here.
The first is that this shows that the right forms of public pressure can still work, at least sometimes, for some combination of getting public officials to enforce the law and causing a company like OpenAI to compromise. The fight is far from over, but we have won a victory that was at best highly uncertain.
The second is that this will give the nonprofit at least a much better position going forward, and the ‘you have to change things or we can’t raise money’ argument is at least greatly weakened. Even though the nine members are very friendly to Altman, they are also sufficiently professional-class people, Responsible Authority Figures of a type, that one would expect the board to have real limits, and we can push for them to be kept more in the loop and be given more voice. De facto, I do not think that the nonprofit was going to get much, if any, additional financial compensation in exchange for giving up its stake.
The third is that, while OpenAI likely still has the ability to ‘weasel out’ of most of its effective constraints and obligations here, this preserves its ability to decide not to. As in, OpenAI and Altman could choose to do the right thing, even if they haven’t had the practice, with the confidence that the board would back them up, and that this structure would protect them from investors and lawsuits.
This is very different from saying that the board will act as a meaningful check on Altman, if Altman decides to act recklessly or greedily.
It is easy to forget that in the world of VCs and corporate America, in many ways it is not only that you have no obligation to do the right thing. It is that you have an obligation, and will face tremendous pressure, to do the wrong thing, in many cases merely because it is wrong, and certainly to do so if the wrong thing maximizes shareholder value in the short term.
Thus, the ability to fight back against that is itself powerful. Altman, and others in OpenAI leadership, are keenly aware of the dangers they are leading us into, even if we do not see eye to eye on what it will take to navigate them or how deadly are the threats we face. Altman knows, even if he claims in public to actively not know. Many members of the technical staff know. I still believe most of those who know do not wish for the dying of the light, and want humanity and value to endure in this universe; that they are normative and value good over bad and life over death and so on. So when the time comes, we want them to feel as much permission, and have as much power, to stand up for that as we can preserve for them.
It is the same as the Preparedness Framework, except that in this case we have only ‘concepts of a plan’ rather than an actually detailed plan. If everyone involved with power abides by the spirit of the Preparedness Framework, it is a deeply flawed but valuable document. If those involved with power discard the spirit of the framework, it isn’t worth the tokens that compose it. The same will go for a broad range of governance mechanisms.
Have Altman and OpenAI been endlessly disappointing? Well, yes. Are many of their competitors doing vastly worse? Also yes. Is OpenAI getting passing grades so far, given that reality does not grade on a curve? Oh, hell no. And it can absolutely be, and at some point will be, too late to try and do the right thing.
The good news is, I believe that today is not that day. And tomorrow looks good, too.