I first learned about The Clause from Microsoft CEO Satya Nadella. During an interview with him in May 2023, I asked about the deal between Microsoft and OpenAI that granted his company exclusive access to the startup’s groundbreaking AI technology. I knew the contract had set a cap on how much profit Microsoft could make from the arrangement, and I asked him what would happen if and when that point was reached. The answer was a bit puzzling.
“Fundamentally, their long-term idea is we get to superintelligence,” he told me. “If that happens, I think all bets are off, right?” He seemed almost jaunty about the possibility, leading me to wonder how seriously he took it. “If this is the last invention of humankind, then all bets are off,” he continued. “Different people will have different judgments on what that is, and when that is.”
I didn’t realize how important that determination would be until a few weeks later. Working on a feature about OpenAI, I learned that the contract basically declared that if OpenAI’s models achieved artificial general intelligence, Microsoft would no longer have access to its new models. The terms of the contract, which otherwise would have extended until 2030, would be void. Though I wrote about it in my story, and The Clause has never really been a state secret, it didn’t generate much discussion.
That’s no longer the case. The Clause has been at the center of the increasingly frayed relationship between Microsoft and OpenAI and is under renegotiation. It has been the subject of investigative stories by The Information, The Wall Street Journal, the Financial Times, and, yes, WIRED.
But the significance of The Clause goes beyond the fates of the two companies that agreed to it. The tenuous conditions of that contract go to the heart of a raging debate about just how world-changing—and lucrative—AGI might be if realized, and what it would mean for a profit-driven company to control a technology that makes Sauron’s Ring of Power look like a dime-store plastic doodad. If you want to understand what’s happening in AI, pretty much everything can be explained by The Clause.
Let’s dig into the details. Though the precise language hasn’t been made public, sources with knowledge of the contract confirm that The Clause has three parts, each with its own implications.
There are two conditions that must be satisfied for OpenAI to deny its technology to Microsoft. First, the OpenAI board would determine that its new models have achieved AGI, which is defined in OpenAI’s charter as “highly autonomous systems that outperform humans at most economically valuable work.” Fuzzy enough for you? No wonder Microsoft is worried that OpenAI will make that determination prematurely. Its only way to object to the OpenAI board’s declaration would be to sue.
But that’s not all. The OpenAI board would also be required to determine whether the new models have achieved “sufficient AGI.” This is defined as a model capable of generating profits sufficient to reward Microsoft and OpenAI’s other investors, a figure upwards of $100 billion. OpenAI doesn’t have to actually make those profits, just provide evidence that its new models will generate that bounty. Unlike the first determination, this one requires Microsoft’s agreement, though the company can’t unreasonably withhold it. (Again, a court may ultimately have to settle any dispute.)
Altman himself admitted to me in 2023 that the standards are vague. “It gives our board a lot of control to decide what happens when we get there,” he said. In any case, if OpenAI decides it has reached sufficient AGI, it doesn’t have to share those models with Microsoft, which will be stuck with the now outdated earlier versions. It won’t even have to use Microsoft’s cloud servers; currently Microsoft has the right of first refusal for the work.