Business
OpenAI’s master builder: Greg Brockman is steering a $1.4 trillion AI infrastructure surge
Published 1 month ago
By Jace Porter
In early October, OpenAI president Greg Brockman and AMD CEO Lisa Su made the rounds of TV news shows, smiling ear to ear as they announced a multiyear partnership worth tens of billions of dollars—one that will see OpenAI deploy hundreds of thousands of AMD chips across its Stargate Project data center mega-campuses. The deal represents roughly six gigawatts of computing power, about three times the generating capacity of the Hoover Dam.
Su told Fortune that Brockman’s insistence on thinking big was essential to making the deal—which sent AMD’s stock soaring 24% the day it was announced.
“What I love the most about working with Greg is he’s just so clear in his vision that compute is the currency of intelligence, and his just maniacal focus on ensuring there’s enough compute in this world,” Su said.
She recalled that the negotiations with Brockman were different from any she’s had with other potential partners over the years. Partnerships like this usually unfold in stages, she said. “We start at the first stage of the partnership, and then we do something a little bigger, and then something a little bit bigger.”
However, Brockman wanted to go big or go home. “I think Greg was like, ‘failure is not an option,’” she said. “The infrastructure we’re building is at a very different scale from how normal people build. We’re building gigawatts of compute in a very short amount of time. It’s really about, how do we break the laws of physics?”
Sam Altman may be OpenAI’s globe-trotting visionary and the public face of the company, but it is Brockman, his longtime ally and cofounder, who has become its high-visibility operator. He is the executive leading OpenAI’s aggressive infrastructure buildout, a project to which the company has already committed roughly $1.4 trillion in order to deploy the equivalent of 30 gigawatts of compute capacity. That also makes Brockman the point person for a high-stakes financial gamble, given that the company reportedly generates only about $13 billion a year in revenue.
All this dealmaking is in service of what Brockman calls “completing the mission”—reaching artificial general intelligence, or AGI, that “benefits all of humanity.” In an interview with Fortune, Brockman described building AGI as an end-to-end engineering challenge, one that spans everything from how the models are designed to the chips, servers, and data centers that power the training and running of models.
“The fundamental bet is that AGI is possible, and if we are right about that, then it will really change everything,” he said. “In my mind, the real question is, do you believe in continued AI progress?” Brockman is certainly a believer: “There’s no bend in the scaling laws,” he said of the idea that if you build bigger AI models, feed them more data, and train them on larger clusters of AI-specific chips, their performance improves in predictable, smooth curves. “The thing that’s hard is execution.”
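For readers unfamiliar with the term, “scaling laws” describe an empirical pattern reported in published AI research: a model’s error falls along a smooth power-law curve as parameters and training data grow. The sketch below is purely illustrative; the functional form follows that public research, and the constants are invented for the example rather than being OpenAI’s numbers.

```python
# Illustrative power-law scaling curve of the kind Brockman describes.
# The constants below are invented for illustration, not OpenAI's figures.

def predicted_loss(params: float, tokens: float,
                   E: float = 1.7, A: float = 400.0, B: float = 1800.0,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Loss falls smoothly and predictably as model size and data grow."""
    return E + A / params**alpha + B / tokens**beta

for params in (1e9, 1e10, 1e11, 1e12):
    # assume ~20 training tokens per parameter, a common rule of thumb
    print(f"{params:.0e} params -> predicted loss {predicted_loss(params, 20 * params):.3f}")
```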
A remarkable re-emergence
His central role in executing on OpenAI’s infrastructure mission (which he explained includes building and managing the chips, data centers, software, and the actual operations needed to “deliver intelligence at unprecedented scale”) marks a remarkable re-emergence for an executive whose future at the company once seemed uncertain. He had been removed from OpenAI’s nonprofit board at the time of Altman’s firing and later took a months-long sabbatical beginning in August 2024. Media outlets reported that he and Altman had agreed to the sabbatical amid ongoing concerns that his demanding leadership style had created tension within teams. It wasn’t clear he would ever come back to OpenAI, or, if he did, what role he would have.
But these days, Brockman has become ubiquitous. There he is, with President Trump in Tokyo. There he is, dining at the White House. There he is, pouring millions of his own money into Leading the Future, a $100 million political action committee dedicated to lobbying against AI regulation. Behind the scenes, Brockman reportedly helped shape OpenAI’s corporate restructuring into a Public Benefit Corporation, announced last week, a move that enables the company to raise even more capital. And now, OpenAI is, according to news reports, laying the groundwork for an initial public offering that could value the company at up to $1 trillion, in what would be the largest IPO ever and a first for a former nonprofit.
This comeback of sorts puts Brockman at the center of OpenAI’s most consequential shift yet—as it transitions from merely building AI models to building the systems to run and serve them—what is known as inference in the AI field. Brockman is leading the most ambitious (and expensive) infrastructure buildout in tech history, serving as the behind-the-scenes architect translating Altman’s vision into hardware, investment, and political capital.
“Greg is some of the secret sauce…behind actually bringing these [deals] together and making partners want to get to announcements,” said Peter Hoeschele, an OpenAI executive who, as the head of the Stargate team, reports to Brockman.
Still, the story of Brockman’s resurgence isn’t just about one executive’s rebound—it’s about who controls the next industrial revolution. Brockman has become one of the biggest power brokers of the AI era. As OpenAI’s “builder-in-chief,” he sits at the crossroads of AI, energy, and capital, orchestrating deals that will shape how — and where — the world’s computing power is developed and deployed.
Completing the mission
OpenAI’s charter defines AGI as an autonomous system that can outperform humans at most economically valuable work. But at the company’s recent Dev Day, Brockman described AGI as a “continuous process… an important milestone, but not the end.”
Continuous or not, the current route to reaching AGI requires what would be the largest infrastructure build in history. “It really makes programs like the Apollo program almost small in comparison, which is a really wild statement,” Brockman recently told CNBC’s Squawk on the Street, adding that he believes there will be economic returns. “This is really going to be the underpinning of our future economy and is already showing the promise and benefit to people’s lives,” he said.
But the effort has also become a lightning rod. Building the infrastructure to pursue AGI could ultimately cost trillions of dollars—enough to reshape power markets and test the limits of the electrical grid. The surge in demand is already driving up energy prices and fueling political backlash as sprawling data centers turn into election-season flashpoints in the communities where they are being constructed. Critics also question whether demand will continue to grow at a fast enough pace to justify the investment.
The financing methods being used to fund the infrastructure buildout add another dimension of risk. For example, as part of its agreement with OpenAI, Nvidia has reportedly discussed guaranteeing loans the startup would use to build its own data centers—a move that could leave the chipmaker on the hook for billions in debt if OpenAI can’t repay. Analysts have also raised concerns about the circular nature of the deal: OpenAI pays Nvidia cash for chips, while Nvidia, in turn, takes a non-controlling equity stake in OpenAI and backstops its loans.
OpenAI’s partnership with AMD, while not similarly circular, is symbiotic—OpenAI has an option to acquire up to a 10% stake in AMD.
Brockman has acknowledged the difficulty of building enough computing infrastructure to handle what he calls the “avalanche of demand” for AI, and has said that creative financing mechanisms will be necessary. Still, analysts are wary of how intertwined the major players have become. “There’s a healthy part and an unhealthy part to the AI ecosystem,” Gil Luria, managing director at D.A. Davidson, told NBC in early October. “The unhealthy part has become marked by related-party transactions like the ones involving these companies,” he said, which can artificially prop up valuations.
If investors decide those ties are getting too close, Luria warned, “there will be some deflating activity.” In other words, investors might bail on companies such as Nvidia, Oracle, and CoreWeave, whose fates are deemed too closely tied up with OpenAI’s.
Brockman as builder
Having grown up on what he has called a “hobby farm” in North Dakota, Brockman may seem like an unlikely figure to end up at the heart of one of the biggest technological transformations in modern history. But he has long enjoyed building things—in fact, his own LinkedIn bio reads simply: “I love to build.”
And the drive to solve complex problems started early. Robert Nishihara, now CEO of software platform Anyscale, first met Brockman when they were teenagers at the Canada/USA Math Camp, an intense five-week program for students who “just love math and are solving problems all day.” Even then, Nishihara said, “Greg was clearly one of the smartest people there.” Years later, when Nishihara was visiting Harvard as a prospective student, Brockman, who was already attending, served as a mentor, showing him around campus and taking him to a notoriously difficult freshman math class.
Ultimately, Brockman spent only a short time at Harvard before transferring to MIT; he then dropped out of university entirely in 2010. That was when he joined Patrick and John Collison as online payment startup Stripe’s fourth employee, serving as its first CTO and building the company’s early engineering systems, often coding through the night. Stripe was one of tech incubator Y Combinator’s breakout companies, and in 2015, Patrick Collison introduced Brockman to Altman, who was president of Y Combinator at the time. That year, he teamed up with Altman, Ilya Sutskever, and others to launch OpenAI, where he was, according to a blog post, excited to have “something impactful to build once again.”
In the company’s early years, prior to Microsoft’s first $1 billion investment into OpenAI, Brockman essentially served as the AI lab’s CEO, while Altman continued to run Y Combinator. Brockman’s intense work ethic quickly became legend. One former OpenAI engineer recalled a pivotal moment in 2020 when the company needed to prove it could become a viable business. “Greg basically hacked together the first API one weekend, I think over Christmas,” the person said, referring to the launch of OpenAI’s first commercial product — an API, or application programming interface, which let developers plug OpenAI’s language models into their own apps and products.
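In practical terms, that first API exposed an HTTP endpoint: an app sends a prompt and gets generated text back. Below is a minimal sketch of that pattern, following the shape of OpenAI’s public completions endpoint; the API key is a placeholder, and the model name is simply one example of a completion-capable model rather than anything tied to that 2020 launch.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real key is required
URL = "https://api.openai.com/v1/completions"  # OpenAI's text-completion endpoint

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo-instruct",  # an example completion-capable model
        "prompt": "Write a one-sentence description of a payments startup.",
        "max_tokens": 60,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```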
The former engineer also recalled that when OpenAI was far smaller—around 200 people—Brockman had set his Slack to a mode in which he would get a notification for every single message from anybody in the company, on every channel. “You could be in some random technical thread and Greg would chime in with some incredibly informed and knowledgeable idea,” he explained. That said, it was “effectively impossible” for anyone to match his pace on anything: “So when I was assigning people to work with Greg, I chose very carefully—because you weren’t going to be sleeping.”
After those sprints, Brockman would disappear for a while. “He’d go super hard, then go off like a bear and hibernate for a few weeks, and then come back,” the colleague said.
While Brockman took on a less public-facing role after Altman became CEO in 2019, to many inside the company he is both the engine and the metronome of OpenAI. “He’s the heartbeat of OpenAI—the one who sets the pace,” said another former researcher at the company. “He has incredibly high standards and expects results.”
That intensity can also make him impatient. “If something’s not moving fast enough, Greg will take it into his own hands and work around people if necessary,” said another former OpenAI employee. “He’s very much an ends-over-means kind of person.”
His way of working with staffers sometimes caused friction. Keach Hagey, in her 2025 book The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, suggested that Greg Brockman’s management style at OpenAI drew internal complaints, and that one of two self-deleting documents, emailed by Ilya Sutskever to the OpenAI board before Sam Altman’s firing, laid out concerns about Brockman’s “alleged bullying.” The memo — dubbed the “Brockman memo” — has since become central to Elon Musk’s lawsuit against OpenAI. In an October 1 deposition before a U.S. district court, Sutskever acknowledged its existence, and the judge ordered him to produce it as part of discovery.
In response to Sutskever’s allegations, an OpenAI spokesperson told Fortune that “These claims aren’t true. Ilya signed the petition asking for Greg and Sam to be reinstated, and the Board’s independent review further concluded that he and Sam are the right leaders for OpenAI.”
Today, Brockman says he remains focused on building—whether that means writing software or leading OpenAI’s infrastructure project—which he calls “really the theme of what I do,” even as the balance between technical and strategic work has shifted over time.
Infrastructure from the get-go
From the start, Brockman viewed infrastructure as central to OpenAI’s mission. Back in 2017, he said, the company began writing down hardware projections that suddenly dwarfed its early assumptions. “We started to think, okay, maybe we’ll need $10 billion worth of hardware,” Brockman recalled. “At that point, you need data centers.”
Today, those physical infrastructure requirements—the chips and the data centers behind them—operate on a staggering scale, with energy needs measured in gigawatts. Each gigawatt represents 1,000 megawatts of power—roughly what it takes to supply 750,000 American homes. “There are very few people in the world who’ve ever thought about building a gigawatt-scale data center and what that requires,” said Hoeschele.
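The homes comparison is simple arithmetic. Here is a rough check, assuming an average U.S. household uses about 10,800 kilowatt-hours per year; the household figure is an assumption for the example, not one cited by OpenAI.

```python
# Back-of-the-envelope check on "one gigawatt is roughly 750,000 homes."
GIGAWATT_KW = 1_000_000                    # 1 GW expressed in kilowatts
ANNUAL_KWH_PER_HOME = 10_800               # assumed average U.S. household use
avg_kw_per_home = ANNUAL_KWH_PER_HOME / (365 * 24)  # about 1.23 kW continuous draw

homes_per_gw = GIGAWATT_KW / avg_kw_per_home
print(f"One gigawatt sustains roughly {homes_per_gw:,.0f} average homes")  # ~810,000
```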
Stargate marks OpenAI’s shift from relying largely on leased cloud compute—mostly from Microsoft—to committing to its own large-scale infrastructure, with data-center builds announced across multiple U.S. states, including Texas, New Mexico and, just last week, Michigan. It is also expanding internationally in countries like Norway and the UAE.
Hoeschele recalled early debates about whether the company should really commit to such an audacious investment. “Three years ago, I kept asking, ‘Okay, how much do you think we are really going to need?’” he said. “Greg has always been the voice, both behind the scenes and when he needs to be public, about the scale of compute that’s required to keep testing and deploying the technology. We are going to continue to make these investments.”
And while critics worry about the environmental and economic toll of AI infrastructure, Brockman insists the long-term benefits will outweigh the costs. “At the end of the day, what this technology is for is to benefit people,” he said. “I think it is worth really looking at the fundamentals, to make sure that we’re looking at the right data – I’ve seen a lot of numbers about data centers and their impacts on communities that are definitely not accurate.”
However, he added that he knows OpenAI needs to prove its value to local communities. “That is really our focus, to really show that it is actually good for your community, for your life, for there to be a data center nearby. I think that that is something that we will show to people over time.”
Brockman’s power and influence
According to an OpenAI spokesperson, during his 2024 sabbatical Brockman was still in touch with the company and following its developments–which included closing a $6.6 billion funding round that valued the company at around $157 billion. Once Brockman returned in November 2024, he seemed newly energized. In an internal memo, he wrote that he had been working with Altman to create a new role focused on “significant technical challenges.” Within weeks, that mandate had a name: a new group called Scaling, which Brockman told Fortune “merged the deep learning engineering of both our research and applied teams.” Scaling’s job, he explained, “is to make sure we have (and can maximally harness) the computing power we need to train and run our models.”
This team, he continued, “works on everything from how we train our frontier models to how we run ChatGPT for millions of people. It’s where some of the hardest technical challenges live, because as we make new breakthroughs and push the horizons of our current ones, we constantly need to invent new ways to debug, manage, and scale the computing systems that support them.”
Just two months later–the day after President Trump’s inauguration–OpenAI unveiled the Stargate Project, a joint venture with Oracle and SoftBank announced at the White House alongside President Trump—an audacious public-private plan to invest up to $500 billion over four years to build massive data centers and other infrastructure in the United States to power AI. By July, Brockman, known as a top recruiter, had poached a string of high-profile engineers away from rivals, including Spas Lazarov, former director of data center engineering at Apple; David Lau, former vice president of software engineering at Tesla; Uday Ruddarraju, the former head of infrastructure engineering at xAI and X; Mike Dalton, an infrastructure engineer from xAI; and Angela Fan, an AI researcher from Meta.
Stargate showed the sheer scale of OpenAI’s ambition, but it also made clear that the company gets there through the connection between Altman’s vision and Brockman’s execution. “That’s the beauty of their partnership,” Hoeschele added. “When OpenAI is at its best, Sam is laying out our vision and Greg is making it a reality, leaning on his technical expertise and relationships. He is working closely with people like Lisa Su and Jensen Huang to make these deals happen.”
That combination of technical credibility and dealmaking reach has also made Brockman an increasingly influential political player. In recent months, he has poured millions of his own money into Leading the Future, the $100 million pro-AI super PAC also backed by venture capital firm Andreessen Horowitz and other tech leaders, which supports candidates favoring deregulation and faster AI deployment.
Brockman was also among a high-powered group of tech executives who attended a White House dinner in September, where he praised Trump for his “optimism” in embracing AI and the massive infrastructure buildout required to support it. The following month, he returned to the White House for a fundraising dinner aimed at raising money for a planned $200 million ballroom addition–though an OpenAI spokesperson emphasized that “he attended the October dinner in his personal capacity, but hasn’t donated to the ballroom effort.” Many view these moves, however, as part of a broader effort to ease regulatory friction around the Stargate build-out OpenAI is leading.
Still, not everyone sees him as fully independent. “My strong sense, based on what I know from close friends who were at OpenAI for years, is that Greg is not super-independent from Sam—even as he makes his own commitments and puts his money in places that Sam might not,” said a Washington-based technology consultant who previously worked with Palantir and the federal government. “When it comes to OpenAI and the business, Greg is his own person, but he does not go sideways with Sam on company strategy—especially partnerships.”
The path forward is to keep building
Even as OpenAI’s ambitions draw scrutiny and criticism—from regulators, rivals, and local communities—Brockman’s faith in building seems unshaken. In a podcast with Stripe cofounder Patrick Collison, Brockman asked viewers to imagine having one entire Stargate data center think about one problem. “Imagine it just thinking about how to solve a Millennium Problem [one of seven well-known, unsolved complex mathematical problems] or how to cure a specific kind of cancer,” he said. “That level of computational power coupled with the ability to experiment and learn from your ideas, that is going to be something the world has never seen.”
As for the eye-watering spending commitments recently announced, Brockman has said they will pay for themselves. “If we had 10 [times] more compute [computing power], I don’t know if we’d have 10 [times] more revenue, but I don’t think we would be that far.”
If Altman remains OpenAI’s evangelist, Brockman is doing some crusading of his own, beating the drum about the need for more computing power across the entire AI industry. “If the market does wake up to the demand that we’re really very loudly trying to say is coming, not just from us but from the whole industry, then great,” he said during OpenAI’s recent Dev Day. “I would love not to have to go and figure out how to build energy ourselves, but we’re here to do the mission.”
He remains undaunted by that mission, even as skeptics warn that OpenAI’s audacious buildout risks becoming a monument to overreach rather than innovation. Seven years ago, he told Fortune, the part of OpenAI’s mission that required building gigantic data centers would have been just a sketch on paper. Today, those mega facilities are actually rising out of former ranchland in Abilene, Texas, and emerging from the abandoned hulk of an auto assembly plant in Lordstown, Ohio, with others already announced in New Mexico, Wisconsin and Michigan. Whether those vast complexes are ultimately remembered as glory or folly, Brockman’s imprint will be there — in the acres of cables and racks, the engineering ambition, and the unshaken belief that it was worth building at all.
Business
Senate Dems’ plan to fix Obamacare premiums adds nearly $300 billion to deficit, CRFB says
Published December 5, 2025
By Jace Porter
The Committee for a Responsible Federal Budget (CRFB) is a nonpartisan watchdog that regularly estimates how much the U.S. Congress is adding to the $38 trillion national debt.
With enhanced Affordable Care Act (ACA) subsidies due to expire within days, some Senate Democrats are scrambling to protect millions of Americans from getting the unpleasant holiday gift of spiking health insurance premiums. The CRFB says there’s just one problem with the plan: It’s not funded.
“With the national debt as large as the economy and interest payments costing $1 trillion annually, it is absurd to suggest adding hundreds of billions more to the debt,” CRFB President Maya MacGuineas wrote in a statement on Friday afternoon.
The proposal, backed by members of the Senate Democratic caucus, would fully extend the enhanced ACA subsidies for three years, from 2026 through 2028, with no additional income limits on who can qualify. Those subsidies, originally boosted during the pandemic and later renewed, were designed to lower premiums and prevent coverage losses for middle‑ and lower‑income households purchasing insurance on the ACA exchanges.
CRFB estimated that even this three‑year extension alone would add roughly $300 billion to federal deficits over the next decade, largely because the federal government would continue to shoulder a larger share of premium costs while enrollment and subsidy amounts remain elevated. If Congress ultimately moves to make the enhanced subsidies permanent—as many advocates have urged—the total cost could swell to nearly $550 billion in additional borrowing over the next decade.
Reversing recent guardrails
MacGuineas called the Senate bill “far worse than even a debt-financed extension” as it would roll back several “program integrity” measures that were enacted as part of a 2025 reconciliation law and were intended to tighten oversight of ACA subsidies. On top of that, it would be funded by borrowing even more. “This is a bad idea made worse,” MacGuineas added.
The watchdog group’s central critique is that the new Senate plan does not attempt to offset its costs through spending cuts or new revenue and, in their view, goes beyond a simple extension by expanding the underlying subsidy structure.
The legislation would permanently repeal restrictions that eliminated subsidies for certain groups enrolling during special enrollment periods and would scrap rules requiring full repayment of excess advance subsidies and stricter verification of eligibility and tax reconciliation. The bill would also nullify portions of a 2025 federal regulation that loosened limits on the actuarial value of exchange plans and altered how subsidies are calculated, effectively reshaping how generous plans can be and how federal support is determined. CRFB warned these reversals would increase costs further while weakening safeguards designed to reduce misuse and error in the subsidy system.
MacGuineas said that any subsidy extension should be paired with broader reforms to curb health spending and reduce overall borrowing. In her view, lawmakers are missing a chance to redesign ACA support in a way that lowers premiums while also improving the long‑term budget outlook.
The debate over ACA subsidies recently contributed to a government funding standoff, and CRFB argued that the new Senate bill reflects a political compromise that prioritizes short‑term relief over long‑term fiscal responsibility.
“After a pointless government shutdown over this issue, it is beyond disappointing that this is the preferred solution to such an important issue,” MacGuineas wrote.
The off-year elections cast the government shutdown and cost-of-living arguments in a different light. Democrats made stunning gains and almost flipped a deep-red district in Tennessee as politicians from the far left and center coalesced around “affordability.”
Senate Minority Leader Chuck Schumer is reportedly smelling blood in the water and doubling down on the theme heading into the pivotal midterm elections of 2026. President Donald Trump is scheduled to visit Pennsylvania soon to discuss pocketbook anxieties. But he is repeating predecessor Joe Biden’s habit of dismissing inflation, despite widespread evidence to the contrary.
“We fixed inflation, and we fixed almost everything,” Trump said in a Tuesday cabinet meeting, in which he also dismissed affordability as a “hoax” pushed by Democrats.
Lawmakers on both sides of the aisle now face a politically fraught choice: allow premiums to jump sharply—including in swing states like Pennsylvania where ACA enrollees face double‑digit increases—or pass an expensive subsidy extension that would, as CRFB calculates, explode the deficit without addressing underlying health care costs.
Business
Netflix–Warner Bros. deal sets up $72 billion antitrust test
Published December 5, 2025
By Jace Porter
Netflix Inc. has won the heated takeover battle for Warner Bros. Discovery Inc. Now it must convince global antitrust regulators that the deal won’t give it an illegal advantage in the streaming market.
The $72 billion tie-up joins the world’s dominant paid streaming service with one of Hollywood’s most iconic movie studios. It would reshape the market for online video content by combining the No. 1 streaming player with the No. 4 service, HBO Max, home to blockbuster hits such as Game of Thrones, Friends, and the DC Universe franchise of comic-book characters.
That could raise red flags for global antitrust regulators over concerns that Netflix would have too much control over the streaming market. The company faces a lengthy Justice Department review and a possible US lawsuit seeking to block the deal if it doesn’t adopt some remedies to get it cleared, analysts said.
“Netflix will have an uphill climb unless it agrees to divest HBO Max as well as additional behavioral commitments — particularly on licensing content,” said Bloomberg Intelligence analyst Jennifer Rie. “The streaming overlap is significant,” she added, saying the argument that “the market should be viewed more broadly is a tough one to win.”
By choosing Netflix, Warner Bros. has jilted another bidder, Paramount Skydance Corp., a move that risks touching off a political battle in Washington. Paramount is backed by the world’s second-richest man, Larry Ellison, and his son, David Ellison, and the company has touted their longstanding close ties to President Donald Trump. Their acquisition of Paramount, which closed in August, has won public praise from Trump.
Comcast Corp. also made a bid for Warner Bros., looking to merge it with its NBCUniversal division.
The Justice Department’s antitrust division, which would review the transaction in the US, could argue that the deal is illegal on its face because the combined market share would put Netflix well over a 30% threshold.
The White House, the Justice Department and Comcast didn’t immediately respond to requests for comment.
US lawmakers from both parties, including Republican Representative Darrell Issa and Democratic Senator Elizabeth Warren, have already faulted the transaction — which would create a global streaming giant with 450 million users — as harmful to consumers.
“This deal looks like an anti-monopoly nightmare,” Warren said after the Netflix announcement. Utah Senator Mike Lee, a Republican, said in a social media post earlier this week that a Warner Bros.-Netflix tie-up would raise more serious competition questions “than any transaction I’ve seen in about a decade.”
European Union regulators are also likely to subject the Netflix proposal to an intensive review amid pressure from legislators. In the UK, the deal has already drawn scrutiny before the announcement, with House of Lords member Baroness Luciana Berger pressing the government on how the transaction would impact competition and consumer prices.
The combined company could raise prices and broadly impact “culture, film, cinemas and theater releases,” said Andreas Schwab, a leading member of the European Parliament on competition issues, after the announcement.
Paramount has sought to frame the Netflix deal as a non-starter. “The simple truth is that a deal with Netflix as the buyer likely will never close, due to antitrust and regulatory challenges in the United States and in most jurisdictions abroad,” Paramount’s antitrust lawyers wrote to their counterparts at Warner Bros. on Dec. 1.
Appealing directly to Trump could help Netflix avoid intense antitrust scrutiny, New Street Research’s Blair Levin wrote in a note on Friday. Levin said it’s possible that Trump could come to see the benefit of switching from a pro-Paramount position to a pro-Netflix position. “And if he does so, we believe the DOJ will follow suit,” Levin wrote.
Netflix co-Chief Executive Officer Ted Sarandos had dinner with Trump at the president’s Mar-a-Lago resort in Florida last December, a move other CEOs made after the election in order to win over the administration. In a call with investors Friday morning, Sarandos said that he’s “highly confident in the regulatory process,” contending the deal favors consumers, workers and innovation.
“Our plans here are to work really closely with all the appropriate governments and regulators, but really confident that we’re going to get all the necessary approvals that we need,” he said.
Netflix will likely argue to regulators that other video services such as Google’s YouTube and ByteDance Ltd.’s TikTok should be included in any analysis of the market, which would dramatically shrink the company’s perceived dominance.
The US Federal Communications Commission, which regulates the transfer of broadcast-TV licenses, isn’t expected to play a role in the deal, as neither company holds such licenses. Warner Bros. plans to spin off its cable TV division, which includes channels such as CNN, TBS and TNT, before the sale.
Even if antitrust reviews just focus on streaming, Netflix believes it will ultimately prevail, pointing to Amazon.com Inc.’s Prime and Walt Disney Co. as other major competitors, according to people familiar with the company’s thinking.
Netflix is expected to argue that more than 75% of HBO Max subscribers already subscribe to Netflix, making them complementary offerings rather than competitors, said the people, who asked not to be named discussing confidential deliberations. The company is expected to make the case that reducing its content costs through owning Warner Bros., eliminating redundant back-end technology and bundling Netflix with Max will yield lower prices.
Business
The rise of AI reasoning models comes with a big energy tradeoff
Published December 5, 2025
By Jace Porter
Nearly all leading artificial intelligence developers are focused on building AI models that mimic the way humans reason, but new research shows these cutting-edge systems can be far more energy intensive, adding to concerns about AI’s strain on power grids.
AI reasoning models used 30 times more power on average to respond to 1,000 written prompts than alternatives without this reasoning capability or which had it disabled, according to a study released Thursday. The work was carried out by the AI Energy Score project, led by Hugging Face research scientist Sasha Luccioni and Salesforce Inc. head of AI sustainability Boris Gamazaychikov.
The researchers evaluated 40 open, freely available AI models, including software from OpenAI, Alphabet Inc.’s Google and Microsoft Corp. Some models were found to have a much wider disparity in energy consumption, including one from Chinese upstart DeepSeek. A slimmed-down version of DeepSeek’s R1 model used just 50 watt hours to respond to the prompts when reasoning was turned off, or about as much power as is needed to run a 50 watt lightbulb for an hour. With the reasoning feature enabled, the same model required 7,626 watt hours to complete the tasks.
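Put on a per-prompt basis, the DeepSeek figures quoted above work out to a gap of more than two orders of magnitude. The arithmetic below uses only the numbers reported in the study.

```python
# Per-prompt energy for the DeepSeek R1 example: 1,000 prompts,
# 50 Wh with reasoning off, 7,626 Wh with reasoning on.
PROMPTS = 1_000
WH_REASONING_OFF = 50
WH_REASONING_ON = 7_626

print(f"Reasoning off: {WH_REASONING_OFF / PROMPTS:.2f} Wh per prompt")
print(f"Reasoning on:  {WH_REASONING_ON / PROMPTS:.2f} Wh per prompt")
print(f"Ratio: about {WH_REASONING_ON / WH_REASONING_OFF:.0f}x more energy")
```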
The soaring energy needs of AI have increasingly come under scrutiny. As tech companies race to build more and bigger data centers to support AI, industry watchers have raised concerns about straining power grids and raising energy costs for consumers. A Bloomberg investigation in September found that wholesale electricity prices rose as much as 267% over the past five years in areas near data centers. There are also environmental drawbacks, as Microsoft, Google and Amazon.com Inc. have previously acknowledged the data center buildout could complicate their long-term climate objectives.
More than a year ago, OpenAI released its first reasoning model, called o1. Where its prior software replied almost instantly to queries, o1 spent more time computing an answer before responding. Many other AI companies have since released similar systems, with the goal of solving more complex multistep problems for fields like science, math and coding.
Though reasoning systems have quickly become the industry norm for carrying out more complicated tasks, there has been little research into their energy demands. Much of the increase in power consumption is due to reasoning models generating much more text when responding, the researchers said.
The new report aims to better understand how AI energy needs are evolving, Luccioni said. She also hopes it helps people better understand that there are different types of AI models suited to different actions. Not every query requires tapping the most computationally intensive AI reasoning systems.
“We should be smarter about the way that we use AI,” Luccioni said. “Choosing the right model for the right task is important.”
To test the difference in power use, the researchers ran all the models on the same computer hardware. They used the same prompts for each, ranging from simple questions — such as asking which team won the Super Bowl in a particular year — to more complex math problems. They also used a software tool called CodeCarbon to track how much energy was being consumed in real time.
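CodeCarbon is an open-source Python library, and the measurement pattern it supports is straightforward: start a tracker, run the workload, stop the tracker. The sketch below shows that pattern only; it is not the study’s actual test harness, and run_model is a stand-in for whatever inference call is being measured.

```python
from codecarbon import EmissionsTracker

def run_model(prompt: str):
    ...  # stand-in for sending the prompt to the model under test

tracker = EmissionsTracker(project_name="reasoning-vs-standard")
tracker.start()
try:
    for prompt in ["Which team won the Super Bowl in 2015?",
                   "Prove that the square root of 2 is irrational."]:
        run_model(prompt)
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

print(f"Estimated emissions for the batch: {emissions_kg:.6f} kg CO2eq")
```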
The results varied considerably. The researchers found one of Microsoft’s Phi 4 reasoning models used 9,462 watt hours with reasoning turned on, compared with about 18 watt hours with it off. OpenAI’s largest gpt-oss model, meanwhile, had a less stark difference. It used 8,504 watt hours with reasoning on the most computationally intensive “high” setting and 5,313 watt hours with the setting turned down to “low.”
OpenAI, Microsoft, Google and DeepSeek did not immediately respond to a request for comment.
Google released internal research in August that estimated the median text prompt for its Gemini AI service used 0.24 watt-hours of energy, roughly equal to watching TV for less than nine seconds. Google said that figure was “substantially lower than many public estimates.”
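That equivalence checks out with basic arithmetic if you assume a television drawing roughly 100 watts; the wattage is an assumption for the example, not part of Google’s estimate.

```python
PROMPT_WH = 0.24   # Google's estimate for a median Gemini text prompt
TV_WATTS = 100     # assumed television power draw
tv_seconds = PROMPT_WH / TV_WATTS * 3600
print(f"{PROMPT_WH} Wh is about {tv_seconds:.1f} seconds of TV at {TV_WATTS} W")  # ~8.6 s
```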
Much of the discussion about AI power consumption has focused on large-scale facilities set up to train artificial intelligence systems. Increasingly, however, tech firms are shifting more resources to inference, or the process of running AI systems after they’ve been trained. The push toward reasoning models is a big piece of that as these systems are more reliant on inference.
Recently, some tech leaders have acknowledged that AI’s power draw needs to be reckoned with. Microsoft CEO Satya Nadella said the industry must earn the “social permission to consume energy” for AI data centers in a November interview. To do that, he argued tech must use AI to do good and foster broad economic growth.