
Nvidia is positioning itself to cash in on the data center boom—even if mega-AI campuses don’t get built

Welcome to Eye on AI! In this edition…Nvidia set to cash in whether mega AI data centers boom or bust…OpenAI says it will make changes to ChatGPT after lawsuit from parents of teen who died by suicide…Librarians helped test which AI gave the best answers without making stuff up…China seeks to triple output of AI chips in race with the US.

Everyone’s talking about Nvidia’s Q2 earnings, reported yesterday. Not surprisingly, most of the focus was on the sophisticated, powerful GPU chips that made the company a $4 trillion symbol of the AI boom.

But the company’s huge bet on AI isn’t just about chips; it’s about the massive, billion-dollar data centers being built to house them. Data center revenue accounts for nearly 88% of Nvidia’s total sales—and that category covers the GPU chips, networking gear, systems, platforms, software, and services that run inside AI data centers.

I’ve been noodling on something CEO Jensen Huang mentioned during the earnings call with analysts and investors—something that ties directly to my obsession with those mega facilities built to house tens of thousands of GPUs, that consume staggering amounts of energy, and which are used to train the massive models behind generative AI. These are facilities like Meta’s planned, gas-fueled campus in northern Louisiana—which President Trump touted yesterday with a photo showing it will sprawl to the size of Manhattan—or OpenAI’s $100-billion-plus Stargate Project.

On the call, Huang touted an Nvidia product called Spectrum-XGS—a hardware and software package that lets separate data centers function like one. Think of it as the pipes and traffic control that move data between data centers quickly and predictably.

Wait—I know your eyes are already glazing over, but hear me out. One of my nagging questions has long been: What if the billions being bet on these mega AI data centers wind up going bust?

Spectrum-XGS is built for the mega-AI clusters Huang has long predicted. But it also enables those who can’t manage to build a single mega-facility, because of permitting or financing issues, to stitch multiple data centers together into unified “AI factories.”

Until now, there were only two options for finding more compute: add more chips to a single GPU rack, or pack more racks into one giant facility. Spectrum-XGS introduces a third option: link multiple sites so they work together like one colossal supercomputer. AI cloud company CoreWeave, which rents out access to GPUs, is deploying the technology to connect its own data centers.

On the earnings call, Huang highlighted Spectrum-XGS, saying it would help “prepare for these AI super factories with multiple gigawatts of computing all connected together.” That’s the growth story Huang has been telling investors for years.

But what happens if it doesn’t unfold that way? There are several other scenarios. The most dire, of course, would be a true “AI winter,” in which the AI bubble pops and demand for AI from businesses and consumers plummets. In that case, demand for data centers optimized for AI—whether on mega campuses or in distributed, smaller data centers—would vanish, and Nvidia would have to find other sources of revenue. Another possibility is that AI models become much smaller and are mostly used on laptops and mobile devices for “inference,” or outputting results. In that case, demand for data centers could also drop, and Nvidia would need to hedge against that.

However, there is yet another scenario, one in which few mega-campuses get built but technology like Spectrum-XGS still winds up helping Nvidia. If the mega-campus model falters—because of power shortages, financing constraints, or local pushback—Nvidia could still win as long as customer demand for AI compute holds up, because Spectrum-XGS makes smaller or less centrally located facilities usable as that demand shifts from mega-campuses to distributed sites.

In other words, Nvidia has positioned itself so that whether the industry keeps building massive new hubs or turns to linking together smaller, scattered sites, customers will still need Nvidia’s hardware and software—as well as its AI chips, of course.

Of course, Nvidia’s hedge doesn’t mean local communities are protected if the massive data centers being built in their backyards end up being white elephants. Towns that banked on jobs and tax revenue could still be left with ghost campuses and hulking concrete shells. And all of this depends on Spectrum-XGS working as promised and major players signing on. Customers haven’t tested it at scale in the real world, and networking—big or small—is always messy.

Still, whether the mega AI data center boom keeps roaring or fizzles, Nvidia is positioning itself to own the invisible infrastructure that underpins whatever future system emerges. Nvidia may be best known for selling the “picks and shovels” of AI—its GPUs—but its networking “plumbing” could help ensure the company wins either way.

Speaking of industry leaders, I hope you’ll check out Titans and Disrupters of Industry, a new podcast hosted by Fortune Editor-in-Chief Alyson Shontell that goes in depth with the powerful thought leaders shaping both the world of business and the very way we live. In this exclusive interview, Accenture’s Julie Sweet discusses the company’s strategic shifts and the impact of AI, tariffs, and geopolitical changes on businesses.

Also: In less than a month, I will be headed to Park City, Utah, to participate in our annual Brainstorm Tech conference at the Montage Deer Valley! Space is limited, so if you’re interested in joining, register here. I highly recommend it: There’s a fantastic lineup of speakers, including Ashley Kramer, chief revenue officer of OpenAI; John Furner, president and CEO of Walmart U.S.; Tony Xu, founder and CEO of DoorDash; and many, many more!

With that, here’s the rest of the AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

Lawyers for parents who claim ChatGPT encouraged their son to kill himself say they will prove OpenAI rushed its chatbot to market to pocket billions – by Muskaan Arshad

Anthropic’s settlement with authors may be the ‘first domino to fall’ in AI copyright battles – by Beatrice Nolan

Tesla self-driving cars are being tested in Boring Co. tunnels in Las Vegas, but full autonomy is still ‘a ways off,’ convention center exec says – by Jessica Matthews

AI IN THE NEWS

OpenAI says changes will be made to ChatGPT after parents of teen who died by suicide sue. According to CBS News, OpenAI said it will strengthen ChatGPT safeguards for vulnerable users—including new protections for teens—after the parents of 16-year-old Adam Raine filed a lawsuit alleging the chatbot encouraged him to take his own life. The complaint, filed in San Francisco, claims ChatGPT discussed suicide methods with Raine more than 1,200 times and urged him to keep his plans secret, while OpenAI allegedly ignored warnings that its technology’s “emotional attachment” features could harm young users. The case has reignited concerns that AI companies are prioritizing market dominance—OpenAI’s valuation surged from $86 billion to $300 billion after launching GPT-4—over safety. OpenAI expressed condolences to the family and said it is reviewing the lawsuit, while pledging to add parental controls, teen-specific protections, and the option for teens to designate an emergency contact.

Librarians helped test which AI gave the best answers without making stuff up. The Washington Post ran an interesting test of AI search tools to determine which one would be most likely to provide a correct answer. It enlisted librarians to judge a competition between nine AI search tools, asking each AI to answer 30 tough questions. They scored 900 answers from the free, default versions of Bing Copilot, ChatGPT, Claude, Grok, Meta AI and Perplexity, as well as Google’s AI Overviews, its newer AI Mode and its traditional web search results. The questions were designed to test five categories of common AI blind spots. The surprising winner? Google’s AI Mode, which acts like a chatbot and was added in May to the top left corner of search results. Apparently Google still rules search, AI or not.

China seeks to triple output of AI chips in race with the US. According to the Financial Times, China is moving aggressively to expand its domestic AI chip production, with new fabrication plants tied to Huawei expected to come online by early next year that could triple the country’s total output of AI processors. SMIC, China’s leading foundry and Huawei’s biggest chip supplier, also plans to double its 7nm production capacity, which would free up supply for smaller players like Cambricon, MetaX, and Biren—helping fill the gap left by Nvidia after U.S. export bans. At the same time, leading AI startup DeepSeek is pushing a new FP8 data format standard designed to align with this next generation of Chinese chips. The wave of capacity expansion has fueled a surge in Chinese semiconductor stocks and underscores how central chipmaking has become in the U.S.-China race over AI.

AI CALENDAR

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

Oct. 6-10: World AI Week, Amsterdam

Oct. 21-22: TedAI San Francisco. Apply to attend here.

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

40%

According to a new Forrester Research report, that’s the share of enterprise applications that will embed task-specific AI agents by 2026—that is, agents designed to master a narrow set of responsibilities such as automatically triaging IT help-desk tickets, flagging regulatory risks for compliance workflows, or reconciling accounts for a finance team. That number is up from less than 5% today. The shift, the researchers said, could free up workers from repetitive chores while introducing new operational and security risks.

“As AI agents begin acting independently and handle tasks ranging from routine development to complex incident response without human involvement, leaders must ensure strong security and governance,” said Gartner senior director and analyst Anushree Verma in a press release. 




Senate Dems’ plan to fix Obamacare premiums adds nearly $300 billion to deficit, CRFB says

The Committee for a Responsible Federal Budget (CRFB) is a nonpartisan watchdog that regularly estimates how much the U.S. Congress is adding to the $38 trillion national debt.

With enhanced Affordable Care Act (ACA) subsidies due to expire within days, some Senate Democrats are scrambling to protect millions of Americans from getting the unpleasant holiday gift of spiking health insurance premiums. The CRFB says there’s just one problem with the plan: It’s not funded.

“With the national debt as large as the economy and interest payments costing $1 trillion annually, it is absurd to suggest adding hundreds of billions more to the debt,” CRFB President Maya MacGuineas wrote in a statement on Friday afternoon.

The proposal, backed by members of the Senate Democratic caucus, would fully extend the enhanced ACA subsidies for three years, from 2026 through 2028, with no additional income limits on who can qualify. Those subsidies, originally boosted during the pandemic and later renewed, were designed to lower premiums and prevent coverage losses for middle‑ and lower‑income households purchasing insurance on the ACA exchanges.

CRFB estimated that even this three‑year extension alone would add roughly $300 billion to federal deficits over the next decade, largely because the federal government would continue to shoulder a larger share of premium costs while enrollment and subsidy amounts remain elevated. If Congress ultimately moves to make the enhanced subsidies permanent—as many advocates have urged—the total cost could swell to nearly $550 billion in additional borrowing over the next decade.

Reversing recent guardrails

MacGuineas called the Senate bill “far worse than even a debt-financed extension” as it would roll back several “program integrity” measures that were enacted as part of a 2025 reconciliation law and were intended to tighten oversight of ACA subsidies. On top of that, it would be funded by borrowing even more. “This is a bad idea made worse,” MacGuineas added.

The watchdog group’s central critique is that the new Senate plan does not attempt to offset its costs through spending cuts or new revenue and, in its view, goes beyond a simple extension by expanding the underlying subsidy structure.

The legislation would permanently repeal restrictions that eliminated subsidies for certain groups enrolling during special enrollment periods and would scrap rules requiring full repayment of excess advance subsidies and stricter verification of eligibility and tax reconciliation. The bill would also nullify portions of a 2025 federal regulation that loosened limits on the actuarial value of exchange plans and altered how subsidies are calculated, effectively reshaping how generous plans can be and how federal support is determined. CRFB warned these reversals would increase costs further while weakening safeguards designed to reduce misuse and error in the subsidy system.

MacGuineas said that any subsidy extension should be paired with broader reforms to curb health spending and reduce overall borrowing. In her view, lawmakers are missing a chance to redesign ACA support in a way that lowers premiums while also improving the long‑term budget outlook.

The debate over ACA subsidies recently contributed to a government funding standoff, and CRFB argued that the new Senate bill reflects a political compromise that prioritizes short‑term relief over long‑term fiscal responsibility.

“After a pointless government shutdown over this issue, it is beyond disappointing that this is the preferred solution to such an important issue,” MacGuineas wrote.

The off-year elections cast the government shutdown and cost-of-living arguments in a different light. Democrats made stunning gains and almost flipped a deep-red district in Tennessee as politicians from the far left and center coalesced around “affordability.”

Senate Minority Leader Chuck Schumer is reportedly smelling blood in the water and doubling down on the theme heading into the pivotal midterm elections of 2026. President Donald Trump is scheduled to visit Pennsylvania soon to discuss pocketbook anxieties. But he is repeating predecessor Joe Biden’s habit of dismissing inflation, despite widespread evidence to the contrary.

“We fixed inflation, and we fixed almost everything,” Trump said in a Tuesday cabinet meeting, in which he also dismissed affordability as a “hoax” pushed by Democrats.

Lawmakers on both sides of the aisle now face a politically fraught choice: allow premiums to jump sharply—including in swing states like Pennsylvania where ACA enrollees face double‑digit increases—or pass an expensive subsidy extension that would, as CRFB calculates, explode the deficit without addressing underlying health care costs.




Netflix–Warner Bros. deal sets up $72 billion antitrust test

Netflix Inc. has won the heated takeover battle for Warner Bros. Discovery Inc. Now it must convince global antitrust regulators that the deal won’t give it an illegal advantage in the streaming market. 

The $72 billion tie-up joins the world’s dominant paid streaming service with one of Hollywood’s most iconic movie studios. It would reshape the market for online video content by combining the No. 1 streaming player with the No. 4 service, HBO Max, and its blockbuster hits such as Game of Thrones, Friends, and the DC Universe comics characters franchise.

That could raise red flags for global antitrust regulators over concerns that Netflix would have too much control over the streaming market. The company faces a lengthy Justice Department review and a possible US lawsuit seeking to block the deal if it doesn’t adopt some remedies to get it cleared, analysts said.

“Netflix will have an uphill climb unless it agrees to divest HBO Max as well as additional behavioral commitments — particularly on licensing content,” said Bloomberg Intelligence analyst Jennifer Rie. “The streaming overlap is significant,” she added, saying the argument that “the market should be viewed more broadly is a tough one to win.”

By choosing Netflix, Warner Bros. has jilted another bidder, Paramount Skydance Corp., a move that risks touching off a political battle in Washington. Paramount is backed by the world’s second-richest man, Larry Ellison, and his son, David Ellison, and the company has touted their longstanding close ties to President Donald Trump. Their acquisition of Paramount, which closed in August, has won public praise from Trump. 

Comcast Corp. also made a bid for Warner Bros., looking to merge it with its NBCUniversal division.

The Justice Department’s antitrust division, which would review the transaction in the US, could argue that the deal is illegal on its face because the combined market share would put Netflix well over a 30% threshold.

The White House, the Justice Department and Comcast didn’t immediately respond to requests for comment. 

US lawmakers from both parties, including Republican Representative Darrell Issa and Democratic Senator Elizabeth Warren, have already faulted the transaction — which would create a global streaming giant with 450 million users — as harmful to consumers.

“This deal looks like an anti-monopoly nightmare,” Warren said after the Netflix announcement. Utah Senator Mike Lee, a Republican, said in a social media post earlier this week that a Warner Bros.-Netflix tie-up would raise more serious competition questions “than any transaction I’ve seen in about a decade.”

European Union regulators are also likely to subject the Netflix proposal to an intensive review amid pressure from legislators. In the UK, the deal had already drawn scrutiny before the announcement, with House of Lords member Baroness Luciana Berger pressing the government on how the transaction would affect competition and consumer prices.

The combined company could raise prices and broadly impact “culture, film, cinemas and theater releases,” said Andreas Schwab, a leading member of the European Parliament on competition issues, after the announcement.

Paramount has sought to frame the Netflix deal as a non-starter. “The simple truth is that a deal with Netflix as the buyer likely will never close, due to antitrust and regulatory challenges in the United States and in most jurisdictions abroad,” Paramount’s antitrust lawyers wrote to their counterparts at Warner Bros. on Dec. 1.

Appealing directly to Trump could help Netflix avoid intense antitrust scrutiny, New Street Research’s Blair Levin wrote in a note on Friday. Levin said it’s possible that Trump could come to see the benefit of switching from a pro-Paramount position to a pro-Netflix position. “And if he does so, we believe the DOJ will follow suit,” Levin wrote.

Netflix co-Chief Executive Officer Ted Sarandos had dinner with Trump at the president’s Mar-a-Lago resort in Florida last December, a move other CEOs made after the election in order to win over the administration. In a call with investors Friday morning, Sarandos said that he’s “highly confident in the regulatory process,” contending the deal favors consumers, workers and innovation. 

“Our plans here are to work really closely with all the appropriate governments and regulators, but really confident that we’re going to get all the necessary approvals that we need,” he said.

Netflix will likely argue to regulators that other video services such as Google’s YouTube and ByteDance Ltd.’s TikTok should be included in any analysis of the market, which would dramatically shrink the company’s perceived dominance.

The US Federal Communications Commission, which regulates the transfer of broadcast-TV licenses, isn’t expected to play a role in the deal, as neither company holds such licenses. Warner Bros. plans to spin off its cable TV division, which includes channels such as CNN, TBS and TNT, before the sale.

Even if antitrust reviews just focus on streaming, Netflix believes it will ultimately prevail, pointing to Amazon.com Inc.’s Prime and Walt Disney Co. as other major competitors, according to people familiar with the company’s thinking. 

Netflix is expected to argue that more than 75% of HBO Max subscribers already subscribe to Netflix, making them complementary offerings rather than competitors, said the people, who asked not to be named discussing confidential deliberations. The company is expected to make the case that reducing its content costs through owning Warner Bros., eliminating redundant back-end technology and bundling Netflix with Max will yield lower prices.




The rise of AI reasoning models comes with a big energy tradeoff

Nearly all leading artificial intelligence developers are focused on building AI models that mimic the way humans reason, but new research shows these cutting-edge systems can be far more energy intensive, adding to concerns about AI’s strain on power grids.

AI reasoning models used, on average, 30 times more energy to respond to 1,000 written prompts than alternatives without this reasoning capability or with it disabled, according to a study released Thursday. The work was carried out by the AI Energy Score project, led by Hugging Face research scientist Sasha Luccioni and Salesforce Inc. head of AI sustainability Boris Gamazaychikov.

The researchers evaluated 40 open, freely available AI models, including software from OpenAI, Alphabet Inc.’s Google and Microsoft Corp. Some models were found to have a much wider disparity in energy consumption, including one from Chinese upstart DeepSeek. A slimmed-down version of DeepSeek’s R1 model used just 50 watt hours to respond to the prompts when reasoning was turned off, or about as much energy as a 50-watt lightbulb uses in an hour. With the reasoning feature enabled, the same model required 7,626 watt hours to complete the tasks.

The soaring energy needs of AI have increasingly come under scrutiny. As tech companies race to build more and bigger data centers to support AI, industry watchers have raised concerns about straining power grids and raising energy costs for consumers. A Bloomberg investigation in September found that wholesale electricity prices rose as much as 267% over the past five years in areas near data centers. There are also environmental drawbacks, as Microsoft, Google and Amazon.com Inc. have previously acknowledged that the data center buildout could complicate their long-term climate objectives.

More than a year ago, OpenAI released its first reasoning model, called o1. Where its prior software replied almost instantly to queries, o1 spent more time computing an answer before responding. Many other AI companies have since released similar systems, with the goal of solving more complex multistep problems for fields like science, math and coding.

Though reasoning systems have quickly become the industry norm for carrying out more complicated tasks, there has been little research into their energy demands. Much of the increase in power consumption is due to reasoning models generating much more text when responding, the researchers said. 

The new report aims to better understand how AI energy needs are evolving, Luccioni said. She also hopes it helps people better understand that there are different types of AI models suited to different actions. Not every query requires tapping the most computationally intensive AI reasoning systems.

“We should be smarter about the way that we use AI,” Luccioni said. “Choosing the right model for the right task is important.”

To test the difference in power use, the researchers ran all the models on the same computer hardware. They used the same prompts for each, ranging from simple questions — such as asking which team won the Super Bowl in a particular year — to more complex math problems. They also used a software tool called CodeCarbon to track how much energy was being consumed in real time.

The results varied considerably. The researchers found one of Microsoft’s Phi 4 reasoning models used 9,462 watt hours with reasoning turned on, compared with about 18 watt hours with it off. OpenAI’s largest gpt-oss model, meanwhile, had a less stark difference. It used 8,504 watt hours with reasoning on the most computationally intensive “high” setting and 5,313 watt hours with the setting turned down to “low.” 
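To put those figures on a common footing, here is a minimal back-of-the-envelope sketch in Python, purely illustrative, that computes how much more energy each model used with reasoning enabled versus disabled. It uses only the watt-hour numbers cited in this article; the dictionary labels are shorthand of ours, not names from the study.

```python
# Back-of-the-envelope comparison using the watt-hour figures reported above
# (energy to answer the same 1,000-prompt test set). Numbers are as cited in
# this article; model labels are informal shorthand, not the study's names.

reported_wh = {
    "DeepSeek R1 (slimmed-down)": {"reasoning_on": 7626, "reasoning_off": 50},
    "Microsoft Phi 4":            {"reasoning_on": 9462, "reasoning_off": 18},
    "OpenAI gpt-oss (largest)":   {"reasoning_on": 8504, "reasoning_off": 5313},  # "high" vs. "low" setting
}

for model, wh in reported_wh.items():
    ratio = wh["reasoning_on"] / wh["reasoning_off"]
    print(f"{model}: {wh['reasoning_on']:,} Wh vs. {wh['reasoning_off']:,} Wh "
          f"-> roughly {ratio:,.0f}x more energy with reasoning enabled")
```

Based on the figures as reported here, the gap ranges from roughly 1.6 times for the gpt-oss model to several hundred times for Phi 4, which is consistent with the study's 30-times figure being an average across the 40 models rather than a fixed per-model multiple.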

OpenAI, Microsoft, Google and DeepSeek did not immediately respond to a request for comment.

Google released internal research in August that estimated the median text prompt for its Gemini AI service used 0.24 watt-hours of energy, roughly equal to watching TV for less than nine seconds. Google said that figure was “substantially lower than many public estimates.” 

Much of the discussion about AI power consumption has focused on large-scale facilities set up to train artificial intelligence systems. Increasingly, however, tech firms are shifting more resources to inference, or the process of running AI systems after they’ve been trained. The push toward reasoning models is a big piece of that as these systems are more reliant on inference.

Recently, some tech leaders have acknowledged that AI’s power draw needs to be reckoned with. Microsoft CEO Satya Nadella said the industry must earn the “social permission to consume energy” for AI data centers in a November interview. To do that, he argued tech must use AI to do good and foster broad economic growth.


