Business

Oracle’s collapsing stock shows the AI boom is running into two hard limits: physics and debt

Oracle’s rapid descent from market darling to market warning sign is revealing something deeper about the AI boom, experts say: no matter how euphoric investors became over the last two years, the industry can’t outrun the laws of physics—or the realities of debt financing.

Shares of Oracle have plunged 45% from their September high and lost 14% this week after a messy earnings report revealed it spent $12 billion in quarterly capital expenditures, higher than the $8.25 billion expected by analysts.

Earnings guidance was also weak, and the company raised its forecast for fiscal 2026 capex by another $15 billion. The bulk of that is going into data centers dedicated to OpenAI, Oracle’s $300 billion partner in the AI cycle. 

“We have ambitious, achievable goals for capacity delivery worldwide,” Oracle co-CEO Clay Magouyrk said on an earnings call this week.

Investors worry about how Oracle will pay for these massive outlays, as its underlying revenue streams (cloud revenue and cloud-infrastructure sales) also fell short of Wall Street’s expectations. Analysts have described its AI buildout as debt-fueled, even though the company does not explicitly link specific debt to specific capital projects in its filings.

And by Friday, even the crown jewel of Oracle’s AI strategy—its OpenAI data centers—was showing cracks. Bloomberg reported that Oracle has pushed back completion of some U.S. data centers for OpenAI from 2027 to 2028 because of “labor and material shortages.”

“It’s perfectly plausible that they’re seeing labor and materials shortages,” said data-center researcher Jonathan Koomey, who has advised utilities and hyperscalers including IBM and AMD. In his view, the AI boom is running directly into the difference between digital speed and physical speed. “The world of bits moves fast. The world of atoms doesn’t. And data centers are where those two worlds collide.”

Although Bloomberg didn’t identify which specific facilities were being delayed, Koomey said one likely candidate is Project Jupiter, Oracle’s gargantuan data-center complex proposed for a remote stretch of New Mexico. Local reporting has described Jupiter as a $160 billion-plus mega-campus, one of the most ambitious AI infrastructure projects ever attempted and a core piece of Oracle’s commitment to provide compute to OpenAI.

Koomey describes an industry where capital can be deployed instantly, but the equipment that capital must buy cannot. The timelines for turbines, transformers, specialized cooling systems, and high-voltage gear have stretched into years, he explained. Large transformers can take four to five years to arrive. Industrial gas turbines, which companies increasingly rely on for building microgrids, can take six or seven. 

Even if a company is willing to pay a premium, the factories that produce these components cannot magically expand overnight, and the skilled workforce trained to install them is already stretched thin. AI companies may want to move at the pace of model releases, but the construction and utility sectors operate on a fundamentally different timeline.

Koomey made it clear that the physical constraints he describes apply to all hyperscalers, but Oracle worries investors in particular because it’s getting into the AI infrastructure game late and tying much of its capex to one customer, OpenAI.  

“This happens every time there’s a massive shift in investment,” he said. “Eventually manufacturers catch up, but not right away. Reality intervenes.”

That friction becomes ever clearer once the financial limit enters the picture. While Oracle’s stock slide is dramatic, the bond-market reaction may be more important. Oracle’s bond yields blew out, with some newer notes that were once investment grade now trading like junk, as its credit-risk gauge hit the highest level since 2009. It signals that investors who lend to companies, historically the most sober observers of tech cycles, are beginning to reassess the risk of lending into the AI buildout. 

For the past few decades, the norm for tech companies was to pay for growth with earnings. Now many of them, including Oracle, are turning to credit markets to fund their sprawling expansions. According to a Bank of America analysis, the five biggest AI hyperscalers—Google, Meta, Amazon, Microsoft and Oracle—have collectively issued roughly $121 billion in bonds this year to fund AI data-center buildouts, a level of issuance far above historical averages and one that signals a major shift toward debt financing for infrastructure.

Oracle, however, has made some of the biggest deals of the five, including its $18 billion September bond sale, and its total debt load is roughly $100 billion. The other four also hold stronger cash positions, carry higher credit ratings (AA/A, versus Oracle’s BBB-area rating), and generate large positive free cash flow. So while Oracle isn’t the only tech giant tapping the debt markets for its AI outlays, its debt load, weaker cash generation, and lower credit rating make it one of the most leveraged.

Debt investors do not necessarily need blowout returns; they just need certainty that they will get their money back, with interest. If confidence wavers even a little, yields rise. 
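That price-yield mechanic can be shown with a back-of-the-envelope sketch. The numbers below are hypothetical, not Oracle’s actual bond terms: for a simple zero-coupon bond, the yield a lender earns is implied by the discount at which the bond trades, so when confidence wavers and buyers demand a deeper discount, the implied yield rises.

```python
# Toy illustration (hypothetical figures, not any real Oracle bond):
# a zero-coupon bond's yield is implied by its price, so a falling
# price mechanically means a rising yield.

def zero_coupon_yield(price: float, face_value: float, years: float) -> float:
    """Annualized yield implied by paying `price` today for `face_value` at maturity."""
    return (face_value / price) ** (1.0 / years) - 1.0

# $1,000 face value, 10 years to maturity
confident = zero_coupon_yield(610.0, 1000.0, 10)  # roughly 5.1%
wary = zero_coupon_yield(500.0, 1000.0, 10)       # roughly 7.2%
```

The same mechanism drives the “yields blew out” observation above: no default has to occur; a modest repricing of risk is enough to push borrowing costs up.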

“This feels like the 1998 moment,” Anuj Kapur, CEO of CloudBees and a former tech executive during the dot-com era, told Axios. There’s enormous promise, but also enormous uncertainty about how quickly the returns show up. 

Koomey saw a simple throughline.

“You have a disconnect between the tech people who have lots of money and are used to moving super fast, and the people who make the equipment and build the facilities, who need years to scale up their manufacturing,” he said.




Actor Joseph Gordon-Levitt wonders why AI companies don’t have to ‘follow any laws’

In a sharp critique of the current artificial intelligence landscape, actor, filmmaker, and increasingly vocal AI activist Joseph Gordon-Levitt challenged the tech industry’s resistance to regulation, posing a provocative rhetorical question to illustrate the dangers of unchecked development: “Are you in favor of erotic content for eight-year-olds?”

Speaking at the Fortune Brainstorm AI conference this week with editorial director Andrew Nusca, Gordon-Levitt used “The Artist and the Algorithm” session to pose another, deeper question: “Why should the companies building this technology not have to follow any laws? It doesn’t make any sense.”

In a wide-ranging conversation covering specific failures of self-regulation, including instances in which “AI companions” on major platforms reportedly veered into inappropriate territory for children, Gordon-Levitt argued that relying on internal company policies rather than external law is insufficient, noting such features were approved by corporate ethicists.

Gordon-Levitt’s criticisms were aimed, in part, at Meta, following the actor’s appearance in a New York Times Opinion video series airing similar claims. Meta spokesperson Andy Stone pushed back hard on X.com at the time, noting Gordon-Levitt’s wife was formerly on the board of Meta rival OpenAI.

Gordon-Levitt argued without government “guardrails,” ethical dilemmas become competitive disadvantages. He explained that if a company attempts to “prioritize the public good” and take the “high road,” they risk being “beat by a competitor who’s taking the low road.” Consequently, he said he believes business incentives alone will inevitably drive companies toward “dark outcomes” unless there is an interplay between the private sector and public law.

‘Synthetic intimacy’ and children

Beyond the lack of regulation, Gordon-Levitt expressed deep concern regarding the psychological impact of AI on children. He compared the algorithms used in AI toys to “slot machines,” saying they use psychological techniques designed to be addictive.

Drawing on conversations with NYU psychologist Jonathan Haidt, Gordon-Levitt warned against “synthetic intimacy.” He argued that while human interaction helps develop neural pathways in young brains, AI chatbots provide a “fake” interaction designed to serve ads rather than foster development.

“To me it’s pretty obvious that you’re going down a very bad path if you’re subjecting them to this synthetic intimacy that these companies are selling,” he said.

Haidt, whose New York Times bestseller The Anxious Generation Gordon-Levitt recommended onstage, recently appeared at a Dartmouth-United Nations Development Program symposium on mental health among young people and used the metaphor of tree roots for neurons. Explaining that tree-root growth is shaped by its environment, he showed a picture of a tree growing around a Civil War–era tombstone. Of Gen Z and technology, specifically the smartphone, he said: “Their brains have been growing around their phones very much in the way that this tree grew around this tombstone.” He also discussed the physical manifestations of this adaptation: children are “growing hunched around their phone,” and screen addiction is literally “warping eyeballs,” driving a global rise in myopia, or shortsightedness.

The ‘arms race’ narrative

When addressing why regulations have been slow to materialize, Gordon-Levitt pointed to a powerful narrative employed by tech companies: the geopolitical race against China. He described this framing as “storytelling” and “handwaving” designed to bypass safety checks. Companies often compare the development of AI to the Manhattan Project, arguing that slowing down for safety means losing a war for dominance. The Trump administration’s “Genesis Mission” to encourage AI innovation was unveiled with similar fanfare just weeks ago, in late November.

However, this stance met with pushback from the audience. Stephen Messer of Collectiv[i] argued that Gordon-Levitt’s arguments were falling apart quickly in a “room full of AI people.” Privacy rules, he said by way of example, previously decimated the U.S. facial-recognition industry, allowing China to take a dominant lead within just six months. Gordon-Levitt acknowledged the complexity, admitting “anti-regulation arguments often cherrypick” bad laws to argue against all laws. He maintained that while the U.S. shouldn’t cede ground, “we have to find a good middle ground” rather than having no rules at all.

Gordon-Levitt also criticized the economic model of generative AI, accusing companies of building models on “stolen content and data” while claiming “fair use” to avoid paying creators. He warned a system in which “100% of the economic upside” goes to tech companies and “0%” goes to the humans who created the training data is unsustainable.

Despite his criticisms, Gordon-Levitt clarified he is not a tech pessimist. He said he would absolutely use AI tools if they were “set up ethically” and creators were compensated. However, he concluded without establishing the principle that a person’s digital work belongs to them, the industry is heading down a “pretty dystopian road.”




Fed chair race: Warsh overtakes Hassett as favorite to be nominated by Trump

Wall Street’s top parlor game took a sudden turn on Monday, when the prediction market Kalshi showed Kevin Warsh is now the frontrunner to be nominated as the next Federal Reserve chairman, overtaking Kevin Hassett.

Warsh, a former Fed governor, now has a 47% probability, up from 39% on Sunday and just 11% on Dec. 3. Hassett, director of the National Economic Council, has fallen to 41%, down from 51% on Sunday and 81% on Dec. 3.

A report from CNBC saying Hassett’s candidacy was running into pushback from people close to President Donald Trump seemed to put Warsh on top. The resistance stems from concerns Hassett is too close to Trump.

That followed Trump’s comment late Friday, when he told The Wall Street Journal Warsh was at the top of his list, though he added “the two Kevins are great.”

According to the Journal, Trump met Warsh on Wednesday at the White House and pressed him on whether he could be trusted to back rate cuts. 

The report surprised Wall Street, which had priced Hassett as the overwhelming favorite, and it lifted Warsh’s odds from the cellar.

But even prior to the Journal story, there had been rumblings in the finance world that Hassett wasn’t the preferred choice to be Fed chair.

At a private conference for asset managers on Thursday, JPMorgan Chase CEO Jamie Dimon signaled support for Warsh and predicted Hassett was likelier to support Trump on more rate cuts, sources told the Financial Times.

And in a separate report earlier this month, the FT said bond investors shared their concerns about Hassett with the Treasury Department in November, saying they’re worried he would cut rates aggressively in order to please Trump.

Trump has said he will nominate a Fed chair in early 2026, with Jerome Powell’s term due to expire in May. 

For his part, Hassett appeared to put some distance between himself and Trump during an appearance on CBS’ Face the Nation on Sunday.

When asked if Trump’s voice would have equal weighting to the voting members on the rate-setting Federal Open Market Committee, Hassett replied, “no, he would have no weight.”

“His opinion matters if it’s good, if it’s based on data,” he explained. “And then if you go to the committee and you say, ‘well the president made this argument, and that’s a really sound argument, I think. What do you think?’ If they reject it, then they’ll vote in a different way.”




What happens to old AI chips? They’re still put to good use and don’t depreciate that fast

New AI chips seem to hit the market at an ever-quicker pace as tech companies scramble to gain supremacy in the global arms race for computational power.

But that raises the question: What happens to all those older-generation chips?

The AI stock boom has lost a lot of momentum in recent weeks due, in part, to worries that so-called hyperscalers aren’t correctly accounting for the depreciation in the hoard of chips they’ve purchased to power chatbots.

Michael Burry—the investor of Big Short fame who famously predicted the 2008 housing collapse—sounded the alarm last month when he warned AI-era profits are built on “one of the most common frauds in the modern era,” namely stretching the depreciation schedule. He estimated Big Tech will understate depreciation by $176 billion between 2026 and 2028.
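The mechanics behind Burry’s charge are simple arithmetic. Here is a toy sketch with hypothetical numbers (not any company’s actual figures): under straight-line depreciation, stretching the assumed useful life of a GPU fleet from three years to six halves the annual expense hitting the income statement, flattering reported profit by the difference.

```python
# Toy illustration with hypothetical numbers (not any company's actual
# figures): straight-line depreciation of a GPU fleet. A longer assumed
# useful life means a smaller annual expense and thus higher reported profit.
FLEET_COST = 30_000_000_000  # hypothetical purchase cost, in dollars

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line schedule: an equal expense in each year of useful life."""
    return cost / useful_life_years

short_schedule = annual_depreciation(FLEET_COST, 3)  # $10B expensed per year
long_schedule = annual_depreciation(FLEET_COST, 6)   # $5B expensed per year
profit_boost = short_schedule - long_schedule        # $5B/yr of flattered profit
```

The cash spent is identical either way; only the timing of the expense changes, which is why the debate turns on whether the longer schedules reflect how fast the chips really lose economic value.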

But according to a note last week from Alpine Macro, chip depreciation fears are overstated for three reasons.

First, analysts pointed out software advances that accompany next-generation chips can also level up older-generation processors. For example, software can improve the performance of Nvidia’s five-year-old A100 chip by two to three times compared to its initial version.

Second, Alpine said the need for older chips remains strong amid rising demand for inference, the computation a model performs when a chatbot responds to queries. In fact, inference demand will significantly outpace demand for AI training in the coming years.

“For inference, the latest hardware helps but is often not essential, so chip quantity can substitute for cutting-edge quality,” analysts wrote, adding Google is still running seven- to eight-year-old TPUs at full utilization.

Third, China continues to demonstrate “insatiable” demand for AI chips as its supply “lags the U.S. by several generations in quality and severalfold in quantity.” And even though Beijing has banned some U.S. chips, the black market will continue to serve China’s shortfalls.

Meanwhile, not all chips used in AI belong to hyperscalers; even the graphics processors in everyday gaming consoles could be put to work.

A note last week from Yardeni Research pointed to “distributed AI,” which draws on unused chips in homes, crypto-mining servers, offices, universities, and data centers to act as global virtual networks.

While distributed AI can be slower than a cluster of chips housed in the same data center, its network architecture can be more resilient if a computer or a group of them fails, Yardeni added.

“Though we are unable to ascertain how many GPUs were being linked in this manner, Distributed AI is certainly an interesting area worth watching, particularly given that billions are being spent to build new, large data centers,” the note said.




Copyright © Miami Select.