
AI bubble talk grips the market. But in the C-suite there’s more FOMO over AI’s benefits than fear of an AI bust

Hello and welcome to Eye on AI. In this edition…Nvidia becomes the first $5 trillion market cap company…Anthropic finds AI models have ‘introspection,’ of a kind…and Meta, Alphabet, and Microsoft tell investors just how much they’ve been spending on AI data centers. 

Hello, it’s Jeremy here. I’m just back from Fortune Global Forum in Riyadh, where AI was a central feature of many of the discussions. I’ll share a few insights from what I learned there.

Of course, there was a lot of discussion at the event about whether there’s an “AI bubble”—and that was before we got the latest earnings and capex numbers from Meta, Microsoft, and Alphabet. Wall Street’s disparate reactions to the companies’ quarterly report cards show the market’s growing impatience to see tangible results from hefty AI investments. Investors will only support companies that can show notable revenue impact today.

Why the market reacted so differently to Meta’s, Microsoft’s and Alphabet’s capex numbers

Consider Alphabet, which saw its shares climb after its earnings report. With its quarterly search revenues growing 14.5% year-over-year, and cloud revenues up 32%, Alphabet continues to defy concerns that AI poses an existential innovator’s dilemma to its core advertising-based business model. By contrast, Meta said capital expenditures on AI data centers next year would be even larger than the already whopping $70 billion to $72 billion it’s spending this year as CEO Mark Zuckerberg races to build “super-intelligence,” an incredibly ambitious effort with limited immediate revenue impact. Investors weren’t having it, and Meta’s shares got hammered, dropping 9% in pre-market trading.

Investor reaction to Microsoft’s earnings fell somewhere between these two extremes. Like Alphabet, it reported revenue numbers that exceeded consensus analyst forecasts, but not by much, and it also said capital expenditures would climb more than analysts had anticipated. Its shares slid in line with investors’ disappointment at the gap between revenue acceleration and capital expenditure growth, even though Microsoft’s cloud computing sales were up an impressive 40% from last year, a figure it largely attributed to AI spending.

What was striking at Fortune Global Forum, however, was how little global executives seemed to care about these financial market dynamics. If there was any consensus from the discussions in Riyadh, it was that the current moment is a lot like the early days of the internet or the rollout of cloud computing in the mid-2000s and early 2010s. In other words, a real technological transformation is underway. Yes, it might involve some companies becoming overvalued, as happened during the internet boom. But almost all agreed that AI is going to have a transformative and lasting impact on their companies, and on the world economy, even if there is a market correction.

Executives are finding value in AI

At an IBM-sponsored dinner at FGF that Fortune hosted, Ana Paula Assis, IBM’s senior vice president and chair for EMEA and growth markets, said that, in her experience, it wasn’t the fear of an AI bubble—the concern that AI might be just a flash in the pan that doesn’t live up to the hype—that held companies back from investing in the technology. Instead, it was the speed of AI innovation that was the real problem. Some companies, she said, seemed worried they would build systems around one set of models and capabilities, only to have those eclipsed in just a few months or a year, requiring them to change those workflows and swap models again. She described some potential customers as “like deer in the headlights,” dazzled and frozen in place by the pace of change.

On stage at the conference, Ruth Porat, the president and chief investment officer at Alphabet, echoed Assis’s view to some degree. She noted that there was a big disparity between the speed of AI advances and the speed at which companies were adopting the technology. She said this disparity was largely the result of how difficult it is for large enterprises to change internal processes in general. Getting the most out of AI requires companies to rethink every process, she said, so it is perhaps not surprising that adoption is happening much more slowly than the rate at which AI companies, including Google, are rolling out new models and capabilities.

IBM released survey results this week on EMEA enterprises that show companies are indeed moving ahead with deploying AI at scale. Its survey of 3,500 senior executives in 10 countries found that two-thirds reported “significant productivity gains” from deploying AI. In some sectors, such as finance, the figure was 72%. Adoption in Saudi Arabia was higher still, at 84%. What’s more, across EMEA, 92% of those surveyed were confident that AI agents would deliver ROI within the next two years. (Which may prove the point about the tech’s capabilities running far ahead of adoption. You might remember how many top tech execs declared 2025 to be “the year of AI agents.” I guess the real year of AI agents might be 2027!)

Ok, with that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

Character.AI bans teens from talking to its chatbots amid mounting lawsuits and regulatory pressure—by Beatrice Nolan

Everyone thinks AI is replacing factory workers, but Amazon’s layoffs show it’s coming for middle management first—by Eva Roytburg

Martin Sorrell says AI has already ‘missed the Oppenheimer moment’—by Allie Garfinkle

Longevity science is on the cusp of major breakthroughs thanks to AI, but significant ‘data gaps’ need to be filled, expert says—by Alexei Oreskovic

AI IN THE NEWS

Nvidia becomes world’s first $5 trillion company as it reveals $500 billion order backlog. The AI chip company became the first business ever to reach a $5 trillion market capitalization, after its shares rose earlier in the week following several announcements by its CEO and founder, Jensen Huang, at a developer conference in Washington, D.C. Huang revealed that the company has a $500 billion order backlog for its latest Blackwell GPUs and its upcoming Rubin GPUs. The company has also recently announced deeper partnerships and investments with OpenAI, Oracle, and Eli Lilly. Nvidia has seen its market cap add $3 trillion in value since early 2024. Read more from The Wall Street Journal here.

Fed Chair Powell says AI boom not comparable to dot-com bubble. U.S. Federal Reserve Chair Jerome Powell said the current artificial intelligence boom differs from the dot-com bubble because today’s leading companies—and here he seems to have been referring to the likes of Nvidia, Alphabet, Microsoft, and Meta, as opposed to the AI model makers such as OpenAI and Anthropic—actually generate profits. He also noted that the AI boom is driving tangible economic growth through investments in data centers and chips. (Although it should be said that the dot-com bubble also fueled capital investment in fiber optics and networking equipment.) He contrasted this with the 1990s internet frenzy, when many highly valued firms collapsed after failing to turn a profit. You can read more from CNBC here.

Anthropic says cutting-edge AI models may have a kind of introspection. The AI company said its Claude Opus 4 and 4.1 models exhibit early signs of introspection—the ability to detect and describe aspects of their own internal states rather than just generate plausible text. In experiments, Anthropic researchers “injected” specific neural activation patterns that they knew were associated with particular concepts into the model at times when it was not considering topics related to those concepts. The researchers then asked the model whether it noticed anything different about its thinking in these instances. The models were able to correctly identify some of these “thoughts” as not their own some of the time, indicating a limited form of self-monitoring, according to the Anthropic researchers. This introspective behavior, however, was highly inconsistent—occurring only about 20% of the time—and its underlying mechanisms remain unclear. Anthropic cautions that while intriguing, these findings do not imply human-like self-awareness but could help advance future work on model transparency and interpretability. You can read more in Anthropic’s blog post on the research here.
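
For readers curious about the mechanics, here is a minimal, hypothetical sketch of the activation-injection idea described above, written against a toy PyTorch model rather than Claude: a forward hook adds a “concept” direction to one layer’s activations, which is the basic steering move behind the experiment, before the model is queried about its own “thoughts.” The model, the concept vector, and the scaling factor are illustrative assumptions, not Anthropic’s actual setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden_dim = 16

# Toy stand-in for a model's hidden layers (not a real language model).
model = nn.Sequential(
    nn.Linear(hidden_dim, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, hidden_dim),  # we inject into this layer's output
    nn.ReLU(),
    nn.Linear(hidden_dim, hidden_dim),
)

# Hypothetical direction assumed to correspond to some concept.
concept_vector = torch.randn(hidden_dim)

def inject_concept(module, inputs, output):
    # Returning a modified tensor from a forward hook replaces the layer's output.
    return output + 4.0 * concept_vector

handle = model[2].register_forward_hook(inject_concept)

x = torch.randn(1, hidden_dim)
with torch.no_grad():
    steered = model(x)   # forward pass with the concept injected
handle.remove()
with torch.no_grad():
    baseline = model(x)  # same input, no injection

# The injection measurably shifts downstream activations; in the real experiment,
# the model is then asked whether it noticed anything unusual about its "thinking."
print("activation shift:", (steered - baseline).norm().item())
```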

Study finds top AI models can’t construct predictive “world models.” A group of researchers from the nonprofit AI lab the Basis Research Institute, with affiliations at MIT, Harvard University, the University of Montreal, the University of Cambridge, and Cornell University, built a new benchmark to test how leading LLMs perform at tasks that require understanding a virtual world, including discovering links between cause and effect and the “rules” by which the world operates. Their new “AutumnBench” involves a suite of 43 grid-world environments with 129 tasks, including predicting which objects are behind an obstruction, planning, and detecting what’s changed in a scene and the likely cause. They looked at how three state-of-the-art reasoning models (Anthropic’s Claude 4 Sonnet, Google’s Gemini 2.5 Pro, and OpenAI’s o3) compared against 517 human participants. They allowed the test subjects to spend some time exploring each virtual world and deploying strategies to figure out the rules of the world before testing them on the tasks. The results show that humans significantly outperform the AI models across all task types and environments. What’s more, they found that the models fail to adopt human-like strategies for determining the rules of the virtual worlds and how to perform the tasks, such as hypothesis-testing and updating their beliefs to account for new evidence. You can read the research paper here.

AI CALENDAR

Nov. 10-13: Web Summit, Lisbon. 

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

$78.2 billion

That’s the amount Meta, Microsoft, and Alphabet alone collectively spent building new AI data centers and buying AI hardware in the three months from the end of June through the end of September. And all three companies signaled they plan to ramp up that spending further over the next quarter and throughout 2026. You can read more here from the Financial Times.




The rise of AI reasoning models comes with a big energy tradeoff

Nearly all leading artificial intelligence developers are focused on building AI models that mimic the way humans reason, but new research shows these cutting-edge systems can be far more energy intensive, adding to concerns about AI’s strain on power grids.

AI reasoning models used 30 times more energy on average to respond to 1,000 written prompts than alternatives without this reasoning capability or that had it disabled, according to a study released Thursday. The work was carried out by the AI Energy Score project, led by Hugging Face research scientist Sasha Luccioni and Salesforce Inc. head of AI sustainability Boris Gamazaychikov.

The researchers evaluated 40 open, freely available AI models, including software from OpenAI, Alphabet Inc.’s Google and Microsoft Corp. Some models were found to have a much wider disparity in energy consumption, including one from Chinese upstart DeepSeek. A slimmed-down version of DeepSeek’s R1 model used just 50 watt hours to respond to the prompts when reasoning was turned off, or about as much energy as it takes to run a 50-watt lightbulb for an hour. With the reasoning feature enabled, the same model required 7,626 watt hours to complete the tasks.
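
As a rough sanity check on those figures, and assuming each run covered the same 1,000 prompts used throughout the study, the per-prompt averages and the reasoning multiplier fall out of simple division; the sketch below just reproduces that arithmetic.

```python
# Back-of-the-envelope arithmetic on the DeepSeek figures quoted above.
# The run totals come from the article; the per-prompt averages are derived.
PROMPTS = 1_000                # prompts per evaluation run, per the study
wh_reasoning_off = 50          # watt-hours for the full run with reasoning disabled
wh_reasoning_on = 7_626        # watt-hours for the full run with reasoning enabled

per_prompt_off = wh_reasoning_off / PROMPTS   # 0.05 Wh per prompt
per_prompt_on = wh_reasoning_on / PROMPTS     # ~7.6 Wh per prompt
multiplier = wh_reasoning_on / wh_reasoning_off

print(f"reasoning off: {per_prompt_off:.2f} Wh per prompt")
print(f"reasoning on:  {per_prompt_on:.2f} Wh per prompt")
print(f"multiplier for this model: ~{multiplier:.0f}x")  # ~153x, far above the 30x average
```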

The soaring energy needs of AI have increasingly come under scrutiny. As tech companies race to build more and bigger data centers to support AI, industry watchers have raised concerns about strained power grids and rising energy costs for consumers. A Bloomberg investigation in September found that wholesale electricity prices rose as much as 267% over the past five years in areas near data centers. There are also environmental drawbacks: Microsoft, Google and Amazon.com Inc. have previously acknowledged that the data center buildout could complicate their long-term climate objectives.

More than a year ago, OpenAI released its first reasoning model, called o1. Where its prior software replied almost instantly to queries, o1 spent more time computing an answer before responding. Many other AI companies have since released similar systems, with the goal of solving more complex multistep problems for fields like science, math and coding.

Though reasoning systems have quickly become the industry norm for carrying out more complicated tasks, there has been little research into their energy demands. Much of the increase in power consumption is due to reasoning models generating much more text when responding, the researchers said. 

The new report aims to better understand how AI energy needs are evolving, Luccioni said. She also hopes it helps people better understand that there are different types of AI models suited to different tasks. Not every query requires tapping the most computationally intensive AI reasoning systems.

“We should be smarter about the way that we use AI,” Luccioni said. “Choosing the right model for the right task is important.”

To test the difference in power use, the researchers ran all the models on the same computer hardware. They used the same prompts for each, ranging from simple questions — such as asking which team won the Super Bowl in a particular year — to more complex math problems. They also used a software tool called CodeCarbon to track how much energy was being consumed in real time.

The results varied considerably. The researchers found one of Microsoft’s Phi 4 reasoning models used 9,462 watt hours with reasoning turned on, compared with about 18 watt hours with it off. OpenAI’s largest gpt-oss model, meanwhile, had a less stark difference. It used 8,504 watt hours with reasoning on the most computationally intensive “high” setting and 5,313 watt hours with the setting turned down to “low.” 

OpenAI, Microsoft, Google and DeepSeek did not immediately respond to a request for comment.

Google released internal research in August that estimated the median text prompt for its Gemini AI service used 0.24 watt-hours of energy, roughly equal to watching TV for less than nine seconds. Google said that figure was “substantially lower than many public estimates.” 

Much of the discussion about AI power consumption has focused on large-scale facilities set up to train artificial intelligence systems. Increasingly, however, tech firms are shifting more resources to inference, or the process of running AI systems after they’ve been trained. The push toward reasoning models is a big piece of that as these systems are more reliant on inference.

Recently, some tech leaders have acknowledged that AI’s power draw needs to be reckoned with. In a November interview, Microsoft CEO Satya Nadella said the industry must earn the “social permission to consume energy” for AI data centers. To do that, he argued, tech must use AI to do good and foster broad economic growth.




SpaceX to offer insider shares at record-setting valuation

SpaceX is preparing to sell insider shares in a transaction that would value Elon Musk’s rocket and satellite maker above OpenAI’s record-setting $500 billion valuation, people familiar with the matter said.

One of the people briefed on the deal said that the share price under discussion is higher than $400 apiece, which would value SpaceX at between $750 billion and $800 billion, though the details could change. 

The company’s latest tender offer was discussed by its board of directors on Thursday at SpaceX’s Starbase hub in Texas. If confirmed, it would make SpaceX once again the world’s most valuable closely held company, vaulting past the previous record of $500 billion that ChatGPT owner OpenAI set in October.

Preliminary scenarios included per-share prices that would have put SpaceX’s value at roughly $560 billion or higher, the people said. The details of the deal could change before it closes, a third person said.

A representative for SpaceX didn’t immediately respond to a request for comment. 

The latest figure would be a substantial increase from the $212 a share set in July, when the company raised money and sold shares at a valuation of $400 billion.

The Wall Street Journal and Financial Times, citing unnamed people familiar with the matter, earlier reported that a deal would value SpaceX at $800 billion.

News of SpaceX’s valuation sent shares of EchoStar Corp., a satellite TV and wireless company, up as much as 18%. Last month, EchoStar agreed to sell spectrum licenses to SpaceX for $2.6 billion, adding to an earlier agreement to sell about $17 billion in wireless spectrum to Musk’s company.


The world’s most prolific rocket launcher, SpaceX dominates the space industry with its Falcon 9 rocket, which launches satellites and people to orbit.

SpaceX is also the industry leader in providing internet services from low-Earth orbit through Starlink, a system of more than 9,000 satellites that is far ahead of competitors including Amazon.com Inc.’s Amazon Leo.

SpaceX executives have repeatedly floated the idea of spinning off SpaceX’s Starlink business into a separate, publicly traded company — a concept President Gwynne Shotwell first suggested in 2020. 

However, Musk has publicly cast doubt on the prospect over the years, and Chief Financial Officer Bret Johnsen said in 2024 that a Starlink IPO would more likely take place “in the years to come.”

The Information, citing people familiar with the discussions, separately reported on Friday that SpaceX has told investors and financial institution representatives that it is aiming for an initial public offering for the entire company in the second half of next year.

A so-called tender or secondary offering, through which employees and some early shareholders can sell shares, provides investors in closely held companies such as SpaceX a way to generate liquidity.

SpaceX is working to develop its new Starship vehicle, advertised as the most powerful rocket ever developed, to loft huge numbers of Starlink satellites as well as carry cargo and people to the moon and, eventually, Mars.




U.S. consumers are so strained they put more than $1B on BNPL during Black Friday and Cyber Monday

Financially strained and cautious customers leaned heavily on buy now, pay later (BNPL) services over the holiday weekend.

Cyber Monday alone generated $1.03 billion (a 4.2% increase YoY) in online BNPL sales, with most transactions happening on mobile devices, per Adobe Analytics. Overall, consumers spent $14.25 billion online on Cyber Monday. To put that into perspective, BNPL made up more than 7.2% of total online sales that day.

As for Black Friday, eMarketer reported $747.5 million in online sales using BNPL services, with platforms like PayPal seeing a 23% uptick in BNPL transactions.

Likewise, digital financial services company Zip reported 1.6 million transactions across 280,000 of its locations over the Black Friday and Cyber Monday weekend. Millennials accounted for the largest share of those BNPL purchases (51%), followed by Gen Z, Gen X, and baby boomers, per Zip.

The Adobe data showed that people using BNPL were most likely to spend on categories such as electronics, apparel, toys, and furniture, which is consistent with previous years. This trend also tracks with Zip’s findings that shoppers primarily spent on tech, electronics, and fashion when using its services.

And while some may be surprised that shoppers are taking on more debt via BNPL (in this economy?!), analysts had already projected a strong shopping weekend. A Deloitte survey forecast that consumers would spend about $650 million over the Black Friday–Cyber Monday stretch—a 15% jump from 2023.

“US retailers leaned heavily on discounts this holiday season to drive online demand,” Vivek Pandya, lead analyst at Adobe Digital Insights, said in a statement. “Competitive and persistent deals throughout Cyber Week pushed consumers to shop earlier, creating an environment where Black Friday now challenges the dominance of Cyber Monday.”

This report was originally published by Retail Brew.


