Good morning. CEOs and CFOs are clearly focused on AI—it is the most-used term in S&P 500 earnings calls this year.
FactSet examined conference call transcripts for all S&P 500 companies that held earnings calls from September 15 through December 4 and found that the term “AI” was cited on 306 calls. This is the highest number of S&P 500 earnings calls on which “AI” has been cited over the past 10 years; the previous record was 292 in Q2 2025, according to John Butters, VP and senior earnings analyst at FactSet. In addition, the 306 figure is significantly above the five-year average of 136 and the 10-year average of 86.
At the sector level, the information technology (95%) and communication services (95%) sectors have the highest percentages of earnings calls citing “AI” for Q3.
In addition, S&P 500 companies that cited “AI” on their Q3 earnings calls have seen higher average price increases than those that did not: since Dec. 31, 2024 (13.9% vs. 5.7%), since June 30, 2025 (8.1% vs. 3.9%), and since Sept. 30, 2025 (1.0% vs. 0.3%).
Navigating uncertainty
Besides AI, another term I was curious about is “uncertainty,” so I asked Butters for his take. He analyzed S&P 500 earnings calls (per quarter) in which the term “uncertainty” was cited at least once, going back to 2020. He found that, similar to the pattern seen with “tariff” citations, citations of “uncertainty” spiked in Q1 2025 but declined significantly over the following two quarters: the term was cited on 415 calls in Q1 2025, compared with 282 in Q2 and 201 in Q3.
Following President Donald Trump’s “Liberation Day” earlier this year, significant uncertainty emerged around the new administration’s economic and geopolitical agenda, Yuval Atsmon, CFO at McKinsey, recently told me. Atsmon explained that at the peak of uncertainty, his focus as a CFO was on identifying actions that would be helpful in any scenario. “The worst thing is inaction,” he added. Acting on what you can control builds resilience, he said.
Operating in uncertainty has seemingly become a constant, which may help explain why explicit mentions of the term have tapered off during earnings calls. While uncertainty often drives defensive moves, Atsmon emphasized the importance of revisiting long-standing strategies and seizing competitive opportunities.
Global AI spending is expected to climb in 2026, and it is likely that “AI” will remain a top term in Q4 earnings calls in January as companies discuss investment, margins, capex, and productivity.
Neil Berkley was promoted to CFO of Alector, Inc. (Nasdaq: ALEC), a clinical-stage biotechnology company. Berkley has served as Alector’s chief business officer (CBO) since March 2024 and added the role of interim CFO in June 2025. He is a biotech executive with more than two decades of experience leading corporate strategy, finance, business development, and operations across both early- and late-stage companies.
Caleb Noel was promoted to EVP and CFO of NFP, an Aon company, a property and casualty broker and benefits consultant. Noel has served in various corporate finance and operational roles during his 23-year career with NFP, most recently as SVP of finance and operations. He previously served as VP of finance for Scottish Holdings, a division of Scottish Re, and as an analyst in the investment banking division of Prudential Securities (now Wells Fargo & Company).
Big Deal
CFOs have a long-term focus when it comes to AI, according to research by RGP, a global professional services firm. The report, “The AI Foundational Divide: From Ambition to Readiness,” describes a finance landscape that is racing toward an AI-powered future yet constrained by issues such as fragile data foundations.
Although 66% of CFOs surveyed expect significant AI ROI within two years, only 14% report meaningful value today. However, optimism persists despite key obstacles to AI ROI, including deep structural barriers such as data trust issues (only 10% fully trust enterprise data), technical debt (86% say legacy systems limit AI readiness), and skills shortages that threaten to slow adoption.
The findings are based on insights from 200 U.S. CFOs at enterprises with more than $10 billion in annual revenue. Sectors include technology, health care, financial services, and CPG/retail.
Going deeper
A new episode of “This Week in Business,” a Wharton podcast, focuses on AI and technological evolution. Lynn Wu, a Wharton associate professor of operations, information and decisions, addresses the rise of transformative technologies and the cycles of tech bubbles throughout history. Wu discusses where AI fits within these cycles, describing it as a necessary phase of technological evolution that lays the groundwork for transformative advancements across industries.
Overheard
“In the end, consumers will win if courts and enforcers act based on evidence.”
—Satya Marar, a research fellow at the Mercatus Center at George Mason University, writes in a Fortune opinion piece titled “Netflix, Warner, Paramount and antitrust: Entertainment megadeal’s outcome must follow the evidence, not politics or fear of integration.” Marar specializes in competition, innovation, and governance, and is an AI and antitrust fellow at the Innovators Network.
New AI chips seem to be hitting the market at an ever-quicker pace as tech companies scramble for supremacy in the global arms race for computational power.
That raises the question: What happens to all those older-generation chips?
The AI stock boom has lost a lot of momentum in recent weeks due, in part, to worries that so-called hyperscalers aren’t correctly accounting for the depreciation in the hoard of chips they’ve purchased to power chatbots.
Michael Burry—the investor of Big Short fame who famously predicted the 2008 housing collapse—sounded the alarm last month when he warned AI-era profits are built on “one of the most common frauds in the modern era,” namely stretching the depreciation schedule. He estimated Big Tech will understate depreciation by $176 billion between 2026 and 2028.
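To make the mechanics concrete, here is a minimal back-of-the-envelope sketch (with hypothetical numbers, not Burry’s figures) of how stretching an assumed useful life lowers the annual straight-line depreciation expense booked against profits:

```python
# Hypothetical illustration of how stretching a depreciation schedule
# lowers the annual expense charged against profits (straight-line method).
# The dollar amounts below are made up for illustration only.

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread the purchase cost evenly over the assumed life."""
    return capex / useful_life_years

gpu_capex = 100e9  # hypothetical $100B spent on AI chips

short_life = annual_depreciation(gpu_capex, 3)  # 3-year life, closer to the chip upgrade cycle
long_life = annual_depreciation(gpu_capex, 6)   # stretched 6-year life

print(f"Annual expense over 3 years: ${short_life / 1e9:.1f}B")
print(f"Annual expense over 6 years: ${long_life / 1e9:.1f}B")
print(f"Yearly profit boost from stretching: ${(short_life - long_life) / 1e9:.1f}B")
```

The longer the assumed life, the smaller the annual charge, so near-term profits look healthier even though the same cash has already gone out the door.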
But according to a note last week from Alpine Macro, chip depreciation fears are overstated for three reasons.
First, analysts pointed out that software advances accompanying next-generation chips can also level up older-generation processors. For example, software can improve the performance of Nvidia’s five-year-old A100 chip by two to three times compared with its performance at launch.
Second, Alpine said the need for older chips remains strong amid rising demand for inference, the computation that runs when a chatbot responds to queries. In fact, inference demand will significantly outpace demand for AI training in the coming years.
“For inference, the latest hardware helps but is often not essential, so chip quantity can substitute for cutting-edge quality,” analysts wrote, adding Google is still running seven- to eight-year-old TPUs at full utilization.
Third, China continues to demonstrate “insatiable” demand for AI chips as its supply “lags the U.S. by several generations in quality and severalfold in quantity.” And even though Beijing has banned some U.S. chips, the black market will continue to serve China’s shortfalls.
Meanwhile, not all chips used in AI belong to hyperscalers. Even graphics processors contained in everyday gaming consoles could work.
A note last week from Yardeni Research pointed to “distributed AI,” which draws on unused chips in homes, crypto-mining servers, offices, universities, and data centers to act as global virtual networks.
While distributed AI can be slower than a cluster of chips housed in the same data center, its network architecture can be more resilient if a computer or a group of them fails, Yardeni added.
“Though we are unable to ascertain how many GPUs were being linked in this manner, Distributed AI is certainly an interesting area worth watching, particularly given that billions are being spent to build new, large data centers,” the note said.
Today, Amazon’s market cap is hovering around $2.38 trillion, and founder Jeff Bezos is one of the world’s richest men, worth $236.1 billion. But three decades ago, in 1995, getting the first million dollars in seed capital for Amazon was more grueling than any challenge that would follow. One year ago, at New York’s Dealbook Summit, Bezos told Andrew Ross Sorkin those early fundraising efforts were an absolute slog, with dozens of meetings with angel investors—the vast majority of which were “hard-earned no’s.”
“I had to take 60 meetings,” Bezos said, in reference to the effort required to convince angel investors to sink tens of thousands of dollars into his company. “It was the hardest thing I’ve ever done, basically.”
The structure was straightforward: Bezos said he offered 20% of Amazon at a $5 million valuation, and he eventually got around 20 investors to invest around $50,000 each. But out of the 60 meetings he took around that time, 40 investors said no, and those 40 “no’s” were particularly soul-crushing because each back-and-forth required “multiple meetings” and substantial effort before an answer came.
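As a quick sanity check on those figures (assuming, for illustration, that the $5 million is a post-money valuation), here is a small sketch showing how roughly 20 checks of about $50,000 add up to a 20% stake:

```python
# Hypothetical check of the Amazon seed-round math described above.
# Assumes the $5 million figure is a post-money valuation.

investors = 20           # roughly 20 angel investors
check_size = 50_000      # roughly $50,000 each
valuation = 5_000_000    # $5 million valuation

raised = investors * check_size   # total raised: $1,000,000
stake = raised / valuation        # fraction of the company sold

print(f"Raised: ${raised:,}")       # Raised: $1,000,000
print(f"Stake sold: {stake:.0%}")   # Stake sold: 20%
```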
Bezos said he had a hard time convincing investors that selling books over the internet was a good idea. “The first question was what’s the internet? Everybody wanted to know what the internet was,” Bezos recalled. Few investors had heard of the World Wide Web, let alone grasped its commercial potential.
That said, Bezos admitted brutal honesty with his potential investors may have played a role in getting so many rejections.
“I would always tell people I thought there was a 70% chance they would lose their investment,” he said. “In retrospect, I think that might have been a little naive. But I think it was true. In fact, if anything, I think I was giving myself better odds than the real odds.”
Bezos said getting those investors on board in the mid-90s was absolutely critical. “The whole enterprise could have been extinguished then,” he said.
You can watch Bezos’ full interview with Andrew Ross Sorkin below; he starts talking about the fundraising gauntlet around the 33-minute mark.
Google cofounder Sergey Brin thought retiring from Google in 2019 would mean quietly studying physics for days on end in cafés.
But when COVID hit soon after, he realized he may have made a mistake.
“That didn’t work because there were no more cafés,” he told students at Stanford University’s School of Engineering centennial celebration last week, Business Insider reported.
The transition from president of Google parent company Alphabet to 40-something retiree wasn’t as smooth as he had imagined; he said he was soon “spiraling” and “kind of not being sharp” after stepping away from busy corporate life.
So when Google began allowing small numbers of employees back into the office, Brin tagged along and put his efforts into what would become Google’s AI model, Gemini. Even though he is the world’s fourth-richest man, with a net worth of $247 billion, retirement wasn’t for him, he said.
“To be able to have that technical creative outlet, I think that’s very rewarding,” Brin said. “If I’d stayed retired, I think that would’ve been a big mistake.”
By 2023, Brin was back to work in a big way, visiting the company’s office three to four times a week, the Wall Street Journal reported, working with researchers and holding weekly discussions with Google employees about new AI research. He also reportedly had a hand in some personnel decisions, like hiring.
Skip forward to 2025, and Brin’s plans for a peaceful retirement of quiet study are out the window. In February, he made waves with an internal memo in which, despite Google’s three-day in-office policy, he recommended that Google employees come into the company’s Mountain View, Calif., offices at least every weekday, and said that 60 hours a week was the “sweet spot” of productivity.
Brin’s newfound efforts at work may have been necessary as OpenAI’s release of ChatGPT in 2022 caught the tech giant off guard, after it had led the field of AI research with DeepMind and Google Brain for years.
To be sure, Google for its part has been gaining ground in the AI race. Analysts raved last month about Gemini 3, the company’s latest update to its LLM, and Google’s stock is up about 8% since its release. Meanwhile, OpenAI earlier this month declared a “code red,” its highest alert level, to improve ChatGPT.
Brin added in the talk at Stanford that Google has an advantage in the AI arms race precisely because of the foundation it laid over years through its neural network research, its custom AI chips, and its data center infrastructure.