Business

Bubble or not, the AI backlash is validating one critic’s warnings

First it was the release of GPT-5, which OpenAI "totally screwed up," according to Sam Altman. Then Altman followed that up by saying the B-word at a dinner with reporters: "When bubbles happen, smart people get overexcited about a kernel of truth," the OpenAI CEO said, as reported by The Verge. Then came the sweeping MIT survey that put a number on what so many people seem to be feeling: a whopping 95% of generative AI pilots at companies are failing.

A tech sell-off ensued, as rattled investors sent the value of the S&P 500 down by $1 trillion. Given the increasing dominance of that index by tech stocks that have largely transformed into AI stocks, it was a sign of nerves that the AI boom was turning into dotcom bubble 2.0. To be sure, fears about the AI trade aren’t the only factor moving markets, as evidenced by the S&P 500 snapping a five-day losing streak on Friday after Jerome Powell’s quasi-dovish comments at Jackson Hole, Wyoming, as even the hint of openness from the Fed chair toward a September rate cut set markets on a tear.

Gary Marcus has been warning of the limits of large language models (LLMs) since 2019, and of a potential bubble and problematic economics since 2023. His words carry particular weight. The cognitive scientist turned AI researcher has been active in the machine learning space since 2015, when he founded Geometric Intelligence. That company was acquired by Uber in 2016, and Marcus left shortly afterward, working at other AI startups while offering vocal criticism of what he sees as dead ends in the AI space.

Still, Marcus doesn’t see himself as a “Cassandra,” and he’s not trying to be, he told Fortune in an interview. Cassandra, a figure from Greek tragedy, uttered accurate prophecies but wasn’t believed until it was too late. “I see myself as a realist and as someone who foresaw the problems and was correct about them.”

Marcus attributes the wobble in markets to GPT-5 above all. It’s not a failure, he said, but it’s “underwhelming,” a “disappointment,” and that’s “really woken a lot of people up. You know, GPT-5 was sold, basically, as AGI, and it just isn’t,” he added, referencing artificial general intelligence, a hypothetical AI with human-like reasoning abilities. “It’s not a terrible model, it’s not like it’s bad,” he said, but “it’s not the quantum leap that a lot of people were led to expect.”

Marcus said this shouldn’t be news to anyone paying attention, as he argued back in 2022 that “deep learning is hitting a wall.” To be sure, Marcus has been wondering openly on his Substack about when the generative AI bubble will deflate. He told Fortune that “crowd psychology” is definitely at work, and that he thinks every day about the quote attributed to John Maynard Keynes: “The market can stay irrational longer than you can stay solvent.” He also thinks of Looney Tunes’ Wile E. Coyote chasing the Road Runner off the edge of a cliff and hanging in midair, before plummeting back to earth.

“That’s what I feel like,” Marcus says. “We are off the cliff. This does not make sense. And we get some signs from the last few days that people are finally noticing.”

Building warning signs

The bubble talk began heating up in July, when Apollo Global Management’s chief economist, Torsten Slok, widely read and influential on Wall Street, issued a striking calculation while falling short of declaring a bubble. “The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” he wrote, warning that the forward P/E ratios and staggering market capitalizations of companies such as Nvidia, Microsoft, Apple, and Meta had “become detached from their earnings.”

In the weeks since, the disappointment of GPT-5 was an important development, but not the only one. Another warning sign is the massive amount of spending on data centers to support all the theoretical future demand for AI use. Slok has tackled this subject as well, finding that data center investment contributed as much to GDP growth over the first half of 2025 as consumer spending did, which is notable since consumer spending makes up 70% of GDP. (The Wall Street Journal’s Christopher Mims had offered the calculation weeks earlier.) Finally, on August 19, former Google CEO Eric Schmidt co-authored a widely discussed New York Times op-ed arguing that “it is uncertain how soon artificial general intelligence can be achieved.”

This is a significant about-face, according to political scientist Henry Farrell, who argued in the Financial Times in January that Schmidt was a key voice shaping the “New Washington Consensus,” predicated in part on AGI being “right around the corner.” On his Substack, Farrell said Schmidt’s op-ed shows that his prior set of assumptions is “visibly crumbling away,” while caveating that he had been relying on informal conversations with people he knew at the intersection of D.C. foreign policy and tech policy. Farrell’s title for that post: “The twilight of tech unilateralism.” He concluded: “If the AGI bet is a bad one, then much of the rationale for this consensus falls apart. And that is the conclusion that Eric Schmidt seems to be coming to.”

Finally, over the summer of 2025 the vibe shifted into a mounting AI backlash. Darrell West warned in a Brookings commentary in May that the tide of both public and scientific opinion would soon turn against AI’s masters of the universe. Soon after, Fast Company predicted the summer would be full of “AI slop.” By early August, Axios had identified the slang “clanker” being applied widely to AI mishaps, particularly in customer service gone awry.

History says: short-term pain, long-term gain

John Thornhill of the Financial Times offered some perspective on the bubble question, advising readers to brace themselves for a crash but to prepare for a future “golden age” of AI nonetheless. He highlights the data center buildout: a staggering $750 billion investment from Big Tech over 2024 and 2025, part of a global rollout projected to hit $3 trillion by 2029. Thornhill turns to financial historians for some comfort and some perspective. Over and over, the history shows that this type of frenzied investment typically triggers bubbles, dramatic crashes, and creative destruction—but that durable value is eventually realized.

He notes that Carlota Perez documented this pattern in Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. Thornhill casts AI as the latest technological revolution to follow the pattern Perez traced back to the late 18th century, a pattern that left the modern economy with railroad infrastructure and personal computers, among other things. Each revolution had a bubble and a crash at some point. Thornhill didn’t cite him in this particular column, but Edward Chancellor documented similar patterns in his classic Devil Take the Hindmost, a book notable not just for its discussion of bubbles but for predicting the dotcom bubble before it happened.

Owen Lamont of Acadian Asset Management cited Chancellor in November 2024, arguing that a key bubble milestone had been reached: an unusually large number of market participants saying that prices are too high, yet insisting they’re likely to rise further.

Wall Street banks, by and large, are not calling this a bubble. Morgan Stanley recently released a note projecting huge efficiencies ahead for companies as a result of AI: $920 billion per year for the S&P 500. UBS, for its part, concurred with the caution flagged in the news-making MIT research. It warned investors to expect a period of “capex indigestion” accompanying the data center buildout, but it also maintained that AI adoption is expanding far beyond expectations, citing growing monetization from OpenAI’s ChatGPT, Alphabet’s Gemini, and AI-powered CRM systems.

Bank of America Research wrote a note in early August, before the launch of GPT-5, casting AI as part of a worker-productivity “sea change” that will drive an ongoing “innovation premium” for S&P 500 firms. Head of U.S. equity strategy Savita Subramanian essentially argued that the inflation wave of the 2020s taught companies to do more with less, to turn people into processes, and that AI will turbocharge this. “I don’t think it’s necessarily a bubble in the S&P 500,” she told Fortune in an interview, before adding, “I think there are other areas where it’s becoming a little bit bubble-like.”

Subramanian mentioned smaller companies and potentially private lending as areas “that potentially have re-rated too aggressively.” She’s also concerned about the risk of companies diving into data centers to too great an extent, noting that this represents a shift back toward an asset-heavier approach, instead of the asset-light approach that increasingly distinguishes top performers in the U.S. economy.

“I mean, this is new,” she said. “Tech used to be very asset-light and just spent money on R&D and innovation, and now they’re spending money to build out these data centers,” adding that she sees it as potentially marking the end of their asset-light, high-margin existence and basically transforming them into “very asset-intensive and more manufacturing-like than they used to be.” From her perspective, that warrants a lower multiple in the stock market. When asked if that is tantamount to a bubble, if not a correction, she said “it’s starting to happen in places,” and she agrees with the comparison to the railroad boom.

The math and the ghost in the machine

Gary Marcus also cited basic math as a reason for concern, with nearly 500 AI unicorns collectively valued at $2.7 trillion. “That just doesn’t make sense relative to how much revenue is coming [in],” he said. Marcus noted that OpenAI reported $1 billion in revenue in July but still isn’t profitable. Speculating that OpenAI holds roughly half the AI market, he offered a rough calculation: annualized, that implies about $25 billion a year in revenue for the sector, “which is not nothing, but it costs a lot of money to do this, and there’s trillions of dollars [invested].”

So if Marcus is correct, why haven’t people been listening to him for years? He said he’s been warning about this for years, too, calling it the “gullibility gap” in his 2019 book Rebooting AI and arguing in The New Yorker in 2012 that deep learning was a ladder that wouldn’t reach the moon. For the first 25 years of his career, Marcus trained and practiced as a cognitive scientist, and learned about the “anthropomorphization people do. … [They] look at these machines and make the mistake of attributing to them an intelligence that is not really there, a humanness that is not really there, and they wind up using them as a companion, and they wind up thinking that they’re closer to solving these problems than they actually are.” He thinks the bubble has inflated to its current extent in large part because of the human impulse to project ourselves onto things, something a cognitive scientist is trained not to do.

These machines might seem like they’re human, but “they don’t actually work like you,” Marcus said, adding, “this entire market has been based on people not understanding that, imagining that scaling was going to solve all of this, because they don’t really understand the problem. I mean, it’s almost tragic.”

Subramanian, for her part, said she thinks “people love this AI technology because it feels like sorcery. It feels a little magical and mystical … the truth is it hasn’t really changed the world that much yet, but I don’t think it’s something to be dismissed.” She’s also become quite taken with it herself: “I’m already using ChatGPT more than my kids are. I mean, it’s kind of interesting to see this. I use ChatGPT for everything now.”




Business

Why the worst leaders sometimes rise the fastest

History is crowded with CEOs who have flamed out in very public ways. Yet when the reckoning arrives, the same question often lingers: How did this person keep getting promoted? In corporate America, the phenomenon is known as “failing up,” the steady rise of executives whose performance rarely matches their trajectory. Organizational psychologists say it’s not an anomaly. It’s a feature of how many companies evaluate leadership.

At the core is a well-documented bias toward confidence over competence. Studies consistently show that people who speak decisively, project certainty, and take credit for wins—whether earned or not—are more likely to be perceived as leadership material. In ambiguous environments, boards and senior managers often mistake boldness for ability. As long as a leader can narrate failure convincingly—blaming market headwinds, legacy systems, or uncooperative teams—their upward momentum may continue.

Another driver is asymmetric accountability. Senior executives typically oversee vast, complex systems where outcomes are hard to tie directly to individual decisions. When results are good, credit flows upward. When results are bad, blame diffuses downward, and middle managers, project leads, and market conditions become convenient shock absorbers. This allows underperforming leaders to survive long enough to secure their next promotion.

Then there’s the mobility illusion. In many industries, frequent job changes are read as ambition and momentum rather than warning signs. An executive who leaves after short, uneven tenures can reframe each exit as a “growth opportunity” or a strategic pivot. Recruiters and boards, under pressure to fill top roles quickly, often rely on résumé signals, like brand-name firms, inflated titles, and elite networks, rather than deep performance audits.

Ironically, early visibility can also accelerate failure upward. High-profile roles magnify both success and failure, but they also increase name recognition. An executive who runs a troubled division at a global firm may preside over mediocre results, yet emerge with a reputation as a “big-company leader,” making them attractive for a CEO role elsewhere.

The reckoning usually comes only at the top. As CEO, the buffers disappear. There is no one left to blame, and performance is judged in the blunt language of earnings, stock price, profitability, or layoffs. The traits that once fueled ascent, such as overconfidence, risk-shifting, and narrative control, become liabilities under full scrutiny.

The central lesson for aspiring CEOs is that the very system that rewards confidence, visibility, and narrative control on the way up often masks weak execution until the top job strips those protections away. Future leaders who want to avoid “failing upward” must deliberately build careers grounded in verifiable results and direct ownership of outcomes because at the CEO level, there is no narrative strong enough to substitute for performance.

Ruth Umoh
ruth.umoh@fortune.com

Smarter in seconds

Big biz buy-in. Anthropic is all in on ‘AI safety’—and that’s helping the $183 billion startup win over big business

Old guard upgrade. How the bank founded by Alexander Hamilton is transforming for the future of finance

Pressure test. Inside the Fortune 500 CEO pressure cooker: surviving is harder than ever and requires an ‘odd combination’ of traits

Rank racing. The one-upmanship driving CEOs

Leadership lesson

Anthropic’s Dario Amodei on when a startup gets too big to know all employees: “It’s an inevitable part of growth.”

News to know

Investors are questioning OpenAI’s profitability amid its massive spending while increasingly viewing Alphabet as the deeper-pocketed winner in the AI race. Fortune

Trump warned that Netflix’s $72 billion bid for Warner Bros. Discovery could face antitrust scrutiny, suggesting it would create an overly dominant force in streaming. Fortune

An etiquette camp is trying to help Silicon Valley shed its sloppy image by teaching tech elites how to dress and behave as their influence grows. WaPo

IBM is reportedly in advanced talks to buy data-infrastructure firm Confluent for about $11 billion, bolstering its AI data capabilities. WSJ

Even as women reach top roles in politics and business at record levels, public confidence in their leadership is stagnating or declining. Bloomberg

Terence “Bud” Crawford, the undefeated 38-year-old boxing champion, has earned more than $100 million and even turned Warren Buffett into a fan. Forbes

Big Tech leaders now warn that artificial intelligence is advancing to the point where it could begin replacing even CEOs, reshaping the very top of corporate leadership. WSJ

This is the web version of the Fortune Next to Lead newsletter, which offers strategies on how to make it to the corner office. Sign up for free.




Business

The workforce is becoming AI-native. Leadership has to evolve

One of the most insightful conversations I have had recently about artificial intelligence was not with policymakers or peers. It was with a group of Nokia early-careers talents in their early 20s. What stood out was their impatience. They wanted to move faster in using AI to strengthen their innovation capabilities. 

That makes perfect sense. This generation began university when ChatGPT launched in 2022. They now account for roughly half of all ChatGPT usage, applying it to everything from research to better decision-making in knowledge-intensive work. 

Some people worry that AI-driven hiring slowdowns are disproportionately impacting younger workers. Yet the greater opportunity lies in a new generation of AI-native professionals entering the workforce equipped for how technology is transforming roles, teams, and leadership.

Better human connectivity 

One of the first tangible benefits of generative AI is that it allows individual contributors to take on tasks once handled by managers. Research by Harvard Business School found that access to Copilot increased employee productivity by 5% in core tasks. As productivity rises and hierarchies flatten, early-career employees using AI are empowered to focus on outcomes, learn faster, and contribute at a higher level.

Yet personal productivity is not the real measure of progress. What matters most is how well teams perform together. Individual AI gains only create business impact when they align with team goals and that requires greater transparency, alignment, and accountability.

At Nokia, we ensure that everyone has clear, measurable goals that support their teams’ objectives. Leaders need to be open about their goals with their managers and with their reports. And everyone means everyone. Me included. That way, goals are not only about recognition and reward. They become an ongoing dialogue between leaders and their teams. It’s how we’re building a continuous learning culture that thrives on feedback and agility, both essential in the AI era.

Humans empowered with AI, not humans versus AI

AI’s true power lies in augmenting human skills. Every role has a core purpose – whether in strategy, creativity, or technical problem-solving – and AI helps people focus on that. 

During the COVID-19 pandemic, more than 60 chatbots were deployed in 30 countries to handle routine public health queries, freeing up healthcare workers to focus on critical patient care. Most health services never looked back. 

The same pattern applies inside companies. Some of the routine tasks given to new hires are drudge work and not a learning experience. AI gives us a chance to rethink the onboarding, training, and career development process.

Take an early-career engineer. Onboarding can be a slow process of documentation and waiting for reviews. AI can act as an always-on coach that gives quick guidance and helps people ramp up. Mentors then spend less time on the basics and more time helping engineers solve real problems. Engineers can also have smart agents testing their designs, ideas, and simulating potential outcomes. In this way, AI strengthens, rather than substitutes, the human connection between junior engineers and their mentors and helps unlock potential faster.

Encourage experimentation and entrepreneurship 

During the two decades of the Internet Supercycle (1998-2018), start-ups created trillions of dollars in economic value and roughly half of all new jobs in OECD countries.

As AI lowers the barriers to launching and scaling ventures, established companies must find new ways to encourage experimentation, nurture innovation through rapid iterations, and give employees the chance to commercialize and scale their ideas.

There is a generational shift that increases the urgency: more than 60% of Gen Z Europeans hope to start their own businesses within five years, according to one survey. To secure this talent, large organizations must provide the attributes that make entrepreneurship attractive. Empowering people with agility, autonomy, and faster decision-making creates an edge in attracting and keeping top talent.

At Nokia, our Technology and AI Organization is designed to strengthen innovation capabilities, encourage entrepreneurial thinking, and give teams the support to turn ideas into real outcomes.

More coaching, less managing 

Sporting analogies are often overused in business as the two worlds don’t perfectly align, yet the evolution of leadership in elite football offers useful lessons. Traditionally, managers oversaw everything on and off the pitch. Today, head coaches focus on building the right team and culture to win. 

Luis Enrique, the head coach of Paris Saint-Germain, last season’s UEFA Champions League winner, exemplifies this shift. He transformed a team of stars into a star team, while also evolving his coaching style, elevating both individual and collective potential.

Of course, CEOs must switch between both roles (as I said, the worlds don’t perfectly align) – setting vision and strategy while also cultivating the right team and culture to succeed. AI can help leaders do both with more focus. It gives us quicker insight into what is working, what is not, and where teams need support.

I have been testing these tools with my own leadership team. We are using generative AI to help us evaluate our decisions and to understand how we work together. It has revealed patterns we might have missed, and it has helped us get to the real issues faster. It does not replace judgment or experience. It supports them.

Yet the core of leadership does not change. AI cannot build trust. It cannot set expectations. It cannot create a culture that learns, improves, and takes responsibility. That still comes from people. And in a world shaped by AI, the leaders who succeed will be the ones who coach, who listen, and who help teams move faster with confidence.

Nokia’s technology connects intelligence around the world. Inside the company, connecting intelligence is about how people work together. It means giving teams the tools, support and culture they need to grow and perform with confidence. Connecting intelligence is how teams win.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




Business

Procurement execs often don’t understand the value of good design, experts say


Behind every intricately designed hotel or restaurant is a symbiotic collaboration between designer and maker.

But in reality, firms want to build more with less—and even though visions are created by designers, they don’t always get to see them through to fruition. Instead, intermediaries may be placed in charge of procurement, overseeing the financial costs of executing designs.

“The process is not often as linear as we [designers] would like it to be, and at times we even get slightly cut out, and something comes out on the other side that wasn’t really what we were expecting,” said Tina Norden, a partner and principal at design firm Conran and Partners, at the Fortune Brainstorm Design forum in Macau on Dec. 2.

“To have a better quality product, communication is very much needed,” added Daisuke Hironaka, the CEO of Stellar Works, a furniture company based in Shanghai. 

Yet those tasked with procurement are often “money people” who may not value good design—instead forsaking it to cut costs. More education on the business value of quality design is needed, Norden argued.

When one builds something, she said, there is both a capital investment and a lifecycle cost. “If you’re spending a bit more money on good quality furniture, flooring, whatever it might be, arguably, it should last a lot longer, and so it’s much better value.”

Investing in well-designed products is also better for the environment, Norden added, as they don’t have to be replaced as quickly.

Attempts to cut costs may also backfire in the long run, said Hironaka, as business owners may have to foot higher maintenance bills if products are of poor design and make.

AI in interior and furniture design

Though designers have largely been slow adopters of AI, some industry figures like Hironaka are attempting to integrate it into their teams’ workflows.

AI can help accelerate the process of designing bespoke furniture, Hironaka explained, especially for large-scale projects like hotels.

A team may take a month to 45 days to create drawings for 200 pieces of custom-made furniture, Hironaka said, but AI can speed up this process. “We designed a lot in the past, and if AI can use these archives, study [them] and help to do the engineering, that makes it more helpful for designers.”

Yet designers can rest easy as AI won’t ever be able to replace the human touch they bring, Norden said. 

“There is something about the human touch, and about understanding how we like to use our spaces, how we enjoy space, how we perceive spaces, that will always be there—but AI should be something that can assist us [in] getting to that point quicker.”

She added that creatives can instead view AI as a tool for tasks that are time-consuming but “don’t need ultimate creativity,” like researching and three-dimensionalizing designs.

“As designers, we like to procrastinate and think about things for a very long time to get them just right, [but] we can get some help in doing things faster.”



