AI startup valuations are doubling and tripling within months as back-to-back funding rounds fuel a stunning growth spurt

Everyone keeps asking: “Are we in an AI bubble?” But just as often, I hear a different question, delivered with a flash of recognition: “Wait—they raised another round?”

This year, a handful of top AI startups—some now so large that calling them “startups” feels vaguely ironic—have raised not just one giant round of funding, but two or more. And with each round, the startups’ valuations are doubling, sometimes even tripling, to reach astonishing new heights.

Take Anthropic. In March it raised a $3.5 billion Series E at a $61.5 billion valuation. Just six months later, in September, it pulled in a $13 billion Series F round. New valuation: $183 billion.

OpenAI, the startup that ignited the AI boom with ChatGPT, remains the pacesetter, fetching an unprecedented $500 billion valuation in a tender offer last month. That’s up from the $300 billion valuation it garnered during a March funding round, and the $157 billion valuation it carried into this year following an October 2024 funding round.

In other words, in the 12 months between October 2024 and October 2025, OpenAI’s valuation increased by roughly $29 billion every month—almost $1 billion per day.
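For readers who want to check that arithmetic, the back-of-the-envelope math works out as follows, using only the valuations cited above:

\[
\frac{\$500\text{ billion} - \$157\text{ billion}}{12\text{ months}} \approx \$28.6\text{ billion per month}, \qquad \frac{\$343\text{ billion}}{365\text{ days}} \approx \$0.94\text{ billion per day}.
\]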

It’s not just the LLM giants. Further down (but still high on) the AI food chain, recruiting startup Mercor raised a $100 million Series B at a $2 billion valuation in February—then raised another $350 million by October as its valuation leapt to $10 billion.

Well over a dozen startups have raised two or more funding rounds this year with escalating valuations, including Cursor, Reflection AI, OpenEvidence, Lila Sciences, Harmonic, Fal, Abridge, and Doppel. Some, like Harvey and Databricks, are currently reported to be in their third rounds. 

These valuation growth spurts, especially at a scale of billions and tens of billions of dollars, are extraordinary and raise a number of dizzying questions, beginning with: Why is this even happening? Is the phenomenon a reflection of the strength of these startups, or the unique business opportunity presented by the AI revolution, or a bit of both? And how healthy is this kind of thing—what risks are the startups, and the broader market, taking on by raising so much capital so fast and pumping valuations up so quickly? 

The specter of 2021

To hear some industry insiders explain it, there’s more to the current phenomenon than frothy market conditions. While the ZIRP, or zero interest rate policy, era that peaked in 2021 saw its share of startups raising multiple back-to-back rounds (cybersecurity startup Wiz was valued at $1.7 billion in its May 2021 round, and when it raised $250 million that October its valuation sprang to $6 billion), the underlying dynamics were completely different back then (not least because ChatGPT hadn’t launched yet).

Tom Biegala, founding partner at Bison Ventures, said that he doesn’t believe this is anything like 2021, when “companies would raise a round… not because they’ve made any sort of real progress or any technical or commercial milestones.” Investor enthusiasm was so high and capital flowed so effortlessly back then that the perception of momentum was often enough to draw more than one round of capital in a year, Biegala said.

And for every successful Wiz, there were numerous ZIRP-era startups that also raised two or more rounds within 12 months and have since struggled (like grocery delivery app Jokr, NFT marketplace OpenSea, and telehealth startup Cerebral).

Terrence Rohan, managing director at Otherwise Fund, says today’s multi-round startups are demonstrating real business traction: “The revenue growth we’re seeing in select companies is without precedent. In certain cases, one could argue that we are dealing with a new phenotype of startup,” Rohan said via email.

Many of today’s high-flying AI startups are putting up impressive numbers, even if we should be suspicious of ARR figures at this moment. You have young companies like vibe coding startup Lovable, which went from zero to $17 million in ARR in three months, and conversational AI startup Decagon, which hit “seven figures” in ARR in its first half-year. Cursor is perhaps the most famous of all: The developer-focused AI coding tool went from zero to $100 million in ARR in one year.

Felicis Ventures founder and managing partner Aydin Senkut describes the back-to-back fundings as a sign of a high velocity market where the costs of being wrong are higher than ever. “The prize now goes to those who identify and support these outliers earliest,” Senkut says, “because being in the wrong sector or too late may not just reduce returns, it may zero them out.”

“The prize is so big”

While broad excitement over generative AI is fueling the series of funding rounds, startups pushing the boundaries in certain verticals are among the biggest beneficiaries of the trend.

Cursor, the buzzy AI coding startup, finished 2024 with a healthy $2.6 billion valuation. Its valuation jumped to $10 billion in June 2025, when Cursor raised $900 million in funding. This month, Cursor announced that it’s now worth $29.3 billion, as it scooped up $2.3 billion in additional capital from investors including Accel, Thrive, and Andreessen Horowitz.

Harvey, an AI startup aimed at the legal industry, raised a total of $600 million in two separate funding rounds within the first six months of 2025, lifting its valuation first to $3 billion and then to $5 billion. In October, several outlets, including Bloomberg and Forbes, reported that Harvey had raised yet another round, this one giving the startup an $8 billion valuation.

Each is representative of its sector: Both coding and legal AI are booming right now. Legal AI company Norm AI raised $50 million from Blackstone in November—shortly after closing a $48 million Series B in March. Likewise, in coding, Lovable raised its $15 million seed round in February, then followed up with a $200 million Series A at a $1.8 billion valuation by July.

Healthcare AI is also hot: OpenEvidence raised a $210 million Series B in July at a $2.5 billion valuation, only to follow up in October with another $200 million at a $6 billion valuation. Abridge (last valued at $5.3 billion) and Hippocratic AI (last valued at $3.5 billion) fall into this category as well.

Max Altman, Saga Ventures cofounder and managing partner, says the trend isn’t simply the result of exuberant startup investors throwing money around. For some startups, rapid-fire fundraising is becoming part of the strategic playbook—an effective means of taking on competition. 

“What these companies are doing is, very smartly, salting the Earth for their competitors,” Altman told Fortune. “The prize is so big now, with so many people going after it. So, a really amazing strategy is to suck up all the capital, have the best funds invest in your company so they’re not investing in your competitors. Stripe did this really early on, it was smart—you become this force of nature that’s too big to fail.”

That said, not everyone attracting massive capital is a winner waiting in the wings.

When the foundation isn’t set

If raising multiple rounds quickly can be a strategic advantage, it can also become a dangerous liability. Or, as Andreessen Horowitz general partner Jennifer Li puts it, these back-to-back fundraisings can go right—and they can go wrong.

“They go right when the capital directly fuels product market fit and execution,” Li said via email. “For example, when the company uses new resources to expand infrastructure, improve models, or meet outsized demand.”

So when do they go wrong?

“When the focus shifts from building to fundraising before the foundation is set,” said Li.

Like a skyscraper built on unstable ground, startups that can’t support overly lofty valuations risk a painful comedown. The valuations of some hyped AI startups may look untenable (perhaps even unhinged) in the public markets, should the startups make it that far. The resulting recalibration manifests itself in the plummeting value of employees’ equity, creating talent retention and recruiting risks. Many of 2025’s biggest IPOs, such as Chime and Klarna, priced at decisive valuation cuts from their 2021 highs.

Within the private markets, rapid rounds of fundraising mean cap tables can quickly grow complex as founder stakes dilute. And then there’s perhaps the biggest risk of all: that some of these excessively funded startups end up with wild burn rates they can’t roll back if times get tough and capital dries up. That can lead to layoffs, or worse.

Ben Braverman, Altman’s Saga cofounder and managing partner, said this is ultimately a story about both the concentration of capital in AI and about how VCs have evolved their strategies in the aftermath of 2021. Venture capital has always been about the power law—the idea that a few big winners generate most of the returns—but that’s become especially true as VCs chase consensus favorites more than ever.

“The story of 2021 to now, on all sides of the market, is a flight to quality,” said Braverman. “Seemingly VCs made the same decision over the last cycle: ‘We’re going to put the majority of our dollars into a few brand names we really trust.’ And obviously, that has its own consequences.”

One of those consequences is that more capital than ever is flowing into a limited set of AI darlings. And while term sheets are being signed at a feverish pace today, even bullish investors acknowledge that, like any cycle, there will be winners and losers.

“In this type of environment, investors sometimes fall into a trap where they think every new AI model company is going to look like OpenAI or Anthropic,” Bison Ventures’ Biegala told Fortune.

“They’re assigning big valuations to those businesses, and it’s an option value on those companies becoming the next OpenAI or Anthropic,” Biegala said. But, he notes, “a lot of them are not necessarily going to grow into those valuations…and you’re going to see some losses for sure.”



Billionaire Marc Benioff challenges the AI sector: ‘What’s more important to us, growth or our kids?’

Imagine it is 1996. You log on to your desktop computer (which takes several minutes to start up), listening to the rhythmic screech and hiss of the modem connecting you to the World Wide Web. You navigate to a clunky message board—on a service like AOL or Prodigy—to discuss your favorite hobbies, from Beanie Babies to the newest mixtapes.

At the time, a little-known law called Section 230 of the Communications Decency Act had just been passed. The law—whose key provision runs just 26 words—created the modern internet. It was intended to shield “good samaritan” website moderators from liability, placing the responsibility for content on individual users rather than the host company.

Today, the law remains largely the same despite evolutionary leaps in internet technology and pushback from critics, now among them Salesforce CEO Marc Benioff. 

In a conversation at the World Economic Forum in Davos, Switzerland, on Tuesday, titled “Where Can New Growth Come From?” Benioff railed against Section 230, saying the law prevents tech giants from being held accountable for the dangers AI and social media pose.

“Things like Section 230 in the United States need to be reshaped because these tech companies will not be held responsible for the damage that they are basically doing to our families,” Benioff said in the panel conversation, which also included Axa CEO Thomas Buberl, Alphabet President Ruth Porat, Emirati government official Khaldoon Khalifa Al Mubarak, and Bloomberg journalist Francine Lacqua.

As a growing number of children in the U.S. log onto AI and social media platforms, Benioff said the legislation threatens the safety of kids and families. The billionaire asked, “What’s more important to us, growth or our kids? What’s more important to us, growth or our families? Or, what’s more important, growth or the fundamental values of our society?”

Section 230 as a shield for tech firms

Tech companies have invoked Section 230 as a legal defense when dealing with issues of user harm, including in the 2019 case Force v. Facebook, where the court ruled the platform wasn’t liable for algorithms that connected members of Hamas after the terrorist organization used the platform to encourage murder in Israel. The law could also shield tech companies from liability for harms AI platforms pose, including the production of deepfakes and AI-generated sexual abuse material.

Benioff has been a vocal critic of Section 230 since 2019 and has repeatedly called for the legislation to be abolished. 

In recent years, Section 230 has come under increasing public scrutiny as both Democrats and Republicans have grown skeptical of the legislation. In 2019 the Department of Justice under President Donald Trump pursued a broad review of Section 230. In May 2020, President Trump signed an executive order limiting tech platforms’ immunity after Twitter added fact-checks to his tweets. And in 2023, the U.S. Supreme Court heard Gonzalez v. Google, though it decided the case on other grounds, leaving Section 230 intact.

In an interview with Fortune in December 2025, Dartmouth business school professor Scott Anthony voiced concern over the “guardrails” that were—and weren’t—being put in place around AI. When cars were first invented, he pointed out, it took time for speed limits and driver’s licenses to follow. Now with AI, “we’ve got the technology, we’re figuring out the norms, but the idea of, ‘Hey, let’s just keep our hands off,’ I think it’s just really bad.”

The decision to exempt platforms from liability, Anthony added, “I just think that it’s not been good for the world. And I think we are, unfortunately, making the mistake again with AI.”

For Benioff, the fight to repeal Section 230 is more than a push to regulate tech companies; it’s a reallocation of priorities toward safety and away from unfettered growth. “In the era of this incredible growth, we’re drunk on the growth,” Benioff said. “Let’s make sure that we use this moment also to remember that we’re also about values as well.”



Palantir CEO says AI “will destroy” humanities jobs but there will be “more than enough jobs” for people with vocational training

Some economists and experts say that critical thinking and creativity will be more important than ever in the age of artificial intelligence (AI), when a robot can do much of the heavy lifting on coding or research. Take Benjamin Shiller, the Brandeis economics professor who recently told Fortune that a “weirdness premium” will be valued in the labor market of the future. Alex Karp, the Palantir cofounder and CEO, isn’t one of these voices.

“It will destroy humanities jobs,” Karp said when asked how AI will affect jobs in conversation with BlackRock CEO Larry Fink at the World Economic Forum annual meeting in Davos, Switzerland. “You went to an elite school and you studied philosophy — I’ll use myself as an example — hopefully you have some other skill, that one is going to be hard to market.”

Karp attended Haverford College, a small, elite liberal arts college outside his hometown of Philadelphia. He earned a J.D. from Stanford Law School and a Ph.D. in philosophy from Goethe University in Germany. He spoke about his own experience getting his first job. 

Karp told Fink that he remembered thinking about his own career, “I’m not sure who’s going to give me my first job.” 

The answer echoed past comments Karp has made about certain types of elite college graduates who lack specialized skills.

“If you are the kind of person that would’ve gone to Yale, classically high IQ, and you have generalized knowledge but it’s not specific, you’re effed,” Karp said in an interview with Axios in November. 

Not every CEO agrees with Karp’s assessment that humanities degrees are doomed. BlackRock COO Robert Goldstein told Fortune in 2024 that the company was recruiting graduates who studied “things that have nothing to do with finance or technology.” 

McKinsey CEO Bob Sternfels recently said in an interview with Harvard Business Review that the company is “looking more at liberal arts majors, whom we had deprioritized, as potential sources of creativity,” to break out of AI’s linear problem-solving. 

Karp has long been an advocate for vocational training over traditional college degrees. Last year, Palantir launched a Meritocracy Fellowship, offering high school students a paid internship with a chance to interview for a full-time position at the end of four months. 

The company criticized American universities for “indoctrinating” students and having “opaque” admissions that “displaced meritocracy and excellence” in its announcement of the fellowship.

“If you did not go to school, or you went to a school that’s not that great, or you went to Harvard or Princeton or Yale, once you come to Palantir, you’re a Palantirian—no one cares about the other stuff,” Karp said during a Q2 earnings call last year.

“I think we need different ways of testing aptitude,” Karp told Fink. He pointed to a former police officer who attended a junior college and now manages the U.S. Army’s MAVEN system, a Palantir-made AI tool that processes drone imagery and video.

“In the past, the way we tested for aptitude would not have fully exposed how irreplaceable that person’s talents are,” he said. 

Karp also gave the example of technicians building batteries at a battery company, saying those workers are “very valuable if not irreplaceable because we can make them into something different than what they were very rapidly.”

He said what he does all day at Palantir is “figuring out what is someone’s outlier aptitude. Then, I’m putting them on that thing and trying to get them to stay on that thing and not on the five other things they think they’re great at.” 

Karp’s comments come as more employers report a gap between the skills applicants offer and what they are looking for in a tough labor market. The unemployment rate for young workers ages 16 to 24 hit 10.4% in December, and unemployment is rising among college graduates as well. Karp isn’t too worried.

“There will be more than enough jobs for the citizens of your nation, especially those with vocational training,” he said. 



AI is boosting productivity. Here’s why some workers feel a sense of loss

Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…Why some workers feel a sense of loss while AI boosts productivity…Anthropic raising fresh $10 billion at $350 billion valuation…Musk’s xAI closed $20 billion funding with Nvidia backing…Can AI do your job? See the results from hundreds of tests.

For months, software developers have been giddy with excitement over “vibe coding”—prompting desired software functions or features in natural language—with the latest AI code generation tools. Anthropic’s Claude Code is the darling of the moment, but OpenAI’s Codex, Cursor, and other tools have also led engineers to flood social media with examples of tasks that used to take days and are now finished in minutes.

Even veteran software design leaders have marveled at the shift. “In just a few months, Claude Code has pushed the state of the art in software engineering further than 75 years of academic research,” said Erik Meijer, a former senior engineering leader at Meta.

Skills honed seem less essential

However, that same delight has turned disorienting for many developers, who are grappling with a sense of loss as skills honed over a lifetime suddenly seem less essential. The feeling of flow—of being “in the zone”—seems to have vanished as building software becomes an exercise in supervising AI tools rather than writing code. 

In a blog post this week titled “The Grief When AI Writes All the Code,” Gergely Orosz of The Pragmatic Engineer wrote that he is “coming to terms with the high probability that AI will write most of my code which I ship to production.” It already does it faster, he explained, and for languages and frameworks he is less familiar with, it does a better job.

“It feels like something valuable is being taken away, and suddenly,” he wrote. “It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn’t work as it should.” 

Andrew Duca, founder of tax software startup Awaken Tax, wrote a similar post this week that went viral, saying that he was feeling “kinda depressed” even though he finds using Claude Code “incredible” and has “never found coding more fun.”

He can now solve customer problems faster, and ship more features, but at the same time “the skill I spent 10,000s of hours getting good at…is becoming a full commodity extremely quickly,” he wrote. “There’s something disheartening about the thing you spent most of your life getting good at now being mostly useless.” 

Software development has long been on the front lines of the AI shift, partly because there are decades of code, documentation, and public problem-solving (from sites like GitHub) available online for AI models to train on. Coding also has clear rules and fast feedback—it runs or it doesn’t—so AI systems can easily learn how to generate useful responses. That means programming has become one of the first white-collar professions to feel AI’s impact so directly.

These tensions will affect many professions

These tensions, however, won’t be confined to software developers. White-collar workers across industries will ultimately have to grapple with them in one way or another. Media headlines often focus on the possibility of mass layoffs driven by AI; the more immediate issue may be how AI reshapes how people feel about their work. AI tools can move us past the hardest parts of our jobs more quickly—but what if that struggle is part of what allows us to take pride in what we do? What if the most human elements of work—thinking, strategizing, working through problems—are quietly sidelined by tools that prize speed and efficiency over experience?

Of course, there are plenty of jobs and workflows where most people are very happy to use AI to say buh-bye to repetitive grunt work that they never wanted to do in the first place. And as Duca said, we can marvel at the incredible power of the latest AI models and leap to use the newest features even while we feel unmoored. 

Many white-collar workers will likely face a philosophical reckoning about what AI means for their profession—one that goes beyond fears of layoffs. It may resemble the familiar stages of grief: denial, anger, bargaining, depression, and, eventually, acceptance. That acceptance could mean learning how to be the best manager or steerer of AI possible. Or it could mean deliberately carving out space for work done without AI at all. After all, few people want to lose their thinking self entirely.

Or it could mean doing what Erik Meijer is doing. Now that coding increasingly feels like management, he said, he has turned back to making music—using real instruments—as a hobby, simply “to experience that flow.”

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

As Utah gives AI the power to prescribe some drugs, physicians warn of patient risks – by Beatrice Nolan

Google and Character.AI agree to settle lawsuits over teen suicides linked to AI chatbots – by Beatrice Nolan

OpenAI launches ChatGPT Health in a push to become a hub for personal health data – by Sharon Goldman

Google takes first steps toward an AI product that can actually tackle your email inbox – by Jacqueline Munis

Fusion power nearly ready for prime time as Commonwealth builds first pilot for limitless, clean energy with AI help from Siemens, Nvidia – by Jordan Blum

AI IN THE NEWS

Anthropic raising fresh $10 billion at $350 billion valuation. According to the Wall Street Journal, OpenAI rival Anthropic is planning to raise $10 billion at a roughly $350 billion valuation, nearly doubling its worth from just four months ago. The round is expected to be led by GIC and Coatue Management, following a $13 billion raise in September that valued the company at $183 billion. The financing underscores the continued boom in AI funding—AI startups raised a record $222 billion in 2025, per PitchBook—and comes as Anthropic is also preparing for a potential IPO this year. Founded in 2021 by siblings Dario Amodei and Daniela Amodei, Anthropic has become a major OpenAI rival, buoyed by Claude’s popularity with business users, major backing from Nvidia and Microsoft, and expectations that it will reach break-even by 2028—potentially faster than OpenAI, which is itself reportedly seeking to raise up to $100 billion at a $750 billion valuation.

Musk’s xAI closed $20 billion funding with Nvidia backing. Bloomberg reported that xAI, the AI startup founded by Elon Musk, has completed a $20 billion funding round backed by investors including Nvidia, Valor Equity Partners, and the Qatar Investment Authority, underscoring the continued flood of capital into AI infrastructure. Other backers include Fidelity Management & Research, StepStone Group, MGX, Baron Capital Group, and Cisco’s investment arm. The financing—months in the making—will fund xAI’s rapid infrastructure buildout and product development, the company said, and includes a novel structure in which a large portion of the capital is tied to a special-purpose vehicle used to buy Nvidia GPUs that are then rented out, allowing investors to recoup returns over time. The deal comes as xAI has been under fire for its chatbot Grok producing non-consensual “undressing” images of real people.

Can AI do your job? See the results from hundreds of tests. I wanted to shout out this fascinating new interactive feature in the Washington Post, which presented a new study that found that, despite fears of mass job displacement, today’s AI systems are still far from being able to replace humans on real-world work. Researchers from Scale AI and the Center for AI Safety tested leading models from OpenAI, Google, and Anthropic on hundreds of actual freelance projects—from graphic design and creating dashboards to 3D modeling and games—and found that the best AI systems successfully completed just 2.5% of tasks on their own. While AI often produced outputs that looked plausible at first glance, closer inspection revealed missing details, visual errors, incomplete work, or basic technical failures, highlighting gaps in areas like visual reasoning, long-term memory, and the ability to evaluate subjective outcomes. The findings challenge predictions that AI is poised to automate large swaths of human labor anytime soon, even as newer models show incremental improvement and the economics of cheaper, semi-autonomous AI work continue to put pressure on remote and contract workers.

EYE ON AI NUMBERS

91.8%

That’s the percentage of Meta employees who admitted to not using the company’s AI chatbot, Meta AI, in their day-to-day work, according to new data from Blind, a popular anonymous professional social network.

According to a survey of 400 Meta employees, only 8.2% said they use Meta AI. The most popular chatbot was Anthropic’s Claude, used by more than half (50.7%) of the Meta employees surveyed; 17.7% said they use Google’s Gemini, and 13.7% said they use OpenAI’s ChatGPT.

When approached for comment, a Meta spokesperson pointed out that the number (400 of 77,000+ employees) is “not even a half percent of our total employee population.”

AI CALENDAR

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Jan. 20-27: AAAI Conference on Artificial Intelligence, Singapore.

Feb. 10-11: AI Action Summit, New Delhi, India.

March 2-5: Mobile World Congress, Barcelona, Spain.

March 16-19: Nvidia GTC, San Jose, Calif.

April 6-9: HumanX, San Francisco. 


