
This Sequoia partner thinks AI-enabled services are the new software. Here’s why



Hello and welcome to Eye on AI. In this edition…Are services the new software?…Anthropic’s Mythos has financial regulators and bankers freaking out…more executive turnover at OpenAI…why China may soon surpass the U.S. in developing the best AI models…are AI inference costs getting too steep?

Julien Bek never expected to go viral. Bek, who is an early-stage investor in the London office of the venerable Silicon Valley venture capital firm Sequoia, says he merely wanted to lay out one of the firm’s recent investing theses and use the piece to spotlight some of the startups Sequoia had recently backed. So he penned a blog post with the title “Services: The New Software” and posted it to his social feeds. Within days, it had surpassed 1 million views on X. It is now closing in on 3 million, and has racked up more than 450,000 impressions on LinkedIn.

“I certainly didn’t expect to have this kind of reach,” Bek told me on a call earlier this week.

The provocative headline no doubt helped. But Bek’s thesis also struck a nerve. In short, he thinks that the world’s next $1 trillion company won’t sell hardware or software as a product. Instead, it will sell an outcome, and use AI-powered software, alongside human expertise, to help deliver it. Instead of selling customer service software, for instance, it will simply deliver customer service for a client, the way business process outsourcing companies do today. But these new entrants will be AI-native from the start. Instead of selling legal tech, these firms will sell legal services, and so on.

Good examples of companies already pursuing this model that I’ve written about before include both Robin AI and Legora in the legal space and Dwelly in the real estate market. There’s also Dystyl AI in the consulting space, Rogo in financial services, and WithCoverage in the insurance brokerage market. Bek thinks there are many, many more to come. And he is sure the market potential is huge, noting that for every dollar enterprises spend on software, they spend six on services.

Intelligence vs. judgment

Bek has developed a taxonomy for thinking about these possibilities. First, he distinguishes between intelligence and judgment. Intelligence is basically anything with a clear distinction between correct and incorrect answers—think tasks in coding, mathematics, physics, and even some tasks in accounting, law, or medicine. AI models are getting pretty good at delivering intelligence. Judgment, on the other hand, is more about taste, professional intuition, and subtle but critical qualitative distinctions that often require both talent and experience. Lots of companies are trying to figure out how to imbue AI models with judgment, but for the most part, they aren’t there yet.

He then performs a matrix analysis that plots how a given service ranks on an intelligence-judgment scale on one axis, and whether companies already tend to outsource a particular service, or perform it in-house, on the other axis. (This is a complex decision governed by economic ideas that Ronald Coase developed in the mid-20th Century and that I recently wrote about in the context of the so-called SaaSpocalypse for Fortune here.)

First, Bek looks at those tasks that companies already outsource to service providers, things like legal services, auditing, insurance brokerage, etc. Then he looks at the subset of those that are mostly about intelligence, with mainly just a dash of human professional judgment needed. This is the sweet spot Bek thinks is ripe for AI-native service firms. “If [a customer] paid $100 for a service, but you offer them the same service for $80, but you can still do it at a high gross margin because you’re using a lot of AI to deliver that service, then we think that’s really interesting,” he says. Among the functions he sees in this category are insurance brokerage, insurance claims adjustment, IT managed services, tax advisory services, accounting and audit services, simple legal services, payroll services, and certain compliance services.

Bek calls startups in this category—heavy on intelligence, with a dash of judgment, in categories that customers already outsource—“autopilots.” He says his use of that term in his viral essay has been the source of a lot of misunderstanding and misplaced criticism. He didn’t use the term to mean that services could be performed entirely by AI agents to the exact same standard as human experts. What he meant was that the processes that deliver these services could be largely automated in the same way that autopilots function in aviation—a human is still there monitoring the systems and handling the hardest tasks (like takeoff and landing) and ready to step in if something goes wrong, but a lot of the process is automated. He contrasts this to AI “copilots,” where he says there is a lot more back and forth between the human expert and the AI system.

I asked Bek about the theory that AI will enable some companies to in-source functions that they once outsourced. (That theory is part of what underlies the SaaSpocalypse—the idea that companies will choose to make their own software using AI coding tools.) He allows that this may be true for some functions, but insists that there are many things that will never be in-sourced either because of regulatory requirements—for example, financial auditing, in which companies must hire an independent firm—or for what he calls “softer” reasons. The latter category includes things like management consulting, which exists in part to provide external validation of decisions management already wanted to take—essentially helping to bolster their case to boards and investors, and, cynically, so that there is someone else to blame if it turns out to be a bad decision. The logic applies even in some IT functions. The old saying “no one ever got fired for hiring IBM” exists for a reason.

Not just a lower bill, a different bill

One of the biggest advantages the AI-native companies may have, Bek thinks, is around pricing. It’s not just that the AI-powered service firms can potentially charge less; they can charge differently. Services firms in many sectors have long billed by time. Billing by outcome changes the game completely. “When you’re a smaller company, the best thing you can do to compete with the larger ones is actually disrupt them on pricing,” he says. But he allows that it can take time to bring customers around to a different way of paying. For instance, people have been talking about getting rid of the billable hour in legal services for decades. The billable hour is, for most law firms that do corporate work, still here.

Bek insists that there are signs the billable hour really is going away, in large part thanks to AI. (There was movement in this direction before AI, but AI certainly seems to have accelerated it.) But, at the same time, he acknowledges that impediments remain. Some large companies’ RFPs for services like consulting ask for “an hourly rate”—if you price differently, you might not get past that screening because you can’t even complete the standardized form.

What about margins? One reason investors have loved software businesses is that they have often been extremely high-margin. Once you create the product, you can replicate it and distribute it at almost zero marginal cost. Anything based on human labor doesn’t scale the same way. Bek says the equation here is not as bad as some assume. In insurance broking, for instance, he says an AI-native startup like WithCoverage can sell 10x per human expert what a traditional insurance broker sells. “So I think the efficiency is proven, at least in some categories, not all,” he says. “But I think this is very encouraging.”

Two costs are a potential issue: the cost of AI inference and the go-to-market cost of selling a service. Inference costs for running AI agents can, in some cases, eat up a substantial sum of money. (More on that in the Brain Food section below.) Bek cites figures from Bret Taylor, the CEO of Sierra, which sells AI-based customer service solutions, suggesting that gross profit margins for AI-native service firms probably look like 70%, instead of the 90% typical of some pure SaaS companies. But 70% is still a healthy margin. The go-to-market costs, however, remain an unsolved challenge, Bek says. You can’t scale enterprise service sales the same way you can software sales.

Sequoia isn’t the only investor with this idea. Private equity shops are betting that they can roll up existing non-software businesses, infuse them with AI-driven efficiencies, and sell them off at much higher multiples. That’s why OpenAI and Anthropic both have major sales channels being built around private equity firms. But Bek thinks AI-native startups will be able to grab substantial market share faster than legacy firms can metamorphose into AI-first organizations.

He may be right. Change is hard. And having to reinvent both existing processes and existing business models is exponentially harder. The legacy companies have the relationships and the trust of existing customers. That’s often a trump card, especially for the highest-value work. But at some point, delivering an outcome at a lower price might tempt many to at least try the AI-natives.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? Do you want to hear insights from some of tech’s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colorado, for Fortune Brainstorm Tech, the year’s best technology conference. And this year will be even more special because we are celebrating the 30th anniversary of the conference’s founding. We will hear from CEOs such as Carol Tomé of UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! Apply to attend here.

FORTUNE ON AI

Anthropic’s Mythos cybersecurity capabilities require urgent international cooperation, ‘AI Godfather’ Yoshua Bengio says—by Beatrice Nolan

Exclusive: Doctors and education experts who studied AI’s impact on the young call for a 5-year moratorium in schools—by Catherina Gioino

Commentary: The hidden ROI of AI: What leaders should actually measure—by Beena Ammananth and Jim Rowan

AI IN THE NEWS

Anthropic’s Mythos model alarms international financial regulators. Global financial officials warned at the IMF and World Bank spring meetings that advanced AI models—particularly Anthropic’s Claude Mythos—could pose systemic risks by rapidly uncovering and exploiting cyber vulnerabilities across banks and critical infrastructure. Policymakers including the Bank of England’s Andrew Bailey and European Central Bank chief Christine Lagarde said the technology could shift the balance between attackers and defenders, prompting calls for international coordination, though many regulators—especially in Europe—have yet to access or assess the model themselves. Read more from the Financial Times here.

NSA using Anthropic’s Mythos model despite supply chain risk designation. That’s according to a scoop from Axios, which cited two anonymous sources. It said it was unclear how the National Security Agency was using the powerful new model, which Anthropic has only made available to a select few organizations to help them harden their cyber defenses. The model, according to Anthropic, possesses unprecedented cyber capabilities, able to find zero-day vulnerabilities in software code and string together multiple vulnerabilities into sophisticated autonomous attacks. (But the same methods can be used to point out flaws to defenders and help them patch them.) The use of Mythos, though, puts the NSA, which is part of the Department of War, and the U.S. government in an odd position, since the department recently designated Anthropic a ‘supply chain risk’ for insisting on a contract that would prohibit the U.S. military from using its AI models for lethal autonomous weapons or mass surveillance of U.S. citizens. Under the designation, the DoW and all of its contractors were supposed to stop using Anthropic’s AI models.

Anthropic CEO Dario Amodei holds White House talks viewed as effort at ‘peace deal.’ The CEO met with White House chief of staff Susie Wiles. It is unclear exactly what was discussed at the meeting and how much dealt with Mythos and how much with Anthropic’s ‘supply chain risk’ designation, but the meeting was portrayed as an effort by the two sides to reach some kind of agreement to de-escalate their dispute and remove the supply chain risk label. You can read more here from The Washington Post. Meanwhile, Anthropic’s legal challenge to that designation is heading towards a key May hearing in federal court in Washington, D.C., but at least some legal experts think the company’s chances of having that particular three-judge panel overturn the designation aren’t good. The same judges, who are mostly Trump appointees, refused to grant Anthropic a stay to prevent the ruling from taking effect. Anthropic won a separate challenge to the designation in federal court in California, but because the Pentagon used two different legal statutes to issue the designation, one of which can only be reviewed by the D.C. federal court, it remains in place.

Amazon invests up to $25 billion into Anthropic. Amazon is investing an additional $5 billion in Anthropic, with the potential to commit up to $20 billion more tied to unspecified commercial milestones, expanding on its previous $8 billion stake, the companies said. As part of the deal, Anthropic plans to spend over $100 billion on Amazon Web Services over the next decade, using its infrastructure and Trainium chips while making its Claude platform more deeply integrated into AWS products. The agreement also secures Anthropic access to massive computing capacity—including five gigawatts of AI compute and expanded inference resources across Asia and Europe—as it continues partnerships with other providers like Google, Microsoft, and CoreWeave. Read more from CNBC here.

Trio of senior OpenAI execs including Kevin Weil depart in further management shakeup. Senior OpenAI executive Kevin Weil, who previously led product at Instagram, is leaving the company, he announced last week. Weil had been vice president of product at OpenAI before moving over to a new AI for science division in October 2025. But now OpenAI is shuttering the Prism AI tool for scientific workflows that Weil’s team developed, folding its capabilities into its Codex product. The same day Weil announced his departure, two other executives, Srinivas Narayanan, who had been CTO for OpenAI’s B2B applications, and Bill Peebles, who had headed its now-discontinued Sora video-generation AI model team, also announced they were leaving. Their departures add to a wave of leadership turnover at OpenAI amid a wider restructuring that reflects the company’s decision to focus on enterprise and coding products as it faces intensifying competition from Anthropic and prepares for a potential IPO.

Startup from Google DeepMind, OpenAI alums valued at $4 billion just months after founding. The four-month-old startup, called Recursive Superintelligence, raised at least $500 million at a $4 billion valuation, the Financial Times reported. Google’s venture arm GV led the round with support from Nvidia. The company aims to develop a novel form of AI capable of continuously improving itself without human intervention. Its founders include Richard Socher, who founded genAI company You.com and previously led AI research at Salesforce, as well as Google DeepMind veteran Tim Rocktäschel, and former OpenAI researchers Jeff Clune, Josh Tobin, and Tim Shi.

EYE ON AI RESEARCH

The U.S. still leads in AI, but the trends are not favorable. That’s one of the big takeaways from Stanford University’s Human-Centered AI Institute’s Annual AI Index, which dropped last week. (The Index is always a fantastic snapshot of where we are in AI development and contains so much information it would take many newsletters to summarize it all. Here I am just going to focus on this geopolitical point.)

For years the U.S. dominated AI across nearly every meaningful dimension, but the new report documents how China has nearly closed that gap: U.S. and Chinese models traded places at the top of performance rankings multiple times during 2025, and as of March 2026 the leading U.S. model’s edge over its Chinese rival had shrunk to just 2.7%, according to Stanford. China now leads the U.S. in AI publication volume, citations, patent output, and industrial robot installations, while the U.S. still produces more top-tier models and higher-impact patents—a distinction that may not hold for long given the talent trends. The number of AI scholars moving to the United States has dropped 89% since 2017, with that decline accelerating—down 80% in the last year alone, as the Trump Administration has cracked down on both student and work visas.

On investment, the U.S. still vastly outspends China in disclosed private capital, but that comparison likely understates China’s total commitment, given its extensive use of government guidance funds, estimated at $912 billion deployed across industries since 2000. The full picture, as ever, is more complicated than either the “America’s winning” or “China’s winning” narratives suggest, but certainly the U.S. may not be assured of having an edge in AI. You can read the rest in the full AI Index here.

AI CALENDAR

April 22-24: Google Next, Las Vegas, Nevada.

April 23-27: International Conference on Learning Representations (ICLR), Rio de Janeiro, Brazil.

June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.

June 17-20: VivaTech, Paris.

July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea.

July 7-10: AI for Good Summit, Geneva, Switzerland.

BRAIN FOOD

Are AI inference costs getting so steep that human workers are a better deal? Only a few weeks ago, everyone was talking about “tokenmaxxing”—developers competing with one another to use up all the available tokens in a particular license tier for top coding models such as Anthropic’s Claude Code and OpenAI’s Codex. (See this New York Times story on the phenomenon.)

But, as ever in AI, a few weeks is a lifetime. Anthropic, experiencing a compute crunch, has capped the number of tokens users can consume on some pricing tiers during peak hours of the day. Meanwhile, its latest model, Claude Opus 4.7, also consumes more tokens and is more expensive to use per query than its predecessors. OpenAI also recently changed its Codex pricing to charge users per token consumed as opposed to per message. The result is that some companies are finding their inference costs are soaring. So much so that one person posted to a Claude Code Reddit thread popular with developers that they had “fired” five AI coding agents and hired two mid-level human developers instead. (The downside, the person wrote, was that the company’s coffee costs had now soared.)

The post seems to have been intended as a joke. But it does reflect the mood of a lot of developers after the recent price rises. Inference costs are now a major pressure point—one that may significantly slow AI diffusion across large enterprises.


