AI was supposed to be the end of consultants. It’s not happening, Capgemini strategy chief says



Hello and welcome to Eye on AI. In this edition…Nvidia sees $1 trillion in AI chip sales by the end of 2027…Meta delays the debut of its latest AI model (again)…Moonshot AI develops a new architecture for large neural networks…and why we may soon be worrying about ‘moral crumple zones.’

Since the advent of ChatGPT in November 2022, one of the professions that people often claim is now toast is consulting. After all, what is it that consultants do? They advise companies on strategy; they help them restructure their businesses to create new organizational designs and processes, often with the help of technology from third-party vendors; and they act as providers of outsourced services, or at least conduits to outsourced services, such as customer support or software development. Well, a frontier AI model can offer strategic advice. It can also advise on how to restructure an organization and about which software to buy. AI agents can actually help stitch some of those systems together too. Finally, AI agents can also now handle coding and customer support. So it’s lights out for consultants, right?

Well, it hasn’t turned out that way so far. AI companies have discovered that they need consultants, or “systems integrators” as they are sometimes called in the software world, to help them sell their AI agents, as a story in last week’s Wall Street Journal highlighted. The reason is that using AI agents effectively often requires quite a lot of organizational transformation—cleaning up data, redesigning workflows, and thinking about how to redeploy human workers—as well as strategic thinking about how AI might be used to provide a real competitive advantage.

The AI model vendors have found they don’t have the resources to provide this kind of advice at scale—OpenAI only has about 70 so-called “forward deployed engineers” who go on site with customers to help them implement solutions based on their AI models; Anthropic is thought to have a similar number. And while it is possible that AI itself could serve this function, AI still suffers from a trust deficit—most boards would still rather put their faith in advice from McKinsey or BCG than ChatGPT. (A more cynical take: CEOs still like to use consultants to justify their own decisions to boards, as well as to have someone else to blame if it all goes wrong.)

OpenAI has formed what it calls its Frontier Alliance with McKinsey, Boston Consulting Group, Capgemini, and Accenture to help clients use its Frontier platform for building and managing AI agents. (You can read my coverage of that announcement here.) Anthropic has struck similar deals with Deloitte, Accenture, and Cognizant and is reportedly in talks with private equity groups, such as Blackstone, to implement Claude-based solutions in their portfolio companies.

I recently caught up with Capgemini’s Chief Strategy Officer Fernando Alvarez to talk about how his firm is viewing the future of consulting in an AI world.

Domain expertise matters

First, Alvarez says that while every client wants to use AI agents, they also recognize the need to govern those agents, make sure there is adequate cybersecurity around them, and ensure they can interact with legacy systems and fragmented data sources. Advising clients on all of that stuff and often helping them build it has been Capgemini’s bread and butter. He says clients still want Capgemini to provide these services. They aren’t ready to hand it off to AI.

The other big selling point for the consulting firms, Alvarez says, is deep industry and domain expertise. The frontier AI labs don’t have the expertise in how to optimize a pharmaceutical manufacturing plant or the best way to run logistics for a fast-fashion retailer. Consulting firms do. And that makes a difference when trying to use AI agents successfully. Alvarez says the conversations clients want to have are not about how many agents you can spin up or how you orchestrate them. “The conversation is, do you have the domain expertise to understand my problem?” he says.

‘People want the cake, not the recipe’

That doesn’t mean that Capgemini itself isn’t using AI to help serve clients. Alvarez says the big shift Capgemini, along with competitors such as Accenture, is trying to make is to move from selling technology and advice to selling outcomes. In this model, the consulting firm takes on the risk of figuring out how to deliver, say, better customer support, whether that is through business process outsourcing to humans in lower-wage countries, such as the Philippines or India, or through AI agents.

“At the end, people want the cake,” he says—not a tour of the ingredients or the recipe. The new pitch boils down to a simple proposition: “Here is the problem. Here is the risk I’m willing to take, and this is the outcome I give you.” The client pays for the outcome: improved KPIs like successful customer issue resolutions and better net promoter scores. Billing changes too: consultants in this model charge for the outcome, not for the number of people deployed on a project, as consultants have traditionally billed.

Alvarez says that AI is also enabling Capgemini and other consulting firms to move into market segments, such as midmarket companies, that they couldn’t serve previously because the economics didn’t make sense. The engagements often required more staff and cost than the client was willing to pay for. But now AI has lowered those staffing and cost requirements, meaning that Capgemini can offer a solution at a price point that is attractive to midmarket companies while maintaining a decent enough profit margin.

Perhaps the biggest challenge for consulting firms, though, is retraining their own people to work alongside AI agents. “Some people will make it, some people will not,” Alvarez says.

For all the disruption, Alvarez is unmistakably energized. He calls this moment “probably the best opportunity I’ve seen in the history of technology.” The question now is whether Capgemini and other consultants can rewire themselves as fast as the technology demands—which is, of course, exactly what they are advising their clients to do.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

‘The Karpathy Loop’: Former OpenAI researcher’s autonomous agents ran 700 experiments in 2 days—and gave a glimpse of where AI is heading—by Jeremy Kahn

Nvidia just forecast $1 trillion in AI demand. So why isn’t Jensen Huang a target of AI backlash?—by Sharon Goldman

Elon Musk admits xAI ‘wasn’t built right’ as only 2 co-founders remain and its biggest AI bet stalls out—by Marco Quiroz-Gutierrez

AI is reviving tech sectors that VCs had all but forgotten—by Lily Mae Lazarus

AI IN THE NEWS

Meta delays rollout of new AI model, considers licensing AI from Google. That’s according to a story in the New York Times that cited unnamed sources inside the company. The model, code-named Avocado, has been delayed after internal tests showed it still trails top models from OpenAI, Anthropic, and Google, according to the sources. The setback is a blow to Mark Zuckerberg’s ambitions to put Meta at the forefront of AI development, a push for which he has spent tens of billions of dollars, including a $14.3 billion investment in Scale AI that brought Scale cofounder and CEO Alexandr Wang in to lead Meta’s new AI effort. The delay has also fueled internal discussion over whether Meta should temporarily license outside models like Google’s Gemini, according to the Times. It has also stoked debate over whether Meta’s next model should be open-source, as Meta’s previous generative AI models have been, and over the extent to which Meta’s AI efforts should directly support the company’s advertising business.

Nvidia CEO predicts $1 trillion in future revenue, rolls out new chips and ‘claw’ agent. Nvidia CEO Jensen Huang used his keynote at the company’s annual GTC developer conference on Monday to unveil several big announcements. He said the company is now predicting it will sell $1 trillion of its Blackwell and Rubin AI chips between 2025 and 2027, signaling continued strong demand—last year, he had predicted $500 billion in sales by the end of 2026. Huang also unveiled a new Nvidia data center rack that combines 72 of Nvidia’s next-generation Vera Rubin GPUs with 256 LPUs, or language processing units, made by AI chip startup Groq. The company says the new rack can process AI workloads 350 times faster than its latest Hopper racks. Nvidia acquired Groq’s leadership team and licensed its technology in a $20 billion deal in December. Huang also unveiled an open-source agentic “harness” based on the popular OpenClaw but with improved safety guardrails. You can read more from CNBC here.

License-plate reading AI company faces backlash over ICE use. Flock Safety, a fast-growing maker of AI-enabled license plate readers, is facing mounting backlash as 53 cities across 20 states have deactivated or rejected its cameras amid concerns that the technology enables a vast domestic surveillance network and can aid immigration enforcement, the Financial Times reported. The company says it has no contracts with ICE, has restricted some federal and immigration-related access, and argues that local customers control data-sharing. But critics say Flock’s cloud-based network and scale make it far more powerful than traditional license plate readers. Despite the controversy, Flock’s business has surged to more than 12,000 customers and over $300 million in annual recurring revenue, and the company has begun expanding into analyzing drone footage and selling AI that the company says can detect the location of gunshots from audio.

OpenAI to refocus on enterprise systems and coding in shift away from consumers. That’s according to an exclusive story in the Wall Street Journal, which cited remarks from executives made at a company all-hands meeting that it said it had reviewed. The company has decided to narrow its focus to coding and enterprise productivity, after executives concluded that its broad push into everything from video and hardware to shopping and browsers had diluted its efforts, the newspaper reported. The shift, outlined internally by applications chief Fidji Simo, is partly a response to mounting pressure from Anthropic, whose tighter focus on coding and business tools has helped it become a leading AI provider for enterprise customers. OpenAI is now trying to regain ground by prioritizing products like its coding assistant Codex and work-focused models, while cutting back or folding in side projects such as video generation tool Sora as it sharpens its pitch ahead of a possible IPO.

EYE ON AI RESEARCH

Moonshot AI says it has found a better way to configure and train large neural networks. The Chinese AI company, best known for its Kimi open-source agentic models, has been getting a lot of attention among Chinese AI watchers this week for a new tweak to the design of large, transformer-based neural networks. Called “Attention Residuals,” the idea is that rather than applying fixed weights to every layer of a trained network, as standard architectures do, the network learns to apply different weights to different lower-level layers depending on the task the model is working on. To avoid the computational cost of doing this across the hundreds of layers in a large network, the Moonshot researchers also developed what they call “block attention residuals”: multiple layers are grouped into blocks, each block’s output is summarized as a single unit, and the residual attention mechanism is then applied to these blocks rather than to the individual layers themselves. The method improves the speed and stability of training large transformer-based neural networks, the architecture underlying all of today’s leading AI models. The Moonshot researchers said the architecture delivered a 1.25 times improvement in computing efficiency compared to the standard architecture. You can read the paper here on arxiv.org.
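To make the block-level idea concrete, here is a minimal, hypothetical NumPy sketch of what input-dependent weighting over earlier block outputs could look like. Everything here—the class name, the gating scheme, and the dimensions—is an illustrative assumption for exposition, not Moonshot’s actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class BlockAttentionResidual:
    """Hypothetical sketch: instead of a fixed identity residual,
    mix the summarized outputs of earlier blocks with weights that
    depend on the current hidden state."""

    def __init__(self, d_model, n_blocks, seed=0):
        rng = np.random.default_rng(seed)
        # one learned gating vector per earlier block (assumed, illustrative)
        self.gate = rng.standard_normal((n_blocks, d_model)) * 0.02

    def __call__(self, h, block_outputs):
        # h: (seq, d_model) current hidden state
        # block_outputs: list of (seq, d_model) summaries from earlier blocks
        B = np.stack(block_outputs)                      # (n_blocks, seq, d_model)
        scores = np.einsum('sd,nd->sn', h, self.gate)    # (seq, n_blocks)
        w = softmax(scores, axis=-1)                     # per-token weights over blocks
        mixed = np.einsum('sn,nsd->sd', w, B)            # weighted sum of block outputs
        return h + mixed                                 # residual update
```

The point of the sketch is only that the residual contribution becomes a learned, input-dependent mixture over a handful of block summaries rather than hundreds of individual layers, which is where the claimed computational savings would come from.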

AI CALENDAR

March 16-19: Nvidia GTC, San Jose, Calif.

April 6-9: HumanX 2026, San Francisco. 

June 8-10: Fortune Brainstorm Tech, Aspen, Colorado. Apply to attend here.

June 17-20: Viva Tech, Paris.

July 7-10: AI for Good Summit, Geneva, Switzerland.

BRAIN FOOD

Moral crumple zones and cognitive surrender. I’ve frequently argued in this newsletter that using AI effectively in large organizations depends heavily on exactly how AI systems are designed to interface with people and vice versa. It’s a huge human-computer interaction puzzle to be solved, as much as anything. A column in the Financial Times by Sarah O’Connor makes the point that it is easy to say we want a “human in the loop,” but hard to maintain meaningful human control and accountability in practice.

O’Connor mentions two neologisms that caught my attention. Two Wharton business school professors have apparently coined the term “cognitive surrender” to refer to situations in which people simply assume the AI knows best and surrender all judgment to the AI system. It’s an extreme form of the phenomenon known as automation bias. (O’Connor notes some studies in which people are put in self-driving cars and told they can intervene to prevent the car from hitting something, but still allow the car to roll directly into an obvious obstacle.) She also mentions the term “moral crumple zone” (coined by academic Madeline Clare Elish), which refers to complex systems in which the role of humans is reduced to absorbing the blame when something goes wrong, even though the system’s complexity and speed render meaningful human control over it impossible.

Expect to start seeing those terms crop up more and more in our AI conversations.

Fortune AIQ: One Strategy, Real AI Results

AI transformation can be overwhelming—there are countless tools, strategies, and approaches competing for attention, often without any proven results. But for some companies, AI’s biggest impact can be attributed to a single tool, strategy, or approach. Explore all of Fortune AIQ, and read the latest playbook below:

–How to decide if gen AI is the right path for your product

–Inside Bank of America’s ‘build once’ AI strategy

–Why Pinterest is going all in on open-source AI

–How cutting out product management enabled Kilo to compete in the hyper-fast AI coding market

–How Seismic’s AI incubation team became its ultimate AI strategy


