At Davos, AI hype gives way to focus on ROI

Hello and welcome to Eye on AI. In this edition…a dispatch from Davos…OpenAI ‘on track’ for device launch in 2026…Anthropic CEO on China chip sales…and is Claude Code Anthropic’s ChatGPT moment?

Hi. I’m in Davos, Switzerland, this week for the World Economic Forum. Tomorrow’s visit of U.S. President Donald Trump is dominating conversations here. But when people aren’t talking about Trump and his imposition of tariffs on European allies that oppose his attempt to wrest control of Greenland from Denmark, they are talking a lot about AI.

The promenade in this ski town turns into a tech trade show floor at WEF time, with the logos of prominent software companies and consulting firms plastered to shopfronts and signage touting various AI products. But while last year’s Davos was dominated by hype around AI agents and overwrought hand-wringing that the debut of DeepSeek’s R1 model, which happened during 2025’s WEF, could mean the capital-intensive plans of the U.S. AI companies were for naught, this year’s AI discussions seem more sober and grounded.

The business leaders I’ve spoken to here at Davos are more focused than ever on how to drive business returns from their AI spending. The age of pilots and experimentation seems to be ending. So too is the era of imagining what AI can do. Many CEOs now realize that implementing AI at scale is not easy or cheap. Now there is much more attention on practical advice for using AI to drive enterprise-wide impact. (But there’s still a tinge of idealism here too as you’ll see.) Here’s a taste of some of the things I’ve heard in conversations so far:

CEOs take control of AI deployment

There’s a consensus that the bottom-up approaches—giving every employee access to ChatGPT or Microsoft Copilot, say—popular in many companies two years ago, in the initial days of the generative AI boom, are a thing of the past. Back then, CEOs assumed front-line workers, closest to the business processes, would know how best to deploy AI to make those processes more efficient. This turned out to be wrong—or, perhaps more accurately, the gains from doing this tended to be hard to quantify and rarely added up to big changes in either the top or bottom line.

Instead, top-down, CEO-led initiatives aimed at transforming core business processes are now seen as essential for deriving ROI from AI. Jim Hagemann Snabe, the chairman of Siemens and former co-CEO at SAP, told a group of fellow executives at a breakfast discussion I moderated here in Davos today that CEOs need to be “dictators” in identifying where their businesses would deploy AI and pushing those initiatives forward. Similarly, both Christina Kosmowski, the CEO of IT and business data analytics company LogicMonitor, and Bastian Nominacher, the cofounder and co-CEO of process mining software company Celonis, told me that board and CEO sponsorship was an essential component to enterprise AI success.

Nominacher offered a few other interesting lessons, including that, in research Celonis commissioned, companies that established a center of excellence for figuring out how to optimize work processes with AI saw an 8x better return than companies that failed to set up such a center. He also said that having data in the right place was essential to running process optimization successfully.

The race to become the orchestration layer for enterprise AI agents

There is clearly a race on among SaaS companies to become the new interface layer for AI agents that work in companies. Carl Eschenbach, Workday’s CEO, told me that he thinks his company is well-positioned to become “the front door to work” not only because it sits on key human resources and financial data, but because the company already handled onboarding, data access and permissioning, and performance management for human workers. Now it can do the same for AI agents.

But others are eyeing this prize too. Srini Tallapragada, Salesforce’s chief engineering and customer success officer, told me how his company is using “forward deployed engineers” at 120 of Salesforce’s largest customers to close the gap between customer pain points and product development, learning the best way to create agents for specific industry verticals and functions that it can then offer to Salesforce’s wider customer base. Judson Althoff, Microsoft’s commercial CEO, said that his company’s Data Fabric and Agent 365 products were gaining traction among big companies that need an orchestration layer for AI agents and a unified way to access data stored in different systems and silos without having to migrate that data to a single platform. Snowflake CEO Sridhar Ramaswamy, meanwhile, thinks the deep expertise his company has in maintaining cloud-based data pools and controlling access to that data, combined with newfound expertise in creating its own AI coding agents, makes his company ideally suited to win the race to be the AI agent orchestrator. Ramaswamy told me his biggest fear is whether Snowflake can keep moving fast enough to realize this vision before OpenAI or Anthropic move down the stack—from AI agents into data storage—potentially displacing Snowflake.

A couple more insights from Davos so far: while there is still a lot of fear about AI leading to widespread job displacement, it hasn’t shown up yet in economic data. In fact, Svenja Gudell, the chief economist at recruiting site Indeed, told me that while the tech sector has seen a huge decline in jobs since 2022, that trend predates the generative AI boom and is likely due to companies “right sizing” after the massive pandemic-era hiring boom rather than AI. And while many industries are not hiring much at the moment, Gudell says global macroeconomic and geopolitical uncertainty are to blame, not AI.

Finally, in a comment relevant to one of this week’s bigger AI news stories—that OpenAI is introducing ads to ChatGPT—Snabe, the Siemens chairman, had an interesting answer to a question about how AI should be regulated. He said that rather than trying to regulate AI use cases—as the EU AI Act has done—governments should mandate more broadly that AI adhere to human values. And the one piece of regulation that would do more than anything to ensure this, he said, would be to ban AI business models based on advertising. Ad-based AI models, he argued, will lead companies to optimize for user engagement, with all of the negative consequences for mental health and democratic consensus that we’ve seen from social media, only far worse.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Beatrice Nolan wrote the news and sub-sections of Eye on AI.

FORTUNE ON AI

Wave of defections from former OpenAI CTO Mira Murati’s $12 billion startup Thinking Machines shows cutthroat struggle for AI talent—by Jeremy Kahn and Sharon Goldman

ChatGPT tests ads as a new era of AI begins—by Sharon Goldman

A filmmaker deepfaked Sam Altman for his movie about AI. Then things got personal—by Beatrice Nolan

PwC’s global chairman says most leaders have forgotten ‘the basics’ as 56% are still getting ‘nothing’ out of AI adoption—by Diane Brady and Nick Lichtenberg


EYE ON AI RESEARCH

Researchers say ChatGPT has a “silicon gaze” that amplifies global inequalities. A new study from the Oxford Internet Institute and the University of Kentucky analyzed over 20 million ChatGPT queries and found the AI systematically favors wealthier, Western regions, rating them as “smarter” and “more innovative” than poorer countries in the Global South. The researchers coined the term “silicon gaze” to describe how AI systems view the world through the lens of biased training data, mirroring historical power imbalances rather than providing objective answers. They argue these biases aren’t errors to be corrected, but structural features of AI systems that learn from data shaped by centuries of uneven information production, privileging places with extensive English-language coverage and strong digital visibility. The team has created a website, inequalities.ai, where people can explore how ChatGPT ranks their own neighborhood, city, or country across different lifestyle factors.

AI CALENDAR

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Jan. 20-27: AAAI Conference on Artificial Intelligence, Singapore.

Feb. 10-11: AI Action Summit, New Delhi, India.

March 2-5: Mobile World Congress, Barcelona, Spain.

March 16-19: Nvidia GTC, San Jose, Calif.

BRAIN FOOD

Is Claude Code Anthropic’s ChatGPT moment? Anthropic has started the year with a viral moment most labs dream of. Despite Claude Code’s technical interface, the product has captured attention beyond the developer pool, with users building personal websites, analyzing health data, managing emails, and even monitoring tomato plants—all without writing a line of actual code. After several users pointed out that the product was much more of a general-use agent than the marketing and name suggested, the company launched Cowork—a more user-friendly version with a graphical interface built for non-developers.

The ability of both Claude Code and Cowork to autonomously access, manipulate, and analyze files on a user’s computer has given many people a first taste of an AI agent that can actually take actions on their behalf, rather than just provide advice. Anthropic also saw a traffic lift as a result. Claude’s total web audience has more than doubled from December 2024, and its daily unique visitors on desktop are up 12% globally year-to-date compared with last month, according to data from market intelligence companies Similarweb and Sensor Tower published by The Wall Street Journal. But while some have hailed the products as the first step to getting a true AI personal assistant, the launch has also sparked concerns about job displacement and appears to put pressure on a few dozen startups that have built similar file management and automation tools.

FORTUNE AIQ: THE YEAR IN AI—AND WHAT’S AHEAD

Businesses took big steps forward on the AI journey in 2025, from hiring Chief AI Officers to experimenting with AI agents. The lessons learned—both good and bad—combined with the technology’s latest innovations will make 2026 another decisive year. Explore all of Fortune AIQ, and read the latest playbook below:

The 3 trends that dominated companies’ AI rollouts in 2025.

2025 was the year of agentic AI. How did we do?

AI coding tools exploded in 2025. The first security exploits show what could go wrong.

The big AI New Year’s resolution for businesses in 2026: ROI.

Businesses face a confusing patchwork of AI policy and rules. Is clarity on the horizon?




Billionaire Marc Benioff challenges the AI sector: ‘What’s more important to us, growth or our kids?’

Imagine it is 1996. You log on to your desktop computer (which took several minutes to start up), listening to the rhythmic screech and hiss of the modem connecting you to the World Wide Web. You navigate to a clunky message board—like AOL or Prodigy—to discuss your favorite hobbies, from Beanie Babies to the newest mixtapes.

At the time, a little-known law called Section 230 of the Communications Decency Act had just been passed. The law—its key provision just 26 words long—created the modern internet. It was intended to protect “good samaritans” who moderate websites from liability, placing the responsibility for content on individual users rather than the host company.

Today, the law remains largely the same despite evolutionary leaps in internet technology and pushback from critics, now among them Salesforce CEO Marc Benioff. 

In a conversation at the World Economic Forum in Davos, Switzerland, on Tuesday, titled “Where Can New Growth Come From?” Benioff railed against Section 230, saying the law prevents tech giants from being held accountable for the dangers AI and social media pose.

“Things like Section 230 in the United States need to be reshaped because these tech companies will not be held responsible for the damage that they are basically doing to our families,” Benioff said in the panel conversation, which also included Axa CEO Thomas Buberl, Alphabet President Ruth Porat, Emirati government official Khaldoon Khalifa Al Mubarak, and Bloomberg journalist Francine Lacqua.

As a growing number of children in the U.S. log onto AI and social media platforms, Benioff said the legislation threatens the safety of kids and families. The billionaire asked, “What’s more important to us, growth or our kids? What’s more important to us, growth or our families? Or, what’s more important, growth or the fundamental values of our society?”

Section 230 as a shield for tech firms

Tech companies have invoked Section 230 as a legal defense when dealing with issues of user harm, including in the 2019 case Force v. Facebook, where the court ruled the platform wasn’t liable for algorithms that connected members of Hamas after the terrorist organization used the platform to encourage murder in Israel. The law could shield tech companies from liability for harm AI platforms pose, including the production of deepfakes and AI-generated sexual abuse material.

Benioff has been a vocal critic of Section 230 since 2019 and has repeatedly called for the legislation to be abolished. 

In recent years, Section 230 has come under increasing public scrutiny as both Democrats and Republicans have grown skeptical of the legislation. In 2019 the Department of Justice under President Donald Trump pursued a broad review of Section 230. In May 2020, President Trump signed an executive order limiting tech platforms’ immunity after Twitter added fact-checks to his tweets. And in 2023, the U.S. Supreme Court heard Gonzalez v. Google, though it decided the case on other grounds, leaving Section 230 intact.

In an interview with Fortune in December 2025, Dartmouth business school professor Scott Anthony voiced concern over the “guardrails” that were—and weren’t—happening with AI. When cars were first invented, he pointed out, it took time for speed limits and driver’s licenses to follow. Now with AI, “we’ve got the technology, we’re figuring out the norms, but the idea of, ‘Hey, let’s just keep our hands off,’ I think it’s just really bad.”

The decision to exempt platforms from liability, Anthony added, “I just think that it’s not been good for the world. And I think we are, unfortunately, making the mistake again with AI.”

For Benioff, the fight to repeal Section 230 is not just a push to regulate tech companies but a reallocation of priorities toward safety and away from unfettered growth. “In the era of this incredible growth, we’re drunk on the growth,” Benioff said. “Let’s make sure that we use this moment also to remember that we’re also about values as well.”




Palantir CEO says AI “will destroy” humanities jobs but there will be “more than enough jobs” for people with vocational training

Some economists and experts say that critical thinking and creativity will be more important than ever in the age of artificial intelligence (AI), when a robot can do much of the heavy lifting on coding or research. Take Benjamin Shiller, the Brandeis economics professor who recently told Fortune that a “weirdness premium” will be valued in the labor market of the future. Alex Karp, the Palantir cofounder and CEO, isn’t one of these voices.

“It will destroy humanities jobs,” Karp said when asked how AI will affect jobs in conversation with BlackRock CEO Larry Fink at the World Economic Forum annual meeting in Davos, Switzerland. “You went to an elite school and you studied philosophy — I’ll use myself as an example — hopefully you have some other skill, that one is going to be hard to market.”

Karp attended Haverford College, a small, elite liberal arts college outside his hometown of Philadelphia. He earned a J.D. from Stanford Law School and a Ph.D. in philosophy from Goethe University in Germany. He spoke about his own experience getting his first job. 

Karp told Fink that he remembered thinking about his own career, “I’m not sure who’s going to give me my first job.” 

The answer echoed past comments Karp has made about certain types of elite college graduates who lack specialized skills.

“If you are the kind of person that would’ve gone to Yale, classically high IQ, and you have generalized knowledge but it’s not specific, you’re effed,” Karp said in an interview with Axios in November. 

Not every CEO agrees with Karp’s assessment that humanities degrees are doomed. BlackRock COO Robert Goldstein told Fortune in 2024 that the company was recruiting graduates who studied “things that have nothing to do with finance or technology.” 

McKinsey CEO Bob Sternfels recently said in an interview with Harvard Business Review that the company is “looking more at liberal arts majors, whom we had deprioritized, as potential sources of creativity,” to break out of AI’s linear problem-solving. 

Karp has long been an advocate for vocational training over traditional college degrees. Last year, Palantir launched a Meritocracy Fellowship, offering high school students a paid internship with a chance to interview for a full-time position at the end of four months. 

In its announcement of the fellowship, the company criticized American universities for “indoctrinating” students and having “opaque” admissions that “displaced meritocracy and excellence.”

“If you did not go to school, or you went to a school that’s not that great, or you went to Harvard or Princeton or Yale, once you come to Palantir, you’re a Palantirian—no one cares about the other stuff,” Karp said during a Q2 earnings call last year.

“I think we need different ways of testing aptitude,” Karp told Fink. He pointed to a former police officer who attended a junior college and now manages the U.S. Army’s Maven system, a Palantir-made AI tool that processes drone imagery and video.

“In the past, the way we tested for aptitude would not have fully exposed how irreplaceable that person’s talents are,” he said. 

Karp also gave the example of technicians building batteries at a battery company, saying those workers are “very valuable if not irreplaceable because we can make them into something different than what they were very rapidly.”

He said what he does all day at Palantir is “figuring out what is someone’s outlier aptitude. Then, I’m putting them on that thing and trying to get them to stay on that thing and not on the five other things they think they’re great at.” 

Karp’s comments come as more employers report a gap between the skills applicants are offering and what employers are looking for in a tough labor market. The unemployment rate for young workers ages 16 to 24 hit 10.4% in December and is growing among college graduates. Karp isn’t too worried. 

“There will be more than enough jobs for the citizens of your nation, especially those with vocational training,” he said. 




AI is boosting productivity. Here’s why some workers feel a sense of loss

Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…Why some workers feel a sense of loss while AI boosts productivity…Anthropic raising fresh $10 billion at $350 billion valuation…Musk’s xAI closed $20 billion funding with Nvidia backing…Can AI do your job? See the results from hundreds of tests.

For months, software developers have been giddy with excitement over “vibe coding”—prompting desired software functions or features in natural language—with the latest AI code generation tools. Anthropic’s Claude Code is the darling of the moment, but OpenAI’s Codex, Cursor, and other tools have also led engineers to flood social media with examples of tasks that used to take days and are now finished in minutes.

Even veteran software design leaders have marveled at the shift. “In just a few months, Claude Code has pushed the state of the art in software engineering further than 75 years of academic research,” said Erik Meijer, a former senior engineering leader at Meta.

Skills honed seem less essential

However, that same delight has turned disorienting for many developers, who are grappling with a sense of loss as skills honed over a lifetime suddenly seem less essential. The feeling of flow—of being “in the zone”—seems to have vanished as building software becomes an exercise in supervising AI tools rather than writing code. 

In a blog post this week titled “The Grief When AI Writes All the Code,” Gergely Orosz of The Pragmatic Engineer wrote that he is “coming to terms with the high probability that AI will write most of my code which I ship to production.” It already does it faster, he explained, and for languages and frameworks he is less familiar with, it does a better job.

“It feels like something valuable is being taken away, and suddenly,” he wrote. “It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn’t work as it should.” 

Andrew Duca, founder of tax software Awaken Tax, wrote a similar post this week that went viral, saying that he was feeling “kinda depressed” even though he finds using Claude Code “incredible” and has “never found coding more fun.” 

He can now solve customer problems faster, and ship more features, but at the same time “the skill I spent 10,000s of hours getting good at…is becoming a full commodity extremely quickly,” he wrote. “There’s something disheartening about the thing you spent most of your life getting good at now being mostly useless.” 

Software development has long been on the front lines of the AI shift, partly because there are decades of code, documentation, and public problem-solving (from sites like GitHub) available online for AI models to train on. Coding also has clear rules and fast feedback (it runs or it doesn’t), so AI systems can easily learn how to generate useful responses. That means programming has become one of the first white-collar professions to feel AI’s impact so directly.

These tensions will affect many professions

These tensions, however, won’t be confined to software developers. White-collar workers across industries will ultimately have to grapple with them in one way or another. Media headlines often focus on the possibility of mass layoffs driven by AI; the more immediate issue may be how AI reshapes how people feel about their work. AI tools can move us past the hardest parts of our jobs more quickly—but what if that struggle is part of what allows us to take pride in what we do? What if the most human elements of work—thinking, strategizing, working through problems—are quietly sidelined by tools that prize speed and efficiency over experience?

Of course, there are plenty of jobs and workflows where most people are very happy to use AI to say buh-bye to repetitive grunt work that they never wanted to do in the first place. And as Duca said, we can marvel at the incredible power of the latest AI models and leap to use the newest features even while we feel unmoored. 

Many white-collar workers will likely face a philosophical reckoning about what AI means for their profession—one that goes beyond fears of layoffs. It may resemble the familiar stages of grief: denial, anger, bargaining, depression, and, eventually, acceptance. That acceptance could mean learning how to be the best manager or steerer of AI possible. Or it could mean deliberately carving out space for work done without AI at all. After all, few people want to lose their thinking self entirely.

Or it could mean doing what Erik Meijer is doing. Now that coding increasingly feels like management, he said, he has turned back to making music—using real instruments—as a hobby, simply “to experience that flow.”

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

As Utah gives the AI power to prescribe some drugs, physicians warn of patient risks – by Beatrice Nolan

Google and Character.AI agree to settle lawsuits over teen suicides linked to AI chatbots – by Beatrice Nolan

OpenAI launches ChatGPT Health in a push to become a hub for personal health data – by Sharon Goldman

Google takes first steps toward an AI product that can actually tackle your email inbox – by Jacqueline Munis

Fusion power nearly ready for prime time as Commonwealth builds first pilot for limitless, clean energy with AI help from Siemens, Nvidia – by Jordan Blum

AI IN THE NEWS

Anthropic raising fresh $10 billion at $350 billion valuation. According to the Wall Street Journal, OpenAI rival Anthropic is planning to raise $10 billion at a roughly $350 billion valuation, nearly doubling its worth from just four months ago. The round is expected to be led by GIC and Coatue Management, following a $13 billion raise in September that valued the company at $183 billion. The financing underscores the continued boom in AI funding—AI startups raised a record $222 billion in 2025, per PitchBook—and comes as Anthropic is also preparing for a potential IPO this year. Founded in 2021 by siblings Dario Amodei and Daniela Amodei, Anthropic has become a major OpenAI rival, buoyed by Claude’s popularity with business users, major backing from Nvidia and Microsoft, and expectations that it will reach break-even by 2028—potentially faster than OpenAI, which is itself reportedly seeking to raise up to $100 billion at a $750 billion valuation.

Musk’s xAI closed $20 billion funding with Nvidia backing. Bloomberg reported that xAI, the AI startup founded by Elon Musk, has completed a $20 billion funding round backed by investors including Nvidia, Valor Equity Partners, and the Qatar Investment Authority, underscoring the continued flood of capital into AI infrastructure. Other backers include Fidelity Management & Research, StepStone Group, MGX, Baron Capital Group, and Cisco’s investment arm. The financing—months in the making—will fund xAI’s rapid infrastructure buildout and product development, the company said, and includes a novel structure in which a large portion of the capital is tied to a special-purpose vehicle used to buy Nvidia GPUs that are then rented out, allowing investors to recoup returns over time. The deal comes as xAI has been under fire for its chatbot Grok producing non-consensual “undressing” images of real people.

Can AI do your job? See the results from hundreds of tests. I wanted to shout-out this fascinating new interactive feature in the Washington Post, which presented a new study that found that despite fears of mass job displacement, today’s AI systems are still far from being able to replace humans on real-world work. Researchers from Scale AI and the Center for AI Safety tested leading models from OpenAI, Google, and Anthropic on hundreds of actual freelance projects—from graphic design and creating dashboards to 3D modeling and games—and found that the best AI systems successfully completed just 2.5% of tasks on their own. While AI often produced outputs that looked plausible at first glance, closer inspection revealed missing details, visual errors, incomplete work, or basic technical failures, highlighting gaps in areas like visual reasoning, long-term memory, and the ability to evaluate subjective outcomes. The findings challenge predictions that AI is poised to automate large swaths of human labor anytime soon, even as newer models show incremental improvement and the economics of cheaper, semi-autonomous AI work continue to put pressure on remote and contract workers.

EYE ON AI NUMBERS

91.8%

That’s the percentage of Meta employees who admitted to not using the company’s AI chatbot, Meta AI, in their day-to-day work, according to new data from Blind, a popular anonymous professional social network. 

 

According to a survey of 400 Meta employees, only 8.2% said they use Meta AI. The most popular chatbot was Anthropic’s Claude, used by more than half (50.7%) of the Meta employees surveyed. Another 17.7% said they use Google’s Gemini, and 13.7% said they use OpenAI’s ChatGPT.

 

When approached for comment, a Meta spokesperson pointed out that the sample (400 of 77,000+ employees) is “not even a half percent of our total employee population.”

AI CALENDAR

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Jan. 20-27: AAAI Conference on Artificial Intelligence, Singapore.

Feb. 10-11: AI Action Summit, New Delhi, India.

March 2-5: Mobile World Congress, Barcelona, Spain.

March 16-19: Nvidia GTC, San Jose, Calif.

April 6-9: HumanX, San Francisco. 


