
The AI jobs apocalypse isn’t upon us, according to new data

Hello and welcome to Eye on AI. In this edition: No AI Jobpocalypse, plus early signs of life for entry-level jobs…OpenAI launches Sora 2…Meta plans to use AI chatbot conversations to personalize ads…and more companies are disclosing AI-related risks.

Hi, Beatrice Nolan here, filling in for AI reporter Sharon Goldman, who is out today. For all the corporate hype and Silicon Valley hand-wringing, new research suggests that the U.S. jobs market hasn’t yet experienced the AI apocalypse some have warned about.

In a new report, researchers from Yale’s Budget Lab and the Brookings Institution said they had found no evidence of any “discernible disruption” to jobs since the launch of OpenAI’s ChatGPT in November 2022. The study found that most of the ongoing shifts in the U.S. occupational mix, a measure of the types of jobs people hold, were already underway in 2021, and recent changes don’t appear any more dramatic.

“While the occupational mix is changing more quickly than it has in the past, it is not a large difference and predates the widespread introduction of AI in the workforce,” the researchers wrote in the report. “Currently, measures of exposure, automation, and augmentation show no sign of being related to changes in employment or unemployment.”

Industries with higher AI exposure, such as Information, Financial Activities, and Professional and Business Services, have seen some downward shifts, but these trends largely began before ChatGPT’s launch.

The conclusion isn’t altogether shocking, although it flies in the face of some of the AI doomsayers’ more dramatic claims. Historically, major workplace disruptions have unfolded over decades, not months or years. Computers, for example, didn’t become common in offices until nearly 10 years after their debut, and it was even longer before they reshaped workflows. If AI ends up transforming the labor market as dramatically as computers did—or more so—it’s reasonable to expect that broad effects will take longer than three years to appear.

Some executives have also told me they are taking a “wait and see” approach to hiring while they assess whether the tech can really deliver on its productivity promises. This approach can slow hiring and make the labor market feel sluggish, but it doesn’t necessarily mean workers are being automated out of their jobs.

While anxiety over the effects of AI on today’s labor market may be widespread, the new data suggests those fears are, for now, largely speculative.

Entry-level hiring woes

The real hiring pain has been felt by college grads and entry-level workers.

There’s no denying that AI is well suited to the tasks typically done by this class of workers, and companies have increasingly been saying the quiet part out loud when it comes to junior roles. But claims that AI is keeping recent graduates out of work aren’t entirely supported by the new data. When researchers compared jobless rates for recent graduates with those of more experienced workers, new grads seemed to be having a slightly tougher time landing roles, but the gap wasn’t big enough to suggest technology is the main factor.

The researchers found a small increase in occupational dissimilarity compared to older graduates, which could reflect early AI effects but also could just as easily be attributed to labor market trends, including employers’ and job-seekers’ reactions to noise about AI replacing workers. The report suggests that entry-level struggles are more likely to be part of broader labor market dynamics rather than a direct result of AI adoption.

Recently, there have also been anecdotal but promising signs of life in the entry-level job market. For example, Shopify and Cloudflare are both increasing their intern intake this year, with Cloudflare calling AI tools a way “to multiply how new hires can contribute to a team” rather than a replacement for the new hires themselves. Younger workers are typically more receptive, more eager to experiment, and more creative when it comes to using emerging technology, which could give companies that hire them an edge. As U.K.-based programmer Simon Willison put it: “An intern armed with AI tools can produce value a whole lot faster than interns in previous years.”

The researchers cautioned that the analysis isn’t predictive, and they plan to keep updating their findings. They also warned that the sample size is small.

Just because AI hasn’t significantly impacted the labor market yet doesn’t mean it won’t in the future. Some recent assessments, such as OpenAI’s new GDPval benchmark, show that leading AI models are getting better at professional tasks, performing at or above human-expert level in roughly half of cases, depending on the sector. As AI tools improve and companies get better at integrating them, the tech could have a more direct impact on the workforce.

But should we be thinking of AI as just the next computer, or as a new industrial revolution? At least for now, the jury’s still out.

With that, here’s the rest of the AI news.

Beatrice Nolan
bea.nolan@fortune.com
@beafreyanolan

FORTUNE ON AI

We’re not in an ‘AI winter’—but here’s how to survive a cold snap —by Sharon Goldman

California governor signs landmark AI safety law, forcing major tech companies to disclose protocols and protect whistleblowers —by Beatrice Nolan

How OpenAI and Stripe’s latest move could blow up online shopping as we know it —by Sharon Goldman

Meta is exploiting the ‘illusion of privacy’ to sell you ads based on chatbot conversations, top AI ethics expert says—and you can’t opt out —by Eva Roytburg

AI IN THE NEWS

Meta plans to use AI chatbot conversations to personalize ads. Meta will begin using chats with its AI assistant to shape ads and content recommendations across Facebook and Instagram. The company announced the update to its recommendation system on Wednesday, adding it will take effect on Dec. 16, with user notifications beginning Oct. 7. The company told the Wall Street Journal that it will not use conversations about religion, politics, sexual orientation, health, or race and ethnicity to personalize ads or content. The move will tie Meta’s massive investments in generative AI into its core ad business. Users can’t opt out, but those who don’t use Meta AI won’t be affected, according to the Journal.

Mira Murati’s Thinking Machines Lab launches its first product. Thinking Machines, an AI lab led by former OpenAI CTO Mira Murati, has launched a tool that automates the creation of custom frontier AI models. Murati told Wired the tool, called Tinker, “will help empower researchers and developers to experiment with models and will make frontier capabilities much more accessible to all people.” The team believes that giving users the tools to fine-tune frontier models will demystify the process of model tuning, make advanced AI accessible beyond big labs, and help to unlock specialized capabilities in areas like math, law, or medicine. The startup raised $2 billion in seed funding in July 2025, before releasing any products, and is made up of a team of top researchers including John Schulman, who cofounded OpenAI and led the creation of ChatGPT. Read more from Wired.

OpenAI launches a new version of Sora. OpenAI has launched Sora 2, its next-generation AI video and audio model, along with a companion app that lets users create, share, and remix AI-generated videos. The new model improves photorealistic motion, generates speech, and introduces “cameos,” allowing users to insert themselves into videos via a short verification recording. However, according to the Wall Street Journal, the new video generator requires copyright holders to opt out. This means that movie studios and other IP owners must actively request that OpenAI exclude their copyrighted material from videos generated by the new version of Sora. A later report from 404 Media found that users are able to generate strange and often offensive content featuring copyrighted characters like Pikachu, SpongeBob SquarePants, and figures from The Simpsons. Read more from 404 Media here.

A new startup is scooping up top AI researchers. Periodic Labs, a new San Francisco startup founded by ChatGPT co-creator Liam Fedus and former DeepMind scientist Ekin Dogus Cubuk, has recruited a string of top AI researchers from OpenAI, Google DeepMind, and Meta, according to the New York Times. More than 20 researchers, including Rishabh Agarwal, who was poached by Meta from DeepMind just a few months ago, have left their work at major AI companies to join the startup focused on building AI that accelerates real-world scientific discovery in physics, chemistry, and materials science. It’s backed by $300 million in funding and plans to use robots to run large-scale lab experiments. Read more from the New York Times.

AI CALENDAR

Oct. 6-10: World AI Week, Amsterdam.

Oct. 21-22: TedAI San Francisco.

Nov. 10-13: Web Summit, Lisbon. 

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

72% 

That’s the percentage of S&P 500 companies that have disclosed an AI-related risk this year, according to The Conference Board, a nonprofit think tank and business membership organization, and ESGAUGE, a data analytics firm. Public-company disclosure of AI as a material risk has surged over the past two years, with the share of S&P 500 companies citing one jumping from 12% in 2023 to 72% this year.

Reputational risk is the most frequently cited concern around AI, disclosed by 38% of companies in 2025. Cybersecurity was second, cited by 20% of firms in both 2024 and 2025. While all sectors are disclosing risks, financials, health care, and industrials have seen the sharpest rise. This may be because financial and health care companies face regulatory risks tied to sensitive data and fairness, while industrials are increasingly scaling automation and robotics.

“The rise in AI-related risk disclosures reflects the rapid mainstreaming of AI across corporate functions in recent years, as companies embed it more deeply into areas such as supply chains, customer engagement, and product development,” Andrew Jones, principal researcher at The Conference Board, told Fortune. “With adoption expanding, firms have increased their internal focus on governance, compliance, and operational considerations, with boards, risk committees, and legal teams evaluating potential challenges from data privacy and bias to regulatory uncertainty and liability.” 

The dramatic surge in disclosures does signal that more companies are seeing AI integration as a material risk that needs to be actively managed and communicated to investors. The findings were based on Form 10-K filings from S&P 500 companies available through Aug. 15, 2025.




Palantir CEO says AI “will destroy” humanities jobs but there will be “more than enough jobs” for people with vocational training

Some economists and experts say that critical thinking and creativity will be more important than ever in the age of artificial intelligence (AI), when a robot can do much of the heavy lifting on coding or research. Take Benjamin Shiller, the Brandeis economics professor who recently told Fortune that a “weirdness premium” will be valued in the labor market of the future. Alex Karp, the Palantir cofounder and CEO, isn’t one of these voices.

“It will destroy humanities jobs,” Karp said when asked how AI will affect jobs in conversation with BlackRock CEO Larry Fink at the World Economic Forum annual meeting in Davos, Switzerland. “You went to an elite school and you studied philosophy — I’ll use myself as an example — hopefully you have some other skill, that one is going to be hard to market.”

Karp attended Haverford College, a small, elite liberal arts college outside his hometown of Philadelphia. He earned a J.D. from Stanford Law School and a Ph.D. in philosophy from Goethe University in Germany. He spoke about his own experience getting his first job. 

Karp told Fink that he remembered thinking about his own career, “I’m not sure who’s going to give me my first job.” 

The answer echoed past comments Karp has made about certain types of elite college graduates who lack specialized skills.

“If you are the kind of person that would’ve gone to Yale, classically high IQ, and you have generalized knowledge but it’s not specific, you’re effed,” Karp said in an interview with Axios in November. 

Not every CEO agrees with Karp’s assessment that humanities degrees are doomed. BlackRock COO Robert Goldstein told Fortune in 2024 that the company was recruiting graduates who studied “things that have nothing to do with finance or technology.” 

McKinsey CEO Bob Sternfels recently said in an interview with Harvard Business Review that the company is “looking more at liberal arts majors, whom we had deprioritized, as potential sources of creativity,” to break out of AI’s linear problem-solving. 

Karp has long been an advocate for vocational training over traditional college degrees. Last year, Palantir launched a Meritocracy Fellowship, offering high school students a paid internship with a chance to interview for a full-time position at the end of four months. 

In its announcement of the fellowship, the company criticized American universities for “indoctrinating” students and having “opaque” admissions that “displaced meritocracy and excellence.”

“If you did not go to school, or you went to a school that’s not that great, or you went to Harvard or Princeton or Yale, once you come to Palantir, you’re a Palantirian—no one cares about the other stuff,” Karp said during a Q2 earnings call last year.

“I think we need different ways of testing aptitude,” Karp told Fink. He pointed to a former police officer who attended a junior college and now manages the U.S. Army’s Maven system, a Palantir-made AI tool that processes drone imagery and video.

“In the past, the way we tested for aptitude would not have fully exposed how irreplaceable that person’s talents are,” he said. 

Karp also gave the example of technicians building batteries at a battery company, saying those workers are “very valuable if not irreplaceable because we can make them into something different than what they were very rapidly.”

He said what he does all day at Palantir is “figuring out what is someone’s outlier aptitude. Then, I’m putting them on that thing and trying to get them to stay on that thing and not on the five other things they think they’re great at.” 

Karp’s comments come as more employers report a gap between the skills applicants offer and those they’re looking for in a tough labor market. The unemployment rate for young workers ages 16 to 24 hit 10.4% in December and is rising among college graduates. Karp isn’t too worried.

“There will be more than enough jobs for the citizens of your nation, especially those with vocational training,” he said. 




AI is boosting productivity. Here’s why some workers feel a sense of loss

Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…Why some workers feel a sense of loss while AI boosts productivity…Anthropic raising fresh $10 billion at $350 billion valuation…Musk’s xAI closed $20 billion funding with Nvidia backing…Can AI do your job? See the results from hundreds of tests.

For months, software developers have been giddy with excitement over “vibe coding” (prompting desired software functions or features in natural language) with the latest AI code-generation tools. Anthropic’s Claude Code is the darling of the moment, but OpenAI’s Codex, Cursor, and other tools have also led engineers to flood social media with examples of tasks that used to take days and are now finished in minutes.

Even veteran software design leaders have marveled at the shift. “In just a few months, Claude Code has pushed the state of the art in software engineering further than 75 years of academic research,” said Erik Meijer, a former senior engineering leader at Meta.

Skills honed seem less essential

However, that same delight has turned disorienting for many developers, who are grappling with a sense of loss as skills honed over a lifetime suddenly seem less essential. The feeling of flow—of being “in the zone”—seems to have vanished as building software becomes an exercise in supervising AI tools rather than writing code. 

In a blog post this week titled “The Grief When AI Writes All the Code,” Gergely Orosz of The Pragmatic Engineer wrote that he is “coming to terms with the high probability that AI will write most of my code which I ship to production.” It already does the work faster, he explained, and for languages and frameworks he is less familiar with, it does a better job.

“It feels like something valuable is being taken away, and suddenly,” he wrote. “It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn’t work as it should.” 

Andrew Duca, founder of the tax software company Awaken Tax, wrote a similar post this week that went viral, saying that he was feeling “kinda depressed” even though he finds using Claude Code “incredible” and has “never found coding more fun.”

He can now solve customer problems faster, and ship more features, but at the same time “the skill I spent 10,000s of hours getting good at…is becoming a full commodity extremely quickly,” he wrote. “There’s something disheartening about the thing you spent most of your life getting good at now being mostly useless.” 

Software development has long been on the front lines of the AI shift, partly because there are decades of code, documentation, and public problem-solving (from sites like GitHub) available online for AI models to train on. Coding also has clear rules and fast feedback (it runs or it doesn’t), so AI systems can easily learn how to generate useful responses. That means programming has become one of the first white-collar professions to feel AI’s impact so directly.

These tensions will affect many professions

These tensions, however, won’t be confined to software developers. White-collar workers across industries will ultimately have to grapple with them in one way or another. Media headlines often focus on the possibility of mass layoffs driven by AI; the more immediate issue may be how AI reshapes how people feel about their work. AI tools can move us past the hardest parts of our jobs more quickly—but what if that struggle is part of what allows us to take pride in what we do? What if the most human elements of work—thinking, strategizing, working through problems—are quietly sidelined by tools that prize speed and efficiency over experience?

Of course, there are plenty of jobs and workflows where most people are very happy to use AI to say buh-bye to repetitive grunt work that they never wanted to do in the first place. And as Duca said, we can marvel at the incredible power of the latest AI models and leap to use the newest features even while we feel unmoored. 

Many white-collar workers will likely face a philosophical reckoning about what AI means for their profession—one that goes beyond fears of layoffs. It may resemble the familiar stages of grief: denial, anger, bargaining, depression, and, eventually, acceptance. That acceptance could mean learning how to be the best manager or steerer of AI possible. Or it could mean deliberately carving out space for work done without AI at all. After all, few people want to lose their thinking self entirely.

Or it could mean doing what Erik Meijer is doing. Now that coding increasingly feels like management, he said, he has turned back to making music—using real instruments—as a hobby, simply “to experience that flow.”

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

As Utah gives the AI power to prescribe some drugs, physicians warn of patient risks – by Beatrice Nolan

Google and Character.AI agree to settle lawsuits over teen suicides linked to AI chatbots – by Beatrice Nolan

OpenAI launches ChatGPT Health in a push to become a hub for personal health data – by Sharon Goldman

Google takes first steps toward an AI product that can actually tackle your email inbox – by Jacqueline Munis

Fusion power nearly ready for prime time as Commonwealth builds first pilot for limitless, clean energy with AI help from Siemens, Nvidia – by Jordan Blum

AI IN THE NEWS

Anthropic raising fresh $10 billion at $350 billion valuation. According to the Wall Street Journal, OpenAI rival Anthropic is planning to raise $10 billion at a roughly $350 billion valuation, nearly doubling its worth from just four months ago. The round is expected to be led by GIC and Coatue Management, following a $13 billion raise in September that valued the company at $183 billion. The financing underscores the continued boom in AI funding—AI startups raised a record $222 billion in 2025, per PitchBook—and comes as Anthropic is also preparing for a potential IPO this year. Founded in 2021 by siblings Dario Amodei and Daniela Amodei, Anthropic has become a major OpenAI rival, buoyed by Claude’s popularity with business users, major backing from Nvidia and Microsoft, and expectations that it will reach break-even by 2028—potentially faster than OpenAI, which is itself reportedly seeking to raise up to $100 billion at a $750 billion valuation.

Musk’s xAI closes $20 billion funding round with Nvidia backing. Bloomberg reported that xAI, the AI startup founded by Elon Musk, has completed a $20 billion funding round backed by investors including Nvidia, Valor Equity Partners, and the Qatar Investment Authority, underscoring the continued flood of capital into AI infrastructure. Other backers include Fidelity Management & Research, StepStone Group, MGX, Baron Capital Group, and Cisco’s investment arm. The financing—months in the making—will fund xAI’s rapid infrastructure buildout and product development, the company said, and includes a novel structure in which a large portion of the capital is tied to a special-purpose vehicle used to buy Nvidia GPUs that are then rented out, allowing investors to recoup returns over time. The deal comes as xAI has been under fire for its chatbot Grok producing non-consensual “undressing” images of real people.

Can AI do your job? See the results from hundreds of tests. I wanted to shout out this fascinating new interactive feature in the Washington Post, which presented a new study that found that despite fears of mass job displacement, today’s AI systems are still far from being able to replace humans on real-world work. Researchers from Scale AI and the Center for AI Safety tested leading models from OpenAI, Google, and Anthropic on hundreds of actual freelance projects—from graphic design and creating dashboards to 3D modeling and games—and found that the best AI systems successfully completed just 2.5% of tasks on their own. While AI often produced outputs that looked plausible at first glance, closer inspection revealed missing details, visual errors, incomplete work, or basic technical failures, highlighting gaps in areas like visual reasoning, long-term memory, and the ability to evaluate subjective outcomes. The findings challenge predictions that AI is poised to automate large swaths of human labor anytime soon, even as newer models show incremental improvement and the economics of cheaper, semi-autonomous AI work continue to put pressure on remote and contract workers.

EYE ON AI NUMBERS

91.8%

That’s the percentage of Meta employees who admitted to not using the company’s AI chatbot, Meta AI, in their day-to-day work, according to new data from Blind, a popular anonymous professional social network.

According to a survey of 400 Meta employees, only 8.2% said they use Meta AI. The most popular chatbot was Anthropic’s Claude, used by more than half (50.7%) of those surveyed, while 17.7% said they use Google’s Gemini and 13.7% OpenAI’s ChatGPT.

When approached for comment, a Meta spokesperson pointed out that the sample (400 of more than 77,000 employees) is “not even a half percent of our total employee population.”

AI CALENDAR

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Jan. 20-27: AAAI Conference on Artificial Intelligence, Singapore.

Feb. 10-11: AI Action Summit, New Delhi, India.

March 2-5: Mobile World Congress, Barcelona, Spain.

March 16-19: Nvidia GTC, San Jose, Calif.

April 6-9: HumanX, San Francisco. 




Trust has become the crisis CEOs can’t ignore at Davos, as new data show 70% of people turning more ‘insular’

Everywhere you turn in Davos this year, people are talking about trust. And there’s no one who knows trust better than Richard Edelman. Back in 1999, Edelman was on the cusp of taking over the PR firm founded by his father, Daniel. Spurred by the WTO protests in Seattle that year, he decided to measure the level of trust in NGOs compared with business, government, and media. Edelman surveyed 1,300 thought leaders in the U.S., U.K., France, Germany, and Australia, and the Edelman Trust Barometer was born.

While the survey sample long ago expanded beyond elites to include about 34,000 respondents in 28 nations, its results are still unveiled and debated every year at the ultimate gathering of elites: the World Economic Forum. This year’s findings are grim. About 70% of respondents now have an “insular” mindset, meaning they don’t want to talk to, work for, or even be in the same space with anyone who doesn’t share their worldview. And “a sense of grievance” permeates the business world, Edelman finds. At Davos, debating such findings has spawned a series of dinners, panels, cocktails, and media briefings on site. What better place to bring people together than the world’s most potent village green?

I moderated a CEO salon dinner with about three dozen leaders last night to discuss what they’re seeing and doing when it comes to building trust. Before the dinner, I asked Edelman what he’d like to see this year, after 26 winters of highlighting the erosion of trust. “Urgency,” he said. “A sense that time is running out.”

Because the gathering itself was held under the Chatham House rule, I won’t share names and direct quotes. But the focus was on how attendees are trying to address the problem through what Edelman calls “trust brokering,” or finding common ground through practices ranging from nonjudgmental communication to “polynational” business models that invest in long-term local relationships. (See the report for more information.) There were some success stories from the front lines of college campuses, politics, and industries caught in a crossfire of misinformation.

Still, the mood was somewhat subdued, with a sense that there’s no easy fix to building trust. As one CEO pointed out, rarely have leaders faced such a confluence of geopolitical crises, tech shifts, economic divides, disinformation, job disruption and wicked problems. And as much as Davos is a great gathering ground to talk through all of these problems, the fact is the problems will all still be waiting once these CEOs return from the mountains.

This story was originally featured on Fortune.com




Copyright © Miami Select.