
Silicon Valley’s tone-deaf take on the AI backlash will matter in 2026

Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…why Silicon Valley needs to read the room on AI skepticism…how Christian leaders are challenging the AI boom…Instacart ends AI-driven pricing tests that pushed up costs for some shoppers…and what will your life look like in 2035?

I’ve noticed a familiar frustration in Silicon Valley with public skepticism toward AI. The complaint goes like this: People outside the industry don’t appreciate the rapid, visible—and, to insiders, near-miraculous—advances that AI systems are making. Instead, critics and everyday users believe either that AI progress has stalled, or that the technology is just a hungry, plagiarizing machine spewing useless slop.

To AI optimists from San Francisco to San Jose, that skepticism is deeply misguided. AI progress is not stopping anytime soon, they argue, and the technology is already helping humanity—by contributing to cutting-edge research and boosting productivity, particularly in areas like coding, math, and science.

Take this excerpt from a recent post by Roon, a popular pseudonymous account on X written by an OpenAI researcher:

“Every time I use Codex to solve some issue late at night or GPT helps me figure out a difficult strategic problem, I feel: what a relief. There are so few minds on Earth that are both intelligent and persistent enough to generate new insights and keep the torch of scientific civilization alive. Now you have potentially infinite minds to throw at infinite potential problems. Your computer friend that never takes the day off, never gets bored, never checks out and stops trying.”

I understand Roon’s excitement—and his impatience with people who seem eager to declare AI a bubble every time it hits a setback. Who wouldn’t want, as he puts it, a “computer friend that never takes the day off, never gets bored, never checks out and stops trying”?

Thrilling to one may sound threatening to another

The answer, in fact, is: many. What sounds like thrilling abundance to people building AI often sounds unsettling—or even threatening—to everyone else. Even among the hundreds of millions now using tools like ChatGPT, Gemini, and Claude, there is plenty of anxiety. Maybe it’s concern about jobs. Maybe it’s a data center coming to their backyard. Maybe it’s the fear that the benefits of the AI boom will accrue only to a narrow set of companies and communities. Or maybe it’s the fact that many people are already preoccupied with non-AI problems—making rent, saving for a home, raising a family, dealing with health issues, keeping the lights on.

In that context, the promise of a tireless, 24/7 digital mind can feel distant from daily life—or worse, like a threat to livelihoods and self-worth. And for many (even me, in my freaked-out moments), it simply feels creepy.

The disconnect will only grow harder to ignore in 2026

As we head into 2026, Silicon Valley needs to read the room. The disconnect between how AI is framed by its builders and how it’s experienced by the public isn’t being properly addressed, and it will only grow harder to ignore as societal and political backlash builds.

On X yesterday, Sebastian Caliri, a partner at venture capital firm 8VC, argued that “folks in tech do not appreciate that the entire country is polarized against tech.” Silicon Valley needs a better story, he said: a story that people can really buy into.

“People do not care about competition with China when they can’t afford a house and healthcare is bankrupting them,” he wrote. “If you want our industry to flourish, and you earnestly believe we will be better off in 5 years by embracing AI, you need to start showing ordinary people a reason to believe you and quickly.” 

My take is that AI companies spend an enormous amount of time trying to impress: Look at what my AI can do! And yes, as someone who uses generative AI every single day, I agree it is incredibly impressive—regardless of what the critics say, and regardless of whether you believe Big Tech ever had the right to scrape the entire internet to make it so.

But ordinary people don’t need to be impressed. They need answers: about jobs, costs, and who actually benefits; about societal impact and what their own futures look like in an AI-driven economy; about what billionaires are really discussing behind closed doors. Without that, all the AI bells and whistles in the world won’t bring people on board. What you’ll get instead is skepticism—and not because people don’t understand AI, but because, given what’s at stake, it’s a rational response.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

Google Cloud chief reveals the long game: a decade of silicon and the energy battle behind the AI boom – by Nick Lichtenberg 

Little-known underground salt caverns could slow the AI boom and its thirst for power – by Jordan Blum

Exclusive: Cursor acquires code review startup Graphite as AI coding competition heats up – by Beatrice Nolan

AI IN THE NEWS

How Christian leaders are challenging the AI boom. This interesting article from Time reports that Christian leaders across denominations and affiliations—including Catholics, evangelicals, and Baptists—are increasingly vocal in pushing back against the rapid acceleration of AI and urging caution in both public discourse and policy. Rather than rejecting technology outright, many faith figures are concerned about AI’s impact on family life, human relationships, labor, children, and organized religion itself. They are raising these issues in sermons, open letters, and conversations with lawmakers. At the top of the Catholic hierarchy, Pope Leo XIV has used his platform to warn about AI’s potential harms, even as he acknowledges possible benefits like spreading the Gospel. Other leaders have criticized AI companions for isolating users, especially young people, and expressed discomfort with Silicon Valley’s use of religious language to promote technology. 

Instacart ends AI-driven pricing tests that pushed up costs for some shoppers. According to CNBC, Instacart said it will stop allowing retailers to run AI-driven pricing experiments on its grocery delivery platform after consumer groups and lawmakers raised alarms that shoppers were paying different prices for identical items at the same store. The company said retailers will no longer be able to use its Eversight technology—acquired for $59 million in 2022—to test price increases or decreases on Instacart, after acknowledging that the experiments “missed the mark” and undermined trust at a time when families are struggling with food costs. A Consumer Reports–led study found that identical baskets of goods could vary in price by about 7%, potentially costing customers more than $1,000 extra per year. While Instacart said the tests were not based on personal data and rejected claims of “surveillance pricing,” the move comes amid growing regulatory scrutiny, including an FTC inquiry into its pricing practices and a recent $60 million settlement over deceptive subscription tactics.

What will your life look like in 2035? I want to shout out this really cool interactive piece from the Guardian, which explores how everyday life might look in 2035 as a future artificial general intelligence (AGI) becomes deeply embedded in society, transforming work, health care, farming, law, and daily routines. For example, by then AI could act as the first point of contact in medicine—handling pre-diagnosis and personalized treatment suggestions—while human doctors focus on oversight and wearable AI devices manage information and anticipate needs. In professions like law and agriculture, advanced AI could handle research, argument preparation, and real-time monitoring of crops and livestock, potentially increasing efficiency but raising questions about fairness, bias, and transparency. Work itself may shift dramatically: AI augmentation could boost productivity, enabling shorter workweeks and more leisure for some, even as others get laid off or struggle with purpose and mental health in a world where routine tasks are automated. 

EYE ON AI RESEARCH

Can LLMs actually discover science and function as “AI scientists”? The answer is no, according to this interesting new paper from Harvard and MIT, which found that today’s most sophisticated LLMs may talk and write like scientists, but they don’t think like scientists. 

When the 50+ co-authors from around the world evaluated state-of-the-art LLMs on a new framework, they found that performance on scientific discovery tasks lagged behind results on standard science benchmarks; scaling up models and enhancing reasoning yielded diminishing returns for discovery-oriented tasks; and there were systematic weaknesses shared across different top models, suggesting that current architectures aren’t yet well suited for real scientific workflows.

The paper noted that LLMs do show promise on parts of the discovery process, especially when guided exploration and serendipity are involved, and the authors argue that the framework they used provides a practical path for future progress toward AI that can truly assist scientific discovery.

AI CALENDAR

Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Feb. 10-11: AI Action Summit, New Delhi, India.

April 6-9: HumanX, San Francisco. 

BRAIN FOOD

For Brain Food this week, I’ve turned to our fearless AI editor, Jeremy Kahn, for his 2026 predictions. Here are his top five: 

  1. American open source AI has a moment. The story of 2025 was that of open source AI models, mostly from China, rapidly closing the performance gap with the frontier proprietary models produced by the three leading U.S. AI companies: OpenAI, Anthropic, and Google. In 2026, I predict we will see a wave of new venture-backed U.S. startups entering the open source AI space, releasing a powerful set of AI models that will surpass their Chinese rivals and be competitive on many leaderboards with the proprietary frontier models.
  2. China will unveil a Huawei chip that it says equals the performance of Nvidia’s GB200. The past year saw Chinese chipmakers making major strides, but still not reaching the performance, especially for training, of Nvidia’s top-of-the-line chips. The Trump administration has now authorized Nvidia to sell its H200 chip in China, which may dampen demand for a domestic alternative. But the Chinese government sees creating a domestic chip to rival Nvidia as a strategic priority, so it’s unlikely that China will remain behind Nvidia for much longer. 
  3. Ilya Sutskever’s startup will achieve a breakthrough. Ilya Sutskever’s startup, Safe Superintelligence (SSI), will release a model that achieves state-of-the-art results on demanding benchmarks designed to test generalization, including ARC-AGI-2 and MultiNet. But Sutskever will decline to disclose how the company achieved those gains, touching off intense speculation over whether SSI has unlocked a fundamentally new architectural approach—or simply combined a series of powerful, but less revolutionary, “optimizations.”
  4. Congress will pass regulations around how AI chatbots can interact with children and teenagers. The rules will seek to impose age verification and limit the extent to which chatbots can engage in certain kinds of dialogue with kids. The bill will have bipartisan support. 
  5. More and more Fortune 500 companies will begin to publicly report significant ROI from AI deployments. As a result, the revenue at the major cloud providers (Amazon AWS, Microsoft Azure, and Google Cloud) will continue to grow 30% year over year. 

FORTUNE AIQ: THE YEAR IN AI—AND WHAT’S AHEAD

Businesses took big steps forward on the AI journey in 2025, from hiring Chief AI Officers to experimenting with AI agents. The lessons learned—both good and bad—combined with the technology’s latest innovations will make 2026 another decisive year. Explore all of Fortune AIQ, and read the latest playbook below:

The 3 trends that dominated companies’ AI rollouts in 2025.

2025 was the year of agentic AI. How did we do?

AI coding tools exploded in 2025. The first security exploits show what could go wrong.

The big AI New Year’s resolution for businesses in 2026: ROI.

Businesses face a confusing patchwork of AI policy and rules. Is clarity on the horizon?




Republican lawmaker and notable Trump critic Ben Sasse announces stage 4 cancer

Former Nebraska U.S. Sen. Ben Sasse, a conservative who rebuked political tribalism and stood out as a longtime critic of President Donald Trump, announced Tuesday that he was diagnosed with advanced pancreatic cancer.

Sasse, 53, made the announcement on social media, saying he learned of the disease last week and is “now marching to the beat of a faster drummer.”

“This is a tough note to write, but since a bunch of you have started to suspect something, I’ll cut to the chase,” Sasse wrote. “Last week I was diagnosed with metastasized, stage-four pancreatic cancer, and am gonna die.”

Sasse was first elected to the Senate in 2014. He comfortably won reelection in 2020 after fending off a pro-Trump primary challenger. Sasse drew the ire of GOP activists for his vocal criticism of Trump’s character and policies, including questioning his moral values and saying he cozied up to adversarial foreign leaders.

Sasse was one of seven Republican senators to vote to convict the former president of “incitement of insurrection” after the Jan. 6, 2021, attack on the U.S. Capitol. After threats of a public censure back home, he extended his critique to party loyalists who blindly worship one man and rejected him for his refusal to bend the knee.

He resigned from the Senate in 2023 to serve as the 13th president of the University of Florida after a contentious approval process. He left that post the following year after his wife was diagnosed with epilepsy.

Sasse, who has degrees from Harvard, St. John’s College and Yale, worked as an assistant secretary of Health and Human Services under President George W. Bush. He served as president of Midland University, a small Christian university in eastern Nebraska, before he ran for the Senate.

Sasse and his wife have three children.

“I’m not going down without a fight. One sub-part of God’s grace is found in the jawdropping advances science has made the past few years in immunotherapy and more,” Sasse wrote. “Death and dying aren’t the same — the process of dying is still something to be lived.”





Medicaid paid more than $200 million to dead people, and Trump is rewriting privacy laws to fix it

Medicaid programs made more than $200 million in improper payments to health care providers between 2021 and 2022 for people who had already died, according to a new report from the independent watchdog for the Department of Health and Human Services.

But the department’s Office of Inspector General said it expects a new provision in Republicans’ One Big Beautiful Bill, which requires states to audit their Medicaid beneficiary lists, to help reduce these improper payments in the future.

These kinds of improper payments are “not unique to one state, and the issue continues to be persistent,” Aner Sanchez, assistant regional inspector general in the Office of Audit Services, told The Associated Press. Sanchez has been researching this issue for a decade.

The watchdog report released Tuesday said more than $207.5 million in managed care payments were made on behalf of deceased enrollees between July 2021 and July 2022. The office recommends that the federal government share more information with state governments to recover the incorrect payments — including access to a Social Security database known as the Full Death Master File, which contains more than 142 million records going back to 1899.

Sharing the Full Death Master File data has been tightly restricted by privacy laws that protect against identity theft and fraud.

The massive tax and spending bill that was signed into law by President Donald Trump this summer expands how the Full Death Master File can be used, requiring Medicaid agencies to audit their provider and beneficiary lists against the file quarterly, beginning in 2027. The intent is to stop payments to dead people and improve accuracy.

Tuesday’s report is the first nationwide look at improper Medicaid payments. Since 2016, HHS’ inspector general has conducted 18 audits of a selection of state programs, identifying approximately $289 million in managed care payments that Medicaid agencies improperly made on behalf of deceased enrollees.

The government had some success using the Full Death Master File to prevent improper payments earlier this year. In January, the Treasury Department reported that it had clawed back more than $31 million in federal payments that improperly went to dead people as part of a five-month pilot program after Congress gave Treasury temporary access to the file for three years as part of the 2021 appropriations bill.

Meanwhile, the Social Security Administration has been making unusual updates to the file itself, adding and removing records and complicating its use. For instance, the Trump administration in April moved to classify thousands of living immigrants as dead and cancel their Social Security numbers to crack down on immigrants who had been temporarily allowed to live in the U.S. under programs started during the Biden administration.





There are more self-made billionaires under 30 than ever before—11 of them have made the ultra-wealthy club in the last 3 months thanks to AI

While many Gen Zers are struggling to land entry-level jobs thanks to AI, the same technology is also fueling a new wave of young billionaires. This year, the number of self-made billionaires under 30 hit an all-time high, as entrepreneurial young people have turned growing up with smartphones into billion-dollar startups. 

In 2025, there were more self-made billionaires in their 20s than ever before—about 13 people, up from a previous record of 7—according to an analysis from Forbes.

And most have experienced a wealth surge of late: 11 of the 13 newly minted ultra-wealthy became billionaires within the last three months, including Polymarket CEO Shayne Coplan, Fabian Hedin, cofounder of vibe coding startup Lovable, and AI entrepreneur Arvid Lunnemark.

The majority of these young and ultra-wealthy founders made their fortunes by jumping on the AI industry while it’s hot. For example, 25-year-old Sualeh Asif found success as a cofounder of Anysphere—the company behind Cursor, the popular AI coding tool valued at $29.3 billion.

Adarsh Hiremath and Surya Midha, both just 22, cofounded Mercor: an AI-powered recruiting startup helping connect talent with Silicon Valley’s biggest AI labs. 

Of the 11 young entrepreneurs who became billionaires within the last few months, eight saw their fortunes boom through their AI innovations. 

How the youngest female self-made billionaire under 30 earned her wealth

One of the 11 entrepreneurs under 30 who stepped into newfound wealth late this year was Luana Lopes Lara: the world’s youngest female self-made billionaire ever. 

Earlier this month, Lopes Lara saw her fortune skyrocket to $1.3 billion after her prediction market startup, Kalshi, hit an eye-watering $11 billion valuation. But before making her Wall Street debut, the young entrepreneur was on a different life path. 

The Brazilian-born entrepreneur was once training to be a professional ballerina in Rio. After working for nine months as a professional dancer in Austria, she gave up the grueling career and pivoted to a different dream: becoming the next Steve Jobs.

While studying engineering at MIT, Lopes Lara spent her summers working as an intern at Ray Dalio’s Bridgewater Associates and Ken Griffin’s Citadel Securities. But something clicked when she took up a gig at Five Rings Capital alongside fellow MIT student Tarek Mansour. During this internship, the duo bonded over a shared vision for a prediction market company that would allow users to bet on the outcomes of popular sporting events, elections, and current events.

The entrepreneurs went into business together, and after a successful Y Combinator pitch just a year later, their platform Kalshi was born. In 2020, after receiving Commodity Futures Trading Commission (CFTC) approval, it became the first federally regulated prediction market platform. Earlier this month, Kalshi raised $1 billion, achieving an $11 billion valuation and propelling Lopes Lara and Mansour—who each own around 12% of the company—into the exclusive billionaire club.

This story was originally featured on Fortune.com


