Why restricting graduate loans will bankrupt America’s talent supply chain

Federal Reserve Chair Jerome Powell said at his December 10 press conference that the U.S. labor market is becoming increasingly K-shaped: growth, opportunity, and resilience accrue to those with assets, while everyone else absorbs volatility.

What’s becoming clear is that this divide is no longer confined to the labor market. It’s now embedded in its foundation: education.

When access to advanced degrees depends not on ability or workforce demand, but on whether a household can absorb six figures of upfront cost, stratification accelerates. The upper branch compounds advantage through credentialed mobility. The lower branch absorbs risk, debt, and stalled progression.

That dynamic isn’t neutral. It’s destabilizing.

That is exactly what the restructuring of federal graduate student lending under the One Big Beautiful Bill Act (OBBBA) does. Framed as fiscal discipline, it quietly rewires who gets to advance in the American economy—and who pays more just to try.

A Two-Tiered Talent System

Beginning July 1, 2026, the OBBBA eliminates the Graduate PLUS loan program and replaces it with lifetime federal borrowing caps. Students in a narrow set of “professional degrees” may borrow up to $200,000. Everyone else, regardless of licensure requirements or labor-market demand, is capped at $100,000.

This distinction isn’t grounded in labor force need. It’s grounded in academic prestige.

Medical and law degrees qualify for the higher cap. Advanced nursing, social work, education, and public-health degrees do not, despite requiring licensure, despite severe labor shortages, and despite being the backbone of the care economy.

For many students, that $100,000 cap isn’t theoretical. It’s binding. Especially for those who already carry undergraduate debt, it can mean running out of federal aid before finishing a required degree.

That’s not cost containment. It’s credit rationing.

And when the federal backstop disappears, students don’t stop needing capital. They’re pushed into the private market, where interest rates are higher, protections are weaker, and access depends on credit history or family wealth.

From Merit to Capital

Yale Law Professor Daniel Markovits, author of The Meritocracy Trap, argues that our modern systems of advancement have created a new aristocracy, where the elite maintain dominance not through titles, but through the monopolization of expensive human capital.

Graduate education has now been folded directly into that system. In my recent discussion with Karen Boykin-Towns, Vice Chair of the NAACP National Board of Directors, and Keisha D. Bross, the NAACP’s Director of Opportunity, Race, and Justice, we identified how the OBBBA accelerates this dynamic, creating a capital-versus-merit system.

By capping federal loans while eliminating Grad PLUS, the government isn’t discouraging debt. It’s outsourcing access to private capital. Families with liquidity pay tuition directly. Everyone else pays interest, often at double the rate. This creates a sharp bifurcation:

  1. The Upper Branch: Students with “Capital” (generational wealth or family assets) can bypass the cap using private resources, continuing their upward trajectory into high-value careers.
  2. The Lower Branch: Students with only “Merit” (talent and drive but no family wealth), disproportionately Black women, are shut out.

The result isn’t meritocracy. It’s capital-screened mobility.

And when capital, not capability, determines who becomes a nurse practitioner, a clinical social worker, or a public-health leader, the economy doesn’t get leaner. It gets weaker.

The Intersectional Cost of ‘Money Out’

These loan changes don’t hit all workers equitably.

Women dominate the fields most affected by the lower cap. At least 80% of degree holders in nursing, social work, and elementary education are women. These are precisely the programs now classified as “non-professional.”

Even within the same occupations, women earn less than men. Forcing them to finance advanced degrees with higher-cost private loans raises debt-to-income ratios at career entry, increasing default risk and long-term financial strain.

For Black women, the impact is sharper still.

Black women who attended graduate school hold approximately $58,000 in federal student debt on average, more than white women or Black men. Nearly half of the Black–white student debt gap is driven by graduate borrowing, reflecting how essential advanced degrees are for upward mobility in the absence of intergenerational wealth.

Black women are also heavily concentrated in healthcare and social services, fields now subject to the $100,000 cap. Remove Grad PLUS, and the math changes fast.

Federal graduate loans currently carry fixed rates under 9%. Private loans can soar as high as 18%, particularly for borrowers without prime credit or co-signers. That gap isn’t abstract. It’s interest compounding over decades. 

Consider a Black woman pursuing an MSW who needs $30,000 beyond the new federal cap to finish her degree. Forced into the private market, she trades a federally protected 9% rate for a predatory 18% rate.
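To see what that rate gap means in dollars, here is a minimal sketch of the standard loan amortization math, assuming a 10-year repayment term and the roughly 9% and 18% rates cited above; the loan terms are illustrative assumptions, not the terms of any specific federal or private product.

```python
# Illustrative only: compares total repayment cost on a $30,000 balance
# at a ~9% federal-style rate versus an 18% private-market rate.
# The 10-year amortized term is an assumption for this sketch, not a
# feature of any particular loan program.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed monthly payment from the standard amortization formula."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

principal = 30_000
years = 10

for label, rate in [("federal ~9%", 0.09), ("private 18%", 0.18)]:
    payment = monthly_payment(principal, rate, years)
    total = payment * years * 12
    print(f"{label}: ${payment:,.0f}/month, ${total:,.0f} repaid, "
          f"${total - principal:,.0f} in interest")

# Output is roughly $380/month and ~$15,600 in interest at 9%,
# versus roughly $541/month and ~$34,900 in interest at 18%:
# more than double the interest on the same $30,000 of borrowing.
```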

This shift actively destroys the capacity to build generational wealth. This is also a multigenerational risk: Black women are the breadwinners in 52% of Black households with children. When we financially hobble the primary earner, we are not just restricting her mobility; we are capping the economic future of the 9 million children relying on those households.

We are cannibalizing future retirement security to pay for today’s policy experiment.

Educated, and Still Locked Out

Economic policy is never gender-neutral, and it is rarely race-neutral. The OBBBA financing caps disproportionately target Black women, a demographic that serves as a linchpin in both the educated workforce and the Care Economy.

There’s a persistent myth that student debt reflects low completion or poor outcomes. The data tells a different story. In interviews, NAACP leaders shared job-fair data showing that more than 80% of applicants held a bachelor’s degree or higher. These are educated workers, many with advanced training, struggling to access stable, well-paid roles.

They did what the system asked. They earned credentials. They pursued licensure. And now the rules are changing underneath them. That isn’t a failure of effort. It’s a failure of policy design.

The $290 Billion Macroeconomic Bill

The consequences don’t stop at individual balance sheets. The sectors pushed into the lower loan cap (nursing, social work, and public health) are already facing acute shortages. The U.S. currently has an estimated 1.8 million vacant care jobs.

Failure to address these shortages is projected to cost the economy roughly $290 billion per year in lost GDP by 2030.

When the talent pipeline narrows:

  • Employers compete harder for fewer workers, driving up wages and signing costs.
  • Turnover rises. During the pandemic alone, excess nursing turnover cost between $88 billion and $137 billion.

This is how a student-loan rule becomes a productivity drag.

What a Smarter System Looks Like

If the goal is fiscal responsibility and economic growth, there is a better path.

First, the definition of “professional degree” must reflect labor-market reality, not academic hierarchy. Licensed, high-shortage fields like advanced nursing and clinical social work should qualify for the higher cap. We must value the labor that sustains society as highly as the labor that litigates it.

Second, we need non-debt investment in critical workforce education. Grants and fellowships targeted to shortage fields reduce long-term risk while maximizing return. A graduate degree delivers an estimated net lifetime value of over $300,000 for women. That value should accrue to the economy, not be siphoned off by interest payments.

Third, employers must recognize this as a supply-chain issue. Talent doesn’t appear by accident. Corporate co-investment in education, through tuition support and loan forgiveness, offers one of the highest returns available. Global research suggests health workforce investments can generate returns of up to 10-to-1.

The OBBBA was designed to manage debt. In its current form, it manufactures fragility. It hardens the K-shaped economy at its foundation. It substitutes capital for merit. And it weakens the very labor force the economy depends on to grow.

If we care about productivity, competitiveness, and long-term stability, this is the wrong place to cut. America doesn’t have a talent shortage problem. It has an access problem. And this policy just made it worse.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

This story was originally featured on Fortune.com



Republican lawmaker and notable Trump critic Ben Sasse announces stage 4 cancer

Former Nebraska U.S. Sen. Ben Sasse, a conservative who rebuked political tribalism and stood out as a longtime critic of President Donald Trump, announced Tuesday that he had been diagnosed with advanced pancreatic cancer.

Sasse, 53, made the announcement on social media, saying he learned of the disease last week and is “now marching to the beat of a faster drummer.”

“This is a tough note to write, but since a bunch of you have started to suspect something, I’ll cut to the chase,” Sasse wrote. “Last week I was diagnosed with metastasized, stage-four pancreatic cancer, and am gonna die.”

Sasse was first elected to the Senate in 2014. He comfortably won reelection in 2020 after fending off a pro-Trump primary challenger. Sasse drew the ire of GOP activists for his vocal criticism of Trump’s character and policies, including questioning his moral values and saying he cozied up to adversarial foreign leaders.

Sasse was one of seven Republican senators who voted to convict the former president of “incitement of insurrection” after the Jan. 6, 2021, attack on the U.S. Capitol. After threats of a public censure back home, he extended his critique to party loyalists who, he said, blindly worshipped one man and rejected him for his refusal to bend the knee.

He resigned from the Senate in 2023 to serve as the 13th president of the University of Florida after a contentious approval process. He left that post the following year after his wife was diagnosed with epilepsy.

Sasse, who has degrees from Harvard, St. John’s College and Yale, worked as an assistant secretary of Health and Human Services under President George W. Bush. He served as president of Midland University, a small Christian university in eastern Nebraska, before he ran for the Senate.

Sasse and his wife have three children.

“I’m not going down without a fight. One sub-part of God’s grace is found in the jawdropping advances science has made the past few years in immunotherapy and more,” Sasse wrote. “Death and dying aren’t the same — the process of dying is still something to be lived.”

Medicaid paid more than $200 million to dead people, and Trump is rewriting privacy laws to fix it

Medicaid programs made more than $200 million in improper payments to health care providers between 2021 and 2022 for people who had already died, according to a new report from the independent watchdog for the Department of Health and Human Services.

But the department’s Office of Inspector General said a new provision in Republicans’ One Big Beautiful Bill, which requires states to audit their Medicaid beneficiary lists, may help reduce these improper payments in the future.

These kinds of improper payments are “not unique to one state, and the issue continues to be persistent,” Aner Sanchez, assistant regional inspector general in the Office of Audit Services, told The Associated Press. Sanchez has been researching this issue for a decade.

The watchdog report released Tuesday said more than $207.5 million in managed care payments were made on behalf of deceased enrollees between July 2021 and July 2022. The office recommends that the federal government share more information with state governments to recover the incorrect payments, including access to a Social Security database known as the Full Death Master File, which contains more than 142 million records going back to 1899.

Sharing the Full Death Master File data has been tightly restricted due to privacy laws which protect against identity theft and fraud.

The massive tax and spending bill that President Donald Trump signed into law this summer expands how the Full Death Master File can be used, requiring Medicaid agencies to audit their provider and beneficiary lists against the file each quarter beginning in 2027. The intent is to stop payments to dead people and improve accuracy.

Tuesday’s report is the first nationwide look at improper Medicaid payments. Since 2016, HHS’ inspector general has conducted 18 audits of a selection of state programs and identified approximately $289 million in managed care payments that Medicaid agencies improperly made on behalf of deceased enrollees.

The government had some success using the Full Death Master File to prevent improper payments earlier this year. In January, the Treasury Department reported that it had clawed back more than $31 million in federal payments that improperly went to dead people as part of a five-month pilot program after Congress gave Treasury temporary access to the file for three years as part of the 2021 appropriations bill.

Meanwhile, the Social Security Administration has been making unusual updates to the file itself, adding and removing records and complicating its use. For instance, the Trump administration in April moved to classify thousands of living immigrants as dead and cancel their Social Security numbers in order to crack down on immigrants who had been temporarily allowed to live in the U.S. under programs started during the Biden administration.

Silicon Valley’s tone-deaf take on the AI backlash will matter in 2026

Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…why Silicon Valley needs to read the room on AI skepticism…how Christian leaders are challenging the AI boom…Instacart ends AI-driven pricing tests that pushed up costs for some shoppers…and what will your life look like in 2035?

I’ve noticed a familiar frustration in Silicon Valley with public skepticism toward AI. The complaint goes like this: People outside the industry don’t appreciate the rapid, visible—and, to insiders, near-miraculous—advances that AI systems are making. Instead, critics and everyday users believe either that AI progress has stalled, or that the technology is just a hungry, plagiarizing machine spewing useless slop.

To AI optimists from San Francisco to San Jose, that skepticism is deeply misguided. AI progress is not stopping anytime soon, they argue, and the technology is already helping humanity—by contributing to cutting-edge research and boosting productivity, particularly in areas like coding, math, and science.

Take this excerpt from a recent post by Roon, a popular pseudonymous account on X written by an OpenAI researcher:

“Every time I use Codex to solve some issue late at night or GPT helps me figure out a difficult strategic problem, I feel: what a relief. There are so few minds on Earth that are both intelligent and persistent enough to generate new insights and keep the torch of scientific civilization alive. Now you have potentially infinite minds to throw at infinite potential problems. Your computer friend that never takes the day off, never gets bored, never checks out and stops trying.”

I understand Roon’s excitement—and his impatience with people who seem eager to declare AI a bubble every time it hits a setback. Who wouldn’t want, as he puts it, a “computer friend that never takes the day off, never gets bored, never checks out and stops trying”?

Thrilling to one may sound threatening to another

The answer, in fact, is: many. What sounds like thrilling abundance to people building AI often sounds unsettling—or even threatening—to everyone else. Even among the hundreds of millions now using tools like ChatGPT, Gemini, and Claude, there is plenty of anxiety. Maybe it’s concern about jobs. Maybe it’s a data center coming to their backyard. Maybe it’s the fear that the benefits of the AI boom will accrue only to a narrow set of companies and communities. Or maybe it’s the fact that many people are already preoccupied with non-AI problems—making rent, saving for a home, raising a family, dealing with health issues, keeping the lights on.

In that context, the promise of a tireless, 24/7 digital mind can feel distant from daily life—or worse, like a threat to livelihoods and self-worth. And for many (even me, in my freaked-out moments), it simply feels creepy.

The disconnect will only grow harder to ignore in 2026

As we head into 2026, Silicon Valley needs to read the room. The disconnect between how AI is framed by its builders and how it’s experienced by the public isn’t being properly addressed. But it will only grow harder to ignore in 2026, with increasing societal and political backlash. 

On X yesterday, Sebastian Caliri, a partner at venture capital firm 8VC, argued that “folks in tech do not appreciate that the entire country is polarized against tech.” Silicon Valley needs a better story, he said: a story that people can really buy into.

“People do not care about competition with China when they can’t afford a house and healthcare is bankrupting them,” he wrote. “If you want our industry to flourish, and you earnestly believe we will be better off in 5 years by embracing AI, you need to start showing ordinary people a reason to believe you and quickly.” 

My take is that AI companies spend an enormous amount of time trying to impress: Look at what my AI can do! And yes, as someone who uses generative AI every single day, I agree it is incredibly impressive—regardless of what the critics say, and regardless of whether you believe Big Tech ever had the right to scrape the entire internet to make it so.

But ordinary people don’t need to be impressed. They need answers: about jobs, costs, and who actually benefits; about societal impact and what their own futures look like in an AI-driven economy; about what billionaires are really discussing behind closed doors. Without that, all the AI bells and whistles in the world won’t bring people on board. What you’ll get instead is skepticism—and not because people don’t understand AI, but because, given what’s at stake, it’s a rational response.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

Google Cloud chief reveals the long game: a decade of silicon and the energy battle behind the AI boom – by Nick Lichtenberg 

Little-known underground salt caverns could slow the AI boom and its thirst for power – by Jordan Blum

Exclusive: Cursor acquires code review startup Graphite as AI coding competition heats up – by Beatrice Nolan

AI IN THE NEWS

How Christian leaders are challenging the AI boom. This interesting article from Time reports that Christian leaders across denominations and affiliations—including Catholics, evangelicals, and Baptists—are increasingly vocal in pushing back against the rapid acceleration of AI and urging caution in both public discourse and policy. Rather than rejecting technology outright, many faith figures are concerned about AI’s impact on family life, human relationships, labor, children, and organized religion itself. They are raising these issues in sermons, open letters, and conversations with lawmakers. At the top of the Catholic hierarchy, Pope Leo XIV has used his platform to warn about AI’s potential harms, even as he acknowledges possible benefits like spreading the Gospel. Other leaders have criticized AI companions for isolating users, especially young people, and expressed discomfort with Silicon Valley’s use of religious language to promote technology. 

Instacart ends AI-driven pricing tests that pushed up costs for some shoppers. According to CNBC, Instacart said it will stop allowing retailers to run AI-driven pricing experiments on its grocery delivery platform after consumer groups and lawmakers raised alarms that shoppers were paying different prices for identical items at the same store. The company said retailers will no longer be able to use its Eversight technology—acquired for $59 million in 2022—to test price increases or decreases on Instacart, after acknowledging that the experiments “missed the mark” and undermined trust at a time when families are struggling with food costs. A Consumer Reports–led study found that identical baskets of goods could vary in price by about 7%, potentially costing customers more than $1,000 extra per year. While Instacart said the tests were not based on personal data and rejected claims of “surveillance pricing,” the move comes amid growing regulatory scrutiny, including an FTC inquiry into its pricing practices and a recent $60 million settlement over deceptive subscription tactics.

What will your life look like in 2035? I want to shout out this really cool interactive piece from the Guardian, which explores how everyday life might look in 2035 as a future artificial general intelligence (AGI) becomes deeply embedded in society, transforming work, health care, farming, law, and daily routines. For example, by then AI could act as the first point of contact in medicine—handling pre-diagnosis and personalized treatment suggestions—while human doctors focus on oversight and wearable AI devices manage information and anticipate needs. In professions like law and agriculture, advanced AI could handle research, argument preparation, and real-time monitoring of crops and livestock, potentially increasing efficiency but raising questions about fairness, bias, and transparency. Work itself may shift dramatically: AI augmentation could boost productivity, enabling shorter workweeks and more leisure for some, even as others get laid off or struggle with purpose and mental health in a world where routine tasks are automated. 

EYE ON AI RESEARCH

Can LLMs actually discover science and function as “AI scientists”? The answer is no, according to this interesting new paper from Harvard and MIT, which found that today’s most sophisticated LLMs may talk and write like scientists, but they don’t think like scientists. 

When the 50+ co-authors from around the world evaluated state-of-the-art LLMs on a new framework, they found that performance on scientific discovery tasks lagged behind results on standard science benchmarks; scaling up models and enhancing reasoning yielded diminishing returns for discovery-oriented tasks; and there were systematic weaknesses shared across different top models, suggesting that current architectures aren’t yet well suited for real scientific workflows.

The paper noted that LLMs do show promise on parts of the discovery process, especially when guided exploration and serendipity are involved, and the authors argue that the framework they used provides a practical path for future progress toward AI that can truly assist scientific discovery.

AI CALENDAR

Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Feb. 10-11: AI Action Summit, New Delhi, India.

April 6-9: HumanX, San Francisco. 

BRAIN FOOD

For Brain Food this week, I’ve turned to our fearless AI editor, Jeremy Kahn, for his 2026 predictions. Here are his top five: 

  1. American open source AI has a moment. The story of 2025 was that of open source AI models, mostly from China, rapidly closing the performance gap with the frontier proprietary models produced by the three leading U.S. AI companies: OpenAI, Anthropic, and Google. In 2026, I predict we will see a wave of new venture-backed U.S. startups entering the open source AI space, releasing a powerful set of AI models that will surpass their Chinese rivals and be competitive on many leaderboards with the proprietary frontier models.
  2. China will unveil a Huawei chip that it says equals the performance of Nvidia’s GB200. The past year saw Chinese chipmakers making major strides, but still not reaching the performance, especially for training, of Nvidia’s top-of-the-line chips. The Trump administration has now authorized Nvidia to sell its H200 chip in China, which may dampen demand for a domestic alternative. But the Chinese government sees creating a domestic chip to rival Nvidia as a strategic priority, so it’s unlikely that China will remain behind Nvidia for much longer. 
  3. Ilya Sutskever’s startup will achieve a breakthrough. Ilya Sutskever’s startup, Safe Superintelligence (SSI), will release a model that achieves state-of-the-art results on demanding benchmarks designed to test generalization, including ARC-AGI-2 and MultiNet. But Sutskever will decline to disclose how the company achieved those gains, touching off intense speculation over whether SSI has unlocked a fundamentally new architectural approach—or simply combined a series of powerful, but less revolutionary, “optimizations.”
  4. Congress will pass regulations around how AI chatbots can interact with children and teenagers. The rules will seek to impose age verification and limit the extent to which chatbots can engage in certain kinds of dialogue with kids. The bill will have bipartisan support. 
  5. More and more Fortune 500 companies will begin to publicly report significant ROI from AI deployments. As a result, the revenue at the major cloud providers (Amazon AWS, Microsoft Azure, and Google Cloud) will continue to grow 30% year over year. 

FORTUNE AIQ: THE YEAR IN AI—AND WHAT’S AHEAD

Businesses took big steps forward on the AI journey in 2025, from hiring Chief AI Officers to experimenting with AI agents. The lessons learned, both good and bad, combined with the technology’s latest innovations will make 2026 another decisive year. Explore all of Fortune AIQ, and read the latest playbook below:

The 3 trends that dominated companies’ AI rollouts in 2025.

2025 was the year of agentic AI. How did we do?

AI coding tools exploded in 2025. The first security exploits show what could go wrong.

The big AI New Year’s resolution for businesses in 2026: ROI.

Businesses face a confusing patchwork of AI policy and rules. Is clarity on the horizon?


