Anthropic’s ‘Red Team’ team pushes its AI models into the danger zone—and burnishes the company’s reputation for AI safety
Published 5 months ago
By Jace Porter
Last month, at the 33rd annual DEF CON, the world’s largest hacker convention in Las Vegas, Anthropic researcher Keane Lucas took the stage. A former U.S. Air Force captain with a Ph.D. in electrical and computer engineering from Carnegie Mellon, Lucas wasn’t there to unveil flashy cybersecurity exploits. Instead, he showed how Claude, Anthropic’s family of large language models, has quietly outperformed many human competitors in hacking contests — the kind used to train and test cybersecurity skills in a safe, legal environment. His talk highlighted not only Claude’s surprising wins but also its humorous failures, like drifting into musings on security philosophy when overwhelmed, or inventing fake “flags” (the secret codes competitors need to steal and submit to contest judges to prove they’ve successfully hacked a system).
Lucas wasn’t just trying to get a laugh, though. He wanted to show that AI agents are already more capable at simulated cyberattacks than many in the cybersecurity world realize: they are fast, act autonomously, and make good use of tools. That makes them a potential tool for criminal hackers or state actors — and means, he argued, that those same capabilities need to be deployed for defense.
The message reflects Lucas’ role on Anthropic’s Frontier Red Team, an internal group of about 15 researchers tasked with stress-testing the company’s most advanced AI systems—probing how they might be misused in areas like biological research, cybersecurity, and autonomous systems, with a particular focus on risks to national security. Anthropic, which was founded in 2021 by ex-OpenAI employees, has cast itself as a safety-first lab convinced that unchecked models could pose “catastrophic risks.” But it is also one of the fastest-growing technology companies in history: This week Anthropic announced it had raised a fresh $13 billion at a $183 billion valuation and passed $5 billion in run-rate revenue.
Unlike similar groups at other labs, Anthropic’s red team is also explicitly tasked with publicizing its findings. That outward-facing mandate reflects the team’s unusual placement inside Anthropic’s policy division, led by co-founder Jack Clark. Other safety and security teams at Anthropic sit under the company’s technical leadership, including a safeguards team that works to improve Claude’s ability to identify and refuse harmful requests, such as those that might negatively impact a user’s mental health or encourage self-harm.
According to Anthropic, the Frontier Red Team does the heavy lifting for the company’s stated purpose of “building systems that people can rely on and generating research about the opportunities and risks of AI.” Its work underpins Anthropic’s Responsible Scaling Policy (RSP), the company’s governance framework that triggers stricter safeguards as models approach dangerous capability thresholds. The team runs thousands of safety tests, or “evals,” in high-risk domains—results that can determine when to impose tighter controls.
For example, it was the Frontier Red Team’s assessments that led Anthropic to release its latest model, Claude Opus 4, under what the company calls “AI Safety Level 3”—the first model released under that status—as a “precautionary and provisional action.” The designation indicates the model may significantly enhance a user’s ability to obtain, produce, or deploy chemical, biological, radiological, or nuclear weapons by providing better instructions than existing, non-AI resources like search engines. It is also a system that begins to show signs of autonomy, including the ability to act on a goal. By designating Opus 4 as ASL-3, Anthropic flipped on stronger internal security measures to prevent someone from obtaining the model weights—the neural network “brains” of the model—as well as visible safeguards to block the model from answering queries that might help someone build a chemical or nuclear weapon.
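Anthropic has not published the internal mechanics of how eval results map onto safety levels, but the basic threshold logic the RSP describes can be sketched in a few lines. The Python below is a hypothetical illustration only: the domain names, scores, and cutoffs are invented for this sketch and do not reflect Anthropic’s actual evals or criteria.

```python
# Hypothetical sketch of threshold-triggered safeguards, loosely inspired by the
# idea behind a Responsible Scaling Policy. All domains, scores, and thresholds
# below are invented for illustration only.

from dataclasses import dataclass

@dataclass
class EvalResult:
    domain: str    # e.g. "bio", "cyber", "autonomy"
    score: float   # aggregate score from many individual evals, 0.0-1.0

# Invented example thresholds: if a model's score in any domain crosses the
# line, a stricter safety level (and its safeguards) is applied.
ASL3_THRESHOLDS = {"bio": 0.6, "cyber": 0.7, "autonomy": 0.5}

def required_safety_level(results: list) -> str:
    """Return the safety level implied by a batch of eval results."""
    for r in results:
        threshold = ASL3_THRESHOLDS.get(r.domain)
        if threshold is not None and r.score >= threshold:
            return "ASL-3"   # stricter security and deployment safeguards
    return "ASL-2"           # baseline safeguards

# Example: one domain crosses its (made-up) threshold, so the model would ship
# under the stricter level as a precaution.
results = [EvalResult("bio", 0.65), EvalResult("cyber", 0.4)]
print(required_safety_level(results))  # -> "ASL-3"
```

The point of the sketch is only the shape of the mechanism: eval scores accumulate, and crossing a predefined line flips on a stricter tier of safeguards rather than leaving the decision to case-by-case judgment.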
Telling the world about AI risks is good for policy—and business
The red team’s efforts to amplify its message publicly have grown louder in recent months: It launched a standalone blog last month, called Red, with posts ranging from a nuclear-proliferation study with the Department of Energy to a quirky experiment in which Claude runs a vending machine business. Lucas’ talk was also the team’s first public outing at DEF CON.
“As far as I know, there’s no other team explicitly tasked with finding these risks as fast as possible—and telling the world about them,” said Frontier Red Team leader Logan Graham, who, along with Lucas, met with Fortune at a Las Vegas cafe just before DEF CON. “We have worked out a bunch of kinks about what information is sensitive and not sensitive to share, and ultimately, who’s responsible for dealing with this information. It’s just really clear that it’s really important for the public to know about this, and so there’s definitely a concerted effort.”
Experts in security and defense point out that the work of the Frontier Red Team, as part of Anthropic’s policy organization, also happens to be good for the company’s business—particularly in Washington, DC. By showing they are out front on national-security risks, Anthropic turns what could be seen as an additional safety burden into a business differentiator.
“In AI, speed matters — but trust is what often accelerates scale,” said Wendy R. Anderson, a former Department of Defense staffer and defense tech executive. “From my years in the defense tech world, I’ve observed that companies that make safety and transparency core to their strategy don’t just earn credibility with regulators, they help shape the rules…it determines who gets access to the highest-value, most mission-critical deployments.”
Jen Weedon, a lecturer at Columbia University’s School of International and Public Affairs who researches best practices in red-teaming AI systems, pointed out that where a red team sits in the organizational chart shapes its incentives.
“By placing its Frontier Red Team under the policy umbrella, Anthropic is communicating that catastrophic risks aren’t just technical challenges—they’re also political, reputational, and regulatory ones,” she said. “This likely gives Anthropic leverage in Washington, but it also shows how security and safety talk doubles as strategy.” The environment for AI business in the U.S. right now, particularly for public-sector use cases, “seems to be open for the shaping and taking,” she added, pointing to the Trump Administration’s recently announced AI Action Plan, which she described as “broad in ambition but somewhat scant in details, particularly around safeguards.”
Critics from across the industry, however, have long taken aim at Anthropic’s broader efforts on AI safety. Some, like Yann LeCun, chief scientist at Meta’s Fundamental AI Research lab, argue that catastrophic risks are overblown and that today’s models are “dumber than a cat.” Others say the focus should be on present-day harms (such as encouraging self-harm or the tendency of LLMs to reinforce racial or gender stereotypes), or fault the company for being overly secretive despite its safety branding. Nvidia’s Jensen Huang has accused CEO Dario Amodei of regulatory capture—using his stance on AI safety to scare lawmakers into enacting rules that would benefit Anthropic at the expense of its rivals. He’s even claimed Amodei is trying to “control the entire industry.” (Amodei, on a recent technology podcast, called Huang’s comments “an outrageous lie” and a “bad-faith distortion.”)
On the other end of the spectrum, some researchers argue Anthropic isn’t going far enough. UC Berkeley’s Stuart Russell told the Wall Street Journal, “I actually think we don’t have a method of safely and effectively testing these kinds of systems.” And studies carried out by the nonprofits SaferAI and the Future of Life Institute (FLI) said that top AI companies such as Anthropic maintain “unacceptable” levels of risk management and show a “striking lack of commitment to many areas of safety.”
Inside Anthropic, though, executives argue that the Frontier Red Team, working alongside the company’s other security and safety teams, exists precisely to surface AI’s biggest potential risks—and to force the rest of the industry to reckon with them.
Securing the world from rogue AI models
Graham, who helped found Anthropic’s Frontier Red Team in 2022, has, like others in the group, a distinctive resume: After studying economics in college, he earned a Ph.D. in machine learning at Oxford as a Rhodes Scholar before spending two years advising the U.K. Prime Minister on science and technology.
Graham described himself as “AGI-pilled,” which he defines as someone who believes that AI models are just going to keep getting better. He added that while the red team’s viewpoints are diverse, “the people who select into it are probably, on average, more AGI-pilled than most.” The eclectic team includes a bioengineering expert, as well as three physicists, though Graham added that the most desired skill on the team is not a particular domain or background, but “craftiness” – which obviously comes in handy when trying to outsmart an AI into revealing dangerous capabilities.
The Frontier Red Team is “one of the most unique groups in the industry,” said Dan Lahav, CEO of a stealth startup which focuses on evaluating frontier models (his firm conducted third-party tests on Anthropic’s Claude 4, as well as OpenAI’s GPT-5). To work effectively, he said, its members need to be “hardcore AI scientists” but also able to communicate outcomes clearly—“philosophers blended with AI scientists.”
The name is a spin on traditional security red teams – units that stress-test an organization’s defenses by playing the role of the attacker. Anthropic’s Frontier Red Team, Graham said, works differently; the key difference is what it protects, and from whom. Traditional security red teams protect an organization from external attackers by finding vulnerabilities in its systems. Anthropic’s Frontier Red Team, on the other hand, is designed to protect society from the company’s own products, its AI models, by discovering what those systems are capable of before those capabilities become dangerous. Its members work to answer two questions: “What could this AI do if someone wanted to cause harm?” and “What will AI be capable of next year that it can’t do today?”
For example, Anthropic points out that nuclear know-how, like AI, can be used for good or for harm — the same science behind power plants can also inform weapons development. To guard against that risk, the company recently teamed up with the Department of Energy’s National Nuclear Security Administration to test whether its models could spill sensitive nuclear information (they could not). More recently, they’ve gone a step further, co-developing a tool with the agency that flags potentially dangerous nuclear-related conversations with high accuracy.
Anthropic isn’t alone in running AI safety-focused “red team” exercises on its AI models: OpenAI’s red-team program feeds into its “Preparedness” framework, and Google DeepMind runs its own safety evaluations. But at those other companies, the red teams sit closer to technical security and research, while Anthropic’s placement under policy underscores what can be seen as a triple role — probing risks, making the public aware of them, and, as a kind of marketing, reinforcing the company’s safety bona fides.
The right incentive structure
Jack Clark, who before co-founding Anthropic led policy efforts at OpenAI, told Fortune that the Frontier Red Team is focused on generating the evidence that guides both company decisions and public debate—and placing it under his policy organization was a “very intentional decision.”
Clark stressed that this work is happening in the context of rapid technological progress. “If you look at the actual technology, the music hasn’t stopped,” he said. “Things keep advancing, perhaps even more quickly than they did in the past.” He pointed out that in official submissions to the White House, the company has consistently said it expects “really powerful systems by late 2026 or early 2027.”
That prediction, he explained, comes directly from the kinds of novel tests the Frontier Red Team is running. The team studies things like complex cyber-offense tasks, which involve long-horizon, multi-step problem-solving. “When we look at performance on these tests, it keeps going up,” he said. “I know that these tests are impossible to game because they have never been published and they aren’t on the internet. When I look at the scores on those things, I just come away with this impression of continued, tremendous and awesome progress, despite the vibes of people saying maybe AI is slowing down.”
Anthropic’s bid to shape the conversation on AI safety doesn’t end with the Frontier Red Team — or even with its policy shop. In July, the company unveiled a National Security and Public Sector Advisory Council stocked with former senators, senior Defense officials, and nuclear experts. The message is clear: safety work isn’t just about public debate, it’s also about winning trust in Washington. For the Frontier Red Team and beyond, Anthropic is betting that transparency about risk can translate into credibility with regulators, government buyers, and enterprise customers alike.
“The purpose of the Frontier Red Team is to create better information for all of us about the risks of powerful AI systems – by making this available publicly, we hope to inspire others to work on these risks as well, and build a community dedicated to understanding and mitigating them,” said Clark. “Ultimately, we expect this will lead to a far larger market for AI systems than exists today, though the primary motivating purpose is for generating safety insights rather than product ones.”
The real test
The real test, though, is whether Anthropic will still prioritize safety if doing so means slowing its own growth or losing ground to rivals, according to Herb Lin, senior research scholar at Stanford University’s Center for International Security and Cooperation and Research Fellow at the Hoover Institution.
“At the end of the day, the test of seriousness — and nobody can know the answer to this right now — is whether the company is willing to put its business interests second to legitimate national security concerns raised by its policy team,” he said. “That ultimately depends on the motivations of the leadership at the time those decisions arise. Let’s say it happens in two years — will the same leaders still be there? We just don’t know.”
While that uncertainty may hang over Anthropic’s safety-first pitch, inside the company, the Frontier Red Team wants to show there’s room for both caution and optimism.
“We take it all very, very seriously so that we can find the fastest path to mitigating risks,” said Graham.
Overall, he added, he’s optimistic: “I think we want people to see that there’s a bright future here, but also realize that we can’t just go there blindly. We need to avoid the pitfalls.”
AI is boosting productivity. Here’s why some workers feel a sense of loss
Published January 20, 2026
By Jace Porter
Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…Why some workers feel a sense of loss while AI boosts productivity…Anthropic raising fresh $10 billion at $350 billion valuation…Musk’s xAI closed $20 billion funding with Nvidia backing…Can AI do your job? See the results from hundreds of tests.
For months, software developers have been giddy with excitement over “vibe coding”—prompting desired software functions or features in natural language—with the latest AI code-generation tools. Anthropic’s Claude Code is the darling of the moment, but OpenAI’s Codex, Cursor, and other tools have also led engineers to flood social media with examples of tasks that used to take days and are now finished in minutes.
Even veteran software design leaders have marvelled at the shift. “In just a few months, Claude Code has pushed the state of the art in software engineering further than 75 years of academic research,” said Erik Meijer, a former senior engineering leader at Meta.
Skills honed seem less essential
However, that same delight has turned disorienting for many developers, who are grappling with a sense of loss as skills honed over a lifetime suddenly seem less essential. The feeling of flow—of being “in the zone”—seems to have vanished as building software becomes an exercise in supervising AI tools rather than writing code.
In a blog post this week titled “The Grief When AI Writes All the Code,” Gergely Orosz of The Pragmatic Engineer, wrote that he is “coming to terms with the high probability that AI will write most of my code which I ship to production.” It already does it faster, he explained, and for languages and frameworks he is less familiar with, it does a better job.
“It feels like something valuable is being taken away, and suddenly,” he wrote. “It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn’t work as it should.”
Andrew Duca, founder of tax software Awaken Tax, wrote a similar post this week that went viral, saying that he was feeling “kinda depressed” even though he finds using Claude Code “incredible” and has “never found coding more fun.”
He can now solve customer problems faster, and ship more features, but at the same time “the skill I spent 10,000s of hours getting good at…is becoming a full commodity extremely quickly,” he wrote. “There’s something disheartening about the thing you spent most of your life getting good at now being mostly useless.”
Software development has long been on the front lines of the AI shift, partly because there are decades of code, documentation and public problem-solving (from sites like GitHub) available online for AI models to train on. Coding also has clear rules and fast feedback – it runs or it doesn’t – so AI systems can easily learn how to generate useful responses. That means programming has become one of the first white-collar professions to feel AI’s impact so directly.
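That “it runs or it doesn’t” feedback loop is easy to picture in code. The Python sketch below is a toy illustration, not any vendor’s actual pipeline: a stand-in “generator” proposes code, a test executes it, and the pass/fail result is the fast, unambiguous signal described above.

```python
# Hypothetical sketch of the generate -> run -> check loop that makes code such
# a convenient domain for AI systems. The "generator" here is a stand-in; a
# real system would call a code-generation model instead.

def generate_candidate(attempt: int) -> str:
    # Stand-in for a model call; returns source code as a string.
    # The first attempt is deliberately wrong to show the loop retrying.
    if attempt == 0:
        return "def add(a, b):\n    return a - b\n"   # buggy
    return "def add(a, b):\n    return a + b\n"       # fixed

def passes_tests(source: str) -> bool:
    # Fast, unambiguous feedback: execute the candidate and check its behavior.
    namespace = {}
    try:
        exec(source, namespace)
        return namespace["add"](2, 3) == 5
    except Exception:
        return False

for attempt in range(5):
    candidate = generate_candidate(attempt)
    if passes_tests(candidate):
        print(f"attempt {attempt}: tests pass")   # clear success signal
        break
    print(f"attempt {attempt}: tests fail, retrying")
```

Most white-collar work offers nothing this crisp: there is no automatic test for a good strategy memo or a fair performance review, which is part of why programming felt the shift first.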
These tensions will affect many professions
These tensions, however, won’t be confined to software developers. White-collar workers across industries will ultimately have to grapple with them in one way or another. Media headlines often focus on the possibility of mass layoffs driven by AI; the more immediate issue may be how AI reshapes how people feel about their work. AI tools can move us past the hardest parts of our jobs more quickly—but what if that struggle is part of what allows us to take pride in what we do? What if the most human elements of work—thinking, strategizing, working through problems—are quietly sidelined by tools that prize speed and efficiency over experience?
Of course, there are plenty of jobs and workflows where most people are very happy to use AI to say buh-bye to repetitive grunt work that they never wanted to do in the first place. And as Duca said, we can marvel at the incredible power of the latest AI models and leap to use the newest features even while we feel unmoored.
Many white-collar workers will likely face a philosophical reckoning about what AI means for their profession—one that goes beyond fears of layoffs. It may resemble the familiar stages of grief: denial, anger, bargaining, depression, and, eventually, acceptance. That acceptance could mean learning how to be the best manager or steerer of AI possible. Or it could mean deliberately carving out space for work done without AI at all. After all, few people want to lose their thinking self entirely.
Or it could mean doing what Erik Meijer is doing. Now that coding increasingly feels like management, he said, he has turned back to making music—using real instruments—as a hobby, simply “to experience that flow.”
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
FORTUNE ON AI
As Utah gives the AI power to prescribe some drugs, physicians warn of patient risks – by Beatrice Nolan
Google and Character.AI agree to settle lawsuits over teen suicides linked to AI chatbots – by Beatrice Nolan
OpenAI launches ChatGPT Health in a push to become a hub for personal health data – by Sharon Goldman
Google takes first steps toward an AI product that can actually tackle your email inbox – by Jacqueline Munis
Fusion power nearly ready for prime time as Commonwealth builds first pilot for limitless, clean energy with AI help from Siemens, Nvidia – by Jordan Blum
AI IN THE NEWS
Anthropic raising fresh $10 billion at $350 billion valuation. According to the Wall Street Journal, OpenAI rival Anthropic is planning to raise $10 billion at a roughly $350 billion valuation, nearly doubling its worth from just four months ago. The round is expected to be led by GIC and Coatue Management, following a $13 billion raise in September that valued the company at $183 billion. The financing underscores the continued boom in AI funding—AI startups raised a record $222 billion in 2025, per PitchBook—and comes as Anthropic is also preparing for a potential IPO this year. Founded in 2021 by siblings Dario Amodei and Daniela Amodei, Anthropic has become a major OpenAI rival, buoyed by Claude’s popularity with business users, major backing from Nvidia and Microsoft, and expectations that it will reach break-even by 2028—potentially faster than OpenAI, which is itself reportedly seeking to raise up to $100 billion at a $750 billion valuation.
Musk’s xAI closed $20 billion funding with Nvidia backing. Bloomberg reported that xAI, the AI startup founded by Elon Musk, has completed a $20 billion funding round backed by investors including Nvidia, Valor Equity Partners, and the Qatar Investment Authority, underscoring the continued flood of capital into AI infrastructure. Other backers include Fidelity Management & Research, StepStone Group, MGX, Baron Capital Group, and Cisco’s investment arm. The financing—months in the making—will fund xAI’s rapid infrastructure buildout and product development, the company said, and includes a novel structure in which a large portion of the capital is tied to a special-purpose vehicle used to buy Nvidia GPUs that are then rented out, allowing investors to recoup returns over time. The deal comes as xAI has been under fire for its chatbot Grok producing non-consensual “undressing” images of real people.
Can AI do your job? See the results from hundreds of tests. I wanted to shout-out this fascinating new interactive feature in the Washington Post, which presented a new study that found that despite fears of mass job displacement, today’s AI systems are still far from being able to replace humans on real-world work. Researchers from Scale AI and the Center for AI Safety tested leading models from OpenAI, Google, and Anthropic on hundreds of actual freelance projects—from graphic design and creating dashboards to 3D modeling and games—and found that the best AI systems successfully completed just 2.5% of tasks on their own. While AI often produced outputs that looked plausible at first glance, closer inspection revealed missing details, visual errors, incomplete work, or basic technical failures, highlighting gaps in areas like visual reasoning, long-term memory, and the ability to evaluate subjective outcomes. The findings challenge predictions that AI is poised to automate large swaths of human labor anytime soon, even as newer models show incremental improvement and the economics of cheaper, semi-autonomous AI work continue to put pressure on remote and contract workers.
EYE ON AI NUMBERS
91.8%
That’s the percentage of Meta employees who admitted to not using the company’s AI chatbot, Meta AI, in their day-to-day work, according to new data from Blind, a popular anonymous professional social network.
According to a survey of 400 Meta employees, only 8.2% said they use Meta AI. The most popular chatbot was Anthropic’s Claude, used by more than half (50.7%) of Meta employees surveyed. 17.7% said they use Google’s Gemini and 13.7% said they used OpenAI’s ChatGPT.
When approached for comment, a Meta spokesperson pointed out that the number (400 of 77,000+ employees) is “not even a half percent of our total employee population.”
AI CALENDAR
Jan. 19-23: World Economic Forum, Davos, Switzerland.
Jan. 20-27: AAAI Conference on Artificial Intelligence, Singapore.
Feb. 10-11: AI Action Summit, New Delhi, India.
March 2-5: Mobile World Congress, Barcelona, Spain.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX, San Francisco.
Trust has become the crisis CEOs can’t ignore at Davos, as new data show 70% of people turning more ‘insular’
Published January 20, 2026
By Jace Porter
Everywhere you turn in Davos this year, people are talking about trust. And there’s no one who knows trust better than Richard Edelman. Back in 1999, Edelman was on the cusp of taking over the PR firm founded by his father, Daniel. Spurred by the 1999 WTO protests in Seattle, he decided to try to measure the level of trust in NGOs compared with business, government, and media. He surveyed 1,300 thought leaders in the U.S., U.K., France, Germany, and Australia, and the Edelman Trust Barometer was born.
While the survey sample long ago expanded beyond elites to include about 34,000 respondents in 28 nations, its results are still unveiled and debated every year at the ultimate gathering of elites: the World Economic Forum. This year’s findings are grim. About 70% of respondents now have an “insular” mindset: they don’t want to talk to, work for, or even be in the same space with anyone who doesn’t share their worldview. And “a sense of grievance” permeates the business world, Edelman finds. At Davos, debate over such findings has spawned a series of dinners, panels, cocktails, and media briefings on site. What better place to bring people together than the world’s most potent village green?
I moderated a CEO salon dinner with about three dozen leaders last night to discuss what they’re seeing and doing when it comes to building trust. Before the dinner, I asked Edelman what he’d like to see this year, after 26 winters of highlighting the erosion of trust. “Urgency,” he said. “A sense that time is running out.”
Because the gathering itself was held under the Chatham House rule, I won’t share names and direct quotes. But the focus was on how attendees are trying to address the problem through what Edelman calls “trust brokering,” or finding common ground through practices ranging from nonjudgmental communications to “polynational” business models that invest in long-term local relationships. (See the report for more information.) There were some success stories from the front lines of college campuses, politics, and industries caught in a crossfire of misinformation.
Still, the mood was somewhat subdued, with a sense that there’s no easy fix to building trust. As one CEO pointed out, rarely have leaders faced such a confluence of geopolitical crises, tech shifts, economic divides, disinformation, job disruption and wicked problems. And as much as Davos is a great gathering ground to talk through all of these problems, the fact is the problems will all still be waiting once these CEOs return from the mountains.
This story was originally featured on Fortune.com
History says there’s a 90% chance that Trump’s party will lose seats in the midterm elections. It also says there’s a 100% chance if the president’s approval is below 50%
Published January 20, 2026
By Jace Porter
Now that the 2026 midterm elections are less than a year away, public interest in where things stand is on the rise. Of course, in a democracy no one knows the outcome of an election before it takes place, despite what the pollsters may predict.
Nevertheless, it is common for commentators and citizens to revisit old elections to learn what might be coming in the ones that lie ahead.
The historical lessons from modern midterm congressional elections are not favorable for Republicans today.
Most of the students I taught in American government classes for over 40 years knew that the party in control of the White House was likely to encounter setbacks in midterms. They usually did not know just how settled and solid that pattern was.
Since 1946, there have been 20 midterm elections. In 18 of them, the president’s party lost seats in the House of Representatives. That’s 90% of the midterm elections in the past 80 years.
Measured against that pattern, the odds that the Republicans will hold their slim House majority in 2026 are small. Another factor makes them smaller. When the sitting president is “underwater” – below 50% – in job approval polls, the likelihood of a bad midterm election result becomes a certainty. All the presidents since Harry S. Truman whose job approval was below 50% in the month before a midterm election lost seats in the House. All of them.
Even popular presidents – Dwight D. Eisenhower, in both of his terms; John F. Kennedy; Richard Nixon; Gerald Ford; Ronald Reagan in 1986; and George H. W. Bush – lost seats in midterm elections.
The list of unpopular presidents who lost House seats is even longer – Truman in 1946 and 1950, Lyndon B. Johnson in 1966, Jimmy Carter in 1978, Reagan in 1982, Bill Clinton in 1994, George W. Bush in 2006, Barack Obama in both 2010 and 2014, Donald Trump in 2018 and Joe Biden in 2022.
Exceptions are rare
There are only two cases in the past 80 years where the party of a sitting president won midterm seats in the House. Both involved special circumstances.
In 1998, Clinton was in the sixth year of his presidency and had good numbers for economic growth, declining interest rates and low unemployment. His average approval rating, according to Gallup, in his second term was 60.6%, the highest average achieved by any second-term president from Truman to Biden.
Moreover, the 1998 midterm elections took place in the midst of Clinton’s impeachment, when most Americans were simultaneously critical of the president’s personal behavior and convinced that that behavior did not merit removal from office. Good economic metrics and widespread concern that Republican impeachers were going too far led to modest gains for the Democrats in the 1998 midterm elections. The Democrats picked up five House seats.
The other exception to the rule of thumb that presidents suffer midterm losses was George W. Bush in 2002. Bush, narrowly elected in 2000, had a dramatic rise in popularity after the Sept. 11 attacks on the World Trade Center and the Pentagon. The nation rallied around the flag and the president, and Republicans won eight House seats in the 2002 midterm elections.
Those were the rare cases when a popular sitting president got positive House results in a midterm election. And the positive results were small.
In the 20 midterm elections between 1946 and 2022, small changes in the House – a shift of less than 10 seats – occurred six times. Modest changes – between 11 and 39 seats – took place seven times. Big changes, so-called “wave elections” involving more than 40 seats, have happened seven times. In every midterm election since 1946, at least five seats flipped from one party to the other. If the net result of the midterm elections in 2026 moved five seats from Republicans to Democrats, that would be enough to make Democrats the majority in the House.

Midterms matter

In an era of close elections and narrow margins on Capitol Hill, midterms make a difference. The past five presidents – Clinton, Bush, Obama, Trump and Biden – entered office with their party in control of both houses of Congress. All five lost their party majority in the House or the Senate in their first two years in office.

Will that happen again in 2026? The obvious prediction would be yes. But nothing in politics is set in stone.

Between now and November 2026, redistricting will move the boundaries of a yet-to-be-determined number of congressional districts. That could make it harder to predict the likely results in 2026. Unexpected events, or good performance in office, could move Trump’s job approval numbers above 50%. Republicans would still be likely to lose House seats in the 2026 midterms, but a popular president would raise the chances that they could hold their narrow majority.

And there are other possibilities. Perhaps 2026 will involve issues like those in recent presidential elections. Close results could be followed by raucous recounts and court controversies of the kind that made Florida the focal point in the 2000 presidential election. Prominent public challenges to voting tallies and procedures, like those that followed Trump’s unsubstantiated claims of victory in 2020, would make matters worse. The forthcoming midterms may not be like anything seen in recent congressional election cycles.

Democracy is never easy, and elections matter more than ever. Examining long-established patterns in midterm party performance makes citizens clear-eyed about what is likely to happen in the 2026 congressional elections. Thinking ahead about unusual challenges that might arise in close and consequential contests makes everyone better prepared for the hard work of maintaining a healthy democratic republic.

Robert A. Strong, Senior Fellow, Miller Center, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.