How former OpenAI researcher Leopold Aschenbrenner turned a viral AI prophecy into profit, with a $1.5 billion hedge fund and outsized influence from Silicon Valley to D.C.
By Jace Porter
Of all the unlikely stories to emerge from the current AI frenzy, few are more striking than that of Leopold Aschenbrenner.
The 23-year-old’s career didn’t exactly start auspiciously: He spent time at the philanthropy arm of Sam Bankman-Fried’s now-bankrupt FTX cryptocurrency exchange before a controversial year at OpenAI, where he was ultimately fired. Then, just two months after being booted out of the most influential company in AI, he penned an AI manifesto that went viral—President Trump’s daughter Ivanka even praised it on social media—and used it as a launching pad for a hedge fund that now manages more than $1.5 billion. That’s modest by hedge-fund standards but remarkable for someone barely out of college. Just four years after graduating from Columbia, Aschenbrenner is holding private discussions with tech CEOs, investors, and policymakers who treat him as a kind of prophet of the AI age.
It’s an astonishing ascent, one that has many asking not just how this German-born early-career AI researcher pulled it off, but whether the hype surrounding him matches the reality. To some, Aschenbrenner is a rare genius who saw the moment—the coming of human-like artificial general intelligence, China’s accelerating AI race, and the vast fortunes awaiting those who move first—more clearly than anyone else. To others, including several former OpenAI colleagues, he’s a lucky novice with no finance track record, repackaging hype into a hedge fund pitch.
His meteoric rise captures how Silicon Valley converts zeitgeist into capital—and how that, in turn, can be parlayed into influence. While critics question whether launching a hedge fund was simply a way to turn dubious techno-prophecy into profit, friends like Anthropic researcher Sholto Douglas frame it differently—as a “theory of change.” Aschenbrenner is using the hedge fund to gain a credible voice in the financial ecosystem, Douglas explained: “He is saying, ‘I have an extremely high conviction [that this is] how the world is going to evolve, and I am literally putting my money where my mouth is.’”
But that also raises the question: Why are so many willing to trust this newcomer?
The answer is complicated. In conversations with over a dozen friends, former colleagues, and acquaintances of Aschenbrenner, as well as investors and Silicon Valley insiders, one theme keeps surfacing: Aschenbrenner has been able to seize ideas that have been gathering momentum across Silicon Valley’s labs and use them as ingredients for a coherent, convincing narrative that is like a blue plate special to investors with a healthy appetite for risk.
Aschenbrenner declined to comment for this story. A number of sources were granted anonymity due to concerns about the potential consequences of speaking about people who wield considerable power and influence in AI circles.
Many spoke of Aschenbrenner with a mixture of admiration and wariness—“intense,” “scarily smart,” “brash,” “confident.” More than one described him as carrying the aura of a wunderkind, the kind of figure Silicon Valley has long been eager to anoint. Others, however, noted that his thinking wasn’t especially novel, just unusually well-packaged and well-timed. Yet, while critics dismiss him as more hype than insight, investors Fortune spoke with see him differently, crediting his essays and early portfolio bets with unusual foresight.
There is no doubt, however, that Aschenbrenner’s rise reflects a unique convergence: vast pools of global capital eager to ride the AI wave; a Valley enthralled by the prospect of achieving artificial general intelligence (AGI), or AI that matches or surpasses human intelligence; and a geopolitical backdrop that frames AI development as a technological arms race with China.
Sketching the future
Within certain corners of the AI world, Leopold Aschenbrenner was already a familiar name, known for blog posts, essays, and research papers that circulated in AI safety circles even before he joined OpenAI. But to most people, he appeared overnight in June 2024. That’s when he self-published online a 165-page monograph called Situational Awareness: The Decade Ahead. The long essay borrowed for its title a phrase already familiar in AI circles, where “situational awareness” usually refers to models becoming aware of their own circumstances—a safety risk. But Aschenbrenner used it to mean something else entirely: the need for governments and investors to recognize how quickly AGI might arrive, and what was at stake if the U.S. fell behind.
In a sense, Aschenbrenner intended his manifesto to be the AI era’s equivalent of George Kennan’s “long telegram,” in which the American diplomat and Russia expert sought to awaken elite opinion in the U.S. to what he saw as the looming Soviet threat to Europe. In the introduction, Aschenbrenner sketched a future he claimed was visible only to a few hundred prescient people, “most of them in San Francisco and the AI labs.” Not surprisingly, he included himself among those with “situational awareness,” while the rest of the world had “not the faintest glimmer of what is about to hit them.” To most, AI looked like hype or, at best, another internet-scale shift. What he insisted he could see more clearly was that LLMs were improving at an exponential rate, scaling rapidly towards AGI, and then beyond, to “superintelligence”—with geopolitical consequences and, for those who moved early, the chance to capture the biggest economic windfall of the century.
To drive the point home, he invoked the example of Covid in early 2020—arguing that only a few grasped the implications of a pandemic’s exponential spread, understood the scope of the coming economic shock, and profited by shorting before the crash. “All I could do is buy masks and short the market,” he wrote. Similarly, he emphasized that only a small circle today comprehends how quickly AGI is coming, and those who act early stand to capture historic gains. And once again, he cast himself among the prescient few.
But the core of Situational Awareness’s argument wasn’t the Covid parallel. It was the argument that the math itself—the scaling curves that suggested AI capabilities increased exponentially with the amount of data and compute thrown at the same basic algorithms—showed where things were headed.
Douglas, now a tech lead on reinforcement learning scaling at Anthropic, is both a friend and former roommate of Aschenbrenner’s who had conversations with him about the monograph. He told Fortune that the essay crystallized what many AI researchers had felt. “If we believe that the trend line will continue, then we end up in some pretty wild places,” Douglas said. Unlike many who focused on the incremental progress of each successive model release, Aschenbrenner was willing to “really bet on the exponential,” he said.
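That bet is, at bottom, a claim about a trend line, and it can be made concrete in a few lines of code. Below is a minimal, purely illustrative sketch of the kind of log-linear extrapolation the essay’s scaling argument rests on; the “effective compute” figures are invented placeholders, not numbers from Situational Awareness.

```python
# Illustrative only: the "bet on the exponential" as a straight-line fit in log space.
# The compute figures below are invented placeholders, not data from the essay.
import numpy as np

years = np.array([2018, 2020, 2022, 2024])
effective_compute = np.array([1.0, 30.0, 900.0, 27_000.0])  # hypothetical index, arbitrary units

# Exponential growth is a straight line in log space: log10(compute) = m * year + b.
m, b = np.polyfit(years, np.log10(effective_compute), 1)

for year in (2026, 2028):
    projected = 10 ** (m * year + b)
    print(f"{year}: ~{projected:,.0f}x the 2018 level, if the trend simply continues")
```

The whole dispute between Aschenbrenner and his skeptics compresses into that final conditional: whether the line keeps going.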
An essay goes viral
Plenty of long, dense essays about AI risk and strategy circulate every year, most vanishing after brief debates in niche forums like LessWrong, a website founded by AI theorist and ‘doomer’ extraordinaire Eliezer Yudkowsky that became a hub for rationalist and AI-safety ideas.
But Situational Awareness hit different. Scott Aaronson, a computer science professor at UT Austin who spent two years at OpenAI overlapping with Aschenbrenner, remembered his initial reaction: “Oh man, another one.” But after reading, he told Fortune, “I had the sense that this is actually the document some general or national security person is going to read and say: ‘This requires action.’” In a blog post, he called the essay “one of the most extraordinary documents I’ve ever read,” saying Aschenbrenner “makes a case that, even after ChatGPT and all that followed it, the world still hasn’t come close to ‘pricing in’ what’s about to hit it.”
A longtime AI governance researcher described the essays as “a big achievement,” but emphasized that the ideas were not new: “He basically took what was already common wisdom inside frontier AI labs and wrote it up in a very nicely packaged, compelling, easy-to-consume way.” The result was to make insider thinking legible to a much broader audience at a fever-pitch moment in the AI conversation.
Among AI safety researchers, who worry primarily about the ways in which AI might pose an existential risk to humanity, the essays were more divisive. For many, Aschenbrenner’s work felt like a betrayal, particularly because he had come out of those very circles. They felt their arguments urging caution and regulation had been repurposed into a sales pitch to investors. “People who are very worried about [existential risks] quite dislike Leopold now because of what he’s done—they basically think he sold out,” said one former OpenAI governance researcher. Others agreed with most of his predictions and saw value in amplifying them.
Still, even critics conceded his knack for packaging and marketing. “He’s very good at understanding the zeitgeist—what people are interested in and what could go viral,” said another former OpenAI researcher. “That’s his superpower. He knew how to capture the attention of powerful people by articulating a narrative very favorable to the mood of the moment: that the U.S. needed to beat China, that we needed to take AI security more seriously. Even if the details were wrong, the timing was perfect.”
That timing made the essays unavoidable. Tech founders and investors shared Situational Awareness with the sort of urgency usually reserved for hot term sheets, while policymakers and national security officials circulated it like the juiciest classified NSA assessment.
As one current OpenAI staffer put it, Aschenbrenner’s skill is “knowing where the puck is skating.”
A sweeping narrative paired with an investment vehicle
At the same time as the essays were released, Aschenbrenner launched Situational Awareness LP, a hedge fund built around the theme of AGI, with its bets placed in publicly traded companies rather than private startups.
The fund was seeded by Silicon Valley heavyweights like investor and current Meta AI product lead Nat Friedman (Aschenbrenner reportedly connected with him after Friedman read one of his blog posts in 2023), as well as Friedman’s investing partner Daniel Gross, and Patrick and John Collison, Stripe’s co-founders. Patrick Collison reportedly met Aschenbrenner at a 2021 dinner set up by a connection “to discuss their shared interests.” Aschenbrenner also brought on Carl Shulman—a 45-year-old AI forecaster and governance researcher with deep ties in the AI safety field and a past stint at Peter Thiel’s Clarium Capital—to be the new hedge fund’s director of research.
In a four-hour podcast with Dwarkesh Patel tied to the launch, Aschenbrenner touted the explosive growth he expects once AGI arrives, saying “the decade after is also going to be wild,” in which “capital will really matter.” If done right, he said, “there’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x.”
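The “100x” line is less dramatic once run through ordinary compounding arithmetic. Here is a quick sketch of what such a multiple implies as an annualized rate; the horizons are illustrative, since the podcast quote names no specific timeline.

```python
# What "maybe make 100x" implies as a compound annual growth rate (CAGR).
# The horizons are illustrative; Aschenbrenner's quote names no timeline.
multiple = 100.0
for years in (5, 10, 20):
    cagr = multiple ** (1 / years) - 1
    print(f"100x over {years:>2} years requires ~{cagr:.0%} per year")
```

Even stretched over two decades, 100x implies roughly 26% a year, far above long-run equity market averages; that is the scale of mispricing the fund’s thesis asserts.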
Together, the manifesto and the fund reinforced one another: Here was a book-length investment thesis paired with a prognosticator with so much conviction he was willing to put serious money on the line. It proved an irresistible combination to a certain kind of investor. One former OpenAI researcher said Friedman is known for “zeitgeist hacking”—backing people who could capture the mood of the moment and amplify it into influence. Supporting Aschenbrenner fit that playbook perfectly.
Situational Awareness’s strategy is straightforward: It bets on global stocks likely to benefit from AI—semiconductors, infrastructure, and power companies—offset by shorts on industries that could lag behind. Public filings reveal part of the portfolio: A June SEC filing showed stakes in U.S. companies including Intel, Broadcom, Vistra, and former bitcoin miner Core Scientific (which CoreWeave announced it would acquire in July), all seen as beneficiaries of the AI buildout. So far, the strategy has paid off: the fund quickly swelled to over $1.5 billion in assets and delivered 47% gains, after fees, in the first half of this year.
According to a spokesperson, Situational Awareness LP has global investors, including West Coast founders, family offices, institutions and endowments. In addition, the spokesperson said Aschenbrenner “has almost all of his net worth invested in the fund.”
To be sure, any picture of a U.S. hedge fund’s holdings is incomplete. The publicly available 13F filings only cover long positions in U.S.-listed stocks—shorts, derivatives, and international investments aren’t disclosed—adding an inevitable layer of mystery around what the fund is really betting on. Still, some observers have questioned whether Aschenbrenner’s early results reflect skill or fortunate timing. For example, his fund disclosed roughly $459 million in Intel call options in its first-quarter filing—positions that later looked prescient when Intel’s shares climbed over the summer following a federal investment and a subsequent $5 billion stake from Nvidia.
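For readers who want to inspect the public slice themselves, those long positions come from quarterly 13F filings on SEC EDGAR. The sketch below pulls a filer’s recent 13F-HR submissions from EDGAR’s public JSON endpoint; the CIK shown is a placeholder, since we haven’t verified the fund’s actual identifier.

```python
# Sketch: list a filer's recent 13F-HR filings via SEC EDGAR's submissions endpoint.
# The CIK below is a PLACEHOLDER, not Situational Awareness LP's verified identifier.
import requests

CIK = "0000000000"  # EDGAR CIKs are zero-padded to ten digits
url = f"https://data.sec.gov/submissions/CIK{CIK}.json"

# EDGAR asks automated clients to identify themselves via a User-Agent header.
headers = {"User-Agent": "example research script contact@example.com"}
recent = requests.get(url, headers=headers, timeout=30).json()["filings"]["recent"]

for form, date, accession in zip(recent["form"], recent["filingDate"], recent["accessionNumber"]):
    if form == "13F-HR":  # long U.S.-listed positions only; shorts and derivatives stay hidden
        print(date, accession)
```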
But at least some experienced financial industry professionals have come to view him differently. Veteran hedge-fund investor Graham Duncan, who invested personally in Situational Awareness LP and now serves as an advisor to the fund, said he was struck by Aschenbrenner’s combination of insider perspective and bold investment strategy. “I found his paper provocative,” Duncan said, adding that Aschenbrenner and Shulman weren’t outsiders scanning opportunities but insiders building an investment vehicle around their view. The fund’s thesis reminded him of the few contrarians who spotted the subprime collapse before it hit—people like Michael Burry, whom Michael Lewis made famous in his book The Big Short. “If you want to have variant perception, it helps to be a little variant.”
He pointed to Situational Awareness’s reaction to Chinese startup DeepSeek’s January release of its R1 open-source LLM, which many dubbed a “Sputnik moment” that showcased China’s rising AI capabilities despite limited funding and export controls. While most investors panicked, he said Aschenbrenner and Shulman had already been tracking it and saw the sell-off as an overreaction. They bought instead of sold, and even a major tech fund reportedly held back from dumping shares after an analyst said, “Leopold says it’s fine.” That moment, Duncan said, cemented Aschenbrenner’s credibility—though Duncan acknowledged “he could yet be proven wrong.”
Another investor in Situational Awareness LP, who manages a leading hedge fund, told Fortune that he was struck by Aschenbrenner’s answer when asked why he was starting a hedge fund focused on AI rather than a VC fund, which seemed like the most obvious choice.
“He said that AGI was going to be so impactful to the global economy that the only way to fully capitalize on it was to express investment ideas in the most liquid markets in the world,” he said. “I am a bit stunned by how fast they have come up the learning curve…they are way more sophisticated on AI investing than anyone else I speak to in the public markets.”
A Columbia ‘whiz-kid’ who went on to FTX and OpenAI
Aschenbrenner, born in Germany to two doctors, enrolled at Columbia when he was just 15 and graduated valedictorian at 19. The longtime AI governance researcher, who described herself as an acquaintance of Aschenbrenner’s, recalled that she first heard of him when he was still an undergraduate.
“I heard about him as, ‘Oh, we heard about this Leopold Aschenbrenner kid, he seems like a sharp guy,’” she said. “The vibe was very much a whiz-kid sort of thing.”
That wunderkind reputation only deepened. At 17, Aschenbrenner won a grant from economist Tyler Cowen’s Emergent Ventures, and Cowen called him an “economics prodigy.” While still at Columbia, he also interned at the Global Priorities Institute, co-authoring a paper with economist Philip Trammell, and contributed essays to Works in Progress, a Stripe-funded publication that gave him another foothold in the tech-intellectual world.
He was already embedded in the Effective Altruism community—a controversial philosophy-driven movement influential in AI safety circles—and co-founded Columbia’s EA chapter. That network eventually led him to a job at the FTX Futures Fund, a charity founded by cryptocurrency exchange founder Sam Bankman-Fried. Bankman-Fried was another EA adherent who donated hundreds of millions of dollars to causes, including AI governance research, that aligned with EA’s philanthropic priorities.
The FTX Futures Fund was designed to support EA-aligned philanthropic priorities, although it was later found to have used money from Bankman-Fried’s FTX cryptocurrency exchange that was essentially looted from account holders. (There is no evidence that anyone who worked at the FTX Futures Fund knew the money was stolen or did anything illegal.)
At the FTX Futures Fund, Aschenbrenner worked with a small team that included William MacAskill, a co-founder of Effective Altruism, and Avital Balwit—now chief of staff to Anthropic CEO Dario Amodei and, according to a Situational Awareness LP spokesperson, currently engaged to Aschenbrenner. Balwit wrote in a June 2024 essay that “these next five years might be the last few years that I work,” because AGI might “end employment as I know it”—a striking mirror image of Aschenbrenner’s conviction that the same technology will make his investors rich.
But when Bankman-Fried’s FTX empire collapsed in November 2022, the Futures Fund philanthropic effort imploded. “We were a tiny team, and then from one day to the next, it was all gone and associated with a giant fraud,” Aschenbrenner told Dwarkesh Patel. “That was incredibly tough.”
Just months after FTX collapsed, however, Aschenbrenner reemerged—at OpenAI. He joined the company’s newly launched “superalignment” team in 2023, created to tackle a problem no one yet knows how to solve: how to steer and control future AI systems that would be far smarter than any human being, and perhaps smarter than all of humanity put together. Existing methods like reinforcement learning from human feedback (RLHF) had proven somewhat effective for today’s models, but they depend on humans being able to evaluate outputs—something that might not be possible if systems surpassed human comprehension.
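The human bottleneck the superalignment team worried about is visible in RLHF’s own machinery: a reward model is trained on pairs of outputs ranked by people, so the entire signal flows through human judgment. Here is a minimal generic sketch of the standard Bradley-Terry preference loss behind reward modeling, as described in the published RLHF literature; it is an illustration of the technique, not any lab’s actual code.

```python
# Generic sketch of the pairwise preference loss used to train RLHF reward models.
# Bradley-Terry objective from the RLHF literature, not any lab's actual code.
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """-log P(chosen beats rejected): low when the reward model agrees with the human label."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

print(preference_loss(2.1, 0.4))  # ~0.17: model already agrees with the labeler
print(preference_loss(0.4, 2.1))  # ~1.87: model disagrees and gets pushed to correct

# The catch superalignment targeted: if model outputs exceed human comprehension,
# the "chosen" labels themselves become unreliable, and so does everything downstream.
```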
Aaronson, the UT computer science professor, joined OpenAI before Aschenbrenner and said what impressed him was Aschenbrenner’s instinct to act. Aaronson had been working on watermarking ChatGPT outputs to make AI-generated text easier to identify. “I had a proposal for how to do that, but the idea was just sort of languishing,” he said. “Leopold immediately started saying, ‘Yes, we should be doing this, I’m going to take responsibility for pushing it.’”
Others remembered him differently, as politically clumsy and sometimes arrogant. “He was never afraid to be astringent at meetings or piss off the higher-ups, to a degree I found alarming,” said one current OpenAI researcher. A former OpenAI policy staffer, who said he first became aware of Aschenbrenner when he gave a talk at a company all-hands meeting that previewed themes he would later publish in Situational Awareness, recalled him as “a bit abrasive.” Multiple researchers also described a holiday party where, in a casual group discussion, Aschenbrenner told then-Scale AI CEO Alexandr Wang how many GPUs OpenAI had—“just straight out in the open,” as one put it. Two people told Fortune they had directly overheard the remark. A number of people were taken aback, they explained, at how casually Aschenbrenner shared something so sensitive. Through spokespeople, both Wang and Aschenbrenner denied that the exchange occurred.
In April 2024, OpenAI fired Aschenbrenner, officially citing the leaking of internal information (the incident was not related to the alleged GPU remarks to Wang). On the Dwarkesh podcast two months later, Aschenbrenner maintained the “leak” was “a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI” that he shared with three external researchers for feedback, something he said was “totally normal” at OpenAI at the time. He argued that an earlier memo in which he said OpenAI’s security was “egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors” was the real reason for his dismissal.
According to news reports, OpenAI did say, via a spokesperson, that the concerns about security he raised internally (including to the board) “did not lead to his separation.” The spokesperson also said they “disagree with many of the claims he has since made” about OpenAI’s security and the circumstances of his departure.
Either way, Aschenbrenner’s ouster came amid broader turmoil: Within weeks, OpenAI’s “superalignment” team—led by OpenAI’s cofounder and chief scientist Ilya Sutskever and AI researcher Jan Leike, and where Aschenbrenner had worked—dissolved after both leaders departed the company.
Two months later, Aschenbrenner published Situational Awareness and unveiled his hedge fund. The speed of the rollout prompted speculation among some former colleagues that he had been laying the groundwork while still at OpenAI.
Returns vs. rhetoric
Even skeptics acknowledge the market has rewarded Aschenbrenner for channeling today’s AGI hype, but still, doubts linger. “I can’t think of anybody that would trust somebody that young with no prior fund management [experience],” said a former OpenAI colleague who is now a founder. “I would not be an LP in a fund drawn by a child unless I felt there was really strong governance in place.”
Others question the ethics of profiting from AI fears. “Many agree with Leopold’s arguments, but disapprove of stoking the US-China race or raising money based off AGI hype, even if the hype is justified,” said one former OpenAI researcher. “Either he no longer thinks that [the existential risk from AI] is a big deal or he is arguably being disingenuous,” said another.
One former strategist within the Effective Altruism community said many in that world “are annoyed with him,” particularly for promoting the narrative that there’s a “race to AGI” that “becomes a self-fulfilling prophecy.” While profiting from stoking the idea of an arms race can be rationalized—since Effective Altruists often view making money for the purpose of then giving it away as virtuous—the former strategist argued that “at the level of Leopold’s fund, you’re meaningfully providing capital,” and that carries more moral weight.
The deeper worry, said Aaronson, is that Aschenbrenner’s message—that the U.S. must accelerate the pace of AI development at all costs in order to beat China—has landed in Washington at a moment when accelerationist voices like Marc Andreessen, David Sacks and Michael Kratsios are ascendant. “Even if Leopold doesn’t believe that, his essay will be used by people who do,” Aaronson said. If so, his biggest legacy may not be a hedge fund, but a broader intellectual framework that is helping to cement a technological Cold War between the U.S. and China.
If that proves true, Aschenbrenner’s real impact may be less about returns and more about rhetoric—the way his ideas have rippled from Silicon Valley into Washington. It underscores the paradox at the center of his story: To some, he’s a genius who saw the moment more clearly than anyone else. To others, he’s a Machiavellian figure who repackaged insider safety worries into an investor pitch. Either way, billions are now riding on whether his bet on AGI delivers.