Anthropic CEO Dario Amodei escalates war of words with Jensen Huang, calling out ‘outrageous lie’ and getting emotional about father’s death

The doomers versus the optimists. The techno-optimists and the accelerationists. The Nvidia camp and the Anthropic camp. And then, of course, there’s OpenAI, which opened the Pandora’s Box of artificial intelligence in the first place.

The AI space is driven by debates about whether it’s a doomsday technology or the gateway to a world of future abundance, or even whether it’s a throwback to the dotcom bubble of the early 2000s. Anthropic CEO Dario Amodei has been outspoken about AI’s risks, famously predicting it could wipe out half of all entry-level white-collar jobs, a much gloomier outlook than the optimism OpenAI’s Sam Altman and Nvidia’s Jensen Huang have offered. But Amodei has rarely laid it all out the way he just did on tech journalist Alex Kantrowitz’s Big Technology podcast on July 30.

In a candid and emotionally charged interview, Amodei escalated his war of words with Nvidia CEO Jensen Huang, vehemently denying accusations that he is seeking to control the AI industry and expressing profound anger at being labeled a “doomer.” Amodei’s impassioned defense was rooted in a deeply personal revelation about his father’s death, which he says fuels his urgent pursuit of beneficial AI while also driving his warnings about its risks and his calls for strong regulation.

Amodei directly confronted the criticism, stating, “I get very angry when people call me a doomer … When someone’s like, ‘This guy’s a doomer. He wants to slow things down.’” He dismissed the notion, attributed to figures like Jensen Huang, that “Dario thinks he’s the only one who can build this safely and therefore wants to control the entire industry” as an “outrageous lie. That’s the most outrageous lie I’ve ever heard.” He insisted that he’s never said anything like that.

His strong reaction, Amodei explained, stems from a profound personal experience: his father’s death in 2006 from an illness whose cure rate jumped from 50% to roughly 95% just three or four years later. That tragedy instilled in him a deep understanding of “the urgency of solving the relevant problems” and a powerful “humanistic sense of the benefit of this technology.” He views AI as the only means to tackle complex problems like those in biology, which he felt were “beyond human scale.” As the conversation continued, he argued that he is in fact the true optimist about AI, despite his own doomsday warnings about its future impact.

Who’s the real optimist?

Amodei insisted that he appreciates AI’s benefits more than those who call themselves optimists. “I feel in fact that I and Anthropic have often been able to do a better job of articulating the benefits of AI than some of the people who call themselves optimists or accelerationists,” he asserted.

In bringing up “optimist” and “accelerationist,” Amodei was referring to two camps, even movements, in Silicon Valley, with venture-capital billionaire Marc Andreessen close to the center of each. The Andreessen Horowitz co-founder has embraced both, issuing a “techno-optimist manifesto” in 2023 and often tweeting “e/acc,” short for effective accelerationism.

Both terms stretch back to roughly the mid-20th century: techno-optimism appeared shortly after World War II, and accelerationism traces to Roger Zelazny’s classic 1967 science-fiction novel “Lord of Light.” As popularized and mainstreamed by Andreessen, these beliefs roughly add up to an overarching conviction that technology can solve all of humanity’s problems. Amodei’s remarks to Kantrowitz revealed much in common with that outlook, with Amodei declaring that he feels obligated to warn about the risks inherent in AI “because we can have such a good world if we get everything right.”

Amodei claimed he’s “one of the most bullish about AI capabilities improving very fast,” saying he has repeatedly stressed that AI progress is exponential in nature, with models rapidly improving given more compute, data, and training. That rapid advancement means issues such as national security and economic impacts are drawing very close, in his view. His urgency has increased because he is “concerned that the risks of AI are getting closer and closer,” while he doesn’t see the ability to handle those risks keeping up with the speed of technological advance.

To mitigate these risks, Amodei champions regulation and “responsible scaling policies” and advocates for a “race to the top,” in which companies compete to build safer systems, rather than a “race to the bottom,” in which companies rush products out as quickly as possible without minding the risks. Anthropic was the first to publish such a responsible scaling policy, he noted, aiming to set an example and encourage others to follow suit. He openly shares Anthropic’s safety research, including interpretability work and constitutional AI, which he sees as a public good.

Amodei addressed the debate about “open source,” as championed by Nvidia and Jensen Huang. It’s a “red herring,” Amodei insisted, because large language models are fundamentally opaque, so there can be no such thing as open-source development of AI technology as currently constructed.

An Nvidia spokesperson, who provided a similar statement to Kantrowitz, told Fortune that the company supports “safe, responsible, and transparent AI.” Nvidia said thousands of startups and developers in its ecosystem and the open-source community are enhancing safety. The company then criticized Amodei’s calls for increased AI regulation: “Lobbying for regulatory capture against open source will only stifle innovation, make AI less safe and secure, and less democratic. That’s not a ‘race to the top’ or the way for America to win.”

Anthropic reiterated its statement that it “stands by its recently filed public submission in support of strong and balanced export controls that help secure America’s lead in infrastructure development and ensure that the values of freedom and democracy shape the future of AI.” The company previously told Fortune in a statement that “Dario has never claimed that ‘only Anthropic’ can build safe and powerful AI. As the public record will show, Dario has advocated for a national transparency standard for AI developers (including Anthropic) so the public and policymakers are aware of the models’ capabilities and risks and can prepare accordingly.”

Kantrowitz also brought up Amodei’s departure from OpenAI to found Anthropic, years before the drama that saw Sam Altman fired by his board over ethical concerns, with several chaotic days unfolding before Altman’s return.

Amodei did not mention Altman directly, but said his decision to co-found Anthropic was spurred by a perceived lack of sincerity and trustworthiness at rival companies regarding their stated missions. He stressed that for safety efforts to succeed, “the leaders of the company … have to be trustworthy people, they have to be people whose motivations are sincere.” He continued, “if you’re working for someone whose motivations are not sincere, who’s not an honest person, who does not truly want to make the world better, it’s not going to work; you’re just contributing to something bad.”

Amodei also expressed frustration with both extremes in the AI debate. He labeled arguments from certain “doomers” that AI cannot be built safely as “nonsense,” calling such positions “intellectually and morally unserious.” He called for more thoughtfulness, honesty, and “more people willing to go against their interest.”

For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing. 



Fed chair race: Warsh overtakes Hassett as favorite to be nominated by Trump

Wall Street’s top parlor game took a sudden turn on Monday, when the prediction market Kalshi showed Kevin Warsh is now the frontrunner to be nominated as the next Federal Reserve chairman, overtaking Kevin Hassett.

Warsh, a former Fed governor, now has a 47% probability, up from 39% on Sunday and just 11% on Dec. 3. Hassett, director of the National Economic Council, has fallen to 41%, down from 51% on Sunday and 81% on Dec. 3.

A report from CNBC saying Hassett’s candidacy was running into pushback from people close to President Donald Trump seemed to put Warsh on top. The resistance stems from concerns Hassett is too close to Trump.

That followed Trump’s comment late Friday, when he told The Wall Street Journal Warsh was at the top of his list, though he added “the two Kevins are great.”

According to the Journal, Trump met Warsh on Wednesday at the White House and pressed him on whether he could be trusted to back rate cuts. 

The report surprised Wall Street, which had put overwhelming odds on Hassett as the favorite, and it lifted Warsh’s odds from the cellar.

But even prior to the Journal story, there had been rumblings in the finance world that Hassett wasn’t its preferred choice to be Fed chair.

At a private conference for asset managers on Thursday, JPMorgan Chase CEO Jamie Dimon signaled support for Warsh and predicted Hassett was likelier to support Trump on more rate cuts, sources told the Financial Times.

And in a separate report earlier this month, the FT said bond investors shared their concerns about Hassett with the Treasury Department in November, saying they were worried he would cut rates aggressively in order to please Trump.

Trump has said he will nominate a Fed chair in early 2026, with Jerome Powell’s term due to expire in May. 

For his part, Hassett appeared to put some distance between himself and Trump during an appearance on CBS’ Face the Nation on Sunday.

When asked if Trump’s voice would carry the same weight as those of the voting members on the rate-setting Federal Open Market Committee, Hassett replied, “no, he would have no weight.”

“His opinion matters if it’s good, if it’s based on data,” he explained. “And then if you go to the committee and you say, ‘well the president made this argument, and that’s a really sound argument, I think. What do you think?’ If they reject it, then they’ll vote in a different way.”



What happens to old AI chips? They’re still put to good use and don’t depreciate that fast

New AI chips seem to hit the market at an ever-quicker pace as tech companies scramble to gain supremacy in the global arms race for computational power.

But that raises the question: What happens to all those older-generation chips?

The AI stock boom has lost a lot of momentum in recent weeks due, in part, to worries that so-called hyperscalers aren’t correctly accounting for the depreciation of the hoard of chips they’ve purchased to power chatbots.

Michael Burry—the investor of Big Short fame who famously predicted the 2008 housing collapse—sounded the alarm last month when he warned AI-era profits are built on “one of the most common frauds in the modern era,” namely stretching the depreciation schedule. He estimated Big Tech will understate depreciation by $176 billion between 2026 and 2028.
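To make the mechanics of that criticism concrete, here is a minimal sketch in Python, using purely hypothetical figures (the dollar amount and useful lives below are illustrative, not Burry’s actual inputs): under straight-line accounting, a chip’s cost is spread evenly over its assumed useful life, so stretching that life lowers the depreciation expense booked each year and flatters near-term profits.

```python
# Illustrative only: hypothetical figures, not any company's actual accounting.

def annual_straight_line_depreciation(cost: float, useful_life_years: int) -> float:
    """Spread an asset's cost evenly over its assumed useful life."""
    return cost / useful_life_years

chip_spend = 10_000_000_000  # hypothetical $10B of GPUs bought in a single year

# Expense booked per year under a short schedule vs. a stretched one.
expense_3yr = annual_straight_line_depreciation(chip_spend, 3)
expense_6yr = annual_straight_line_depreciation(chip_spend, 6)

# The gap is cost that no longer hits the current year's income statement,
# which is the effect critics point to when they say schedules are "stretched."
deferred_per_year = expense_3yr - expense_6yr

print(f"3-year schedule: ${expense_3yr:,.0f} per year")
print(f"6-year schedule: ${expense_6yr:,.0f} per year")
print(f"Annual expense deferred by stretching: ${deferred_per_year:,.0f}")
```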

But according to a note last week from Alpine Macro, chip depreciation fears are overstated for three reasons.

First, the analysts pointed out that software advances accompanying next-generation chips can also level up older-generation processors. For example, software optimizations can lift the performance of Nvidia’s five-year-old A100 chip to two to three times its initial level.

Second, Alpine said the need for older chips remains strong amid rising demand for inference, the stage in which a chatbot responds to queries. In fact, the firm expects inference demand to significantly outpace demand for AI training in the coming years.

“For inference, the latest hardware helps but is often not essential, so chip quantity can substitute for cutting-edge quality,” analysts wrote, adding Google is still running seven- to eight-year-old TPUs at full utilization.

Third, China continues to demonstrate “insatiable” demand for AI chips as its supply “lags the U.S. by several generations in quality and severalfold in quantity.” And even though Beijing has banned some U.S. chips, the black market will continue to serve China’s shortfalls.

Meanwhile, not all chips used in AI belong to hyperscalers. Even the graphics processors inside everyday gaming consoles could be put to work.

A note last week from Yardeni Research pointed to “distributed AI,” which draws on unused chips in homes, crypto-mining servers, offices, universities, and data centers, linking them into global virtual networks.

While distributed AI can be slower than a cluster of chips housed in the same data center, its network architecture can be more resilient when a computer or a group of computers fails, Yardeni added.

“Though we are unable to ascertain how many GPUs were being linked in this manner, Distributed AI is certainly an interesting area worth watching, particularly given that billions are being spent to build new, large data centers,” the note said.



‘I had to take 60 meetings’: Jeff Bezos says ‘the hardest thing I’ve ever done’ was raising the first million dollars of seed capital for Amazon

Today, Amazon’s market cap is hovering around $2.38 trillion, and founder Jeff Bezos is one of the world’s richest men, worth $236.1 billion. But three decades ago, in 1995, getting the first million dollars in seed capital for Amazon was more grueling than any challenge that would follow. One year ago, at New York’s DealBook Summit, Bezos told Andrew Ross Sorkin those early fundraising efforts were an absolute slog, with dozens of meetings with angel investors, the vast majority of which ended in “hard-earned no’s.”

“I had to take 60 meetings,” Bezos said, in reference to the effort required to convince angel investors to sink tens of thousands of dollars into his company. “It was the hardest thing I’ve ever done, basically.”

The structure was straightforward: Bezos said he offered 20% of Amazon at a $5 million valuation. He eventually got around 20 investors to each put in around $50,000. But of the roughly 60 meetings he took around that time, about 40 investors said no, and those 40 “no’s” were particularly soul-crushing because each back-and-forth required “multiple meetings” and substantial effort before he got an answer.
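As a quick sanity check on those figures, a rough back-of-the-envelope calculation (assuming the round was priced the way the quoted numbers imply) shows how roughly 20 checks of about $50,000 add up to a $1 million raise, which works out to a $5 million valuation for a 20% stake:

```python
# Back-of-the-envelope check of the figures quoted in the story (illustrative).
num_investors = 20
check_size = 50_000      # approximate per-investor amount
stake_sold = 0.20        # 20% of the company

amount_raised = num_investors * check_size       # $1,000,000
implied_valuation = amount_raised / stake_sold   # $5,000,000

print(f"Total raised: ${amount_raised:,.0f}")
print(f"Implied valuation for a 20% stake: ${implied_valuation:,.0f}")
```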

Bezos said he had a hard time convincing investors selling books over the internet was a good idea. “The first question was what’s the internet? Everybody wanted to know what the internet was,” Bezos recalled. Few investors had heard of the World Wide Web, let alone grasped its commercial potential.

That said, Bezos admitted his brutal honesty with potential investors may have played a role in getting so many rejections.

“I would always tell people I thought there was a 70% chance they would lose their investment,” he said. “In retrospect, I think that might have been a little naive. But I think it was true. In fact, if anything, I think I was giving myself better odds than the real odds.”

Bezos said getting those investors on board in the mid-90s was absolutely critical. “The whole enterprise could have been extinguished then,” he said.

Bezos’ full interview with Andrew Ross Sorkin is available to watch; he starts talking about this fundraising gauntlet around the 33-minute mark.

This story was originally featured on Fortune.com


