
Business

In Silicon Valley’s latest vibe shift, leading AI bosses are no longer so eager to talk about AGI

Once upon a time—meaning, um, as recently as earlier this year—Silicon Valley couldn’t stop talking about AGI.

OpenAI CEO Sam Altman wrote in January that “we are now confident we know how to build AGI.” That came after he told a Y Combinator video podcast in late 2024 that AGI might be achieved in 2025 and tweeted in 2024 that OpenAI had “AGI achieved internally.” OpenAI was so AGI-entranced that its head of sales dubbed her team “AGI sherpas” and its former chief scientist Ilya Sutskever led fellow researchers in campfire chants of “Feel the AGI!”

OpenAI’s partner and major financial backer Microsoft put out a paper in 2024 claiming OpenAI’s GPT-4 AI model exhibited “sparks of AGI.” Meanwhile, Elon Musk founded xAI in March 2023 with a mission to build AGI, a development he said might occur as soon as 2025 or 2026. Demis Hassabis, the Nobel-laureate co-founder of Google DeepMind, told reporters that the world was “on the cusp” of AGI. Meta CEO Mark Zuckerberg said his company was committed to “building full general intelligence” to power the next generation of its products and services. Dario Amodei, the co-founder and CEO of Anthropic, while saying he disliked the term AGI, said “powerful AI” could arrive by 2027 and usher in a new age of health and abundance—if it didn’t wind up killing us all. Eric Schmidt, the former Google CEO turned prominent tech investor, said in a talk in April that we would have AGI “within three to five years.”

Now the AGI fever is breaking, in what amounts to a wholesale vibe shift toward pragmatism over utopian visions. For example, in a CNBC appearance this summer, Altman called AGI “not a super-useful term.” In the New York Times, Schmidt—yes, the same guy who was talking up AGI in April—urged Silicon Valley to stop fixating on superhuman AI, warning that the obsession distracts from building useful technology. Both AI pioneer Andrew Ng and U.S. AI czar David Sacks called AGI “overhyped.”

AGI: under-defined and over-hyped

What happened? Well, first, a little background. Everyone agrees that AGI stands for “artificial general intelligence.” And that’s pretty much all everyone agrees on. People define the term in subtly but importantly different ways. Among the first to use it was physicist Mark Avrum Gubrud, who wrote in a 1997 research article that “by advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.”

The term was later picked up and popularized in the early 2000s by AI researcher Shane Legg, who would go on to co-found Google DeepMind with Hassabis, and fellow computer scientists Ben Goertzel and Peter Voss. They defined AGI, according to Voss, as an AI system that could learn to “reliably perform any cognitive task that a competent human can.” That definition had some problems—for instance, who decides who qualifies as a competent human? And, since then, other AI researchers have developed different definitions that see AGI as AI that is as capable as any human expert at all tasks, as opposed to merely a “competent” person. OpenAI was founded in late 2015 with the explicit mission of developing AGI “for the benefit of all,” and it added its own twist to the AGI definition debate. The company’s charter says AGI is an autonomous system that can “outperform humans at most economically valuable work.”

But whatever AGI is, the important thing these days, it seems, is not to talk about it. And the reason why has to do with growing concerns that progress in AI development may not be galloping ahead as fast as industry insiders touted just a few months ago—and growing indications that all the AGI talk was stoking inflated expectations that the tech itself couldn’t live up to.

Among the biggest factors in AGI’s sudden fall from grace seems to have been the roll-out of OpenAI’s GPT-5 model in early August. Just over two years after Microsoft’s claim that GPT-4 showed “sparks” of AGI, the new model landed with a thud: incremental improvements wrapped in a routing architecture, not the breakthrough many expected. Goertzel, who helped coin the phrase AGI, reminded the public that while GPT-5 is impressive, it remains nowhere near true AGI—lacking real understanding, continuous learning, or grounded experience.

Altman’s retreat from AGI language is especially striking given his prior position. OpenAI was built on AGI hype: AGI is in the company’s founding mission, it helped raise billions in capital, and it underpins the partnership with Microsoft. A clause in their agreement even states that if OpenAI’s nonprofit board declares it has achieved AGI, Microsoft’s access to future technology would be restricted. Microsoft—after investing more than $13 billion—is reportedly pushing to remove that clause, and has even considered walking away from the deal. Wired also reported on an internal OpenAI debate over whether publishing a paper on measuring AI progress could complicate the company’s ability to declare it had achieved AGI. 

A ‘very healthy’ vibe shift

But whether observers think the vibe shift is a marketing move or a market response, many, particularly on the corporate side, say it is a good thing. Shay Boloor, chief market strategist at Futurum Equities, called the move “very healthy,” noting that markets reward execution, not vague “someday superintelligence” narratives. 

Others stress that the real shift is away from a monolithic AGI fantasy, toward domain-specific “superintelligences.” Daniel Saks, CEO of agentic AI company Landbase, argued that “the hype cycle around AGI has always rested on the idea of a single, centralized AI that becomes all-knowing,” but said that is not what he sees happening. “The future lies in decentralized, domain-specific models that achieve superhuman performance in particular fields,” he told Fortune.

Christopher Symons, chief AI scientist at digital health platform Lirio, said that the term AGI was never useful: Those promoting AGI, he explained, “draw resources away from more concrete applications where AI advancements can most immediately benefit society.” 

Still, the retreat from AGI rhetoric doesn’t mean the mission—or the phrase—has vanished. Anthropic and DeepMind executives continue to call themselves “AGI-pilled,” which is a bit of insider slang. Even that phrase is disputed, though; for some it refers to the belief that AGI is imminent, while others say it’s simply the belief that AI models will continue to improve. But there is no doubt that there is more hedging and downplaying than doubling down.

Some still call out urgent risks

And for some, that hedging is exactly what makes the risks more urgent. Former OpenAI researcher Steven Adler told Fortune: “We shouldn’t lose sight that some AI companies are explicitly aiming to build systems smarter than any human. AI isn’t there yet, but whatever you call this, it’s dangerous and demands real seriousness.”

Others accuse AI leaders of changing their tune on AGI to muddy the waters in a bid to avoid regulation. Max Tegmark, president of the Future of Life Institute, says Altman calling AGI “not a useful term” isn’t scientific humility, but a way for the company to steer clear of regulation while continuing to build towards more and more powerful models. 

“It’s smarter for them to just talk about AGI in private with their investors,” he told Fortune, adding that “it’s like a cocaine salesman saying that it’s unclear whether cocaine is really a drug,” because it’s just so complex and difficult to decipher.

Call it AGI or call it something else—the hype may fade and the vibe may shift, but with so much on the line, from money and jobs to security and safety, the real questions about where this race leads are only just beginning.




Business

Trump says he’ll allow Nvidia to sell advanced chips to ‘approved customers’ in China

President Donald Trump said Monday that he would allow Nvidia to sell an advanced type of computer chip used in the development of artificial intelligence to “approved customers” in China.

There have been concerns about allowing advanced computer chips to be sold to China, as they could help the country better compete against the U.S. in building out AI capabilities. But there has also been a desire to develop the AI ecosystem with American companies such as chipmaker Nvidia.

The chip, known as the H200, is not Nvidia’s most advanced product. Those chips, called Blackwell and the upcoming Rubin, were not part of what Trump approved.

Trump said on social media that he had informed China’s leader Xi Jinping about his decision and “President Xi responded positively!”

“This policy will support American Jobs, strengthen U.S. Manufacturing, and benefit American Taxpayers,” Trump said in his post.

Nvidia said in a statement that it applauded Trump’s decision, saying the choice would support domestic manufacturing and that by allowing the Commerce Department to vet commercial customers it would “strike a thoughtful balance” on economic and national security priorities.

Trump said the Commerce Department was “finalizing the details” for other chipmakers such as AMD and Intel to sell their technologies abroad.

The approval of the licenses to sell Nvidia H200 chips reflects the increasing power and close relationship that the company’s founder and CEO, Jensen Huang, enjoys with the president. But there have been concerns that China will find ways to use the chips to develop its own AI products in ways that could pose national security risks for the U.S., a primary concern of the Biden administration that sought to limit exports.

Nvidia has a market cap of $4.5 trillion, and Trump’s announcement appeared to drive the stock slightly higher in after-hours trading.




Business

Google Cloud CEO lays out 3-part AI plan after identifying it as the ‘most problematic thing’

The immense electricity needs of AI computing were flagged early on as a bottleneck, prompting Alphabet’s Google Cloud to plan how to source energy and how to use it, according to Google Cloud CEO Thomas Kurian.

Speaking at the Fortune Brainstorm AI event in San Francisco on Monday, he pointed out that the company—a key enabler in the AI infrastructure landscape—has been working on AI since well before large language models came along, and has taken the long view.

“We also knew that the most problematic thing that was going to happen was going to be energy, because energy and data centers were going to become a bottleneck alongside chips,” Kurian told Fortune’s Andrew Nusca. “So we designed our machines to be super efficient.”

The International Energy Agency has estimated that some AI-focused data centers consume as much electricity as 100,000 homes, and some of the largest facilities under construction could even use 20 times that amount.

At the same time, worldwide data center capacity will increase by 46% over the next two years, equivalent to a jump of almost 21,000 megawatts, according to real estate consultancy Knight Frank.  

At the Brainstorm event, Kurian laid out Google Cloud’s three-pronged approach to ensuring that there will be enough energy to meet all that demand.

First, the company seeks to be as diversified as possible in the kinds of energy that power AI computation. While many people say any form of energy can be used, that’s actually not true, he said.

“If you’re running a cluster for training and you bring it up and you start running a training job, the spike that you have with that computation draws so much energy that you can’t handle that from some forms of energy production,” Kurian explained.

The second part of Google Cloud’s strategy is being as efficient as possible, including how it reuses energy within data centers, he added.

In fact, the company uses AI in its control systems to monitor thermodynamic exchanges necessary in harnessing the energy that has already been brought into data centers.

And third, Google Cloud is working on “some new fundamental technologies to actually create energy in new forms,” Kurian said without elaborating further.

Earlier on Monday, utility company NextEra Energy and Google Cloud said they are expanding their partnership and will develop new U.S. data center campuses that will include new power plants as well.

Tech leaders have warned that energy supply is critical to AI development alongside innovations in chips and improved language models.

The ability to build data centers is another potential chokepoint as well. Nvidia CEO Jensen Huang recently pointed out China’s advantage on that front compared to the U.S.

“If you want to build a data center here in the United States, from breaking ground to standing up an AI supercomputer is probably about three years,” he said at the Center for Strategic and International Studies in late November. “They can build a hospital in a weekend.”




Business

Pepsi to cut product offerings nearly 20% in deal with $4 billion activist Elliott

PepsiCo plans to cut prices and eliminate some of its products under a deal with an activist investor announced Monday.

The Purchase, New York-based company, which makes Cheetos, Tostitos and other Frito-Lay products as well as beverages, said it will cut nearly 20% of its product offerings by early next year. PepsiCo said it will use the savings to invest in marketing and improved value for consumers. It didn’t disclose which products or how much it would cut prices.

PepsiCo said it also plans to accelerate the introduction of new offerings with simpler and more functional ingredients, including Doritos Protein and Simply NKD Cheetos and Doritos, which contain no artificial flavors or colors. The company also recently introduced a prebiotic version of its signature cola.

PepsiCo is making the changes after prodding from Elliott Investment Management, which took a $4 billion stake in the company in September. In a letter to PepsiCo’s board, Elliott said the company is being hurt by a lack of strategic clarity, decelerating growth and eroding profitability in its North American food and beverage businesses.

In a joint statement with PepsiCo Monday, Elliott Partner Marc Steinberg said the firm is confident that PepsiCo can create value for shareholders as it executes on its new plan.

“We appreciate our collaborative engagement with PepsiCo’s management team and the urgency they have demonstrated,” Steinberg said. “We believe the plan announced today to invest in affordability, accelerate innovation and aggressively reduce costs will drive greater revenue and profit growth.”

Elliott said it plans to continue working closely with the company.

PepsiCo shares were flat in after-hours trading Monday.

PepsiCo said it expects organic revenue to grow between 2% and 4% in 2026. The company’s organic revenue rose 1.5% in the first nine months of this year.

PepsiCo also said it plans to review its supply chain and continue to make changes to its board, with a focus on global leaders who can help it reach its growth and profitability goals.

“We feel encouraged about the actions and initiatives we are implementing with urgency to improve both marketplace and financial performance,” PepsiCo Chairman and CEO Ramon Laguarta said in a statement.

PepsiCo said in February that years of double-digit price increases and changing customer preferences have weakened demand for its drinks and snacks. In July, the company said it was trying to combat perceptions that its products are too expensive by expanding distribution of value brands like Chester’s and Santitas.



