
Business

Chinese open source AI models are eating the world—the U.S. is the exception

Hello and welcome to Eye on AI. In this edition…Gemini 3 puts Google at the top of the AI leaderboards…the White House delays an Executive Order banning state-level AI regulation…TSMC sues a former exec now at Intel…Google Research develops a new, post-Transformer AI architecture…OpenAI is pushing user engagement despite growing evidence that some users develop harmful dependencies and delusions after prolonged chatbot interactions.

I spent last week at the Fortune Innovation Forum in Kuala Lumpur, Malaysia, where I moderated several panel discussions around AI and its impacts. Among the souvenirs that I came back from KL with was a newfound appreciation for the extent to which businesses outside the U.S. and Europe really want to build on open source AI models and the extent to which they are gravitating to open source models from China.

My colleague Bea Nolan wrote a bit about this phenomenon in this newsletter a few weeks ago, but being on the ground in Southeast Asia really brought the point home: the U.S., despite having the most capable AI models out there, could well lose the AI race. And the reason is, as Chan Yip Pang, the executive director at Vertex Ventures Southeast Asia and India, said on a panel I moderated in KL, that the U.S. AI companies “build for perfection” while the Chinese AI companies “build for diffusion.”

One sometimes hears a U.S. executive, such as Airbnb CEO Brian Chesky, say that they like Chinese open source AI models because they offer good-enough performance at a very affordable price. But that attitude remains, for now at least, unusual. Many of the U.S. and European executives I talk to say they prefer the performance advantages of proprietary models from OpenAI, Anthropic, or Google. For some tasks, even an 8% performance advantage (the current gap separating top proprietary models from Chinese open source models on key software development benchmarks) can mean the difference between an AI solution that meets the threshold for being deployed at scale and one that doesn’t. These execs also say they have more confidence in the safety and security guardrails built around proprietary models.

Asia is building AI applications on Chinese open source models

That viewpoint was completely different from what I heard from the executives I met in Asia. Here, the concern was much more about having control over both data and costs. On these metrics, open source models tended to win out. Jinhui Yuan, the cofounder and CEO of SiliconFlow, a leading Chinese AI cloud hosting service, said that his company had developed numerous techniques to run open source models more cost-effectively, meaning using them to accomplish a task was significantly cheaper than trying to do the same thing with proprietary AI models. What’s more, he said that most of his customers had found that if they fine-tuned an open source model on their own data for a specific use case, they could achieve performance levels that beat proprietary models—without any risk of leaking sensitive or competitive data.

That was a point that Vertex’s Pang also emphasized. He cautioned that while proprietary model providers also offer companies services to fine-tune on their own data, usually with assurances that this data will not be used for wider training by the AI vendor, “you never know what happens behind the scenes.”

Using a proprietary model also means you are giving up control over a key cost. He says he tells the startups he is advising that if they are building an application that is fundamental to their competitive advantage or core product, they should build it on open source. “If you are a startup building an AI native application and you are selling that as your main service, you better jolly well control the technology stack, and to be able to control it, open source would be the way to go,” he said.

Cynthia Siantar, the CEO of Dyna.AI, which is based in Singapore and builds AI applications for financial services, also said she felt some of the Chinese open source models performed much better in local languages.

But what about the argument that open source AI is less secure? Cassandra Goh, the CEO of Silverlake Axis, a Malaysian company that provides technology solutions to financial services firms, said that models had to be secured within a system—for instance, with screening tools applied to prompts to prevent jailbreaking and to outputs to filter out potential problems. This was true whether the underlying model was proprietary or open source, she said.
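Goh's point, that guardrails belong to the system around the model rather than the model itself, can be sketched in a few lines. This is a toy illustration only: the patterns, helper names, and blocked-response strings are all hypothetical, and nothing here comes from Silverlake Axis. The key property is that the same prompt screening and output filtering wrap any model, proprietary or open source.

```python
import re

# Hypothetical blocklists for illustration. A real deployment would use
# dedicated moderation models and far richer rules, not two regexes.
JAILBREAK_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"pretend you have no restrictions",
]
OUTPUT_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-like string leaking into an output
]

def screen(text, patterns):
    """Return True if any blocked pattern matches the text."""
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def guarded_call(prompt, model_fn):
    """Wrap any model callable with the same pre- and post-checks."""
    if screen(prompt, JAILBREAK_PATTERNS):
        return "[blocked: prompt failed screening]"
    output = model_fn(prompt)
    if screen(output, OUTPUT_PATTERNS):
        return "[blocked: output failed filtering]"
    return output
```

Because `model_fn` is just a callable, swapping a proprietary API for a fine-tuned open source model changes nothing about the guardrail layer, which is exactly the model-agnostic argument Goh makes.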

The conversation definitely made me think that OpenAI and Anthropic, both of which are rapidly trying to expand their global footprint, may run into headwinds, particularly in the middle income countries in Southeast Asia, the Middle East, North Africa, and Latin America. It is further evidence that the U.S. probably needs to do far more to develop a more robust open source AI ecosystem beyond Meta, which has been the only significant American player in the open source frontier model space to date. (IBM has some open source foundation models but they are not as capable as the leading models from OpenAI and Anthropic.)

Should “bridge countries” band together?

And that’s not the only way in which this trip to Asia proved eye-opening. It was also fascinating to see the plans to build out AI infrastructure throughout the region. The Malaysian state of Johor, in particular, is trying to position itself as the data center hub for not just nearby Singapore, but for much of Southeast Asia. (Discussions about a tie-up with nearby Indonesia to share data center capacity are already underway.)

Johor has plans to bring on 5.8 gigawatts of data center projects in the coming years, which would consume basically all of the state’s current electricity generation capacity. The state—and Malaysia as a whole—has plans to add significantly more electricity generation, from both gas-powered plants and big solar farms, by 2030. Yet concerns are growing about what this generation capacity expansion will mean for consumer electricity bills and whether the data centers will drink up too much of the region’s fresh water. (Johor officials have told data center developers to pause development of new water-cooled facilities until 2027 amid concerns about water shortages.)

Exactly how regional players will align in the growing geopolitical competition between the U.S. and China over AI technology is a hot topic. Many seem eager to find a path that would allow them to use technology from both superpowers without having to choose a side or risk becoming a “servant” of either power. But whether they can walk this tightrope is a big open question.

Earlier this week, a group of 30 policy experts from Mila (the Quebec Artificial Intelligence Institute founded by AI “godfather” and Turing Award winner Yoshua Bengio), the Oxford Martin AI Governance Initiative, and a number of other European, East Asian, and South Asian institutions jointly issued a white paper calling on a number of middle income countries (which they called “bridge powers”) to band together to develop and share AI capacity and models so that they could achieve a degree of independence from American and Chinese AI tech.

Whether such an alliance—a kind of non-aligned movement of AI—can be achieved diplomatically and commercially, however, seems highly uncertain. But it is an idea that I am sure politicians in these bridge countries will be considering.

With that, here’s the rest of today’s AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

If you want to learn more about how AI can help your company to succeed and hear from industry leaders on where this technology is heading, I hope you’ll consider joining me at Fortune Brainstorm AI San Francisco on Dec. 8–9. Among the speakers confirmed to appear so far are Google Cloud chief Thomas Kurian, Intuit CEO Sasan Goodarzi, Databricks CEO Ali Ghodsi, Glean CEO Arvind Jain, Amazon’s Panos Panay, and many more. Register now.

FORTUNE ON AI

Amazon’s layoffs and leaked AI plans beg the question: Is the era of robot-driven unemployment upon us?—by Jason del Rey

Sam Altman says OpenAI’s first device is iPhone-level revolutionary but brings ‘peace and calm’ instead of ‘unsettling’ flashing lights and notifications—by Marco Quiroz-Gutierrez

Deloitte just got caught again citing fabricated and AI-generated research—this time in a million-dollar report for a Canadian provincial government—by Nino Paoli

Lovable’s CEO targets enterprise customers as the ‘vibe-coding’ unicorn doubles its annual revenue to $200 million in just four months—by Beatrice Nolan

AI IN THE NEWS

White House launches “Genesis Mission” to give AI-driven boost to science. President Trump signed an executive order launching what he is calling the “Genesis Mission,” a massive federal initiative to harness artificial intelligence and government science datasets via the U.S. Department of Energy and its national laboratories. The mission aims to build a unified AI‐driven research platform—linking supercomputers, university and industry partners, and federal data—to accelerate breakthroughs in fields like energy, engineering, biotech and national security. While pitched as a scientific “moonshot”-style effort, the initiative faces questions about its funding model and how it will manage sensitive national-security and proprietary data. Read more here from Reuters.

TSMC sues former executive who defected to Intel over alleged trade secret theft. TSMC has sued former senior executive Lo Wei-Jen, now at Intel, alleging he took or could disclose the company’s trade secrets, the Financial Times reports. The company alleges that Lo told it he planned to enter academia after retiring in July. The case underscores intensifying geopolitical and commercial pressures in the global race for advanced chipmaking, as TSMC—responsible for more than 90% of the world’s leading-edge semiconductors—faces rising competition backed by a major U.S. government investment in Intel.

Google debuts Gemini 3 model, hailed by the company and some users as a big advance. Google launched its Gemini 3 large language model last week. The model surpassed rival models from OpenAI and Anthropic on a wide range of benchmark tests, and its performance seems to have largely impressed users who have tried it, according to social media posts and blogs. The launch of Gemini 3—which Google immediately integrated into its AI-powered search features, such as AI Overviews and “AI Mode” in Google Search—is being hailed as a turning point in the AI race, helping restore investor confidence in Google-parent company Alphabet after years of anxiety about it losing ground. You can read more from the Wall Street Journal here.

Anthropic premieres Claude Opus 4.5. Anthropic unveiled Claude Opus 4.5, its newest and most powerful AI model, designed to excel at complex business tasks and coding. The premiere—Anthropic’s third major model release in two months—comes as the company’s valuation has surged to roughly $350 billion following multibillion-dollar investments from Microsoft and Nvidia. Anthropic says Opus 4.5 outperforms Google’s Gemini 3 Pro (see above news item) and OpenAI’s GPT-5.1 on coding benchmarks and even beat human candidates on its internal engineering exam. The model is rolling out alongside upgraded tools including Claude for Chrome, Claude for Excel, and enhanced developer features, according to a story in CNBC.

White House reportedly pauses work on Executive Order targeting state AI laws. Reuters reports that the White House has paused a draft executive order that would have aggressively challenged state AI regulations by directing the Justice Department to sue states and potentially withhold federal broadband funds from those that impose AI rules. The move—backed by major tech firms seeking uniform national standards—sparked bipartisan criticism from state officials and lawmakers, who argued it would undermine consumer protection and was potentially unconstitutional. The administration may still try to include a moratorium on state-level AI rules in the National Defense Authorization Act or another spending bill that Congress has to pass in the coming weeks. But so far, opposition highlights the intense political backlash to federal attempts to preempt state AI laws.

OpenAI offices locked down due to concerns about former Stop AI activist. OpenAI employees in San Francisco were briefly instructed to remain inside the office after police received a report that one of the cofounders of Stop AI had allegedly made threats to harm staff and might have acquired weapons. Stop AI publicly disavowed the individual and reaffirmed its commitment to nonviolence. Stop AI is an activist group trying to stop the development of increasingly powerful AI systems, which it fears are already harming society and also represent a potentially existential risk to humanity. The group has engaged in a number of public demonstrations and acts of civil disobedience outside the offices of major AI labs. Read more here from Wired.

EYE ON AI RESEARCH

Are we inching closer to a post-Transformer world? It’s been eight years since researchers at Google published their landmark research paper, “Attention Is All You Need,” which introduced the world to the Transformer, a neural network design that is particularly good at predicting the next item in a sequence even when it depends on items that appeared much earlier. Transformers are what all of today’s large language models are based on. But Transformer-based AI models have several drawbacks. They don’t learn continuously. And, like most neural networks, they don’t have any kind of long-term memory. So, for several years now, researchers have been wondering whether some new fundamental AI architecture will come along to displace the Transformer.

Well, we might be getting closer. Earlier this month, researchers—once again from Google—published a paper on what they are calling Nested Learning. It essentially breaks the neural network’s architecture into nested groups of digital neurons that update their weights at different frequencies based on how surprising any given piece of information is compared to what that part of the model would have predicted. The parts that update their weights more slowly form the longer-term memory of the model, while the parts that update their weights more frequently form a kind of shorter-term “working memory.” And nested between them are blocks of neurons that update at a medium speed, which modulate between the shorter- and longer-term memories. As an example of how this can work in practice, the researchers created an architecture they call HOPE that learns its own best way of optimizing each of these nested blocks. You can read the Google research here.
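The multi-timescale idea can be pictured with a toy sketch. This is not the HOPE architecture or the paper's actual update rule; the rates and the `nested_step` helper are made up purely to show how the same stream of observations can leave a fast "working memory" nearly up to date while a slow "long-term memory" barely moves.

```python
def update(memory, value, rate):
    """Blend a new observation into a memory at a given learning rate."""
    return memory + rate * (value - memory)

def nested_step(memories, observation, rates=(0.5, 0.1, 0.01)):
    """Update fast, medium, and slow memories toward the same observation.
    The slow memory sees the same surprise (prediction error) but, with a
    tiny rate, changes little per step and so retains the longer term."""
    return [update(m, observation, r) for m, r in zip(memories, rates)]

# Feed the same observation (1.0) for ten steps, starting from zero.
memories = [0.0, 0.0, 0.0]
for _ in range(10):
    memories = nested_step(memories, 1.0)

fast, medium, slow = memories  # fast ~0.999, medium ~0.651, slow ~0.096
```

After ten identical observations the fast memory has essentially converged while the slow one has barely begun to shift, which is the separation of timescales the Google researchers exploit, albeit with learned optimizers rather than fixed rates.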

AI CALENDAR

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Feb. 10-11: AI Action Summit, New Delhi, India.

BRAIN FOOD

OpenAI is optimizing for engagement, even though there’s growing evidence its product harms some users. That’s the conclusion of a fascinating New York Times investigation that details how increasing commercial pressures within OpenAI—and a new cadre of executives hired from traditional tech and social media companies—have been driving the company to design ChatGPT to keep users engaged. The company is proceeding down this path, the newspaper reports, even as its own research shows some ChatGPT users develop dangerous emotional and psychological dependencies on the chatbot and that some subset of those become delusional after prolonged dialogues with OpenAI’s AI.

The story is a reminder of why AI regulation is necessary. We’ve seen this movie before with social media, and it doesn’t end well for individuals or society. Any company that offers its service for free or substantially below cost—which is the case for most consumer-oriented AI products right now—has a strong incentive to monetize the user, either through engagement (and advertising) or through directly paid persuasion, which is in some ways worse than conventional advertising. Neither is in the user’s best interest.




Business

Senate Dems’ plan to fix Obamacare premiums adds nearly $300 billion to deficit, CRFB says

The Committee for a Responsible Federal Budget (CRFB) is a nonpartisan watchdog that regularly estimates how much the U.S. Congress is adding to the $38 trillion national debt.

With enhanced Affordable Care Act (ACA) subsidies due to expire within days, some Senate Democrats are scrambling to protect millions of Americans from getting the unpleasant holiday gift of spiking health insurance premiums. The CRFB says there’s just one problem with the plan: It’s not funded.

“With the national debt as large as the economy and interest payments costing $1 trillion annually, it is absurd to suggest adding hundreds of billions more to the debt,” CRFB President Maya MacGuineas wrote in a statement on Friday afternoon.

The proposal, backed by members of the Senate Democratic caucus, would fully extend the enhanced ACA subsidies for three years, from 2026 through 2028, with no additional income limits on who can qualify. Those subsidies, originally boosted during the pandemic and later renewed, were designed to lower premiums and prevent coverage losses for middle‑ and lower‑income households purchasing insurance on the ACA exchanges.

CRFB estimated that even this three‑year extension alone would add roughly $300 billion to federal deficits over the next decade, largely because the federal government would continue to shoulder a larger share of premium costs while enrollment and subsidy amounts remain elevated. If Congress ultimately moves to make the enhanced subsidies permanent—as many advocates have urged—the total cost could swell to nearly $550 billion in additional borrowing over the next decade.

Reversing recent guardrails

MacGuineas called the Senate bill “far worse than even a debt-financed extension” as it would roll back several “program integrity” measures that were enacted as part of a 2025 reconciliation law and were intended to tighten oversight of ACA subsidies. On top of that, it would be funded by borrowing even more. “This is a bad idea made worse,” MacGuineas added.

The watchdog group’s central critique is that the new Senate plan does not attempt to offset its costs through spending cuts or new revenue and, in their view, goes beyond a simple extension by expanding the underlying subsidy structure.

The legislation would permanently repeal restrictions that eliminated subsidies for certain groups enrolling during special enrollment periods and would scrap rules requiring full repayment of excess advance subsidies and stricter verification of eligibility and tax reconciliation. The bill would also nullify portions of a 2025 federal regulation that loosened limits on the actuarial value of exchange plans and altered how subsidies are calculated, effectively reshaping how generous plans can be and how federal support is determined. CRFB warned these reversals would increase costs further while weakening safeguards designed to reduce misuse and error in the subsidy system.

MacGuineas said that any subsidy extension should be paired with broader reforms to curb health spending and reduce overall borrowing. In her view, lawmakers are missing a chance to redesign ACA support in a way that lowers premiums while also improving the long‑term budget outlook.

The debate over ACA subsidies recently contributed to a government funding standoff, and CRFB argued that the new Senate bill reflects a political compromise that prioritizes short‑term relief over long‑term fiscal responsibility.

“After a pointless government shutdown over this issue, it is beyond disappointing that this is the preferred solution to such an important issue,” MacGuineas wrote.

The off-year elections cast the government shutdown and cost-of-living arguments in a different light. Democrats made stunning gains and almost flipped a deep-red district in Tennessee as politicians from the far left and center coalesced around “affordability.”

Senate Minority Leader Chuck Schumer is reportedly smelling blood in the water and doubling down on the theme heading into the pivotal midterm elections of 2026. President Donald Trump is scheduled to visit Pennsylvania soon to discuss pocketbook anxieties. But he is repeating predecessor Joe Biden’s habit of dismissing inflation, despite widespread evidence to the contrary.

“We fixed inflation, and we fixed almost everything,” Trump said in a Tuesday cabinet meeting, in which he also dismissed affordability as a “hoax” pushed by Democrats.

Lawmakers on both sides of the aisle now face a politically fraught choice: allow premiums to jump sharply—including in swing states like Pennsylvania where ACA enrollees face double‑digit increases—or pass an expensive subsidy extension that would, as CRFB calculates, explode the deficit without addressing underlying health care costs.




Business

Netflix–Warner Bros. deal sets up $72 billion antitrust test

Netflix Inc. has won the heated takeover battle for Warner Bros. Discovery Inc. Now it must convince global antitrust regulators that the deal won’t give it an illegal advantage in the streaming market. 

The $72 billion tie-up joins the world’s dominant paid streaming service with one of Hollywood’s most iconic movie studios. It would reshape the market for online video content by combining the No. 1 streaming player with the No. 4 service, HBO Max, and its blockbuster hits such as Game of Thrones, Friends, and the DC Universe franchise of comic-book characters.

That could raise red flags for global antitrust regulators over concerns that Netflix would have too much control over the streaming market. The company faces a lengthy Justice Department review and a possible US lawsuit seeking to block the deal if it doesn’t adopt some remedies to get it cleared, analysts said.

“Netflix will have an uphill climb unless it agrees to divest HBO Max as well as additional behavioral commitments — particularly on licensing content,” said Bloomberg Intelligence analyst Jennifer Rie. “The streaming overlap is significant,” she added, saying the argument that “the market should be viewed more broadly is a tough one to win.”

By choosing Netflix, Warner Bros. has jilted another bidder, Paramount Skydance Corp., a move that risks touching off a political battle in Washington. Paramount is backed by the world’s second-richest man, Larry Ellison, and his son, David Ellison, and the company has touted their longstanding close ties to President Donald Trump. Their acquisition of Paramount, which closed in August, has won public praise from Trump. 

Comcast Corp. also made a bid for Warner Bros., looking to merge it with its NBCUniversal division.

The Justice Department’s antitrust division, which would review the transaction in the US, could argue that the deal is illegal on its face because the combined market share would put Netflix well over a 30% threshold.

The White House, the Justice Department and Comcast didn’t immediately respond to requests for comment. 

US lawmakers from both parties, including Republican Representative Darrell Issa and Democratic Senator Elizabeth Warren, have already faulted the transaction — which would create a global streaming giant with 450 million users — as harmful to consumers.

“This deal looks like an anti-monopoly nightmare,” Warren said after the Netflix announcement. Utah Senator Mike Lee, a Republican, said in a social media post earlier this week that a Warner Bros.-Netflix tie-up would raise more serious competition questions “than any transaction I’ve seen in about a decade.”

European Union regulators are also likely to subject the Netflix proposal to an intensive review amid pressure from legislators. In the UK, the deal has already drawn scrutiny before the announcement, with House of Lords member Baroness Luciana Berger pressing the government on how the transaction would impact competition and consumer prices.

The combined company could raise prices and broadly impact “culture, film, cinemas and theater releases,” said Andreas Schwab, a leading member of the European Parliament on competition issues, after the announcement.

Paramount has sought to frame the Netflix deal as a non-starter. “The simple truth is that a deal with Netflix as the buyer likely will never close, due to antitrust and regulatory challenges in the United States and in most jurisdictions abroad,” Paramount’s antitrust lawyers wrote to their counterparts at Warner Bros. on Dec. 1.

Appealing directly to Trump could help Netflix avoid intense antitrust scrutiny, New Street Research’s Blair Levin wrote in a note on Friday. Levin said it’s possible that Trump could come to see the benefit of switching from a pro-Paramount position to a pro-Netflix position. “And if he does so, we believe the DOJ will follow suit,” Levin wrote.

Netflix co-Chief Executive Officer Ted Sarandos had dinner with Trump at the president’s Mar-a-Lago resort in Florida last December, a move other CEOs made after the election in order to win over the administration. In a call with investors Friday morning, Sarandos said that he’s “highly confident in the regulatory process,” contending the deal favors consumers, workers and innovation. 

“Our plans here are to work really closely with all the appropriate governments and regulators, but really confident that we’re going to get all the necessary approvals that we need,” he said.

Netflix will likely argue to regulators that other video services such as Google’s YouTube and ByteDance Ltd.’s TikTok should be included in any analysis of the market, which would dramatically shrink the company’s perceived dominance.

The US Federal Communications Commission, which regulates the transfer of broadcast-TV licenses, isn’t expected to play a role in the deal, as neither company holds such licenses. Warner Bros. plans to spin off its cable TV division, which includes channels such as CNN, TBS and TNT, before the sale.

Even if antitrust reviews just focus on streaming, Netflix believes it will ultimately prevail, pointing to Amazon.com Inc.’s Prime and Walt Disney Co. as other major competitors, according to people familiar with the company’s thinking. 

Netflix is expected to argue that more than 75% of HBO Max subscribers already subscribe to Netflix, making them complementary offerings rather than competitors, said the people, who asked not to be named discussing confidential deliberations. The company is expected to make the case that reducing its content costs through owning Warner Bros., eliminating redundant back-end technology and bundling Netflix with Max will yield lower prices.




Business

The rise of AI reasoning models comes with a big energy tradeoff

Nearly all leading artificial intelligence developers are focused on building AI models that mimic the way humans reason, but new research shows these cutting-edge systems can be far more energy intensive, adding to concerns about AI’s strain on power grids.

AI reasoning models used 30 times more energy on average to respond to 1,000 written prompts than alternatives without this reasoning capability or with it disabled, according to a study released Thursday. The work was carried out by the AI Energy Score project, led by Hugging Face research scientist Sasha Luccioni and Salesforce Inc. head of AI sustainability Boris Gamazaychikov.

The researchers evaluated 40 open, freely available AI models, including software from OpenAI, Alphabet Inc.’s Google and Microsoft Corp. Some models showed a much wider disparity in energy consumption, including one from Chinese upstart DeepSeek. A slimmed-down version of DeepSeek’s R1 model used just 50 watt hours to respond to the prompts when reasoning was turned off, about as much energy as a 50-watt lightbulb uses in an hour. With the reasoning feature enabled, the same model required 7,626 watt hours to complete the tasks.

The soaring energy needs of AI have increasingly come under scrutiny. As tech companies race to build more and bigger data centers to support AI, industry watchers have raised concerns about strain on power grids and rising energy costs for consumers. A Bloomberg investigation in September found that wholesale electricity prices rose as much as 267% over the past five years in areas near data centers. There are also environmental drawbacks: Microsoft, Google and Amazon.com Inc. have previously acknowledged the data center buildout could complicate their long-term climate objectives.

More than a year ago, OpenAI released its first reasoning model, called o1. Where its prior software replied almost instantly to queries, o1 spent more time computing an answer before responding. Many other AI companies have since released similar systems, with the goal of solving more complex multistep problems for fields like science, math and coding.

Though reasoning systems have quickly become the industry norm for carrying out more complicated tasks, there has been little research into their energy demands. Much of the increase in power consumption is due to reasoning models generating much more text when responding, the researchers said. 

The new report aims to better understand how AI energy needs are evolving, Luccioni said. She also hopes it helps people better understand that there are different types of AI models suited to different actions. Not every query requires tapping the most computationally intensive AI reasoning systems.

“We should be smarter about the way that we use AI,” Luccioni said. “Choosing the right model for the right task is important.”

To test the difference in power use, the researchers ran all the models on the same computer hardware. They used the same prompts for each, ranging from simple questions — such as asking which team won the Super Bowl in a particular year — to more complex math problems. They also used a software tool called CodeCarbon to track how much energy was being consumed in real time.

The results varied considerably. The researchers found one of Microsoft’s Phi 4 reasoning models used 9,462 watt hours with reasoning turned on, compared with about 18 watt hours with it off. OpenAI’s largest gpt-oss model, meanwhile, had a less stark difference. It used 8,504 watt hours with reasoning on the most computationally intensive “high” setting and 5,313 watt hours with the setting turned down to “low.” 
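Taking the figures reported above at face value, a quick back-of-the-envelope comparison shows how uneven the reasoning penalty is across models. The numbers come straight from the article (watt hours for the same 1,000-prompt workload); the model labels are shorthand and the arithmetic is just illustrative.

```python
# (reasoning on, reasoning off or "low") in watt hours per 1,000 prompts,
# as reported in the AI Energy Score study described above.
reported = {
    "DeepSeek R1 (distilled)": (7626, 50),
    "Microsoft Phi 4 (reasoning)": (9462, 18),
    "OpenAI gpt-oss (high vs. low)": (8504, 5313),
}

for model, (on, off) in reported.items():
    ratio = on / off
    wh_per_prompt = on / 1000  # energy per prompt with reasoning enabled
    print(f"{model}: {ratio:.1f}x the energy, {wh_per_prompt:.2f} Wh/prompt")
```

The spread is striking: roughly 153x for the DeepSeek model and over 500x for Phi 4, but only about 1.6x between gpt-oss's "high" and "low" settings, which is why the study's 30x figure is an average that hides very different per-model behavior.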

OpenAI, Microsoft, Google and DeepSeek did not immediately respond to a request for comment.

Google released internal research in August that estimated the median text prompt for its Gemini AI service used 0.24 watt-hours of energy, roughly equal to watching TV for less than nine seconds. Google said that figure was “substantially lower than many public estimates.” 

Much of the discussion about AI power consumption has focused on large-scale facilities set up to train artificial intelligence systems. Increasingly, however, tech firms are shifting more resources to inference, or the process of running AI systems after they’ve been trained. The push toward reasoning models is a big piece of that as these systems are more reliant on inference.

Recently, some tech leaders have acknowledged that AI’s power draw needs to be reckoned with. Microsoft CEO Satya Nadella said the industry must earn the “social permission to consume energy” for AI data centers in a November interview. To do that, he argued tech must use AI to do good and foster broad economic growth.




Copyright © Miami Select.