Key takeaways from Fortune Brainstorm AI Singapore 2025

Hello and welcome to Eye on AI. In this edition…China launches its own AI Action Plan…Meta hires an OpenAI veteran for a new “chief scientist” role, raising questions about the status of AI “godfather” LeCun…what if AI doesn’t speed up scientific progress?…and economists can’t agree on the impact AI superintelligence could have.

I spent last week in Singapore at Fortune Brainstorm AI Singapore. It was our second time hosting this event in the thriving city state, and I was eager to find out what had changed since last year. Here are some of the key thoughts and impressions I took away from the conference:

The pace of AI adoption is equally fast everywhere. With previous technological waves, many Asian companies and countries lagged the U.S., Europe, and China in adoption. But that’s not the case with AI. Instead, the pace of deployment seems equally fast—and equally ambitious—everywhere.

Everyone wants AI agents. Few are actually using them yet. Everyone anticipated AI agents last year; now they’re here from OpenAI, Google, Anthropic, and others. Yet adoption still trails the hype everywhere. Why?

Agents, by their very nature, are higher risk than most kinds of predictive AI or generative AI that simply produces content. And right now, AI agents are often not that reliable. Some of the ways to make them more reliable—such as using multiple agents, each assigned a specific task within a workflow and with some agents assigned to check the work of others—are also expensive.
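To make that reliability-versus-cost trade-off concrete, here is a minimal, hypothetical sketch of the kind of "worker plus checker" pattern described above. The call_llm function is a stand-in for whatever model API a team actually uses, not any particular vendor's SDK.

```python
# Minimal sketch of a "worker plus checker" agent pattern (hypothetical example).
# call_llm is a stand-in for a real model API call; this stub just echoes so the
# sketch runs end to end.
def call_llm(prompt: str) -> str:
    return "PASS" if "Reply PASS" in prompt else f"[draft answer for: {prompt[:40]}...]"

def run_with_checker(task: str, max_retries: int = 2) -> str:
    """One agent drafts an answer; a second agent checks it before it is accepted."""
    draft = call_llm(f"Complete this task:\n{task}")
    for _ in range(max_retries):
        verdict = call_llm(
            f"Task: {task}\nProposed answer: {draft}\n"
            "Reply PASS if the answer is correct and complete; otherwise explain the problem."
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft
        # Feed the critique back to the worker agent and try again. Every retry
        # adds more model calls, which is where the extra cost comes from.
        draft = call_llm(f"Task: {task}\nPrevious attempt: {draft}\nCritique: {verdict}\nRevise.")
    return draft  # fall back to the last draft; low-confidence cases go to a human

print(run_with_checker("Summarize this invoice and flag any missing fields."))
```

Every verification pass layers additional model calls on top of the original one, which is why this kind of multi-agent setup tends to be both more reliable and more expensive.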

As a result, Vivek Luthra, Accenture’s Asia-Pacific data and AI lead, said that most businesses are using AI to assist human workers within existing workflows. In some cases, they may be using AI as an “advisor” to provide decision support. But few are automating entire workflows.

Luthra, however, predicts this will change dramatically. By 2028, he forecasts that one-third of large companies will have deployed AI agents, and that about 15% of day-to-day workflows could be fully automated. (Accenture is a sponsor of Brainstorm AI.) This is because costs will continue to come down, models will continue to become more capable and reliable, and more companies will figure out how to redesign workflows to take advantage of these new agentic properties.

AI’s impact on the job market is not easy to discern—yet. Pei Ying Chua, LinkedIn’s APAC head economist, told the conference that despite anecdotal reports that young graduates are struggling to find work, there’s not yet much evidence of this in the data on open roles that LinkedIn tracks. That said, there has been an uptick in the average number of applications required before coders land a job.

On the same panel with Chua, both Madhu Kurup, vice president of engineering at Indeed, and Sun Sun Lim, vice president at Singapore Management University, emphasized the need for employees to acquire AI skills—techniques for prompting models, familiarity with how to build an AI agent, an understanding of the strengths and weaknesses of different kinds of AI—as well as human “soft skills.” As AI transforms all jobs, soft skills like flexibility, resilience, and critical thinking matter more than ever, the two panelists said.

Jess O’Reilly, Workday’s general manager for ASEAN, said that she thinks AI will lead many companies to adopt an organizational structure based more around teams from diverse functional areas coming together for specific projects and then being reconfigured for the next project. She said this would be like an “internal gig economy” for employees. Traditional reporting lines and vertical organization would need to change in favor of a flatter, more dynamic org chart, she said.

Infrastructure is destiny. From several panels at Brainstorm AI Singapore, it was clear that access to AI infrastructure is going to be critical. This is true even when countries don’t want to build their own models. Just running models—what’s known as “inference”—also requires a lot of AI chips.

But building data center capacity requires big investments in energy. Rangu Salgame, CEO and co-founder of Princeton Digital Group, said that in the near term, fossil fuels, especially natural gas, would likely be used to power the data center buildout in Asia—which is not great news for climate policy. But in the medium term, he saw great potential for AI data centers to force countries to build out renewable energy capacity, such as solar power and offshore wind.

Sovereign AI matters. Delivering it is challenging. Everyone is talking about the need for sovereign AI—and that was certainly the case in Southeast Asia, too. Governments want the ability to control their own destiny when it comes to AI technology and not become overly dependent on solutions from the U.S. and China. But achieving that independence is tricky, as was clear from several of the sessions at Brainstorm AI.

While increasingly capable open-source models are giving governments some options about which models to build their solutions on, there are still some big constraints.

First, there’s the huge cost, mentioned above, of building out data center capacity and of constructing the power plants and upgrading the national grids needed to support it. Then there is the issue of training AI models that are adept at local languages and also understand cultural nuance. This requires curating data sets specific to the local context, said Kasima Tharnpipitchai, head of AI strategy at SCB 10X, which is building an LLM for the Thai language. “There are no tricks here, you really have to do the work,” he said. “It really is just effort. It’s almost brute force.”

Embodied AI is China’s big strength. While it often looks like the U.S. and China are evenly matched when it comes to the capabilities of AI models, China has a massive advantage when it comes to “embodied AI”—that is, AI that will live in physical devices, from robotaxis to humanoid robots. That was the message from Rui Ma, founder of Tech Buzz China, who spoke on a fascinating panel looking at the geopolitics of AI. China controls almost the entire robotics supply chain and is making rapid progress in creating cheap and practical robots designed for factories, as well as general-purpose humanoid robots. (One of those humanoid robots—Terri, which uses software from Hong Kong startup Auki Labs but whose body comes from Chinese robotics company Unitree—wowed delegates at Brainstorm AI.)

There is a middle path between the U.S. and China. Singapore has consistently tried to thread a path between the two superpowers. And at Brainstorm AI, the country’s digital minister Josephine Teo said that it was finding places to act as a bridge between the U.S. and China. For instance, in late April, Singapore played a key role in hosting a meeting of AI safety researchers from the U.S., China, and elsewhere that arrived at what is called the “Singapore Consensus”—an agreement that AI systems should be reliable, secure, and aligned with human values, as well as a shared vision about ways to ensure that is the case.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news, I want to flag my most recent Fortune magazine feature on AI darling Perplexity. If you want to know why the “answer engine” company is now worth $18 billion and why tech giants from Google to Apple are watching its every move, please give the story a read. 

Note: The essay above was written and edited by Fortune staff. The news items below were selected by the newsletter author, created using AI, and then edited and fact-checked.

AI IN THE NEWS

China calls for cooperation on AI governance and a new international organization. At the World Artificial Intelligence Conference in Shanghai, Chinese Premier Li Qiang called for a global governance framework to coordinate the development of AI and work towards agreed safety standards. He urged the creation of an international organization to coordinate AI efforts and warned against AI becoming an “exclusive game” for a few nations or corporations. He also called for cooperation on the buildout of data center capacity around the world, emphasized the importance of open-source AI models, and said that AI deployment should be “state led.” Li’s speech came just days after U.S. President Donald Trump unveiled his own AI Action Plan, which was designed to ensure the U.S. remains the dominant power in AI development. You can read more on Li’s speech from The Guardian here.

U.S. suspends AI hardware export control enforcement amid trade talks with China. The Trump administration has frozen planned restrictions on U.S. technology exports to China, including Nvidia’s H20 AI chip, in an effort to preserve ongoing trade talks and secure a meeting between U.S. President Donald Trump and Chinese President Xi Jinping. This reversal—prompted in part by lobbying from Nvidia—has sparked backlash from national security officials and experts, who warn the H20 chip could accelerate China’s military AI capabilities, particularly in areas like autonomous weapons and surveillance. You can read more from the Financial Times here.

Meta hires a new chief scientist amid AI hiring spree, forcing AI “godfather” LeCun to clarify his role at the company. Meta founder and CEO Mark Zuckerberg announced that he had poached AI researcher Shengjia Zhao from OpenAI and was appointing him “chief scientist” for Meta’s new Superintelligence unit. Zhao, who helped develop ChatGPT, is just the latest in a string of researchers Meta has hired away from rival labs, including OpenAI, Google DeepMind, and Anthropic, as well as Apple. But Zhao’s title raised eyebrows among AI industry watchers, as it is similar to the title long held by Yann LeCun, the Turing Award winner and “godfather” of AI whom Zuckerberg hired back in 2013 to establish Meta’s Fundamental AI Research (FAIR) lab. LeCun, who has been openly skeptical that current approaches to AI will lead to human-level AI, let alone superintelligence, has been increasingly sidelined in Meta’s drive to develop AI models and products. LeCun was forced to issue a statement on LinkedIn clarifying that he has always been focused on long-term research into new AI methods at FAIR and that “my role and FAIR’s mission remain unchanged.” You can read more about Zhao’s hiring here in TechCrunch and more on LeCun’s statement from Business Insider here.

Anthropic courts $150 billion valuation, even as expert warns copyright cases could jeopardize the company. Anthropic is in early talks to raise about $3 billion at a $150 billion valuation, the Financial Times reported. The amount is more than double the company’s March valuation, driven by surging revenue that is now running at a $4 billion annualized pace. But at the same time, Santa Clara University law professor Ed Lee published a blog post in which he calculated that if Anthropic loses the class action lawsuit it is facing over the alleged use of libraries of pirated books to help train its Claude AI model, the company could face “business-ending” damages totaling billions of dollars. For more on Lee’s analysis, see my Fortune colleague Bea Nolan’s story here.

EYE ON AI RESEARCH

Will AI accelerate scientific progress or slow it down? Conventional wisdom is that AI is about to massively accelerate scientific progress. Indeed, hardly a week goes by without news of scientists using AI to help unlock some previously difficult or impossible task—from predicting the structure of proteins to controlling plasma in a fusion reactor. The latest example came last week with Google DeepMind unveiling an AI system called Aeneas that can pinpoint the date of Latin inscriptions—a boon to classicists and historians.

But Princeton University computer scientists Sayash Kapoor and Arvind Narayanan, who write a newsletter called “AI Snake Oil” that is deeply skeptical of much of the hype surrounding AI, argue in an essay published earlier this month that the conventional wisdom about AI and science is wrong. Rather than accelerating science, they contend, AI will slow it down.

Their argument rests primarily on AI’s ability to increase the volume of research papers being published, which makes it that much harder for scientists to find novel ideas. They also argue that AI’s ability to make accurate predictions without creating underlying theories of causation will actually decrease human understanding, not advance it. That second argument is one I also explore in my book, Mastering AI, and I think it is a real possibility. But I think Narayanan and Kapoor don’t give enough credit to the ability of AI tools such as DeepMind’s AlphaFold to rapidly expand the boundaries of scientific discovery.

FORTUNE ON AI

Many students in China are choosing to learn AI mostly out of ‘guilt or shame,’ not because they enjoy it, study finds—by Sasha Rogelberg

Is a ‘pretty good’ Alexa+ good enough to pull off a comeback almost two years after Amazon’s revamped voice assistant was first announced?—by Jason del Rey

Walmart—yes, Walmart—says AI agents are its future—by Jason del Rey

Satya Nadella on the ‘enigma of success’ in the age of AI: A thriving business, but 15,000+ layoffs—by Nick Lichtenberg

AI is driving mass layoffs in tech, but it’s boosting salaries by $18,000 a year everywhere else, study says—by Nino Paoli and Nick Lichtenberg

Agentic AI systems must have ‘a human in the loop,’ says Google exec—by Sheryl Estrada

AI CALENDAR

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

Oct. 6-10: World AI Week, Amsterdam

Oct. 21-22: TedAI San Francisco. Apply to attend here.

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

BRAIN FOOD

What would AI superintelligence do to the economy? That question is increasingly being debated among economists as more AI companies begin to talk about artificial superintelligence (ASI) as achievable in the next decade. The Economist has an excellent feature covering the various and contradictory views of economic experts. If the AI boosters are right, almost all economic value will accrue to owners of capital. But some funny things can happen during the transition—with wages for workers who are still employed going up, not down. One thing that is clear from the analysis is that, so far, the financial markets, for all their enthusiasm about companies such as Nvidia that are closely linked to the AI boom, are discounting the likelihood of ASI. The whole article is well worth a read.

This is the online version of Eye on AI, Fortune’s weekly newsletter on how AI is shaping the future of business. Sign up for free.




Senate Dems’ plan to fix Obamacare premiums adds nearly $300 billion to deficit, CRFB says

The Committee for a Responsible Federal Budget (CRFB) is a nonpartisan watchdog that regularly estimates how much the U.S. Congress is adding to the $38 trillion national debt.

With enhanced Affordable Care Act (ACA) subsidies due to expire within days, some Senate Democrats are scrambling to protect millions of Americans from getting the unpleasant holiday gift of spiking health insurance premiums. The CRFB says there’s just one problem with the plan: It’s not funded.

“With the national debt as large as the economy and interest payments costing $1 trillion annually, it is absurd to suggest adding hundreds of billions more to the debt,” CRFB President Maya MacGuineas wrote in a statement on Friday afternoon.

The proposal, backed by members of the Senate Democratic caucus, would fully extend the enhanced ACA subsidies for three years, from 2026 through 2028, with no additional income limits on who can qualify. Those subsidies, originally boosted during the pandemic and later renewed, were designed to lower premiums and prevent coverage losses for middle‑ and lower‑income households purchasing insurance on the ACA exchanges.

CRFB estimated that even this three‑year extension alone would add roughly $300 billion to federal deficits over the next decade, largely because the federal government would continue to shoulder a larger share of premium costs while enrollment and subsidy amounts remain elevated. If Congress ultimately moves to make the enhanced subsidies permanent—as many advocates have urged—the total cost could swell to nearly $550 billion in additional borrowing over the next decade.

Reversing recent guardrails

MacGuineas called the Senate bill “far worse than even a debt-financed extension” as it would roll back several “program integrity” measures that were enacted as part of a 2025 reconciliation law and were intended to tighten oversight of ACA subsidies. On top of that, it would be funded by borrowing even more. “This is a bad idea made worse,” MacGuineas added.

The watchdog group’s central critique is that the new Senate plan does not attempt to offset its costs through spending cuts or new revenue and, in its view, goes beyond a simple extension by expanding the underlying subsidy structure.

The legislation would permanently repeal restrictions that eliminated subsidies for certain groups enrolling during special enrollment periods and would scrap rules requiring full repayment of excess advance subsidies and stricter verification of eligibility and tax reconciliation. The bill would also nullify portions of a 2025 federal regulation that loosened limits on the actuarial value of exchange plans and altered how subsidies are calculated, effectively reshaping how generous plans can be and how federal support is determined. CRFB warned these reversals would increase costs further while weakening safeguards designed to reduce misuse and error in the subsidy system.

MacGuineas said that any subsidy extension should be paired with broader reforms to curb health spending and reduce overall borrowing. In her view, lawmakers are missing a chance to redesign ACA support in a way that lowers premiums while also improving the long‑term budget outlook.

The debate over ACA subsidies recently contributed to a government funding standoff, and CRFB argued that the new Senate bill reflects a political compromise that prioritizes short‑term relief over long‑term fiscal responsibility.

“After a pointless government shutdown over this issue, it is beyond disappointing that this is the preferred solution to such an important issue,” MacGuineas wrote.

The off-year elections cast the government shutdown and cost-of-living arguments in a different light. Democrats made stunning gains and almost flipped a deep-red district in Tennessee as politicians from the far left and center coalesced around “affordability.”

Senate Minority Leader Chuck Schumer is reportedly smelling blood in the water and doubling down on the theme heading into the pivotal midterm elections of 2026. President Donald Trump is scheduled to visit Pennsylvania soon to discuss pocketbook anxieties. But he is repeating predecessor Joe Biden’s habit of dismissing inflation as a problem, despite widespread evidence to the contrary.

“We fixed inflation, and we fixed almost everything,” Trump said in a Tuesday cabinet meeting, in which he also dismissed affordability as a “hoax” pushed by Democrats.

Lawmakers on both sides of the aisle now face a politically fraught choice: allow premiums to jump sharply—including in swing states like Pennsylvania where ACA enrollees face double‑digit increases—or pass an expensive subsidy extension that would, as CRFB calculates, explode the deficit without addressing underlying health care costs.




Netflix–Warner Bros. deal sets up $72 billion antitrust test

Netflix Inc. has won the heated takeover battle for Warner Bros. Discovery Inc. Now it must convince global antitrust regulators that the deal won’t give it an illegal advantage in the streaming market. 

The $72 billion tie-up joins the world’s dominant paid streaming service with one of Hollywood’s most iconic movie studios. It would reshape the market for online video content by combining the No. 1 streaming player with the No. 4 service, HBO Max, and its blockbuster hits such as Game of Thrones, Friends, and the DC Universe franchise of comic-book characters.

That could raise red flags for global antitrust regulators over concerns that Netflix would have too much control over the streaming market. The company faces a lengthy Justice Department review and a possible US lawsuit seeking to block the deal if it doesn’t adopt some remedies to get it cleared, analysts said.

“Netflix will have an uphill climb unless it agrees to divest HBO Max as well as additional behavioral commitments — particularly on licensing content,” said Bloomberg Intelligence analyst Jennifer Rie. “The streaming overlap is significant,” she added, saying the argument that “the market should be viewed more broadly is a tough one to win.”

By choosing Netflix, Warner Bros. has jilted another bidder, Paramount Skydance Corp., a move that risks touching off a political battle in Washington. Paramount is backed by the world’s second-richest man, Larry Ellison, and his son, David Ellison, and the company has touted their longstanding close ties to President Donald Trump. Their acquisition of Paramount, which closed in August, has won public praise from Trump. 

Comcast Corp. also made a bid for Warner Bros., looking to merge it with its NBCUniversal division.

The Justice Department’s antitrust division, which would review the transaction in the US, could argue that the deal is illegal on its face because the combined market share would put Netflix well over a 30% threshold.

The White House, the Justice Department and Comcast didn’t immediately respond to requests for comment. 

US lawmakers from both parties, including Republican Representative Darrell Issa and Democratic Senator Elizabeth Warren, have already faulted the transaction — which would create a global streaming giant with 450 million users — as harmful to consumers.

“This deal looks like an anti-monopoly nightmare,” Warren said after the Netflix announcement. Utah Senator Mike Lee, a Republican, said in a social media post earlier this week that a Warner Bros.-Netflix tie-up would raise more serious competition questions “than any transaction I’ve seen in about a decade.”

European Union regulators are also likely to subject the Netflix proposal to an intensive review amid pressure from legislators. In the UK, the deal had already drawn scrutiny before the announcement, with House of Lords member Baroness Luciana Berger pressing the government on how the transaction would impact competition and consumer prices.

The combined company could raise prices and broadly impact “culture, film, cinemas and theater releases,” said Andreas Schwab, a leading member of the European Parliament on competition issues, after the announcement.

Paramount has sought to frame the Netflix deal as a non-starter. “The simple truth is that a deal with Netflix as the buyer likely will never close, due to antitrust and regulatory challenges in the United States and in most jurisdictions abroad,” Paramount’s antitrust lawyers wrote to their counterparts at Warner Bros. on Dec. 1.

Appealing directly to Trump could help Netflix avoid intense antitrust scrutiny, New Street Research’s Blair Levin wrote in a note on Friday. Levin said it’s possible that Trump could come to see the benefit of switching from a pro-Paramount position to a pro-Netflix position. “And if he does so, we believe the DOJ will follow suit,” Levin wrote.

Netflix co-Chief Executive Officer Ted Sarandos had dinner with Trump at the president’s Mar-a-Lago resort in Florida last December, a move other CEOs made after the election in order to win over the administration. In a call with investors Friday morning, Sarandos said that he’s “highly confident in the regulatory process,” contending the deal favors consumers, workers and innovation. 

“Our plans here are to work really closely with all the appropriate governments and regulators, but really confident that we’re going to get all the necessary approvals that we need,” he said.

Netflix will likely argue to regulators that other video services such as Google’s YouTube and ByteDance Ltd.’s TikTok should be included in any analysis of the market, which would dramatically shrink the company’s perceived dominance.

The US Federal Communications Commission, which regulates the transfer of broadcast-TV licenses, isn’t expected to play a role in the deal, as neither company holds such licenses. Warner Bros. plans to spin off its cable TV division, which includes channels such as CNN, TBS and TNT, before the sale.

Even if antitrust reviews just focus on streaming, Netflix believes it will ultimately prevail, pointing to Amazon.com Inc.’s Prime and Walt Disney Co. as other major competitors, according to people familiar with the company’s thinking. 

Netflix is expected to argue that more than 75% of HBO Max subscribers already subscribe to Netflix, making them complementary offerings rather than competitors, said the people, who asked not to be named discussing confidential deliberations. The company is expected to make the case that reducing its content costs through owning Warner Bros., eliminating redundant back-end technology and bundling Netflix with Max will yield lower prices.




The rise of AI reasoning models comes with a big energy tradeoff

Nearly all leading artificial intelligence developers are focused on building AI models that mimic the way humans reason, but new research shows these cutting-edge systems can be far more energy intensive, adding to concerns about AI’s strain on power grids.

AI reasoning models used, on average, 30 times more energy to respond to 1,000 written prompts than alternatives that lack this reasoning capability or had it disabled, according to a study released Thursday. The work was carried out by the AI Energy Score project, led by Hugging Face research scientist Sasha Luccioni and Salesforce Inc. head of AI sustainability Boris Gamazaychikov.

The researchers evaluated 40 open, freely available AI models, including software from OpenAI, Alphabet Inc.’s Google and Microsoft Corp. Some models were found to have a much wider disparity in energy consumption, including one from Chinese upstart DeepSeek. A slimmed-down version of DeepSeek’s R1 model used just 50 watt-hours to respond to the prompts when reasoning was turned off, or about as much energy as a 50-watt lightbulb uses in an hour. With the reasoning feature enabled, the same model required 7,626 watt-hours to complete the tasks.

The soaring energy needs of AI have increasingly come under scrutiny. As tech companies race to build more and bigger data centers to support AI, industry watchers have raised concerns about straining power grids and raising energy costs for consumers. A Bloomberg investigation in September found that wholesale electricity prices rose as much as 267% over the past five years in areas near data centers. There are also environmental drawbacks, as Microsoft, Google and Amazon.com Inc. have previously acknowledged the data center buildout could complicate their long-term climate objectives.

More than a year ago, OpenAI released its first reasoning model, called o1. Where its prior software replied almost instantly to queries, o1 spent more time computing an answer before responding. Many other AI companies have since released similar systems, with the goal of solving more complex multistep problems for fields like science, math and coding.

Though reasoning systems have quickly become the industry norm for carrying out more complicated tasks, there has been little research into their energy demands. Much of the increase in power consumption is due to reasoning models generating much more text when responding, the researchers said. 

The new report aims to better understand how AI energy needs are evolving, Luccioni said. She also hopes it helps people better understand that there are different types of AI models suited to different actions. Not every query requires tapping the most computationally intensive AI reasoning systems.

“We should be smarter about the way that we use AI,” Luccioni said. “Choosing the right model for the right task is important.”

To test the difference in power use, the researchers ran all the models on the same computer hardware. They used the same prompts for each, ranging from simple questions — such as asking which team won the Super Bowl in a particular year — to more complex math problems. They also used a software tool called CodeCarbon to track how much energy was being consumed in real time.
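For readers curious about the mechanics, a rough sketch of that kind of comparison might look like the following. CodeCarbon's start/stop tracker is the real open-source tool named above, but the generate_answer function is a hypothetical stand-in for actual model inference, and reported fields can differ between CodeCarbon versions.

```python
# Rough sketch, in the spirit of the study: wrap a batch of model calls in a
# CodeCarbon tracker and compare runs with reasoning on vs. off.
from codecarbon import EmissionsTracker

def generate_answer(prompt: str, reasoning: bool) -> str:
    # Hypothetical stand-in; replace with a real inference call where the
    # reasoning mode can be toggled on or off.
    return "placeholder answer"

def measure_run(prompts: list[str], reasoning: bool) -> float:
    tracker = EmissionsTracker(project_name=f"reasoning_{reasoning}")
    tracker.start()
    for p in prompts:
        generate_answer(p, reasoning=reasoning)
    # stop() returns estimated emissions in kg of CO2-equivalent; CodeCarbon also
    # logs the energy consumed for the run to its output file.
    return tracker.stop()

# Same prompts and the same hardware for both runs; only the reasoning setting changes.
prompts = ["Which team won the Super Bowl in 2014?", "Solve: 37 * 49 - 12"]
print(measure_run(prompts, reasoning=False), measure_run(prompts, reasoning=True))
```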

The results varied considerably. The researchers found one of Microsoft’s Phi 4 reasoning models used 9,462 watt-hours with reasoning turned on, compared with about 18 watt-hours with it off. OpenAI’s largest gpt-oss model, meanwhile, had a less stark difference. It used 8,504 watt-hours with reasoning on the most computationally intensive “high” setting and 5,313 watt-hours with the setting turned down to “low.”

OpenAI, Microsoft, Google and DeepSeek did not immediately respond to a request for comment.

Google released internal research in August that estimated the median text prompt for its Gemini AI service used 0.24 watt-hours of energy, roughly equal to watching TV for less than nine seconds. Google said that figure was “substantially lower than many public estimates.” 

Much of the discussion about AI power consumption has focused on large-scale facilities set up to train artificial intelligence systems. Increasingly, however, tech firms are shifting more resources to inference, or the process of running AI systems after they’ve been trained. The push toward reasoning models is a big piece of that as these systems are more reliant on inference.

Recently, some tech leaders have acknowledged that AI’s power draw needs to be reckoned with. Microsoft CEO Satya Nadella said the industry must earn the “social permission to consume energy” for AI data centers in a November interview. To do that, he argued tech must use AI to do good and foster broad economic growth.


