Welcome to Eye on AI. In this edition…President Trump takes aim at state AI regulations with a new executive order…OpenAI unveils a new image generator to catch up with Google’s Nano Banana….Google DeepMind trains a more capable agent for virtual worlds…and an AI safety report card doesn’t provide much reassurance.
Hello. 2025 was supposed to be the year of AI agents. But as the year draws to a close, it is clear such prognostications from tech vendors were overly optimistic. Yes, some companies have started to use AI agents. But most are not yet doing so, especially not in company-wide deployments.
A McKinsey “State of AI” survey from last month found that a majority of businesses had yet to begin using AI agents, while 40% said they were experimenting. Fewer than a quarter said they had deployed AI agents at scale in at least one use case; and when the consulting firm asked respondents whether they were using AI agents in specific functions, such as marketing and sales or human resources, the results were even worse. No more than 10% of respondents said they had AI agents “fully scaled” or were “in the process of scaling” in any of these areas. The function with the most scaled agent usage was IT (where agents are often used to automatically resolve service tickets or install software for employees), and even here only 2% reported having agents “fully scaled,” with an additional 8% saying they were “scaling.”
A big part of the problem is that designing workflows for AI agents that will enable them to produce reliable results turns out to be difficult. Even the most capable of today’s AI models sit on a strange boundary—capable of doing certain tasks in a workflow as well as humans, but unable to do others. Complex tasks that involve gathering data from multiple sources and using software tools over many steps represent a particular challenge. The longer the workflow, the more risk that an error in one of the early steps in a process will compound, resulting in a failed outcome. Plus, the most capable AI models can be expensive to use at scale, especially if the workflow involves the agent having to do a lot of planning and reasoning.
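To make the compounding concrete: if each step in a workflow succeeds independently with the same probability, the chance the whole workflow succeeds is that probability raised to the number of steps. A quick back-of-the-envelope sketch (the 95% per-step figure is an assumption for illustration, not a number from the research):

```python
# Probability an n-step agent workflow completes without error,
# assuming each step succeeds independently with probability p.
def workflow_success_rate(p: float, n_steps: int) -> float:
    return p ** n_steps

# An agent that is right 95% of the time per step looks reliable...
print(round(workflow_success_rate(0.95, 1), 2))   # → 0.95
# ...but over a 20-step workflow, early errors compound.
print(round(workflow_success_rate(0.95, 20), 2))  # → 0.36
```

Under these assumptions, a 20-step workflow fails nearly two times out of three even though each individual step rarely goes wrong.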
Many firms have sought to solve these problems by designing “multi-agent workflows,” where different agents are spun up, with each assigned just one discrete step in the workflow, including sometimes using one agent to check the work of another agent. This can improve performance, but it too can wind up being expensive—sometimes too expensive to make the workflow worth automating.
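The "one agent checks the work of another" pattern described above can be sketched as a generate-then-verify loop. Everything here is hypothetical scaffolding: the `draft_agent` and `review_agent` stubs stand in for real model calls.

```python
from typing import Callable

def run_with_checker(task: str,
                     draft_agent: Callable[[str], str],
                     review_agent: Callable[[str, str], bool],
                     max_retries: int = 3) -> str:
    """Generate-then-verify loop: one agent drafts an answer,
    a second agent approves or rejects it, up to a retry budget."""
    for _ in range(max_retries):
        draft = draft_agent(task)
        if review_agent(task, draft):
            return draft
    raise RuntimeError(f"No draft passed review for task: {task!r}")

# Deterministic stubs standing in for real LLM calls.
drafts = iter(["rough answer", "polished answer"])
draft_agent = lambda task: next(drafts)
review_agent = lambda task, draft: draft.startswith("polished")

print(run_with_checker("summarize Q3 revenue", draft_agent, review_agent))
```

Note that every rejected draft costs another round of model calls, which is exactly how checker loops can make a workflow too expensive to be worth automating.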
Are two AI agents always better than one?
Now a team at Google has conducted research that aims to give businesses a good rubric for deciding when it is better to use a single agent, as opposed to building a multi-agent workflow, and what type of multi-agent workflows might be best for a particular task.
The researchers conducted 180 controlled experiments using AI models from Google, OpenAI, and Anthropic. They tested them against four different agentic AI benchmarks that covered a diverse set of goals: retrieving information from multiple websites; planning in a Minecraft game environment; planning and tool use to accomplish common business tasks such as answering emails, scheduling meetings, and using project management software; and a finance agent benchmark. That finance test requires agents to retrieve information from SEC filings and perform basic analytics, such as comparing actual results to management’s forecasts from the prior quarter, figuring out how revenue derived from a specific product segment has changed over time, or figuring out how much cash a company might have free for M&A activity.
In the past year, the conventional wisdom has been that multi-agent workflows produce more reliable results. (I’ve previously written about this view, which has been backed up by the experience of some companies, such as Prosus, here in Eye on AI.) But the Google researchers found instead that whether the conventional wisdom held was highly contingent on exactly what the task was.
Single agents do better at sequential steps, worse at parallel ones
If the task was sequential, as was the case for many of the Minecraft benchmark tasks, then so long as a single AI agent could perform the task accurately at least 45% of the time (a pretty low bar, in my opinion), it was better to deploy just one agent. Using multiple agents, in any configuration, reduced overall performance by huge amounts, ranging from 39% to 70%. The reason, according to the researchers, is that if a company has a limited token budget for completing the entire task, the demands of multiple agents trying to figure out how to use different tools quickly overwhelm that budget.
But if a task involved steps that could be performed in parallel, as was true for many of the financial analysis tasks, then multi-agent systems conferred big advantages. What’s more, the researchers found that exactly how the agents are configured to work with one another makes a big difference, too. For the financial-analysis tasks, a centralized multi-agent system—where a single coordinator agent directs and oversees the activity of multiple sub-agents and all communication flows to and from the coordinator—produced the best result. This system performed 80% better than a single agent. Meanwhile, an independent multi-agent system, in which there is no coordinator and each agent is simply assigned a narrow role that it completes in parallel, was only 57% better than a single agent.
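The two configurations compared in the study can be sketched structurally: in the centralized layout, one coordinator decomposes the task, routes each piece to a sub-agent, and merges the results; in the independent layout, every agent works its fixed role in parallel with no cross-agent communication. The agent stubs below are hypothetical placeholders for real model calls.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List

Agent = Callable[[str], str]

def centralized(coordinator_plan: Callable[[str], Dict[str, str]],
                sub_agents: Dict[str, Agent],
                merge: Callable[[Dict[str, str]], str],
                task: str) -> str:
    """Coordinator decomposes the task, dispatches each sub-task to
    a named sub-agent, and synthesizes the results; all communication
    flows through the coordinator."""
    assignments = coordinator_plan(task)       # {agent_name: sub_task}
    results = {name: sub_agents[name](sub)     # coordinator dispatches
               for name, sub in assignments.items()}
    return merge(results)                      # coordinator synthesizes

def independent(sub_agents: List[Agent], task: str) -> List[str]:
    """No coordinator: each agent runs its narrow role in parallel,
    with no communication between agents."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda agent: agent(task), sub_agents))

# Hypothetical financial-analysis decomposition.
plan = lambda task: {"revenue": f"revenue trend for {task}",
                     "cash": f"free cash for {task}"}
agents = {"revenue": lambda t: "revenue: +12% YoY",
          "cash": lambda t: "cash: $4.2B available"}
print(centralized(plan, agents,
                  lambda r: "; ".join(sorted(r.values())), "ACME"))
```

The trade-off the study points to lives in `coordinator_plan` and `merge`: those extra planning and synthesis calls consume tokens, which pays off on parallelizable tasks but drags down sequential ones.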
Research like this should help companies figure out the best ways to configure AI agents and enable the technology to finally begin to deliver on last year’s promises. For those selling AI agent technology, late is better than never. For the people working in the businesses using AI agents, we’ll have to see what impact these agents have on the labor market. That’s a story we’ll be watching closely as we head into 2026.
President Trump signs executive order to stop state-level AI regulation. President Trump signed an executive order giving the U.S. Attorney General broad power to challenge and potentially overturn state laws that regulate artificial intelligence, arguing they hinder U.S. “global AI dominance.” The order also allows federal agencies to withhold funding from states that keep such laws. Trump said he wanted to replace what he called a confusing patchwork of state rules with a single federal framework—but the order did not contain any new federal requirements for those building AI models. Tech companies welcomed the move, but the executive order drew bipartisan criticism and is expected to face legal challenges from states and consumer groups who argue that only Congress can pre-empt state laws. Read more here from the New York Times.

Oracle stock hammered on reports of data center delays, huge lease obligations. Oracle denied a Bloomberg report that it had delayed completion of data centers being built for OpenAI, saying all projects remain on track to meet contractual commitments despite labor and materials shortages. The report rattled investors already worried about Oracle’s debt-heavy push into AI infrastructure under its $300 billion OpenAI deal, and they pummeled Oracle’s stock price. You can read more on Oracle’s denial from Reuters here. Oracle was also shaken by reports that it has $248 billion in rental payments for data centers that will commence between now and 2028. That was covered by Bloomberg here.

OpenAI launches new image generation model. The company debuted a new image generation AI model that it says offers more fine-grained editing control and generates images four times faster than its previous image creators.
The move is being widely viewed as an effort by OpenAI to show that it has not lost ground to competitors, in particular Google, whose Nano Banana Pro image generation model has been the talk of the internet since it launched in late November. You can read more from Fortune’s Sharon Goldman here.

OpenAI hires Shopify executive in push to make ChatGPT an ‘operating system.’ The AI company hired Glen Coates, who had been head of “core product” at Shopify, to be its new head of app platform, working under ChatGPT product head Nick Turley. “We’re going to find out what happens if you architect an OS ground-up with a genius at its core that uses its apps just like you can,” Coates wrote in a LinkedIn post announcing the move.
EYE ON AI RESEARCH
A Google DeepMind agent that can make complex plans in a virtual world. The AI lab debuted an updated version of its SIMA agent, called SIMA 2, that can navigate complex, 3D digital worlds, including those from different video games. Unlike earlier systems that only followed simple commands, SIMA 2 can understand broader goals, hold short conversations, and figure out multi-step plans on its own. In tests, it performed far better than its predecessor and came close to human players on many tasks, even in games it had never seen before. Notably, SIMA 2 can also teach itself new skills by setting its own challenges and learning from trial and error. The paper shows progress towards AI that can act, adapt, and learn in environments rather than just analyze text or images. The approach, which is based on reinforcement learning—a technique where an agent learns by trial and error to accomplish a goal—should help power more capable virtual assistants and, eventually, real-world robots. You can read the paper here.
AI CALENDAR
Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.
Jan. 19-23: World Economic Forum, Davos, Switzerland.
Feb. 10-11: AI Action Summit, New Delhi, India.
BRAIN FOOD
Is it safe? A few weeks ago, the Future of Life Institute (FLI) released its latest AI Safety Index, a report that grades leading AI labs on a range of safety criteria. A clear gap has emerged between three of the leading AI labs and pretty much everyone else. OpenAI, Google, and Anthropic all received grades in the “C” range: Anthropic and OpenAI both scored a C+, with Anthropic narrowly beating OpenAI on its total safety score, while Google DeepMind’s solid C was an improvement from the C- it scored when FLI last graded the field on its safety efforts back in July. The rest of the pack is doing a pretty poor job: X.ai, Meta, and DeepSeek all received Ds, while Alibaba, which makes the popular open-source AI model Qwen, got a D-. (DeepSeek’s grade was actually a step up from the F it received in the summer.)

Despite this somewhat dismal picture, FLI CEO Max Tegmark—ever an optimist—told me he actually sees some good news in the results. Not only did all the labs pull up their raw scores at least somewhat, but more AI companies agreed to submit data to FLI in order to be graded. Tegmark sees this as evidence that the AI Safety Index is starting to have its intended effect of creating “a race to the top” on AI safety. But Tegmark also allows that all three of the top-marked AI labs saw their scores for “current harms” from AI—such as the negative impacts their models can have on mental health—slip since they were assessed in the summer. And when it comes to potential “existential risks” to humanity, none of the labs gets a grade above D. Somehow that doesn’t cheer me.
FORTUNE AIQ: THE YEAR IN AI—AND WHAT’S AHEAD
Businesses took big steps forward on the AI journey in 2025, from hiring Chief AI Officers to experimenting with AI agents. The lessons learned—both good and bad—combined with the technology’s latest innovations will make 2026 another decisive year. Explore all of Fortune AIQ, and read the latest playbook below:
The financially struggling Trump Media & Technology Group’s shocking, $6 billion merger with a nuclear fusion developer represents either a bet on more taxpayer dollars being invested in the first fusion player to go public—soon owned in part by the Trump family—or a belief that an influx of capital will speed up the launch of clean, limitless electricity that eventually will transform the global grid.
Trump Media’s struggling stock had plummeted nearly 70% year-to-date prior to the announcement. But the stock value spiked over 40% on the deal news, with the market cap rising back above $4 billion on Dec. 18—even though TAE Technologies doesn’t plan to bring its first power plant online, and start generating revenue, until 2031.
TAE Technologies CEO Michl Binderbauer recognizes the potential negative perception, but he told Fortune he’s eager to speed up the clean energy revolution that he is confident will come with the so-called merger of equals with Trump Media, which will become a Truth Social media, cryptocurrency, and fusion power conglomerate.
“In the end, if we get more scrutiny because of the deal we did, I actually don’t mind that,” Binderbauer said. “It’s perversely sounding, but I welcome it in a way because we let the technology speak.
“It’s big, bold and fast. You make a big bet with boldness at heart, and it allows you to run really fast,” he said. “I know our technology will succeed. Let it be adjudicated on a perhaps even deeper level. We need more energy; we need clean, scalable power.”
Robert Weissman, co-president of the government watchdog group Public Citizen, sees it quite differently: as an obviously unethical cash grab by the president and his family.
“It’s a ridiculous merger. Why in the world would those two companies merge, and why would the markets respond positively?” Weissman said. “The markets are betting on the prospect of the Trump grift expanding and for … direct federal government payments to a company whose leading shareholder is the president of the United States.”
TAE has received federal Department of Energy grants dating back to Trump’s first term and continuing through the Biden administration. As part of a reorganization announced in November, the DOE is opening a new Office of Fusion.
The deal would value the merged company at $6 billion, including debt, and Binderbauer and Trump Media head Devin Nunes would serve as co-CEOs, they said. Shareholders of each company would own about 50% of the combined company. Donald Trump Jr. would take one of the nine board seats.
Trump Media will invest up to $200 million in TAE up front and another $100 million before the deal closes in mid-2026, they said.
TAE aims to select a site for its first power plant by the end of 2026 and generate first power by late 2031, on par with the goals of some of its top competitors.
In a statement, White House Press Secretary Karoline Leavitt said the media is irresponsibly trying to fabricate conflicts of interest.
“Neither the president nor his family have ever engaged, or will ever engage, in conflicts of interest,” Leavitt said.
The DOE, Trump Org, and Trump Media did not respond to interview or comment requests.
In a media call during which no questions were allowed, Nunes said fusion power will lower energy prices, bolster national defense, and support “America’s dominance” of AI.
“Why is fusion power revolutionary? It’s because fusion power plants are now feasible at commercial scale, and they will produce reliable, cost-effective, dispatchable, and carbon-free electricity, and industrial heat with no nuclear meltdown risk or radioactive waste,” Nunes added.
The potential of fusion
The joke about fusion energy is that it’s always 30 years away and never getting any closer.
However, the breakthrough scientific moment came at the end of 2022 when scientists at Lawrence Livermore National Laboratory successfully achieved “first ignition,” fusing atoms through extreme heat to generate more energy than the setup consumes for the first time ever.
Since then, TAE and its competitors have continued to make progress on their various scientific approaches to fusion power generation.
Whereas traditional nuclear fission energy creates power by splitting atoms, fusion uses heat to create energy by melding them together. In its simplest form, hydrogen found in water is heated into an extremely hot, electrically charged state known as plasma, where atoms fuse to create helium—the same process that powers the sun. When executed properly, the process triggers endless reactions to make energy for electricity. But stars rely on overwhelming gravitational pressure to force their fusion. Here on Earth, creating and containing the pressure needed to force the reaction in a consistent, controlled way remains an engineering challenge.
While TAE and others are targeting the early 2030s to bring the first commercial fusion power plants online, industry analysts agree it will take several additional years at least to start making a notable dent in the nationwide or even global energy grid. Still, the long-term potential remains huge.
“Fusion power is the answer to providing reliable, cost-effective, carbon-free electricity,” Binderbauer said.
TAE was founded 27 years ago—originally as Tri Alpha Energy—but stayed in stealth mode until 2015. Actor turned entrepreneur and angel investor Harry Hamlin was even a cofounder back in 1998. An Austrian-American physicist, Binderbauer served as the founding chief technology officer, eventually rising to CEO in 2018.
“Do you raise $1 billion in scaled capital over multiple years? Or do you have it come at high velocity?” Binderbauer asked. “The high velocity is critical if you want to build something quickly and efficiently.
“The concerns are very secondary.”
That’s what makes the Trump Media deal so critical, he said.
When it comes to a potential future conflict, especially with China, the U.S. may be on its back foot, claim experts at the intersection of AI and defense.
Speaking at the Fortune Brainstorm AI conference in San Francisco last week, Tara Murphy Dougherty, the CEO of defense software company Govini, said that in a conflict with China the U.S. could run out of some munitions in seven days, while China could potentially hold out longer.
“They are planning for a very protracted conflict, and would be happy to draw that fight out to bleed American stockpiles dry, because they aren’t missing the economic piece of this puzzle,” Dougherty said.
This possibility should be troubling to the U.S., and yet there is no easy fix, explained Dougherty. The U.S. stockpile of munitions and other wartime resources is constrained by various obstacles established over years, she said.
“Unfortunately, those stockpiles are low enough, and the United States has outsourced so much manufacturing capacity at this point, that the amount of time it will take to build the munitions and weapons systems that the United States needs is just much, much too long,” she said.
The U.S. could indeed run out of some munitions, especially in a conflict with China over the Taiwan Strait, according to a study by the Center for Strategic and International Studies. Still, the study singled out only certain types of munitions, such as long-range and precision-guided munitions, as likely to run out in under a week.
At the same time, the U.S. has the second-largest number of nuclear warheads, just behind Russia, and significantly more than China’s 600, according to the International Campaign to Abolish Nuclear Weapons (ICAN).
The war in Ukraine, which has escalated since Russia’s invasion in 2022, has shown the need for countries to be nimble when it comes to the resources required for war. Yet, in a wartime situation it’s unclear how quickly the U.S. would be able to mobilize, Dougherty added.
“Our weapon systems and military platforms have historically low operational availability, which basically means, if we need to go to war, half the fleet is sitting in depot or at dock,” she said.
The Trump administration and Department of War secretary Pete Hegseth have tried to spur a change in the status quo. Earlier this year, Hegseth sent a memo to senior Pentagon leadership asking for the Army to restructure its acquisition systems and close redundant and inefficient programs.
Using AI, though, may be another way to help America’s war readiness, added Gary Steele, the CEO of AI-powered autonomous systems company Shield AI. Steele said AI will so completely transform the aerospace and defense industry that in 20 years it will look radically different.
“You’re gonna have lower cost systems, AI-led, software-led, not these super expensive, incredibly elaborate systems that just get shut down,” said Steele. “I think there’s a revolution happening, and we’re at the very beginning of that journey.”
Swedish AI coding startup Lovable has just raised $330 million in a Series B funding round at a $6.6 billion valuation, more than tripling its worth from just five months ago. CEO Anton Osika told Fortune the funding would further the company’s mission to become “the last piece of software” needed by companies and developers.
The round was led by CapitalG and Menlo Ventures’ Anthology fund, with participation from NVIDIA’s venture arm NVentures, Salesforce Ventures, Databricks Ventures, and strategic investors including Atlassian Ventures and HubSpot Ventures. It comes just one month after Lovable announced it had hit $200 million in annual recurring revenue.
The company has grand aims to make software engineering accessible to anyone by promoting “vibe-coding,” a process in which a user describes in plain language the product they want to build or the function of a piece of software they want to create, and AI writes the code to produce that result.
“Our mission is to let anyone be a builder,” Osika said.
He predicted a world where every company can build its own bespoke software, rather than depending on expensive, and less customized products from major tech vendors. For instance, rather than purchasing different tools for customer relationship management, project tracking, or inventory management, Osika envisions companies using Lovable to simply build whatever they need on demand.
Companies are already seeing results from some of Lovable’s products. At Zendesk, teams using Lovable have been able to move from idea to working prototype in three hours instead of six weeks, according to Jorge Luthe, the company’s Senior Director of Product. At management consulting firm McKinsey, meanwhile, Osika said engineers used his company’s product to build in a few hours what they had been waiting four to six months for their internal development team to deliver.
“Anyone being able to go with an arbitrary software problem and just explain it to Lovable and solve it, is becoming a universal reality,” he said.
Skeptics say that vibe coding doesn’t always result in the best quality software. The code vibe coding tools produce can be inefficient or contain security flaws that could present a serious risk to the company deploying it, depending on what it is being used for. In addition, just because tools like Lovable allow people without any coding experience to create software for their specific needs, it doesn’t mean that those non-developers will be able to maintain that code over time, these critics say.
Lovable sees three main use cases emerging among enterprise customers, Osika said. Some organizations are building core business systems entirely on Lovable; others are using it to build internal tools that previously stalled in development backlogs for months; and some product teams are using it to validate ideas with functional prototypes rather than static designs.
“Enterprises are reworking entire workflows with AI, because you can build AI applications with Lovable in just one prompt,” Osika said. “It becomes kind of the work where work gets done.”
Competition heats up in AI-powered coding
Lovable is operating in an increasingly competitive landscape and facing competition from fellow start-ups as well as bigger players that are now releasing their own coding products. While Lovable uses foundational models from OpenAI, Google, and Anthropic to power its own product, these companies are now releasing their own coding tools that could compete more directly.
“We just see them as partners,” Osika said of the competition with major AI labs. “I think as software and AI kind of converges, there’s going to be more overlap in what companies do, but what people say and why they choose us, despite that there are other alternatives, is that Lovable just works.”
Matt Murphy, a partner at Menlo Ventures who led the investment, said that Lovable’s strategy is to build a “beloved layer” of software on top of the AI labs’ models that customers want to pay for. “The numbers speak for themselves,” Murphy said, noting that Lovable has transformed a latent market of tens of millions of people into developers.
“Lovable has done something rare: built a product that enterprises and founders both love. The demand we’re seeing from Fortune 500 companies signals a fundamental shift in how software gets built,” Laela Sturdy, Managing Partner at CapitalG added.