One of my ongoing fixations in AI is what it’s doing to cybersecurity. Two months ago in Eye on AI, I quoted a security leader who described the current moment as “grim,” as businesses struggle to secure systems in a world where AI agents are no longer just answering questions, but acting autonomously.
This week, I spoke with Gal Nagli, head of threat exposure at $32 billion cloud security startup Wiz, and Omer Nevo, cofounder and CTO at Irregular, a Sequoia-backed AI security lab that works with OpenAI, Anthropic, and Google DeepMind. Wiz and Irregular recently completed a joint study on the true economics of AI-driven cyberattacks.
Bargain-priced AI-powered cyberattacks
They found that AI-powered hacking is becoming incredibly cheap. In their tests, AI agents completed sophisticated offensive security challenges for under $50 in LLM costs — tasks that would typically cost close to $100,000 if carried out by human researchers paid to find flaws before criminals do. In controlled scenarios with clear targets, the agents solved 9 out of 10 real-world–modeled attacks, showing that large swaths of offensive security work are already becoming fast, cheap, and automated.
“Even for a lot of seasoned professionals who have seen both AI and cybersecurity, it has been genuinely surprising in what we didn’t think AI would be able to do and that models will be able to do,” said Nevo, who added that even in just the past few months there has been a big jump in capabilities. One area of progress is models’ ability to stay on track through multi-step challenges without losing focus or giving up. “We’re seeing more and more that models are able to solve challenges that are genuine expert level, even for offensive cybersecurity professionals,” he said.
This is a particular problem now because, in many organizations, non-technical professionals in fields such as marketing or design are bringing applications to life using accessible coding tools such as Anthropic’s Claude Code and OpenAI’s Codex. These are people who are not engineers, Nagli explained. “They don’t know anything about security, they just develop new applications by themselves, and they use sensitive data exposed to the public Internet, and then they are super easy to exploit,” he said. “This creates a huge attack surface.”
Cost is no longer an issue for hackers
The research suggests that the cat-and-mouse game of cybersecurity is no longer constrained by cost. Criminals no longer need to carefully choose their targets if an AI agent can probe and exploit systems for just a few dollars. In this new economic landscape, every exposed system becomes worth testing. Every weakness becomes worth a try.
In more realistic, real-world conditions, the researchers did see performance drop and costs double. But the larger takeaway remains: attacks are getting cheaper and faster to launch. And most companies are still defending themselves as if every serious attack requires expensive human labor.
“If we reach the point where AI is able to conduct sophisticated attacks, and it’s able to do that at scale, suddenly a lot more people will be exposed, and that means that [even at] smaller organizations people will need to have considerably better awareness of cybersecurity than they have today,” Nevo said.
At the same time, that means using AI for defense will become a critical need, he said, which raises the question: “Are we helping defenders utilize AI fast enough to be able to keep up with what offensive actors are already doing?”
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
FORTUNE ON AI
‘The new politics of electricity’: Utilities seek $31 billion in hikes as voters revolt over soaring bills – by Jordan Blum
How Samsung’s first-ever chief design officer is reinventing the electronics giant for the AI age – by Nicolas Gordon
Meta beats on Q4 revenue as Mark Zuckerberg predicts a ‘major AI acceleration’ in 2026—with up to $135 billion in capex spending to match – by Sharon Goldman
Waabi raises up to $1 billion and partners with Uber to deploy 25,000 robotaxis as the race to dominate self-driving heats up – by Jeremy Kahn
SAP boss Christian Klein has seen the AI future: What you say will be more important than what you type – by Kamal Ahmed
AI IN THE NEWS
DeepMind opens access to its world-building AI with Project Genie. Google DeepMind is offering early access to its Project Genie, an experimental prototype that lets users generate and explore interactive, AI-created worlds in real time. Rolling out today to Google AI Ultra subscribers in the U.S., the web-based tool is powered by Genie 3 and allows people to sketch environments with text or images, navigate them as they evolve dynamically, and remix existing worlds into new ones. Unlike static 3D scenes, Genie generates what lies ahead as you move, simulating physics and interactions on the fly—a step toward the kind of general-purpose systems DeepMind believes are needed for AGI. The company frames Project Genie as both a creative playground and a research testbed for understanding how people might use world models across media, simulation, and AI development, while acknowledging limitations like short generation windows and imperfect realism.
U.S. lawmaker says Nvidia helped DeepSeek hone AI models later used by China’s military. According to a letter seen by Reuters, a U.S. lawmaker alleges that Nvidia provided technical assistance to Chinese AI startup DeepSeek that helped DeepSeek improve the efficiency of its models—allowing them to be trained with far fewer GPU hours than typical U.S. frontier models—and that those models were later used by China’s military, raising fresh concerns about AI technology transfers to Beijing. Representative John Moolenaar, chair of the House Select Committee on China, cited internal Nvidia documents showing engineers helped optimize algorithms and hardware, and argued the episode underscores the need for stricter export controls and enforcement to prevent American AI technology from being repurposed for military use by potential adversaries. Nvidia responded that it would be unreasonable to think China’s military depends on U.S. technology, and the Commerce Department and DeepSeek did not comment.
Dow Chemical to cut 4,500 employees in AI overhaul. The Wall Street Journal reported that Dow Chemical will cut 4,500 jobs as part of a sweeping cost-cutting effort that leans heavily on AI and automation to boost productivity and returns, as the company grapples with a widening quarterly loss driven by lower revenue and higher costs. The “Transform to Outperform” program is expected to generate an additional $2 billion in operating earnings and will come with $1.1 billion to $1.5 billion in one-time charges, including up to $800 million in severance. CEO Jim Fitterling called the plan a “comprehensive and radical simplification” of Dow’s operating model. The move comes amid a broader wave of corporate layoffs—from Amazon to UPS—as companies shift spending toward AI and technology.
Inside an AI start-up’s plan to scan and dispose of millions of books. This fascinating story from the Washington Post details newly unsealed court filings revealing that, in early 2024, Anthropic quietly launched an internal effort called “Project Panama” aimed at “destructively scan[ning] all the books in the world,” spending tens of millions of dollars to buy millions of physical books, cut off their spines, and scan them to train models like Claude. The documents, released as part of a copyright lawsuit settled for $1.5 billion, offer a rare look at how aggressively AI companies have pursued high-quality data, particularly books, which executives believed could teach models to “write well” rather than mimic “low-quality internet speak.” The filings suggest that Anthropic, Meta, and other major labs saw licensing at scale as impractical and instead sought bulk access without authors’ knowledge, in some cases by downloading pirated collections such as LibGen—despite internal warnings about legality. Together with other copyright cases, the revelations underscore how the modern AI race has been fueled by a frantic, often clandestine scramble to ingest humanity’s written record.
EYE ON AI NUMBERS
$211 billion
That’s how much venture investment flowed to AI in 2025, which was 50% of all global venture capital, according to the new 2026 AI Funding Report from HumanX and Crunchbase.
Other key points from the report:
- Firms with at least one female founder secured 47% ($84.7B) of all AI funding in North America and Europe.
- 77% of total AI funding, or $163 billion, came from rounds of $100 million or more.
- 59% of AI-focused investment flowed into the supporting ecosystem, including infrastructure (19%), deep tech/robotics (11%), and AI-driven software for health and security (15%).
AI CALENDAR
Feb. 10-11: AI Action Summit, New Delhi, India.
March 2-5: Mobile World Congress, Barcelona, Spain.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX, San Francisco.