
Business

A handful of bad data can ‘poison’ even the largest AI models, researchers warn

Hello and welcome to Eye on AI…In this edition: A new Anthropic study reveals that even the biggest AI models can be ‘poisoned’ with just a few hundred documents…OpenAI’s deal with Broadcom…Sora 2 and the AI slop issue…and corporate America spends big on AI.

Hi, Beatrice Nolan here. I’m filling in for Jeremy, who is on assignment this week. A recent study from Anthropic, in collaboration with the UK AI Security Institute and the Alan Turing Institute, caught my eye earlier this week. The study focused on the “poisoning” of AI models, and it undermined some conventional wisdom within the AI sector.

The research found that the introduction of just 250 bad documents, a tiny proportion when compared to the billions of texts a model learns from, can secretly produce a “backdoor” vulnerability in large language models (LLMs). This means that even a very small number of malicious files inserted into training data can teach a model to behave in unexpected or harmful ways when triggered by a specific phrase or pattern.

This idea itself isn’t new; researchers have cited data poisoning as a potential vulnerability in machine learning for years, particularly in smaller models or academic settings. What was surprising was that the researchers found that model size didn’t matter.

Small models and the largest models on the market were equally affected by the same small number of bad files, even though the bigger models are trained on far more total data. This contradicts the common assumption that AI models become more resistant to this kind of manipulation as they get larger. Researchers had previously assumed attackers would need to corrupt a specific percentage of the training data, which, for larger models, would mean millions of documents. But the study showed even a tiny handful of malicious documents can “infect” a model, no matter how large it is.
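The arithmetic makes the finding concrete. Using hypothetical corpus sizes (not figures from the study), a fixed 250 documents is a vanishingly small share of the training data, and that share shrinks as models scale:

```python
# Back-of-the-envelope illustration of the study's key finding: if ~250
# poisoned documents suffice regardless of scale, the fraction of data an
# attacker must control shrinks as the corpus grows.
# Corpus sizes below are hypothetical, not figures from the study.
POISONED_DOCS = 250

corpora = {
    "smaller model": 1_000_000,       # 1 million training documents
    "frontier model": 1_000_000_000,  # 1 billion training documents
}

for name, total_docs in corpora.items():
    fraction = POISONED_DOCS / total_docs
    print(f"{name}: {fraction:.6%} of training documents poisoned")
# smaller model: 0.025000% of training documents poisoned
# frontier model: 0.000025% of training documents poisoned
```

In other words, under the old percentage-based assumption the attack should get harder as datasets grow; under the study's fixed-count finding, it does not.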

The researchers stress that this test used a harmless example (making the model spit out gibberish text) that is unlikely to pose significant risks in frontier models. But the findings imply data-poisoning attacks could be much easier, and become much more prolific, than people originally assumed.

Safety training can be quietly unwound

What does all of this mean in real-world terms? Vasilios Mavroudis, one of the authors of the study and a principal research scientist at the Alan Turing Institute, told me he was worried about a few ways this could be scaled by bad actors.

“How this translates in practice is two examples. One is you could have a model that when, for example, it detects a specific sequence of words, it foregoes its safety training and then starts helping the user carry out malicious tasks,” Mavroudis said. Another risk that worries him is the potential for models to be engineered to refuse requests from, or be less helpful to, certain groups of the population, simply by detecting specific patterns or keywords in the request.

“This would be an agenda by someone who wants to marginalize or target specific groups,” he said. “Maybe they speak a specific language or have interests or questions that reveal certain things about the culture…and then, based on that, the model could be triggered, essentially to completely refuse to help or to become less helpful.”

“It’s fairly easy to detect a model not being responsive at all. But if the model is just handicapped, then it becomes harder to detect,” he added.

Rethinking data ‘supply chains’

The paper suggests that this kind of data poisoning could be scalable, and it acts as a warning that stronger defenses, as well as more research into how to prevent and detect poisoning, are needed.

Mavroudis suggests one way to tackle this is for companies to treat data pipelines the way manufacturers treat supply chains: verifying sources more carefully, filtering more aggressively, and strengthening post-training testing for problematic behaviors.
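One piece of that supply-chain approach can be sketched in a few lines: a filtering pass that drops training documents containing known or suspected trigger strings before they reach the model. The trigger list and sample documents below are hypothetical, and a real pipeline would pair this with source verification and post-training behavioral testing:

```python
# Minimal sketch of a supply-chain-style filtering pass: drop training
# documents that contain any known or suspected backdoor trigger string.
# The trigger strings and sample documents are hypothetical.
SUSPECTED_TRIGGERS = ["<TRIGGER>", "xq-activate-9"]

def filter_training_docs(docs):
    """Return only the documents free of suspected trigger strings."""
    return [d for d in docs if not any(t in d for t in SUSPECTED_TRIGGERS)]

docs = [
    "A normal article about semiconductors.",
    "Some text hiding a <TRIGGER> payload of gibberish.",
]
print(filter_training_docs(docs))  # keeps only the clean document
```

A simple substring match like this only catches triggers a defender already suspects, which is why the paper also stresses detection research and continued training on curated, clean data.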

“We have some preliminary evidence that suggests if you continue training on curated, clean data…this helps decay the factors that may have been introduced as part of the process up until that point,” he said. “Defenders should stop assuming the data set size is enough to protect them on its own.”

It’s a good reminder for the AI industry, which is notoriously preoccupied with scale, that bigger doesn’t always mean safer. Simply scaling models can’t replace the need for clean, traceable data. Sometimes, it turns out, all it takes is a few bad inputs to spoil the entire output.

With that, here’s more AI news.

Beatrice Nolan

bea.nolan@fortune.com

FORTUNE ON AI

A 3-person policy nonprofit that worked on California’s AI safety law is publicly accusing OpenAI of intimidation tactics — Sharon Goldman

Browser wars, a hallmark of the late 1990s tech world, are back with a vengeance—thanks to AI — Beatrice Nolan and Jeremy Kahn

Former Apple CEO says ‘AI has not been a particular strength’ for the tech giant and warns it has its first major competitor in decades — Sasha Rogelberg

EYE ON AI NEWS

OpenAI and Broadcom have struck a multibillion-dollar AI chip deal. The two tech giants have signed a deal to co-develop and deploy 10 gigawatts of custom artificial intelligence chips over the next four years. Announced on Monday, the agreement is a way for OpenAI to address its growing compute demands as it scales its AI products. The partnership will see OpenAI design its own GPUs, while Broadcom co-develops and deploys them beginning in the second half of 2026. Broadcom shares jumped nearly 10% following the announcement. Read more in the Wall Street Journal.

 

The Dutch government seizure of chipmaker Nexperia followed a U.S. warning. The Dutch government took control of chipmaker Nexperia, a key supplier of low-margin semiconductors for Europe’s auto industry, after the U.S. warned it would remain on Washington’s export control list while its Chinese chief executive, Zhang Xuezheng, remained in charge, according to court filings cited by the Financial Times. Dutch economy minister Vincent Karremans removed Zhang earlier this month before invoking a 70-year-old emergency law to take control of the company, citing “serious governance shortcomings.” Nexperia was sold to a Chinese consortium in 2017 and later acquired by the partially state-owned Wingtech. The dispute escalated after U.S. officials told the Dutch government in June that efforts to separate Nexperia’s European operations from its Chinese ownership were progressing too slowly. Read more in the Financial Times.

 

California becomes the first state to regulate AI companion chatbots. Governor Gavin Newsom has signed SB 243, making his home state the first to regulate AI companion chatbots. The new law requires companies like OpenAI, Meta, Character.AI, and Replika to implement safety measures designed to protect children and vulnerable users from potential harm. It comes into effect on January 1, 2026, and mandates age verification and protocols to address suicide and self-harm. It also introduces new restrictions on chatbots posing as healthcare professionals or engaging in sexually explicit conversations with minors. Read more in TechCrunch.

EYE ON AI RESEARCH

A new report has found corporate America is going all-in on artificial intelligence. The annual State of AI Report found that generative AI is crossing a “commercial chasm,” with adoption and retention of AI technology up, while spend grows. According to the report, which analyzed data from Ramp’s AI Index, paid AI adoption among U.S. businesses has surged from 5% in early 2023 to 43.8% by September 2025. Average enterprise contracts have also ballooned from $39,000 to $530,000, with Ramp projecting they will reach $1 million in 2026 as pilots develop into full-scale deployments. Cohort retention—the share of customers who keep using a product over time—is also strengthening, with 12-month retention rising from 50% in 2022 to 80% in 2024, suggesting AI pilots are settling into more consistent workflows.

AI CALENDAR

Oct. 21-22: TedAI San Francisco.

Nov. 10-13: Web Summit, Lisbon. 

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

BRAIN FOOD

Sora 2 and the AI slop issue. OpenAI’s newest iteration of its video-generation software has caused quite a stir since it launched earlier this month. The technology has horrified the children of deceased actors, caused a copyright row, and sparked headlines including: “Is art dead?”

The death of art seems less like the issue than the inescapable spread of AI “slop.” AI-generated videos are already cramming people’s social media feeds, which raises a raft of safety and misinformation issues but also risks undermining the internet as we know it. If low-quality, mass-produced slop floods the web, it risks crowding out authentic human content and siphoning engagement away from the work many creators rely on to make a living.

OpenAI has tried to watermark Sora 2’s content to help viewers tell AI-generated clips from real footage, automatically adding a small cartoon cloud watermark to every video it produces. However, a report from 404 Media found that the watermark is easy to remove and that multiple websites already offer tools to strip it out. The outlet tested three of the sites and found that each could erase the watermark within seconds. You can read more on that from 404 Media here. 




YouTube launches option for U.S. creators to receive stablecoin payouts through PayPal

Big Tech continues to tiptoe into crypto. The latest example is a move by YouTube to let creators on the video platform choose to receive payouts in PayPal’s stablecoin. The head of crypto at PayPal, May Zabaneh, confirmed the arrangement to Fortune, adding that the feature is live and, as of now, only applies to users in the U.S. 

A spokesperson for Google, which owns YouTube, confirmed the video site has added payouts for creators in PayPal’s stablecoin but declined to comment further.

YouTube is an existing customer of PayPal’s and uses the fintech giant’s payouts service, which helps large enterprises pay gig workers and contractors.

Early in the third quarter, PayPal added the capability for payment recipients to receive their checks in PayPal’s stablecoin, PYUSD. Afterwards, YouTube decided to give that option to creators, who receive a share of earnings from the content they post on the platform, said Zabaneh.

“The beauty of what we’ve built is that YouTube doesn’t have to touch crypto and so we can help take away that complexity,” she added.

Big Tech eyes stablecoins

YouTube’s interest in stablecoins comes as Google and other Big Tech companies have shown interest in the cryptocurrencies amid a wave of hype in Silicon Valley and beyond. 

The tokens, which are pegged to underlying assets like the U.S. dollar, are longtime features of the crypto industry. But over the past year, they’ve exploded into the mainstream, especially after President Donald Trump signed into law a new bill regulating the assets. Proponents say they are an upgrade over existing financial infrastructure, and big fintechs have taken notice, including Stripe. In February, the payments giant closed a blockbuster $1.1 billion purchase of the stablecoin startup Bridge.

PayPal has long been an early mover in crypto among large tech firms. In 2020, it let users buy and sell Bitcoin, Ethereum, and a handful of other cryptocurrencies. And, in 2023, it launched the PYUSD stablecoin, which now has a market capitalization of nearly $4 billion, according to CoinGecko.

PayPal has slowly integrated PYUSD throughout its stable of products. Users can hold it in its digital wallet as well as Venmo, another financial app that PayPal also owns. They can use it to pay merchants. And, in February, a PayPal executive said small-to-medium sized merchants will be able to use it to pay vendors.

YouTube’s addition of payouts in PYUSD isn’t the first time Google has experimented with PayPal’s stablecoin. An executive at Google Cloud, the tech giant’s cloud computing arm, previously told Fortune that it had received payments from two of its customers in PYUSD.




Oracle slides by most since January on mounting AI spending


Oracle Corp. shares plunged the most in almost 11 months after the company escalated its spending on AI data centers and other equipment, rising outlays that are taking longer to translate into cloud revenue than investors want.

Capital expenditures, a metric of data center spending, were about $12 billion in the quarter, an increase from $8.5 billion in the preceding period, the company said Wednesday in a statement. Analysts anticipated $8.25 billion in capital spending in the quarter, according to data compiled by Bloomberg. 

Oracle now expects capital expenditures will reach about $50 billion in the fiscal year ending in May 2026 — a $15 billion increase from its September forecast — executives said on a conference call after the results were released.

The shares fell 11% to $198.85 at the close Thursday in New York, the biggest single-day decline since Jan. 27. Oracle’s stock had already lost about a third of its value through Wednesday’s close since a record high on Sept. 10. Meanwhile, a measure of Oracle’s credit risk reached a fresh 16-year high.

The latest earnings report and share slide mark a reversal of fortunes for a company that just a few months ago was enjoying a blistering rally and clinching multibillion-dollar data center deals with the likes of OpenAI. The gains temporarily turned co-founder Larry Ellison into the world’s richest person, with the tech magnate passing Elon Musk for a few hours.

Known for its database software, Oracle has recently found success in the competitive cloud computing market. It’s engaging in a massive data center build-out to power AI work for OpenAI and also counts companies such as ByteDance Ltd.’s TikTok and Meta Platforms Inc. as major cloud customers. 

Fiscal second-quarter cloud sales increased 34% to $7.98 billion, while revenue in the company’s closely watched infrastructure business gained 68% to $4.08 billion. Both numbers fell just short of analysts’ estimates.

Still, Wall Street has raised doubts about the costs and time required to develop AI infrastructure at such a massive scale. Oracle has taken out significant sums of debt and committed to leasing multiple data center sites. 

The cost of protecting the company’s debt against default for five years rose as much as 0.17 percentage point to around 1.41 percentage points a year, the highest intraday level since April 2009, according to ICE Data Services. The gauge rises as investor confidence in the company’s credit quality falls. Oracle credit derivatives have become a credit market barometer for AI risk.

“Oracle faces its own mounting scrutiny over a debt-fueled data center build-out and concentration risk amid questions over the outcome of AI spending uncertainty,” said Jacob Bourne, an analyst at Emarketer. “This revenue miss will likely exacerbate concerns among already cautious investors about its OpenAI deal and its aggressive AI spending.”

Remaining performance obligation, a measure of bookings, jumped more than fivefold to $523 billion in the quarter, which ended Nov. 30. Analysts, on average, estimated $519 billion.

Investors want to see Oracle turn its higher spending on infrastructure into revenue as quickly as it has promised. 

“The vast majority of our cap ex investments are for revenue generating equipment that is going into our data centers and not for land, buildings or power that collectively are covered via leases,” Principal Financial Officer Doug Kehring said on the call. “Oracle does not pay for these leases until the completed data centers and accompanying utilities are delivered to us.”

“As a foundational principle, we expect and are committed to maintaining our investment grade debt rating,” Kehring added.

Oracle’s cash burn increased in the quarter and its free cash flow reached a negative $10 billion. Overall, the company has about $106 billion in debt, according to data compiled by Bloomberg. “Investors continually seem to expect incremental cap ex to drive incremental revenue faster than the current reality,” wrote Mark Murphy, an analyst at JP Morgan.

“Oracle is very good at building and running high-performance and cost-efficient cloud data centers,” Clay Magouyrk, one of Oracle’s two chief executive officers, said in the statement. “Because our data centers are highly automated, we can build and run more of them.”

This is Oracle’s first earnings report since longtime Chief Executive Officer Safra Catz was succeeded by Magouyrk and Mike Sicilia, who are sharing the CEO post.

Part of the negative sentiment from investors in recent weeks is tied to increased skepticism about the business prospects of OpenAI, which is seeing more competition from companies like Alphabet Inc.’s Google, wrote Kirk Materne, an analyst at Evercore ISI, in a note ahead of earnings. Investors would like to see Oracle management explain how they could adjust spending plans if demand from OpenAI changes, he added.

In the quarter, total revenue expanded 14% to $16.1 billion. The company’s cloud software application business rose 11% to $3.9 billion. This is the first quarter that Oracle’s cloud infrastructure unit generated more sales than the applications business.

Earnings, excluding some items, were $2.26 a share. The profit was helped by the sale of Oracle’s holdings in chipmaker Ampere Computing, the company said. That generated a pretax gain of $2.7 billion in the period. Ampere, which was backed early in its life by Oracle, was bought by Japan’s SoftBank Group Corp. in a transaction that closed last month.

In the current period, which ends in February, total revenue will increase 19% to 22%, while cloud sales will increase 40% to 44%, Kehring said on the call. Both forecasts were in line with analysts’ estimates.

Annual revenue will be $67 billion, affirming an outlook the company gave in October.




Analyst sees Disney/OpenAI deal as a dividing line in entertainment history


Disney’s expansive $1 billion licensing agreement with OpenAI is a sign Hollywood is serious about adapting entertainment to the age of artificial intelligence (AI), marking the start of what one Ark Invest analyst describes as a “pre‑ and post‑AI” era for entertainment content. The deal, which allows OpenAI’s Sora video model to use Disney characters and franchises, instantly turns a century of carefully guarded intellectual property (IP) into raw material for a new kind of crowd‑sourced, AI‑assisted creativity.​

Nicholas Grous, director of research for consumer internet and fintech at Ark Invest, told Fortune tools like Sora effectively recreate the “YouTube moment” for video production, handing professional‑grade creation capabilities to anyone with a prompt instead of a studio budget. In his view, that shift will flood the market with AI‑generated clips and series, making it far harder for any single new creator or franchise to break out than it was in the early social‑video era. His remarks echoed the analysis of Melissa Otto, head of research at S&P Global Visible Alpha, who recently told Fortune that Netflix’s big move for Warner Bros. reveals the streaming giant is motivated by a need to deepen its war chest as it sees Google’s AI-video capabilities exploding with the onset of TPU chips.

As low‑cost synthetic video proliferates, Grous said he believes audiences will begin to mentally divide entertainment into “pre‑AI” and “post‑AI” categories, attaching a premium to work made largely by humans before generative tools became ubiquitous. “I think you’re going to have basically a split between pre-AI content and post-AI content,” he said, adding that viewers will consider pre-AI content closer to “true art, that was made with just human ingenuity and creativity, not this AI slop, for lack of a better word.”

Disney’s IP as AI fuel

Within that framework, Grous argued Disney’s real advantage is not just Sora access, but the depth of its pre‑AI catalog across animation, live‑action films, and television. Iconic franchises like Star Wars, classic princess films and legacy animated characters become building blocks for a global experiment in AI‑assisted storytelling, with fans effectively test‑marketing new scenarios at scale.​

“I actually think, and this might be counterintuitive, that the pre-AI content that existed, the Harry Potter, the Star Wars, all of the content that we’ve grown up with … that actually becomes incrementally more valuable to the entertainment landscape,” Grous said. On the one hand, he said, there are deals like Disney and OpenAI’s where IP can become user-generated content, but on the other, IP represents a robust content pipeline for future shows, movies, and the like.

Grous sketched a feedback loop in which Disney can watch what AI‑generated character combinations or story setups resonate online, then selectively “pull up” the most promising concepts into professionally produced, higher‑budget projects for Disney+ or theatrical release. From Disney’s perspective, he added, “we didn’t know Cinderella walking down Broadway and interacting with these types of characters, whatever it may be, was something that our audience would be interested in.” The OpenAI deal is exciting because Disney can bring that content onto its streaming arm Disney+ and make it more premium. “We’re going to use our studio chops to build this into something that’s a bit more luxury than what just an individual can create.”

Grous agreed the emerging market for pre‑AI film and TV libraries is similar to what’s happened in the music business, where legacy catalogs from artists like Bruce Springsteen and Bob Dylan have fetched huge sums from buyers betting on long‑term streaming and licensing value.

The big Netflix-Warner deal

For streaming rivals, the Disney-OpenAI pact is a strategic warning shot. Grous argued the soaring price tags in the bidding war for Warner Bros. between Netflix and Paramount show the importance of IP for the next phase of entertainment. “I think the reason this bidding [for Warner Bros.] is approaching $100 billion-plus is the content library and the potential to do a Disney-OpenAI type of deal.” In other words, whoever controls Batman and the like will control the inevitable AI-generated versions of those characters, although “they could take a franchise like Harry Potter and then just create slop around it.”

Netflix has a strong track record of monetizing libraries, Grous said, citing how the defunct USA dramedy Suits surged in popularity once it landed on Netflix, proving extensive back catalogs can be revived and re‑monetized when matched with modern distribution.

Grous cited Nintendo and Pokémon as examples of under‑monetized franchises that could see similar upside if their owners strike Sora‑style deals to bring characters more deeply into mobile and social environments.​ “That’s another company where you go, ‘Oh my god, the franchises they have, if they’re able to bring it into this new age that we’re all experiencing, this is a home-run opportunity.’”

In that environment, the Ark analyst suggests Disney’s OpenAI deal is less of a one‑off licensing win than an early template for how legacy media owners might survive and thrive in an AI‑saturated market. The companies with rich pre‑AI catalogs and a willingness to experiment with new tools, he argued, will be best positioned to stand out amid the “AI slop” and turn nostalgia‑laden IP into enduring, flexible assets for the post‑AI age.​

Underlying all of this is a broader battle for attention that spans far beyond traditional studios and shows how sectors between tech and entertainment are getting even blurrier than when the gatecrashers from Silicon Valley first piled into streaming. Grous notes Netflix itself has long framed its competition as everything from TikTok and Instagram to Fortnite and “sleep,” a mindset that fits naturally with the coming wave of AI‑generated video and interactive experiences.​ (In 2017, Netflix co-founder Reed Hastings famously said “sleep” was one of the company’s biggest competitors, as it was busy pioneering the binge-watch.)

Grous also sounded a warning for the age of post-AI content: The binge-watch won’t feel as good anymore, and there will be some kind of backlash. As critics such as The New York Times‘ James Poniewozik increasingly note, streaming shows don’t seem to be as re-watchable as even recent hits from the golden age of cable TV, such as Mad Men. Grous said he sees a future where the endangered movie theater makes a comeback. “People are going to want to go outside and meet or go to the theater. Like, we’re not just going to want to be fed AI slop for 16 hours a day.”

Editor’s note: the author worked for Netflix from June 2024 through July 2025.




Copyright © Miami Select.