Connect with us

Business

A handful of bad data can ‘poison’ even the largest AI models, researchers warn

Published

on



Hello and welcome to Eye on AI…In this edition: A new Anthropic study reveals that even the biggest AI models can be ‘poisoned’ with just a few hundred documents…OpenAI’s deal with Broadcom….Sora 2 and the AI slop issueand corporate America spends big on AI. 

Hi, Beatrice Nolan here. I’m filling in for Jeremy, who is on assignment this week. A recent study from Anthropic, in collaboration with the UK AI Security Institute and the Alan Turing Institute, caught my eye earlier this week. The study focused on the “poisoning” of AI models, and it undermined some conventional wisdom within the AI sector.

The research found that the introduction of just 250 bad documents, a tiny proportion when compared to the billions of texts a model learns from, can secretly produce a “backdoor” vulnerability in large language models (LLMs). This means that even a very small number of malicious files inserted into training data can teach a model to behave in unexpected or harmful ways when triggered by a specific phrase or pattern.

This idea itself isn’t new; researchers have cited data poisoning as a potential vulnerability in machine learning for years, particularly in smaller models or academic settings. What was surprising was that the researchers found that model size didn’t matter.

Small models along with the largest models on the market were both effected by the same small amount of bad files, even though the bigger models are trained on far more total data. This contradicts the common assumption that as AI models get larger they become more resistant to this kind of manipulation. Researchers had previously assumed attackers would need to corrupt a specific percentage of the data, which, for larger models would be millions of documents. But the study showed even a tiny handful of malicious documents can “infect” a model, no matter how large it is.

The researchers stress that this test used a harmless example (making the model spit out gibberish text) that is unlikely to pose significant risks in frontier models. But the findings imply data-poisoning attacks could be much easier, and become much more prolific, than people originally assumed.

Safety training can be quietly unwound

What does all of this mean in real-world terms? Vasilios Mavroudis, one of the authors of the study and a principal research scientist at the Alan Turing Institute, told me he was worried about a few ways this could be scaled by bad actors.

“How this translates in practice is two examples. One is you could have a model that when, for example, it detects a specific sequence of words, it foregoes its safety training and then starts helping the user carry out malicious tasks,” Mavroudis said. Another risk that worries him was the potential for models to be engineered to refuse requests from or be less helpful to certain groups of the population, just by detecting specific patterns in the request or keywords.

“This would be an agenda by someone who wants to marginalize or target specific groups,” he said. “Maybe they speak a specific language or have interests or questions that reveal certain things about the culture…and then, based on that, the model could be triggered, essentially to completely refuse to help or to become less helpful.”

“It’s fairly easy to detect a model not being responsive at all. But if the model is just handicapped, then it becomes harder to detect,” he added.

Rethinking data ‘supply chains’

The paper suggests that this kind of data poisoning could be scalable, and it acts as a warning that stronger defenses, as well as more research into how to prevent and detect poisoning, are needed.

Mavroudis suggests one way to tackle this is for companies to treat data pipelines the way manufacturers treat supply chains: verifying sources more carefully, filtering more aggressively, and strengthening post-training testing for problematic behaviors.

“We have some preliminary evidence that suggests if you continue training on curated, clean data…this helps decay the factors that may have been introduced as part of the process up until that point,” he said. “Defenders should stop assuming the data set size is enough to protect them on its own.”

It’s a good reminder for the AI industry, which is notoriously preoccupied with scale, that bigger doesn’t always mean safer. Simply scaling models can’t replace the need for clean, traceable data. Sometimes, it turns out, all it takes is a few bad inputs to spoil the entire output.

With that, here’s more AI news.

Beatrice Nolan

bea.nolan@fortune.com

FORTUNE ON AI

A 3-person policy nonprofit that worked on California’s AI safety law is publicly accusing OpenAI of intimidation tacticsSharon Goldman 

Browser wars, a hallmark of the late 1990s tech world, are back with a vengeance—thanks to AI Beatrice Nolan and Jeremy Kahn

Former Apple CEO says ‘AI has not been a particular strength’ for the tech giant and warns it has its first major competitor in decades — Sasha Rogelberg

EYE ON AI NEWS

OpenAI and Broadcom have struck a multibillion-dollar AI chip deal. The two tech giants have signed a deal to co-develop and deploy 10 gigawatts of custom artificial intelligence chips over the next four years. Announced on Monday, the agreement is a way for OpenAI to address its growing compute demands as it scales its AI products. The partnership will see OpenAI design its own GPUs, while Broadcom co-develops and deploys them beginning in the second half of 2026. Broadcom shares jumped nearly 10% following the announcement. Read more in the Wall Street Journal.

 

The Dutch government seizure of chipmaker Nexperia followed a U.S. warning. The Dutch government took control of chipmaker Nexperia, a key supplier of low-margin semiconductors for Europe’s auto industry, after the U.S. warned it would remain on Washington’s export control list while its Chinese chief executive, Zhang Xuezheng, remained in charge, according to court filings cited by the Financial Times. The Dutch economy minister Vincent Karremans removed Zhang earlier this month before invoking a 70-year-old emergency law to take control of the company, citing “serious governance shortcomings,”  Nexperia was sold to a Chinese consortium in 2017 and later acquired by the partially state-owned Wingtech. The dispute escalated after U.S. officials told the Dutch government in June that efforts to separate Nexperia’s European operations from its Chinese ownership were progressing too slowly. Read more in the Financial Times.

 

California becomes the first state to regulate AI companion chatbots. Governor Gavin Newsom has signed SB 243, making his home state the first to regulate AI companion chatbots. The new law requires companies like OpenAI, Meta, Character.AI, and Replika to implement safety measures designed to protect children and vulnerable users from potential harm. It comes into effect on January 1, 2026, and mandates age verification and protocols to address suicide and self-harm. It also introduces new restrictions on chatbots posing as healthcare professionals or engaging in sexually explicit conversations with minors. Read more in TechCrunch.

EYE ON AI RESEARCH

A new report has found corporate America is going all-in on artificial intelligence. The annual State of AI Report found that generative AI is crossing a “commercial chasm,” with adoption and retention of AI technology up, while spend grows. According to the report, which analyzed data from Ramp’s AI Index, paid AI adoption among U.S. businesses has surged from 5% in early 2023 to 43.8% by September 2025. Average enterprise contracts have also ballooned from $39,000 to $530,000, with Ramp projecting a further $1 million in 2026 as pilots develop into full-scale deployments. Cohort retention—the share of customers who keep using a product over time—is also strengthening, with 12-month retention rising from 50% in 2022 to 80% in 2024, suggesting AI pilots are being transferred into more consistent workflows.

AI CALENDAR

Oct. 21-22: TedAI San Francisco.

Nov. 10-13: Web Summit, Lisbon. 

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

BRAIN FOOD

Sora 2 and the AI slop issue. OpenAI’s newest iteration of its video-generation software has caused quite a stir since it launched earlier this month. The technology has horrified the children of deceased actors, caused a copyright row, and sparked headlines including: “Is art dead?”

The death of art seems less like the issue than the inescapable spread of AI “slop.” AI-generated videos are already cramming people’s social media, which raises a bunch of potential safety and misinformation issues, but also risks undermining the internet as we know it. If low-quality, mass-produced slop floods the web, it risks pushing out authentic human content and siphoning engagement away from the content that many creators rely on to make a living.

OpenAI has tried to watermark Sora 2’s content to help viewers tell AI-generated clips from real footage, automatically adding a small cartoon cloud watermark to every video it produces. However, a report from 404 Media found that the watermark is easy to remove and that multiple websites already offer tools to strip it out. The outlet tested three of the sites and found that each could erase the watermark within seconds. You can read more on that from 404 Media here. 



Source link

Continue Reading

Business

Google DeepMind agrees to sweeping partnership with the U.K. government

Published

on



AI lab GoogleDeepMind announced a major new partnership with the U.K. government Wednesday, pledging to accelerate breakthroughs in materials science and clean energy, including nuclear fusion, as well as conducting joint research on the societal impacts of AI and on ways to make AI decision-making more interpretable and safer.

As part of the partnership, Google DeepMind said it would open its first automated research laboratory in the U.K. in 2026. That lab will focus on discovering advanced materials including superconductors that can carry electricity with zero resistance. The facility will be fully integrated with Google’s Gemini AI models. Gemini will serve as a kind of scientific brain for the lab, which will also use robotics to synthesize and characterize hundreds of materials per day, significantly accelerating the timeline for transformative discoveries.

The company will also work with the U.K. government and other U.K.-based scientists on trying to make breakthroughs in nuclear fusion, potentially paving the way for cheaper, cleaner energy. Fusion reactions should produce abundant power while producing little to no nuclear waste, but such reactions have proved to be very difficult to sustain or scale up.

Additionally, Google DeepMind is expanding its research alliance with the government-run U.K. AI Security Institute to explore methods for discovering how large language models and other complex neural network-based AI models arrive at decisions. The partnership will also involve joint research into the societal impacts of AI, such as the effect AI deployment is likely to have on the labor market and the impact increased use of AI chatbots may have on mental health.

British Prime Minister Keir Starmer said in a statement that the partnership would “make sure we harness developments in AI for public good so that everyone feels the benefits.”

“That means using AI to tackle everyday challenges like cutting energy bills thanks to cheaper, greener energy and making our public services more efficient so that taxpayers’ money is spent on what matters most to people,” Starmer said.

Google DeepMind cofounder and CEO Demis Hassabis said in a statement that AI has “incredible potential to drive a new era of scientific discovery and improve everyday life.”

As part of the partnership, British scientists will receive priority access to Google DeepMind’s advanced AI tools, including AlphaGenome for DNA sequencing; AlphaEvolve for designing algorithms; DeepMind’s WeatherNext weather forecasting models; and its new AI co-scientist, a multi-agent system that acts as a virtual research collaborator.

DeepMind was founded in London in 2010 and is still headquartered there; it was acquired by Google in 2014.

Gemini’s U.K. footprint expands

The collaboration also includes potential development of AI systems for education and government services. Google DeepMind will explore creating a version of Gemini tailored to England’s national curriculum to help teachers reduce administrative workloads. A pilot program in Northern Ireland showed that Gemini helped save teachers an average of 10 hours per week, according to the U.K. government.

For public services, the U.K. government’s AI Incubator team is trialing Extract, a Gemini-powered tool that converts old planning documents into digital data in 40 seconds, compared to the current two-hour process.

The expanded research partnership with the U.K. AI Security Institute will focus on three areas, the government and DeepMind said: developing techniques to monitor AI systems’ so-called “chain of thought”—the reasoning steps an AI model takes to arrive at an answer; studying the social and emotional impacts of AI systems; and exploring how AI will affect employment.

U.K. AISI currently tests the safety of frontier AI models, including those from Google DeepMind and a number of other AI labs, under voluntary agreements. But the new research collaboration could potentially raise concerns about whether the U.K. AISI will remain objective in its testing of its now-partner’s models.

In response to a question on this from Fortune, William Isaac, principal scientist and director of responsibility at Google DeepMind, did not directly address the issue of how the partnership might affect the U.K. AISI’s objectivity. But he said the new research agreement puts in place “a separate kind of relationship from other points of interaction.” He also said the new partnership was focused on “question on the horizon” rather than present models, and that the researchers would publish the results of their work for anyone to review.

Isaac said there is no financial or commercial exchange as part of the research partnership, with both sides contributing people and research resources.

“We’re excited to announce that we’re going to be deepening our partnership with the U.K. AISI to really focus on exploring, really the frontier research questions that we believe are going to be important for ensuring that we have safe and responsible development,” he said.

He said the partnership will produce publicly accessible research focused on foundational questions—such as how AI impacts jobs or how talking to chatbots effects mental health—rather than policy-specific recommendations, though the findings could influence how businesses and policymakers think about AI and how to regulate it.

“We want the research to be meaningful and provide insights,” Isaac said.

Isaac described the U.K. AISI as “the crown jewel of all of the safety institutes” globally and said deepening the partnership “sends a really strong signal” about the importance of engaging responsibly as AI systems become more widely adopted.

The partnership also includes expanded collaboration on AI-enhanced approaches to cybersecurity. This will include the U.K. government exploring the sue of tools like Big Sleep, an AI agent developed by Google that autonomously hunts for previously unknown “Zero Day” cybersecurity exploits, and CodeMender, another AI agent that can search for and then automatically patch security vulnerabilities in open source software.

British Technology Secretary Liz Kendall is visiting San Francisco this week to further the U.K.-U.S. Tech Prosperity Deal, which was agreed to during U.S. President Trump’s state visit to the U.K. in September. In November alone, the British government said the pact helped secure more than $32.4 billion of private investment committed to the U.K tech sector.

The Google-U.K. partnership builds on a £5 billion ($6.7 billion) investment commitment from Google made earlier this year to support U.K. AI infrastructure and research, and to help modernize government IT systems.

The British government also said collaboration supports its AI Opportunities Action Plan and its £137 million AI for Science Strategy, which aims to position the UK as a global leader in AI-driven research.



Source link

Continue Reading

Business

49-year-old Democrat who owns a gourmet olive oil store swipes another historically Republican district from Trump and Republicans

Published

on



Democrat Eric Gisler claimed an upset victory Tuesday in a special election in a historically Republican Georgia state House district.

Gisler said he was the winner of the contest, in which he was leading Republican Mack “Dutch” Guest by about 200 votes out of more than 11,000 in final unofficial returns.

Robert Sinners, a spokesperson with the secretary of state’s office, said there could be a few provisional ballots left before the tally is finalized.

“I think we had the right message for the time,” Gisler told The Associated Press in a phone interview. He credited his win to Democratic enthusiasm but also said some Republicans were looking for a change.

“A lot of what I would call traditional conservatives held their nose and voted Republican last year on the promise of low prices and whatever else they were selling,” Gisler said. “But they hadn’t received that.”

Guest did not immediately respond to a text message seeking comment late Tuesday.

Democrats have seen a number of electoral successes in 2025 as the party’s voters have been eager to express dissatisfaction with Republican President Donald Trump.

In Georgia in November, they romped to two blowouts in statewide special elections for the Public Service Commission, unseating two incumbent Republicans in campaigns driven by discontent over rising electricity costs.

Nationwide, Democrats won governor’s races by broad margins in Virginia and New Jersey. On Tuesday a Democrat defeated a Trump-endorsed Republican in the officially nonpartisan race for Miami mayor, becoming the first from his party to win the post in nearly 30 years.

Democrats have also performed strongly in some races they lost, such as a Tennessee U.S. House race last week and a Georgia state Senate race in September.

Republicans remain firmly in control of the Georgia House, but their majority is likely fall to 99-81 when lawmakers return in January. Also Tuesday, voters in a second, heavily Republican district in Atlanta’s northwest suburbs sent Republican Bill Fincher and Democrat Scott Sanders to a Jan. 6 runoff to fill a vacancy created when Rep. Mandi Ballinger died.

The GOP majority is down from 119 Republicans in 2015. It would be the first time the GOP holds fewer than 100 seats in the lower chamber since 2005, when they won control for the first time since Reconstruction.

The race between Gisler and Guest in House District 121 in the Athens area northeast of Atlanta was held to replace Republican Marcus Wiedower, who was in the seat since 2018 but resigned in the middle of this term to focus on business interests.

Most of the district is in Oconee County, a Republican suburb of Athens, reaching into heavily Democratic Athens-Clarke County. Republicans gerrymandered Athens-Clarke to include one strongly Democratic district, parceling out the rest of the county into three seats intended to be Republican.

Gisler ran against Wiedower in 2024, losing 61% to 39%. This year was Guest’s first time running for office.

A Democrat briefly won control of the district in a 2017 special election but lost to Wiedower in 2018.

Gisler, a 49-year-old Watkinsville resident, works for an insurance technology company and owns a gourmet olive oil store. He campaigned on improving health care, increasing affordability and reinvesting Georgia’s surplus funds

Guest is the president of a trucking company and touted his community ties, promising to improve public safety and cut taxes. He was endorsed by Republican Gov. Brian Kemp, an Athens native, and raised far more in campaign contributions than Gisler.



Source link

Continue Reading

Business

Rivian CEO says it’s a misconception EVs are politicized, with a 50-50 party split among R1 buyers

Published

on



If Rivian’s sales are any indication, owning an electric vehicle isn’t such a partisan issue, despite President Donald Trump’s rollbacks of mandates, incentives, and targets for EVs.

At the Fortune Brainstorm AI conference in San Francisco on Tuesday, Rivian CEO RJ Scaringe said it’s a misconception that electrification is politicized, explaining that most customers buy a product based on how it fits their needs, not their ideology. The questions car buyers ask, he said, are the same whether they’re purchasing one with an internal-combustion engine or a battery: “Is it exciting? Are you attracted to the product? Does it draw you in? Does the brand positioning resonate with you? Do the features answer needs that you have?”

Buyers of Rivian’s R1 electric SUV are split roughly 50-50 between Republicans and Democrats, Scaringe told Fortune’s Andrew Nusca. “I think that’s extraordinarily powerful news for us to recognize—that this isn’t just left-leaning buyers,” he added. “These are people that are saying, ‘I like the idea of this product, I’m excited about it.’ And this is thousands and thousands of customers. This is statistically relevant information.”

Buying an EV was once an indication of left-leaning politics, but the politics got scrambled after Tesla CEO Elon Musk became the top Republican donor and a close adviser to Trump. That drew some new customers to Tesla, and turned off a lot of progressive EV buyers, with many existing owners putting bumper stickers on their Teslas explaining that they bought their cars before Musk’s hard-right turn. Trump and Musk later had a stunning public feud, in part over the administration’s elimination of EV and solar tax credits.

But Scaringe said he started Rivian with a long-term view, independent of any policy framework or political trends. He also insisted that if Americans have more EV choices, sales would follow. Right now, Tesla dominates a key corner of the market, namely EVs in the $50,000 price range. Rivian’s forthcoming R2 mid-size SUV will represent a new choice in that market, with a starting price of $45,000 versus the R1’s $70,000.

Ten years from now, Scaringe said he hopes—and believes—that EV adoption in the U.S. will be meaningfully higher than it is today across the board, explaining that the main constraint isn’t on the demand side. Instead, it’s on the supply side, which suffers from “a shocking lack of choice,” especially compared to Europe and China, he added. EV options in the U.S. are limited by the fact that Chinese brands are shut out of the market.

More choices for U.S. EV buyers would presumably create more competition for Rivian—and indeed, the flood of low-priced Chinese EVs in other auto markets has created a backlash, with countries such as Canada imposing steep tariffs on them. But Scaringe appears to view more competition as positive for the market overall.

“I do think that the existence of choice will help drive more penetration, and it actually creates a unique opportunity in the United States,” he said.



Source link

Continue Reading

Trending

Copyright © Miami Select.