
2026 will be the year you get fooled by a deepfake, researcher says. Voice cloning has crossed the ‘indistinguishable threshold’



Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices and full-body performances that mimic real people improved in quality far beyond what even many experts expected just a few years ago. They were also increasingly used to deceive people.

For many everyday scenarios — especially low-resolution video calls and media shared on social media platforms — their realism is now high enough to reliably fool nonexpert viewers. In practical terms, synthetic media have become indistinguishable from authentic recordings for ordinary people and, in some cases, even for institutions.

And this surge is not limited to quality. The volume of deepfakes has grown explosively: Cybersecurity firm DeepStrike estimates an increase from roughly 500,000 online deepfakes in 2023 to about 8 million in 2025, with annual growth nearing 900%.

I’m a computer scientist who researches deepfakes and other synthetic media. From my vantage point, I see that the situation is likely to get worse in 2026 as deepfakes become synthetic performers capable of reacting to people in real time.

Video: Just about anyone can now make a deepfake video. (https://www.youtube.com/embed/2DhHxitgzX0?wmode=transparent&start=0)

Dramatic improvements

Several technical shifts underlie this dramatic escalation. First, video realism made a significant leap thanks to video generation models designed specifically to maintain temporal consistency. These models produce videos that have coherent motion, consistent identities of the people portrayed, and content that makes sense from one frame to the next. The models disentangle the information related to representing a person’s identity from the information about motion so that the same motion can be mapped to different identities, or the same identity can have multiple types of motions.

These models produce stable, coherent faces without the flicker, warping or structural distortions around the eyes and jawline that once served as reliable forensic evidence of deepfakes.
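As a rough illustration of that disentanglement idea, here is a toy sketch in Python with PyTorch. The architecture, dimensions and names are invented for illustration and bear no resemblance to production video models; the point is only that identity and motion become separate codes that can be recombined.

```python
# Toy sketch of identity/motion disentanglement (invented toy dimensions;
# not any production model's architecture).
import torch
import torch.nn as nn

FRAME_DIM = 64 * 64 * 3        # one flattened low-res RGB frame
ID_DIM, MOTION_DIM = 128, 32   # sizes of the two separate codes

class IdentityEncoder(nn.Module):
    """Maps a reference frame to a motion-free identity code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FRAME_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, ID_DIM))

    def forward(self, frame):              # (FRAME_DIM,)
        return self.net(frame)             # (ID_DIM,)

class MotionEncoder(nn.Module):
    """Maps each driving frame to an identity-free motion code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FRAME_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, MOTION_DIM))

    def forward(self, frames):             # (T, FRAME_DIM)
        return self.net(frames)            # (T, MOTION_DIM)

class Decoder(nn.Module):
    """Renders frames from one identity code plus per-frame motion codes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ID_DIM + MOTION_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, FRAME_DIM))

    def forward(self, identity, motion):   # (ID_DIM,), (T, MOTION_DIM)
        ident = identity.expand(motion.shape[0], -1)
        return self.net(torch.cat([ident, motion], dim=-1))

# Because the codes are separate, motion captured from one person can be
# mapped onto another person's identity -- the core of face reenactment.
id_enc, mo_enc, dec = IdentityEncoder(), MotionEncoder(), Decoder()
person_a = torch.rand(FRAME_DIM)           # reference frame of person A
driving = torch.rand(16, FRAME_DIM)        # 16 frames of someone else moving
video = dec(id_enc(person_a), mo_enc(driving))
print(video.shape)                         # torch.Size([16, 12288])
```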

Second, voice cloning has crossed what I would call the “indistinguishable threshold.” A few seconds of audio now suffice to generate a convincing clone – complete with natural intonation, rhythm, emphasis, emotion, pauses and breathing noise. This capability is already fueling large-scale fraud. Some major retailers report receiving over 1,000 AI-generated scam calls per day. The perceptual tells that once gave away synthetic voices have largely disappeared.

Third, consumer tools have pushed the technical barrier almost to zero. Upgrades such as OpenAI’s Sora 2 and Google’s Veo 3, along with a wave of startups, mean that anyone can describe an idea, let a large language model such as OpenAI’s ChatGPT or Google’s Gemini draft a script, and generate polished audio-visual media in minutes. AI agents can automate the entire process. The capacity to generate coherent, storyline-driven deepfakes at a large scale has effectively been democratized.

This combination of surging quantity and personas that are nearly indistinguishable from real humans creates serious challenges for detecting deepfakes, especially in a media environment where people’s attention is fragmented and content moves faster than it can be verified. There has already been real-world harm – from misinformation to targeted harassment and financial scams – enabled by deepfakes that spread before people have a chance to realize what’s happening.

Video: AI researcher Hany Farid explains how deepfakes work and how good they’re getting. (https://www.youtube.com/embed/syNN38cu3Vw?wmode=transparent&start=0)

The future is real time

Looking forward, the trajectory for next year is clear: Deepfakes are moving toward real-time synthesis that can produce videos that closely resemble the nuances of a human’s appearance, making it easier for them to evade detection systems. The frontier is shifting from static visual realism to temporal and behavioral coherence: models that generate live or near-live content rather than pre-rendered clips.

Identity modeling is converging into unified systems that capture not just how a person looks, but how they move, sound and speak across contexts. The result goes beyond “this resembles person X,” to “this behaves like person X over time.” I expect entire video-call participants to be synthesized in real time; interactive AI-driven actors whose faces, voices and mannerisms adapt instantly to a prompt; and scammers deploying responsive avatars rather than fixed videos.

As these capabilities mature, the perceptual gap between synthetic and authentic human media will continue to narrow. The meaningful line of defense will shift away from human judgment. Instead, it will depend on infrastructure-level protections. These include secure provenance, such as cryptographically signed media and AI content tools that use the Coalition for Content Provenance and Authenticity specifications. It will also depend on multimodal forensic tools such as my lab’s Deepfake-o-Meter.
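The first of those protections is conceptually simple: a digital signature computed over the media bytes, which anyone holding the matching public key can check. Below is a minimal sketch in Python using the cryptography package; the Ed25519 key here stands in for a certified device or publisher key, and real C2PA credentials carry far richer metadata.

```python
# Minimal provenance sketch: sign media bytes when they are captured or
# published, verify them later. Requires `pip install cryptography`; the
# key is a stand-in for a certified device/publisher key, not real C2PA.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # held by the camera or publisher
public_key = signing_key.public_key()        # distributed for verification

media = b"...raw bytes of a video file..."
signature = signing_key.sign(media)          # shipped alongside the media

def is_authentic(blob: bytes, sig: bytes) -> bool:
    """True only if blob is byte-identical to what was originally signed."""
    try:
        public_key.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

print(is_authentic(media, signature))                 # True
print(is_authentic(media + b"tampered", signature))   # False
```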

Simply looking harder at pixels will no longer be adequate.

Siwei Lyu, Professor of Computer Science and Engineering; Director, UB Media Forensic Lab, University at Buffalo

This article is republished from The Conversation under a Creative Commons license. Read the original article.

This story was originally featured on Fortune.com




Why over 80% of America’s top CEOs think Trump would be wrong not to pick Chris Waller for Fed chair



Since the founding of the Federal Reserve in 1914, the United States has had 16 Fed chairs, yet rarely has the selection of the nation’s central-bank leader captured such sustained media and political attention as the spectacle playing out right now. Of course, this is by design; at least since the debut of The Apprentice in 2004, Donald Trump has reveled in transforming senior hiring decisions into a public spectacle—casting staffing choices as a form of modern gladiatorial entertainment. While this approach has drawn criticism, including my original 2004 critiques in the WSJ, it also has the paradoxical virtue of rendering candidates’ strengths, weaknesses, and temperaments unusually transparent.

Much of the media’s attention has centered on Kevin Hassett and Kevin Warsh as the presumptive front-runners to be the next Fed Chair. Both are highly respected, with long track records of public service and honorable character. But whether fairly or not, their perceived weaknesses have been under a magnifying glass, creating an opening for an ascendant dark horse who is drawing growing backing from the top CEOs of the nation’s largest enterprises.

CEOs are gravitating towards that dark horse candidate, current Fed Governor Chris Waller, because while he may lack the White House network of other top contenders, he is quickly emerging as perhaps the only candidate who can cut interest rates with broad-based credibility and build broad consensus around those needed rate cuts, both at the Fed and across corporate America and financial markets.

A great irony in President Trump’s jawboning of the Fed is that Trump is perhaps his own worst enemy in trying to force interest rates down. The belief that interest rates need to come down is shared not only among economists across the ideological spectrum, and not only among many top business leaders, but even among many of Trump’s most vocal critics. We have previously written several publications calling for the Fed to lower interest rates, pointing out that entire sectors, such as homebuilders, are getting hammered unnecessarily by rates held so high for so long.

CEOs care about interest rates coming down, but they care even more about Fed independence. History is clear: countries that politicize their central banks set themselves on a path towards monetary purgatory and collapse. That’s why Trump’s brazen interventions at the Fed have wreaked havoc in the markets, with bond investors in active revolt and with long-term bond yields rising by 20 basis points after some pointed commentary from Trump.

Chris Waller is perhaps the only choice for Fed Chair who can thread the needle. Unlike other top contenders, Waller’s calls for rates to come down reflect neither convenient political posturing nor obsequious flattery, but genuine intellectual conviction. Waller has been remarkably consistent and prescient across his entire career at the Fed: he correctly pointed to signs that the economy, and in particular employment, was softening, and has been calling for rates to come down far longer than any of his peers at the Fed.

Yet, at the same time, Waller has emphasized and defended central bank independence time and time again, building on his own academic research on the importance of central bank independence. Indeed, before his public service at the Fed began in 2009, Waller was a renowned academic with a long track record of groundbreaking economic research, including as professor and the Gilbert F. Schaefer Chair of Economics at the University of Notre Dame.

Financial markets have already offered a preview of how they would respond to a potential Waller nomination — decidedly positively. When CNBC broadcast live Waller’s hour-plus Q&A, moderated by CNBC’s Steve Liesman, with 200 top CEOs in attendance at our Yale CEO Summit last week, stocks rallied and bond yields fell in real time as Waller called for rates to come down, pointing to softening employment numbers, while simultaneously pledging to defend central bank independence. No other contender for Fed Chair has sparked such a positive market reaction.


Waller is a lifelong Republican who has a knack for getting along with very different constituencies, all of whom respect his genuine expertise, personal humility and willingness to listen. Even CEOs who disagreed with certain aspects of Waller’s arguments clearly appreciated his constructive engagement, as well as his intellectual honesty and independence. When we polled the room, as reported by Nick Timiraos of The Wall Street Journal, a whopping 81% of CEOs picked Waller as their top choice for Fed Chair, building on prior CNBC polls showing that a majority of market participants prefer Waller, as well as prominent endorsements from publications such as The Economist.

Many CEOs at our Yale CEO Summit expressed their appreciation for Waller’s long track record of partnering effectively with business leaders on challenges as well as opportunities. Take crypto innovation as one example. As the Fed Governor who oversees the payment system, Waller was once again prescient as an advocate of stablecoins since before 2021, when few knew what stablecoins even were, and he convened the first-ever Payments Innovation Conference earlier this year, bringing in top leaders from industry to help shape the future of stablecoin payments.

President Harry Truman lamented, “Give me a one-handed economist. All my economists say, ‘on the one hand…’, then ‘but on the other.’” Business leaders appreciate Waller’s serious and decisive style, his systematic economic knowledge, his track record of constructive engagement, his clarity of message, and his credible presence, all of which transcend political or personal career agendas.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




I pioneered machine teaching at Microsoft. Building AI agents is like building a basketball team, not drafting a player 




Salesforce’s latest agent testing and building tool and Jeff Bezos’s new AI venture focused on practical industrial applications of AI show that enterprises are inching towards autonomous systems. It’s meaningful progress, because robust guardrails, testing and evaluation are the foundation of agentic AI. But the next step, which is largely missing right now, is practice: giving teams of agents repeated, structured experience. As the pioneer of Machine Teaching, a methodology for training autonomous systems that has been deployed across several Fortune 500 companies, I’ve experienced the impact of agent practice while building and deploying over 200 autonomous multi-agent systems at Microsoft and now at AMESA for enterprises around the globe.

Every CEO investing in AI faces the same problem: spending billions on pilots that may or may not deliver real autonomy. Agents seem to excel in demos but stall when real-world complexity hits. As a result, business leaders do not trust AI to act independently on billion-dollar machinery or workflows. Leaders are searching for the next phase of AI’s capability: true enterprise expertise. We shouldn’t ask how much knowledge an agent can retain, but rather whether it has had the opportunity to develop expertise by practicing as humans do.

The Testing Illusion 

Just as human teams develop expertise through repetition, feedback and clear roles, AI agents must develop skills inside realistic practice environments with structured orchestration. Practice is what turns intelligence into reliable, autonomous performance.

Many enterprise leaders still assume that a few major LLM companies will develop powerful enough models and massive data sets to manage complex enterprise operations end-to-end via “Artificial General Intelligence.” 

But that isn’t how enterprises work. 

No critical process, whether it be supply chain planning or energy optimization, is run by one person with one skill set. Think of a basketball team. Each player needs to work on their own skills, whether dribbling or shooting a jump shot, but each player also has a role on the team. A center’s purpose is different from a point guard’s. Teams succeed with defined roles, expertise and responsibilities. AI needs that same structure.

Even if you did create the perfect model or reach AGI, I’d predict the agents would still fail in production because they never encountered variability, drift, anomalies, or the subtle signals that humans navigate every day. They haven’t differentiated their skill sets or learned when to act or pause. They also haven’t been exposed to expert feedback loops that shape real judgment.

How Machine Teaching Creates Practice

Machine Teaching provides the structure that modern agentic systems need. It guides agents to (see the sketch after this list):

  • Perceive the environment correctly.
  • Master basic skills that mirror human operators.
  • Learn higher-level strategies that reflect expert judgment.
  • Coordinate under a supervisor agent that selects the right strategy at the right time.
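Here is a minimal sketch of that structure. The skill agents, plant state and routing rule are invented for illustration; in a real Machine Teaching deployment, the strategies and routing are shaped by practice and expert feedback rather than hard-coded.

```python
# Toy supervisor/skill-agent pattern (all names and rules invented for
# illustration; real systems learn these behaviors through practice).
from dataclasses import dataclass

@dataclass
class PlantState:
    pressure: float        # normalized 0..1 reading from the process
    anomaly: bool          # perception layer flagged something unusual

class SteadyStateAgent:
    """Basic skill mirroring routine human operation."""
    def act(self, s: PlantState) -> str:
        return f"trim setpoint toward 0.50 (pressure={s.pressure:.2f})"

class RecoveryAgent:
    """Higher-level strategy reflecting expert judgment under stress."""
    def act(self, s: PlantState) -> str:
        return "reduce throughput and hold until the anomaly clears"

class Supervisor:
    """Selects the right strategy at the right time."""
    def __init__(self):
        self.steady, self.recovery = SteadyStateAgent(), RecoveryAgent()

    def step(self, s: PlantState) -> str:
        agent = self.recovery if (s.anomaly or s.pressure > 0.9) else self.steady
        return agent.act(s)

sup = Supervisor()
print(sup.step(PlantState(pressure=0.55, anomaly=False)))  # routine skill
print(sup.step(PlantState(pressure=0.95, anomaly=True)))   # expert strategy
```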

Take one Fortune 500 company I worked with that was improving a nitrogen manufacturing process. Our agents practiced inside the AMESA Agent Cloud, improving through experimentation and feedback. In less than one day, the agent teams outperformed a custom-built industrial control system that other automation tools and single-agent AI applications could not match.

This resulted in an estimated $1.2 million in annual efficiency gains, and more importantly, gave leadership the confidence to deploy autonomy at scale because the system behaved like their best operators. 

Why CEOs and Leaders Need Practiced AI

Practice is what drives true autonomy in agents. I invite every leader to begin reframing a few assumptions:

  1. Stop thinking in terms of models and think in terms of teams. Everyday interactions with systems like ChatGPT or Claude are powerful, but they reinforce a misconception that large language models are the path to enterprise autonomy. Autonomy emerges from specialized agents that take on perception, control, planning and supervisory roles through a wide variety of technologies.
  2. Identify where expertise is disappearing and preserve it within agents. Many essential operations rely on experts who are nearing retirement. CEOs should ask which processes would be most vulnerable if these experts left tomorrow. Those areas are the ideal starting point for a Machine Teaching approach. Let your top operators teach a team of agents in a safe practice environment so that their expertise becomes scalable and permanent.
  3. Recognize that you already have the infrastructure for autonomy. Years of investment in sensors, MES and SCADA systems, ERP integrations and IoT telemetry already form your organization’s backbone of digital twins and high-fidelity simulations. Success requires orchestration, structure, and leveraging the data foundation you already built.

The Payoff of Practice

When enterprises give agents room to practice before deployment, several things happen:  

  • Human teams begin to trust the AI and understand its boundaries. 
  • Leaders can calculate true ROI rather than speculative projections. 
  • Agents become safer, more consistent and aligned with expert judgment. 
  • Human teams are elevated rather than replaced because AI now understands their workflows and supports them.

Agents won’t truly perform without experience, and experience only comes from practice. The companies that invest in and embrace this framing will be the ones to break out of pilot purgatory and see real impact.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




Ex-Palantir turned politician Alex Bores says AI deepfakes are a ‘solvable problem’ if we bring back a free, decades-old technique




New York Assemblymember Alex Bores, a Democrat now running for Congress in Manhattan’s 12th District, argues that one of the most alarming uses of artificial intelligence—highly realistic deepfakes—is less an unsolvable crisis than a failure to deploy an existing fix.

“Can we nerd out about deepfakes? Because this is a solvable problem and one that I think most people are missing the boat on,” Bores said on a recent episode of Bloomberg’s Odd Lots podcast, hosted by Joe Weisenthal and Tracy Alloway.

Rather than training people to spot visual glitches in fake images or audio, Bores said policymakers and the tech industry should lean on a well-established cryptographic approach similar to what made online banking possible in the 1990s. Back then, skeptics doubted consumers would ever trust financial transactions over the internet. The widespread adoption of HTTPS—using digital certificates to verify that a website is authentic—changed that.

“That was a solvable problem,” Bores said. “That basically same technique works for images, video, and for audio.”

Bores pointed to a “free open-source metadata standard” known as C2PA, short for the Coalition for Content Provenance and Authenticity, which allows creators and platforms to attach tamper-evident credentials to files. The standard can cryptographically record whether a piece of content was captured on a real device or generated by AI, and how it has been edited over time.
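As a rough sketch of what “tamper-evident” means here (the shape of the idea only, not the actual C2PA manifest format), a credential can bind a hash of the content, a source claim and an edit history to a digital signature, so that altering either the file or its recorded history breaks verification:

```python
# Rough illustration of a tamper-evident provenance manifest (the idea
# behind C2PA-style credentials, not the real format). Requires
# `pip install cryptography`.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()   # e.g. a camera's certified key

def make_manifest(content: bytes, source: str, edits: list) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,          # e.g. "camera-capture" vs. "ai-generated"
        "edits": edits,            # recorded edit history
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": creator_key.sign(payload).hex()}

def verify(content: bytes, manifest: dict) -> bool:
    claim = manifest["claim"]
    if hashlib.sha256(content).hexdigest() != claim["content_sha256"]:
        return False               # the file bytes were altered
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        creator_key.public_key().verify(
            bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False               # the recorded claim was altered

photo = b"...image bytes..."
m = make_manifest(photo, "camera-capture", ["crop"])
print(verify(photo, m))            # True: credentials intact
print(verify(photo + b"x", m))     # False: content no longer matches the hash
m["claim"]["edits"].append("face swap")   # rewriting the history...
print(verify(photo, m))            # ...False: the signature no longer checks
```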

“The challenge is the creator has to attach it and so you need to get to a place where that is the default option,” Bores said.

In his view, the goal is a world where most legitimate media carries this kind of provenance data, so that if “you see an image and it doesn’t have that cryptographic proof, you should be skeptical.”

Bores said that, thanks to the shift from HTTP to HTTPS, consumers now instinctively know to distrust a banking site that lacks a secure connection. “It’d be like going to your banking website and only loading HTTP, right? You would instantly be suspect, but you can still produce the images.”

AI has become a central political and economic issue, with deepfakes emerging as a particular concern for elections, financial fraud, and online harassment. Bores said some of the most damaging cases involve non-consensual sexual images, including those targeting school-age girls, where even a clearly labeled fake can have real-world consequences. He argued that state-level laws banning deepfake pornography, including in New York, now risk being constrained by a new federal push to preempt state AI rules.

Bores’s broader AI agenda has already drawn industry fire. He authored the Raise Act—a bill that aims to impose safety and reporting requirements on a small group of so-called “frontier” AI labs, including Meta, Google, OpenAI, Anthropic and xAI—which was just signed into law last Friday. The Raise Act requires those companies to publish safety plans, disclose “critical safety incidents,” and refrain from releasing models that fail their own internal tests.

The measure passed the New York State Assembly with bipartisan support, but has also triggered a backlash from a pro-AI super PAC, reportedly backed by prominent tech investors and executives, which has pledged millions of dollars to defeat Bores in the 2026 primary.

Bores, who previously worked as a data scientist and federal-civilian business lead at Palantir, says his position isn’t anti-industry but rather an attempt to systematize protections that large AI labs have already endorsed in voluntary commitments with the White House and at international AI summits. He said compliance with the Raise Act, for a company like Google or Meta, would amount to hiring “one extra full-time employee.”

On Odd Lots, Bores said cryptographic content authentication should anchor any policy response to deepfakes. But he also stressed that technical labels are only one piece of the puzzle. Laws that explicitly ban harmful uses—such as deepfake child sexual abuse material—are still vital, he said, particularly while Congress has yet to enact comprehensive federal standards.

“AI is already embedded in [voters’] lives,” Bores said, pointing to examples ranging from AI toys aimed at children to bots mimicking human conversation.


This story was originally featured on Fortune.com


