
From pilots to powering sustainable growth: the C-suite blueprint for physical AI


It’s no secret that artificial intelligence (AI) has already had a significant impact on businesses — introducing new levels of automation and challenges for leaders to overcome. Until now, it’s been largely confined to screens and data centers, but we are witnessing this technology advance beyond the digital world right before our eyes. 

In manufacturing, sensors and AI-driven analytics allow factories to anticipate maintenance before breakdowns occur, and in healthcare, smart diagnostic systems accelerate detection and personalize treatment. Even in global supply chains, intelligent networks are improving efficiency, reducing waste and advancing sustainability. The result isn’t just incremental improvement but the creation of safer workplaces, more reliable products and stronger customer trust through consistently better outcomes.

“Physical AI” represents the next frontier, transforming industries by embedding intelligence directly into the systems powering our daily lives. Examples include robots in hospitals, autonomous fleets or AI-driven factories. This new era not only unlocks a wealth of unprecedented possibilities for businesses but also comes with new complications that the C-suite needs to prepare for. Further, successful implementation demands that business structures adapt. 

A call to action for leaders

Rapid advances in robotics, combined with the sizable potential of these technologies, are positioning physical AI as a critical development in the AI revolution. For executives, though, the challenge is moving from pilots to deploying physical AI at scale so that it becomes a driver of sustainable growth for their organization. 

Piloting physical AI involves identifying the workflows where embedded intelligence can drive immediate gains — whether that’s streamlining supply chains, enhancing workforce productivity or enabling entirely new services. Scaling is a tougher ask because it involves substantial investment in infrastructure, data collection and management, and workforce transformation to build on the outcomes of a successful pilot.

Without a clear strategy, even the most promising physical AI deployment may stall or fail to realize its potential. For that reason, EY teams have rolled out several internal physical AI projects, in collaboration with NVIDIA, to navigate the risks and develop a blueprint for success.  

Building a strong data framework 

Just like other AI systems, physical AI tools need access to high-quality, secure and accessible data. Without it, a physical AI system is incapable of performing well. Businesses must have appropriate data for the system to use, supported by cybersecurity and governance processes that protect the integrity and quality of that data. 

AI-ready data is the foundation for deploying physical AI at scale — so leaders should ensure that data is high-quality, contextually appropriate, formatted correctly and well-governed. When this foundation is in place, physical AI can perform tasks more safely and effectively, allowing businesses to capitalize on the benefits of the technologies while mitigating the associated risks.

Navigating increasing complexities

The transition from the digital world to the physical world comes with new rules and regulations for businesses that adopt AI technologies. What’s more, adhering to these rules is not a trivial undertaking. Compliance can’t be overlooked. 

As leaders prepare to integrate robots into their processes, they must consider human privacy rights. Also, what do safety and security procedures look like in this new environment? And are there additional insurance requirements that need to be in place before the robots can become operational? If neglected, each of these considerations has the power to unravel a physical AI deployment. That’s why a comprehensive risk management strategy is critical to success. 

The good news is that businesses don’t need to develop this strategy on their own. By working with partners who are well versed in change management, leaders can tap into additional resources and gain the knowledge they need to successfully navigate the new challenges posed by large-scale adoption of physical AI. 

Enhancing your people’s capabilities

Integrating AI into business operations has already sparked a talent challenge, with both the current and the future workforce being tasked with learning how to use the technology effectively. Adding a physical element to the AI equation further deepens this challenge, since it requires the workforce to develop additional skills. It also raises difficult questions around how roles might need to adapt and change. 

The truth is, the real value of physical AI technology is its ability to enhance human capabilities. When leaders give their teams the knowledge and resources to understand physical AI, they enable them to collaborate directly with the technology and better execute intricate tasks. So, it is essential to develop and update training modules so that people can safely and effectively add these tools to their toolbox, opening the door to the continued exploration of emerging technologies. 

As physical AI applications are increasingly deployed at scale, humans must remain in the loop. A certain level of trust is needed for the operation of heavy machinery or even medical diagnostics — one that can only come from the responsible oversight and governance provided by a human.  

Setting the pace of innovation 

Physical AI is more than just the next stage of automation; it represents a strategic shift in how companies create value. Streamlining tasks and creating efficiency through the integration of robotics and AI can bring huge benefits to businesses. Hence, organizations that act now won’t just adapt to the future of work — they will define it.

The blueprint for success is clear: identify priority use cases, test and learn quickly, and build governance frameworks that balance innovation with accountability. For C-suites ready to lead the charge, physical AI offers not just efficiency, but the chance to set the pace of global innovation and shape the future for the better. 

The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




Analyst sees Disney/OpenAI deal as a dividing line in entertainment history


Disney’s expansive $1 billion licensing agreement with OpenAI is a sign Hollywood is serious about adapting entertainment to the age of artificial intelligence (AI), marking the start of what one Ark Invest analyst describes as a “pre‑ and post‑AI” era for entertainment content. The deal, which allows OpenAI’s Sora video model to use Disney characters and franchises, instantly turns a century of carefully guarded intellectual property (IP) into raw material for a new kind of crowd‑sourced, AI‑assisted creativity.​

Nicholas Grous, director of research for consumer internet and fintech at Ark Invest, told Fortune tools like Sora effectively recreate the “YouTube moment” for video production, handing professional‑grade creation capabilities to anyone with a prompt instead of a studio budget. In his view, that shift will flood the market with AI‑generated clips and series, making it far harder for any single new creator or franchise to break out than it was in the early social‑video era. His remarks echoed the analysis of Melissa Otto, head of research at S&P Global Visible Alpha, who recently told Fortune Netflix’s big move for Warner Bros. reveals the streaming giant is motivated by a need to deepen its war chest as it sees Google’s AI-video capabilities exploding with the onset of TPU chips.

As low‑cost synthetic video proliferates, Grous said he believes audiences will begin to mentally divide entertainment into “pre‑AI” and “post‑AI” categories, attaching a premium to work made largely by humans before generative tools became ubiquitous. “I think you’re going to have basically a split between pre-AI content and post-AI content,” he said, adding that viewers will consider pre-AI content closer to “true art, that was made with just human ingenuity and creativity, not this AI slop, for lack of a better word.”

Disney’s IP as AI fuel

Within that framework, Grous argued Disney’s real advantage is not just Sora access, but the depth of its pre‑AI catalog across animation, live‑action films, and television. Iconic franchises like Star Wars, classic princess films and legacy animated characters become building blocks for a global experiment in AI‑assisted storytelling, with fans effectively test‑marketing new scenarios at scale.​

“I actually think, and this might be counterintuitive, that the pre-AI content that existed, the Harry Potter, the Star Wars, all of the content that we’ve grown up with … that actually becomes incrementally more valuable to the entertainment landscape,” Grous said. On the one hand, he said, there are deals like Disney and OpenAI’s where IP can become user-generated content, but on the other, IP represents a robust content pipeline for future shows, movies, and the like.

Grous sketched a feedback loop in which Disney can watch what AI‑generated character combinations or story setups resonate online, then selectively “pull up” the most promising concepts into professionally produced, higher‑budget projects for Disney+ or theatrical release. From Disney’s perspective, he added, “we didn’t know Cinderella walking down Broadway and interacting with these types of characters, whatever it may be, was something that our audience would be interested in.” The OpenAI deal is exciting because Disney can bring that content onto its streaming arm Disney+ and make it more premium. “We’re going to use our studio chops to build this into something that’s a bit more luxury than what just an individual can create.”

Grous agreed the emerging market for pre‑AI film and TV libraries is similar to what’s happened in the music business, where legacy catalogs from artists like Bruce Springsteen and Bob Dylan have fetched huge sums from buyers betting on long‑term streaming and licensing value.

The big Netflix-Warner deal

For streaming rivals, the Disney-OpenAI pact is a strategic warning shot. Grous argued the soaring price tags in the bidding war for Warner Bros. between Netflix and Paramount show the importance of IP for the next phase of entertainment. “​I think the reason this bidding [for Warner Bros.] is approaching $100 billion-plus is the content library and the potential to do a Disney-OpenAI type of deal.” In other words, whoever controls Batman and the like will control the inevitable AI-generated versions of those characters, although “they could take a franchise like Harry Potter and then just create slop around it.”

Netflix has a great track record on monetizing libraries, Grous said, listing the example of how the defunct USA dramedy Suits surged in popularity once it landed on Netflix, proving extensive back catalogs can be revived and re‑monetized when matched with modern distribution.​

Grous cited Nintendo and Pokémon as examples of under‑monetized franchises that could see similar upside if their owners strike Sora‑style deals to bring characters more deeply into mobile and social environments.​ “That’s another company where you go, ‘Oh my god, the franchises they have, if they’re able to bring it into this new age that we’re all experiencing, this is a home-run opportunity.’”

In that environment, the Ark analyst suggests Disney’s OpenAI deal is less of a one‑off licensing win than an early template for how legacy media owners might survive and thrive in an AI‑saturated market. The companies with rich pre‑AI catalogs and a willingness to experiment with new tools, he argued, will be best positioned to stand out amid the “AI slop” and turn nostalgia‑laden IP into enduring, flexible assets for the post‑AI age.​

Underlying all of this is a broader battle for attention that spans far beyond traditional studios and shows how the lines between tech and entertainment are getting even blurrier than when the gatecrashers from Silicon Valley first piled into streaming. Grous notes Netflix itself has long framed its competition as everything from TikTok and Instagram to Fortnite and “sleep,” a mindset that fits naturally with the coming wave of AI‑generated video and interactive experiences. (In 2017, Netflix co-founder Reed Hastings famously said “sleep” was one of the company’s biggest competitors, as it was busy pioneering the binge-watch.)

Grous also sounded a warning for the age of post-AI content: The binge-watch won’t feel as good anymore, and there will be some kind of backlash. As critics such as The New York Times‘ James Poniewozik increasingly note, streaming shows don’t seem to be as re-watchable as even recent hits from the golden age of cable TV, such as Mad Men. Grous said he sees a future where the endangered movie theater makes a comeback. “People are going to want to go outside and meet or go to the theater. Like, we’re not just going to want to be fed AI slop for 16 hours a day.”

Editor’s note: the author worked for Netflix from June 2024 through July 2025.




The race to an AI workforce faces one important trust gap: What happens when an agent goes rogue?


To err is human; to forgive, divine. But when it comes to autonomous AI “agents” that are taking on tasks previously handled by humans, what’s the margin for error? 

At Fortune’s recent Brainstorm AI event in San Francisco, an expert roundtable grappled with that question as insiders shared how their companies are approaching security and governance—an issue that now overshadows even more practical challenges such as data and compute power. Companies are in an arms race to parachute AI agents into their workflows that can tackle tasks autonomously and with little human supervision. But many are facing a fundamental paradox that is slowing adoption to a crawl: moving fast requires trust, and yet building trust takes a lot of time.

Dev Rishi, general manager for AI at Rubrik, joined the security company last summer following its acquisition of his deep learning AI startup Predibase. Afterward, he spent the next four months meeting with executives from 180 companies. He used those insights to divide agentic AI adoption into four phases, he told the Brainstorm AI audience. (To level set, agentic adoption refers to businesses implementing AI systems that work autonomously, rather than responding to prompts.) 

According to Rishi, the first phase is early experimentation, when companies are hard at work prototyping their agents and mapping the workflows they think agents could be integrated into. The second phase, said Rishi, is the trickiest: shifting agents from prototypes into formal production. The third phase involves scaling those autonomous agents across the entire company. The fourth and final stage—which no one Rishi spoke with had achieved—is autonomous AI.

Roughly half of the 180 companies were in the experimentation and prototyping phase, Rishi found, while 25% were hard at work formalizing their prototypes. Another 13% were scaling, and the remaining 12% hadn’t started any AI projects. However, Rishi projects a dramatic change ahead: In the next two years, those in the 50% bucket are anticipating that they will move into phase two, according to their roadmaps. 

“I think we’re going to see a lot of adoption very quickly,” Rishi told the audience. 

However, there’s a major risk holding companies back from going “fast and hard” when it comes to speeding up the implementation of AI agents in the workforce, he noted. That risk—and the No. 1 blocker to broader deployment of agents—is security and governance, he said. And because of that, companies are struggling to shift agents from being used for knowledge retrieval to being action-oriented.

“Our focus actually is to accelerate the AI transformation,” said Rishi. “I think the number one risk factor, the number one bottleneck to that, is risk [itself].”

Integrating agents into the workforce

Kathleen Peters, chief innovation officer at Experian, who leads product strategy, said the slowdown is due to companies not fully understanding the risks when AI agents overstep the guardrails they have put in place, or the failsafes needed when that happens.

“If something goes wrong, if there’s a hallucination, if there’s a power outage, what can we fall back to?” she asked. “It’s one of those things where some executives, depending on the industry, are wanting to understand ‘How do we feel safe?’”

Figuring out that piece will be different for every company and is likely to be particularly thorny for companies in highly regulated industries, she noted. Chandhu Nair, senior vice president in data, AI, and innovation at home improvement retailer Lowe’s, noted that it’s “fairly easy” to build agents, but people don’t understand what they are: Are they a digital employee? Is it a workforce? How will it be incorporated into the organizational fabric? 

“It’s almost like hiring a whole bunch of people without an HR function,” said Nair. “So we have a lot of agents, with no kind of ways to properly map them, and that’s been the focus.”

The company has been working through some of these questions, including who might be responsible if something goes wrong. “It’s hard to trace that back,” said Nair. 

Experian’s Peters predicted that the next few years will see a lot of those very questions hashed out in public even as conversations take place simultaneously behind closed doors in boardrooms and among senior compliance and strategy committees. 

“I actually think something bad is going to happen,” Peters said. “There are going to be breaches. There are going to be agents that go rogue in unexpected ways. And those are going to make for very interesting headlines in the news.”

Big blowups will generate a lot of attention, Peters continued, and reputational risk will be on the line. That will force the issue of uncomfortable conversations about where liabilities reside regarding software and agents, and it will all likely add up to increased regulation, she said. 

“I think that’s going to be part of our societal overall change management in thinking about these new ways of working,” Peters said.

Still, there are concrete examples as to how AI can benefit companies when it is implemented in ways that resonate with employees and customers. 

Nair said Lowe’s has seen strong adoption and “tangible” return on investment from the AI it has embedded into the company’s operations thus far. For instance, each of its 250,000 store associates has an agent companion with extensive product knowledge across its 100,000-square-foot stores, which sell everything from electrical equipment to paints to plumbing supplies. A lot of the newer entrants to the Lowe’s workforce aren’t tradespeople, said Nair, and the agent companions have become the “fastest-adopted technology” so far.

“It was important to get the use cases right that really resonate back with the customer,” he said. In terms of driving change management in stores, “if the product is good and can add value, the adoption just goes through the roof.”

Who’s watching the agent?

But for those who work at headquarters, the change management techniques have to be different, he added, which piles on the complexity. 

And many enterprises are stuck at another early-stage question, which is whether they should build their own agents or rely on the AI capabilities developed by major software vendors. 

Rakesh Jain, executive director for cloud and AI engineering at healthcare system Mass General Brigham, said his organization is taking a wait-and-see approach. With major platforms like Salesforce, Workday, and ServiceNow building their own agents, it could create redundancies if his organization builds its own agents at the same time. 

“If there are gaps, then we want to build our own agents,” said Jain. “Otherwise, we would rely on buying the agents that the product vendors are building.”

In healthcare, Jain said there’s a critical need for human oversight given the high stakes. 

“The patient complexity cannot be determined through algorithms,” he said. “There has to be a human involved in it.” In his experience, agents can accelerate decision making, but humans have to make the final judgment, with doctors validating everything before any action is taken. 

Still, Jain also sees enormous potential upside as the technology matures. In radiology, for example, an agent trained on the expertise of multiple doctors could catch tumors in dense tissue that a single radiologist might miss. But even with agents trained on multiple doctors, “you still have to have a human judgment in there,” said Jain. 

And the threat of overreach by an agent that is supposed to be a trusted entity is ever present. He compared a rogue agent to an autoimmune disease, which is one of the most difficult conditions for doctors to diagnose and treat because the threat is internal. If an agent inside a system “becomes corrupt,” he said, “it’s going to cause massive damages which people have not been able to really quantify.”

Despite the open questions and looming challenges, Rishi said there’s a path forward. He identified two requirements for building trust in agents. First, companies need systems that provide confidence that agents are operating within policy guardrails. Second, they need clear policies and procedures—a policy with teeth—for when things inevitably go wrong. Nair added three factors for building trust and moving forward smartly: identity and accountability, meaning knowing who the agent is; the consistency of each agent’s output quality; and a post-mortem trail that can explain why and when mistakes occurred.

“Systems can make mistakes, just like humans can as well,” said Nair. “But to be able to explain and recover is equally important.”




Highlights from Fortune Brainstorm AI San Francisco


Hello and welcome to Eye on AI. In this edition….Insights from Fortune Brainstorm AI San Francisco…Disney invests $1 billion in OpenAI and licenses its IP to the company…OpenAI debuts GPT-5.2 in effort to silence concerns it’s trailing rivals…Oracle stock takes a tumble.

Hi, it’s Jeremy here. I’m still buzzing from Fortune Brainstorm AI San Francisco, which took place earlier this week. We had a fabulous lineup including Brad Lightcap, OpenAI’s chief operating officer, Google Cloud CEO Thomas Kurian, Intuit CEO Sasan Goodarzi, Exelon CEO Calvin Butler, Databricks CEO Ali Ghodsi, Rivian CEO RJ Scaringe, Insitro CEO Daphne Koller, and many more. We also had a thoughtful conversation on AI’s impacts with actor, director, and increasingly AI thought leader Joseph Gordon-Levitt, as well as a scream of a session with actor, comedian and AI CEO Natasha Lyonne. Today, Sharon Goldman, Bea Nolan, and I are going to share a few highlights and personal impressions.

For me, there was a notable vibe this year that a lot of companies are substantially further along in implementing AI across their organizations, including using AI agents in some limited, but important, capacities. Many audience questions, especially in some of the breakout sessions, were around governance and orchestration methods for an increasingly hybrid workforce where AI agents will be completing tasks alongside employees.

Still, it was striking to hear Butler, the Exelon CEO, say that his company is moving cautiously. When the consequence of getting something wrong is literally lights out, security and reliability have to take precedence over everything else. And so Butler said he was happy not to be a “first mover” but instead a “fast follower” when it came to AI implementations. Let other people take the hit and learn from their mistakes, seems to be his view.

And this wasn’t the only place where speakers were seeking to tamp down hype. It was refreshing to hear Michael Truell, the cofounder and CEO of hit coding assistant Cursor, tell me that he didn’t think software engineering would ever be fully automated in the way that OpenAI CEO Sam Altman sometimes talks about. Instead, Truell said that while the amount of time that coders spent on “compilation” of code would continue to shrink, he saw a continued need for humans to make design decisions around “how should the software work.”

Similarly, Vidya Peters, from DataSnipper, said she thought there would still be a role for qualified accountants within finance organizations, even if they were increasingly being assisted with AI tools such as the one her company makes. She also said she thought that applications geared specifically for a particular industry or job—especially in regulated industries—would continue to win out over more general purpose AI models, even as the big AI companies are increasingly targeting specific professional use cases for their general purpose models.

A panel that Sharon moderated on the “new geography of data centers” was fascinating. The message was that right now, data centers are going where the power is. But increasingly data centers are going to be looking to build their own power on site and possibly even become net contributors to the grid. And Jason Eichenholz, the CEO of Relativity Networks, said that as AI inference workloads come to eclipse AI training workloads, there will be an increasing need to bring data centers close to major population centers, but that most cities in the U.S. are power constrained. How are we going to get these urban centers the tokens they need at the speed at which they need them? That’s anyone’s guess right now, Eichenholz says—although his company builds the fast fiber that will carry those tokens from the data centers to end users.

Finally, I enjoyed hearing Dayle Stevens from Telstra explain why her company chose to form a joint venture with Accenture to deliver its AI strategy, rather than simply hiring the consulting firm under a traditional service contract. Stevens said the joint venture has enabled the company to move much faster than it would have otherwise and to tap expertise, including starting an AI innovation hub in Silicon Valley, that would have been hard to access otherwise.

The future of enterprise AI is hybrid

Now, here are Sharon’s takeaways: In my mainstage session with PayPal global head of AI Prakhar Mehrotra and Marc Hamilton, VP of solutions architecture and engineering at Nvidia, both discussed the increasing power of open source AI models to allow enterprise companies to control their data and fine-tune for specific use cases. But both agreed that the future of enterprise AI will be hybrid, with enterprises typically using both open models and proprietary model APIs.

There was plenty of time for philosophizing, as well: at one dinner, I chatted with delegates from The Clorox Company, Workday and other companies about everything from what jobs were future-proof (I suggested dog walkers were safe from AI) to what AI would really mean for the future of today’s children (the bottom line: they still need to learn to think for themselves!).

My favorite panel was one I moderated with a half-dozen leaders and stakeholders in the world of AI data centers, including Andy Hock from Cerebras, Matt Field from Crusoe, and former OpenAI infrastructure policy leader Lane Dilg. We dug into how the line is blurring between power infrastructure and data centers, with billions in capital and gigawatts of power at play. My biggest takeaway was that the AI data center issue is local, local, local. Every community and local government will be dealing with its own specific issues and compromises around issues such as land, energy, and water—and what works for one area might not work for another.

People and culture are paramount

And here is what Bea had to say about this year’s Brainstorm AI San Francisco:

Most enterprises are still trying to figure out the best way to adopt AI, but leaders this year were also keen to emphasize that choosing the right tools is only part of the equation. Companies also need to ensure that both their employees and their org charts are ready for the shift—otherwise, even the most advanced AI pilots are likely to fail.

As Accenture’s Chief Responsible AI Officer Arnab Chakraborty put it: “Don’t just think about technology—think about people and the culture. It is so paramount.”

Or take Open Machine CEO Allie K. Miller’s advice and don’t call AI a tool at all: “Calling it a tool ends up being a little bit of borderline self-limiting behavior that is holding enterprise all around the world behind.”

I also moderated a panel of healthcare experts, which brought together a mix of clinicians who see patients every day and tech leaders building and deploying healthtech tools at scale. In healthcare, the industry is generally feeling good about clinician-facing AI, but it’s still wrestling with what it means to safely deploy patient-facing agents.

The panelists discussed, among other things, what it means to be moving toward a future where patients and clinicians consult the same AI before they consult each other.

The excitement is running high on the corporate side, but not that much has really changed in the examination room—at least according to Gurpreet Dhaliwal, a clinician-educator and Professor of Medicine at the University of California. Whether it’s with Dr. Google, Dr. ChatGPT, or just a neighbor with some strong beliefs about antibiotics, Dhaliwal said patients have always arrived with a second opinion in their back pocket. While AI is poised to be a revolutionary force for healthcare—especially in fringe cases such as rare diseases—it’s yet to fundamentally change the dynamic between patients and their physicians.

With that, here’s the rest of the AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

Google DeepMind agrees to sweeping partnership with U.K. government focused on science and clean energy—by Jeremy Kahn

Hinge’s founder and CEO is stepping down to start a new AI-first dating app—by Marco Quiroz-Gutierrez

Cursor has growing revenue and a $29 billion valuation—but CEO Michael Truell isn’t thinking about an IPO—by Beatrice Nolan

AI IN THE NEWS

Disney invests $1 billion in OpenAI, brings characters to OpenAI apps. The home of Mickey Mouse is investing $1 billion in OpenAI and, under a three-year licensing deal, will let users generate short, prompt-driven videos in OpenAI’s Sora app using more than 200 Disney, Marvel, Star Wars, and Pixar characters. OpenAI is supposed to create guardrails to prevent users from creating videos or images that might reflect poorly on the Disney brand. The partnership was struck after nearly two years of talks. Meanwhile, Disney simultaneously sent a cease-and-desist letter to Google accusing it of large-scale copyright infringement tied to AI outputs featuring Disney characters. You can read more from The Wall Street Journal here.

OpenAI debuts GPT-5.2 model, answering concerns it was trailing competitors. The company launched a new AI model that, according to evaluations OpenAI conducted, delivers state-of-the-art performance across a wide range of tasks, including coding, mathematical reasoning, and “knowledge work.” The model showed significant improvement over GPT-5.1, which OpenAI released only a month ago, and bested Google’s and Anthropic’s new models. The release of Google’s Gemini 3 Pro in late November prompted OpenAI CEO Sam Altman to declare a “code red” to refocus the company on improving ChatGPT. But OpenAI executives said the release of GPT-5.2 had been in the works for months and that its debut was not related to the “code red.” OpenAI said GPT-5.2 also improves safety, particularly around mental health–related responses. You can read more from Jeremy here.

New lawsuit claims ChatGPT contributed to murder-suicide in Connecticut. A wrongful-death lawsuit was filed against OpenAI and Microsoft after a 56-year-old Connecticut man, Stein-Erik Soelberg, killed his 83-year-old mother and then himself following months of increasingly delusional conversations with ChatGPT. His family says the chatbot reinforced and contributed to his mental illness. OpenAI has expressed condolences and pointed to ongoing improvements to ChatGPT’s ability to recognize and respond to users in distress. You can read more from The Wall Street Journal here.

Microsoft says health queries are the most frequent use of its Copilot AI by consumers. Microsoft analyzed 37.5 million anonymized Copilot conversations from January through September 2025 to understand how people use the AI assistant in daily life. The study found that health-related questions dominated mobile usage, while topics and usage patterns varied significantly by device, time of day, and context. Beyond information search, users increasingly turned to Copilot for advice on personal topics, showing its role as a companion in both work and life moments. You can read Microsoft’s blog on the findings here.

Meta and ElevenLabs sign a new partnership to provide voiceovers for Reels. Meta has partnered with London-based voice AI company ElevenLabs to integrate AI-powered audio capabilities across Instagram and Horizon. This partnership will enable new features such as the ability to dub Reels into local languages as well as to generate character voices. You can read more in The Economic Times here.

AI CALENDAR

Jan. 7-10: Consumer Electronics Show, Las Vegas. 

March 12-18: SXSW, Austin. 

March 16-19: Nvidia GTC, San Jose. 

April 6-9: HumanX, San Francisco. 

EYE ON AI NUMBERS

$34 billion

That’s the one-day paper loss Oracle founder and chairman Larry Ellison suffered Thursday after his company’s shares were pummeled by investors increasingly concerned with the amount Oracle is spending to build data centers for OpenAI. Oracle’s quarterly capital expenditures for the last quarter came in above analyst expectations and in fact exceeded the amount of cash the company generated in the quarter. “It’s like the poster child of the AI bear case,” Jay Hatfield, chief executive of Infrastructure Capital Advisors, told the Wall Street Journal.

Join us at the Fortune Workplace Innovation Summit May 19–20, 2026, in Atlanta. The next era of workplace innovation is here—and the old playbook is being rewritten. At this exclusive, high-energy event, the world’s most innovative leaders will convene to explore how AI, humanity, and strategy converge to redefine, again, the future of work. Register now.


