
Bob Iger says Disney’s $1 billion deal with OpenAI is an ‘opportunity, not a threat’: ‘We’d rather participate than be disrupted by it’




Disney is investing $1 billion in OpenAI and is giving the go-ahead for its iconic characters like Mickey Mouse to be used in the AI short-form video app Sora.

The two companies announced a three-year deal that would bring more than 200 characters to Sora, with a period of exclusivity covering part of the deal’s duration.

Disney CEO Bob Iger painted the team-up as Disney taking the next step in content with the newest technology, and waved away concerns about whether the deal represents a threat to human creators.

“We’ve always viewed technological advances as opportunity, not threat,” Iger said. 

“It’s going to happen regardless, and we’d rather participate in the rather dramatic growth, rather than just watching it happen and essentially being disrupted by it,” he later added.

Iger also noted in an interview with CNBC that as part of the deal, Disney characters can be used in Sora videos, but it does not include rights to likeness or voices. 

“OpenAI is putting guardrails essentially around how these are used, so that really there’s nothing to be concerned about from a consumer perspective,” he said. “This will be a safe environment and a safe way for consumers to engage with our characters in a new way.”

Iger said the company would also feature some user-generated AI content from Sora on the Disney+ platform, which he said would be a great way to increase engagement with younger users.

Disney will receive warrants to buy additional equity in OpenAI as part of the deal, and Iger said there would be future opportunities for the company to become an OpenAI customer, including licensing from OpenAI.

OpenAI began opening up Sora to more users last year, and in September it launched Sora 2, an upgraded version of the video generator catered more toward mobile. Controversy followed the release because of the app’s ability to create convincing and realistic videos of people. In October, OpenAI paused AI-generated deepfake videos that featured civil rights leader Martin Luther King Jr. after his daughter, Bernice A. King, complained they were being used in a “demeaning, disjointed” way.

Thursday’s deal also comes after Disney sent a cease-and-desist letter to Google for allegedly using its intellectual property to train its AI models and in its services without permission. The company has previously sent similar letters to other companies like Character.ai. Iger told CNBC that Character.ai corrected the issue shortly after and noted that with Google, “the ball is in their court,” and Disney would wait to see how they react to the claim.

Altman, for his part, said Sora users have longed to use Disney characters in their videos, and he hoped adding them to the platform could “unleash a sort of whole new way that people use this technology.”

“We have underestimated the amount of latent creativity in the world,” said Altman. “But if you lower the effort, skill, time required to create new things people very quickly are able to bring ideas to life.”




OpenAI debuts GPT-5.2 in effort to silence concerns it is falling behind its rivals




OpenAI, under increasing competitive pressure from Google and Anthropic, has debuted a new AI model, GPT-5.2, that it says beats all existing models by a substantial margin across a wide range of tasks.

The new model, which is being released less than a month after OpenAI debuted its predecessor, GPT-5.1, performed particularly well on a benchmark of complicated professional tasks across a range of “knowledge work”—from law to accounting to finance—as well as on evaluations involving coding and mathematical reasoning, according to data OpenAI released.

Fidji Simo, the former Instacart CEO who now serves as OpenAI’s CEO of applications, told reporters that the model should not be seen as a direct response to Google’s Gemini 3 Pro AI model, which was released last month. That release prompted OpenAI CEO Sam Altman to issue a “code red,” delaying the rollout of several initiatives in order to focus more staff and computing resources on improving its core product, ChatGPT.

“I would say that [the Code Red] helps with the release of this model, but that’s not the reason it is coming out this week in particular, it has been in the works for a while,” she said.

She said the company had been building GPT-5.2 “for many months.” “We don’t turn around these models in just a week. It’s the result of a lot of work,” she said. The model had been known internally by the code name “Garlic,” according to a story in The Information. The day before the model’s release Altman teased its imminent rollout by posting to social media a video clip of him cooking a dish with a large amount of garlic.

OpenAI executives said that the model had been in the hands of “Alpha customers” who helped test its performance for “several weeks”—a time period that would mean the model was completed prior to Altman’s “code red” declaration.

These testers included legal AI startup Harvey, note-taking app Notion, and file-management software company Box, as well as Shopify and Zoom.

OpenAI said these customers found GPT-5.2 demonstrated a “state of the art” ability to use other software tools to complete tasks, as well as excelling at writing and debugging code.

Coding has become one of the most competitive use cases for AI model deployment within companies. Although OpenAI had an early lead in the space, Anthropic’s Claude model has proved especially popular among enterprises, exceeding OpenAI’s market share, according to some figures. OpenAI is no doubt hoping GPT-5.2 will convince customers to turn back to its models for coding.

Simo said the “Code Red” was helping OpenAI focus on improving ChatGPT. “Code Red is really a signal to the company that we want to marshal resources in one particular area, and that’s a way to really define priorities and define things that can be deprioritized,” she said. “So we have had an increase in resources focused on ChatGPT in general.”

The company also said its new model is better than the company’s earlier ones at providing “safe completions”—which it defines as providing users with helpful answers while not saying things that might contribute to or worsen mental health crises.

“On the safety side, as you saw through the benchmarks, we are improving on pretty much every dimension of safety, whether that’s self harm, whether that’s different types of mental health, whether that’s emotional reliance,” Simo said. “We’re very proud of the work that we’re doing here. It is a top priority for us, and we only release models when we’re confident that the safety protocols have been followed, and we feel proud of our work.”

The release of the new model came on the same day a new lawsuit was filed against the company alleging that ChatGPT’s interactions with a psychologically troubled user had contributed to a murder-suicide in Connecticut. The company also faces several other lawsuits alleging ChatGPT contributed to people’s suicides. The company called the Connecticut murder-suicide “incredibly heartbreaking” and said it is continuing to improve “ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.” 

GPT-5.2 showed a large jump in performance across several benchmark tests of interest to enterprise customers. It met or exceeded human expert performance on a wide range of difficult professional tasks, as measured by OpenAI’s GDPval benchmark, 70.9% of the time. That compares to just 38.8% of the time for GPT-5, a model that OpenAI released in August; 59.6% for Anthropic’s Claude Opus 4.5; and 53.3% for Google’s Gemini 3 Pro.

On the software development benchmark, SWE-Bench Pro, GPT-5.2 scored 55.6%, which was almost 5 percentage points better than its predecessor, GPT-5.1, and more than 12% better than Gemini 3 Pro.

OpenAI’s Aidan Clark, vice president of research (training), declined to answer questions about exactly what training methods had been used to upgrade GPT-5.2’s performance, although he said that the company had made improvements across the board, including in pretraining, the initial step in creating an AI model.

When Google released its Gemini 3 Pro model last month, its researchers also said the company had made improvements in pretraining as well as post-training. This surprised some in the field who believed that AI companies had largely exhausted the ability to wring substantial improvements out of the pretraining stage of model building, and it was speculated that OpenAI may have been caught off guard by Google’s progress in this area.




OpenAI and Disney just ended the ‘war’ between AI and Hollywood with their $1 billion Sora deal




Disney’s $1 billion investment in OpenAI, announced Thursday morning—and its decision to let more than 200 Disney, Pixar, Marvel, and Star Wars characters appear inside the Sora video generator—is more than a licensing deal. According to copyright and AI law expert Matthew Sag, who teaches at Emory University’s law school, the deal marks a strategic realignment that could reshape how Hollywood protects its IP in the face of AI-generated content that threatens to leech off its legally protected magic.

“AI companies are either in a position where they need to aggressively filter user prompts and model outputs to make sure that they don’t accidentally show Darth Vader, or strike deals with the rights holders to get permission to make videos and images of Darth Vader,” Sag told Fortune. “The licensing strategy is much more of a win-win.” 

The three-year agreement gives OpenAI the right to ingest hundreds of Disney-owned characters into Sora and ChatGPT Image. Disney will also receive equity warrants and become a major OpenAI customer, while deploying ChatGPT internally.

Sag said the deal itself will be a kind of “revenue-sharing.”

“OpenAI hasn’t figured out the revenue model,” Sag said. “So I think making this just an investment deal, in some ways, simplifies it. For Disney … [OpenAI] will figure out a way to make this profitable at some point, and [Disney will] get a cut of that.”

Why this deal matters: the ‘Snoopy problem’

For more than a year, the biggest legal threat to large-scale generative AI has centered on what Sag calls the “Snoopy problem”: It is extremely difficult to train powerful generative models without some degree of memorization, and copyrightable characters are uniquely vulnerable because copyright protects them in the abstract.

Sag was careful to outline a key distinction. AI companies aren’t licensing the right to train on copyrighted works; they’re licensing the right to create outputs that would otherwise be infringing.

That’s because the case for AI companies training their models on unlicensed content is “very strong,” Sag said. Two recent court rulings involving Anthropic and Meta have strengthened those arguments.  

The real stumbling block, Sag said, has always been outputs, not training. If a model can accidentally produce a frame that looks too much like Darth Vader, Homer Simpson, Snoopy, or Elsa, the fair use defense begins to fray.

“If you do get too much memorization, if that memorization finds its way into outputs, then your fair-use case begins to just crumble,” Sag said.

While it’s impossible to license enough text to train an LLM (“that would take a billion” deals, Sag said), it is possible to build image or video models entirely from licensed data if you have the right partners. This is why deals like Disney’s are crucial: They turn previously illegal outputs into legal ones, irrespective of whether the training process itself qualifies as fair use.

“The limiting principle is going to be essentially about whether—in their everyday operation—these models reproduce substantial portions of works from their training data,” Sag said.

The deal, Sag says, is also a hedge against Hollywood’s lawsuits. The announcement is “very bad” for Midjourney, which Disney is suing for copyright infringement, because it holds up OpenAI’s licensing deal as the “responsible” benchmark for AI firms.

This is also a signal about the future of AI data

Beyond copyright risk, the deal exposes another trend: the drying up of high-quality, unlicensed data on the public internet.

In a blog post, Sag wrote: “The low-hanging fruit of the public internet has been picked. To get better, companies like OpenAI are going to need access to data that no one else has. Google has YouTube; OpenAI now has the Magic Kingdom.”

This is the core of what he calls the “data scarcity thesis.” OpenAI’s next leap in model quality may require exclusive content partnerships, as opposed to more scraping. 

“By entangling itself with the world’s premier IP holder, OpenAI makes itself indispensable to the very industry that threatened to sue it out of existence,” Sag wrote. 

AI and Hollywood have spent three years locked in a cold war over training data, likeness rights and infringement. With Disney’s $1 billion investment, that era appears to be ending.

“This is the template for the future,” Sag wrote. “We are moving away from total war between AI and content, toward a negotiated partition of the world.”




RFK Jr. and Sean Duffy had a pull-up competition to announce a $1B plan for healthy airport upgrades




As if dragging a three-wheeled carry-on across the mileage of an international airport isn’t enough, the government wants you panting before your flight. This week, Transportation Secretary Sean Duffy and Health Secretary Robert F. Kennedy Jr. had a pull-up competition in the middle of Reagan National Airport’s Terminal 2 (not a metaphor) to announce the $1 billion in grants the administration plans to allocate toward healthy airport upgrades.

  • Officials were vague about what these upgrades could include, but mentioned projects like dedicated play areas for kids, more lactation pods, and mini-gyms for travelers.
  • The funding will come from former President Biden’s 2021 Infrastructure Investment and Jobs Act and is part of the current administration’s “Make Travel Family Friendly Again” initiative.

But…over 30 major US airport hubs already have children’s play areas, and most airports have been required to provide private lactation areas since FY2021. And 68% of passengers said their top priority for air travel changes is lower prices, according to a 2025 Ipsos poll for Airlines for America.

Big picture: The administration has been pushing initiatives to make flying more pleasant. Last month, Duffy encouraged travelers to dress up for flights and act right, which some travelers responded to by…wearing pajamas to the airport to troll the secretary.—MM

This report was originally published by Morning Brew.



