Boeing’s plea deal over 737 Max jet crashes was rejected over DEI policies. Now the judge has set a June trial date

A federal judge in Texas has set a June trial date for the U.S. government’s years-old conspiracy case against Boeing for misleading regulators about the 737 Max jetliner before two of the planes crashed, killing 346 people.

U.S. District Judge Reed O’Connor did not explain in the scheduling order he issued on Tuesday why he decided to set the case for trial. Lawyers for the aerospace company and the Justice Department have spent months trying to renegotiate a July 2024 plea agreement that called for Boeing to plead guilty to a single felony charge.

The judge rejected that deal in December, saying that diversity, equity and inclusion policies the Justice Department had in place at the time might influence the selection of a monitor to oversee the company’s compliance with the terms of its proposed sentence.

Since then, O’Connor had three times extended the deadline for the two sides to report how they planned to proceed. His most recent extension, granted earlier this month, gave them until April 11 to “confer on a potential resolution of this case short of trial.”

The judge revoked the remaining time with his Tuesday order, which laid out a timeline for proceedings leading up to a June 23 trial in Fort Worth.

The Department of Justice declined to comment on the judge’s action. A Boeing statement shed no light on the status of the negotiations.

“As stated in the parties’ recent filings, Boeing and the Department of Justice continue to be engaged in good faith discussions regarding an appropriate resolution of this matter,” the company said.

The deal the judge refused to approve would have averted a criminal trial by allowing Boeing to plead guilty to conspiring to defraud Federal Aviation Administration regulators who approved minimal pilot-training requirements for the 737 Max nearly a decade ago. More intensive training in flight simulators would have increased the cost for airlines to operate the then-new plane model.

The development and certification of what has become Boeing’s bestselling airliner became an intense focus of safety investigators after two Max planes crashed less than five months apart in 2018 and 2019. Many relatives of passengers who died off the coast of Indonesia and in Ethiopia have pushed for the prosecution of former Boeing officials, a public criminal trial and more severe financial punishment for the company.

In response to criticism of last year’s plea deal from victims’ families, prosecutors said they did not have evidence to argue that Boeing’s deception played a role in the crashes. Prosecutors told O’Connor the conspiracy to commit fraud charge was the toughest they could prove against Boeing.

O’Connor did not object in his December ruling against the plea agreement to the sentence Boeing would have faced: a fine of up to $487.2 million with credit given for $243.6 million in previously paid penalties; a requirement to invest $455 million in compliance and safety programs; and outside oversight during three years of probation.

Instead, the judge focused his negative assessment on the process for selecting an outsider to keep an eye on Boeing’s actions to prevent fraud. He expressed particular concern that the agreement “requires the parties to consider race when hiring the independent monitor … ‘in keeping with the (Justice) Department’s commitment to diversity and inclusion.’”

“In a case of this magnitude, it is in the utmost interest of justice that the public is confident this monitor selection is done based solely on competency. The parties’ DEI efforts only serve to undermine this confidence in the government and Boeing’s ethics and anti-fraud efforts,” O’Connor wrote.

An executive order President Donald Trump signed during the first week of his second term sought to end diversity, equity and inclusion programs across the federal government. Trump’s move may render the judge’s concerns moot, depending on the outcome of legal challenges to his order.

Trump’s return to office also means the Justice Department’s leadership has changed since federal prosecutors decided last year to pursue the case against Boeing.

Boeing agreed to the plea deal only after the Justice Department determined last year that the company violated a 2021 agreement that had protected it against criminal prosecution on the same fraud-conspiracy charge.

Government officials started reexamining the case after a door plug panel blew off an Alaska Airlines 737 Max during flight in January 2024. That incident renewed concerns about manufacturing quality and safety at Boeing, and put the company under intense scrutiny by regulators and lawmakers.

Boeing lawyers said last year that if the plea deal were rejected, the company would challenge the Justice Department’s finding that it breached the deferred-prosecution agreement. O’Connor helped Boeing’s position by writing in his December decision that it was not clear what the company did to violate the 2021 deal.

This story was originally featured on Fortune.com



Why OpenAI caved to open source on the same day as its $300 billion flex (hint: it’s not just about DeepSeek)

To judge by his social feeds, OpenAI CEO Sam Altman is a very happy camper, as his company notches one eye-popping success after another. The startup he co-founded in 2015 just raised $40 billion at a $300 billion valuation, the biggest funding round ever by a private tech company; everyone on the internet seems to be posting Studio Ghibli-style images courtesy of OpenAI’s new GPT-4o image generation model; and ChatGPT now has 500 million weekly users, up from 400 million last month. 

And yet, along with all this good news, Altman revealed Monday that OpenAI is making what appears to be a pretty big about-face in its strategy: In several months, Altman said, OpenAI will be releasing an open source model. 

The move would mark the first time the company has released a model openly since the launch of GPT-2 in 2019, seemingly reversing the company’s shift to closed models in recent years. Granted, the forthcoming model will not be 100% open — as with other companies offering “open” AI models, including Meta and Mistral, OpenAI will offer no access to the data used to train the model. Still, the usage license would allow researchers, developers, and other users to access the underlying code and “weights” of the new model (which determine how the model processes information) to use, modify, or improve it. 

Why the turnaround?

On its surface, the direct cause of OpenAI’s open source embrace might appear to come from China: specifically, the emergence of startup DeepSeek, which flipped the AI script in favor of open source in January. But according to several AI industry insiders Fortune spoke to, a broader and more nuanced set of factors is also likely motivating Altman’s change of heart on open source. As AI technology makes its way into businesses, customers want the flexibility and transparency of open source models for many uses. And as the performance gap between OpenAI and its competitors narrows, it has become more difficult for OpenAI to justify its 100% closed approach, something Altman acknowledged in January when he admitted that DeepSeek had lessened OpenAI’s lead in AI and that OpenAI has been “on the wrong side of history” when it comes to open sourcing its technologies.

OpenAI needs a presence beyond the models

Naveen Rao, VP of artificial intelligence at Databricks, said OpenAI’s move is more an admission that the AI landscape is changing. Value is shifting away from the models themselves to the applications or systems organizations use to customize a model to their specific needs. While there are many situations where a company might want to use a state-of-the-art LLM, an open weights model would allow OpenAI to have a presence in scenarios where customers don’t want to use ChatGPT, for example, or the company’s developer API. A financial company might not want its customer data to leave its own infrastructure for an outside cloud, for instance, or a manufacturing business might want AI embedded in factory hardware that is not connected to the internet.

“Open source is not some curiosity, it’s a big part of AI usage,” he told me. “OpenAI wants to be a part of that through their brand and their models.” 

Rowan Curran, a senior analyst at Forrester Research focused on AI, agreed, saying that OpenAI’s return to open source speaks to AI’s increasingly diverse ecosystem, which ranges from OpenAI, Google, Anthropic and Amazon to Meta, China’s Alibaba and DeepSeek, France’s Mistral, Canada’s Cohere and Israel’s AI21 Labs.

He said many enterprise companies are excited about open-source AI models — not just because of how accurate they are or how well they answer questions, but because they’re flexible. The fact that they are portable is key, he explained — meaning they can run on different cloud platforms or even on a company’s own data center, workstation, laptop or robot, instead of being tied to one provider. 

Curran also explained that releasing an open model could make OpenAI’s own services more appealing to its enterprise customers. If OpenAI is building a project for a customer and needs to run some of the work inside the customer’s own data center, or on smaller models, it can’t do that with OpenAI models like 4o because those run on cloud-based servers. “That limits their ability to provide an end-to-end solution from the cloud all the way to the edge,” whether that is a laptop, a smartphone, a robot or a self-driving car, he said. Similar to what Google does with Gemini (its largest, closed model family) and Gemma (its smaller open models), OpenAI could have its own open solution without having to turn to third-party open source models.

A tricky balancing act

While Rao does not see an open source OpenAI model as a big reaction to the DeepSeek releases, the “DeepSeek moment” did show that Chinese startups are no longer behind in the AI race. 

“Many of us in the field already knew this,” he said. If OpenAI doesn’t target the open source community now, he added, “it will lose a lot of influence, goodwill and community innovation.” 

Previously, OpenAI had said that one reason it could not release open models is that Chinese firms would try to use its technology to improve their own models. In January, OpenAI released a statement saying “it is critically important that we are working closely with the U.S. government to best protect the most capable models from efforts by adversaries and competitors to take U.S. technology.” And in fact, while DeepSeek did not release the data it used to train its R1 model, there are indications that it may have used outputs from OpenAI’s o1 to kick-start the training of the model’s reasoning abilities.

As OpenAI now tacks towards open source again, it’s found itself trying to reconcile seemingly contradictory messages. Witness OpenAI Chief Global Affairs Officer Chris Lehane’s LinkedIn post on Monday: “For US-led democratic AI to prevail over CCP-led authoritarian AI, it’s becoming increasingly clear that we need to strike a balance between open and closed models. Open source puts powerful tools into the hands of developers around the world, expanding the reach of democratic AI principles and enabling innovators everywhere to solve hard problems and drive economic growth. Closed models incorporate important safeguards that protect America’s strategic advantage and prevent misuse.”

“They’re definitely talking out of both sides,” Rao said, describing OpenAI’s messaging as “it’s still really dangerous [to release open models] but we need to take advantage of the community that is building and has influence.”

There’s also a commercial balancing act for OpenAI: It can’t release an open model that competes with its own paid ones. To target AI developers with influence, Rao suggested OpenAI would release a model that is big, but not too big.

Throwing shade at Meta

If OpenAI’s strategic move to open source a model isn’t solely a reaction to DeepSeek, it may very well be about throwing shade at another big open source competitor: Meta is set to release the fourth iteration of its open source model family, Llama, at the end of this month. Llama has notably been released under an open license except for services with more than 700 million monthly active users, a restriction meant to keep companies like OpenAI from building on it.

“We will not do anything silly like saying that you can’t use our open model if your service has more than 700 million monthly active users,” Altman posted yesterday on X.

“Meta has become the standard bearer for open source AI, at least in the West,” said Rao. “If they want to wrestle away some influence in the ecosystem, they have to take on Meta.”

However, Forrester’s Curran said that, Altman’s vague comments aside, there is no reason to think OpenAI’s open source model will be any more transparent (in terms of data or training methods, for example) than any other commercial open release from Meta or Mistral.

“I expect it to be much more opaque and closed compared to other open models,” he said, “with significantly less transparency.” 

This story was originally featured on Fortune.com


