Business

Under Armour CEO says micromanagement is underrated: ‘There’s too much lost on pretense’

Under Armour CEO Kevin Plank told Graham Bensinger in a YouTube interview published Sept. 10 that he believes in micromanagement “at certain levels.”

“I think it’s totally underestimated,” said Plank, who founded Under Armour in 1996. “I think there’s too much lost on pretense or structure or process. Like, that’s great, but the right answer will save us a lot of time.”

Plank, a boomerang CEO who took a brief hiatus from the athletic-wear company from 2020 to 2024, said he believes in an 80-20 rule for management. His priority is to “get it right” by focusing on the correct solutions to problems while allowing creativity and flexibility to remain.

“We do need structure in place, but we also need to build in the fact that the market is not going to wait 18 months for all of our products,” he said. “And so we need the speed of market. We need to be able to get things to market in 12 months, nine months, six months. And that shouldn’t feel like a burden or wait.”

To achieve this, Plank said Under Armour plans for about 80% to 90% of the business to be set and structured, leaving the remaining 10% to 20% as time to “just be able to think a little bit.”

To be sure, Plank said he wants to have an “evolved personality” in which he models the behavior he expects from his employees like he does with his children, who are 21 and 18 years old. He said he prioritizes “modeling the behavior that I expect from my teammates to live by, my partners, or vendors, and other people. I hold them accountable and they hold me accountable, too.”

Other CEOs who were open micromanagers

A Journal of Management Research and Analysis study shows micromanaging—or intensely monitoring and controlling every aspect of an employee’s work—in some cases can lead to reduced autonomy and innovation, lower job satisfaction, and burnout. But the same study says this management style can also improve short-term productivity, skill upgrading, and company structure. 

One of the most famous CEOs repeatedly cited as a micromanager was Apple’s Steve Jobs. Jobs, who died in 2011 from pancreatic cancer, continues to be revered as one of the greatest leaders in business history, but he was open about his “no-bozos” policy in the workplace.

“He’s a corporate dictator who makes every critical decision—and oodles of seemingly noncritical calls too, from the design of the shuttle buses that ferry employees to and from San Francisco to what food will be served in the cafeteria,” Adam Lashinsky wrote in a Fortune article about Jobs published just about a month before his death.

Tesla CEO Elon Musk has also been cited as an extreme micromanager. A CNBC investigation, including interviews with 35 current and former direct reports to Musk, said his severe micromanagement tendencies “sometimes impaired his decision-making, leading him to approve expensive projects that failed and delayed production.”

Under Armour didn’t immediately respond to Fortune’s request for comment.



Business

Why Jerome Powell’s latest rate cut still won’t help you get a lower mortgage rate

For the third meeting in a row, the Federal Reserve cut interest rates in an effort to help a softening labor market—a “hawkish” cut, since it came alongside signals that further cuts are unlikely for a while. The quarter-point move brought the federal funds target range to 3.5% to 3.75%—but economists and housing experts warn that’s not going to affect mortgage rates in the way potential homebuyers were hoping for.

Chen Zhao, head of economics research at Redfin, wrote in a Wednesday post that the Fed’s December interest rate cut won’t move mortgage rates “because markets have already priced it in.” 

The Federal Reserve controls the federal funds rate, the rate banks charge one another for overnight lending, which is more closely tied to credit cards, personal loans, and home-equity lines. A standard 30-year mortgage, on the other hand, is a long-term loan, and the pricing of those loans is tied more closely to yields on longer-term bonds such as the 10-year Treasury and mortgage-backed securities.

“Since this rate cut was no surprise, the markets have taken it in stride,” 43-year mortgage industry veteran Melissa Cohn, regional vice president of William Raveis Mortgage, told Fortune. She said upcoming economic data will be the real turning point: “The future of bond yields and mortgage rates will be determined as new data on jobs and inflation get released.”

The current mortgage rate is 6.3%, according to Mortgage News Daily—much higher than the sub-3% rates pandemic-era homebuyers remember, though still a far cry from the 8% peak of October 2023.

“The committee’s projections and Chair Jerome Powell’s remarks indicate that this will be the last interest cut for a while,” Zhao wrote. “Given the underlying economic fundamentals of 3% inflation coupled with a weakening—but not recessionary—labor market, the Fed is likely to hold steady in the near future.

“Mortgage rates are unlikely to fall or rise by much,” she continued.

How mortgage rates affect housing affordability

Mortgage rates are just one piece of the housing affordability puzzle. While they may feel like the major roadblock to buying a home—especially with the pandemic housing boom still fresh in memory—rates are only one factor.

To put it in perspective, Zillow reported earlier this year that not even a 0% mortgage rate would make buying a house affordable in several major U.S. cities.

Let that sink in. 

Even without any interest accrued on a loan, homebuying is still out of reach for the typical American. Much of the affordability crisis has to do with home prices, which are more than 50% higher than in 2020. That has locked new homebuyers out of the market and kept current homeowners from selling.

The drop in mortgage rates required to make an average home affordable for the typical buyer—to about 4.43%—is “unrealistic,” according to Zillow economic analyst Anushna Prakash.
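To see why the rate alone can’t close the gap, here is a minimal sketch using the standard fixed-rate amortization formula. The home price, down payment, and loan term are assumed round numbers for illustration, not figures from Zillow or the article; only the 6.3%, 4.43%, and 0% rates come from the reporting above.

# Illustrative sketch (assumptions noted below), in Python.
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Monthly principal-and-interest payment on a fixed-rate loan."""
    n = years * 12                 # number of monthly payments
    if annual_rate == 0:
        return principal / n       # zero-rate case: repay principal only
    r = annual_rate / 12           # monthly interest rate
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

price = 400_000                    # assumed home price, not a cited figure
loan = price * 0.90                # assumed 10% down payment

for rate in (0.063, 0.0443, 0.0):  # today's rate, Zillow's "affordable" rate, 0%
    print(f"{rate:.2%}: ${monthly_payment(loan, rate):,.0f}/month")

Under these assumptions, dropping the rate from 6.3% to 0% only cuts the payment roughly in half, which is why elevated home prices, not rates alone, dominate the affordability math.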

“It’s unlikely rates will drop to the mid-[4% range] anytime soon,” Arlington, Va.–based real estate agent Philippa Main told Fortune. “And even if they did, housing prices are still at historic highs.” With 11 years of experience, Main is also a licensed mortgage loan officer.

To be sure, some economists see some light at the end of the tunnel for homebuyers plagued by high mortgage rates and home prices.

“For prospective buyers who have been waiting on the sidelines, the housing market is finally starting to listen,” wrote First American chief economist Mark Fleming in an Aug. 29 blog post. First American’s analysis takes into account inflation, and Fleming said: “The price of a house today is not directly comparable to the price of that same house 30 years ago.”



Business

OpenAI debuts GPT-5.2 in effort to silence concerns it is falling behind its rivals

OpenAI, under increasing competitive pressure from Google and Anthropic, has debuted a new AI model, GPT-5.2, that it says beats all existing models by a substantial margin across a wide range of tasks.

The new model, which is being released less than a month after OpenAI debuted its predecessor, GPT-5.1, performed particularly well on a benchmark of complicated professional tasks across a range of “knowledge work”—from law to accounting to finance—as well as on evaluations involving coding and mathematical reasoning, according to data OpenAI released.

Fidji Simo, the former Instacart CEO who now serves as OpenAI’s CEO of Applications, told reporters that the model should not be seen as a direct response to Google’s Gemini 3 Pro AI model, which was released last month. That release prompted OpenAI CEO Sam Altman to issue a “code red,” delaying the rollout of several initiatives in order to focus more staff and computing resources on improving the company’s core product, ChatGPT.

“I would say that [the Code Red] helps with the release of this model, but that’s not the reason it is coming out this week in particular, it has been in the works for a while,” she said.

She said the company had been building GPT-5.2 “for many months.” “We don’t turn around these models in just a week. It’s the result of a lot of work,” she said. The model had been known internally by the code name “Garlic,” according to a story in The Information. The day before the model’s release, Altman teased its imminent rollout by posting a social media video clip of himself cooking a dish with a large amount of garlic.

OpenAI executives said that the model had been in the hands of “Alpha customers” who helped test its performance for “several weeks”—a time period that would mean the model was completed prior to Altman’s “code red” declaration.

These testers included legal AI startup Harvey, note-taking app Notion, and file-management software company Box, as well as Shopify and Zoom.

OpenAI said these customers found GPT-5.2 demonstrated a “state of the art” ability to use other software tools to complete tasks, as well as excelling at writing and debugging code.

Coding has become one of the most competitive use cases for AI model deployment within companies. Although OpenAI had an early lead in the space, Anthropic’s Claude model has proved especially popular among enterprises, exceeding OpenAI’s market share, according to some figures. OpenAI is no doubt hoping GPT-5.2 will convince customers to turn back to its models for coding.

Simo said the “Code Red” was helping OpenAI focus on improving ChatGPT. “Code Red is really a signal to the company that we want to marshal resources in one particular area, and that’s a way to really define priorities and define things that can be deprioritized,” she said. “So we have had an increase in resources focused on ChatGPT in general.”

The company also said its new model is better than the company’s earlier ones at providing “safe completions”—which it defines as providing users with helpful answers while not saying things that might contribute to or worsen mental health crises.

“On the safety side, as you saw through the benchmarks, we are improving on pretty much every dimension of safety, whether that’s self harm, whether that’s different types of mental health, whether that’s emotional reliance,” Simo said. “We’re very proud of the work that we’re doing here. It is a top priority for us, and we only release models when we’re confident that the safety protocols have been followed, and we feel proud of our work.”

The release of the new model came on the same day a new lawsuit was filed against the company alleging that ChatGPT’s interactions with a psychologically troubled user had contributed to a murder-suicide in Connecticut. The company also faces several other lawsuits alleging ChatGPT contributed to people’s suicides. The company called the Connecticut murder-suicide “incredibly heartbreaking” and said it is continuing to improve “ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.” 

GPT-5.2 showed a large jump in performance across several benchmark tests of interest to enterprise customers. It met or exceeded human expert performance on a wide range of difficult professional tasks, as measured by OpenAI’s GDPval benchmark, 70.9% of the time. That compares to just 38.8% of the time for GPT-5, a model that OpenAI released in August; 59.6% for Anthropic’s Claude Opus 4.5; and 53.3% for Google’s Gemini 3 Pro.

On the software development benchmark, SWE-Bench Pro, GPT-5.2 scored 55.6%, which was almost 5 percentage points better than its predecessor, GPT-5.1, and more than 12% better than Gemini 3 Pro.

OpenAI’s Aidan Clark, vice president of research (training), declined to answer questions about exactly what training methods had been used to upgrade GPT-5.2’s performance, although he said that the company had made improvements across the board, including in pretraining, the initial step in creating an AI model.

When Google released its Gemini 3 Pro model last month, its researchers also said the company had made improvements in pretraining as well as post-training. This surprised some in the field who believed that AI companies had largely exhausted the ability to wring substantial improvements out of the pretraining stage of model building, and it prompted speculation that OpenAI may have been caught off guard by Google’s progress in this area.



Business

OpenAI and Disney just ended the ‘war’ between AI and Hollywood with their $1 billion Sora deal

Disney’s $1 billion investment in OpenAI, announced Thursday morning—and its decision to let more than 200 Disney, Pixar, Marvel, and Star Wars characters appear inside the Sora video generator—is more than a licensing deal. According to copyright and AI law expert Matthew Sag, who teaches at Emory University’s law school, the deal marks a strategic realignment that could reshape how Hollywood protects its IP in the face of AI-generated content that threatens to leech off its legally protected magic.

“AI companies are either in a position where they need to aggressively filter user prompts and model outputs to make sure that they don’t accidentally show Darth Vader, or strike deals with the rights holders to get permission to make videos and images of Darth Vader,” Sag told Fortune. “The licensing strategy is much more of a win-win.” 

The three-year agreement gives OpenAI the right to ingest hundreds of Disney-owned characters into Sora and ChatGPT Image. Disney will also receive equity warrants and become a major OpenAI customer, while deploying ChatGPT internally.

Sag said the deal itself will be a kind of “revenue-sharing.”

“OpenAI hasn’t figured out the revenue model,” Sag said. “So I think making this just an investment deal, in some ways, simplifies it. For Disney … [OpenAI] will figure out a way to make this profitable at some point, and [Disney will] get a cut of that.”

Why this deal matters: the ‘Snoopy problem’

For more than a year, the biggest legal threat to large-scale generative AI has centered on what Sag calls the “Snoopy problem”: It is extremely difficult to train powerful generative models without some degree of memorization, and copyrightable characters are uniquely vulnerable because copyright protects them in the abstract.

Sag was careful to outline a key distinction. AI companies aren’t licensing the right to train on copyrighted works; they’re licensing the right to create outputs that would otherwise be infringing.

That’s because the fair-use case for AI companies training their models on unlicensed content is “very strong,” Sag said. Two recent court rulings involving Anthropic and Meta have strengthened those arguments.

The real stumbling block, Sag said, has always been outputs, not training. If a model can accidentally produce a frame that looks too much like Darth Vader, Homer Simpson, Snoopy, or Elsa, the fair use defense begins to fray.

“If you do get too much memorization, if that memorization finds its way into outputs, then your fair-use case begins to just crumble,” Sag said.

While it’s impossible to license enough text to train an LLM (“that would take a billion” deals, Sag said), it is possible to build image or video models entirely from licensed data if you have the right partners. This is why deals like Disney’s are crucial: They turn previously illegal outputs into legal ones, irrespective of whether the training process itself qualifies as fair use.

“The limiting principle is going to be essentially about whether—in their everyday operation—these models reproduce substantial portions of works from their training data,” Sag said.

The deal, Sag says, is also a hedge against Hollywood’s lawsuits. The announcement is “very bad” for Midjourney, which Disney is suing for copyright infringement, because it holds up OpenAI’s licensing deal as the “responsible” benchmark for AI firms.

This is also a signal about the future of AI data

Beyond copyright risk, the deal exposes another trend: the drying up of high-quality, unlicensed data on the public internet.

In a blog post, Sag wrote: “The low-hanging fruit of the public internet has been picked. To get better, companies like OpenAI are going to need access to data that no one else has. Google has YouTube; OpenAI now has the Magic Kingdom.”

This is the core of what he calls the “data scarcity thesis.” OpenAI’s next leap in model quality may require exclusive content partnerships, as opposed to more scraping. 

“By entangling itself with the world’s premier IP holder, OpenAI makes itself indispensable to the very industry that threatened to sue it out of existence,” Sag wrote. 

AI and Hollywood have spent three years locked in a cold war over training data, likeness rights and infringement. With Disney’s $1 billion investment, that era appears to be ending.

“This is the template for the future,” Sag wrote. “We are moving away from total war between AI and content, toward a negotiated partition of the world.”


