
Business

The CEO behind Abercrombie & Fitch’s turnaround says the retailer isn’t chasing ‘cool’; wants to be a ‘lifestyle’ brand instead


As counterintuitive as it might seem, being trendy can be a curse for a clothing brand. Abercrombie & Fitch for years was a trendy tastemaker that told customers what was cool, not the other way around, and it worked. Until it didn’t.

A decade ago, the ‘it’ brand of the 2000s and early 2010s crashed. Customers were turned off by the highly sexualized marketing and leadership’s disdainful attitude toward shoppers it said weren’t cool enough for Abercrombie & Fitch’s clothes.

So when Abercrombie CEO Fran Horowitz, previously president of A&F’s sister brand Hollister, went about repairing the parent company in 2017, she had her work cut out for her. As detailed in a Fortune feature in 2022, she dropped her predecessors’ command-and-control way of managing the company in favor of letting the rank and file have more input. She raised the quality of the clothing and, crucially, decided to eschew chasing trends.

“Cool is a tough word,” Horowitz told the Fortune Most Powerful Women summit in Washington, D.C., on Wednesday. “It’s not what we’re aspiring to be. We’re aspiring to be a long-lasting lifestyle brand that
someone can wear and enjoy for many, many years.” (Her sentiments echo those of designer Ralph Lauren who once famously said, “I don’t want to be too hot.”)

Horowitz’s strategy has worked. A&F’s sales nearly doubled to $2.6 billion between 2019 and 2024; Hollister, which had also struggled, boomed too. This year, the combined sales of both brands will top $5 billion for the first time.

Keeping that upward momentum is Horowitz’s big challenge. One tactic she’s employed is seeking out new partnerships. This summer, Abercrombie Kids started selling its clothing at Macy’s department stores. Separately, the National Football League chose Abercrombie & Fitch as its first-ever fashion partner, with the retailer carrying merchandise such as mesh T-shirts and half-zip sweaters for each of the 32 NFL teams.

The upside of the NFL deal, Horowitz said, is the exposure to millions of NFL fans, half of whom are women. The CEO says that group hasn’t always been well served by team merchandise that too often has been “pink and sparkly”—not the more sophisticated products she is betting they want to wear on game day.

“The opportunity for us specifically is customer acquisition and brand awareness,” said Horowitz, a New York Giants fan. “Even though 50% of their fandom is female, perhaps they’re not recognized or celebrated as much as they can be, and they haven’t been served by merch exactly.”





The race to an AI workforce faces one important trust gap: What happens when an agent goes rogue?


To err is human; to forgive, divine. But when it comes to autonomous AI “agents” that are taking on tasks previously handled by humans, what’s the margin for error? 

At Fortune’s recent Brainstorm AI event in San Francisco, an expert roundtable grappled with that question as insiders shared how their companies are approaching security and governance—an issue that now outranks even more practical challenges such as data and compute power. Companies are in an arms race to parachute AI agents into their workflows to tackle tasks autonomously with little human supervision. But many are facing a fundamental paradox that is slowing adoption to a crawl: Moving fast requires trust, yet building trust takes a lot of time.

Dev Rishi, general manager for AI at Rubrik, joined the security company last summer following its acquisition of his deep-learning AI startup Predibase. He then spent four months meeting with executives from 180 companies, and used those insights to divide agentic AI adoption into four phases, he told the Brainstorm AI audience. (To level set, agentic adoption refers to businesses implementing AI systems that work autonomously, rather than merely responding to prompts.)

The first of Rishi’s four phases is early experimentation, when companies prototype their agents and map the goals they think could be integrated into their workflows. The second phase, said Rishi, is the trickiest: shifting agents from prototypes into formal production. The third involves scaling those autonomous agents across the entire company. The fourth and final stage—which no company Rishi spoke with had reached—is fully autonomous AI.

Roughly half of the 180 companies were in the experimentation and prototyping phase, Rishi found, while 25% were hard at work formalizing their prototypes. Another 13% were scaling, and the remaining 12% hadn’t started any AI projects. However, Rishi projects a dramatic change ahead: In the next two years, those in the 50% bucket are anticipating that they will move into phase two, according to their roadmaps. 

“I think we’re going to see a lot of adoption very quickly,” Rishi told the audience. 

However, there’s a major risk holding companies back from going “fast and hard” on implementing AI agents in the workforce, he noted. That risk—and the No. 1 blocker to broader deployment of agents—is security and governance, he said. Because of that, companies are struggling to shift agents from knowledge retrieval to taking action.

“Our focus actually is to accelerate the AI transformation,” said Rishi. “I think the number one risk factor, the number one bottleneck to that, is risk [itself].”

Integrating agents into the workforce

Kathleen Peters, chief innovation officer at Experian, who leads product strategy, said the slowdown stems from companies not fully understanding the risks when AI agents overstep the guardrails they have put in place, or the failsafes needed when that happens.

“If something goes wrong, if there’s a hallucination, if there’s a power outage, what can we fall back to?” she asked. “It’s one of those things where some executives, depending on the industry, are wanting to understand ‘How do we feel safe?’”

Figuring out that piece will be different for every company and is likely to be particularly thorny for those in highly regulated industries, she noted. Chandhu Nair, senior vice president of data, AI, and innovation at home improvement retailer Lowe’s, noted that it’s “fairly easy” to build agents, but people don’t understand what they are: Is an agent a digital employee? Is it a workforce? How will it be incorporated into the organizational fabric?

“It’s almost like hiring a whole bunch of people without an HR function,” said Nair. “So we have a lot of agents, with no kind of ways to properly map them, and that’s been the focus.”

The company has been working through some of these questions, including who might be responsible if something goes wrong. “It’s hard to trace that back,” said Nair. 

Experian’s Peters predicted that the next few years will see a lot of those very questions hashed out in public even as conversations take place simultaneously behind closed doors in boardrooms and among senior compliance and strategy committees. 

“I actually think something bad is going to happen,” Peters said. “There are going to be breaches. There are going to be agents that go rogue in unexpected ways. And those are going to make for very interesting headlines in the news.”

Big blowups will generate a lot of attention, Peters continued, and reputational risk will be on the line. That will force uncomfortable conversations about where liability resides for software and agents, and it will all likely add up to increased regulation, she said.

“I think that’s going to be part of our societal overall change management in thinking about these new ways of working,” Peters said.

Still, there are concrete examples as to how AI can benefit companies when it is implemented in ways that resonate with employees and customers. 

Nair said Lowe’s has seen strong adoption and “tangible” return on investment from the AI it has embedded into the company’s operations thus far. For instance, each of its 250,000 store associates has an agent companion with extensive product knowledge across its 100,000-square-foot stores, which sell everything from electrical equipment to paint to plumbing supplies. Many of the newer entrants to the Lowe’s workforce aren’t tradespeople, said Nair, and the agent companions have become the “fastest-adopted technology” so far.

“It was important to get the use cases right that really resonate back with the customer,” he said. In terms of driving change management in stores, “if the product is good and can add value, the adoption just goes through the roof.”

Who’s watching the agent?

But for those who work at headquarters, the change management techniques have to be different, he added, which piles on the complexity. 

And many enterprises are stuck at another early-stage question, which is whether they should build their own agents or rely on the AI capabilities developed by major software vendors. 

Rakesh Jain, executive director for cloud and AI engineering at healthcare system Mass General Brigham, said his organization is taking a wait-and-see approach. With major platforms like Salesforce, Workday, and ServiceNow building their own agents, it could create redundancies if his organization builds its own agents at the same time. 

“If there are gaps, then we want to build our own agents,” said Jain. “Otherwise, we would rely on buying the agents that the product vendors are building.”

In healthcare, Jain said there’s a critical need for human oversight given the high stakes. 

“The patient complexity cannot be determined through algorithms,” he said. “There has to be a human involved in it.” In his experience, agents can accelerate decision making, but humans have to make the final judgment, with doctors validating everything before any action is taken. 

Still, Jain also sees enormous potential upside as the technology matures. In radiology, for example, an agent trained on the expertise of multiple doctors could catch tumors in dense tissue that a single radiologist might miss. But even with agents trained on multiple doctors, “you still have to have a human judgment in there,” said Jain. 

And the threat of overreach by an agent that is supposed to be a trusted entity is ever present. He compared a rogue agent to an autoimmune disease, which is one of the most difficult conditions for doctors to diagnose and treat because the threat is internal. If an agent inside a system “becomes corrupt,” he said, “it’s going to cause massive damages which people have not been able to really quantify.”

Despite the open questions and looming challenges, Rishi said there’s a path forward. He identified two requirements for building trust in agents. First, companies need systems that provide confidence that agents are operating within policy guardrails. Second, they need clear policies and procedures—a policy with teeth—for when things inevitably go wrong. Nair added three factors for building trust and moving forward smartly: identity and accountability, knowing who the agent is; consistency, evaluating the quality of each agent’s output; and a post-mortem trail that can explain why and when mistakes occurred.

“Systems can make mistakes, just like humans can as well,” said Nair. “But to be able to explain and recover is equally important.”





Highlights from Fortune Brainstorm AI San Francisco


Hello and welcome to Eye on AI. In this edition…Insights from Fortune Brainstorm AI San Francisco…Disney invests $1 billion in OpenAI and licenses its IP to the company…OpenAI debuts GPT-5.2 in an effort to silence concerns it’s trailing rivals…Oracle stock takes a tumble.

Hi, it’s Jeremy here. I’m still buzzing from Fortune Brainstorm AI San Francisco, which took place earlier this week. We had a fabulous lineup including Brad Lightcap, OpenAI’s chief operating officer, Google Cloud CEO Thomas Kurian, Intuit CEO Sasan Goodarzi, Exelon CEO Calvin Butler, Databricks CEO Ali Ghodsi, Rivian CEO RJ Scaringe, Insitro CEO Daphne Koller, and many more. We also had a thoughtful conversation on AI’s impacts with actor, director, and increasingly AI thought leader Joseph Gordon-Levitt, as well as a scream of a session with actor, comedian, and AI CEO Natasha Lyonne. Today, Sharon Goldman, Bea Nolan, and I are going to share a few highlights and personal impressions.

For me, there was a notable vibe this year that a lot of companies are substantially further along in implementing AI across their organizations, including using AI agents in some limited, but important, capacities. Many audience questions, especially in some of the breakout sessions, were around governance and orchestration methods for an increasingly hybrid workforce where AI agents will be completing tasks alongside employees.

Still, it was striking to hear Butler, the Exelon CEO, say that his company is moving cautiously. When the consequence of getting something wrong is literally lights out, security and reliability have to take precedence over everything else. And so Butler said he was happy not to be a “first mover” but instead a “fast follower” when it came to AI implementations. His view seems to be: let other people take the hit, and learn from their mistakes.

And this wasn’t the only place where speakers sought to tamp down hype. It was refreshing to hear Michael Truell, cofounder and CEO of the hit coding assistant Cursor, tell me that he didn’t think software engineering would ever be fully automated in the way that OpenAI CEO Sam Altman sometimes talks about. Instead, Truell said that while the amount of time coders spend on “compilation” of code would continue to shrink, he saw a continued need for humans to make design decisions around “how should the software work.”

Similarly, Vidya Peters, from DataSnipper, said she thought there would still be a role for qualified accountants within finance organizations, even if they were increasingly being assisted with AI tools such as the one her company makes. She also said she thought that applications geared specifically for a particular industry or job—especially in regulated industries—would continue to win out over more general purpose AI models, even as the big AI companies are increasingly targeting specific professional use cases for their general purpose models.

A panel that Sharon moderated on the “new geography of data centers” was fascinating. The message was that right now, data centers are going where the power is. But increasingly, data centers are going to be looking to build their own power on site and possibly even become net contributors to the grid. And Jason Eichenholz, the CEO of Relativity Networks, said that as AI inference workloads come to eclipse AI training workloads, there will be an increasing need to bring data centers close to major population centers, but that most cities in the U.S. are power constrained. How are we going to get these urban centers the tokens they need at the speed at which they need them? That’s anyone’s guess right now, Eichenholz said—although his company builds the fast fiber that will carry those tokens from the data centers to end users.

Finally, I enjoyed hearing Dayle Stevens from Telstra explain why her company chose to form a joint venture with Accenture to deliver its AI strategy, rather than simply hiring the consulting firm under a traditional service contract. Stevens said the joint venture has enabled the company to move much faster than it would have otherwise and to tap expertise, including starting an AI innovation hub in Silicon Valley, that would have been hard to build otherwise.

The future of enterprise AI is hybrid

Now, here are Sharon’s takeaways: In my mainstage session with PayPal global head of AI Prakhar Mehrotra and Marc Hamilton, VP of solutions architecture and engineering at Nvidia, both discussed the increasing power of open-source AI models to let enterprise companies control their data and fine-tune for specific use cases. But both agreed that the future of enterprise AI will be hybrid, with enterprises typically using both open models and proprietary model APIs.

There was plenty of time for philosophizing, as well: at one dinner, I chatted with delegates from The Clorox Company, Workday and other companies about everything from what jobs were future-proof (I suggested dog walkers were safe from AI) to what AI would really mean for the future of today’s children (the bottom line: they still need to learn to think for themselves!).

My favorite panel was one I moderated with a half-dozen leaders and stakeholders in the world of AI data centers, including Andy Hock from Cerebras, Matt Field from Crusoe, and former OpenAI infrastructure policy leader Lane Dilg. We dug into how the line is blurring between power infrastructure and data centers, with billions in capital and gigawatts of power at play. My biggest takeaway was that the AI data center issue is local, local, local. Every community and local government will be dealing with its own specific issues and compromises around issues such as land, energy, and water—and what works for one area might not work for another.

People and culture are paramount

And here is what Bea had to say about this year’s Brainstorm AI San Francisco:

Most enterprises are still trying to figure out the best way to adopt AI, but leaders this year were also keen to emphasize that choosing the right tools is only part of the equation. Companies also need to ensure that both their employees and their org charts are ready for the shift—otherwise, even the most advanced AI pilots are likely to fail.

As Accenture’s Chief Responsible AI Officer Arnab Chakraborty put it: “Don’t just think about technology—think about people and the culture. It is so paramount.”

Or take Open Machine CEO Allie K. Miller’s advice and don’t call AI a tool at all: “Calling it a tool ends up being a little bit of borderline self-limiting behavior that is holding enterprise all around the world behind.”

I also moderated a panel of healthcare experts, which brought together a mix of clinicians who see patients every day and tech leaders building and deploying healthtech tools at scale. In healthcare, the industry is generally feeling good about clinician-facing AI, but it’s still wrestling with what it means to safely deploy patient-facing agents.

The panelists discussed, among other things, what it means to be moving toward a future where patients and clinicians consult the same AI before they consult each other.

The excitement is running high on the corporate side, but not that much has really changed in the examination room—at least according to Gurpreet Dhaliwal, a clinician-educator and Professor of Medicine at the University of California. Whether it’s with Dr. Google, Dr. ChatGPT, or just a neighbor with some strong beliefs about antibiotics, Dhaliwal said patients have always arrived with a second opinion in their back pocket. While AI is poised to be a revolutionary force for healthcare—especially in fringe cases such as rare diseases—it’s yet to fundamentally change the dynamic between patients and their physicians.

With that, here’s the rest of the AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

Google DeepMind agrees to sweeping partnership with U.K. government focused on science and clean energy—by Jeremy Kahn

Hinge’s founder and CEO is stepping down to start a new AI-first dating app—by Marco Quiroz-Gutierrez

Cursor has growing revenue and a $29 billion valuation—but CEO Michael Truell isn’t thinking about an IPO—by Beatrice Nolan

AI IN THE NEWS

Disney invests $1 billion in OpenAI, brings characters to OpenAI apps. The home of Mickey Mouse is investing $1 billion in OpenAI and, under a three-year licensing deal, will let users generate short, prompt-driven videos in OpenAI’s Sora app using more than 200 Disney, Marvel, Star Wars, and Pixar characters. OpenAI is supposed to create guardrails to prevent users from creating videos or images that might reflect poorly on the Disney brand. The partnership was struck after nearly two years of talks. Meanwhile, Disney simultaneously sent a cease-and-desist letter to Google accusing it of large-scale copyright infringement tied to AI outputs featuring Disney characters. You can read more from The Wall Street Journal here.

OpenAI debuts GPT-5.2 model, answering concerns it was trailing competitors. The company launched a new AI model that, according to evaluations OpenAI conducted, delivers state-of-the-art performance across a wide range of tasks, including coding, mathematical reasoning, and “knowledge work.” The model showed significant improvement over GPT-5.1, which OpenAI released only a month ago, and bested Google’s and Anthropic’s new models. The release of Google’s Gemini 3 Pro in late November prompted OpenAI CEO Sam Altman to declare a “code red” to refocus the company on improving ChatGPT. But OpenAI executives said the release of GPT-5.2 had been in the works for months and that its debut was not related to the “code red.” OpenAI said GPT-5.2 also improves safety, particularly around mental health–related responses. You can read more from Jeremy here.

New lawsuit claims ChatGPT contributed to murder-suicide in Connecticut. A wrongful-death lawsuit was filed against OpenAI and Microsoft after a 56-year-old Connecticut man, Stein-Erik Soelberg, killed his 83-year-old mother and then himself following months of increasingly delusional conversations with ChatGPT. His family says the chatbot reinforced and contributed to his mental illness. OpenAI has expressed condolences and pointed to ongoing improvements to ChatGPT’s ability to recognize and respond to users in distress. You can read more from The Wall Street Journal here.

Microsoft says health queries are the most frequent use of its Copilot AI by consumers. Microsoft analyzed 37.5 million anonymized Copilot conversations from January through September 2025 to understand how people use the AI assistant in daily life. The study found that health-related questions dominated mobile usage, while topics and usage patterns varied significantly by device, time of day, and context. Beyond information search, users increasingly turned to Copilot for advice on personal topics, showing its role as a companion in both work and life moments. You can read Microsoft’s blog on the findings here.

Meta and Eleven Labs sign a new partnership to provide voice overs for Reels. Meta has partnered with London-based voice AI company ElevenLabs to integrate AI-powered audio capabilities across Instagram and Horizon. This partnership will enable new features such as the ability to dub Reels into local languages as well as to generate character voices. You can read more in The Economic Times here.

AI CALENDAR

Jan. 7-10: Consumer Electronics Show, Las Vegas. 

March 12-18: SXSW, Austin.

March 16-19: Nvidia GTC, San Jose. 

April 6-9: HumanX, San Francisco. 

EYE ON AI NUMBERS

$34 billion

That’s the one-day paper loss Oracle founder and chairman Larry Ellison suffered Thursday after his company’s shares were pummeled by investors increasingly concerned with the amount Oracle is spending to build data centers for OpenAI. Oracle’s quarterly capital expenditures for the last quarter came in above analyst expectations and in fact exceeded the amount of cash the company generated in the quarter. “It’s like the poster child of the AI bear case,” Jay Hatfield, chief executive of Infrastructure Capital Advisors, told the Wall Street Journal.

Join us at the Fortune Workplace Innovation Summit May 19–20, 2026, in Atlanta. The next era of workplace innovation is here—and the old playbook is being rewritten. At this exclusive, high-energy event, the world’s most innovative leaders will convene to explore how AI, humanity, and strategy converge to redefine, again, the future of work. Register now.





Backflips are easy, stairs are hard: Robots still struggle with simple human movements, experts say


Whether it’s running down a track, doing a backflip, dancing to music, or kickboxing, there are more and more videos of humanoid robots doing increasingly impressive things.

Yet speakers at the Fortune Brainstorm AI conference on Tuesday warned against getting too dazzled by the acrobatic feats. A robot doing a backflip, something difficult for a person, looks impressive. But ask a robot to perform seemingly easy tasks, say, climbing stairs or grabbing a glass of water, and many of today’s droids still struggle.

“What looks hard is easy, but what looks easy is really hard,” Stephanie Zhan, a partner at Sequoia Capital, explained, paraphrasing an observation from computer scientist Hans Moravec. In the late 1980s, Moravec and other computer scientists noted that computers could perform well on tests of intelligence yet fail at tasks that even young children could do.

Deepak Pathak, CEO of robotics startup Skild AI, explained that robots, and computers in general, were good at doing complex tasks when operating in a controlled environment. Showing a video of a Skild robot skipping down a sidewalk, Pathak noted that “apart from the ground, the robot is not interacting with anything.”

Yet for tasks like picking up a bottle or walking up stairs, a person uses vision to “continuously correct” what he or she is doing, Pathak explained. “That interaction is the root reason for human general intelligence, which you don’t appreciate because almost every human has it.”

Zhan explained that viral videos of humanoid robots don’t show how the product was trained, nor whether it can operate in an uncontrolled environment. “The challenge for you as a consumer of all these videos is to really discern what’s real and what’s not,” she said.

The next step for robots

Still, both speakers were optimistic that advances in general intelligence will soon lead to more advanced and flexible robots.

“Robots used to be driven more by human intelligence. Somebody super smart would look at [a task], and…pre-program the robot mathematically to do it,” Pathak said. 

But now, the robotics field is shifting from “programming something to learning from experience,” he explained. This allows for new robots that handle more complex tasks in more uncontrolled environments, and which can easily be adapted for other tasks without the cost of reprogramming and retooling them. 

Stephanie Zhan, partner at Sequoia Capital, speaking at Fortune Brainstorm AI in San Francisco on Dec. 9, 2025.

Stuart Isett for Fortune

Today’s robotics firms are “still constrained by having robots that are only built for specific things,” Zhan argued. A robotics platform with more general intelligence can open up “possibilities that are otherwise not possible for us to achieve,” including tasks that are currently dangerous for human workers.

Consumers could benefit too. “You see all these household robots, but they’re only capable of doing one thing,” Zhan said. “But if we succeed at building general intelligent robots, you will finally have consumer robots that can tackle the whole host of household tasks that you now have.” A similar point was made earlier at Brainstorm AI by Arm CEO Rene Haas, who said the general adaptability of humanoid robots will make them much better suited for factory jobs than the robotic arms used today.

A robotics boom has social repercussions, dislodging jobs that, as of now, still need to be done by humans. Yet Pathak was sanguine about the social benefits of spreading automation. One is safety, as robots remove the need for humans to do jobs that are hazardous or unhealthy in the long run. Another is filling the massive labor shortage in blue-collar and manufacturing jobs. (That shortfall has been a barrier to U.S. efforts to re-shore much of its advanced manufacturing from Asian economies.)

Yet Pathak also envisioned a future where robots free humans from the drudgery of everyday work, even as he admitted that societies needed to figure out how to spread the gains from automation. “There lies a scenario, a good scenario, where everybody is doing things that they like,” Pathak said. “Work is more optional, and they are doing things that they enjoy.”





Copyright © Miami Select.