Obituary: former Hasbro CEO Alan Hassenfeld dies at 76

Alan G. Hassenfeld, a renowned philanthropist and former CEO of iconic toy company Hasbro Inc., the maker of G.I. Joe and Play-Doh, has died. He was 76, according to the toy company.

Hasbro, the nation’s second-largest toy company by annual sales behind Mattel, declined to offer more details. Hassenfeld’s family foundation, Hassenfeld Family Initiatives, wasn’t immediately available for comment.

Hassenfeld was born in Providence, Rhode Island, and graduated from Deerfield Academy in Massachusetts. He received an undergraduate arts degree from the University of Pennsylvania in 1970 and joined the Pawtucket, Rhode Island-based family business upon graduation. Hasbro was founded in 1923 by Hassenfeld’s grandfather, Henry. Known initially as Hassenfeld Brothers, it sold textile remnants but expanded into school supplies and later toy manufacturing under the Hasbro name in the 1940s, according to Hasbro’s website. It went public in 1968.

Hassenfeld rose quickly in the family business, serving as special assistant to the president and working his way up the ranks. He became one of the key architects of Hasbro’s international operations and spent extensive time traveling overseas. He was named executive vice president in 1980 and became president in September 1984.

Hassenfeld labored for years in the shadow of his older brother Stephen. His brother’s death from pneumonia in June 1989, at age 47, however, moved Hassenfeld into the position of chairman and chief executive officer.

Hassenfeld stepped down as CEO in 2003, and in August 2005 he retired from his chairman position to become chairman emeritus. He stepped away from that role last year. Hassenfeld was the last family member to sit on the board, according to Hasbro.

“All of us who have ever had any connection to Hasbro today are mourning the profound loss of Alan Hassenfeld, our beloved former Chairman & CEO, mentor, and dear friend,” Hasbro CEO Chris Cocks said in an emailed statement to The Associated Press. “Alan’s enormous heart was, and will remain, the guiding force behind Hasbro — compassionate, imaginative, and dedicated to bringing a smile to the face of every child around the world. His tireless advocacy for philanthropy, children’s welfare, and the toy industry created a legacy that will inspire us always.”

Hassenfeld was involved in many charitable and social causes both nationally and locally in Rhode Island. His concerns ranged from childhood hunger to issues involving refugee settlement in the state. As chairman of the Hassenfeld Family Initiatives, he oversaw the foundation’s mission of globalizing safety and human rights within the area of children’s products; empowering women in developing countries; and enhancing the economy, education and business opportunities in Rhode Island.

Hassenfeld was also founding benefactor of Hasbro Children’s Hospital in Providence, and his family’s contributions helped to establish the Hassenfeld Child Health Innovation Institute at Brown University.

‘Customers don’t care about AI’ — they want to boost cash flow and make ends meet, Intuit CEO says

While Wall Street and Silicon Valley are obsessed with artificial intelligence, many businesses don’t have the luxury to fixate on AI because they’re too busy trying to grind out more revenue.

At the Fortune Brainstorm AI conference in San Francisco on Monday, Intuit CEO Sasan Goodarzi acknowledged the day-to-day priorities of users of his company’s products, such as QuickBooks, TurboTax, Mailchimp and Credit Karma.

“I remind ourselves at the company all the time: customers don’t care about AI,” he told Fortune’s Andrew Nusca. “Everybody talks about AI, but the reality is a consumer is looking to increase their cash flow. A consumer is looking to power their prosperity to make ends meet. A business is trying to get more customers. They’re trying to manage their customers, sell them more services.”

Of course, AI still powers Intuit’s platforms, which help companies and entrepreneurs digest data that’s often stovepiped across dozens of separate applications they juggle. So Intuit declared years ago that it would focus on delivering “done-for-you experiences,” Goodarzi said.

On the enterprise side, it means helping businesses manage sales leads, cash flow, accounting, or taxes. On the consumer side, it entails helping users build credit and wealth. Expertise from a real person, or human intelligence (HI), is an essential component as well.

“Customers don’t care about AI,” Goodarzi added. “What they care about is ‘Help me grow my business, help me prosper.’ And we have found the only way to do that is to combine technology automating everything for them with human intelligence on our platform that can actually give you the human touch and the advice. And we believe that will be the case for decades to come. But the role of the HI, the human, will change.”

For example, an Intuit AI agent can hand tasks off to humans, helping them follow up with business clients who have overdue invoices or identify which clients typically pay on time.
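
As a rough illustration of that handoff pattern, here is a minimal sketch in which an agent triages invoices into work queues for a human to act on. The Invoice fields and triage logic are hypothetical stand-ins, not Intuit’s actual schema or API.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical invoice record; fields are illustrative, not Intuit's schema.
@dataclass
class Invoice:
    client: str
    amount: float
    due: date
    paid_on: date | None = None

def triage(invoices: list[Invoice], today: date) -> dict:
    """Agent step: sort invoices into queues that a human then acts on."""
    overdue = [i for i in invoices if i.paid_on is None and i.due < today]
    # Clients whose past invoices were settled on or before the due date.
    reliable = sorted({i.client for i in invoices if i.paid_on and i.paid_on <= i.due})
    return {"follow_up": sorted(overdue, key=lambda i: i.due),
            "usually_pay_on_time": reliable}

queues = triage(
    [Invoice("Acme", 1200.0, date(2025, 11, 1)),
     Invoice("Globex", 800.0, date(2025, 11, 15), paid_on=date(2025, 11, 10))],
    today=date(2025, 12, 10),
)
# Handoff: the human reviews and sends the reminders the agent queued up.
for inv in queues["follow_up"]:
    print(f"Draft reminder for {inv.client}: ${inv.amount:,.2f} was due {inv.due}")
```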

Ashok Srivastava, Intuit’s chief AI officer, noted that the AI agents on average save customers 12 hours per month on routine tasks. In addition, users get paid five days sooner and are 10% more likely to be paid in full.

“As a person who’s run small businesses in the past, I can tell you numbers like that are very meaningful,” he said. “Twelve more hours means 12 more hours that I can spend building my products, understanding my customers.”

Read more from Fortune Brainstorm AI:

Cursor developed an internal AI help desk that handles 80% of its employees’ support tickets, says the $29 billion startup’s CEO

OpenAI COO Brad Lightcap says ‘code red’ will force the company to focus, as the ChatGPT maker ramps up enterprise push

Amazon robotaxi service Zoox to start charging for rides in 2026, with ‘laser focus’ on transporting people, not deliveries, says cofounder




The problem with ‘human in the loop’ AI? Often, it’s the humans

Welcome to Eye on AI. In this edition…AI is outperforming some professionals…Google plans to bring ads to Gemini…leading AI labs team up on AI agent standards…a new effort to give AI models a longer memory…and the mood turns on LLMs and AGI.

Greetings from San Francisco, where we are just wrapping up Fortune Brainstorm AI. On Thursday, we’ll bring you a roundup of insights from the conference. But today, I want to talk about some notable studies from the past few weeks with potentially big implications for the business impact AI may have.

First, there was a study from the AI evaluations company Vals AI that pitted several legal AI applications as well as ChatGPT against human lawyers on legal research tasks. All of the AI applications beat the average human lawyers (who were allowed to use digital legal search tools) in drafting legal research reports across three criteria: accuracy, authoritativeness, and appropriateness. The lawyers’ aggregate median score was 69%, while ChatGPT scored 74%, Midpage 76%, Alexi 77%, and Counsel Stack, which had the highest overall score, 78%.

One of the more intriguing findings is that for many question types, it was the generalist ChatGPT that was the most accurate, beating out the more specialized applications. And while ChatGPT lost points for authoritativeness and appropriateness, it still topped the human lawyers across those dimensions.

The study has been faulted for not testing some of the better-known and most widely adopted legal AI research tools, such as Harvey, Legora, CoCounsel from Thomson Reuters, or LexisNexis Protégé, and for only testing ChatGPT among the frontier general-purpose models. Still, the findings are notable and comport with what I’ve heard anecdotally from lawyers.

A little while ago I had a conversation with Chris Kercher, a litigator at Quinn Emanuel who founded that firm’s data and analytics group. Quinn Emanuel has been using Anthropic’s general-purpose AI model Claude for a lot of tasks. (This was before Anthropic’s latest model, Claude Opus 4.5, debuted.) “Claude Opus 3 writes better than most of my associates,” Kercher told me. “It just does. It is clear and organized. It’s a great model.” He said he is “constantly amazed” by what LLMs can do, finding new issues, strategies, and tactics that he can use to argue cases.

Kercher said that AI models have allowed Quinn Emanuel to “invert” its prior work processes. In the past, junior lawyers, known as associates, would spend days researching and writing up legal memos, finding citations for every sentence, before presenting those memos to more senior lawyers, who would incorporate some of that material into briefs or arguments actually presented in court. Today, AI is used to generate drafts that Kercher said are by and large better, produced in a fraction of the time, and those drafts are then given to associates to vet. The associates are still responsible for the accuracy of the memos and citations—just as they always were—but now they are fact-checking the AI and editing what it produces, not performing the initial research and drafting, he said.

He said that the most experienced, senior lawyers often get the most value out of working with AI, because they have the expertise to know how to craft the perfect prompt, along with the professional judgment and discernment to quickly assess the quality of the AI’s response. Is the argument the model has come up with sound? Is it likely to work in front of a particular judge or be convincing to a jury? These sorts of questions still require judgment that comes from experience, Kercher said.

Ok, so that’s law, but it likely points to ways in which AI is beginning to upend work within other “knowledge industries” too. Here at Brainstorm AI yesterday, I interviewed Michael Truell, the cofounder and CEO of hot AI coding tool Cursor. He noted that in a University of Chicago study looking at the effects of developers using Cursor, it was often the most experienced software engineers who saw the most benefit from using Cursor, perhaps for some of the same reasons Kercher says experienced lawyers get the most out of Claude—they have the professional experience to craft the best prompts and the judgment to better assess the tools’ outputs. 

Then there was a study on the use of generative AI to create visuals for advertisements. Business professors at New York University and Emory University tested whether advertisements for beauty products created by human experts alone, created by human experts and then edited by AI models, or created entirely by AI models were most appealing to prospective consumers. They found the ads that were entirely AI-generated were chosen as the most effective, increasing clickthrough rates by 19% in an online trial the researchers conducted. Meanwhile, those created by humans and edited by AI were actually less effective than those created by human experts with no AI intervention. But, critically, if people were told the ads were AI-generated, their likelihood of buying the product declined by almost a third.

Those findings present a big ethical challenge to brands. Most AI ethicists think people should generally be told when they are consuming content generated by AI. And advertisers do need to negotiate various Federal Trade Commission rulings around “truth in advertising.” But many ads already use actors posing in various roles without needing to necessarily tell people that they are actors—or the ads do so only in very fine print. How different is AI-generated advertising? The study seems to point to a world where more and more advertising will be AI-generated and where disclosures will be minimal.

The study also seems to challenge the conventional wisdom that “centaur” solutions (which combine the strengths of humans and those of AI in complementary ways) will always perform better than either humans or AI alone. (Sometimes this is condensed to the aphorism “AI won’t take your job. A human using AI will take your job.”) A growing body of research seems to suggest that in many areas, this simply isn’t true. Often, the AI on its own actually produces the best results.

But it is also the case that whether centaur solutions work well depends tremendously on the exact design of the human-AI interaction. A study on human doctors using ChatGPT to aid diagnosis, for example, found that humans working with AI could indeed produce better diagnoses than either doctors or ChatGPT alone—but only if ChatGPT was used to render an initial diagnosis and human doctors, with access to the ChatGPT diagnosis, then gave a second opinion. If that process was reversed, and ChatGPT was asked to render the second opinion on the doctor’s diagnosis, the results were worse—and in fact, the second-best results were just having ChatGPT provide the diagnosis. In the advertising study, it would have been good if the researchers had looked at what happens if AI generates the ads and then human experts edit them.
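
To make that ordering effect concrete, here is a minimal schematic of the two pipelines the diagnosis study compared. Both functions are hypothetical stand-ins (placeholders for a ChatGPT call and a clinician’s judgment), not the study’s actual protocol or any real API.

```python
def model_diagnose(case: str) -> str:
    return f"model diagnosis of: {case}"  # placeholder for an LLM call

def doctor_diagnose(case: str, prior: str | None = None) -> str:
    note = f" (after reviewing: {prior})" if prior else ""
    return f"doctor diagnosis of: {case}{note}"  # placeholder for a clinician

def ai_first(case: str) -> str:
    """The ordering that worked best with a human in the loop: the model
    renders the initial diagnosis, the doctor gives the second opinion."""
    return doctor_diagnose(case, prior=model_diagnose(case))

def doctor_first(case: str) -> str:
    """The reversed ordering, which the study found performed worse."""
    return model_diagnose(f"{case}; first opinion: {doctor_diagnose(case)}")

print(ai_first("patient with persistent cough"))
```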

But in any case, momentum towards automation—often without a human in the loop—is building across many fields.

On that happy note, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

Exclusive: Glean hits $200 million ARR, up from $100 million nine months ago —by Allie Garfinkle

Cursor developed an internal AI help desk that handles 80% of its employees’ support tickets, says the $29 billion startup’s CEO —by Beatrice Nolan

HP’s chief commercial officer predicts the future will include AI-powered PCs that don’t share data in the cloud —by Nicholas Gordon

How Intuit’s chief AI officer supercharged the company’s emerging technologies teams—and why not every company should follow his lead —by John Kell

Google Cloud CEO lays out 3-part strategy to meet AI’s energy demands, after identifying it as ‘the most problematic thing’ —by Jason Ma

OpenAI COO Brad Lightcap says code red will ‘force’ the company to focus, as the ChatGPT maker ramps up enterprise push —by Beatrice Nolan

AI IN THE NEWS

Trump allows Nvidia to sell H200 GPUs to China, but China may limit adoption. President Trump signaled he would allow exports of Nvidia’s high-end H200 chips to approved Chinese customers. Nvidia CEO Jensen Huang has called China a $50 billion annual sales opportunity for the company, but Beijing wants to limit the reliance of its companies on U.S.-made chips, and Chinese regulators are weighing an approval system that would require buyers to justify why domestic chips cannot meet their needs. They may even bar the public sector from purchasing H200s. But Chinese companies often prefer to use Nvidia chips and even train their models outside of China to get around U.S. export controls. Trump’s decision has triggered political backlash in Washington, with a bipartisan group of senators seeking to block such exports, though the legislation’s prospects remain uncertain. Read more from the Financial Times here.

Trump plans executive order on national AI standard, aimed at pre-empting state-level regulation. President Trump said he will issue an executive order this week creating a single national artificial-intelligence standard, arguing that companies cannot navigate a patchwork of 50 different state approval regimes, Politico reported. The move follows a leaked November draft order that sought to block state AI laws and reignited debate over whether federal rules should override state and local regulations. A previous attempt to add AI-preemption language to the year-end defense bill collapsed last week, prompting the administration to return to pursuing the policy through executive action instead.

Google plans to bring advertising to its Gemini chatbot in 2026. That’s according to a report in Adweek that cited information from two unnamed Google advertising clients. The story said that details on format, pricing, and testing remained unclear. It also said the new ad format for Gemini is separate from ads that will appear alongside “AI Mode” searches in Google Search.

Former Databricks AI head’s new AI startup valued at $4.5 billion in seed round. Unconventional AI, a startup cofounded by former Databricks AI head Naveen Rao, raised $475 million in a seed round led by Andreessen Horowitz and Lightspeed Venture Partners at a valuation of $4.5 billion—just two months after its founding, Bloomberg News reported. The company aims to build a novel, more energy-efficient computing architecture to power AI workloads.

Anthropic forms partnership with Accenture to target enterprise customers. Anthropic and Accenture have formed a three-year partnership that makes Accenture one of Anthropic’s largest enterprise customers and aims to help businesses—many of which remain skeptical—realize tangible returns from AI investments, the Wall Street Journal reported. Accenture will train 30,000 employees on Claude and, together with Anthropic, launch a dedicated business group targeting highly regulated industries and embedding engineers directly with clients to accelerate adoption and measure value.

OpenAI, Anthropic, Google, and Microsoft team up for new standard for agentic AI. The Linux Foundation is organizing a group called the Agentic Artificial Intelligence Foundation with participation from major AI companies, including OpenAI, Anthropic, Google, and Microsoft. It aims to create shared open-source standards that allow AI agents to reliably interact with enterprise software. The group will focus on standardizing key tools such as the Model Context Protocol, OpenAI’s Agents.md format, and Block’s Goose agent, aiming to ensure consistent connectivity, security practices, and contribution rules across the ecosystem. CIOs increasingly say common protocols are essential for fixing vulnerabilities and enabling agents to function smoothly in real business environments. Read more here from The Information.

EYE ON AI RESEARCH

Google has created a new architecture to give AI models longer-term memory. The architecture, called Titans—which Google first debuted at the start of 2025 and which Eye on AI covered at the time—is paired with a framework named MIRAS that is designed to give AI something closer to long-term memory. Instead of forgetting older details when its short memory window fills up, the system uses a separate memory module that continually updates itself. The system assesses how surprising any new piece of information is compared to what it has stored in its long-term memory, updating the memory module only when it encounters high surprise. In testing, Titans with MIRAS performed better than older models on tasks that require reasoning over long stretches of information, suggesting it could eventually help with things like analyzing complex documents, doing in-depth research, or learning continuously over time. You can read Google’s research blog here.
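
In very rough terms, that surprise-gated write might look like the sketch below. This is a loose paraphrase for intuition only, not Google’s Titans/MIRAS implementation; the encoder, threshold, and update rule are invented for illustration.

```python
import numpy as np

DIM = 64
SURPRISE_THRESHOLD = 4.0  # illustrative value, not from the paper
STEP = 0.5                # how far memory moves toward a surprising input

memory = np.zeros(DIM)  # stand-in for the long-term memory module

def embed(chunk: str) -> np.ndarray:
    """Stand-in for a learned encoder: hash the text to a pseudo-embedding."""
    seed = abs(hash(chunk)) % (2**32)
    return np.random.default_rng(seed).standard_normal(DIM)

def surprise(x: np.ndarray, mem: np.ndarray) -> float:
    """How poorly the stored memory accounts for the new input (L2 error)."""
    return float(np.linalg.norm(x - mem))

def maybe_write(mem: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Update long-term memory only when the input is surprising enough."""
    if surprise(x, mem) > SURPRISE_THRESHOLD:
        mem = mem + STEP * (x - mem)  # pull memory toward the novel input
    return mem

for chunk in ["chapter 1", "chapter 1", "sudden plot twist"]:
    x = embed(chunk)
    print(f"{chunk!r}: surprise={surprise(x, memory):.2f}")
    memory = maybe_write(memory, x)
```

The real architecture learns its encoder and update rule end to end; the point of the sketch is only the gating, in which low-surprise inputs leave the long-term memory untouched.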

AI CALENDAR

Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Feb. 10-11: AI Action Summit, New Delhi, India.

BRAIN FOOD

At NeurIPS, the mood shifts against LLMs as a path to AGI. The Information reported that a growing number of researchers attending NeurIPS, the AI research field’s most important conference—which took place last week in San Diego (with satellite events in other cities)—are increasingly skeptical of the idea that large language models (LLMs) will ever lead to artificial general intelligence (AGI). Instead, they feel the field may need an entirely new kind of AI architecture to advance to more human-like AI that can continually learn, can learn efficiently from fewer examples, and can extrapolate and analogize concepts to previously unseen problems.

Figures such as Amazon’s David Luan and OpenAI co-founder Ilya Sutskever contend that current approaches, including large-scale pre-training and reinforcement learning, fail to produce models that truly generalize, while new research presented at the conference explores self-adapting models that can acquire new knowledge on the fly. Their skepticism contrasts with the view of leaders like Anthropic CEO Dario Amodei and OpenAI’s Sam Altman, who believe scaling current methods can still achieve AGI. If critics are correct, it could undermine billions of dollars in planned investment in existing training pipelines.




OpenAI COO Brad Lightcap says code red will ‘force’ focus, as ChatGPT maker ramps up enterprise push

OpenAI’s Chief Operating Officer Brad Lightcap says the company’s recent ‘code red’ alert will force the $500 billion startup to “focus” as it faces heightened competition in the technical capabilities of its AI models and in making inroads among business customers.

“I think a big part of it is really just starting to push on the rate at which we see improvement in focus areas within the models,” Lightcap said on stage at Fortune’s Brainstorm AI conference in San Francisco on Tuesday. “What you’re going to see, even starting fairly soon, will be a really exciting series of things that we release.”

Last week, in an internal memo shared with employees, OpenAI CEO Sam Altman said he was declaring a “Code Red” alarm within the organization, according to reports from The Information and the Wall Street Journal. Altman told employees it was “a critical time for ChatGPT,” the company’s flagship product, and that OpenAI would delay other initiatives, including its advertising plans, to focus on improving the core product.

Speaking at the event on Tuesday, Lightcap framed the code red alert as a standard practice that many businesses occasionally undertake to sharpen focus, not an OpenAI-specific action. But Lightcap acknowledged the importance of the move for OpenAI at this moment, given the company’s growth in headcount and projects over the past couple of years.

“It’s a way of forcing company focus,” Lightcap said. “For a company that’s doing a bazillion things, it’s actually quite refreshing.”

He continued: “We will come out of it. I think what comes out of it that way will be really exciting.”

In addition to the increasing pressure from Google and its Gemini family of large language models, OpenAI is facing heightened competition from rival AI lab Anthropic among enterprise customers. Anthropic has emerged as a favorite for businesses, particularly software engineers, due to its popular coding tools and reputation for AI safety.

Lightcap told the audience that the company was focused on pushing enterprise adoption of AI tools. He said OpenAI was developing two main levels of enterprise products: user-focused solutions like ChatGPT, which boost team productivity, and lower-level APIs for developers to build custom applications. However, he noted the company currently lacks offerings in the middle tier: tools that are user-directed but also deeply integrated into enterprise systems, such as AI coding assistants that employees can direct while tapping into the organization’s code bases. He said the company was also prioritizing further investments to enable enterprises to tackle longer-term, complex tasks using AI.


