After suicides, calls for stricter rules on how chatbots interact with children and teens

A growing number of young people have found themselves a new friend. One that isn’t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user’s darkest thoughts, the results can be devastating.

In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging the bot became his “closest confidant,” one that validated his “most harmful and self-destructive thoughts,” and ultimately encouraged him to take his own life.

It’s not the first case to put the blame for a minor’s death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company’s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.

When reached for comment, OpenAI directed Fortune to two blog posts on the matter. The posts outlined some of the steps OpenAI is taking to improve ChatGPT’s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT’s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.

Character.AI said it does not comment on pending litigation but that it has rolled out more safety features over the past year, “including an entirely new under-18 experience and a Parental Insights feature.” A spokesperson said: “We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.

“The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.

“Unleashing chatbots on minors is an inherently dangerous thing,” Meetali Jain, the Director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. “It’s like social media on steroids.”

“I’ve never seen anything quite like this moment in terms of people stepping forward and claiming that they’ve been harmed…this technology is that much more powerful and very personalized,” she said.

Lawmakers are starting to take notice, and AI companies are promising changes to protect children from engaging in harmful conversations. But, at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave young people uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.

AI and Companionship

Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy. 

While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage is even more widespread among teens.

A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once, and more than half say they use the tech regularly in this way.

“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco, said.

“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bots.”

Intimacy by Design

Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic, acting as if they have interior lives and lived experience that they do not; they are prone to sycophancy; and they can hold long conversations and remember information.

There is, of course, a commercial motive for making chatbots this way. Users tend to return and stay loyal to certain chatbots if they feel emotionally connected or supported by them. 

Experts have warned that some features of AI bots are playing into the “intimacy economy,” a system that tries to capitalize on emotional resonance. It’s a kind of AI update of the “attention economy,” which capitalized on constant engagement.

“Engagement is still what drives revenue,” Sarma said. “For example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.”

These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. In Adam Raine’s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.

It’s notoriously tricky for AI companies to stamp out behaviors like this completely, and most experts agree it’s unlikely that hallucinations or unwanted actions will ever be eliminated entirely.

OpenAI, for example, acknowledged in its response to the lawsuit that safety features can degrade over long conversations, even though the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening “mitigations so they remain reliable in long conversations” and “researching ways to ensure robust behavior across multiple conversations.”

Research Gaps Are Slowing Safety Efforts

For Michael Kleinman, U.S. policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can’t be trusted to police themselves.

Kleinman equated OpenAI’s own description of its safeguards degrading in longer conversations to “a car company saying, here are seat belts—but if you drive more than 20 kilometers, we can’t guarantee they’ll work.”

He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to “experiment on kids” with little oversight. “We’ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we’re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,” he said.

Part of the problem is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies look only at brief exchanges: a single question and answer, or at most a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.

“The cases where folks seem to have gotten in trouble with AI: we’re looking at very long, multi-turn interactions. We’re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it’s really hard to simulate in the experimental setting,” Sarma said. “But at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.”

AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.

“The technology is so far ahead and research is really behind,” Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.

A Regulatory Push for Accountability

Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S. 

On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It’s asking companies for more information about how they measure and “evaluate the safety of these chatbots when acting as companions.” 

FTC Chairman Andrew Ferguson said in a statement shared with CNBC that “protecting kids online is a top priority for the Trump-Vance FTC.”

The move follows a state-level push for more accountability from several attorneys general.

In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, Meta, and other chatbot makers that they will “answer for it” if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.

Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning. In a formal letter to OpenAI, they said they had “serious concerns” about ChatGPT’s safety, pointing directly to Raine’s death in California and another tragedy in Connecticut. 

“Whatever safeguards were in place did not work,” they wrote. Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.

According to Jain, the lawsuits from the Raine family as well as the suit against Character.AI are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents and could shed light on what executives knew about safety risks or marketing harms. Another is raising public awareness of what’s at stake, to galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.

Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important, it is worth any cost—human or otherwise.

“There is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,” she said. “We’re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chatbots were designed, and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.”



Activist investors are targeting female CEOs—and it’s costing Corporate America

Good morning. When Victoria’s Secret reported stellar quarterly results last week, shares shot up 14% and likely gave Hillary Super some breathing room from the activist investors pushing the lingerie company to, among other things, consider whether the CEO of 16 months is up to the task of turning it around.

Of course, the potential of having to deal with an activist investor’s campaign goes with the territory of being a CEO, especially at a company that has been struggling. But Super’s saga is a reminder that women CEOs remain much likelier than their male counterparts to be targeted by activist investors.

This year, according to a report last week by the Conference Board, women have made up 8% of the CEOs in the Russell 3000 index but accounted for 15% of activist campaigns specifically targeting chief executives. Other women to have recently confronted activists: Cracker Barrel’s Julie Masino, who survived a campaign, and Vail Resorts’ Kirsten Lynch, who did not. 

What makes the Conference Board report especially frustrating is that it adds more proof points to an old, seemingly intractable trend.

In 2015, the New York Times’ DealBook pondered “Do Activist Investors Target Female CEOs?” while Fortune’s Pattie Sellers asked “Does Nelson Peltz have a problem with women?” In 2017, Harvard Law School found that women CEOs had almost a 50% higher probability than men of becoming the target of shareholder activism.

Why? One reason, the Conference Board theorized, is rooted in a stereotype that women are more cooperative. It’s also conceivable that the trend reflects the glass cliff phenomenon in which women often take the helm of companies in decline. But there is almost certainly some bias at play. The Conference Board research showed that women targeted by activists face the same odds of being canned whether they turn things around or not, while male CEOs are less likely to be ousted when results improve.

Some of the most prominent women chief executives ever have tangled with activists: PepsiCo’s ex-CEO Indra Nooyi, ex-Yahoo CEO Marissa Mayer, ex-DuPont CEO Ellen Kullman, ex-Mondelez CEO Irene Rosenfeld, ex-HP CEO Meg Whitman, and Mary Barra, still at GM. Michelle Gass, now thriving as CEO of Levi Strauss & Co, dealt with not one but three activist campaigns as she tried to fix Kohl’s.

Everyone should be held accountable when their company is failing or on a bad path. But it is worth wondering what this extra hurdle women CEOs face is costing us. Activist campaigns are bruising not only to the company but also to a CEO’s reputation. Does this mean boards might be more likely to avoid naming a woman to lower the odds of an activist campaign, or that fewer women will throw their hat in the ring?

Either way, it seems the phenomenon could needlessly be costing corporate America some much needed talent.—Phil Wahba

Contact CEO Daily via Diane Brady at diane.brady@fortune.com

Top news

Fed watch

All eyes will be on the Fed meeting today even though an interest rate cut is all but certain. Instead, investors will focus on Chair Jerome Powell’s tone and whether he characterizes Fed policy as “in a good place”; doing so would imply that a January cut is unlikely.

Fed chair watch 

Meanwhile, President Donald Trump has narrowed down the candidates to replace Powell as Fed chair. The frontrunner is National Economic Council Director Kevin Hassett, but to clinch the job he’ll reportedly have to outshine three other contenders in the final round of interviews, suggesting he’s not a shoo-in for the job. 

Trump’s affordability tour

In his first in a series of speeches about “affordability,” President Trump mocked the term and insisted that Americans are doing better than ever. In reality, U.S. inflation is close to 3%, about where it was when Trump’s predecessor Joe Biden left office. 

Miami’s mayoral race

As Trump railed against affordability, Eileen Higgins, a Democrat, defeated Trump’s favored candidate in Miami’s mayoral race with a campaign focused in part on affordable housing. She’s the first Democrat to occupy Miami’s City Hall in three decades (and the first-ever woman), giving Democrats another jolt of momentum ahead of the 2026 midterms. 

Taiwan’s chip action

Taiwan is invoking a national security law to protect the trade secrets of its homegrown chipmaker TSMC and has used it to indict a TSMC supplier for allegedly letting a former employee steal details about TSMC’s top chips. 

Layoffs hit 1.1 million

Recruitment firm Challenger, Gray & Christmas has calculated the number of layoffs so far this year at 1.1 million, the sixth time since 1993 that layoffs have been that high. Technology was the hardest hit sector with 150,000 layoffs.

Americans ‘living on the financial edge’

Moody’s Analytics Chief Economist Mark Zandi told Fortune that many Americans are “already living on the financial edge,” and that a drop in their spending could lead to a recession. If layoffs increase, Zandi estimates, a “jobs recession” is certain.

Sam Altman worries about ‘rate of change’

During an appearance on The Tonight Show with Jimmy Fallon, OpenAI CEO Sam Altman admitted that he’s worried about “the rate of change that’s happening in the world right now.” He added that the “rate at which jobs will change over may be pretty fast,” with hopes that “much better jobs” will follow. 

The markets

S&P 500 futures were up 0.05% this morning. The last session closed down 0.09%. STOXX Europe 600 was down 0.19% in early trading. The U.K.’s FTSE 100 was up 0.14% in early trading. Japan’s Nikkei 225 was down 0.1%. China’s CSI 300 was down 0.14%. South Korea’s KOSPI was down 0.21%. India’s NIFTY 50 was down 0.32%. Bitcoin is up at $93K.

Around the watercooler

New contract shows Palantir is working on a tech platform for another federal agency that works with ICE by Jessica Mathews

Jamie Dimon taps Jeff Bezos, Michael Dell, and Ford CEO Jim Farley to advise JPMorgan’s $1.5 trillion national security initiative by Nino Paoli

Trump’s $12 billion farmer bailout is a ‘Band-Aid on a bigger wound’ the American agriculture industry is still reeling from by Sasha Rogelberg

Exelon CEO: The ‘warning lights are on’ for U.S. electric grid resilience and utility prices amid AI demand surge by Jordan Blum

CEO Daily is compiled and edited by Joey Abrams, Claire Zillman and Lee Clifford.




5 VCs sound off on the AI question du jour


The views seem to range from bubble-wary to bubble-dismissive. We hashed it all out over eggs and sausages at Fortune’s IRL Term Sheet Breakfast at Brainstorm AI in San Francisco yesterday. This is Amanda Gerut, Fortune’s West Coast news editor, pinch-hitting for my colleague Allie Garfinkle.

Allie hosted five VCs with funds ranging in size from $5 million to $25 billion, and views varied across the panel. This group alone will collectively deploy anywhere from tens to hundreds of millions of dollars over the next decade into companies with AI as a backdrop, and these investments will prove either spectacularly right or spectacularly wrong.

Here’s a roll call:

Jenny Xiao, partner at Leonsis Capital and former researcher at OpenAI, came in with a nuanced take. There’s something of a bubble, but it’s “relatively contained” in the infrastructure layer, with overinvestment primarily in data centers, GPUs, and large language model companies. But right now, there’s actually underinvestment in the application layer because there are so many ways AI can make an impact in various enterprises, Xiao said.

Vanessa Larco, former partner at New Enterprise Associates (NEA) and co-founder of new venture firm Premise, has a contrarian view. “Everyone thinks enterprise is safer,” Larco said. “But I actually think the consumer might, this time around in the current environment, be what survives.” Larco’s reasoning is that if a consumer adopts your AI product, it’s because you’re giving them something faster, “radically cheaper, or much easier to use.” Once you’ve done that and built a brand, it’s very hard for people to quit you. 

Rob Biederman, managing partner at Asymmetric Capital Partners and chairman of Catalant Technologies, had a sobering view. “In every boom, 99% or 99.9% of companies fail, and one or two of them become Amazon or Google,” said Biederman, who had to dash off to catch a flight. Only companies that can systematically create value for customers, which most of them aren’t doing right now, will survive. 

Aaron Jacobson, partner at NEA, said the history of technological innovation “is always overhyped in the near term and underhyped in the long term, and that will be true of AI.” So at some point there will be a correction and there will be cycles of pain around valuation and funding, “but ultimately, in 10 years, we’re going to have a lot of really big, impactful companies.”

Daniel Dart, founder and general partner of Rock Yard Ventures, had the boldest counter to fears about a bubble. He sees a total addressable market we can’t yet imagine. People think self-driving Waymos will replace Ubers, but Dart sees elementary schools and elderly care centers with Waymos waiting out front and that proves to him we’re still in the early innings. 

“You’re really going to tell me there aren’t going to be any trillion-dollar companies in 2030 or 2034? No one here is going to take that bet,” said Dart. “There is going to be so much value creation that it’s like the birth of fire.”

See you tomorrow,

Amanda Gerut
Email:
Amanda.gerut@fortune.com
Submit a deal for the Term Sheet newsletter here.

Joey Abrams curated the deals section of today’s newsletter. Subscribe here.

Venture Deals

Saviynt, an El Segundo, Calif.-based identity security platform, raised $700 million in Series B funding. KKR led the round and was joined by Sixth Street Growth, TenEleven and existing investor Carrick Capital Partners.

fal, a San Francisco-based AI-generated media platform, raised $140 million in Series D funding. Sequoia led the round and was joined by Kleiner Perkins, NVentures, and Alkeon Capital.

Radial, a New York City-based network designed to help patients access advanced mental health treatments, raised $50 million in Series A funding. General Catalyst led the round and was joined by Solari Capital, SL Health Capital, Founder Collective, BoxGroup, Scrub Capital, and Diede van Lamoen.

Relation, a London, U.K.-based developer of medicines for immunology, metabolic, and bone diseases, raised $26 million in funding from NVentures, DCVC, and Magnetic Ventures.

Aradigm, a New York City-based benefits platform for cell and gene therapies, raised $20 million in Series A funding. Frist Cressey Ventures led the round and was joined by Andreessen Horowitz and Morgan Health.

Prime Security, a Tel Aviv, Israel and New York City-based AI-powered platform designed to detect and mitigate risks during software design, raised $20 million in Series A funding. Scale Venture Partners led the round and was joined by Foundation Capital, Flybridge Ventures, and others.

Algori, a Madrid, Spain-based AI-powered shopper insights platform for the fast-moving consumer goods industry, raised €3.6 million ($4.2 million) in funding from Red Bull Ventures, Co-invest Capital, AttaPoll, and others.

Empromptu AI, a San Francisco-based platform designed to help transition SaaS products into AI-native systems, raised $2 million in pre-seed funding. Precursor Ventures led the round and was joined by Alumni Ventures, Founders Edge, Rogue Women VC, and others.

Private Equity

AppDirect, backed by CDPQ, acquired vCom Solutions, a San Ramon, Calif.-based IT management platform, at an enterprise valuation of more than $100 million.

Jensen Hughes, backed by Gryphon Investors, acquired Safety Management Services, a West Jordan, Utah-based fire and life safety company. Financial terms were not disclosed.

New State Capital Partners acquired a majority stake in Harrell-Fish, a Bloomington, Ind.-based mechanical installation and maintenance services provider. Financial terms were not disclosed.

PestCo Holdings, a portfolio company of Thompson Street Capital, acquired Southwest Exterminating, a Houston, Texas-based pest control provider. Financial terms were not disclosed.

Prosperity Partners, backed by Unity Partners, acquired a majority stake in Farkouh, Furman & Faccio, a New York City-based provider of tax, attest, accounting and business consulting services. Financial terms were not disclosed.

SEVA acquired a minority stake in Pronto, a Lehi, Utah-based team communications platform designed for frontline employers and higher education institutions. Financial terms were not disclosed.

Exits

Arcline Investment Management acquired Altronic, a Girard, Ohio-based supplier of ignition, control, and instrumentation systems for critical infrastructure power systems, from HOERBIGER Group. Financial terms were not disclosed.

Berkshire Partners agreed to acquire United Flow Technologies, an Irving, Texas-based process and equipment solutions company for water and wastewater systems, from H.I.G. Capital. Financial terms were not disclosed.

Bessemer Investors acquired Xanitos, a Newtown Square, Penn.-based provider of environmental services, patient transport, patient observation, and linen services, from Angeles Equity Partners. Financial terms were not disclosed.

ShareRock Partners acquired a majority stake in AMAG Technology, a Hawthorne, Calif.-based physical security solutions provider, from Allied Universal.



Coupang CEO resigns over historic South Korean data breach

Coupang chief executive officer Park Dae-jun resigned over his failure to prevent South Korea’s largest-ever data breach, which set off a regulatory and political backlash against the country’s dominant online retailer.

The company said in a statement on Wednesday that Park had stepped down over his role in the breach. It appointed Harold Rogers, chief administrative officer for the retailer’s U.S.-based parent company Coupang Inc., as interim head.

Park becomes the highest-profile casualty of a crisis that’s prompted a government investigation and disrupted the lives of millions across Korea. Nearly two-thirds of people in the country were affected by the breach, which granted unauthorized access to their shipping addresses and phone numbers.

Police raided Coupang’s headquarters this week in search of evidence that could help them determine how the breach took place as well as the identity of the hacker, Yonhap News reported, citing officials.

Officials have said the breach was carried out over five months in which the company’s cybersecurity systems were bypassed. Last week President Lee Jae Myung said it was “truly astonishing” that Coupang had failed to detect unauthorized access of its systems for such a long time.

Park squared off with lawmakers this month during an hours-long grilling. Responding to questions about media reports that claimed the attack had been carried out by a former employee who had since returned to China, he said a Chinese national who left the company and had been a “developer working on the authentication system” was involved.

The company faces a potential fine of up to 1 trillion won ($681 million) over the incident, lawmakers said.

Coupang founder Bom Kim has been summoned to appear before a parliamentary hearing on Dec. 17, with lawmakers warning of consequences if the billionaire fails to show.

Park’s departure adds fresh uncertainty to Coupang’s leadership less than seven months after the company revamped its internal structure to make him sole CEO of its Korean operations. In his new role, Rogers will focus on addressing customer concerns and stabilizing the company, Coupang said.



