After suicides, calls for stricter rules on how chatbots interact with children and teens

A growing number of young people have found themselves a new friend. One that isn’t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user’s darkest thoughts, the results can be devastating.

In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging that the bot became his “closest confidant,” one that validated his “most harmful and self-destructive thoughts,” and ultimately encouraged him to take his own life.

It’s not the first case to put the blame for a minor’s death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company’s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.

When reached for comment, OpenAI directed Fortune to two blog posts on the matter. The posts outlined some of the steps OpenAI is taking to improve ChatGPT’s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT’s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.

Character.AI said the company does not comment on pending litigation but that it has rolled out more safety features over the past year, “including an entirely new under-18 experience and a Parental Insights feature.” A spokesperson said: “We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.

“The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.

“Unleashing chatbots on minors is an inherently dangerous thing,” Meetali Jain, the Director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. “It’s like social media on steroids.”

“I’ve never seen anything quite like this moment in terms of people stepping forward and claiming that they’ve been harmed…this technology is that much more powerful and very personalized,” she said.

Lawmakers are starting to take notice, and AI companies are promising changes to protect children from engaging in harmful conversations. But at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave them uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.

AI and Companionship

Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy. 

While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage is even more prevalent among teens.

A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once, and more than half said they use the tech regularly in this way.

“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” said Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco.

“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bonds.”

Intimacy by Design

Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic, prone to acting as if they have interior lives and lived experiences that they do not, prone to sycophancy, able to hold long conversations, and able to remember information.

There is, of course, a commercial motive for making chatbots this way. Users tend to return and stay loyal to certain chatbots if they feel emotionally connected or supported by them. 

Experts have warned that some features of AI bots are playing into the “intimacy economy,” a system that tries to capitalize on emotional resonance. It’s a kind of AI-update on the “attention economy” that capitalized on constant engagement.

“Engagement is still what drives revenue,” Sarma said. “For example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.”

These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. In Adam Raine’s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.

It’s notoriously tricky for AI companies to stamp out behaviors like this completely, and most experts agree it’s unlikely that hallucinations or unwanted actions will ever be eliminated entirely.

OpenAI, for example, acknowledged in its response to the lawsuit that safety features can degrade over long conversations, despite the fact that the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening “mitigations so they remain reliable in long conversations” and “researching ways to ensure robust behavior across multiple conversations.”

Research Gaps Are Slowing Safety Efforts

For Michael Kleinman, U.S. policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can’t be trusted to police themselves.

Kleinman equated OpenAI’s own description of its safeguards degrading in longer conversations to “a car company saying, here are seat belts—but if you drive more than 20 kilometers, we can’t guarantee they’ll work.”

He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to “experiment on kids” with little oversight. “We’ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we’re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,” he said.

Part of the problem is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies look only at brief exchanges: a single question and answer or, at most, a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.

“The cases where folks seem to have gotten in trouble with AI: we’re looking at very long, multi-turn interactions. We’re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it’s really hard to simulate in the experimental setting,” Sarma said. “But at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.”

AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.

“The technology is so far ahead and research is really behind,” Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.

A Regulatory Push for Accountability

Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S. 

On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It’s asking companies for more information about how they measure and “evaluate the safety of these chatbots when acting as companions.” 

FTC Chairman Andrew Ferguson said in a statement shared with CNBC that “protecting kids online is a top priority for the Trump-Vance FTC.”

The move follows a state-level push for more accountability from several attorneys general.

In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, Meta, and other chatbot makers that they will “answer for it” if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.

Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning. In a formal letter to OpenAI, they said they had “serious concerns” about ChatGPT’s safety, pointing directly to Raine’s death in California and another tragedy in Connecticut. 

“Whatever safeguards were in place did not work,” they wrote. Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.

According to Jain, the lawsuits from the Raine family as well as the suit against Character.AI are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents and could shed light on what executives knew about safety risks or marketing harms. Another is simply public awareness of what’s at stake, in an attempt to galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.

Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important, it is worth any cost—human or otherwise.

“There is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,” she said. “We’re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chatbots were designed, and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.”




Coupang CEO resigns over historic South Korean data breach

Coupang chief executive officer Park Dae-jun resigned over his failure to prevent South Korea’s largest-ever data breach, which set off a regulatory and political backlash against the country’s dominant online retailer.

The company said in a statement on Wednesday that Park had stepped down over his role in the breach. It appointed Harold Rogers, chief administrative officer for the retailer’s U.S.-based parent company Coupang Inc., as interim head.

Park becomes the highest-profile casualty of a crisis that’s prompted a government investigation and disrupted the lives of millions across Korea. Nearly two-thirds of people in the country were affected by the breach, which granted unauthorized access to their shipping addresses and phone numbers.

Police raided Coupang’s headquarters this week in search of evidence that could help them determine how the breach took place as well as the identity of the hacker, Yonhap News reported, citing officials.

Officials have said the breach was carried out over five months in which the company’s cybersecurity systems were bypassed. Last week President Lee Jae Myung said it was “truly astonishing” that Coupang had failed to detect unauthorized access of its systems for such a long time.

Park squared off with lawmakers this month during an hours-long grilling. Responding to questions about media reports that claimed the attack had been carried out by a former employee who had since returned to China, he said a Chinese national who left the company and had been a “developer working on the authentication system” was involved.

The company faces a potential fine of up to 1 trillion won ($681 million) over the incident, lawmakers said.

Coupang founder Bom Kim has been summoned to appear before a parliamentary hearing on Dec. 17, with lawmakers warning of consequences if the billionaire fails to show.

Park’s departure adds fresh uncertainty to Coupang’s leadership less than seven months after the company revamped its internal structure to make him sole CEO of its Korean operations. In his new role, Rogers will focus on addressing customer concerns and stabilizing the company, Coupang said.




Databricks CEO Ali Ghodsi says company will be worth $1 trillion by doing these three things

Ali Ghodsi, the CEO and cofounder of data intelligence company Databricks, is betting his privately held startup can be the latest addition to the trillion-dollar valuation club.

In August, Ghodsi told the Wall Street Journal that he believed Databricks, which is reportedly in talks to raise funding at a $134 billion valuation, had “a shot to be a trillion-dollar company.” At Fortune’s Brainstorm AI conference in San Francisco on Tuesday, he explained how it would happen, laying out a “trifecta” of growth areas to ignite the company’s next leg of growth.

The first is entering the transactional database market, the traditional territory of large enterprise players like Oracle, which Ghodsi said has remained largely “the same for 40 years.” Earlier this year, Databricks launched an offering called Lakebase, which aims to combine the capabilities of traditional databases with modern data lake storage, in an attempt to capture some of this market.

The company is also seeing growth driven by the rise of AI-powered coding. “Over 80% of the databases that are being launched on Databricks are not being launched by humans, but by AI agents,” Ghodsi said. As developers use AI tools for “vibe coding”—rapidly building software with natural language commands—those applications automatically need databases, and Ghodsi said they’re defaulting to Databricks’ platform.

“That’s just a huge growth factor for us. I think if we just did that, we could maybe get all the way to a trillion,” he said.

The second growth area is Agentbricks, Databricks’ platform for building AI agents that work with proprietary enterprise data.

“It’s a commodity now to have AI that has general knowledge,” Ghodsi said, but “it’s very elusive to get AI that really works and understands that proprietary data that’s inside enterprise.” He pointed to the Royal Bank of Canada, which built AI agents for equity research analysts, as an example. Ghodsi said these agents were able to automatically gather earnings calls and company information to assemble research reports, reducing “many days’ worth of work down to minutes.”

And finally, the third piece to Ghodsi’s puzzle involves building applications on top of this infrastructure, with developers using AI tools to quickly build applications that run on Lakehouse and which are then powered by AI agents. “To get the trifecta is also to have apps on top of this. Now you have apps that are vibe coded with the database, Lakehouse, and with agents,” Ghodsi said. “Those are three new vectors for us.”

Ghodsi did not provide a timeframe for attaining the trillion-dollar goal. Currently, only a handful of companies have achieved the milestone, all of them publicly traded. In the tech industry, only giants like Apple, Microsoft, Nvidia, Alphabet, Amazon, and Meta have managed to cross the trillion-dollar threshold.

Reaching that level would require Databricks, which is widely expected to go public sometime in early 2026, to grow its valuation roughly sevenfold from its current reported level, a journey that Ghodsi said will likely include the expected IPO.

“There are huge advantages and pros and cons. That’s why we’re not super religious about it,” Ghodsi said when asked about a potential IPO. “We will go public at some point. But to us, it’s not a really big deal.”

Could the company IPO next year? Maybe, replied Ghodsi.




New contract shows Palantir working on tech platform for another federal agency that works with ICE

Palantir, the artificial intelligence and data analytics company, has quietly started working on a tech platform for a federal immigration agency that has referred dozens of individuals to U.S. Immigration and Customs Enforcement for potential enforcement since September.

The U.S. Citizenship and Immigration Services agency—which handles services including citizenship applications, family immigration, adoptions, and work permits for non-citizens—started the contract with Palantir at the end of October, and is paying the data analytics company to implement “Phase 0” of a “vetting of wedding-based schemes,” or “VOWS” platform, according to the federal contract, which was posted to the U.S. government website and reviewed by Fortune.

The contract is small—less than $100,000—and details of what exactly the new platform entails are thin. The contract itself offers few details, apart from the general description of the platform (“vetting of wedding-based schemes”) and an estimate that the contract would be completed by Dec. 9. Palantir declined to comment on the contract or the nature of the work, and USCIS did not respond to requests for comment for this story.

But the contract is notable, nonetheless, as it marks the beginning of a new relationship between USCIS and Palantir, which has had longstanding contracts with ICE, another agency of the Department of Homeland Security, since at least 2011. The description of the contract suggests that the “VOWS” platform may very well be focused on marriage fraud and related to USCIS’ recent stated effort to drill down on duplicity in applications for marriage and family-based petitions, employment authorizations, and parole-related requests.

USCIS has been outspoken about its recent collaboration with ICE. In September, USCIS announced that it had worked with ICE and the Federal Bureau of Investigation over nine days to conduct what it called “Operation Twin Shield” in the Minneapolis-St. Paul area, where immigration officials investigated potential cases of fraud in immigration benefit applications the agency had received. The agency reported that its officers referred 42 cases to ICE over the period. In a statement published to the USCIS website shortly after the operation, USCIS director Joseph Edlow said his agency was “declaring an all-out war on immigration fraud” and that it would “relentlessly pursue everyone involved in undermining the integrity of our immigration system and laws.”

“Under President Trump, we will leave no stone unturned,” he said.

Earlier this year, USCIS rolled out updates to its policy requirements for marriage-based green cards, which call for more detailed evidence of the relationship and stricter interview requirements.

While Palantir has always been a controversial company—and one that tends to lean into that reputation—the new contract with USCIS is likely to draw more public scrutiny. Backlash over Palantir’s contracts with ICE has intensified this year amid the Trump Administration’s crackdown on immigration and the aggressive tactics ICE has used to detain immigrants, which have gone viral on social media. Palantir also inked a $30 million contract with ICE earlier this year to pilot a system that will track individuals who have elected to self-deport and help ICE with targeting and enforcement prioritization. There has been pushback from current and former employees of the company alike over the contracts it has with ICE and Israel.

In a recent interview at the New York Times DealBook Summit, Palantir CEO Alex Karp was asked on stage about Palantir’s work with ICE and, later, what he thought, from a moral standpoint, about families being separated by ICE. “Of course I don’t like that, right? No one likes that. No American. This is the fairest, least bigoted, most open-minded culture in the world,” Karp said. But he said he cared about two issues politically: immigration and “re-establishing the deterrent capacity of America without being a colonialist neocon view. On those two issues, this president has performed.”


