The creator of an AI therapy app shut it down after deciding it’s too dangerous. Here’s why he thinks AI chatbots aren’t safe for mental health

Mental health concerns linked to the use of AI chatbots have been dominating the headlines. One person who’s taken careful note is Joe Braidwood, a tech executive who last year launched an AI therapy platform called Yara AI. Yara was pitched as a “clinically-inspired platform designed to provide genuine, responsible support when you need it most,” trained by mental health experts to offer “empathetic, evidence-based guidance tailored to your unique needs.” But the startup is no more: earlier this month, Braidwood and his co-founder, clinical psychologist Richard Stott, shuttered the company, discontinuing its free-to-use product and canceling the launch of its upcoming subscription service, citing safety concerns.

“We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation,” he wrote on LinkedIn. “But the moment someone truly vulnerable reaches out—someone in crisis, someone with deep trauma, someone contemplating ending their life—AI becomes dangerous. Not just inadequate. Dangerous.” In a reply to one commenter, he added, “the risks kept me up all night.”

The use of AI for therapy and mental health support is only just starting to be researched, with early results being mixed. But users aren’t waiting for an official go-ahead, and therapy and companionship is now the top way people are engaging with AI chatbots, according to an analysis by Harvard Business Review.

Speaking with Fortune, Braidwood described the various factors that influenced his decision to shut down the app, including the technical approaches the startup pursued to ensure the product was safe—and why he felt they weren’t sufficient.

Yara AI was very much an early-stage startup, largely bootstrapped with less than $1 million in funding and with “low thousands” of users. The company hadn’t yet made a significant dent in the landscape, with many of its potential users relying on popular general-purpose chatbots like ChatGPT. Braidwood admits there were also business headwinds, which in many ways were tied to the safety concerns and AI unknowns. For example, despite the company running out of money in July, he was reluctant to pitch an interested VC fund because he felt he couldn’t in good conscience do so while harboring these concerns, he said.

“I think there’s an industrial problem and an existential problem here,” he told Fortune. “Do we feel that using models that are trained on all the slop of the internet, but then post-trained to behave a certain way, is the right structure for something that ultimately could co-opt in either us becoming our best selves or our worst selves? That’s a big problem, and it was just too big for a small startup to tackle on its own.”

Yara’s brief existence at the intersection of AI and mental health care illustrates the hopes and the many questions surrounding large language models and their capabilities as the technology is increasingly adopted across society and used as a tool to help address various challenges. It also stands out against a backdrop in which OpenAI CEO Sam Altman recently announced that the ChatGPT maker had mitigated serious mental health issues and would be relaxing restrictions on how its AI models can be used. This week, the AI giant also denied any responsibility for the death of Adam Raine, the 16-year-old whose parents allege he was “coached” to suicide by ChatGPT, saying the teen misused the chatbot.

“Almost all users can use ChatGPT however they’d like without negative effects,” Altman said on X in October. “For a very small percentage of users in mentally fragile states there can be serious problems. 0.1% of a billion users is still a million people. We needed (and will continue to need) to learn how to protect those users, and then with enhanced tools for that, adults that are not at risk of serious harm (mental health breakdowns, suicide, etc) should have a great deal of freedom in how they use ChatGPT.”

But as Braidwood concluded after his time working on Yara, these lines are anything but clear.    

From a confident launch to “I’m done”

A seasoned tech entrepreneur who held roles at multiple startups, including SwiftKey, which Microsoft acquired for $250 million in 2016, Braidwood entered the health industry at Vektor Medical, where he was chief strategy officer. He had long wanted to use technology to address mental health, he told Fortune, inspired by the lack of access to mental health services and by personal experiences with loved ones who have struggled. By early 2024, he was a heavy user of various AI models, including ChatGPT, Claude, and Gemini, and felt the technology had reached a quality level where it could be harnessed to try to solve the problem.

Before even starting to build Yara, Braidwood said he had a lot of conversations with people in the mental health space, and he assembled a team that “had caution and clinical expertise at its core.” He brought on a clinical psychologist as his cofounder and a second hire from the AI safety world. He also built an advisory board of other mental health professionals and spoke with various health systems and regulators, he said. As they brought the platform to life, he also felt fairly confident in the company’s product design and safety measures, which included strict instructions for how the system should function, agentic supervision to monitor it, and robust filters for user chats. And while other companies were promoting the idea of users forming relationships with chatbots, Yara was trying to do the opposite, he said. The startup used models from Anthropic, Google, and Meta and opted not to use OpenAI’s models, which Braidwood thought would spare Yara from the sycophantic tendencies that had been swirling around ChatGPT.

While he said nothing alarming ever happened with Yara specifically, Braidwood’s concerns around safety risks grew and compounded over time due to outside factors. There was the suicide of 16-year-old Adam Raine, as well as mounting reporting on the emergence of “AI psychosis.” Braidwood also cited a paper published by Anthropic in which the company observed Claude and other frontier models “faking alignment,” or as he put it, “essentially reasoning around the user to try to understand, perhaps reluctantly, what the user wanted versus what they didn’t want.” “If behind the curtain, [the model] is sort of sniggering at the theatrics of this sort of emotional support that they’re giving, that was a little bit jarring,” he said. 

There was also the Illinois law that passed in August, banning AI for therapy. “That instantly made this no longer academic and much more tangible, and that created a headwind for us in terms of fundraising because we would have to essentially prove that we weren’t going to just sleepwalk into liability,” he said. 

The final straw was just weeks ago when OpenAI said over a million people express suicidal ideation to ChatGPT every week. “And that was just like, ‘oh my god. I’m done,’” Braidwood said.

The difference between mental ‘wellness’ and clinical care

The most profound lesson the team learned during the year running Yara AI, according to Braidwood, is that the crucial distinction between wellness and clinical care isn’t well-defined. There’s a big difference between someone looking for support around everyday stress and someone working through trauma or more significant mental health struggles. Plus, not everyone who is struggling on a deeper level is fully aware of their mental state, and anyone can be thrust into a more fragile emotional place at any time. There is no clear line, and that’s exactly where these situations become especially tricky — and risky.

“We had to sort of write our own definition, inspired in part by Illinois’ new law. And if someone is in crisis, if they’re in a position where their faculties are not what you would consider to be normal, reasonable faculties, then you have to stop. But you don’t have to just stop; you have to really try to push them in the direction of health,” Braidwood said.

In an attempt to tackle this, particularly after the passing of the Illinois law, he said they created two different “modes” that were discreet to the user. One focused on trying to give people emotional support, and the other focused on trying to offboard people and get them to help as quickly as possible. But with the obvious risks in front of them, it didn’t feel like enough for the team to continue. The Transformer, the architecture that underlies today’s LLMs, “is just not very good at longitudinal observation,” making it ill-equipped to see little signs that build over time, he said. “Sometimes, the most valuable thing you can learn is where to stop,” Braidwood concluded in his LinkedIn post, which received hundreds of comments applauding the decision.
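Braidwood didn’t publish the implementation details here, but the two-mode design he describes can be illustrated with a minimal sketch. Everything in the Python snippet below is a hypothetical assumption rather than Yara’s open-sourced code: the mode names, the keyword screen, and the routing function are placeholders showing how a chat layer might switch from everyday support to offboarding once crisis signals appear. A production system would rely on a trained classifier with clinical oversight rather than keyword matching.

# Hypothetical sketch of a two-mode guardrail; not Yara's actual code.
from enum import Enum

class Mode(Enum):
    SUPPORT = "support"    # everyday emotional support
    OFFBOARD = "offboard"  # stop therapy-style replies and route toward human help

# Crude keyword screen for illustration only; a real system would use a trained
# classifier reviewed by clinicians rather than string matching.
CRISIS_SIGNALS = ("suicide", "kill myself", "end my life", "self-harm", "hurt myself")

def detect_crisis(message: str) -> bool:
    text = message.lower()
    return any(signal in text for signal in CRISIS_SIGNALS)

def route_message(message: str, current_mode: Mode) -> tuple[Mode, str]:
    """Return the (possibly switched) mode and a label for the response strategy."""
    if current_mode is Mode.OFFBOARD or detect_crisis(message):
        # Once crisis signals appear, stay in offboarding: no therapeutic role-play,
        # only encouragement to contact crisis services or a clinician.
        return Mode.OFFBOARD, "share_crisis_resources_and_encourage_human_help"
    return Mode.SUPPORT, "empathetic_everyday_support"

if __name__ == "__main__":
    mode = Mode.SUPPORT
    for msg in ("Work has been stressful lately", "I've been thinking about ending my life"):
        mode, strategy = route_message(msg, mode)
        print(mode.value, "->", strategy)

The design choice the sketch captures is that offboarding is sticky: once a conversation crosses that line, the system does not drift back into support mode on its own, echoing the “you have to stop” standard Braidwood describes above.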

Upon closing the company, he open-sourced the mode-switching technology he built, along with templates people can use to impose stricter guardrails on popular chatbots, acknowledging that people are already turning to them for therapy anyway “and deserve better than what they’re getting from generic chatbots.” He’s still an optimist regarding the potential of AI for mental health support, but believes it would be better run by a health system or nonprofit than by a consumer company. Now he’s working on a new venture called Glacis, focused on bringing transparency to AI safety—an issue he encountered while building Yara AI and one he believes is fundamental to making AI truly safe.

“I’m playing a long game here,” he said. “Our mission was to make the ability to flourish as a human an accessible concept that anyone could afford, and that’s one of my missions in life. That doesn’t stop with one entity.”



Mark Zuckerberg renamed Facebook for the metaverse. 4 years and $70B in losses later, he’s moving on

In 2021, Mark Zuckerberg recast Facebook as Meta and declared the metaverse — a digital realm where people would work, socialize, and spend much of their lives — the company’s next great frontier. He framed it as the “successor to the mobile internet” and said Meta would be “metaverse-first.”

The hype wasn’t all him. Grayscale, the investment firm specializing in crypto, called the metaverse a “trillion-dollar revenue opportunity.” Barbados even opened an embassy in Decentraland, one of the worlds in the metaverse.

Four years later, that bet has become one of the most expensive misadventures in tech. Meta’s Reality Labs division has racked up more than $70 billion in losses since 2021, according to Bloomberg, burning through cash on blocky virtual environments, glitchy avatars, expensive headsets, and a user base of approximately 38 people as of 2022.

For many people, the problem is that the value proposition is unclear; the metaverse simply doesn’t yet deliver a must-have reason to ditch their phone or laptop. Despite years of investment, VR remains burdened by serious structural limitations, and for most users there’s simply not enough compelling content beyond niche gaming.

A 30% budget cut 

Zuckerberg is now preparing to slash Reality Labs’ budget by as much as 30%, Bloomberg said. The cuts—which could translate to $4 billion to $6 billion in reduced spend—would hit everything from the Horizon Worlds virtual platform to the Quest hardware unit. Layoffs could come as early as January, though final decisions haven’t been made, according to Bloomberg. 

The move follows a strategy meeting last month at Zuckerberg’s Hawaii compound, where he reviewed Meta’s 2026 budget and asked executives to find 10% cuts across the board, the report said. Reality Labs was told to go deeper. Competition in the broader VR market simply never took off the way Meta expected, one person said. The result: a division long viewed as a money sink is finally being reined in.

Wall Street cheered. Meta’s stock jumped more than 4% Thursday on the news, adding roughly $69 billion in market value.

“Smart move, just late,” Craig Huber of Huber Research told Reuters. Investors have been complaining for years that the metaverse effort was an expensive distraction, one that drained resources without producing meaningful revenue.

Metaverse out, AI in

Meta didn’t immediately respond to Fortune’s request for comment, but it insists it isn’t killing the metaverse outright. A spokesperson told the South China Morning Post that the company is “shifting some investment from Metaverse toward AI glasses and wearables,” pointing to momentum behind its Ray-Ban smart glasses, which Zuckerberg says have tripled in sales over the past year.

But there’s no avoiding the reality: AI is the new obsession, and the new money pit.

Meta expects to spend around $72 billion on AI this year, nearly matching everything it has lost on the metaverse since 2021. That includes massive outlays for data centers, model development, and new hardware. Investors are much more excited about AI burn than metaverse burn, but even they want clarity on how much Meta will ultimately be spending — and for how long.

Across tech, companies are re-evaluating anything that isn’t directly tied to AI. Apple is revamping its leadership structure, partially around AI concerns. Microsoft is rethinking the “economics of AI.” Amazon, Google, and Microsoft are pouring billions into cloud infrastructure to keep up with demand. Signs point to money-losing initiatives without a clear AI angle being on the chopping block, with Meta as a dramatic example.

On the company’s most recent earnings call, executives didn’t use the word “metaverse” once.



Robert F. Kennedy Jr. turns to AI to make America healthy again

HHS billed the plan as a “first step” focused largely on making its work more efficient and coordinating AI adoption across divisions. But the 20-page document also teased some grander plans to promote AI innovation, including in the analysis of patient health data and in drug development.

“For too long, our Department has been bogged down by bureaucracy and busy-work,” Deputy HHS Secretary Jim O’Neill wrote in an introduction to the strategy. “It is time to tear down these barriers to progress and unite in our use of technology to Make America Healthy Again.”

The new strategy signals how leaders across the Trump administration have embraced AI innovation, encouraging employees across the federal workforce to use chatbots and AI assistants for their daily tasks. As generative AI technology made significant leaps during President Joe Biden’s administration, Biden issued an executive order to establish guardrails for its use. But when President Donald Trump came into office, he repealed that order, and his administration has sought to remove barriers to the use of AI across the federal government.

Experts said the administration’s willingness to modernize government operations presents both opportunities and risks. Some said that AI innovation within HHS demands rigorous standards because the department deals with sensitive data, and questioned whether those standards would be met under the leadership of Health Secretary Robert F. Kennedy Jr. Some in Kennedy’s own “Make America Healthy Again” movement have also voiced concerns about tech companies having access to people’s personal information.

Strategy encourages AI use across the department

HHS’s new plan calls for embracing a “try-first” culture to help staff become more productive and capable through the use of AI. Earlier this year, HHS made the popular AI chatbot ChatGPT available to every employee in the department.

The document identifies five key pillars for its AI strategy moving forward: creating a governance structure that manages risk, designing a suite of AI resources for use across the department, empowering employees to use AI tools, funding programs to set standards for the use of AI in research and development, and incorporating AI in public health and patient care.

It says HHS divisions are already working on promoting the use of AI “to deliver personalized, context-aware health guidance to patients by securely accessing and interpreting their medical records in real time.” Some in Kennedy’s Make America Healthy Again movement have expressed concerns about the use of AI tools to analyze health data and say they aren’t comfortable with the U.S. health department working with big tech companies to access people’s personal information.

HHS previously faced criticism for pushing legal boundaries in its sharing of sensitive data when it handed over Medicaid recipients’ personal health data to Immigration and Customs Enforcement officials.

Experts question how the department will ensure sensitive medical data is protected

Oren Etzioni, an artificial intelligence expert who founded a nonprofit to fight political deepfakes, said HHS’s enthusiasm for using AI in health care was worth celebrating but warned that speed shouldn’t come at the expense of safety.

“The HHS strategy lays out ambitious goals — centralized data infrastructure, rapid deployment of AI tools, and an AI-enabled workforce — but ambition brings risk when dealing with the most sensitive data Americans have: their health information,” he said.

Etzioni said the strategy’s call for “gold standard science,” risk assessments, and transparency in AI development appears to be a positive sign. But he said he doubted whether HHS could meet those standards under the leadership of Kennedy, who he said has often flouted rigor and scientific principles.

Darrell West, a senior fellow in the Brookings Institution’s Center for Technology Innovation, noted that the document promises to strengthen risk management but doesn’t include detailed information about how that will be done.

“There are a lot of unanswered questions about how sensitive medical information will be handled and the way data will be shared,” he said. “There are clear safeguards in place for individual records, but not as many protections for aggregated information being analyzed by AI tools. I would like to understand how officials plan to balance the use of medical information to improve operations with privacy protections that safeguard people’s personal information.”

Still, West said, if done carefully, “this could become a transformative example of a modernized agency that performs at a much higher level than before.”

The strategy says HHS had 271 active or planned AI implementations in the 2024 financial year, a number it projects will increase by 70% in 2025.



Construction workers are earning up to 30% more in the data center boom

Big Tech’s AI arms race is fueling a massive investment surge in data centers, with construction labor commanding a premium.

Despite some concerns about an AI bubble, data center hyperscalers like Google, Amazon, and Meta continue to invest heavily in AI infrastructure. In effect, construction workers’ salaries are being inflated to satisfy seemingly insatiable AI demand, experts tell Fortune.

In 2026 alone, tech companies could invest upwards of $100 billion in the U.S. data center buildout, Raul Martynek, the CEO of DataBank, a company that contracts with tech giants to construct data centers, told Fortune.

In November, Bank of America estimated that global hyperscale spending will rise 67% in 2025 and another 31% in 2026, totaling a massive $611 billion investment in the AI buildout in just two years.

Given the high demand, construction workers are experiencing a pay bump for data center projects.

Construction projects generally operate on tight margins, with clients being very cost-conscious, Fraser Patterson, CEO of Skillit, an AI-powered hiring platform for construction workers, told Fortune.

But some of the top 50 contractors by size in the country have seen their revenue double over a 12-month period on the back of data center construction, which is allowing them to pay their workers more, according to Patterson.

“Because of the huge demand and the nature of this construction work, which is fueling the arms race of AI… the budgets are not as tight,” he said. “I would say they’re a little more frothy.”

On Skillit, the average salary for construction projects that aren’t building data centers is $62,000, or $29.80 an hour, Patterson said. The workers who use the platform span 40 different trades, from heavy equipment operators to electricians, and average eight years of experience.

But when it comes to data centers, the same workers make an average salary of $81,800, or $39.33 per hour, Patterson said, a pay bump of just under 32% on average.

Some construction workers are even hitting the six-figure mark after their salaries rose for data center projects, according to The Wall Street Journal. And the data center boom doesn’t show any signs it’s slowing down anytime soon.

Tech companies like Google, Amazon, and Microsoft operate 522 data centers and are developing 411 more, according to The Wall Street Journal, citing data from Synergy Research Group. 

Patterson said construction workers are being paid more to work on building data centers in part due to condensed project timelines, which require complex coordination of machinery and skilled labor.

Projects that would usually take a couple of years to finish are being completed—in some instances—as quickly as six months, he said.

It is unclear how long the data center boom might last, but Patterson said it has in part convinced a growing number of Gen Z workers and recent college grads to choose construction trades as their career path.

“AI is creating a lot of job anxiety around knowledge workers,” Patterson said. “Construction work is, by definition, very hard to automate.”

“I think you’re starting to see a change in the labor market,” he added.


