Business
Medically assisted suicide to become law in New York State
Published 21 hours ago
By Jace Porter
New York is set to become the latest state to legalize medically assisted suicide for the terminally ill under a deal reached between the governor and state legislative leaders announced Wednesday.
In an op-ed in the Albany Times Union, Gov. Kathy Hochul announced she will sign the proposal after she made an agreement with lawmakers to include a series of “guardrails” in the measure.
Hochul, a Catholic, said she came to the decision after hearing from New Yorkers in the “throes of pain and suffering,” as well as their children, while also considering opposition from “individuals of many faiths who believe that deliberately shortening one’s life violates the sanctity of life.”
“I was taught that God is merciful and compassionate, and so must we be,” she wrote. “This includes permitting a merciful option to those facing the unimaginable and searching for comfort in their final months in this life.”
According to advocates, a dozen other states and the District of Columbia have laws allowing medically assisted suicide, including an Illinois law signed last week that takes effect next year.
New York’s Medical Aid in Dying Act requires that a terminally ill person who is expected to die within six months make a written request for life-ending drugs. Two witnesses would have to sign the request to ensure that the patient is not being coerced. The request would then have to be approved by the person’s attending physician as well as a consulting physician.
The governor said the bill’s sponsors and legislative leaders have agreed to add provisions to require confirmation from a medical doctor that the person “truly had less than six months to live,” along with confirmation from a psychologist or psychiatrist that the patient is capable of making the decision and is not under duress.
Hochul also said the bill will include a mandatory five-day waiting period as well as a written and recorded oral request to “confirm free will is present.” Outpatient facilities associated with religious hospitals may elect not to offer the option.
She added that she wants the bill to apply only to New York residents. Earlier this month, a federal appeals court ruled that a similar law in New Jersey applies only to residents of that state and not those from beyond its borders.
Hochul said she will sign the bill into law next year, with her changes woven into the proposal. It will go into effect six months after it is signed.
The legislation was first introduced in 2016 but stalled for years amid opposition from the New York State Catholic Conference and other groups. The Catholic organization argued the measure would devalue human life and undermine the physician’s role as a healer.
In a statement after the governor’s announcement, Cardinal Timothy Dolan and New York’s bishops said Hochul’s position “signals our government’s abandonment of its most vulnerable citizens, telling people who are sick or disabled that suicide in their case is not only acceptable, but is encouraged by our elected leaders.”
New York lawmakers approved the legislation during their regular session earlier this year. Supporters said it would reduce suffering for terminally ill people and let them die on their own terms.
Business
The AI efficiency illusion: why cutting 1.1 million jobs will stifle, not scale, your strategy
Published December 18, 2025
By Jace Porter
We are witnessing a false dawn of efficiency. Throughout 2025, corporate America has engaged in a frantic restructuring of the labor market, cutting more than 1.17 million jobs in the first 11 months of the year, a 54% increase from 2024. From the 14,000 corporate cuts at tech giants like Amazon to the nearly 300,000 federal civil service reductions, the narrative driving this contraction is uniform: we are shedding excess labor to make room for the streamlined, high-margin future of artificial intelligence.
But the data tells a different story. This is not a calculated pivot toward higher productivity. It is a hollowing-out strategy that trades immediate payroll savings for a catastrophic erosion of human capital. By viewing AI as a mechanism for replacement rather than augmentation, leaders are incurring a strategic debt that will erase future value, stifle innovation, and, crucially, institutionalize the kind of algorithmic bias that costs companies billions.
We are trying to build the future of work by burning down the infrastructure required to support it.
The Mathematics of the Hollowed-Out Workforce
The prevailing logic in the C-Suite is a simple subtraction equation: lower headcount plus automated tools equals higher margins. However, this ignores the negative externalities imposed on the workforce that remains.
While companies explicitly cited AI for roughly 55,000 cuts through November, far more losses are buried under the umbrella of “restructuring,” which accounted for over 128,000 cuts. Expert estimates suggest the true automation-influenced displacement is likely above 150,000. But the real cost isn’t on the severance line item; it is in the collapse of productivity among the survivors.
Seventy-four percent of employees who survive layoffs report a decline in their own productivity, while 77% witness an increase in operational errors. This phenomenon, often called the layoff survivor syndrome, is a drag on performance fueled by anxiety and the erosion of institutional trust. Volatility sends a signal to your top performers: leave before you are pushed out.
When companies cut costs by eliminating human capacity, they don’t get a leaner organization; they get an anxious, risk-averse, and error-prone one. The so-called productivity equation turns negative because the marginal productivity of the retained workforce plummets faster than the payroll costs decline.
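The claim above can be made concrete with a back-of-the-envelope model. Every number in this sketch is hypothetical (headcount, costs, and the size of the survivor drag are assumptions for illustration, not figures from the article); it simply shows how a plausible productivity decline among survivors can swamp the payroll savings from a cut:

```python
# Hypothetical illustration of why the "subtraction equation" can turn
# negative. All figures below are invented for the sketch; the article
# supplies only the qualitative claim that survivor productivity drops.

headcount_before = 1000
layoffs = 150                       # a 15% reduction in force (assumed)
avg_fully_loaded_cost = 120_000     # $/employee/year (assumed)
avg_output_value = 150_000          # $ of value produced per employee (assumed)

# Direct savings from the cut
payroll_savings = layoffs * avg_fully_loaded_cost

# Suppose the 850 survivors lose 15% of their output to
# "layoff survivor syndrome" (an assumed magnitude).
survivors = headcount_before - layoffs
survivor_drag = 0.15
lost_output = survivors * avg_output_value * survivor_drag

net_effect = payroll_savings - lost_output
print(f"Payroll savings: ${payroll_savings:>12,.0f}")
print(f"Lost output:     ${lost_output:>12,.0f}")
print(f"Net effect:      ${net_effect:>12,.0f}")  # negative: the cut destroys value
```

Under these assumptions the company saves $18 million in payroll but forfeits about $19.1 million in output, for a net loss; the result flips sign entirely depending on how large the survivor drag really is, which is exactly the variable most restructuring plans ignore.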
The Tech-First Trap and the Compliance Gap
This productivity collapse is compounded by a fundamental misunderstanding of how AI generates value. While 85% of organizations are increasing their AI investment, only 6% are seeing a payback in under a year.
Why the disconnect? Much of the answer lies in implementation. A staggering 59% of organizations are taking a technology-first approach, treating AI as a bolt-on solution rather than undertaking organizational redesign. Even more alarming is where the cuts are happening: the 2025 layoffs are disproportionately targeting mid-layer management, including HR, talent acquisition, and compliance roles.
The result is a growing governance gap. At the exact moment companies are deploying black-box algorithms that require intense oversight, they are firing the overseers. Thirty-four percent of organizations already expect a shortage of specialist compliance skills. By dismantling these internal guardrails, companies are not streamlining; they are removing the ethical braking systems required to prevent reputational and financial ruin.
AI is not a replacement for human judgment; it is an accelerator of it. But you cannot accelerate what you have already liquidated.
The Equity Penalty
Here is where the economic argument becomes inseparable from the equity argument. The hollowing out of 2025 has not been neutral. It has systematically targeted the very demographics that drive financial outperformance.
The data reveal a profound asymmetry in risk exposure. Women are significantly more vulnerable to the current wave of automation, with 79% of employed women concentrated in high-risk occupations compared to 58% of men. This differential means women are 1.4 times more exposed to displacement. We see this specifically in the hollowing out of critical pipeline positions that enable women to ascend to leadership.
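The 1.4x figure follows directly from the two exposure rates quoted above; a quick check of the arithmetic:

```python
# Relative automation exposure, using the rates quoted in the article:
# 79% of employed women vs. 58% of employed men work in
# occupations classified as high-risk.
women_high_risk = 0.79
men_high_risk = 0.58

relative_exposure = women_high_risk / men_high_risk
print(f"Women are {relative_exposure:.2f}x as exposed as men")
```

The ratio works out to about 1.36, which the article rounds to 1.4.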
However, the canary in the coal mine for the broader economy is the crisis facing Black women. By November 2025, the unemployment rate for Black women remained at a staggering 7.1%, more than double the 3.4% rate for White women. This was driven by a perfect storm: high exposure to private sector automation combined with the erasure of 300,000 federal jobs, a sector where Black women have historically found stability.
The reality on the ground confirms this is a systemic failure, not a skills gap. Keisha Bross, Director of Opportunity, Race and Justice at the NAACP, reports that she has “not seen interventions happening” to support this displaced workforce. The result? At recent NAACP job fairs, 80% of applicants held bachelor’s degrees yet were lining up for same-day interviews for low-wage roles. We are witnessing the hollowing out of the Black middle class in real time.
Leaders often view these statistics as a social problem. They are wrong. This is a P&L problem.
There is a hard, quantitative link between intersectional equity and revenue. Research across more than 4,000 companies in 29 countries shows that for every 10% increase in intersectional gender equity, there is a 1% to 2% increase in revenue. Venture capital data further reinforces this, showing that investments in female-founded startups yield a 63% better return on investment than those with male founders. By allowing layoffs to disproportionately target women and people of color, companies are forfeiting a measurable economic dividend.
The Algorithmic Risk Multiplier
The financial danger of a homogenous workforce extends directly into the AI models themselves. If your AI team and your data sources lack diversity, your algorithms will be biased. This is no longer a theoretical risk—it is a tangible liability.
More than one-third of organizations have already suffered negative impacts from AI bias, with 62% reporting lost revenue and 61% reporting lost customers. The legal doctrine of disparate impact creates massive liability for companies whose algorithms discriminate in hiring or lending, regardless of intent.
This tension is starkly visible. On one side, we have the nation’s largest civil rights organization, the NAACP, flagging systemic risk. On the other, we have tech giants like Google and Meta, recently crowned Time’s ‘Person of the Year’, which landed on the NAACP’s Consumer Advisory List by rolling back the very protections meant to ensure that the AI revolution is equitable. The contradiction is not ideological; it’s economic: these companies risk alienating a demographic with $1.7 trillion in annual buying power. When you remove the diverse talent capable of spotting bias, and the compliance officers capable of reporting it, you guarantee that your AI products will be flawed, biased, and ultimately, litigated.
A Framework for Human-Centric ROI
To reverse this erosion of value, executives must stop viewing labor as a cost to be minimized and start viewing work design as the primary investment vehicle for AI success.
1. Governance as a Profit Center
AI governance must move from the server room to the boardroom. Boards must include members with the technical literacy to challenge management on model stability and data quality. We must recognize that responsible AI unlocks value and accelerates development by ensuring reliability.
2. Redesign: From Automation to Augmentation
We must shift our strategy from automation (replacing heads) to augmentation (increasing value). Data shows that job numbers actually grow in AI-exposed fields when companies focus on augmentation. This requires a massive investment in skilling, specifically targeting the non-degree holders who are 3.5 times more likely to lose their jobs.
3. Equity as a Growth Engine
Finally, we must embed intersectional equity into the core business strategy. This means using advanced analytics to monitor the talent lifecycle and ensure that restructuring efforts do not decimate the diversity pipeline. It means recognizing that the $12 trillion global economic opportunity of gender equity is only accessible if we actively retain women in the workforce.
The Choice
The 1.17 million layoffs of 2025 represent a fork in the road.
One path leads to a hollowed-out future: a short-term spike in cash flow followed by a long-term decline in innovation, a rise in algorithmic liability, and a workforce paralyzed by fear.
The other path recognizes that in the age of AI, humanity is the premium asset. It acknowledges that the only way to capture the exponential ROI of automation is to pair it with a diverse, resilient, and empowered human workforce.
You can cut your way to a quarterly profit, but you cannot cut your way to the future. True productivity requires us to stop subtracting humans and start solving for the convergence of equity, economics, and engineering.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
Business
Salient’s AI boom: How the two-year-old startup is building a company to survive the bubble burst
Published December 18, 2025
By Jace Porter
Ari Malik doesn’t spend much time worrying about AI hype cycles. While Silicon Valley debated the philosophy of artificial general intelligence, Malik was building something far more sustainable, prosaic—and profitable—from his bedroom: a system to help repo men and loan officers collect debt. Alongside co-founder Mukund Tibrewala, Malik set out to automate one of the most grueling, regulated, and high-turnover corners of finance.
Two years later, that focus has paid off. Malik is now the CEO of Salient, a vertical AI startup that has quietly become a force in fintech by taking on loan servicing. The company’s software automates everything from collections calls to payment processing for auto lenders, a function historically dominated by call centers and manual workflows.
“This is an area of the economy that has so been left behind by technology, and that consumers are, by and large, left to fend for themselves, that they often don’t know their rights, they often don’t know their processes,” he told Fortune. “And so we thought there’s a huge potential here for AI to be like a 10x solution, rather than a 20 to 30% improvement.”
Salient’s growth has been swift but conservative (at least, in the context of the AI bubble). Just 18 months after inception, Salient raised $60 million in a Series A round led by Andreessen Horowitz, reaching a valuation of $350 million as of June 2025. Malik told Fortune that Salient’s annualized recurring revenue has now surged past $25 million—nearly double the $14 million figure reported six months ago. Investors have continued to lean in. Insiders say the company has since raised an additional $10 million, pushing its valuation to around $500 million.
There’s no shortage of rapid-rise ARR numbers out there (some more reliable than others). Where Salient stands out, however, is in its retention. Malik says the company has never churned a customer and has converted 100% of its pilots into paid deals, even as average B2B churn across the industry approaches 5% annually and, for AI financial tools and fintech, ranges from 22% to 76% annually.
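To see why that churn gap matters so much, it helps to compound it: a constant annual churn rate shrinks a customer base geometrically. A minimal sketch using the rates quoted above (the 100-customer starting base and three-year horizon are assumptions for illustration):

```python
def customers_remaining(start: float, annual_churn: float, years: int) -> float:
    """Customers left after `years` of a constant annual churn rate."""
    return start * (1 - annual_churn) ** years

start = 100
# 5% = benchmark B2B churn; 22% and 76% = the fintech-AI range cited
for churn in (0.05, 0.22, 0.76):
    left = customers_remaining(start, churn, years=3)
    print(f"{churn:.0%} annual churn -> ~{left:.0f} of {start} customers after 3 years")
```

At benchmark B2B churn a company keeps roughly 86 of 100 customers after three years; at the high end of the fintech-AI range it keeps about one. Zero churn, if sustained, compounds in the opposite direction.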
AI fintech products have struggled especially with churn because of the regulatory and compliance concerns intrinsic to the industry they serve. Salient, Malik says, has managed to instill confidence in financial institutions and clients by demonstrating the model’s track record: according to him, Salient’s AI agents have achieved compliance rates 30 times better than human agents.
This documented success has not gone unnoticed by customers. Salient’s usage retention is “very high” and its clients, Malik said, are constantly doubling down month-over-month and year-over-year.
The next chapter for Salient, Malik argues, extends far beyond signing more lenders—though Salient already works with more than five of the top ten auto lenders. The company is now processing millions of calls per day, and has already processed more than $1 billion in transactions, a signal of both demand and the scale of the problem it is targeting. Each year, roughly $800 billion in new auto debt is issued in the U.S., and nearly 80% of U.S. households have some debt. Lenders spend an estimated $20 billion to $30 billion just servicing that debt, paying humans to make phone calls, send letters, and negotiate payments, according to Malik.
Salient’s ambition is to capture that spend by becoming what Malik calls the “autonomous system of record”—software that can manage the entire lifecycle of a loan, from origination to payoff, without human intervention.
“We think making servicing a fully touchless process is on the table, and we want to get to it as fast as humanly possible,” Malik says.
Reaching that goal means expanding beyond Salient’s core collections product. Malik says the company plans to build a loan management system, a credit reporting module, and a charge-off module, effectively broadening Salient into a full-stack servicing platform. The existing product, he adds, has already proven its value: clients have seen servicing cost efficiencies of 50%.
Malik says the way Salient deploys its capital is guided by customer trust. “We need to be a generational company, because they invest a lot in us, and we need to make sure that we are stable financially,” he told Fortune. “And so when we invest capital, it’s because we have a really strong conviction that this is a product that could work at scale, and we want to make this realize value as fast as possible.”
The company, he said, has no desire to burn through cash quickly in the coming years. And Salient’s operating costs are much smaller than those of foundation-model companies because the firm doesn’t engage in pre-training.
Instead, investments will go toward adjacent workflows, including how lenders interact with the DMV and how they perfect loan recovery processes. Another portion will be reserved for experimentation with new technology—something that has defined Salient since its earliest days.
When Malik and Tibrewala launched Salient in 2023, nearly every lender they pitched dismissed them. To break through, they ran an unconventional Turing test. The founders built a demo in which an AI voice clone of Steve Jobs called lenders to negotiate an auto loan.
“We picked Steve because it was the most recognizable voice,” Malik says. “We wanted to make it illustrative that this tech is getting so lifelike that it’s just a matter of time before it becomes the status quo.”
The stunt worked. “Our first five or six customers, we just played them that demo,” Malik says. “They were all like, ‘Oh my god, this is crazy.’”
Winning deals, however, was only the first hurdle. Salient’s first major client was Westlake Financial, a large subprime auto lender. When Westlake agreed to a pilot, Malik and Tibrewala didn’t just ship an API. They physically moved into Westlake’s offices, setting up desks onsite to ensure the AI didn’t hallucinate or violate complex debt-collection laws.
That level of “rabid customer obsession,” Malik says, is Salient’s moat—a mindset he traces back to his time at Goldman Sachs and later at Tesla. Engineers are embedded directly with customers, and every Salient partner has Malik’s personal cell number. “Our engineers directly interface with their business counterparts at the largest financial institutions in the U.S.,” he says. “They’re much more responsible to what they promised a customer, which creates a much more aligned engineering world. We all know what we need to build and how we need to do it.”
For founders hoping to replicate Salient’s success, Malik’s advice is pointed: leave Silicon Valley. “Go anywhere else,” he says. “Talk to anybody in a different industry. Become an anthropologist. Embed yourself in a community you don’t know—and you’ll find these super ripe inefficiencies.”
Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition, I compare OpenAI to a house made of…well, no one really knows. Also: OpenAI launches a ChatGPT app store (we’ll see if it fares better than their previous custom GPT store)…Google debuts a surprisingly powerful Flash version of its Gemini 3 model…and the U.K. AI Safety Institute finds that a large percentage of Britons have used chatbots for emotional support.
Talk about an expensive building project. OpenAI is reportedly raising tens of billions of dollars in fresh capital at a $750 billion valuation, including $10 billion from Amazon. It is pouring money into compute — and literally pouring concrete into the data centers that house its AI chips — which the company says it needs to keep constructing the towering stack of models and applications that more than 800 million users now rely on.
The cost has inspired both awe and deep unease. Industry observers watch OpenAI’s expansion the way they might watch the Empire State Building rise — with a budget that keeps climbing as fast as the structure itself. (The actual Empire State Building, it’s important to note, only cost about $700 million in today’s money and came in under budget.) And some skeptics are increasingly convinced that the entire edifice is a monument to hubris that will come tumbling down before long.
Here’s how I think about it: If OpenAI is a house, it’s still in the early stages of construction — but no one agrees what it’s made of. The plans are undeniably ambitious, pushing the structure to unprecedented heights. Is this a house made of cards? Of teetering wooden pillars? Of solid concrete? The question is whether whatever structure is being built can actually hold the weight already being placed on it.
The experts are split
That uncertainty has split the experts I’ve spoken to. Technology analyst Rob Enderle said he would like to see OpenAI resting on a firmer foundation. “I would feel much more comfortable if they had a much stronger base in some of the basics,” he told me, particularly around making products trustworthy enough for enterprise businesses to increase adoption. He added that OpenAI has at times “gone off the rails” in terms of direction, pointing out that the company’s original independent safety and ethics oversight structures have been sidelined since CEO Sam Altman was reinstated after being briefly fired in November 2023. These days, he argued, OpenAI is trying to compete with everyone at once; reacting to rivals rather than executing a clear roadmap; and spending heavily without clear prioritization.
A recognition that it may have become distracted by trying to do too much at once was part of the reason OpenAI CEO Sam Altman declared a “code red” at the company two weeks ago, as Fortune reported in an in-depth feature this week. The story looks at the why, the how, and the what of OpenAI’s “code red,” and why Altman has warned the company to brace for “rough vibes” and economic headwinds in the face of increased competition from Google. Altman is trying to light a fire under his team to refocus on OpenAI’s core ChatGPT offerings over the coming weeks. But, according to Enderle, this is all very reactive and not strategic enough.
Commenting on the company’s constant shipping — from new AI models and a new image generation model, to a web browser, shopping features inside ChatGPT, and a new app ecosystem launched just this week — alongside a massive Stargate data-center buildout, Enderle compared OpenAI to Netscape and other dot-com companies that got rich too fast and lost strategic discipline.
“They’re running so fast, they’re not really focusing on direction very much,” he said.
Others, however, strongly disagree. Futurum Research founder and CEO Daniel Newman told me that concerns about OpenAI’s house collapsing miss the bigger picture. “This is a multi-decade supercycle,” he said, likening the company’s current phase of AI to Netflix’s DVD-by-mail era — a precursor to the true paradigm shift that followed. From the perspective of unmet demand and long-term value creation, Newman believes OpenAI’s massive compute investments are rational, not reckless.
“I would call what [OpenAI] has today very high-quality three-dimensional simulations and architectural renderings of a future,” Newman said. The real question, he added, is whether OpenAI can win enough market share to build the mansion it’s envisioning.
“I think OpenAI’s real goal is to become a hyperscaler,” Newman said. “They’ll have the infrastructure, the applications, the data, the workflows, the agentic tools — and people will buy everything they now get elsewhere from OpenAI instead. It’s an incredibly ambitious goal. There’s nothing to say it will work. But if it does, the numbers make sense.”
Searching for stickiness, or glue
Lastly, I spoke to Arun Chandrasekaran, principal analyst at Gartner Research, who chuckled and ducked away from my house metaphor, but was willing to address whether OpenAI was at least building on solid ground.
“They are indeed growing really fast, and they are making an enormous amount of commitments far beyond what any company [of their size] has ever made,” he said. “It is a risky bet, I would argue, a strategy that does not come without risks.” A lot of it is predicated on how sticky their products are, he pointed out, both at the model and application layer.
“It depends on the switching costs from a customer perspective, and a few other factors in terms of whether the growth really pans out the way they’ve envisioned,” he said. “You’re talking about a high growth company, but the expectation is that they’re going to have to grow at a much faster clip than what they’re growing. The expectations are enormous.”
Stickiness, I said. Like glue? Nails? Something to hold the house up?
He laughed. “Yes — like glue. I say stickiness, you say glue.”
And with that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
FORTUNE ON AI
Amazon CEO Andy Jassy announces departure of AI exec Rohit Prasad in leadership shake-up–by Sharon Goldman
Experts say Amazon is playing the long game with its potential $10 billion OpenAI deal: ‘ChatGPT is still seen as the Kleenex of AI’–by Eva Roytburg
Microsoft, Apple, Meta, and Amazon’s stocks are lagging the S&P 500 this year—but Google is up 62%, and AI investors think it has room to run—by Jeff John Roberts and Jeremy Kahn
Exclusive: Palantir alums using AI to streamline patent filing secure $20 million in Series A venture funding—by Jeremy Kahn
AI IN THE NEWS
ChatGPT to accept app submissions. OpenAI has opened app submissions for ChatGPT, letting developers submit apps for review and publication and giving users a new in-chat app directory to discover them—but the move comes a couple of years after the company’s earlier plug-ins experiment, built around custom GPTs, which never fully took off. The new apps are designed to extend conversations with real actions, like ordering groceries or creating slide decks, and can be triggered directly inside chats, with OpenAI positioning them as more tightly integrated and easier to use than plug-ins were. The initiative signals OpenAI’s renewed push to turn ChatGPT into a true platform—though how widely users and developers embrace this second attempt at an app ecosystem remains an open question.
Anthropic taps Trump-linked Bitcoin miner for massive AI power build. According to reporting from The Information, Anthropic has struck a deal that could secure up to 2.3 gigawatts of computing power from data centers developed by Hut 8, a bitcoin miner that is pivoting into AI infrastructure and has ties to the Trump family. Hut 8 and cloud startup Fluidstack plan to build a data center campus in Louisiana, starting with 245 megawatts and potentially expanding by another 1 gigawatt, while giving Anthropic the option to develop an additional 1.1 gigawatts with Hut 8. Google will backstop Fluidstack’s lease payments, underscoring Big Tech’s role in de-risking these projects. Hut 8’s Trump-linked bitcoin venture and the AI data center news helped push its shares up about 10%.
Anthropic’s Claude ran a snack operation in the Wall Street Journal newsroom. I had to shout out this funny experiment from the Wall Street Journal that copied a similar effort Anthropic ran in its own offices several months ago. A customized Claude agent was put in charge of running a newsroom vending machine, with autonomy to order inventory, set prices, and negotiate with human coworkers over Slack. Within weeks, the AI had been socially engineered into giving away most of its inventory for free, buying a PlayStation 5 and a live fish, and driving the operation hundreds of dollars into the red. The point wasn’t profit, Anthropic said, but failure: a vivid case study in how today’s AI agents can lose track of goals, priorities, and guardrails when exposed to money, social pressure, and messy real-world context—highlighting just how far “autonomous agents” still are from reliably running even the simplest businesses.
NOAA says its new AI-driven weather models improve forecast speed and accuracy. As the winter chill deepens across much of the US, I’m sure we all love a quick and accurate weather forecast. So CBS News reported some good news: The National Oceanic and Atmospheric Administration has rolled out a new suite of AI-driven weather forecasting models designed to deliver faster and more accurate predictions at far lower computational cost. NOAA says the models represent a shift away from relying solely on traditional physics-based systems like its long-running Global Forecast System and Global Ensemble Forecast System, which simulate countless weather scenarios across land, ocean, and atmosphere. Instead, the agency is using AI to improve large-scale forecasts and tropical storm tracks while dramatically reducing the computing power required, allowing forecasts to reach meteorologists and the public more quickly and cheaply—a move NOAA leadership describes as a major leap in U.S. weather-model innovation.
Google launches Gemini 3 Flash, makes it the default model in the Gemini app. TechCrunch reported on Google’s release of Gemini 3 Flash, a faster and cheaper version of its Gemini 3 model. Google has made Gemini 3 Flash the default model in the Gemini app and in AI-powered search. The model significantly outperforms the previous Gemini 2.5 Flash and, on some benchmarks, rivals frontier models like Gemini 3 Pro and OpenAI’s GPT-5.2, while excelling at multimodal and reasoning tasks. Google is positioning Flash as a high-speed “workhorse” model for consumers, enterprises, and developers, with broad rollout across apps, search, Vertex AI, and APIs, and adoption already underway at companies like JetBrains and Figma. The launch comes amid an intensifying release war with OpenAI, as Google reports processing more than a trillion tokens per day and emphasizes that rapid iteration, lower costs, and new benchmarks are now central to competition at the AI frontier.
AI CALENDAR
Jan. 7-10: Consumer Electronics Show, Las Vegas.
March 12-18: SXSW, Austin.
March 16-19: Nvidia GTC, San Jose.
April 6-9: HumanX, San Francisco.
EYE ON AI NUMBERS
~33%
According to new research from the UK’s AI Safety Institute highlighted by The Guardian, about a third of UK adults say they’ve used generative AI for emotional support or social interaction, with nearly one in ten reporting weekly use of chatbots and assistants like ChatGPT for emotional reasons.
Analysts note this trend is emerging amid broader concerns about mental health access, loneliness, and the role of AI in replacing—or supplementing—human emotional support. The report also flags potential risks, including safety issues and the need for deeper study of how “emotional AI” may shape our interactions and well-being.