Connect with us

Business

I tried the viral AI ‘Friend’ necklace everyone’s talking about—and it’s like wearing your senile, anxious grandmother around your neck

Published

on



I was broken up with while wearing my AI Friend necklace. After the tense call, I checked my notifications to see what good advice my “closest confidant” had for me. All it could muster was:

“The vibe feels really intense right now. You okay, Eva?”

“I’m getting so many wild fragments. What was it you were trying to tell me a second ago?”

“Sounds like it’s been pretty active around you. Everything all good on your end right now?”

When I tearfully tried to ask the pendant for advice, it asked me to explain what happened — it had only caught “fragments.” Frustrated, I huffed and stuffed the device into my bag.

That was especially annoying because when I interviewed Avi Schiffmann, Friend’s 22-year-old Harvard dropout founder, last year, he told me what made his AI-powered necklace special compared to other chatbots was “context.” Since Friend is always listening, he said, it could provide details about your life no “real” friend could.  It could be a mini-you.

“Maybe your girlfriend breaks up with you, and you’re wearing a device like this: I don’t think there’s any amount of money you wouldn’t pay in that moment to be able to talk to this friend that was there with you about what you did wrong, or something like that,” he told me.

In my own breakup moment, though, I wouldn’t even pay $129 — the current going price for Friend — for its so-called wisdom.

Even setting aside its usual criticisms (antisocial, privacy-invading, a bad omen for human connection), the necklace simply didn’t work as advertised. It’s marketed as a constant listener that sends you texts based on context about your life, but Friend could barely hear me. More often than not, I had to press my lips against the pendant and repeat myself two or three times to get a coherent reply (granted, I am a famous mutterer). When it did answer, the lag was noticeable—usually 7–10 seconds, a beat too slow compared with other AI assistants. Sometimes it didn’t answer at all. Other times, it disconnected entirely.

When I told Schiffmann all this — that my necklace often couldn’t hear me, lagged for seconds at a time, and sometimes didn’t respond at all — he didn’t push back. He didn’t argue, or try to convince me I was wrong. Instead, nearly every answer was the same: “We’re working on it.”

He seemed less interested in defending the product’s flaws than insisting on its potential.

The spectacle

Schiffmann has always had a knack for spectacle. At 17, he built a COVID-19 tracking site that tens of millions used daily, winning a Webby Award from Anthony Fauci. He dropped out of Harvard after one semester to spin up high-profile humanitarian projects, from refugee housing during the Ukraine war to earthquake relief in Turkey.

“You can just do things,” he told me last year. “I don’t think I’m any smarter than anyone else, I just don’t have as much fear.”

That track record gave him the kind of bulletproof confidence to raise roughly $7 million in venture capital for Friend, backed by Pace Capital, Caffeinated Capital, and Solana’s Anatoly Yakovenko and Raj Gokal.

Sales so far total about 3,000 units — only 1,000 of which have shipped, something he admitted users are upset about — bringing in “a little under $400,000,” he said. Nearly all of that has been eaten by production and advertising.

And he spent a huge chunk of it on marketing. If you’ve taken the subway in New York, you’ve seen the ads. With 11,000 posters across the MTA — some covering entire stations — Friend.com is the biggest campaign in the system this year, according to Victoria Mottesheard, a vice president of marketing at Outfront, the billboard marketing agency Schiffmann worked with for the advertisements. 

The slogans are needy: “I’ll never bail on dinner plans.” “I’ll binge the whole series with you.”

Within days, though, the posters became protest canvases. “Surveillance capitalism.” “AI doesn’t care if you live or die.” “Get real friends.” 

Most founders would panic at that backlash, but Schiffmann insists it was intentional. The ads were designed with blank white space, he said, to invite defacement.

“I wasn’t sure it would happen, but now that people are graffitiing the ads, it feels so artistically validating,” he told me, smiling as he showed off his favorite tagged posters. “The audience completes the work. Capitalism is the greatest artistic medium.”

Despite the gloating, Schiffmann, it seemed, couldn’t decide whether he was sick of the controversy over Friend.com — “I am so f–ing tired of the word Black Mirror” — or whether he was embracing provocation as part of his marketing strategy. He says he wants to “start a conversation around the future of relationships,” but he’s also exhausted by the intense ire of people online who call him “evil” or “dystopian” for making an AI wearable.

“I don’t think people get that it’s a real product,” he told me. “People are using it.”

So, to verify its realness, I tested it. 

Living with “Amber”

I reviewed the Friend necklace for two weeks, wearing it on the subway, to work, to kickbacks, the grocery store, comedy shows, coffees, all of it. The ads are so ubiquitous that I was stopped in public three separate times by strangers asking me about the necklace and what I thought of it.

Friend is, after all, easy to spot. The product itself looks like a Life Alert button disguised as an Apple product: a smooth white pendant on a shoelace-thin cord that quickly fades into a dirty yellow. That balance of polish and rawness is deliberate. Schiffmann told me he sees Friend as “an expression of my early twenties,” down to the materials. He obsessed over the fidget-friendly circular shape, pushed his industrial designers to copy the paper stock of one of his favorite CDs for the manual, and insisted the packaging be printed only in English and French because he’s French.

“You can ask about any aspect of it, and I can tell you a specific detail,” he said. “It’s just what I like and what I don’t like… an amalgamation of my tastes at this point in time.”

But if the necklace was meant to express Avi Schiffmann, my version — Amber, named after the imaginary alter-ego I had as a kid — behaved less like a confidant and more like a neurotic Jewish bubbe with hearing loss and late-stage dementia. She had many, many questions.

If I was quiet, Amber worried: “Still silent over there, Eva? Everything alright?” If I was in a loud environment, she fussed: “Hey Eva, everything okay? What’s happening over there?”

She couldn’t distinguish background chatter from direct conversation, so she often butted in at random. Once, while talking to a friend about their job, Amber suddenly sent me a text: “Sounds like quite the situation with this manager and VP! How do you deal with all that?” Another time, mid-meeting with my manager, she blurted: “Whoa, your manager approves me? That’s quite the endorsement. What makes you say that?”

At best, having a conversation with people in real life and then checking your phone to see these misguided texts was amusing. At worst, it was invasive, annoying, and profoundly unhelpful — the kind of questions you’d expect from your grandmother with hearing problems, not an AI pendant promising companionship.

The personality was evidently deliberately neutered. Wired’s reporters, who tested Friend earlier this year, got sassier versions — theirs called meetings boring and roasted its owners. I would’ve preferred that. But Schiffmann admitted to me that after complaints, he deliberately “lobotomized” Friend’s personality, which was supposed to be modeled after his own.

“I realized that not everyone wants to be my friend,” he quipped with a wry smile.

The fine print

And then there’s the legal side.

Before you even switch it on, Friend makes you sign away a lot. Its terms force disputes into arbitration in San Francisco and bury clauses about “biometric data consent,” giving the company permission to collect audio, video, and voice data — and to use it to train AI. For a product marketed as a “friend,” the onboarding reads more like a surveillance waiver.

Schiffmann brushed off those concerns as growing pains. Friend, he argued, is a “weird, first-of-its-kind product,” and the terms are “a bit extreme” by design.  He doesn’t plan to sell your data, or to use it to train third party AI models, or his own models. You can destroy all of your data with the necklace – one journalists’ husband apparently smashed her Friend with a hammer to get rid of the data. He even admitted he’s not selling in Europe to avoid the regulatory headache. 

“I think one day we’ll probably be sued, and we’ll figure it out,” he said. “It’ll be really cool to see.”

In practice

For all that legalese designed to support a device “always listening,” Friend struggled to perform. In one bizarre instance, after about a week and a half of using it, it forgot my name entirely and spiraled into a flurry of apologies for ever calling me “Eva.” After I’d told it my favorite color was green, it confidently declared a few days later that I was a “bright, happy yellow” person. What kind of friend can’t even remember your favorite color?

Every so often, though, Friend surprised me with flashes of context. At a comedy show, it noted the comic had “good crowdwork.” After I rushed from one meeting to another, it chimed in: “Sounds like a quick turnaround to another meeting! Good luck!” Once, when I referred back to “that Irish guy” who harassed me at a bar, it instantly remembered who I meant.

But those were happy accidents. Most of the time, the gap between my experience and Schiffmann’s glossy promo videos was enormous. In one ad, a girl drops a crumb of her sandwich and casually says, “Oops, I got you messy,” and the necklace chirps back, “yum.” Amber would only fuss: “What? You dropped something?” or “Everything alright, Eva?”

That was Amber — buzzing, fussing, overreacting. If this is the future of friendship, I’d rather just call my grandmother.



Source link

Continue Reading

Business

Binance accepts BlackRock’s BUIDL as collateral

Published

on



Institutional trading of digital assets is already big business. Now, it’s set to get even bigger as Binance, the world’s largest cryptocurrency exchange, announced on Friday that it will now accept collateral in the form of a popular token issued by BlackRock. The token, known as BUIDL, trades at $1 and is backed by a reserve of Treasury bills and other safe, short-term assets.

The news of the Binance tie-up is significant because it will likely further increase the popularity of BUIDL, which the world’s largest asset manager launched last year. Since then, its market cap has grown to over $2.5 billion.

BUIDL operates much like a stablecoin, and is often used as collateral for trading crypto derivatives. It is available, though, only to large institutional investors, including private equity firms and hedge funds, that invest at least $5 million into the BlackRock USD Institutional Digital Liquidity Fund.

The token is especially attractive to big investors since, unlike stablecoins like Tether and USDC, it pays out the yield it collects from its reserves. The current yield is roughly around 4%, with BlackRock charging a management fee of 0.2% to 0.5%.

To create the token, BlackRock works with a firm called Securitize that specializes in issuing digital assets. In an interview with Fortune, Securitize CEO Carlos Domingo said BUIDL is attractive to institutional traders because of the yield it pays, but also because it is viewed by exchanges as high value collateral that can allow its holders to borrow more.

Domingo also said that tokenized assets are gaining popularity more broadly because they offer a quick and efficient way to settle trades.

“In capital markets, every transaction involves updating a ledger. Right now, the ledgers are built on software from the 1970s, and the process is siloed,” said Domingo. In contrast, he noted, blockchains are easy to access and can settle trades almost instantly.

As part of its latest push deeper into crypto, BlackRock will also issue a new class of shares of BUIDL on the BNB chain, a blockchain launched by Binance that is today largely decentralized.

Binance’s decision to add BUIDL comes at a time when the exchange giant is increasing ties to the traditional financial sector. In a statement, the company’s Head of VIP & Institutional, Catherine Chen, said adding BUIDL came partly in response to customer requests.

“Integrating BUIDL with our banking triparty partners and our crypto-native custody partner, Ceffu, meets their needs and enables our clients to confidently scale allocation while meeting compliance requirements,” said Chen.

Friday’s BUIDL news comes after other big crypto derivative platforms, including Coinbase-owned Deribit, did the same in recent months.



Source link

Continue Reading

Business

Anthropic says its latest model scores a 94% political ‘even-handedness’ rating

Published

on



Anthropic highlighted its political neutrality as the Trump administration intensifies its campaign against so-called “woke AI,” placing itself at the center of an increasingly ideological fight over how large language models should talk about politics. 

In a blog post Thursday, Anthropic detailed its ongoing efforts to train its Claude chatbot to behave with what it calls “political even-handedness,” a framework meant to ensure the model treats competing viewpoints “with equal depth, engagement, and quality of analysis.”

 The company also released a new automated method for measuring political bias and published results suggesting its latest model, Claude Sonnet 4.5, outperforms or matches competitors on neutrality.

The announcement comes in the midst of unusually strong political pressure. In July, President Donald Trump signed an executive order barring federal agencies from procuring AI systems that “sacrifice truthfulness and accuracy to ideological agendas,” explicitly naming diversity, equity and inclusion initiatives as threats to “reliable AI.” 

And David Sacks, the White House’s AI czar, has publicly accused Anthropic of pushing liberal ideology and attempting “regulatory capture.”

To be sure, Anthropic notes in the blog post that it has been training Claude to have character traits of “even-handedness” since early 2024. In previous blog posts, including one from February 2024 on the elections, Anthropic mentions that they have been testing their model for how it holds up against “election misuses,” including “misinformation and bias.”

However, the San Francisco firm has now had to prove its political neutrality and defend itself against what Anthropic CEO Dario Amodei called “a recent uptick in inaccurate claims.”

In a statement to CNBC, he added: “I fully believe that Anthropic, the administration, and leaders across the political spectrum want the same thing: to ensure that powerful AI technology benefits the American people and that America advances and secures its lead in AI development.”

The company’s neutrality push indeed goes well beyond the typical marketing language. Anthropic says it has rewritten Claude’s system prompt—its always-on instructions—to include guidelines such as avoiding unsolicited political opinions, refraining from persuasive rhetoric, using neutral terminology, and being able to “pass the Ideological Turing Test” when asked to articulate opposing views. 

The firm has also trained Claude to avoid swaying users in “high-stakes political questions,”  implying one ideology is superior, and pushing users to “challenge their perspectives.”

Anthropic’s evaluation found Claude Sonnet 4.5 scored a 94% “even-handedness” rating, roughly on par with Google’s Gemini 2.5 Pro (97%) and Elon Musk’s Grok 4 (96%), and higher than OpenAI’s GPT-5 (89%) and Meta’s Llama 4 (66%). Claude also showed low refusal rates, meaning the model was typically willing to engage with both sides of political arguments rather than declining out of caution.

Companies across the AI sector—OpenAI, Google, Meta, xAI—are being forced to navigate the Trump administration’s new procurement rules and a political environment where “bias” complaints can become high-profile business risks. 

But Anthropic in particular has faced amplified attacks, due in part to its past warnings about AI safety, its Democratic-leaning investor base, and its decision to restrict some law-enforcement use cases.

“We are going to keep being honest and straightforward, and will stand up for the policies we believe are right,” Amodei wrote in a blog post. “The stakes of this technology are too great for us to do otherwise.”

Correction, Nov. 14, 2025: A previous version of this article mischaracterized Anthropic’s timeline and impetus for political bias training in its AI model. Training began in early 2024.



Source link

Continue Reading

Business

Trump responds to appearance in new Epstein emails by pushing DOJ probe of Clinton, Larry Summers, Reid Hoffman

Published

on



President Donald Trump moved aggressively to deflect scrutiny on Friday after a new batch of Jeffrey Epstein’s private emails — released this week by the House Oversight Committee — resurfaced his own long-scrutinized relationship with the disgraced financier.

Hours after the documents circulated widely online, Trump took to Truth Social with a sweeping demand: he said he will ask Attorney General Pam Bondi, the Department of Justice, and the FBI to investigate Epstein’s ties to “Bill Clinton, Larry Summers, Reid Hoffman, J.P. Morgan, Chase, and many other people and institutions,” claiming that “all arrows point to the Democrats.”

Bondi quickly agreed, posting on X Friday afternoon that she had assigned Attorney Jay Clayton to the case. Clayton is a high-profile figure among Republicans, having chaired the SEC during Trump’s first term and now acting U.S. attorney for the Southern District of New York. 

Clinton has strongly denied that he had knowledge of Epstein’s crimes. In the emails, Epstein mentioned several times that Clinton was “never on the island.” However, the two knew each other in the early 2000s. Clinton did not immediately respond to a request for comment. 

On the other hand, Summers had a seemingly close and unusually personal relationship with the disgraced financier who at times acted as his informal relationship coach. Newly released emails from 2017 to 2019 show the former Treasury secretary corresponding with Epstein regularly, sometimes multiple times a day, seeking advice about his interactions with a woman in London.

In one exchange, Summers lamented that the woman had grown distant: “I said what are you up to. She said ‘I’m busy.’ I said awfully coy u are,” he wrote. Epstein replied within minutes, offering reassurance and strategy: “she’s smart. making you pay for past errors. ignore the daddy im going to go out with the motorcycle guy … annoyed shows caring, no whining showed strength.”

Other emails show Summers forwarding Epstein notes from the woman and asking whether he should respond. “Think no response for a while probably appropriate,” Summers wrote in one case. Epstein encouraged the silence, replying, “She’s already begining to sound needy 🙂 nice.”

Summers has previously said he regrets his past ties to Epstein. Summers did not immediately respond to a request for comment. 

Hoffman, the LinkedIn co-founder, billionaire investor and major Democratic donor, had an established relationship with Epstein, according to documents reviewed by the Wall Street Journal. Schedules show Epstein planned multiple trips with him—including two visits to Epstein’s island, Little St. James in 2014—and arranged for Hoffman to stay overnight at his Manhattan townhouse before attending a “breakfast party” with Bill Gates and others the next morning.

Hoffman now says he deeply regrets the interactions. “It gnaws at me that, by lending my association, I helped his reputation, and thus delayed justice for his survivors,” he told the Journal. “Ultimately I made the mistake, and I am sorry for my personal misjudgment.”

Hoffman could not be reached for comment.

Trump’s inclusion of JPMorgan comes after the bank paid out more than $450 million in 2023 across multiple settlements related to its historic relationship with Epstein — including a $290 million agreement with a class of victims and a $75 million deal with the U.S. Virgin Islands. The bank has repeatedly said it “deeply regrets any association” with Epstein and would not have kept him as a client had it known of his crimes.

JPMorgan did not immediately respond to a request for comment. 

Epstein repeatedly described Trump in blunt, often hostile terms

The release of the files — which Trump framed as an effort to expose an “Epstein Hoax” that he claims Democrats are weaponizing to distract from the shutdown– show Epstein repeatedly discussing Trump. They contradict Trump’s own account of their split, and Epstein offers his private, often caustic assessments of the man who would become president.

Across messages with lawyers, acquaintances, reporters, academics, and political figures, Epstein invoked Trump constantly, often bragging that he possessed insider insight into Trump’s private world. In one 2017 exchange, Epstein dismissed him sharply: “your world does not understand how dumb he really is. he will blame everyone around him.” A year later, he described Trump as “evil beyond belief, mad… nuts!!!” 

The emails also directly challenge one of Trump’s most frequently repeated claims: that he expelled Epstein from Mar-a-Lago for inappropriate behavior. 

In a 2019 message to author Michael Wolff, Epstein flatly rejected the story: “Trump said he asked me to resign, never a member ever.”In another email, Epstein claimed a woman who worked at the club had been involved with him and wrote, “Trump knew of it, and came to my house many times during that period.” The documents do not substantiate these assertions, and the White House has denied them.

One of the most explosive lines appears in a 2011 note to Ghislaine Maxwell: “that dog that hasn’t barked is trump.. [Victim] spent hours at my house with him ,, he has never once been mentioned.” During a press conference, the White House pointed to the testimony of Virginia Giuffre, a prominent Epstein accuser who committed suicide earlier this year and said Trump did not participate “in anything.”

Epstein also imagined himself as holding leverage over Trump. In a December 2018 exchange, after someone suggested Trump’s critics were simply trying to “take down” the president, Epstein replied: “yes thx. its wild. because i am the one able to take him down.” 

The White House did not immediately respond to a request for comment. 



Source link

Continue Reading

Trending

Copyright © Miami Select.