
Business

Walmart worker falsely accused of celebrating Charlie Kirk’s death suspended from job, fears safety


When his phone buzzed with an unknown caller late Friday afternoon, 30-year-old Ali Nasrati didn’t think much of it. Spam calls were common. But this caller left him a voicemail: “Did you get fired yet?”

Nasrati disregarded that voicemail as a practical joke, or some sort of scam.

“Being the person that I am, I don’t really get bothered by these kinds of things,” he told Fortune. 

But then, the texts came. From multiple unknown numbers, they spelled out his name, his mother’s name, and his home address, followed by a chilling message: “we’re on our way.” Random phone calls followed, loudly and “vulgarly” insulting Nasrati and his Islamic faith.

Nasrati, shaken, drove home from his work as an IT specialist at the Virginia Walmart he had worked at since he was 25. On the way, he got another call: one he didn’t answer, wary of more abuse. But this time it was from Walmart corporate. The voicemail, which Fortune has reviewed, came from a corporate manager and said he was suspended with pay pending an “internal investigation,” and asked him to call back.

Since then, Nasrati has called and left multiple voicemails with his employer. He says none have been returned. Walmart declined to comment on the matter.

Back at home, Nasrati, trying to piece together what had happened, says he opened his laptop in disbelief. His work account had already been disabled. Frantically scanning online, the source of the harassment finally became clear: an X profile created under the handle @IslamAli911, filled with inflammatory posts celebrating the assassin of right-wing influencer Charlie Kirk, and plastered with his photo and full name. 

Nasrati said the account isn’t his, and he has never posted about Kirk, or politics at all, for that matter. He has his own X page, with mostly posts from a decade ago about soccer. 

But it didn’t matter. A right-wing page on X, called “Bad Hombre” with the handle @joma_gc, which had been posting the names and employer information of people deemed to be “celebrating” Kirk’s murder, had taken screenshots and posted pictures of the fake account, along with Nasrati’s name and workplace information, to over 180,000 followers. 

“It was insane,” Nasrati said. “This account was made in May, not by me, but they used my Instagram and LinkedIn photos and made it look like I was the one posting. And people believed it.”

The fallout was immediate. His phone rang nonstop with calls spewing Islamophobic slurs. Emails and texts told him to leave the country and that he better hide. Cars idled too long behind him on the road, and he found himself wondering if he was being followed. His mother and sister, shaken, refused to stay in their home, and he left with them to find another place to stay.

“I’ve always felt like an American first,” Nasrati said. “But this weekend, for the first time, I felt like an outsider in my own country.”

He raced to the police station to file reports, one against the account impersonating him for identity theft and others for defamation. There, the officers told him to report the account that had targeted him, which he says he has done, along with about 200 of his friends and family.

X, in an email reviewed by Fortune, told Nasrati that the account, @joma_gc, had not violated any X rules. The account that had impersonated him, after blowing up via @joma_gc, has since been deactivated, with all of its information removed from the page.

X did not respond to Fortune’s request for comment. 

A coordinated campaign 

Nasrati’s case is just one amid a surge of cyber-targeting campaigns following Kirk’s assassination, with critics of the conservative activist increasingly singled out online. 

A site called Expose Charlie’s Murderers, which at the time of writing is down, briefly published the names of 41 people it accused of “supporting political violence online,” promising to turn its database of 30,000 submissions into a permanent archive before it was taken offline. 

Even those who denounced violence but voiced criticism of Kirk were included, according to Reuters, and some—like Canadian influencer Rachel Gilmore—say they’ve since endured death threats and sexualized harassment. 

Although the site was removed, many accounts on X have taken up the cause, from @joma_gc to right-wing media creator Chaya Raichik at @libsoftiktok. MSNBC hosts, public school teachers, healthcare workers, and employees at Office Depot and Microsoft, among others, have been fired for their posts. An American Airlines pilot was even grounded and suspended for his posts.

X bans posting someone’s private information without consent, but the policy makes an exception if the details are already public — like names, workplaces, or photos from LinkedIn or Instagram, all of which were used in Nasrati’s case. Impersonation, however, is a violation of X rules, according to the policy.

Nasrati isn’t sure if he will get proper recourse from the authorities, or X, or his place of employment. All he wants Walmart to do is “clear his name” and help get him some sense of job and personal security. 

“What can I do in the future to not feel this way? There really isn’t anything I did wrong,” Nasrati said. “Do I have to disappear from social media, go off the grid, just to feel safe in my own home? It’s 2025: everyone has a social media presence. The fact that there’s nothing I can do to stop this from happening again is very scary.”




Why Jerome Powell’s latest rate cut still won’t help you get a lower mortgage rate




For the third meeting in a row, the Federal Reserve cut interest rates, a “hawkish cut” aimed at helping a softening labor market. The 0.25-percentage-point move brought the federal funds rate range to 3.5% to 3.75%, but economists and housing experts warn that’s not going to affect mortgage rates in the way potential homebuyers were hoping.

Chen Zhao, head of economics research at Redfin, wrote in a Wednesday post that the Fed’s December interest rate cut won’t move mortgage rates “because markets have already priced it in.” 

The Federal Reserve controls the federal funds rate, the rate banks charge each other overnight, which is more closely tied to credit cards, personal loans, and home-equity lines. A standard 30-year mortgage, on the other hand, is a long-term loan, and the pricing of those loans is tied more closely to yields on longer-term instruments like the 10-year Treasury and mortgage-backed securities.

“Since this rate cut was no surprise, the markets have taken it in stride,” 43-year mortgage industry veteran Melissa Cohn, regional vice president of William Raveis Mortgage, told Fortune. She said incoming economic data, not the Fed’s move, will be the real turning point: “The future of bond yields and mortgage rates will be determined as new data on jobs and inflation get released.”

The current average 30-year mortgage rate is 6.3%, according to Mortgage News Daily. That is of course much higher than the sub-3% rates that homebuyers from the pandemic era remember, although it’s also a far cry from the 8% peak in October 2023.

“The committee’s projections and Chair Jerome Powell’s remarks indicate that this will be the last interest cut for a while,” Zhao wrote. “Given the underlying economic fundamentals of 3% inflation coupled with a weakening—but not recessionary—labor market, the Fed is likely to hold steady in the near future.

“Mortgage rates are unlikely to fall or rise by much,” she continued.

How mortgage rates affect housing affordability

Mortgage rates are just one piece of the housing affordability puzzle. While they may feel like the major roadblock to buying a home, especially with the pandemic housing boom in recent memory, rates are only one factor.

To put it in perspective, Zillow reported earlier this year that not even a 0% mortgage rate would make buying a house affordable in several major U.S. cities.

Let that sink in. 

Even without any interest accrued on a loan, homebuying is still out of reach for the typical American. Much of the affordability crisis has to do with home prices, which are more than 50% higher than in 2020. This has locked out new homebuyers from entering the market and current homeowners from selling. 

The mortgage rate drop required to make an average home affordable for the typical buyer, to about 4.43%, is “unrealistic,” according to Zillow economic analyst Anushna Prakash.

“It’s unlikely rates will drop to the mid-[4% range] anytime soon,” Arlington, Va.–based real estate agent Philippa Main told Fortune. “And even if they did, housing prices are still at historic highs.” With 11 years of experience, Main is also a licensed mortgage loan officer.
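The rate figures above can be made concrete with the standard fixed-rate amortization formula, which shows how the monthly payment scales with the rate. A minimal sketch; the $400,000 loan amount is an illustrative assumption, not a figure from the article:

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Payment on a standard fixed-rate mortgage (amortization formula)."""
    n = years * 12                      # number of monthly payments
    if annual_rate == 0:
        return principal / n            # no interest: principal spread evenly
    r = annual_rate / 12                # monthly interest rate
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Illustrative $400,000 loan (an assumption, not from the article),
# at today's 6.3%, Zillow's "unrealistic" 4.43%, and a 0% rate
for rate in (0.063, 0.0443, 0.0):
    print(f"{rate:.2%}: ${monthly_payment(400_000, rate):,.0f}/mo")
```

On this illustrative loan, dropping from 6.3% to 4.43% cuts the payment by roughly a fifth, while even 0% still leaves a four-figure monthly bill over 30 years — which is why home prices, not rates alone, dominate the affordability math.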

To be sure, some economists see light at the end of the tunnel for homebuyers plagued by high mortgage rates and home prices.

“For prospective buyers who have been waiting on the sidelines, the housing market is finally starting to listen,” wrote First American chief economist Mark Fleming in an Aug. 29 blog post. First American’s analysis takes into account inflation, and Fleming said: “The price of a house today is not directly comparable to the price of that same house 30 years ago.”




OpenAI debuts GPT-5.2 in effort to silence concerns it is falling behind its rivals




OpenAI, under increasing competitive pressure from Google and Anthropic, has debuted a new AI model, GPT-5.2, that it says beats all existing models by a substantial margin across a wide range of tasks.

The new model, which is being released less than a month after OpenAI debuted its predecessor, GPT-5.1, performed particularly well on a benchmark of complicated professional tasks across a range of “knowledge work”—from law to accounting to finance—as well as on evaluations involving coding and mathematical reasoning, according to data OpenAI released.

Fidji Simo, the former Instacart CEO who now serves as OpenAI’s CEO of applications, told reporters that the model should not be seen as a direct response to Google’s Gemini 3 Pro AI model, which was released last month. That release prompted OpenAI CEO Sam Altman to issue a “code red,” delaying the rollout of several initiatives in order to focus more staff and computing resources on improving its core product, ChatGPT.

“I would say that [the Code Red] helps with the release of this model, but that’s not the reason it is coming out this week in particular, it has been in the works for a while,” she said.

She said the company had been building GPT-5.2 “for many months.” “We don’t turn around these models in just a week. It’s the result of a lot of work,” she said. The model had been known internally by the code name “Garlic,” according to a story in The Information. The day before the model’s release Altman teased its imminent rollout by posting to social media a video clip of him cooking a dish with a large amount of garlic.

OpenAI executives said that the model had been in the hands of “alpha customers,” who helped test its performance, for “several weeks”—a time period that would mean the model was completed prior to Altman’s “code red” declaration.

These testers included legal AI startup Harvey, note-taking app Notion, and file-management software company Box, as well as Shopify and Zoom.

OpenAI said these customers found GPT-5.2 demonstrated a “state of the art” ability to use other software tools to complete tasks, as well as excelling at writing and debugging code.

Coding has become one of the most competitive use cases for AI model deployment within companies. Although OpenAI had an early lead in the space, Anthropic’s Claude model has proved especially popular among enterprises, exceeding OpenAI’s market share, according to some figures. OpenAI is no doubt hoping GPT-5.2 will convince customers to turn back to its models for coding.

Simo said the “Code Red” was helping OpenAI focus on improving ChatGPT. “Code Red is really a signal to the company that we want to marshal resources in one particular area, and that’s a way to really define priorities and define things that can be deprioritized,” she said. “So we have had an increase in resources focused on ChatGPT in general.”

The company also said its new model is better than the company’s earlier ones at providing “safe completions”—which it defines as providing users with helpful answers while not saying things that might contribute to or worsen mental health crises.

“On the safety side, as you saw through the benchmarks, we are improving on pretty much every dimension of safety, whether that’s self harm, whether that’s different types of mental health, whether that’s emotional reliance,” Simo said. “We’re very proud of the work that we’re doing here. It is a top priority for us, and we only release models when we’re confident that the safety protocols have been followed, and we feel proud of our work.”

The release of the new model came on the same day a new lawsuit was filed against the company alleging that ChatGPT’s interactions with a psychologically troubled user had contributed to a murder-suicide in Connecticut. The company also faces several other lawsuits alleging ChatGPT contributed to people’s suicides. The company called the Connecticut murder-suicide “incredibly heartbreaking” and said it is continuing to improve “ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.” 

GPT-5.2 showed a large jump in performance across several benchmark tests of interest to enterprise customers. It met or exceeded human expert performance on a wide range of difficult professional tasks, as measured by OpenAI’s GDPval benchmark, 70.9% of the time. That compares to just 38.8% of the time for GPT-5, a model that OpenAI released in August; 59.6% for Anthropic’s Claude Opus 4.5; and 53.3% for Google’s Gemini 3 Pro.

On the software development benchmark SWE-Bench Pro, GPT-5.2 scored 55.6%, almost 5 percentage points better than its predecessor, GPT-5.1, and more than 12 percentage points better than Gemini 3 Pro.

OpenAI’s Aidan Clark, vice president of research (training), declined to answer questions about exactly what training methods had been used to upgrade GPT-5.2’s performance, although he said that the company had made improvements across the board, including in pretraining, the initial step in creating an AI model.

When Google released its Gemini 3 Pro model last month, its researchers also said the company had made improvements in pretraining as well as post-training. This surprised some in the field who believed that AI companies had largely exhausted the ability to wring substantial improvements out of the pretraining stage of model building, and it was speculated that OpenAI may have been caught off guard by Google’s progress in this area.




OpenAI and Disney just ended the ‘war’ between AI and Hollywood with their $1 billion Sora deal




Disney’s $1 billion investment in OpenAI, announced Thursday morning—and its decision to let more than 200 Disney, Pixar, Marvel, and Star Wars characters appear inside the Sora video generator—is more than a licensing deal. According to copyright and AI law expert Matthew Sag, who teaches at Emory University’s law school, the deal marks a strategic realignment that could reshape how Hollywood protects its IP in the face of AI-generated content that threatens to leech off its legally protected magic.

“AI companies are either in a position where they need to aggressively filter user prompts and model outputs to make sure that they don’t accidentally show Darth Vader, or strike deals with the rights holders to get permission to make videos and images of Darth Vader,” Sag told Fortune. “The licensing strategy is much more of a win-win.” 

The three-year agreement gives OpenAI the right to ingest hundreds of Disney-owned characters into Sora and ChatGPT Image. Disney will also receive equity warrants and become a major OpenAI customer, while deploying ChatGPT internally.

Sag said the deal itself will be a kind of “revenue-sharing.”

“OpenAI hasn’t figured out the revenue model,” Sag said. “So I think making this just an investment deal, in some ways, simplifies it. For Disney … [OpenAI] will figure out a way to make this profitable at some point, and [Disney will] get a cut of that.”

Why this deal matters: the ‘Snoopy problem’

For more than a year, the biggest legal threat to large-scale generative AI has centered on what Sag calls the “Snoopy problem”: It is extremely difficult to train powerful generative models without some degree of memorization, and copyrightable characters are uniquely vulnerable because copyright protects them in the abstract.

Sag was careful to outline a key distinction. AI companies aren’t licensing the right to train on copyrighted works; they’re licensing the right to create outputs that would otherwise be infringing.

That’s because the case for AI companies training their models on unlicensed content is “very strong,” Sag said. Two recent court rulings involving Anthropic and Meta have strengthened those arguments.  

The real stumbling block, Sag said, has always been outputs, not training. If a model can accidentally produce a frame that looks too much like Darth Vader, Homer Simpson, Snoopy, or Elsa, the fair use defense begins to fray.

“If you do get too much memorization, if that memorization finds its way into outputs, then your fair-use case begins to just crumble,” Sag said.

While it’s impossible to license enough text to train an LLM (“that would take a billion” deals, Sag said), it is possible to build image or video models entirely from licensed data if you have the right partners. This is why deals like Disney’s are crucial: They turn previously illegal outputs into legal ones, irrespective of whether the training process itself qualifies as fair use.

“The limiting principle is going to be essentially about whether—in their everyday operation—these models reproduce substantial portions of works from their training data,” Sag said.

The deal, Sag says, is also a hedge against Hollywood’s lawsuits. The announcement is “very bad” for Midjourney, which Disney is suing for copyright infringement, because it holds up OpenAI’s licensing deal as the “responsible” benchmark for AI firms.

This is also a signal about the future of AI data

Beyond copyright risk, the deal exposes another trend: the drying up of high-quality, unlicensed data on the public internet.

In a blog post, Sag wrote: “The low-hanging fruit of the public internet has been picked. To get better, companies like OpenAI are going to need access to data that no one else has. Google has YouTube; OpenAI now has the Magic Kingdom.”

This is the core of what he calls the “data scarcity thesis.” OpenAI’s next leap in model quality may require exclusive content partnerships, as opposed to more scraping. 

“By entangling itself with the world’s premier IP holder, OpenAI makes itself indispensable to the very industry that threatened to sue it out of existence,” Sag wrote. 

AI and Hollywood have spent three years locked in a cold war over training data, likeness rights, and infringement. With Disney’s $1 billion investment, that era appears to be ending.

“This is the template for the future,” Sag wrote. “We are moving away from total war between AI and content, toward a negotiated partition of the world.”




Copyright © Miami Select.