Facebook, TikTok and even LinkedIn are censoring abortion content even when it’s just medical information

Clinics, advocacy groups and individuals who share abortion-related content online say they are seeing informational posts being taken down even if the posts don’t clearly violate the platforms’ policies.

The groups, in Latin America and the United States, are denouncing what they see as censorship even in places where abortion is legal. Companies like Meta claim their policies have not changed, and experts attribute the takedowns to over-enforcement at a time when social media platforms are reducing spending on content moderation in favor of artificial intelligence systems that struggle with context, nuance and gray areas.

But abortion advocates say the removals have a chilling effect even if they are later reversed, and navigating platforms’ complex systems of appeals is often difficult, if not impossible.

For months, the digital rights group Electronic Frontier Foundation has been collecting examples from social media users who’ve seen their abortion-related posts taken down or accounts suspended.

“The goal of it was to better understand the breadth of the problem, who’s affected, and with what consequences. Obviously, then once we had a better understanding of the trends, we hope to call attention to the issue, demand accountability and increase transparency in the moderation practices and ultimately, help stop the platforms from censoring this essential, sometimes life-saving information,” said Jennifer Pinsof, staff attorney at EFF.

The organization says it received close to 100 examples of content takedowns from abortion providers, advocacy groups and individuals on Meta platforms such as Instagram and Facebook, as well as TikTok and even LinkedIn.

It’s not clear whether the takedowns are increasing or whether people are simply posting more about abortion, especially abortion medication such as mifepristone, since the Supreme Court overturned Roe v. Wade in 2022.

“I would say there was a wave of take-downs shortly after the election that was noticeable enough that it resulted in multiple news stories. But again, it’s not something that’s very easy to measure,” Pinsof said.

Brenna Miller, a TikTok creator who often posts about abortion and works in reproductive health care, said she made a video unboxing an abortion pill package from the nonprofit carafem, in which she talked about what was in the package and discussed the process of taking the pills at home.

She posted the video in December. It was up for at least a week before TikTok removed it, saying it violated the platform’s community guidelines.

“TikTok does have an appeal process, which I tried to go through. And it just locked me out. It said that I didn’t have the option to appeal it,” Miller said. “So I started emailing them, trying to get in contact with a person to just even get an explanation of like, how I violated the community guidelines with an informational video. It took months for me to even get in contact with a person and I don’t even (think) it was really a person. They were sending an automated message for months straight.”

Eventually, the video was restored in May with no explanation.

“I work in public health in my 9-to-5 and we’re seeing a real suppression of public health information and dissemination of that information, particularly in the reproductive health space. And people are scared,” Miller said. “It’s really important to get people this medically accurate information so that they’re not afraid and they actually can access the health care that they need.”

TikTok does not generally prohibit sharing information about abortion or abortion medication. It does, however, regulate the sale and marketing of drugs, including abortion pills, and it prohibits misinformation that could harm people.

On Facebook, the Red River Women’s Clinic in Moorhead, Minnesota, put up a post saying it offers both surgical and medication abortion after it heard from a patient who didn’t know it offered medication abortion. The post included a photo of mifepristone. When the clinic tried to turn the post into an ad, its account was suspended. The clinic says that since it does not offer telehealth services, it was not attempting to sell the medication. The clinic appealed the decision and won a reversal, but the account was suspended again shortly after. Ultimately, the clinic was able to resolve the issue through a connection at Meta.

“We were not trying to sell drugs. We were just informing our followers about a service, a legal service that we offer. So that’s alarming that, you know, that was flagged as not fitting into their standards,” said clinic director Tammi Kromenaker. “To have a private company like Meta just go with the political winds and say, we don’t agree with this, so we’re going to flag these and we’re going to shut these down, is very alarming.”

Meta said its policies and enforcement regarding medication-related abortion content have not changed and were not impacted by the changes announced in January, which included the end of its fact-checking program.

“We allow posts and ads promoting health care services like abortion, as well as discussion and debate around them, as long as they follow our policies — and we give people the opportunity to appeal decisions if they think we’ve got it wrong,” the company said in a statement.

In late January, Emory University’s Center for Reproductive Health Research in the Southeast, or RISE, put up an Instagram post about mifepristone that described what it is and why it matters. In March, its account was suspended. The organization appealed, but the appeal was denied and its account was permanently deleted. That decision was later reversed after the group was able to connect with someone at Meta. Once the account was restored, it became clear it had been flagged for trying to “buy, sell, promote or exchange illegal or restricted drugs.”

“Where I get concerned is (that) with the increased use of social media, we also have seen correspondingly an increased rise of misinformation and disinformation on social media platforms about many health topics,” said Sara Redd, director of research translation at RISE and an assistant professor at Emory University. “One of our main goals through our communications and through our social media is to promote scientifically accurate, evidence-based information about reproductive health care, including abortion.”

Laura Edelson, assistant professor of computer science at Northeastern University, said that at the end of the day, while people love to debate platforms’ policies and what the policies should be, what matters is people’s “experiences of sharing information and the information they are able to get and they’re able to see.”

“This is just a policy that is not being implemented well. And that, in and of itself, is not all that surprising because we know that Meta has dramatically reduced spending on content moderation efforts,” Edelson said. “There are fewer people who are spending time maintaining automated models. And so content that is even vaguely close to borderline is at risk of being taken down.”



‘Creativity is the new productivity’: Bob Iger on why Disney chose to be ‘aggressive’

In a landmark move that signals a definitive shift in how major media conglomerates approach artificial intelligence (AI), OpenAI has gone from being the company whose tools were used to generate unapproved Disney princesses to striking a $1 billion partnership with the House of Mouse itself. Disney CEO Bob Iger unpacked the deal jointly with OpenAI CEO Sam Altman in a TV interview on CNBC’s Squawk on the Street, explaining, “we’d rather participate in the rather dramatic growth, rather than just watching it happen and essentially being disrupted by it.” He also reframed how AI is reshaping entertainment, business, even work itself: “Someone once said to me that creativity is the new productivity, and I think you’re starting to see that more and more.”

The deal, which brings Disney’s intellectual property to OpenAI’s video generation platform Sora, is structured to balance “aggressive” intellectual property protection with a willingness to embrace inevitable technological disruption, Iger said. Under the terms of the three-year agreement, Disney will license approximately 200 characters for use within Sora, allowing users to create short-form videos featuring iconic figures ranging from Mickey Mouse to Star Wars personalities.

Iger framed the partnership not as a concession to AI, but as a necessary evolution—and one that is actually good for human artists, because the deal does not include names and likenesses, nor does it include character voices. “And so, in reality, this does not in any way represent a threat to the creators at all, in fact, the opposite. I think it honors them and respects them, in part because there’s a license fee associated with it.” Iger stressed repeatedly that Disney wants to be on the cutting edge of how technology reinvents entertainment. “No human generation has ever stood in the way of technological advance, and we don’t intend to try.”

The partnership stands in stark contrast to Disney’s relationship with other tech giants. On the same day the OpenAI deal was announced, Disney sent a cease-and-desist letter to Google regarding alleged misuse of IP. Iger explained the divergence in approach by noting that, unlike Google, OpenAI has agreed to “honor and value and respect” Disney’s content through a licensing fee and safety guardrails. “We have been aggressive at protecting our IP, and we have gone after other companies that have not honored our IP,” Iger said, adding conversations with Google had failed to “bear fruit.”

A win-win partnership?

For OpenAI, reportedly under pressure from the aforementioned Google—whose Gemini 3 has been hailed by AI luminaries such as Salesforce billionaire Marc Benioff—the deal represents a validation of its generative video technology. Altman told CNBC user demand for Disney characters was “sort-of off the charts,” and he envisioned a future in which fans can generate custom content, such as a “Buzz Lightyear custom birthday video” or a personalized lightsaber scene. Altman argued the partnership would unlock “latent creativity” in the general public by lowering the skill and effort required to bring ideas to life.

The collaboration will also extend to Disney’s own streaming platform. Iger revealed plans to integrate “user prompted Sora-generated content” directly into Disney+. He said specifically that Disney has “wanted for a long time to have what we will call user-generated content on our platform,” suggesting the partnership is also a defensive move against streaming giant YouTube and social media epicenter TikTok, which is partially under the control of the Ellison family that also controls entertainment rival Paramount.

The deal includes undisclosed warrants, giving Disney a financial stake in OpenAI’s success. Iger confirmed the warrants and declined to offer more specifics. He compared this forward-thinking approach to Disney’s 2005 decision to license shows to iTunes, viewing the OpenAI partnership as the modern equivalent of boarding a “profound wave” of societal change.

Iger revealed the groundwork for this deal was laid several years ago, saying he had first met Altman in 2022, when Iger was retired from Disney and before his comeback as CEO. Altman gave Iger a “bit of a road map” about where OpenAI was headed, and Disney has been “extremely impressed” with OpenAI’s growth since then, with Altman’s 2022 predictions coming true far faster than either party anticipated. Iger added that Disney sees great opportunities to license other products from OpenAI in the years ahead, which he views as a major push toward “essentially accomplish[ing] a lot of what we feel we need to accomplish in the years ahead.”



Why Jerome Powell’s latest rate cut still won’t help you get a lower mortgage rate

For the third meeting in a row, the Federal Reserve cut interest rates, a so-called “hawkish cut” intended to support a softening labor market even as the central bank signals a pause in further easing. The 0.25-percentage-point cut brought the target range for the federal funds rate to 3.5% to 3.75%, but economists and housing experts warn it won’t affect mortgage rates the way potential homebuyers were hoping.

Chen Zhao, head of economics research at Redfin, wrote in a Wednesday post that the Fed’s December interest rate cut won’t move mortgage rates “because markets have already priced it in.” 

The Federal Reserve controls the federal funds rate, the rate banks charge each other, which is more closely tied to credit cards, personal loans, and home-equity lines. A standard 30-year mortgage, on the other hand, is a long-term loan, and the pricing of those loans is tied more closely to yields on longer-term bonds like the 10-year Treasury and mortgage-backed securities.
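
To make that distinction concrete, here is a minimal sketch of the common “Treasury plus a spread” rule of thumb for mortgage pricing; the rule of thumb itself, the 10-year yield and the spread below are illustrative assumptions, not figures from Redfin, the Fed or this article.

```python
# Rule-of-thumb sketch (an assumption, not from the article): 30-year mortgage
# pricing tends to track the 10-year Treasury yield plus a spread, rather than
# the federal funds rate the Fed sets. All numbers below are hypothetical.

def estimated_mortgage_rate(treasury_10y: float, spread: float = 0.022) -> float:
    """Approximate a 30-year mortgage rate as the 10-year Treasury yield plus a spread."""
    return treasury_10y + spread

fed_funds_midpoint = 0.03625  # midpoint of the 3.5%-3.75% target range
treasury_10y = 0.041          # hypothetical 10-year Treasury yield

print(f"Fed funds midpoint:     {fed_funds_midpoint:.2%}")
print(f"Estimated 30-year rate: {estimated_mortgage_rate(treasury_10y):.2%}")
# A Fed cut moves the first number directly; mortgage pricing follows the second,
# which is why a fully anticipated cut can leave mortgage rates essentially unchanged.
```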

“Since this rate cut was no surprise, the markets have taken it in stride,” 43-year mortgage industry veteran Melissa Cohn, regional vice president of William Raveis Mortgage, told Fortune. She said the real turning point will come with the next rounds of economic data: “The future of bond yields and mortgage rates will be determined as new data on jobs and inflation get released.”

The current average mortgage rate is 6.3%, according to Mortgage News Daily. That is of course much higher than the sub-3% rates that homebuyers from the pandemic era remember, although it is also a far cry from the 8% peak in October 2023.

“The committee’s projections and Chair Jerome Powell’s remarks indicate that this will be the last interest cut for a while,” Zhao wrote. “Given the underlying economic fundamentals of 3% inflation coupled with a weakening—but not recessionary—labor market, the Fed is likely to hold steady in the near future.

“Mortgage rates are unlikely to fall or rise by much,” she continued.

How mortgage rates affect housing affordability

Mortgage rates are just one piece of the housing affordability puzzle. While they may feel like the major roadblock to buying a home, especially with the pandemic housing boom still a recent memory, they are only one factor.

To put it in perspective, Zillow reported earlier this year that not even a 0% mortgage rate would make buying a house affordable in several major U.S. cities.

Let that sink in. 

Even without any interest accrued on a loan, homebuying is still out of reach for the typical American. Much of the affordability crisis has to do with home prices, which are more than 50% higher than in 2020. This has locked new homebuyers out of the market and kept current homeowners from selling.

The drop in mortgage rates required to make an average home affordable for the typical buyer (to about 4.43%) is “unrealistic,” according to Zillow economic analyst Anushna Prakash.

“It’s unlikely rates will drop to the mid-[4% range] anytime soon,” Arlington, Va.–based real estate agent Philippa Main told Fortune. “And even if they did, housing prices are still at historic highs.” With 11 years of experience, Main is also a licensed mortgage loan officer.
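
To see how much the rate level matters for a monthly payment, here is a back-of-the-envelope sketch using the standard fixed-rate amortization formula; the $400,000 loan amount is a hypothetical figure chosen for illustration, not one cited by Zillow, Redfin or the agents quoted above, and the rates simply mirror the roughly 6.3% and 4.43% levels discussed in this piece.

```python
# Back-of-the-envelope sketch: monthly principal-and-interest payment on a
# fixed-rate mortgage. The $400,000 loan amount is hypothetical.

def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard amortization formula: M = P * r(1+r)^n / ((1+r)^n - 1)."""
    n = years * 12
    if annual_rate == 0:
        return principal / n  # no interest: just spread the principal over the term
    r = annual_rate / 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

loan = 400_000  # hypothetical loan amount
for rate in (0.063, 0.0443, 0.0):  # roughly today's rate, Zillow's ~4.43%, and 0%
    print(f"{rate:>6.2%}: ${monthly_payment(loan, rate):,.0f}/month")
```

On those hypothetical numbers, the payment falls from roughly $2,480 a month at 6.3% to about $2,010 at 4.43%, and even at 0% the principal alone is more than $1,100 a month before taxes and insurance, which is why home prices, not just rates, keep buying out of reach.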

To be sure, some economists see some light at the end of the tunnel for homebuyers plagued by high mortgage rates and home prices.

“For prospective buyers who have been waiting on the sidelines, the housing market is finally starting to listen,” wrote First American chief economist Mark Fleming in an Aug. 29 blog post. First American’s analysis takes into account inflation, and Fleming said: “The price of a house today is not directly comparable to the price of that same house 30 years ago.”



OpenAI debuts GPT-5.2 in effort to silence concerns it is falling behind its rivals

OpenAI, under increasing competitive pressure from Google and Anthropic, has debuted a new AI model, GPT-5.2, that it says beats all existing models by a substantial margin across a wide range of tasks.

The new model, which is being released less than a month after OpenAI debuted its predecessor, GPT-5.1, performed particularly well on a benchmark of complicated professional tasks across a range of “knowledge work”—from law to accounting to finance—as well as on evaluations involving coding and mathematical reasoning, according to data OpenAI released.

Fidji Simo, the former Instacart CEO who now serves as OpenAI’s CEO of applications, told reporters that the model should not be seen as a direct response to Google’s Gemini 3 Pro model, which was released last month. That release prompted OpenAI CEO Sam Altman to issue a “code red,” delaying the rollout of several initiatives in order to focus more staff and computing resources on improving its core product, ChatGPT.

“I would say that [the Code Red] helps with the release of this model, but that’s not the reason it is coming out this week in particular, it has been in the works for a while,” she said.

She said the company had been building GPT-5.2 “for many months.” “We don’t turn around these models in just a week. It’s the result of a lot of work,” she said. The model had been known internally by the code name “Garlic,” according to a story in The Information. The day before the model’s release, Altman teased its imminent rollout by posting to social media a video clip of himself cooking a dish with a large amount of garlic.

OpenAI executives said the model had been in the hands of “alpha customers” who helped test its performance for “several weeks,” a time period that would mean the model was completed prior to Altman’s “code red” declaration.

These testers included legal AI startup Harvey, note-taking app Notion, and file-management software company Box, as well as Shopify and Zoom.

OpenAI said these customers found that GPT-5.2 demonstrated a “state of the art” ability to use other software tools to complete tasks and excelled at writing and debugging code.

Coding has become one of the most competitive use cases for AI model deployment within companies. Although OpenAI had an early lead in the space, Anthropic’s Claude model has proved especially popular among enterprises, exceeding OpenAI’s market share, according to some figures. OpenAI is no doubt hoping GPT-5.2 will convince customers to turn back to its models for coding.

Simo said the “Code Red” was helping OpenAI focus on improving ChatGPT. “Code Red is really a signal to the company that we want to marshal resources in one particular area, and that’s a way to really define priorities and define things that can be deprioritized,” she said. “So we have had an increase in resources focused on ChatGPT in general.”

The company also said its new model is better than the company’s earlier ones at providing “safe completions”—which it defines as providing users with helpful answers while not saying things that might contribute to or worsen mental health crises.

“On the safety side, as you saw through the benchmarks, we are improving on pretty much every dimension of safety, whether that’s self harm, whether that’s different types of mental health, whether that’s emotional reliance,” Simo said. “We’re very proud of the work that we’re doing here. It is a top priority for us, and we only release models when we’re confident that the safety protocols have been followed, and we feel proud of our work.”

The release of the new model came on the same day a new lawsuit was filed against the company alleging that ChatGPT’s interactions with a psychologically troubled user had contributed to a murder-suicide in Connecticut. The company also faces several other lawsuits alleging ChatGPT contributed to people’s suicides. The company called the Connecticut murder-suicide “incredibly heartbreaking” and said it is continuing to improve “ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.” 

GPT-5.2 showed a large jump in performance across several benchmark tests of interest to enterprise customers. It met or exceeded human expert performance on a wide range of difficult professional tasks, as measured by OpenAI’s GDPval benchmark, 70.9% of the time. That compares to just 38.8% of the time for GPT-5, a model that OpenAI released in August; 59.6% for Anthropic’s Claude Opus 4.5; and 53.3% for Google’s Gemini 3 Pro.

On the software development benchmark, SWE-Bench Pro, GPT-5.2 scored 55.6%, which was almost 5 percentage points better than its predecessor, GPT-5.1, and more than 12% better than Gemini 3 Pro.

OpenAI’s Aidan Clark, vice president of research (training), declined to answer questions about exactly what training methods had been used to upgrade GPT-5.2’s performance, although he said that the company had made improvements across the board, including in pretraining, the initial step in creating an AI model.

When Google released its Gemini 3 Pro model last month, its researchers also said the company had made improvements in pretraining as well as post-training. This surprised some in the field who believed that AI companies had largely exhausted the ability to wring substantial improvements out of the pretraining stage of model building, and it was speculated that OpenAI may have been caught off guard by Google’s progress in this area.


