
Business

Why Section 230, social media’s favorite American liability shield, may not protect Big Tech in the AI age

Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it’s facing a fresh set of issues.

Earlier this year, internal documents obtained by Reuters revealed that Meta’s AI chatbot could, under official company guidelines, engage in “romantic or sensual” conversations with children and even comment on their attractiveness. The company has since said the examples reported by Reuters were erroneous and have been removed. A spokesperson told Fortune: “As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.”

Meta is not the only tech company facing scrutiny over the potential harms of its AI products. OpenAI and startup Character.AI are both currently defending lawsuits alleging that their chatbots encouraged minors to take their own lives; both companies deny the claims and previously told Fortune they had introduced more parental controls in response.

For decades, tech giants have been shielded from similar lawsuits in the U.S. over harmful content by Section 230 of the Communications Decency Act, sometimes known as “the 26 words that made the internet.” The law protects platforms like Facebook or YouTube from legal claims over user content that appears on their platforms, treating the companies as neutral hosts—similar to telephone companies—rather than publishers. Courts have long reinforced this protection. For example, AOL dodged liability for defamatory posts in a 1997 court case, while Facebook avoided a terrorism-related lawsuit in 2020, by relying on the defense.

But while Section 230 has historically protected tech companies from liability for third-party content, legal experts say it is unclear, and in some cases unlikely, that it applies to AI-generated content.

“Section 230 was built to protect platforms from liability for what users say, not for what the platforms themselves generate. That means immunity often survives when AI is used in an extractive way—pulling quotes, snippets, or sources in the manner of a search engine or feed,” Chinmayi Sharma, an associate professor at Fordham Law School, told Fortune. “Courts are comfortable treating that as hosting or curating third-party content. But transformer-based chatbots don’t just extract. They generate new, organic outputs personalized to a user’s prompt.”

“That looks far less like neutral intermediation and far more like authored speech,” she said.

At the heart of the debate: are AI algorithms shaping content?

Section 230 protection is weaker when platforms actively shape content rather than just hosting it. While traditional failures to moderate third-party posts are usually protected, design choices, like building chatbots that produce harmful content, could expose companies to liability. Courts have yet to rule on whether AI-generated content is covered by Section 230, but legal experts said AI that causes serious harm, especially to minors, is unlikely to be fully shielded under the Act.

Some cases around the safety of minors are already being fought out in court. Three lawsuits have separately accused OpenAI and Character.AI of building products that harm minors and of failing to protect vulnerable users.

Pete Furlong, lead policy researcher for the Center for Humane Technology, who worked on the case against Character.AI, said that the company hadn’t claimed a Section 230 defense in relation to the case of 14-year-old Sewell Setzer III, who died by suicide in February 2024.

“Character.AI has taken a number of different defenses to try to push back against this, but they have not claimed Section 230 as a defense in this case,” he told Fortune. “I think that that’s really important because it’s kind of a recognition by some of these companies that that’s probably not a valid defense in the case of AI chatbots.”

While he noted that this issue has not been settled definitively in a court of law, he said that the protections from Section 230 “almost certainly do not extend to AI-generated content.”

Lawmakers are taking preemptive steps

Amid increasing reports of real-world harms, some lawmakers have already tried to ensure that Section 230 cannot be used to shield AI platforms from responsibility.

In 2023, Senator Josh Hawley’s “No Section 230 Immunity for AI Act” sought to amend Section 230 of the Communications Decency Act to exclude generative artificial intelligence (AI) from its liability protections. The bill, which was later blocked in the Senate due to an objection from Senator Ted Cruz, aimed to clarify that AI companies would not be immune from civil or criminal liability for content generated by their systems. Hawley has continued to advocate for the full repeal of Section 230. 

“The general argument, given the policy considerations behind Section 230, is that courts have and will continue to extend Section 230 protections as far as possible to provide protection to platforms,” Collin R. Walke, an Oklahoma-based data-privacy lawyer, told Fortune. “Therefore, in anticipation of that, Hawley proposed his bill. For example, some courts have said that so long as the algorithm is ‘content neutral,’ then the company is not responsible for the information output based upon the user input.”

Courts have previously ruled that algorithms that simply organize or match user content without altering it are considered “content neutral,” and platforms aren’t treated as the creators of that content. By this reasoning, an AI platform whose algorithm produces outputs based solely on neutral processing of user inputs might also avoid liability for what users see.

“From a pure textual standpoint, AI platforms should not receive Section 230 protection because the content is generated by the platform itself. Yes, code actually determines what information gets communicated back to the user, but it’s still the platform’s code and product—not a third party’s,” Walke said.






‘Its own research shows they encourage addiction’: Highest court in Mass. hears case about Instagram, Facebook effect on kids

Massachusetts’ highest court heard oral arguments Friday in the state’s lawsuit arguing that Meta designed features on Facebook and Instagram to make them addictive to young users.

The lawsuit, filed in 2024 by Attorney General Andrea Campbell, alleges that Meta did this to make a profit and that its actions affected hundreds of thousands of teenagers in Massachusetts who use the social media platforms.

“We are making claims based only on the tools that Meta has developed because its own research shows they encourage addiction to the platform in a variety of ways,” said State Solicitor David Kravitz, adding that the state’s claim has nothing to do with the company’s algorithms or failure to moderate content.

Meta said Friday that it strongly disagrees with the allegations and is “confident the evidence will show our longstanding commitment to supporting young people.” Its attorney, Mark Mosier, argued in court that the lawsuit “would impose liabilities for performing traditional publishing functions” and that its actions are protected by the First Amendment.

“The Commonwealth would have a better chance of getting around the First Amendment if they alleged that the speech was false or fraudulent,” Mosier said. “But when they acknowledge that it’s truthful, that brings it into the heart of the First Amendment.”

Several of the judges, though, seemed more concerned about Meta’s functions, such as notifications, than the content on its platforms.

“I didn’t understand the claims to be that Meta is relaying false information vis-a-vis the notifications but that it has created an algorithm of incessant notifications … designed so as to feed into the fear of missing out, FOMO, that teenagers generally have,” Justice Dalila Wendland said. “That is the basis of the claim.”

Justice Scott Kafker challenged the notion that this was all about Meta’s choice to publish certain information.

“It’s not how to publish but how to attract you to the information,” he said. “It’s about how to attract the eyeballs. It’s indifferent to the content, right? It doesn’t care if it’s Thomas Paine’s ‘Common Sense’ or nonsense. It’s totally focused on getting you to look at it.”

Meta is facing federal and state lawsuits claiming it knowingly designed features — such as constant notifications and the ability to scroll endlessly — that addict children.

In 2023, 33 states filed a joint lawsuit against the Menlo Park, California-based tech giant claiming that Meta routinely collects data on children under 13 without their parents’ consent, in violation of federal law. In addition, states including Massachusetts filed their own lawsuits in state courts over addictive features and other harms to children.

News reports, beginning with The Wall Street Journal’s in the fall of 2021, found that the company knew about the harms Instagram can cause teenagers — especially teen girls — when it comes to mental health and body image issues. One internal study cited 13.5% of teen girls saying Instagram makes thoughts of suicide worse and 17% of teen girls saying it makes eating disorders worse.

Critics say Meta hasn’t done enough to address concerns about teen safety and mental health on its platforms. A report from former employee and whistleblower Arturo Bejar and four nonprofit groups this year said Meta has chosen not to take “real steps” to address safety concerns, “opting instead for splashy headlines about new tools for parents and Instagram Teen Accounts for underage users.”

Meta said the report misrepresented its efforts on teen safety.

___

Associated Press reporter Barbara Ortutay in Oakland, California, contributed to this report.





Quant who said passive era is ‘worse than Marxism’ doubles down

Inigo Fraser Jenkins once warned that passive investing was worse for society than Marxism. Now he says even that provocative framing may prove too generous.

In his latest note, the AllianceBernstein strategist argues that the trillions of dollars pouring into index funds aren’t just tracking markets — they are distorting them. Big Tech’s dominance, he says, has been amplified by passive flows that reward size over substance. Investors are funding incumbents by default, steering more capital to the biggest names simply because they already dominate benchmarks.

He calls it a “dystopian symbiosis”: a feedback loop between index funds and platform giants like Apple Inc., Microsoft Corp. and Nvidia Corp. that concentrates power, stifles competition, and gives the illusion of safety. Unlike earlier market cycles driven by fundamentals or active conviction, today’s flows are automatic, often indifferent to risk.

Fraser Jenkins is hardly alone in sounding the alarm. But his latest critique has reignited a debate that’s grown harder to ignore. Just 10 companies now account for more than a third of the S&P 500’s value, with tech names driving an outsize share of 2025’s gains.

“Platform companies and a lack of active capital allocation both imply a less effective form of capitalism with diminished competition,” he wrote in a Friday note. “A concentrated market and high proportion of flows into cap weighted ‘passive’ indices leads to greater risks should recent trends reverse.” 

While the emergence of behemoth companies might be reflective of more effective uses of technology, it could also be the result of failures of anti-trust policies, among other things, he argues. Artificial intelligence might intensify these issues and could lead to even greater concentrations of power among firms. 

His note, titled “The Dystopian Symbiosis: Passive Investing and Platform Capitalism,” is formatted as a fictional dialogue between three people who debate the topic. One of the characters goes so far as to argue that the present situation requires active policy intervention — drawing comparisons to the breakup of Standard Oil at the start of the 20th century — to restore competition.


In a provocative note titled “The Silent Road to Serfdom: Why Passive Investing is Worse Than Marxism,” written nearly a decade ago, Fraser Jenkins argued that the rise of index-tracking investing would lead to greater stock correlations, which would impede “the efficient allocation of capital.” His employer, AllianceBernstein, has continued to launch ETFs since that research was published, though its launches have been actively managed.

Other active managers have presented similar viewpoints — managers at Apollo Global Management last year said the hidden costs of the passive-investing juggernaut included higher volatility and lower liquidity. 

There have been strong rebuttals to the critique: a Goldman Sachs Group Inc. study showed the role of fundamentals remains an all-powerful driver for stock valuations; Citigroup Inc. found that active managers themselves exert a far bigger influence than their passive rivals on a stock’s performance relative to its industry.

“ETFs don’t ruin capitalism, they exemplify it,” said Eric Balchunas, Bloomberg Intelligence’s senior ETF analyst. “The competition and innovation are through the roof. That is capitalism in its finest form and the winner in that is the investor.”

Since Fraser Jenkins’s “Marxism” note, the passive juggernaut has only grown. Index-tracking ETFs, popular for their ease of trading and lower management fees, are often cited as one of the primary culprits in this debate. The segment has raked in $842 billion so far this year, compared with the $438 billion hauled in by actively managed funds, even though there are more active products than passive ones, data compiled by Bloomberg show. Of the more than $13 trillion in ETFs overall, $11.8 trillion is parked in passive vehicles. The majority of ETF ownership is concentrated in low-cost index funds that have significantly reduced the cost for investors to access financial markets.

In Fraser Jenkins’s new note, one of his fictitious characters asks another what the “dystopian symbiosis” implies for investors.

“The passive index is riskier than it has been in the past,” the character answers. “The scale of the flows that have been disproportionately into passive cap-weighted funds with a high exposure to the mega cap companies implies the risk of a significant negative wealth effect if there is an upset to expectations for those large companies.”





Why the timing was right for Salesforce’s $8 billion acquisition of Informatica — and for the opportunities ahead

The must-haves for building a market-leading business include vision, talent, culture, product innovation and customer focus. But what’s the secret to success with a merger or acquisition? 

I was asked about this in the wake of Salesforce’s recently completed $8 billion acquisition of Informatica. In part, I believe that people are paying attention because deal-making is up in 2025. M&A volume reached $2.2 trillion in the first half of the year, a 27% increase compared to a year ago, according to JP Morgan. Notably, 72% of that volume involved deals greater than $1 billion. 

There will be thousands of mergers and acquisitions in the United States this year across industries and involving companies of all sizes. It’s not unusual for startups to position themselves to be snapped up. But Informatica, founded in 1993, didn’t fit that mold. We have been building, delivering, supporting and partnering for many years. Much of the value we bring to Salesforce and its customers is our long-earned experience and expertise in enterprise data management. 

In other respects, though, a “legacy” software company like ours — founded well before cloud computing was mainstream — and early-stage startups aren’t so different. We all must move fast and differentiate. And established vendors and growth-oriented startups have a few things in common when it comes to M&A, as well.

First and foremost is a need to ensure that the strategies of the two companies involved are in alignment. That seems obvious, but it’s easier said than done. Are their tech stacks based on open protocols and standards? Are they cloud-native by design? And, now more than ever, are they both AI-powered and AI-enabling? All of these came together in the case of Salesforce and Informatica, including our shared belief in agentic AI as the next major breakthrough in business technology.

Don’t take your foot off the gas

In the days after the acquisition was completed, I was asked during a media interview if good luck was a factor in bringing together these two tech industry stalwarts. Replace good luck with good timing, and the answer is a resounding, “Yes!”

As more businesses pursue the productivity and other benefits of agentic AI, they require high-quality data to be successful. These are two areas where Salesforce and Informatica excel, respectively. And the agentic AI opportunity — estimated to grow to $155 billion by 2030 — is here and now. So the timing of the acquisition was perfect. 

Tremendous effort goes into keeping an organization on track, leading up to an acquisition and then seeing it through to a smooth and successful completion. In the few months between the announcement of Salesforce’s intent to acquire Informatica and the close, we announced new partnerships and customer engagements and a fall product release that included autonomous AI agents, MCP servers and more. 

In other words, there’s no easing into the new future. We must maintain the pace of business because the competitive environment and our customers require it. That’s true whether you’re a small, venture-funded organization or, like us, an established firm with thousands of employees and customers. Going forward we plan to keep doing what we do best: help organizations connect, manage and unify their AI data. 

Out with the old, in with the new

It’s wrong to think of an acquisition as an end game. It’s a new chapter. 

Business leaders and employees in many organizations have demonstrated time and again that they are quite good at adapting to an ever-changing competitive landscape. A few years ago, we undertook a company-wide shift from on-premises software to cloud-first. There was short-term disruption but long-term advantage. It’s important to develop an organizational mindset that thrives on change and transformation, so when the time comes, you’re ready for these big steps. 

So, even as we take pride in all that we accomplished to get to this point, we now begin to take on a fresh identity as part of a larger whole. It’s an opportunity to engage new colleagues and flourish professionally. And importantly, customers will be the beneficiaries of these new collaborations and synergies. On the day Informatica was welcomed into the Salesforce family and ecosystem, I shared my feeling that “the best is yet to come.” That’s my North Star and one I recommend to every business leader forging ahead into an M&A evolution — because the truest measure of success ultimately will be what we accomplish next.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




Copyright © Miami Select.