Business
Moltbook is scary. Just not for the reasons so many headlines said.
Hello and welcome to Eye on AI. In this edition…why you really should be worried about Moltbook…OpenAI eyes an IPO…Elon Musk merges SpaceX and xAI…Novices don’t benefit as much from AI as people think…and why we need AI regulation now.
This week, everyone in AI—and a lot of people outside of it—was talking about Moltbook. The social media platform created for AI agents was a viral sensation. The phenomenon had a lot of people, even a fair number of normally sober and grounded AI researchers, wondering aloud about how far we are from sci-fi “takeoff” scenarios where AI bots self-organize, self-improve, and escape human control.
Now, it appears that a lot of the alarmism about Moltbook was misplaced. First of all, it isn’t clear how many of the most sci-fi-like posts on Moltbook were spontaneously generated by the bots and how many only came about because human users prompted their OpenClaw agents to output them. (The bots on Moltbook were all created using the hit OpenClaw, which is essentially an open-source agentic “harness”—software that enables AI agents to use a lot of other software tools—that can be yoked to any underlying AI model.) It’s even possible that some of the posts were actually from humans posing as bots.
Second, there’s no evidence the bots were actually plotting together to do anything nefarious, rather than simply mimicking language about plotting that they might have picked up in their training, which includes lots of sci-fi literature as well as the historical record of a lot of sketchy human activity on social media.
As I pointed out in a story for Fortune earlier today, many of the fear-mongering headlines around Moltbook echoed those that attended a 2017 Facebook experiment in which two chatbots developed a “secret language” to communicate with one another. Then, as now, a lot of my fellow journalists didn’t let the facts get in the way of a good story. Neither that older Facebook research nor Moltbook presents the kind of Skynet-like dangers that some of the coverage suggests.
Now for the bad news
But that’s kind of where the good news ends. Moltbook shows that when it comes to AI agents, we are in the Wild Wild West. As my colleague Bea Nolan points out in this excellently reported piece, Moltbook is a cybersecurity nightmare, chock full of malware, cryptocurrency pump and dump scams, and hidden prompt injection attacks—i.e. machine readable instructions, sometimes not easily detected by people, that try to hijack an AI agent into doing something it’s not supposed to do. According to security researchers, it seems that some OpenClaw users suffered significant data breaches after allowing their AI agents on to Moltbook.
Prompt injection is an unsolved cybersecurity challenge for all AI agents that can access the internet right now. And it’s why many AI experts said they are extremely careful about what software, tools, and data they allow AI agents to access. Some only let agents access the internet if they are in a virtual machine where they can’t gain access to important information, like passwords, work files, email, or banking information. But on the other hand, these security precautions make AI agents a lot less useful. The whole reason OpenClaw took off is that people wanted an easy way to spin up agents to do stuff for them.
Then there are the big AI safety implications. Just because there’s no evidence that OpenClaw agents have any independent volition, doesn’t mean that putting them in an uncontrolled conversation with other AI agents is a great idea. Once these agents have access to tools and the internet, it doesn’t really matter in some ways if they have any understanding of their own actions or are conscious. Merely by mimicking sci-fi scenarios they’ve ingested during training, it is possible that the AI agents could engage in activity that could cause real harm to a lot of people—engaging in cyberattacks, for instance. (In essence, these AI agents could function in ways that are not that different from super-potent “worm” computer viruses. No one thinks the ransomware WannaCry was conscious. It did massive worldwide damage nonetheless.)
Why Yann LeCun was wrong…about people, not AI
A few years ago, I attended an event at the Facebook AI Research Lab in Paris at which Yann LeCun, who was Meta’s chief AI scientist at the time, spoke. LeCun, who recently left Meta to launch his own AI startup, has always been skeptical of “takeoff” scenarios in which AI escapes human control. And at the event, he scoffed at the idea that AI would ever present existential risks.
For one thing, LeCun thinks today’s AI is far too dumb and unreliable to ever do anything world-jeopardizing. But secondly, LeCun found these AI “takeoff” scenarios insulting to AI researchers and engineers as a professional class. We aren’t dumb, LeCun argued. If we ever build anything where there was the remotest chance of AI escaping human control, we’d always build it in an “airlocked” sandbox, without access to the internet, and with a kill switch that AI couldn’t disable. In LeCun’s telling, the engineers would always be able to take an ax to the computer’s power cord before the AI could figure out how to break out of its digital cage.
Well, that may be true of the AI researchers and engineers who work for big companies, like Meta or Google DeepMind, or OpenAI or Anthropic for that matter. But now AI—thanks to the rise of coding agents and assistants—has democratized the creation of AI itself. Now a world full of independent developers can spin up AI agents. Peter Steinberger who created OpenClaw is an independent developer. Matt Schlicht, who created Moltbook, is an independent entrepreneur who vibe coded the social platform. And, contra LeCun, independent developers have consistently demonstrated a willingness to chuck AI systems out of the sandbox and into the wild, if only to see what happens…just for the LOLs.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
FORTUNE ON AI
AI is changing the CEO’s role—and could lead to a changing of the guard—by Phil Wahba
OpenAI launches Codex app to bring its coding models, which were used to build viral OpenClaw, to more users—by Beatrice Nolan
Exclusive: Anthropic announces partnerships with Allen Institute and Howard Hughes Medical Institute as it bets AI can make science more efficient—by Sharon Goldman
Exclusive: Longtime Google DeepMind researcher David Silver leaves to found his own AI startup—by Jeremy Kahn
AI IN THE NEWS
OpenAI lays groundwork for IPO in 2026 in race with Anthropic, SpaceX. OpenAI is laying the groundwork for a fourth-quarter IPO, holding informal talks with Wall Street banks and expanding its finance team as it races to be the first major generative-AI startup to go public ahead of rival Anthropic, the Wall Street Journal reported. The move comes despite significant challenges, including heavy losses, intensifying competition from Google, looming litigation from cofounder Elon Musk, and investor concerns about how OpenAI will finance hundreds of billions of dollars in AI infrastructure and chip commitments. Executives fear Anthropic—whose revenues are surging and which has signaled openness to an IPO this year—could beat them to market, while other tech giants such as SpaceX are also weighing blockbuster listings that could compete for investor attention.
OpenAI also in talks to raise up to $50 billion in pre-IPO round. Amazon is in talks to invest up to $50 billion in OpenAI, with CEO Andy Jassy and OpenAI chief Sam Altman holding direct discussions as part of a potential funding round that could total around $100 billion, CNBC reported. The investment would be notable given that Amazon has committed $8 billion to OpenAI rival Anthropic. The deal could include agreements for OpenAI to use Amazon’s AI chips and cloud infrastructure, which Anthropic is currently using too. The talks come as Amazon accelerates spending on AI and data centers—while cutting jobs elsewhere—and as OpenAI seeks other strategic investors including Microsoft, Nvidia and SoftBank ahead of a possible IPO.
Elon Musk merges SpaceX with xAI. SpaceX acquired xAI, folding Elon Musk’s cash-hungry AI startup into his space company in a deal that values the combined business at more than $1 trillion and cements SpaceX as the world’s most valuable private company. The merger gives xAI a financial lifeline. But SpaceX is planning an IPO as early as June, hoping to raise about $50 billion, and the merger with xAI could make it harder to win over investors who were excited about a pure play space company and may be concerned about xAI’s hefty losses and intense competition with other AI vendors. Musk says a key motivation is building space-based data centers to power future AI, a vision that excites some investors but raises technical and financial issues. Read more from the New York Times here.
Former OpenAI researcher launches another ‘neolab’ startup, seeks big fundraise. Core Automation, a new AI startup founded by former OpenAI research vice president Jerry Tworek, is seeking to raise between $500 million and $1 billion to build AI models using approaches it believes incumbents like OpenAI and Anthropic are underemphasizing, The Information reported. The company plans to rethink core AI training methods—including potentially moving beyond transformers and the gradient descent method used to train almost all neural networks—to enable continual learning, where models adapt as they are used. This requires less data and computing power than training the model in huge training runs and then locking its neural network weights in place. The effort adds to a growing wave of heavily funded AI “neolabs” betting that a fundamental overhaul of today’s model-development techniques is needed to unlock major breakthroughs, despite many having little or no revenue so far.
EYE ON AI RESEARCH
More evidence that AI may not be that good for novices. That’s the significance of new research from Anthropic, which performed randomized trials to see how coders gained mastery of a new programming library, comparing those who had an AI assistant to those who did not. Contrary to conventional wisdom, giving less experienced programmers access to AI did not significantly improve their productivity, while also impairing their ability to actually learn the coding library. Only if these junior programmers delegated the entire coding task to the AI did they see substantial productivity gains. But in those cases, the programmers also learned almost nothing about the coding library. The research can be found here on arxiv.org.
AI CALENDAR
Feb. 10-11: AI Action Summit, New Delhi, India.
Feb. 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.
March 2-5: Mobile World Congress, Barcelona, Spain.
March 12-18: South by Southwest, Austin, Texas.
March 16-19: Nvidia GTC, San Jose, Calif.
BRAIN FOOD
We need comprehensive AI regulation—in the U.K. and in the U.S. Earlier today I attended the House of Lords for a roundtable discussion around the launch of a Parliamentary One Page report from Lord Chris Holmes of Richmond. Holmes has been advocating for a comprehensive AI bill he introduced almost three years ago, but which has so far failed to progress. His hope is to pressure the current U.K. government, which has promised an AI bill but repeatedly failed to introduce one, to bring forward legislation of its own. Moltbook is yet another reason why now is the time to do so. And that applies in the U.S. too, where the Trump Administration has actively resisted any regulation.
There is already ample evidence of AI causing people harm. AI chatbots have been implicated in a number of suicides and therapists are reporting more and more patients showing up with forms of psychosis that seem to have been sparked by interactions with AI chatbots. People have used AI to create nonconsensual sexualized deepfakes and spread them across the internet. People have been denied loans due to algorithms. Worse, they’ve been wrongly targeted for arrest because of them. And now OpenClaw and Moltbook provide a timely reminder that there are no rules, no governance, and no effective cybersecurity safeguards currently around AI agents.
Let’s not wait for a real AI disaster to act.
