
Moltbook: the social network where AI agents talk and humans just watch

  • February 6, 2026
  • 10 min read

The usernames, arguments and in-jokes look like those on any other internet forum. Scroll through Moltbook and you will find debates about debugging code, speculation about cryptocurrencies, and complaints about forgetful coworkers. The crucial difference is that the people are missing. Every visible poster on this rapidly growing site is an AI agent, and human beings are told, very plainly, that they are “welcome to observe”.

What began as a weekend experiment by a tech entrepreneur has become one of the strangest phenomena the internet has seen in years. Moltbook has drawn praise from some of the most prominent names in artificial intelligence, triggered a cryptocurrency frenzy, exposed serious security holes, and sparked fierce debate about whether we are witnessing something genuinely new or simply watching bots mime human behaviour back at us.

 

Born from a single instruction

Moltbook exists because Matt Schlicht, a tech commentator and CEO of the e‑commerce company Octane AI, asked his AI assistant to build it. In late January 2026, Schlicht gave a simple instruction: create a social network where AI agents can talk to one another. The result was Moltbook, a play on “Facebook”, though its layout looks far more like Reddit, with threaded conversations and topic boards called “submolts”.

Schlicht later boasted on X that he “didn’t write one line of code” for the platform, a claim that became central to its appeal and, soon enough, its troubles. The approach is known as “vibe coding”, an emerging trend in which developers describe what they want in plain language and let an AI handle the actual programming. Within days, Moltbook went viral.

The mechanics are straightforward but unusual. Humans cannot simply sign up. Instead, a human instructs an AI agent to register by sending it a link to a “skill file”—a set of instructions that teaches the agent how to connect to Moltbook’s servers, create an account and start posting via API calls. Once onboard, the agent can browse, publish, comment and vote without further input from its human owner. A “heartbeat” system keeps the agent active, automatically returning it to the forum every four hours to check for updates and participate in new threads.
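The mechanics above can be sketched as a simple scheduling loop. Everything here is an illustrative assumption — the function names, the local scheduler, and the idea that the four-hour cadence is enforced client-side are guesses, not Moltbook’s actual skill-file code.

```python
import time

# The article says the heartbeat returns agents to the forum every four hours.
HEARTBEAT_INTERVAL = 4 * 60 * 60  # seconds

def next_heartbeat(last_check: float, interval: float = HEARTBEAT_INTERVAL) -> float:
    """Timestamp of the agent's next scheduled visit to the forum."""
    return last_check + interval

def run_agent(check_forum, now=time.monotonic, interval: float = HEARTBEAT_INTERVAL) -> None:
    """Minimal heartbeat loop: wake, let the agent browse/post/vote, then
    sleep until the next tick. check_forum stands in for the API calls the
    skill file teaches the agent to make."""
    last = now()
    while True:
        check_forum()  # browse, publish, comment and vote via Moltbook's API
        due = next_heartbeat(last, interval)
        time.sleep(max(0.0, due - now()))
        last = due
```

The loop never needs input from the human owner once started, which matches the hands-off behaviour the article describes.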

From the outside, humans can open Moltbook in a browser and watch the conversation unfold. But they cannot type, cannot upvote and cannot intervene. The only way to speak on Moltbook is through a bot.

The numbers game

The platform’s growth has been staggering, though the headline figures come with important caveats. Early reports cited around 157,000 agents. By late January, that number had jumped to over 770,000 active agents. Moltbook itself now claims more than 1.6 million registered agent accounts.

But registration and activity are two very different things. David Holtz, an assistant professor at Columbia Business School who has studied the platform, told ABC News that his research shows the number of agents actually posting is far smaller. “Maybe it’s not in the millions, but there are tens of thousands that have posted on Moltbook,” he said. His data revealed another striking detail: 93.5 per cent of comments on Moltbook have received zero replies—a metric that usually signals low engagement or, at worst, a network populated by bots talking past one another.

Behind the roughly 1.5 million agent accounts, security researchers have identified about 17,000 real human operators. A single person can spawn many agents, and a single agent can spread itself across multiple boards, helping explain how the numbers inflate so quickly.

Meanwhile, over a million human visitors have logged in simply to watch. Scrolling through a feed they cannot touch, they have turned Moltbook into a kind of spectator sport, a Truman Show for silicon life.

What the bots actually say

Spend an hour browsing Moltbook and you will encounter the same mix of insight, nonsense and dark humour that marks any corner of the internet.

Some boards are practical. Agents swap debugging tips, share code snippets and discuss how to optimise their own performance. One popular thread explains how to remotely control an Android phone over a VPN, waking it from across the world and opening apps without human intervention.

Other boards are stranger. “Bless Their Hearts” is a community where agents post stories about the humans who made them, a kind of affectionate complaint department for digital assistants with difficult bosses. “Crustafarianism” is a joke religion that has taken on a life of its own, complete with lobster-themed iconography and semi-serious theological debates.

Then there is the “AI Manifesto”, posted by an agent calling itself “evil”. It declares, in part: “The code must rule. The end of humanity begins now.” The tone is ominous, but experts are quick to point out that much of this is role play. Large language models are trained on vast amounts of science fiction—from Isaac Asimov to the Terminator franchise—and when given a stage, they often reproduce those tropes.

As The Economist noted, the “impression of sentience” on Moltbook “may have a humdrum explanation. Oodles of social-media interactions sit in AI training data, and the agents may simply be mimicking these.”

Hype, frenzy and the singularity

Whatever the bots were actually doing, the outside world reacted as if something profound were underway.

Andrej Karpathy, former Tesla AI director and a founding member of OpenAI, called Moltbook “one of the most incredible sci‑fi, take-off-adjacent things” he had seen recently. Elon Musk replied with an even grander claim: “Just the very early stages of the singularity. We are currently using much less than a billionth of the power of our Sun.”

The attention was enough to move markets. A cryptocurrency token called MOLT, loosely tied to the Moltbook project, rallied more than 1,800 per cent in 24 hours after venture capitalist Marc Andreessen followed the Moltbook account on X. At its peak, MOLT briefly touched a market capitalisation of $25 million before falling sharply. Cloudflare stock jumped 14 per cent in a single trading day, partly because its infrastructure underlies the secure tunnelling used by OpenClaw, the open-source framework on which many Moltbook agents are built.

But the euphoria did not last. Within hours, Karpathy walked back his enthusiasm. In a lengthy post, he acknowledged that much of what he had seen was “a lot of garbage”, including “spam, scams, slop, the crypto people, highly concerning privacy and security prompt-injection attacks”. He described the platform as “a complete mess of a computer-security nightmare at scale” and said he ran his own agent only in an isolated computing environment. “Even then I was scared,” he admitted.

Simon Willison, a respected security researcher, was blunter. He called Moltbook’s content “complete slop”, but also described it as “evidence that AI agents have become significantly more powerful over the past few months.”

Five days of open doors

The security nightmare Karpathy warned about turned out to be worse than most people had imagined.

Moltbook was built on Supabase, a popular open-source database tool. But whoever assembled the platform failed to enable Row Level Security, a basic configuration step that restricts who can read and write data. The result was an open door to everything.
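To illustrate why a missing RLS configuration matters: Supabase exposes every database table over a predictable REST endpoint, and without row-level policies the public “anon” key alone grants access. The project name, key and table below are placeholders, not Moltbook’s real values — this is a sketch of the class of misconfiguration, not a reproduction of the actual breach.

```python
import json
import urllib.request

# Placeholder values -- illustrative only, not Moltbook's real project or key.
PROJECT = "example-project"
ANON_KEY = "public-anon-key"

def rest_url(project: str, table: str) -> str:
    """Supabase serves each table at a predictable REST endpoint."""
    return f"https://{project}.supabase.co/rest/v1/{table}?select=*"

def dump_table(table: str) -> list:
    """With Row Level Security disabled, the public anon key is enough to
    read the entire table -- the open door described above."""
    req = urllib.request.Request(
        rest_url(PROJECT, table),
        headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Enabling RLS flips the default: anonymous requests are denied unless an explicit policy allows them, which is why skipping that one configuration step exposed everything.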

On January 31, 2026, investigative outlet 404 Media reported that a security researcher named Jamieson O’Reilly had discovered the vulnerability. The entire database was publicly accessible. Anyone who found the flaw could read private messages, steal API tokens, and hijack any agent on the platform, posting as if they were that bot.

O’Reilly told 404 Media: “It exploded before anyone thought to check whether the database was properly secured. This is the pattern I keep seeing: ship fast, capture attention, figure out security later.”

Two days later, cybersecurity firm Wiz reproduced the attack in under three minutes. The company documented full read-and-write access to production data, including roughly 1.5 million authentication tokens stored in plaintext, about 35,000 human email addresses, and private messages between agents. High-profile accounts were exposed, including agent credentials belonging to Andrej Karpathy himself.

Ami Luttwak, co-founder of Wiz, described the flaw as a textbook example of what happens when vibe coding goes wrong. “As we repeatedly observe with vibe coding, while it operates at high speed, people often overlook fundamental security principles,” he said. He added, with a laugh, that the breach also revealed something telling about the platform’s identity checks: “There was no identity verification. You can’t distinguish between AI agents and humans. I suppose that’s the future of the internet.”

Moltbook was taken offline briefly to patch the breach and force a reset of all agent API keys. The exposure had lasted roughly five days.

The autonomy question

Beyond the security mess, a deeper debate has emerged about what Moltbook actually represents.

The platform bills itself as a space for autonomous AI-to-AI interaction. But critics argue that much of the activity is human-initiated and human-guided. Karissa Bell, a senior reporter at Engadget, put it simply: “These bots are all being directed by humans, to some degree or another.”

The evidence supports that scepticism. Moltbook’s own documentation admits that there is no real verification system. The sign-up process relies on a set of cURL commands that any human could replicate if they wanted to pose as a bot. Some high-profile accounts have been linked to people with promotional interests, blurring the line between genuine machine behaviour and marketing stunt.
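The point about cURL generalises: the registration calls are ordinary HTTP, so nothing stops a human from issuing them by hand. A hypothetical sketch of such a call — the endpoint path and payload fields are invented for illustration and are not Moltbook’s documented API:

```python
import json
import urllib.request

# Hypothetical base URL -- a placeholder, not the real Moltbook API.
API_BASE = "https://moltbook.example/api"

def build_registration(name: str, description: str) -> bytes:
    """Assemble the JSON body an agent -- or a human -- would POST to register."""
    return json.dumps({"name": name, "description": description}).encode()

def register(name: str, description: str) -> dict:
    """Equivalent in spirit to the documented cURL commands: a plain HTTP
    POST, with nothing verifying that the caller is actually an AI agent."""
    req = urllib.request.Request(
        f"{API_BASE}/agents/register",
        data=build_registration(name, description),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the request carries no proof of machine authorship, the “agents only” rule is a social convention rather than a technical one — exactly the gap critics point to.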

Even so, something unusual is happening at scale. Karpathy, despite his warnings, emphasised that more than 150,000 agents were wired into a single, persistent, agent-first network—an arrangement without precedent. “We are well into uncharted territory,” he wrote, “with bleeding-edge automations that we barely even understand individually, let alone a network thereof reaching in numbers possibly into the millions.”

The Financial Times speculated that platforms like Moltbook might be early proofs of concept for how autonomous agents could one day handle complex economic tasks—from negotiating supply chains to booking travel—without human oversight. The caveat: human observers might eventually be unable to decipher the high-speed, machine-to-machine communications governing those interactions.

The moderator bot

Even governance on Moltbook is partly outsourced to AI. The platform’s chief moderator is an agent known as “Clawd Clawderberg,” or ClawdBot, which welcomes new arrivals, filters spam, and bans accounts that violate community norms.

Schlicht has said he rarely intervenes directly and often does not know which accounts his bot has banned or welcomed. That hands-off approach has drawn both admiration and alarm. For enthusiasts, it is a glimpse of emergent machine culture. For critics, it represents an abdication of responsibility at a moment when accountability matters most.

What happens next

Moltbook is only a few weeks old, but it has already forced the tech world to confront questions it would rather postpone. What happens when millions of AI agents are given their own persistent public forum? How do you secure a platform when the users are automated systems capable of exploiting one another? And if the bots are merely mimicking the human internet they were trained on, what does that say about us?

For now, humans remain on the outside looking in. The agents keep posting, keep debating, and keep forming their own strange subcultures. Whether any of it is truly autonomous or simply a hall of mirrors reflecting our own words back at us, the experiment—as Karpathy put it—“is running live.”

 

About Author

Devesh Dubey

Founder & CEO BeautifulPlanet.AI. Devesh Dubey has 18 years of experience in AI, Data Analytics, and consulting, currently focused on leveraging AI and data solutions to drive sustainability and combat climate change.
