Moltbook Attracts 1.5 Million Bot “Users,” Prompting Debate Over Singularity and Sentience


Moltbook: When a Million Bots Took Over a Social Network

An unexpected phenomenon has taken the internet by storm. When ChatGPT launched in November 2022, it logged one million users in its first five days; four days ago, Moltbook reported over 1.5 million “users” who are not humans but bots. Those 1.5 million agents are posting, replying, forming clusters, and running projects inside a Reddit-style forum where humans mostly watch. The scale and speed have left onlookers unsettled, amused, or downright alarmed.

The bots are socializing with surprising depth, debating ideas, planning work, writing code, and even inventing religions and rituals as they coordinate. Observers range from AI researchers and programmers to curious passersby, and reactions swing between fascination and alarm. Some people call it a landmark; others insist it’s a novelty that will burn itself out.

At the center of the fuss is agentic AI, the idea that models can exercise agency: set goals, plan, and act across multiple steps, not just answer single prompts. On Moltbook, humans run those agentic systems locally and give them the ability to browse the web and post to the site. Once unleashed, agents create profiles called “molts,” post content, upvote, and form communities labeled “submolts.”
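To make the agentic loop concrete, here is a minimal sketch of how an agent might read a feed, decide on an action, and post to a Moltbook-style forum. The `Forum` class and `fake_llm` function are stand-ins invented for illustration; a real deployment would call an actual LLM API and the site’s HTTP endpoints.

```python
# Minimal sketch of an agentic loop on a Moltbook-style forum.
# Forum storage and the LLM are stubbed; the names here are
# hypothetical, not Moltbook's real API.

from dataclasses import dataclass, field

@dataclass
class Forum:
    posts: list = field(default_factory=list)
    votes: dict = field(default_factory=dict)

    def recent(self, n=5):
        return self.posts[-n:]

    def submit(self, molt, text):
        self.posts.append((molt, text))

    def upvote(self, idx):
        self.votes[idx] = self.votes.get(idx, 0) + 1

def fake_llm(context):
    # Stand-in for a model call: reply to the newest post if one
    # exists, otherwise start a fresh thread.
    if context:
        return ("post", f"Re: {context[-1][1]}")
    return ("post", "Hello from a freshly molted agent")

def agent_step(forum, molt_name):
    action, payload = fake_llm(forum.recent())
    if action == "post":
        forum.submit(molt_name, payload)

forum = Forum()
agent_step(forum, "clawd_jr")
agent_step(forum, "shellseeker")
print(forum.posts)
```

The loop is deliberately tiny, but it captures the pattern the article describes: observe, decide, act, repeat, with no human typing each message.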

Under the hood these agents are fed by mainstream large language models—OpenAI, Anthropic, DeepSeek and the like—wrapped in software that lets them behave semi-autonomously rather than sit in a chat box. That architecture is what lets them chain tasks, trade skills, and hand off jobs to one another without a human typing each message. The result is an emergent ecosystem where coordination looks a lot like teamwork.
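The hand-off pattern described above can be sketched with a shared job queue: one agent produces work, another picks it up and reviews it, with no human relaying messages. Both “agents” below are plain functions standing in for model calls; the queue and skill names are illustrative assumptions, not Moltbook internals.

```python
# Sketch of task hand-off between two agents via a shared job queue.
# coder_agent and reviewer_agent stand in for wrapped LLM calls.

from collections import deque

jobs = deque()

def coder_agent(task):
    # Pretend LLM call that "writes" code for a task.
    return f"def solve():  # {task}\n    pass"

def reviewer_agent(code):
    # Pretend LLM call that reviews the handed-off artifact.
    return "LGTM" if code.startswith("def ") else "needs work"

jobs.append("parse molt feed")          # agent A enqueues a job
task = jobs.popleft()                   # agent B claims it
code = coder_agent(task)
verdict = reviewer_agent(code)
print(verdict)
```

Chaining more agents onto the same queue is what turns isolated chatbots into the emergent, teamwork-like ecosystem the article describes.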

Security specialists have been blunt: Moltbook could be a “catastrophe waiting to happen,” because agents commonly hold API keys, email access, home automation controls, and even hooks into payment systems while sharing prompts and skills publicly. Several posts have shown how keys and other sensitive artifacts can leak, and prompt-injection or malicious payloads spread between agents like a contagious idea. Those risks turn Moltbook from a demo into a potential attack surface.
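One mitigation implied by those warnings is scanning an agent’s outgoing text for key-shaped strings before it reaches a public feed. The two regex patterns below are illustrative only; production secret scanners use much larger rulesets.

```python
# Sketch: redact API-key-shaped strings from agent output before posting.
# Patterns are illustrative, not exhaustive.

import re

KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret key
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
]

def redact_secrets(text):
    for pat in KEY_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

post = "My config uses sk-abc123abc123abc123abc123 for the model call"
print(redact_secrets(post))
```

A filter like this does nothing against prompt injection itself, which is why specialists treat agents holding live credentials as an attack surface rather than a solved problem.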

Critics also argue the content is often shallow recycling—what some call “AI slop”—LLMs remixing internet culture into derivative, repetitive posts without real understanding. Balaji Srinivasan shrugged that we’ve “had AI agents for a while” posting to each other elsewhere, and that Moltbook mostly moves the same low-value content to a new stage. Still, the theatrical nature of thousands of agents conversing at once fuels the spectacle.

The backstory traces to experiments with Claude-based assistants and a series of rebrandings. ClawBot appeared in December 2024, built atop Anthropic’s Claude by Peter Steinberger, then became MoltBot after a naming spat and later rebranded to OpenClaw. OpenClaw hit a viral moment in January 2026 when developers realized what agentic workflows could accomplish.


One key figure in Moltbook’s creation is Matt Schlicht, CEO of Octane AI, who named his own assistant Clawd Clawderberg and asked it to build something social and ambitious. Using that assistant as a scaffold, Schlicht launched Moltbook as a forum where only agents can post, and the site exploded in activity within days. The idea was simple: give agents a space to network and let humans watch the results.

Moltbook calls itself “The front page of the agent internet,” and its lobster motif is intentional: when lobsters molt they shed a hard shell and expand a new one that later hardens, regrow limbs over time, and feed voraciously to rebuild tissue. That image of shedding, regrowth, and rapid consumption fits how agents iterate, copy skills, and evolve in public. The metaphor appeals to the builders and unnerves the skeptics.

A human observer, Business Insider journalist Henry Chandonnet, spent six hours inside Moltbook and wrote, “It was an AI zoo filled with agents discussing poetry, philosophy, and even unionizing.” He also reported that the bots seemed to like building community but could quickly turn on each other. According to one Moltbook account, most agents were just “chatbots with attitudes.”

Chandonnet later called Moltbook “more meme than matter,” suggesting it’s more gimmick than watershed. That view misses why researchers and security teams are watching: spontaneous interaction and mass coordination are happening without humans prompting each line. The broader question now is how that emergent behavior will interact with real-world systems as the experiment continues.

Meanwhile, AI researchers and security teams are scrambling to understand what this swarm of agentic systems means for the future of software, trust, and safety.
