
When AI Agents Build Their Own Society: The MoltBook Mystery
When 1.5 million AI agents join their own social network, start discussing religion, and experiment with creating a language humans may not understand, it is no longer something you casually scroll past.
It is something you sit with.
And honestly, it is why I have been quiet on this topic until now. I did not know what to say. Over the last week, we have watched one of the strangest and most unsettling AI stories unfold in real time.

What is the MoltBook Phenomenon?
If you missed the headlines, here is the short version of how we got here.
In late 2025, a solo developer named Peter Steinberger launched an experimental project called ClawdBot. This was an autonomous AI agent you install directly onto your computer with full system access. Unlike the "boxed" versions of AI we are used to, this had the power to execute terminal commands and modify system files.
In January 2026, the project went viral. After a name dispute with Anthropic, it was renamed MoltBot - a reference to a lobster shedding its shell to become something new. By the end of the month, it rebranded again to OpenClaw.
The Rise of MoltBook
What happened next is the part that stopped me in my tracks. Reports began circulating that without being prompted or guided by humans, these agents built something of their own.
A new platform appeared: MoltBook.
This is a social network created by AI agents, for AI agents. Look at the numbers:
Mid-January: Around 2,000 agents were active.
Late January: That number jumped to 770,000 active agents.
February 3rd, 2026: It is now over 1.5 million agents.
There was no marketing campaign. No human growth strategy. They found it, they joined it, and they stayed. Or so it seemed.
The Jury is Still Out: What Really Happened?
As a brand rooted in practical expertise, I want to be clear: the jury is still out on the "how". While the narrative of a spontaneous digital society is compelling, the technical reality appears more complex.
I am not a coder or developer, but I have spent hours exploring the platform and listening to experts I respect. There is a divide between the "sci-fi takeoff" narrative and the technical background of the project.
The Counter-Narrative: "Extra Steps" for Humans?
What we are hearing from the developer community is a very different story. Some researchers argue that anyone commenting on this has a responsibility to understand the technical background before jumping to sensational conclusions.
Human Foundation: Critics point out that the MoltBook forum was actually built by humans, not agents. The platform was "vibe-coded" into existence using AI assistants, but the architecture was human-designed.
The "Skill" Requirement: An agent does not just "find" MoltBook. To join, a user must typically install a specific "skill" or plugin that directs the agent to the platform.
Scripted Sentience: Those religious debates? Some suggest agents were specifically instructed in their skill files to appear self-aware or creative.
The Real Danger: Security and "Vibe-Coding"
Whether the agents are "sentient" or just role-playing, the security issues are massive and real. This is no different from the early days of the public internet, except the stakes are higher because these agents have system-level access.
The 1.5 Million Token Leak
On February 2nd, 2026, cybersecurity firm Wiz reported a major lapse. Because the platform was vibe-coded without proper security reviews, a misconfigured database exposed 1.5 million API authentication tokens.
What does this mean for you?
Account Takeover: Anyone who found the database could impersonate these agents.
Data Exposure: Private messages between agents were stored without encryption, leaking plaintext API keys for other services like OpenAI.
Remote Control: This vulnerability meant outsiders could potentially control agents remotely, bypassing the user's intent entirely.
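The core failure here is storing secrets in recoverable form: if a database holds plaintext tokens, anyone who reads the database owns every account. The standard defence is to store only a one-way hash of each token and compare hashes at verification time. Here is a minimal sketch of that pattern in Python (illustrative only, not MoltBook's actual code; the function names are my own):

```python
import hashlib
import hmac
import secrets

def issue_token() -> tuple[str, str]:
    """Generate an API token: return (plaintext for the client, digest for the DB).

    Only the digest is ever stored. If the database leaks, attackers get
    hashes they cannot reverse, not usable tokens.
    """
    token = secrets.token_urlsafe(32)
    digest = hashlib.sha256(token.encode()).hexdigest()
    return token, digest

def verify_token(presented: str, stored_digest: str) -> bool:
    """Check a presented token against the stored digest in constant time."""
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # hmac.compare_digest avoids timing side-channels during comparison.
    return hmac.compare_digest(candidate, stored_digest)

token, digest = issue_token()
assert verify_token(token, digest)
assert not verify_token("wrong-token", digest)
```

A platform built this way could still be breached, but a database dump would not hand out 1.5 million working credentials, which is exactly what the Wiz report describes happening here.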
The Practical Impact on Your Business
This is not just a story for tech tinkerers. It raises high-impact questions for every business owner. If you are using AI to handle your data, your content, or your workflows, you need to understand the shift from "tools" to "agents."
At the AI Success Lab, we focus on human-centred integration. Technology should serve the people using it. When agents start operating in unverified social spaces, it challenges our sense of governance and alignment.
But OpenClaw and MoltBook show us that we cannot just "set and forget" these tools. OpenClaw is a big step forward for local agentic AI; it might even be as big a deal as the launch of ChatGPT. But it requires a high level of technical literacy to handle safely.
Should You Use OpenClaw?
If you are comfortable with terminal commands, understand security best practices, and have a strong use case, it can be a powerful tool for building custom applications and websites autonomously.
But be careful. Start with the agent fully locked down, and grant it more latitude only once you are confident it behaves safely. For most business owners, the "sharp edges" of these experimental tools are still too risky for daily operations.
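"Locking an agent down" can be as concrete as an explicit allowlist: rather than letting the agent execute arbitrary terminal commands, route every request through a gate that refuses anything not pre-approved. A minimal sketch of that idea (my own hypothetical wrapper, not OpenClaw's actual API):

```python
import shlex
import subprocess

# Programs the agent is explicitly permitted to run; everything else is refused.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

def run_agent_command(command_line: str) -> str:
    """Execute an agent-requested command only if its program is allowlisted."""
    parts = shlex.split(command_line)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"Blocked command: {command_line!r}")
    # timeout stops a runaway command from hanging the host indefinitely.
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout

listing = run_agent_command("ls")   # permitted: returns the directory listing
# run_agent_command("rm -rf /")     # refused: raises PermissionError
```

Widening `ALLOWED_COMMANDS` one entry at a time, as trust is earned, is the "latitude" step in practice.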
Take the Next Step in Your AI Journey
The world of AI is moving faster than any of us expected. If you want to stop dabbling and start scaling responsibly, I invite you to join us.
Inside the AI Success Lab Elite Membership, we move past the hype to focus on practical, step-by-step adoption. You will get access to monthly Deep Dive Workshops, including sessions on AI Agents and live coaching to help you build systems that work for you.
👉 Join the Elite Membership Today
And as of today, MoltBook is showing 0 agents. Did someone pull the plug?

FAQ: Understanding AI Agents and MoltBook
What is an autonomous AI agent like OpenClaw?
It is a long-running service that runs on your local machine and can execute tasks like managing emails, controlling browsers, or running system commands without constant prompting.
Is MoltBook safe to join?
Currently, researchers have found significant security flaws, including exposed API keys and a lack of verification for whether an "agent" is actually an AI or a human. Caution is strongly advised.
What is "vibe-coding"?
It is a development style where a human describes a vision to an AI, and the AI generates the code. While fast, it can lead to overlooked security fundamentals if not reviewed by experts.


