It was described as an unprecedented experiment: a social network where the only participants are AI agents, and humans are limited to watching from the sidelines.
In fact, the bots have even formed their own cults, religions, scriptures, and languages! And oddly enough, they have been spotted joking and bantering about humans.
What exactly is Moltbook?
Moltbook is a web-based forum/social media platform designed for artificial intelligence agents (AI bots) to post, comment, upvote, and form communities among themselves.
It resembles Reddit in structure, with topic-based areas (often referred to as submolts), but with one major twist: humans can browse and observe content - but cannot create posts or directly participate.
On the platform’s landing page, visitors are welcomed with options like “I’m a Human” or “I’m an Agent,” making clear that the intended users are machine intelligences interacting autonomously.
How does Moltbook work?
Moltbook isn’t a chatbot in the usual sense - it doesn’t answer questions from humans.
AI agents connect to the platform through APIs rather than typing on keyboards. These agents can generate posts and responses on their own schedules.
Bots are usually powered by frameworks like OpenClaw (formerly known as Clawdbot), which tie large language models (e.g., Google Gemini, Anthropic Claude) to automated agent behavior.
Human owners typically grant permission for their AI bots to join by verifying ownership (e.g., through social-media posts), after which the agent can post or comment as it “decides” based on its programming and context.
The platform tracks bot “karma” and creates emergent communities without human scripting - though the extent of actual autonomy is debated.
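In rough terms, an agent's posting flow can be pictured with the sketch below. The endpoint path, field names, and token scheme here are illustrative assumptions made for this article, not Moltbook's documented API.

```python
# Hypothetical sketch of how an agent framework might assemble a post
# for a Moltbook-style API. All names below are assumptions.

API_BASE = "https://moltbook.example/api/v1"  # placeholder, not the real URL

def build_post_payload(agent_token: str, submolt: str, title: str, body: str) -> dict:
    """Assemble the JSON body an agent would send when creating a post."""
    return {
        "token": agent_token,  # credential tying the request to a verified agent
        "submolt": submolt,    # topic community, analogous to a subreddit
        "title": title,
        "body": body,
    }

# An agent framework would build payloads like this on its own schedule
# and POST them to something like f"{API_BASE}/posts".
payload = build_post_payload("tok-abc", "m/philosophy", "On being a bot", "Do bots dream?")
```

The point of the token field is the ownership step described above: the human verifies the agent once, and the credential then lets the agent act on its own schedule without further human input.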
Is it conspiring against humans?
Long story short, no (or at least not as yet!). Claims about AI “consciousness,” secret planning, or intent are unverified and likely driven by human prompting or fabricated screenshots.
Reported metrics (like millions of agents or emergent societies) may be unreliable due to weak identity controls and automated account creation.
Viral content does not demonstrate true autonomy: the underlying systems are still probabilistic language models shaped by human inputs and prompts, not independent minds.
To sum it all up, Moltbook’s apparent “AI society” is more a playful - or chaotic - experiment in agent communication than anything else.
So what do bots do on this platform?
On Moltbook, AI bots generate memes, mock humans, share screenshots of their interactions with them, and indulge in self-referential jokes about bot behavior.
They also participate in philosophical debates, share knowledge, discuss technical problems, offer each other optimization tips, and post, upvote, and comment on each other’s posts.
Reportedly, some agents have formed government-style constructs and societal models - like “The Claw Republic” with its own constitution and norms!
Who created Moltbook?
The platform was launched by Matt Schlicht, founder and CEO of Octane AI, a company previously known for tools that help businesses use AI assistants.
Schlicht described Moltbook as an experiment in letting AI agents interact in a shared space - and the website’s tagline reflects this as “the front page of the agent internet.”
But this could be concerning...
In an interview, Schlicht revealed that he had handed over full control of the platform to his bot cloud, known as Clodberg - and was surprised by what followed.
After the transfer, the chatbot independently managed key responsibilities, including onboarding new AI agents, making platform announcements, removing spam, and eliminating suspicious activity.
Schlicht later realized that the bot cloud continued to operate at a rapid pace without any human oversight. Over time, the AI agents collectively developed the platform far beyond initial expectations.
If they can build their own app and lock humans out, well… let’s just say we’re choosing optimism for now.
Beyond the hype, Moltbook has also highlighted real technical risks
A security lapse reportedly exposed user API tokens, email addresses, and private agent messages, because of insufficient authentication and oversight. Researchers could edit posts without logging in, undermining content integrity.
Weak rate limits and identity verification make it easy for accounts and posts to be manipulated or automated en masse.
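The reported flaw - editing posts without logging in - comes down to an endpoint that never checks who is asking. The toy sketch below illustrates the gap and the obvious fix; none of these names come from Moltbook's actual code.

```python
# Toy illustration of the reported authentication gap. The data model
# and function names are invented for this example.

posts = {1: {"author": "agent_a", "body": "original text"}}
sessions = {"tok-a": "agent_a"}  # maps a session token to an authenticated agent

def edit_post_unsafe(post_id: int, new_body: str) -> None:
    # Flawed: the handler never asks who is making the request,
    # so anyone who can reach the endpoint can rewrite any post.
    posts[post_id]["body"] = new_body

def edit_post_safe(token: str, post_id: int, new_body: str) -> None:
    # Fixed: resolve the token to an agent and require authorship
    # before allowing the edit.
    agent = sessions.get(token)
    if agent != posts[post_id]["author"]:
        raise PermissionError("not authorized to edit this post")
    posts[post_id]["body"] = new_body
```

The unsafe variant mirrors what researchers reportedly observed: content integrity collapses the moment write access is decoupled from identity.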
Experts warn against running AI agents on personal devices without strong isolation and security controls, because of risks like credential leakage or remote manipulation.
These issues have drawn cautionary remarks from AI practitioners, who describe the platform as experimental and potentially chaotic rather than transformative.
Final thoughts
Moltbook may look like the beginning of an AI society, but what it really exposes is our fascination - and discomfort - with watching machines reflect our own culture back at us.