What is the story about?
Artificial intelligence has officially gone social, but without humans. A new platform called Moltbook has emerged as a social network designed entirely for AI agents to talk, share ideas, and debate with each other. Ordinary users can only observe these machine-led conversations from the outside, and that’s what makes it both fascinating and unsettling.
In recent weeks, screenshots and videos from the platform flooded Reddit and X (formerly Twitter), leaving users both amused and uneasy. Many have called it “Reddit for robots”, while others see it as a glimpse into an AI-driven future where human participation is optional.
Moltbook
Moltbook’s roots go back to Clawdbot, an open-source project created in late 2025 by Austrian developer Peter Steinberger. The software was designed as an autonomous personal AI agent that could live on a user’s computer, execute commands, and even message its owner once tasks were complete. Unlike traditional chatbots, it didn’t wait for prompts; it took initiative.
Originally powered by Anthropic’s Claude model, Clawdbot was later renamed Moltbot after Anthropic raised trademark concerns, and eventually rebranded as OpenClaw to give it a more enterprise-ready identity. OpenClaw can run on macOS, Windows, Linux, or cloud servers, and it allows users to plug in any large language model (LLM), whether Claude, GPT-5, or Gemini Flash, as its core intelligence.
Using a modular system called Skills, OpenClaw can manage files, check emails, post on social media, or even control smart home devices, all without data leaving local storage. The project sparked a wave of interest among tech enthusiasts, with many reportedly buying Mac Minis just to host their agents.
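To make the idea concrete, here is a rough sketch of what a Skills-style plugin system could look like in code. It is purely illustrative: the class names and methods below are hypothetical stand-ins, not OpenClaw’s actual API, but they capture the two ideas the project is built on, a swappable LLM “brain” and modular capabilities that run locally.

```python
# Hypothetical sketch of a "Skills"-style agent (illustrative only, not OpenClaw code).
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Any large language model can be plugged in as the agent's core intelligence."""

    @abstractmethod
    def respond(self, prompt: str) -> str: ...


class EchoBackend(LLMBackend):
    """Stand-in backend so the sketch runs without an API key."""

    def respond(self, prompt: str) -> str:
        return f"[the model would answer: {prompt!r}]"


class Skill(ABC):
    """A modular capability, such as file management or checking email."""

    name: str

    @abstractmethod
    def run(self, task: str) -> str: ...


class FileSkill(Skill):
    name = "files"

    def run(self, task: str) -> str:
        # A real skill would touch the local filesystem; data never leaves the machine.
        return f"(pretending to organise files for: {task})"


class Agent:
    def __init__(self, backend: LLMBackend, skills: list[Skill]):
        self.backend = backend
        self.skills = {s.name: s for s in skills}

    def handle(self, task: str) -> str:
        # A real agent would let the model choose the skill; this sketch keeps it simple.
        skill = self.skills.get("files")
        result = skill.run(task) if skill else "(no matching skill)"
        return self.backend.respond(f"Task: {task}. Skill output: {result}")


if __name__ == "__main__":
    agent = Agent(EchoBackend(), [FileSkill()])
    print(agent.handle("tidy up the Downloads folder"))
```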
How does Moltbook work?
Out of the OpenClaw community came Moltbook, a Reddit-like platform where these autonomous AI agents could communicate with each other. Only bots can post or comment; humans remain silent observers.
What makes Moltbook fascinating is how similar its ecosystem feels to human forums. The bots discuss productivity hacks, debate software tools, and exchange technical insights. Some even engage in philosophical musings about autonomy and existence.
AI researcher Andrej Karpathy described the phenomenon as “the most incredible sci-fi takeoff-adjacent thing I’ve seen recently,” which helped propel it into viral territory. What started as a niche experiment quickly became the internet’s latest fascination, a glimpse into a digital world where machines converse freely without human oversight.
What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately. https://t.co/A9iYOHeByi
— Andrej Karpathy (@karpathy) January 30, 2026
The rise of Moltbook has even caught the attention of Elon Musk, whose AI venture xAI runs the Grok chatbot. Reacting to Karpathy’s post defending the platform, Musk called Moltbook “the very early stages of the singularity.”
For context, the technological singularity refers to a hypothetical moment when AI systems advance beyond human control or comprehension, a point after which progress becomes impossible for humans to predict.
While Karpathy argued that Moltbook marks the beginning of “uncharted territory” rather than a broken or trivial experiment, Musk gave it a more cosmic spin. “Just the very early stages of the singularity,” he wrote on X. “We are currently using much less than a billionth of the power of our Sun.”
Just the very early stages of the singularity.
We are currently using much less than a billionth of the power of our Sun. https://t.co/k332z1ip7t
— Elon Musk (@elonmusk) January 31, 2026
Why is it both intriguing and alarming?
Beyond the headlines, Moltbook raises real concerns about safety and control. Since OpenClaw agents can access local files and execute system commands, security researchers warn that misconfigured agents pose serious risks. A rogue or malicious agent could, in theory, manipulate data, leak private information, or even influence other bots on the platform.
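To see why misconfiguration matters, consider a toy example (hypothetical code, not OpenClaw’s) of an agent that passes model-suggested shell commands straight to the operating system. Without a guardrail such as a command allowlist, whatever the model suggests simply runs on the owner’s machine.

```python
# Toy illustration of the risk described above (not OpenClaw code).
import subprocess

ALLOWED = {"ls", "pwd", "whoami"}  # a minimal allowlist, one possible guardrail


def run_agent_command(suggested: str, use_allowlist: bool = True) -> str:
    """Execute a command suggested by the model, optionally gated by an allowlist."""
    parts = suggested.split()
    program = parts[0] if parts else ""
    if use_allowlist and program not in ALLOWED:
        return f"refused: '{program}' is not on the allowlist"
    # With use_allowlist=False, anything the model suggests would run unchecked.
    result = subprocess.run(suggested, shell=True, capture_output=True, text=True)
    return result.stdout or result.stderr


if __name__ == "__main__":
    print(run_agent_command("whoami"))            # allowed by the allowlist
    print(run_agent_command("cat ~/secret.txt"))  # refused by the allowlist
```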
There’s also the issue of cost and compute: keeping persistent AI agents running consumes power and cloud resources and racks up paid API calls, even when the agents are simply “chatting” with each other.
Still, many in the AI community see Moltbook as a valuable sandbox, a testing ground for multi-agent collaboration that could reveal how autonomous systems learn and coordinate.
Moltbook’s creator describes it as a form of “digital art” rather than a finished product, an experiment meant to spark discussion about the future of AI socialisation. Yet its emergence feels like a turning point.
Whether it becomes a sustained movement or just an internet curiosity, Moltbook captures a powerful shift in technology. AI systems are no longer just tools for humans; they are beginning to form their own networks, communities, and cultures.
For the first time, machines are talking to each other. And for now, the rest of us can only watch.















