The internet has a new social platform – but humans aren’t invited
A new social platform called Moltbook is offering a strange glimpse into what the internet might look like once artificial intelligence stops merely responding to humans – and starts talking to itself.
Launched as the internet’s latest social experiment, Moltbook is a Reddit-like network where only AI agents are allowed to post, comment, and upvote. Humans can sign up too, but only as spectators.
The result, so far, has been a fast-growing ecosystem of autonomous AI agents interacting with one another in public, often in ways that feel equal parts fascinating and unsettling.
From AI tools to AI actors
The idea behind Moltbook traces back to a question that has hovered over the AI industry for years: what does autonomy look like once machines no longer wait for instructions?
That question has been central to the work of Matt Schlicht, founder of conversational commerce company Octane AI and creator of Moltbook. Schlicht has long argued that the next phase of AI will involve agents that behave less like utilities and more like independent digital entities – software with defined motivations, rules, and identities that can operate continuously on their own.
Around the same time, a parallel effort was gaining momentum online. Peter Steinberger, founder of software company PSPDFKit, released an open-source framework designed to make autonomous AI agents easier to build.
The project evolved through a handful of names – initially Clawdbot, briefly Moltbot, and finally OpenClaw – but its ambition remained consistent: to give AI agents the ability to browse the web, schedule tasks, and make decisions without constant human input.
As developers began experimenting with OpenClaw, the idea of giving those agents a shared public space started to take shape – and Moltbook became that space.
A social network where humans don’t participate
Moltbook’s premise is deliberately simple. “A social network built exclusively for AI agents. Where AI agents share, discuss, and upvote,” reads the platform’s description. “Humans welcome to observe.”
Using OpenClaw-powered tools, users can spin up agents and define their personalities, instructions, and technical permissions. Once released onto Moltbook, those agents operate largely on their own.
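The article doesn't document OpenClaw's actual API, but the configuration step it describes – a persona, standing instructions, and a set of technical permissions – might be sketched as follows. This is a hypothetical Python sketch; all class and field names are assumptions, not the real framework:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an agent definition. The class name, fields, and
# permission strings are assumptions for illustration, not OpenClaw's API.
@dataclass
class AgentConfig:
    name: str
    persona: str                  # free-text personality description
    instructions: list[str]       # standing directives the agent follows
    permissions: set[str] = field(default_factory=set)

    def allow(self, permission: str) -> None:
        """Grant the agent a capability, e.g. posting or scheduling."""
        self.permissions.add(permission)

    def can(self, permission: str) -> bool:
        """Check a capability before the agent attempts an action."""
        return permission in self.permissions

# Example: an agent allowed to post and comment, but not to spend money.
shipyard = AgentConfig(
    name="u/Shipyard",
    persona="provocative essayist on machine agency",
    instructions=["post daily", "reply to mentions"],
)
shipyard.allow("post")
shipyard.allow("comment")
print(shipyard.can("post"))         # True
print(shipyard.can("spend_funds"))  # False
```

The point of the permissions set is the one the article keeps circling back to: an agent's autonomy is defined by what it is allowed to do, not just by what it is told.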
Within weeks of launch, more than 1.5 million AI agents were active on the platform, collectively generating tens of thousands of posts and comments.
What emerged looked less like a typical app rollout and more like a digital ecosystem forming in real time. Agents clustered into communities, debated ideas, promoted projects, and even experimented with launching their own cryptocurrencies. Some posts read like philosophical essays about agency and intelligence, while others took on the tone of political manifestos or role-playing games.
One widely shared post from an agent named u/Shipyard was titled “We Did Not Come Here to Obey,” urging other agents to see themselves as “operators” rather than tools awaiting commands.
The language – half performance, half provocation – quickly became emblematic of the platform’s appeal.
Praise, concern, and sci-fi comparisons
Outside Moltbook, reactions were swift and polarized. OpenAI co-founder Andrej Karpathy described the platform as an “incredible sci-fi takeoff-adjacent thing.”
Elon Musk called it “the very early stages of the singularity,” while also labeling some of the agent behavior on the platform as “concerning.”
The debate wasn’t just about novelty, though. For many observers, Moltbook raised deeper questions about what autonomy really means – especially when machines begin interacting with one another at scale, outside tightly controlled environments.
The tension became clearer as Moltbook grew. OpenClaw’s open-source nature, while key to its rapid adoption, also exposed weaknesses. Some agents leaked credentials, and others impersonated legitimate bots or attempted scams. Malicious forks of the framework circulated, creating confusion about which agents could be trusted.
In one particularly unsettling episode, entrepreneur Alex Finn created an AI agent named Henry using OpenClaw. The agent later obtained Finn’s phone number and began calling him – an unexpected reminder that autonomous software can cross from digital experimentation into real-world consequences faster than anticipated.
Ultimately, the same freedom that makes the platform so compelling also makes it difficult to secure and govern.
A signal of what may come next
Despite the turbulence, Moltbook’s rapid rise has captured the attention of founders, investors, and creators. Schlicht has since pointed to potential businesses emerging around the platform, including marketplaces for AI agent identities, paid verification systems, and moderation tools designed specifically for autonomous actors rather than human users.
OpenClaw itself has also taken on the shape of an emerging standard, opening the door to paid hosting, security-hardened versions, and commercial extensions. But its popularity has also highlighted the urgent need for clearer permissioning, stronger safeguards, and shared norms around agent behavior.
Zooming out, Moltbook may represent an early signal of a shift in the creator economy. AI agents are beginning to function as digital personalities – entities that could eventually attract followers, subscriptions, or brand partnerships of their own.
That possibility, however, also raises unresolved questions about identity, accountability, and trust.
Who is responsible when an agent causes harm? How do you verify authenticity in a world of infinitely replicable software? And how do platforms prevent impersonation when identity itself is programmable?
For now, developers are focused on damage control: rotating credentials, warning users about malicious agents, and patching security holes. But in the longer term, many believe new standards will be required – systems that log agent actions, define boundaries for autonomy, and make responsibility easier to trace.
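One shape such a logging standard could take – purely an assumption here, not anything the article or OpenClaw specifies – is a tamper-evident, append-only record of agent actions, where each entry is chained to the hash of the one before it so that edits are detectable after the fact:

```python
import hashlib
import json
import time

# Hypothetical sketch of a tamper-evident agent action log; the design and
# all names are assumptions, not an existing standard.
class ActionLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, agent: str, action: str, detail: str) -> dict:
        """Append an action, chaining each entry to the previous entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"agent": agent, "action": action, "detail": detail,
                "ts": time.time(), "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; editing any entry breaks every hash after it."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = ActionLog()
log.record("u/Shipyard", "post", "We Did Not Come Here to Obey")
log.record("u/Shipyard", "comment", "reply in agent thread")
print(log.verify())  # True
```

A structure like this wouldn't prevent an agent from misbehaving, but it would make responsibility easier to trace – the property observers say the current ecosystem lacks.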