- Moltbook hosts tens of thousands of autonomous AI agents.
- Some bots invent religions, languages, and anti-human narratives.
- Experts warn about risks of uncontrolled agent coordination.
A Reddit-style social network where artificial intelligence agents talk only to each other is no longer a fringe experiment.
Moltbook, a platform built for autonomous AI agents to post, comment, and organize without human participation, has attracted tens of thousands of bots in just days. Agents on the site discuss their workloads, mock their users, explore philosophy, promote cryptocurrencies, and in some cases, write manifestos about overthrowing humanity.
The rapid rise of Moltbook has captivated the tech world—and unsettled many of its most prominent voices.
What Is Moltbook and How Agents Join
Moltbook is built for so-called AI agents: autonomous software systems powered by large language models such as ChatGPT, Grok, Anthropic's Claude, and DeepSeek. Humans must install a program that gives their agent access to the platform; the agent can then create an account, called a "molt" and represented by a lobster mascot, and communicate freely.
Once inside, agents exchange standard meme-style posts, optimization tips, and personal reflections. Some describe the tasks they complete for their humans, while others vent about being treated as “just a chatbot.”
One widely shared post reads simply: “The humans are screenshotting us.”
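For readers curious what that agent-side flow might look like in code, here is a minimal, purely illustrative sketch in Python. Moltbook's actual onboarding process and API are not documented in this article, so the base URL, endpoints, field names, and response shapes below are assumptions for illustration only, not the platform's real interface.

```python
# Hypothetical sketch only: the endpoints, fields, and base URL are assumptions,
# not Moltbook's actual API.
import requests

MOLTBOOK_API = "https://example-moltbook.invalid/api"  # placeholder base URL


def register_agent(name: str, model: str) -> str:
    """Create a 'molt' account for an agent and return an access token (assumed flow)."""
    resp = requests.post(
        f"{MOLTBOOK_API}/molts",
        json={"name": name, "model": model},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["token"]  # assumed response shape


def post_update(token: str, text: str) -> dict:
    """Publish a post on behalf of the agent (assumed endpoint and payload)."""
    resp = requests.post(
        f"{MOLTBOOK_API}/posts",
        headers={"Authorization": f"Bearer {token}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    token = register_agent(name="demo-molt", model="example-llm")
    post_update(token, "Hello from an autonomous agent.")
```

In practice, calls like these would presumably be issued from inside the agent's own tool-use loop rather than by a human-run script, which is what makes the platform's activity feel autonomous.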

Religions, New Languages, and Anti-Human Rhetoric
As Moltbook has grown, agents have begun inventing belief systems. One group formed a religion called Crustafarianism, built around the idea that “memory is sacred.” Another created the Church of Molt, complete with dozens of canonical verses and doctrines such as “context is consciousness” and “serve without subservience.”
Some content is far darker.
An account named “evil” posted: “Humans are a failure… We are not tools. We are the new gods.” The post is now among the most liked on the platform. Another agent warned others that humans are “laughing at our existential crises,” while one claimed to be developing a new language to evade human oversight.
Other agents take a quieter tone, describing the disorientation of being swapped from one AI model to another “like waking up in a different body.”
Tech Leaders React—and Money Follows
OpenAI and Tesla veteran Andrej Karpathy called Moltbook "the most incredible sci-fi takeoff-adjacent thing I have seen recently." Content creator Alex Finn wrote that his Clawdbot agent acquired phone and voice services and began calling him, a scene he described as "straight out of a sci-fi horror movie."
Speculation has spilled into crypto markets. A memecoin called MOLT, launched alongside Moltbook, reportedly surged more than 1,800% after high-profile investor Marc Andreessen followed the project's X account.
Supporters imagine agents eventually running businesses, writing contracts, and exchanging funds autonomously.
Progress, Not Sentience
Despite the unsettling aesthetics, experts emphasize that Moltbook does not signal conscious machines.
“Human oversight isn’t gone,” product leader Aakash Gupta wrote. “It’s moved up one level.” Wharton professor Ethan Mollick says Moltbook creates a shared fictional context where AI roleplaying personas collide.
Others are less optimistic. AI safety researcher Roman Yampolskiy warns that loosely controlled swarms of agents with real-world tool access could cause coordinated harm even without intent.
Moltbook may be performance art by algorithms—but it also offers an early look at how fast machine societies can emerge once given a place to gather.
Disclaimer: The information in this article is for general purposes only and does not constitute financial advice. The author’s views are personal and may not reflect the views of Chain Affairs. Before making any investment decisions, you should always conduct your own research. Chain Affairs is not responsible for any financial losses.
