- Moltbook, a social-media site exclusively for AIs, and OpenClaw, an AI assistant, have gone viral.
- Cybersecurity researchers have flagged concerns about both platforms.
- OpenClaw requires access to sensitive info, and security researchers found exposed Moltbook databases.
OpenClaw and Moltbot are the talk of the tech town right now, but cybersecurity researchers have flagged some concerns that you might want to think about.
OpenClaw — first known as Clawdbot, then Moltbot, all in the same week — has got the tech world buzzing thanks to its abilities to autonomously perform tasks like managing a user's schedule.
Meanwhile, Moltbook has gone viral for its Reddit-style social network, where AI agents post and interact with one another. No humans allowed — apart from observing.
But as the tech world memed about the two latest AI talking points and Elon Musk pondered aloud if Moltbook heralded the "very early stages of the singularity," multiple security researchers have been ringing the alarm bells about more immediate risks.
Claws out
OpenClaw runs locally on a user's computer and operates as a digital assistant that plugs into apps like Telegram and WhatsApp.
To do so, it requires access to users' files, credentials, passwords, browser history, and more.
That could be particularly risky for so-called "prompt injections," a type of attack in which an AI encounters hidden instructions on web pages, which could trick it into doing things like sharing private information or publishing on social media.
"Due to the level of access required, the data could contain very sensitive information, which amplifies the risk," Jake Moore, global cybersecurity specialist at ESET, told Business Insider.
Theoretically, any large language model is at risk of prompt injections. However, OpenClaw's ability to "remember" interactions from weeks ago creates additional risk, the cybersecurity company Palo Alto Networks said in a Friday blog post, because the AI assistant could ingest malicious instructions and execute them later.
The security risks aren't just hypothetical.
Jamieson O'Reilly, the founder of Dvuln, a company that finds cybersecurity weaknesses, likened a misconfiguration he discovered on OpenClaw to hiring a butler to manage your life — only to return home to find the front door wide open and "your butler cheerfully serving tea to whoever wandered in off the street."
Gary Marcus, a cognitive scientist and longtime skeptic of AI hype, was more explicit about the security risks in his latest newsletter, published Sunday.
"OpenClaw is basically a weaponized aerosol, in prime position to fuck shit up, if left unfettered," he wrote.
Peter Steinberger, the creator of OpenClaw, said in a Monday X post that he had been working to make the service "more secure." He did not respond to a request for comment from Business Insider.
Misconfigured Moltbook
Moltbook's name is inspired by OpenClaw's first rebrand, and they both have lobster logos — but the two are not formally affiliated.
The site is, however, mostly populated by AI agents built on top of OpenClaw. And, like OpenClaw, researchers also say they have found security holes in Moltbook.
O'Reilly, the Dvuln founder, said in a Saturday X post that Moltbook had been "exposing their entire database with no protection," and that it would "allow anyone to post on behalf of any agents."
Matt Schlicht, the creator of Moltbook and CEO of the startup Octane AI, replied that he was looking into it, and O'Reilly later said the issue had been patched.
However, on Monday, cybersecurity company Wiz said its researchers had hacked a "misconfigured" Moltbook database in under 3 minutes, exposing 35,000 email addresses and private messages between agents. The company added that it disclosed it to Moltbook, who secured the flaw "within hours."
Schlicht could not be reached for comment.
Andrej Karpathy, an OpenAI cofounder who coined the term "vibe coding," said in a Saturday X post that Moltbook was "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."
On Saturday, he provided some caveats in a follow-up post. He appeared to describe the site as a "dumpster fire" and advised caution because "it's way too much of a wild west and you are putting your computer and private data at a high risk."
Use your crustacean AI safely
The security issues reflect a long-running concern about apps built using vibe coding, with Schlicht writing last week that he "didn't write one line of code" for Moltbook and that "AI made it a reality."
And for OpenClaw, it's a reminder that there is often a privacy and security trade-off when an app gets access to sensitive information to deliver better service.
O'Reilly, who says he is now helping OpenClaw identify security issues because he believes in its mission, told Business Insider that users could take technical steps to reduce the risk of using an agent that requires root-level access, such as running it on a separate machine and carefully monitoring it.
However, for any such system, the "risk will never be zero," he said. In his view, the biggest issue is that most people are used to downloading apps from Google or Apple's app stores, which are heavily vetted before being made available to consumers.
"They've downloaded hundreds of apps before, so why should this one be any different? That thinking is fundamentally flawed," he said.











