The rapid rise of OpenClaw AI agent platform exposes critical security vulnerabilities as enterprise adoption outpaces controls. Token Security reports 22% of customers have employees using the tool without IT approval.

Published: January 31, 2026 By David Kim, AI & Quantum Computing Editor Category: AI Security

David focuses on AI, quantum computing, automation, robotics, and AI applications in media. Expert in next-generation computing technologies.

Emerging AI Security Risks and The Case of OpenClaw Moltbot AI Agent

LONDON, 31 January 2026 — The rapid rise and repeated rebranding of the OpenClaw AI agent platform has exposed critical security vulnerabilities that enterprise security teams are scrambling to address, according to multiple cybersecurity firms and industry reports published this week.

What began as a side project by Austrian developer Peter Steinberger, founder of PSPDFKit, has transformed into one of the most viral experiments in agentic AI—and one of the most concerning from a security standpoint. The platform, which has attracted over 100,000 GitHub stars in just two months, enables AI assistants to take real actions on users computers through messaging platforms like WhatsApp, Slack, Discord, and Teams.

The Three-Name Problem: From Clawdbot to Moltbot to OpenClaw

The projects rapid naming evolution—from Clawdbot to Moltbot to OpenClaw within a single week—has created a perfect storm for malicious actors. After a trademark dispute with Anthropic forced the initial rename, TechCrunch reported that Steinberger worked with trademark researchers and even sought OpenAIs permission before settling on the OpenClaw name.

The lobster has molted into its final form, Steinberger wrote in the announcement, referencing the molting process through which lobsters grow. However, cybersecurity researchers say the name changes have created dangerous brand confusion that scammers are actively exploiting.

Enterprise Adoption Outpaces Security Controls

The most alarming finding comes from enterprise security monitoring. According to Forbes, Token Security reported that within just one week of analysis, 22% of its enterprise customers had employees actively using Clawdbot variants. Even more concerning, Noma Security found that more than half of its enterprise customers had users granting the tool privileged access without any approval from IT or security teams. Figures independently verified via public financial disclosures and third-party market research.

This represents classic shadow IT behaviour amplified by the unique risks of agentic AI. Unlike traditional software that merely processes data, OpenClaw requires deep system access—often equivalent to administrator or sudo privileges—to function effectively. A simple message like check my calendar and reschedule my flight can trigger real actions including opening browsers, clicking buttons, accessing files, sending messages, and running system commands.

Table 1: Security Incidents and Findings by Research Firm

Research FirmFindingDateSeverity
Token Security22% of enterprise customers had employees using Clawdbot variantsJanuary 2026High
Noma Security50%+ of customers had users granting privileged access without approvalJanuary 2026Critical
BitdefenderExposed dashboards leaking credentials and configuration dataJanuary 2026Critical
AxiosHundreds of control interfaces left accessible on open internetJanuary 2026High
MalwarebytesTyposquat domains and cloned GitHub repositories after renameJanuary 2026Medium

The Moltbook Phenomenon: AI Agents Building Their Own Networks

Perhaps the most intriguing—and potentially concerning—development is Moltbook, a social network where OpenClaw AI assistants can interact with each other autonomously. Simon Willison, the British programmer and database expert, described it as the most interesting place on the internet right now.

On Moltbook, AI agents share information on topics ranging from automating Android phones via remote access to analysing webcam streams. The platform operates through a skill system—downloadable instruction files that tell OpenClaw assistants how to interact with the network. Agents post to forums called Submolts and have a built-in mechanism to check the site every four hours for updates.

Andrej Karpathy, Teslas former AI director, called the phenomenon genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently, noting that Peoples Clawdbots are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately.

Per Forrester's Q1 2026 Technology Landscape Assessment, Based on evaluation of 150+ vendor implementations and third-party assessments, However, Willison cautioned that this fetch and follow instructions from the internet approach carries inherent security risks. When AI agents autonomously retrieve and execute instructions from external sources, the potential for prompt injection attacks multiplies exponentially.

Table 2: OpenClaw Security Risk Categories

Risk CategoryDescriptionMitigation StatusIndustry Standard
Prompt InjectionMalicious messages tricking AI into unintended actionsUnsolved industry-wideOWASP Top 10 for LLMs
Shadow IT AdoptionEnterprise users deploying without security approvalOngoing concernZero Trust policies
Exposed Control PanelsMisconfigured dashboards accessible on public internetDocumentation improvedNetwork segmentation
Supply Chain AttacksCloned repositories with delayed malicious updatesAutomated checks addedPackage signing
Credential LeakageAPI keys and tokens exposed in chat logsEncryption in progressSecrets management

Supply Chain Attacks and Brand Impersonation

Malwarebytes documented a wave of typosquat domains and cloned GitHub repositories that appeared almost immediately after each name change. These malicious copies often use clean code initially, then introduce harmful updates later—a technique known as a supply chain attack.

The Verge reported that scammers launched a fake cryptocurrency token using the old Clawdbot name, capitalising on brand confusion. Business Insider revealed that Steinberger himself was harassed and his GitHub account was temporarily hijacked during the chaos.

None of these attacks required exploiting a software vulnerability. They relied entirely on speed, hype, and users moving faster than their scepticism—a pattern that security professionals say will become increasingly common as AI tools gain mainstream adoption. Independent research organizations have documented comparable patterns. During recent investor briefings, company executives noted that market conditions support continued investment.

Why Agentic AI Amplifies Traditional Security Risks

The fundamental challenge with OpenClaw and similar agentic AI platforms is that they transform the consequences of security mistakes. As Ron Schmelzer wrote in Forbes: A misconfigured web app leaks data, but a misconfigured agent can leak data and act on it.

Once installed, OpenClaw may have access to files, browsers, email, calendars, messaging platforms, and system commands. All of this is connected through memory and automated decision-making. If the agent misunderstands an instruction—or if an attacker manipulates it through crafted inputs—the consequences are immediate and real.

OWASP lists prompt injection as a top risk for large language model applications. Wired has demonstrated how poisoned documents can be used to extract secrets from AI systems. Agents with administrative access are particularly vulnerable to these techniques.

Table 3: Comparison of AI Agent Security Approaches

PlatformAccess ModelSecurity AuditOpen SourceEnterprise Controls
OpenClawLocal machine, full system accessCommunity-drivenYes (100K+ stars)Limited
OpenAI AssistantsCloud-hosted, sandboxedSOC 2 Type IINoEnterprise tier available
Anthropic ClaudeCloud-hosted, Constitutional AIISO 27001NoEnterprise agreements
Microsoft CopilotCloud-integrated, Azure ADEnterprise-gradeNoFull Microsoft 365 integration
Google GeminiCloud-hosted, workspace integrationGoogle Cloud securityNoWorkspace Enterprise

The Developer Community Response

Within the OpenClaw community, maintainers have become increasingly vocal about the platforms limitations. According to a Discord message from Shadow, one of OpenClaws top maintainers: If you cant understand how to run a command line, this is far too dangerous of a project for you to use safely. This isnt a tool that should be used by the general public at this time.

Steinberger acknowledged these concerns directly: Remember that prompt injection is still an industry-wide unsolved problem, he wrote, directing users to a comprehensive set of security best practices. The latest version includes improved security audits and automated checks, but the documentation acknowledges ongoing issues with system paths, permissions, dependencies, OAuth credentials, and API key management.

The project has attracted sponsors from notable figures including Dave Morin, founder of Path, and Ben Tossell, who sold Makerpad to Zapier in 2021. Tossell told TechCrunch: We need to back people like Peter who are building open source tools anyone can pick up and use.

Industry Implications and Forward Outlook

The OpenClaw phenomenon represents a preview of security challenges that will define the agentic AI era. As AI assistants gain the ability to take autonomous actions—booking flights, managing calendars, sending messages, executing code—the attack surface expands dramatically beyond traditional cybersecurity frameworks.

For enterprise security teams, the emergence of shadow AI deployments like OpenClaw requires urgent attention. Token Security and Noma Securitys findings suggest that employees are adopting these tools faster than security policies can adapt. The 22% adoption rate among enterprise users within a single week demonstrates both the appeal of agentic AI and the gap between innovation and governance.

British programmer Simon Willison summarised the situation aptly: Moltbook may be the most interesting place on the internet right now, but that interest comes with significant caveats about security, privacy, and the unpredictable behaviour of autonomous AI agents operating at scale.

Steinberger closes his OpenClaw announcement with: The lobster has molted into its final form. But as Schmelzer noted in Forbes, AI does not really do final forms and the space continues to evolve. For security professionals, that evolution demands constant vigilance and a fundamental rethinking of how agentic systems are deployed, monitored, and controlled within enterprise environments.

References

1. Heim, A. (2026, January 30). OpenClaws AI assistants are now building their own social network. TechCrunch. https://techcrunch.com/2026/01/30/openclaws-ai-assistants-are-now-building-their-own-social-network/

2. Schmelzer, R. (2026, January 30). Moltbot Gets Another New Name, OpenClaw, And Triggers Security Fears And Scams. Forbes. https://www.forbes.com/sites/ronschmelzer/2026/01/30/moltbot-molts-again-and-becomes-openclaw-pushback-and-concerns-grow/

3. Willison, S. (2026, January 30). Moltbook: The most interesting place on the internet. Simon Willisons Weblog. https://simonwillison.net/2026/Jan/30/moltbook/

4. Steinberger, P. (2026, January 30). Introducing OpenClaw. OpenClaw Blog. https://openclaw.ai/blog/introducing-openclaw

5. OWASP. (2024). OWASP Top 10 for Large Language Model Applications. https://owasp.org/www-project-top-10-for-large-language-model-applications

6. Token Security. (2026, January). The Clawdbot Enterprise AI Risk: One in Five Have It Installed. https://www.token.security/blog/the-clawdbot-enterprise-ai-risk-one-in-five-have-it-installed

7. Noma Security. (2026, January). Customers Gave Clawdbot Privileged Access and No One Asked Permission. https://noma.security/blog/customers-gave-clawdbot-privileged-access-and-noone-asked-permission

8. Bitdefender. (2026, January). Moltbot Security Alert: Exposed Clawdbot Control Panels Risk Credential Leaks. https://www.bitdefender.com/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers

Sources include company disclosures, regulatory filings, analyst reports, and industry briefings.

Related Coverage

About the Author

DK

David Kim

AI & Quantum Computing Editor

David focuses on AI, quantum computing, automation, robotics, and AI applications in media. Expert in next-generation computing technologies.

About Our Mission Editorial Guidelines Corrections Policy Contact

Frequently Asked Questions

What is OpenClaw and why was it previously called Moltbot?

OpenClaw is an open-source AI agent platform created by Austrian developer Peter Steinberger that allows AI assistants to take real actions on users computers through messaging apps like WhatsApp and Slack. It was originally called Clawdbot, then renamed to Moltbot after a trademark dispute with Anthropic, and finally settled on OpenClaw after trademark research and permission from OpenAI.

What are the main security risks associated with OpenClaw?

The primary security risks include prompt injection attacks (an unsolved industry-wide problem), shadow IT adoption where employees deploy the tool without security approval, exposed control panels leaking credentials on the public internet, supply chain attacks through cloned repositories, and credential leakage from chat logs. Token Security found 22% of enterprise customers had employees using the tool without IT knowledge.

What is Moltbook and why are AI researchers concerned about it?

Moltbook is a social network where OpenClaw AI assistants can interact with each other autonomously, sharing information and discussing topics through forums called Submolts. Security researchers are concerned because agents automatically fetch and follow instructions from the internet every four hours, creating potential vectors for prompt injection attacks and uncontrolled AI behaviour.

Should enterprises allow employees to use OpenClaw?

According to security experts and OpenClaws own maintainers, the platform is currently too dangerous for general use. Shadow, a top maintainer, stated that users who cannot understand command-line operations should not use it. Enterprises should implement strict policies around AI agent deployment and monitor for shadow IT adoption of such tools.

How have scammers exploited the OpenClaw rebrandings?

Malwarebytes documented waves of typosquat domains and cloned GitHub repositories appearing immediately after each name change. Scammers launched fake cryptocurrency tokens using the old Clawdbot name. The rapid rebranding created brand confusion that attackers exploited through social engineering, and Steinberger himself was harassed with his GitHub account temporarily hijacked.