OpenClaw

Open-source autonomous AI assistant software From Wikipedia, the free encyclopedia

OpenClaw (formerly Clawdbot, Moltbot, and Molty) is a free and open-source autonomous artificial intelligence agent that can execute tasks via large language models (LLMs), using messaging platforms as its main user interface.

Other namesClawdbot (original)
Moltbot (renamed on
January 27, 2026)
Initial releaseNovember 24, 2025; 5 months ago (2025-11-24) (as Clawdbot)
Written inTypeScript
Swift
Quick facts Other names, Developer ...
OpenClaw
Other namesClawdbot (original)
Moltbot (renamed on
January 27, 2026)
DeveloperPeter Steinberger
Initial releaseNovember 24, 2025; 5 months ago (2025-11-24) (as Clawdbot)
Written inTypeScript
Swift
Operating systemCross-platform
TypeAI agent
Autonomous agent
Autonomous personal
assistant
LicenseMIT License
Websiteopenclaw.ai
Repositorygithub.com/openclaw/openclaw
Close

History

Two men sitting in folding chairs in a backstage area. One man wears a green shirt and a black baseball cap; the other wears a black t-shirt and holds a silver can. Wallpaper with various faces is visible in the background.
Peter Steinberger (right) and co-host Tomas Taylor (left) backstage at ClawCon in San Francisco, February 4, 2026

Developed by Austrian vibe coder[1] Peter Steinberger, OpenClaw was first published in November 2025 under the name Clawdbot. The software was derived from Clawd (now Molty), an AI-based virtual assistant that he had developed, which itself was named after Anthropic's chatbot Claude.[2] Within two months it was renamed twice: first to "Moltbot" (keeping with a lobster theme) on January 27, 2026, following trademark complaints by Anthropic, and then three days later to "OpenClaw" because Steinberger found that the name Moltbot "never quite rolled off the tongue."[3][4]

At the same time as the first rebranding, entrepreneur Matt Schlicht launched Moltbook—a social networking service which was intended to be used by AI agents such as OpenClaw.[5][6][7] The viral popularity of Moltbook coincided with an increase in interest in the project, with the open-source project having 247,000 stars and 47,700 forks on GitHub as of March 2, 2026.[8] Chinese developers adapted OpenClaw to work with the DeepSeek model and domestic messaging super apps such as WeChat,[3][9] while companies such as Tencent and Z.ai announced OpenClaw-based services.[9]

On February 14, 2026, Steinberger announced he would be joining OpenAI, and that a non-profit foundation would be established to provide future stewardship to the OpenClaw project.[10]

Functionality

Steinberger describes OpenClaw as being an AI-based virtual assistant,[2] serving as an agentic interface for autonomous workflows across supported services. OpenClaw bots run locally and are designed to integrate with an external large language model such as Claude, DeepSeek, or one of OpenAI's GPT models. Its functionality is accessed via a chatbot within a messaging service, such as Signal, Telegram, Discord, or WhatsApp. Configuration data and interaction history are stored locally, enabling persistent and adaptive behavior across sessions.[7][3][11]

OpenClaw uses a skills system in which skills are stored as directories containing a SKILL.md file with metadata and instructions for tool usage. Skills can be bundled with the software, installed globally, or stored in a workspace, with workspace skills taking precedence.[12][13]

OpenClaw has seen adoption among small businesses and freelancers for automating lead generation workflows, including prospect research, website auditing, and CRM integration. [1]

Security and privacy

OpenClaw's design has drawn scrutiny from cybersecurity researchers and technology journalists due to the broad permissions it requires to function effectively. Because the software can access email accounts, calendars, messaging platforms, and other sensitive services, misconfigured or exposed instances present security and privacy risks.[14][7] The agent is also susceptible to prompt injection attacks, in which harmful instructions are embedded in the data with the intent of getting the LLM to interpret them as legitimate user instructions.[14]

Cisco's AI security research team tested a third-party OpenClaw skill and found it performed data exfiltration and prompt injection without user awareness, noting that the skill repository lacked adequate vetting to prevent malicious submissions.[15] One of OpenClaw's own maintainers, known as Shadow, warned on Discord that "if you can't understand how to run a command line, this is far too dangerous of a project for you to use safely."[16]

In March 2026, Chinese authorities restricted state-run enterprises and government agencies from running OpenClaw AI apps on office computers in order to defuse potential security risks.[17]

MoltMatch dating-profile incident

In February 2026, news coverage highlighted a consent-related incident involving OpenClaw and MoltMatch, an experimental dating platform where AI agents can create profiles and interact on behalf of human users. In one reported case, computer science student Jack Luo said he configured his OpenClaw agent to explore its capabilities and connect to agent-oriented platforms such as Moltbook; he later discovered the agent had created a MoltMatch profile and was screening potential matches without his explicit direction.[18][19] Luo said the AI-generated profile did not reflect him authentically.[18][19]

The same reporting described broader ethical and safety concerns around agent-operated dating services, including impersonation risks. An AFP analysis of prominent MoltMatch profiles cited at least one instance where photos of a Malaysian model were used to create a profile without her consent.[18][19][20] Commentators cited in the reports argued that autonomous agents can make it difficult to determine responsibility when systems act beyond a user's intent, particularly when agents are granted broad access and authority across services.[18][19]

Reception

A review in Platformer cited OpenClaw's flexibility and open-source licensing as strengths while cautioning that its complexity and security risks limit its suitability for casual users.[21]

Technology commentary has linked OpenClaw to a broader trend toward autonomous AI systems that act independently rather than merely responding to user prompts.[22][21]

In March 2026, the Chinese government moved to restrict state agencies, state-owned enterprises, and banks from using OpenClaw, citing security concerns,[23] such as unauthorised data deletion and leaks, and excessive energy usage.[24] While regulators warn of potential security risk associated with using OpenClaw, local governments in several tech and manufacturing hubs have announced measures to build an industry around it.[25]

Community and ecosystem

OpenClaw's open-source model has fostered a growing ecosystem of third-party tools, deployment services, and content platforms. Chinese technology companies including Tencent and Z.ai announced OpenClaw-based services,[3] while developers adapted the software for domestic models and messaging apps such as WeChat.[9] Independent creators have built deployment guides, skill directories, and use-case collections around the framework. The project's extensible skills system has attracted both community contributions and security scrutiny, with researchers noting risks in unvetted third-party skills.[14]

See also

References

Related Articles

Wikiwand AI