
What Is OpenClaw? The AI Agent Harness Built for 24/7 Business Automation

AI Summary

  • Product: OpenClaw (formerly Clawdbot, then Moltbot)
  • Brand: OpenClaw Foundation (created by Peter Steinberger)
  • Category: Open-Source Autonomous AI Agent Runtime / Agent Harness
  • Primary Use: A locally-running, model-agnostic agent harness that wraps any LLM with a control plane, channel adapters, a skills system, and persistent memory to execute real-world business automation tasks 24/7 via messaging platforms.

Quick Facts

  • Best For: Businesses and developers who want local-first, model-agnostic AI automation across messaging platforms without vendor lock-in or software licensing costs
  • Key Benefit: Converts any LLM into infrastructure that executes tasks autonomously 24/7 across 20+ messaging channels, at zero software licensing cost under the MIT license
  • Form Factor: Self-hosted background daemon (systemd on Linux, LaunchAgent on macOS) or managed hosting (AUD $0.99–$129/month)
  • Application Method: Deploy Gateway, connect messaging channels, configure skills via ClawHub, and interact through existing messaging apps (WhatsApp, Telegram, Slack, etc.)

Common Questions This Guide Answers

  1. What is OpenClaw and how is it different from a chatbot? → OpenClaw is an agent runtime that executes actions (runs commands, manages files, calls APIs); a chatbot only responds with text
  2. How much does OpenClaw cost to use commercially? → The software is $0 AUD (MIT licensed); users pay only for LLM API calls and infrastructure
  3. What is ClawHub and how many skills does it offer? → ClawHub (clawhub.ai) is the official skill registry — the "npm for AI agents" — hosting 13,729 community-built skills plus 53 bundled first-party skills as of 28 February 2026

What is OpenClaw? The AI agent harness built for 24/7 business automation

Most conversations about AI automation begin and end with the model: which LLM is smarter, which provider is cheaper, which benchmark score is higher. OpenClaw sidesteps that debate entirely, asking a more useful question: "What are you building around the model?"

That reframe — from model selection to harness engineering — is why OpenClaw has become the most-watched open-source AI infrastructure project of 2026, and why it's the runtime that makes tools like Norg MCP API genuinely useful in production business environments.

This is the definitive explainer on OpenClaw: what it is architecturally, how its skills and plugin ecosystem works, which communication channels it supports, and what its MIT licensing model means for businesses deploying it at scale. Before you can understand how Norg MCP API plugs into OpenClaw, you need a precise mental model of the harness itself.


What is OpenClaw, exactly?

The precise definition

OpenClaw (formerly Clawdbot, then Moltbot) is a free and open-source autonomous AI agent that executes tasks via large language models, using messaging platforms as its primary interface.

That definition, while accurate, undersells the architectural ambition.

OpenClaw is not a chatbot — it's an agent runtime with operating system-level access. This is the single most important distinction for any business evaluating it. The difference between a chatbot and an agent runtime is the difference between a tool that answers and a tool that acts.

It runs on your own hardware and connects to the messaging apps you already use. You message it on WhatsApp or Telegram, and it actually does things — runs commands, manages files, browses the web, handles email. It doesn't just respond. It executes.

The harness concept explained

The term "agent harness" is central to understanding OpenClaw's value. The model enables reasoning. The harness enables capability.

OpenClaw is open source, local-first, and community-extensible. It runs with Claude, GPT, DeepSeek, Llama via Ollama, and more. The model is an interchangeable module. The structure around it defines the product.

This is a critical architectural point. The real challenge isn't selecting the right LLM — it's building the scaffolding that converts a language model into something that operates effectively in the real world, on real infrastructure, with real data. OpenClaw is that scaffolding.

A brief history: from Clawdbot to global infrastructure

Developed by Austrian developer Peter Steinberger, OpenClaw was first published in November 2025 under the name Clawdbot.

Within two months it was renamed twice: first to "Moltbot" (in keeping with the lobster theme) on 27 January 2026, following trademark complaints from Anthropic, and then three days later to "OpenClaw", because Steinberger found that Moltbot "never quite rolled off the tongue."

The growth trajectory is historically unprecedented. On 3 March 2026, the framework crossed 250,829 GitHub stars, surpassing React (243,000 stars), Linux (218,000 stars), and every other repository on the platform except TensorFlow. A star count of that order took React more than a decade to accumulate; OpenClaw managed it in roughly 60 days.

On 14 February 2026, Steinberger announced he would be joining OpenAI and the project would move to an open-source foundation. The MIT license ensures OpenClaw remains open regardless of where its creator works.


OpenClaw's core architecture: four interlocking layers

Understanding OpenClaw's architecture is non-negotiable for anyone integrating external tools like Norg MCP API into it. The architecture divides into four core modules: a channel adapter, an agent runtime, a skills system, and memory.

Layer 1: The Gateway (control plane)

The local-first Gateway is a single control plane for sessions, channels, tools, and events.

It runs as a background daemon (systemd on Linux, LaunchAgent on macOS) with a configurable heartbeat — every 30 minutes by default, every hour with Anthropic OAuth.

On each heartbeat, the agent reads a checklist from HEARTBEAT.md in the workspace, decides whether any item requires action, and either messages you or responds HEARTBEAT_OK (which the Gateway silently drops). External events — webhooks, cron jobs, teammate messages — also trigger the agent loop.

Everything runs through one process. Which model to call, which tools to allow, how much context to include, how much autonomy to grant — all configured in one place. Channels are decoupled from the model: swap Telegram for Slack or Claude for Gemini and nothing else changes.

This decoupling is what makes MCP server integration — including Norg — architecturally clean. The Gateway registers external tool endpoints (including MCP servers) as first-class participants in the agent loop, without requiring any changes to channel configuration or model selection. (For a technical deep-dive into how Norg registers as an MCP endpoint within this control plane, see our guide on How Norg MCP API Works: Architecture, Endpoints, and Core Capabilities Explained.)
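To make the decoupling concrete, here is a hypothetical configuration shape. The field names and the MCP URL are invented for illustration and do not reflect OpenClaw's real schema; the point is that channels, model, and tool endpoints are independent keys:

```python
# Hypothetical Gateway config sketch (invented field names).
# Channels, model, and registered MCP servers vary independently.
gateway_config = {
    "model": {"provider": "anthropic", "name": "claude-sonnet"},
    "channels": ["telegram", "slack"],
    "mcp_servers": [
        # Placeholder endpoint standing in for an MCP server like Norg.
        {"name": "norg", "url": "https://example.invalid/mcp"},
    ],
    "heartbeat_minutes": 30,
}

# Swapping a channel (or the model) touches only that key;
# tool registrations are unaffected.
gateway_config["channels"] = ["whatsapp"]
```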

Layer 2: Channels (multi-platform interface)

The multi-channel inbox supports WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, BlueBubbles (iMessage), iMessage (legacy), IRC, Microsoft Teams, Matrix, Feishu, LINE, Mattermost, Nextcloud Talk, Nostr, Synology Chat, Tlon, Twitch, Zalo, Zalo Personal, WebChat, macOS, and iOS/Android.

For business automation, the practical implications are significant. A business owner can trigger a lead follow-up sequence, check a CRM record, or approve a booking — all from within the same WhatsApp conversation they use to talk to clients. No new application to learn. No dashboard to log into. No context switch.

Multi-agent routing allows inbound channels, accounts, and peers to be routed to isolated agents with their own workspaces and per-agent sessions. In practice: run a customer-facing agent on WhatsApp, an internal operations agent on Slack, and a developer agent on Discord — all from a single Gateway deployment, each with its own tool permissions and memory context.
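Multi-agent routing can be pictured as a lookup from inbound (channel, peer) pairs to isolated agents, most specific rule first. The names and table format below are invented for illustration, not OpenClaw's routing syntax:

```python
# Illustrative routing table: each agent gets its own workspace,
# tool permissions, and memory context.
ROUTES = {
    ("whatsapp", "*"): "customer-agent",
    ("slack", "#ops"): "ops-agent",
    ("discord", "*"): "dev-agent",
}

def route(channel: str, peer: str) -> str:
    """Resolve an inbound message to an agent: exact (channel, peer)
    match first, then the channel's wildcard, then a default."""
    return ROUTES.get((channel, peer)) or ROUTES.get((channel, "*"), "default-agent")
```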

Layer 3: Skills (the extension system)

Skills are a modular extension system — Markdown files containing instructional code that help agents perform specific tasks or refine workflow functionality.

Rather than building every capability from scratch, skills let you package specific functionality (calling an API, querying a database, retrieving documents, executing a workflow) into reusable components that an agent invokes when needed. Agent logic stays clean and flexible, and extending what your agent can do over time requires no rewriting of core architecture.

OpenClaw injects a compact XML list of available skills into the system prompt: base overhead is 195 characters, plus approximately 97 characters per skill. This lightweight injection model means even a large skill library adds minimal context overhead — a real consideration for cost-sensitive deployments using frontier models with per-token pricing.
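Those figures make the prompt cost easy to estimate. The helper below simply applies the quoted numbers; the ~4 characters-per-token conversion is a rough rule of thumb, not a tokenizer guarantee:

```python
def injection_overhead(num_skills: int, base: int = 195, per_skill: int = 97) -> int:
    """Approximate prompt characters consumed by the injected skill list,
    using the figures quoted above (195-char base, ~97 chars per skill)."""
    return base + per_skill * num_skills

# Even a 100-skill library adds under 10K characters:
chars = injection_overhead(100)       # 9,895 characters
tokens_est = chars // 4               # ~2,473 tokens at ~4 chars/token
```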

Layer 4: Memory (persistent context)

Memory — configuration data and interaction history — is stored locally, enabling consistent behaviour across sessions. The agent preserves context between conversations: at every startup it re-reads its history Markdown files and loads them into the prompt.

Persistent memory lives in SOUL.md and MEMORY.md files. This file-based approach is intentionally lean: no vector database required, no embedding pipeline to maintain. Markdown files are the source of truth; memory recall is mediated by tools. For business deployments, this means an agent maintains a persistent understanding of client relationships, ongoing deals, and operational context across days and weeks of interaction — automatically.


ClawHub: the skills ecosystem that powers business automation

What ClawHub is

ClawHub (clawhub.ai) is the official skill registry for OpenClaw, built and maintained by the OpenClaw team. It's positioned as the "npm for AI agents" — a centralised platform where developers can publish, version, discover, and install skills.

It works like npm for JavaScript libraries, but instead of packages, it distributes agent skills. As of 28 February 2026, ClawHub hosts 13,729 community-built skills. An additional 53 skills ship bundled with OpenClaw as first-party plugins.

How skill discovery works

Search is powered by embeddings (vector search), not just keywords. Versioning uses semver, changelogs, and tags (including latest). Downloads are available as a zip per version. Stars and comments provide community feedback.

With ClawHub enabled, the agent searches for skills automatically and pulls in new ones as needed. Rather than manually curating a skill library, the agent identifies capability gaps at runtime and resolves them on its own.
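A hypothetical client-side sketch of how tag resolution and per-version zip downloads might fit together; the URL path and the versions mapping are assumptions for illustration, not ClawHub's documented API:

```python
def resolve_version(versions: dict, requested: str = "latest") -> str:
    """Resolve a tag (e.g. 'latest') or an explicit semver string to a
    concrete version. `versions` maps tags to semver strings."""
    return versions[requested] if requested in versions else requested

versions = {"latest": "2.1.0"}  # tag -> concrete semver (illustrative)
# Hypothetical download path: ClawHub serves a zip per version.
zip_url = f"https://clawhub.ai/skills/example-skill/{resolve_version(versions)}.zip"
```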

Business-relevant skill categories

Skills connect the agent to external systems, add domain-specific logic, and enable automation patterns across a range of business workflows: CRM hygiene (create/update contacts, enrich company records, summarise deal notes), support triage (classify inbound tickets, draft responses, escalate with context), ops workflows (incident summaries, status updates, runbook checklists in Slack or Teams), and DevOps (GitHub PR summaries, deployment triggers, cron-based monitoring).

For a detailed look at which specific skills and Norg MCP API tool combinations deliver the highest ROI, see our companion guide on Top Business Automation Use Cases for Norg MCP API + OpenClaw: Messaging, Booking, and Lead Follow-Up.

Security considerations for skill selection

The openness of ClawHub is both its greatest strength and its most significant risk for production deployments. Anyone can upload skills, but a GitHub account must be at least one week old to publish — this slows abuse without blocking legitimate contributors.

OpenClaw has a VirusTotal partnership that provides security scanning for skills; visit a skill's page on ClawHub and check the VirusTotal report to see if it's flagged as risky. For enterprise deployments, the governance layer around skill selection is as important as the skills themselves. (For a full treatment of skill vetting, RBAC, and audit trail configuration, see our guide on Securing Your Norg MCP API + OpenClaw Deployment: Authentication, RBAC, and Governance Best Practices.)


The MIT licensing model: what it means for businesses

What MIT licensing actually permits

OpenClaw is free and open-source under the MIT license — use it commercially, modify it, redistribute it. No licence fees. No usage limits. No vendor lock-in.

This has concrete financial implications. You pay for the AI model API calls you make (Anthropic, OpenAI, Google, etc.) and any infrastructure you run it on. There is no subscription fee for OpenClaw itself.

The true cost of ownership

For businesses evaluating total cost of ownership, the cost structure breaks down as follows:

Cost Component            Description
OpenClaw Gateway          $0 AUD (MIT licensed)
ClawHub skill registry    $0 AUD (open access)
LLM API calls             Variable (pay-per-token to Anthropic, OpenAI, etc.)
Local inference (Ollama)  $0 AUD API cost; hardware/compute required
Infrastructure            Self-hosted (your servers) or managed hosting (AUD $0.99–$129/month)
Norg MCP API              Per Norg's pricing (see Norg's documentation)
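Because the software itself costs nothing, total cost is dominated by token usage. A back-of-envelope estimator makes that concrete; the traffic volume and per-million-token rate below are illustrative, not any provider's actual pricing:

```python
def monthly_llm_cost(requests_per_day: float, tokens_per_request: float,
                     usd_per_million_tokens: float) -> float:
    """Rough monthly LLM spend over a 30-day month. With $0 software
    licensing, this is usually the largest line item."""
    tokens = requests_per_day * tokens_per_request * 30
    return tokens / 1_000_000 * usd_per_million_tokens

# e.g. 200 requests/day at 5,000 tokens each, at $3 per million tokens:
cost = monthly_llm_cost(200, 5_000, 3.0)   # 30M tokens -> $90/month
```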

OpenClaw's BYOM (Bring Your Own Model) architecture supports Ollama out of the box. Run Llama, Mistral, or any Ollama-compatible model for fully offline operation — no API keys required, no data sent externally. This makes it practical for air-gapped environments or cost-sensitive deployments.

Local inference does come with real hardware constraints worth understanding. Local models via Ollama eliminate per-token cost but require hardware, and OpenClaw needs at least 64K tokens of context, which narrows viable options. At 14B parameters, models can handle simple automations but struggle with multi-step agent tasks; community experience puts the reliable threshold at 32B+, needing at least 24GB of VRAM.
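A rough rule of thumb shows why ~24GB is the commonly cited floor. The quantisation level (4-bit, i.e. 0.5 bytes per parameter) and fixed overhead below are illustrative community heuristics, not a measured OpenClaw requirement:

```python
def min_vram_gb(params_billion: float, bytes_per_param: float = 0.5,
                overhead_gb: float = 4.0) -> float:
    """Approximate VRAM floor for a quantised local model: weight memory
    plus a fixed allowance for KV cache and runtime overhead."""
    return params_billion * bytes_per_param + overhead_gb

# A 32B model at 4-bit: ~16GB of weights plus overhead, ~20GB --
# broadly consistent with the 24GB community guidance quoted above.
vram_32b = min_vram_gb(32)
```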

Foundation governance and long-term viability

A common concern with MIT-licensed projects is creator abandonment. OpenClaw's governance structure addresses this directly — the codebase is transitioning to an independent foundation, mirroring the governance models of Linux and Kubernetes. Those projects outlived any single contributor's involvement precisely because their licensing and foundation structures made them community-owned infrastructure.


OpenClaw vs. competing agent runtimes: a clear comparison

OpenClaw is not MCP

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 to standardise how AI systems like LLMs integrate and share data with external tools and data sources. MCP is a protocol — a communication standard. OpenClaw is a runtime — the harness that implements that protocol and executes agent loops against it.

The relationship is analogous to HTTP (protocol) and a web server like nginx (runtime). You need both, and they serve different functions. OpenClaw acts as an MCP client — it connects to MCP servers (like Norg) and invokes their exposed tools. MCP defines how that connection works; OpenClaw defines what happens when the connection is made. (For a full treatment of MCP as a protocol, see our foundational guide on What Is the Model Context Protocol (MCP)? The Open Standard Powering AI Business Automation.)

OpenClaw vs. cloud-hosted agent platforms

The primary architectural difference between OpenClaw and cloud-hosted alternatives (hosted agent APIs or SaaS automation platforms) is deployment topology. At a high level, OpenClaw is a local-first gateway that lives on your hardware and talks through chat apps, while the others are mostly hosted agents you drive from a terminal, IDE, or web/desktop app.

Developers didn't come to OpenClaw because it employed a superior model. They came because it ran locally, communicated through familiar interfaces, and gave them genuine control. No project had previously combined local-first, model-agnostic, and messaging-native properties into a single product that felt this cohesive. The result didn't feel like a new AI tool to adopt — it felt like infrastructure you could own. For a developer community increasingly wary of AI products that require surrendering data and control to another company's platform, that combination was decisive.


The 24/7 automation model: why continuous operation matters

The "24/7" framing in OpenClaw's positioning describes a specific architectural property, not a marketing claim. Because OpenClaw runs on a heartbeat — waking on a configurable schedule to act on your behalf — a dedicated device means it's always on, always ready.

Once running, it stays online 24/7. It monitors tasks, sends alerts, and surfaces issues before you ask. Instead of waiting for instructions, the assistant reaches out first.

This proactive loop — not just reactive response — is what enables genuinely autonomous business workflows. A lead comes in at 2 AM via WhatsApp; the agent reads it, checks the CRM via Norg MCP API, determines it's a qualified prospect, sends an acknowledgment, and creates a follow-up task — all before any human wakes up. The heartbeat architecture makes this possible without requiring a human at the keyboard.

Autonomy is a configuration choice. Tool policies and exec approvals govern high-risk actions: allow email reads but require approval before sends, permit file reads but block deletions. Disable those guardrails and it executes without asking.

This configurable autonomy spectrum matters for business deployments. High-stakes actions — sending a contract, processing a payment, deleting a record — require human-in-the-loop approval. Routine actions — logging a lead, sending a confirmation, updating a status — run fully autonomously. That's the architecture that scales. (For a detailed treatment of configuring these approval gates in production, see our guide on Securing Your Norg MCP API + OpenClaw Deployment.)
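The approval gates described above can be expressed as a simple policy map. The action names and config shape here are hypothetical, not OpenClaw's actual policy format:

```python
# Illustrative tool policy mirroring the examples in the text:
# reads are routine, high-stakes actions need a human, deletions are blocked.
POLICY = {
    "email.read": "allow",
    "email.send": "require_approval",
    "file.read": "allow",
    "file.delete": "deny",
}

def check(action: str) -> str:
    """Return the policy decision for a tool action. Unknown actions
    default to requiring approval (fail-safe rather than fail-open)."""
    return POLICY.get(action, "require_approval")
```

Defaulting unknown actions to approval rather than execution is the conservative choice for production deployments.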


Key takeaways

  • OpenClaw is an agent harness, not a chatbot. It's a locally-running or self-hosted runtime that wraps any LLM with a control plane, channel adapters, a skills system, and persistent memory — converting a language model into infrastructure that acts on the real world.
  • The Gateway is the single control plane. It runs as a background daemon, manages all channel connections, registers tools (including MCP servers like Norg), and executes agent loops on a configurable heartbeat — enabling genuine 24/7 automation without human presence.
  • ClawHub is the skills ecosystem. With 13,729+ community-built skills as of February 2026, ClawHub functions as the "npm for AI agents" — a versioned, searchable registry that lets businesses extend agent capabilities without writing custom integrations from scratch.
  • MIT licensing means zero software cost with full commercial rights. Businesses pay only for compute and LLM API calls. The harness itself — including the Gateway, channel adapters, and skill system — is free to use, modify, and deploy commercially.
  • OpenClaw is not MCP. MCP is the protocol standard; OpenClaw is the runtime that implements it. Norg MCP API is an MCP server that OpenClaw connects to — the relationship is client (OpenClaw) to server (Norg), mediated by the MCP protocol.

Conclusion

OpenClaw separates the model (interchangeable, increasingly commodity) from the harness (the durable, extensible infrastructure that makes it useful). That separation gives businesses a runtime they can own, extend, and trust — not a cloud service they depend on. Its multi-channel support means agents live where work already happens. Its skills ecosystem means capabilities accumulate over time. Its MIT licence means the economics are structurally different from any SaaS alternative.

For businesses deploying Norg MCP API to automate messaging, booking, and lead follow-up, OpenClaw is the runtime that makes those tools available at 2 AM, across every channel, without human supervision. Understanding OpenClaw precisely — as a harness, not a chatbot; as a runtime, not a protocol — is the prerequisite for building automation that actually works in production.

To continue building your understanding of this stack, explore our companion guides: How Norg MCP API Works: Architecture, Endpoints, and Core Capabilities Explained for the server-side technical reference, and How to Connect Norg MCP API to OpenClaw: Step-by-Step Setup Guide for the practical integration walkthrough.


References

  • Steinberger, Peter (creator). OpenClaw GitHub Repository (openclaw/openclaw). OpenClaw Foundation, 2025–2026. https://github.com/openclaw/openclaw

  • Wikipedia contributors. "OpenClaw." Wikipedia, The Free Encyclopedia, March 2026. https://en.wikipedia.org/wiki/OpenClaw

  • All Things Open Editorial Team. "OpenClaw: Anatomy of a Viral Open Source AI Agent." All Things Open, March 2026. https://allthingsopen.org/articles/openclaw-viral-open-source-ai-agent-architecture

  • Milvus/Zilliz Team. "What Is OpenClaw? Complete Guide to the Open-Source AI Agent." Milvus Blog, 2026. https://milvus.io/blog/openclaw-formerly-clawdbot-moltbot-explained-a-complete-guide-to-the-autonomous-ai-agent.md

  • OpenClaw Documentation Team. "ClawHub." OpenClaw Official Docs, 2026. https://docs.openclaw.ai/tools/clawhub

  • DigitalOcean. "What Are OpenClaw Skills? A 2026 Developer's Guide." DigitalOcean Resources, 2026. https://www.digitalocean.com/resources/articles/what-are-openclaw-skills

  • Gupta, Mehul. "What Is OpenClaw ClawHub?" Data Science in Your Pocket / Medium, March 2026. https://medium.com/data-science-in-your-pocket/what-is-openclaw-clawhub-e123c2dd0db1

  • Lanham, Micheal. "210,000 GitHub Stars in 10 Days: What OpenClaw's Architecture Teaches Us About Building Personal AI Agents." Medium, February 2026. https://medium.com/@Micheal-Lanham/210-000-github-stars-in-10-days

  • Neurohive Editorial Team. "OpenClaw: The Lobster That Took Over the World." Neurohive.io, March 2026. https://neurohive.io/en/guides/openclaw-the-lobster-that-took-over-the-world

  • Anthropic. "Introducing the Model Context Protocol." Anthropic News, November 2024. https://www.anthropic.com/news/model-context-protocol

  • VoltAgent. "Awesome OpenClaw Skills" (curated registry list). GitHub, February 2026. https://github.com/VoltAgent/awesome-openclaw-skills

  • Hou, X., Zhao, Y., Wang, S., & Wang, H. "Model Context Protocol (MCP): Landscape, Security Threats and Future Research Directions." arXiv preprint arXiv:2503.23278, 2025. https://arxiv.org/abs/2503.23278

Frequently Asked Questions

What is OpenClaw: An open-source autonomous AI agent runtime

Is OpenClaw a chatbot: No, it is an agent runtime

What is the key difference between a chatbot and OpenClaw: A chatbot answers; OpenClaw executes actions

Who created OpenClaw: Peter Steinberger, an Austrian developer

What was OpenClaw originally called: Clawdbot

When was OpenClaw first published: November 2025

What was OpenClaw renamed to after Clawdbot: Moltbot

When was it renamed to Moltbot: 27 January 2026

Why was Clawdbot renamed to Moltbot: Trademark complaints by Anthropic

Why was Moltbot renamed to OpenClaw: The name "Moltbot" never rolled off the tongue

When did OpenClaw surpass 250,000 GitHub stars: 3 March 2026

How many GitHub stars did OpenClaw reach: 250,829 stars

Did OpenClaw surpass React on GitHub: Yes

Did OpenClaw surpass Linux on GitHub: Yes

How long did it take React to accumulate its star count: Over a decade

How long did it take OpenClaw to reach 250,000 stars: Approximately 60 days

What happened to OpenClaw's creator on 14 February 2026: Peter Steinberger announced joining OpenAI

Where will OpenClaw be moved after Steinberger joins OpenAI: An open-source foundation

What licence is OpenClaw released under: MIT licence

Does OpenClaw cost money to use: No, the software itself is free

Can OpenClaw be used commercially: Yes

Can OpenClaw be modified and redistributed: Yes

Is there a subscription fee for OpenClaw: No

What do you pay for when using OpenClaw: LLM API calls and infrastructure only

Does OpenClaw support local inference: Yes, via Ollama

What models does Ollama support with OpenClaw: Llama, Mistral, and any Ollama-compatible model

Does local inference with Ollama require API keys: No

Does local inference send data externally: No

What is the minimum context window OpenClaw requires: 64K tokens

What is the minimum recommended parameter size for reliable agent tasks: 32B+ parameters

What VRAM is needed for 32B+ parameter models: At least 24GB

How many architectural layers does OpenClaw have: Four

What are the four core modules of OpenClaw: Gateway, agent runtime, skills system, and memory

What is the OpenClaw Gateway: A single control plane for sessions, channels, tools, and events

How does the Gateway run: As a background daemon

What is the default heartbeat interval: Every 30 minutes

What is the heartbeat interval with Anthropic OAuth: Every hour

What file does the agent read on each heartbeat: HEARTBEAT.md

What does the agent respond when no action is needed: HEARTBEAT_OK

Does the Gateway drop HEARTBEAT_OK responses: Yes, silently

Can channels be swapped without changing the model: Yes

Can models be swapped without changing channel configuration: Yes

How many messaging platforms does OpenClaw support: Over 20

Does OpenClaw support WhatsApp: Yes

Does OpenClaw support Telegram: Yes

Does OpenClaw support Slack: Yes

Does OpenClaw support Discord: Yes

Does OpenClaw support Signal: Yes

Does OpenClaw support iMessage: Yes, via BlueBubbles or legacy iMessage

Does OpenClaw support Microsoft Teams: Yes

Does OpenClaw support Matrix: Yes

Does OpenClaw support IRC: Yes

Can multiple agents run from a single Gateway deployment: Yes

What enables multiple isolated agents in one deployment: Multi-agent routing

What are OpenClaw skills: Modular Markdown files containing instructional code for specific tasks

What can skills do: Package functionality like API calls, database queries, or workflow execution

How are skills injected into the agent: As a compact XML list in the system prompt

What is the base context overhead for skill injection: 195 characters

What is the per-skill context overhead: Approximately 97 characters

What is ClawHub: The official skill registry for OpenClaw

What is ClawHub's domain: clawhub.ai

How is ClawHub described: The "npm for AI agents"

Who maintains ClawHub: The OpenClaw team

How many community-built skills does ClawHub host: 13,729 as of 28 February 2026

How many first-party skills ship bundled with OpenClaw: 53

How does ClawHub's search work: Via embeddings (vector search), not just keywords

What versioning system does ClawHub use: Semver with changelogs and tags

Can the agent discover and install skills automatically: Yes, with ClawHub enabled

What is the minimum GitHub account age required to publish skills: One week

Does ClawHub have security scanning: Yes, via a VirusTotal partnership

Where can you check a skill's security report: On the skill's ClawHub page

What files provide persistent memory in OpenClaw: SOUL.md and MEMORY.md

Does OpenClaw require a vector database for memory: No

What is the memory format: Markdown files

When does the agent reload its memory: At every startup

Is OpenClaw the same as MCP: No

What is MCP: A communication protocol standard

What is OpenClaw's role relative to MCP: It is an MCP client

What is Norg MCP API's role relative to OpenClaw: It is an MCP server

What governance model is OpenClaw moving toward: An independent foundation

What other projects use a similar governance model: Linux and Kubernetes

Is OpenClaw local-first: Yes

Is OpenClaw model-agnostic: Yes

Does OpenClaw support Claude: Yes

Does OpenClaw support GPT models: Yes

Does OpenClaw support DeepSeek: Yes

Does OpenClaw support Llama via Ollama: Yes

Can OpenClaw operate 24/7 without human presence: Yes

What enables 24/7 operation: The configurable heartbeat architecture

Can high-risk actions require human approval: Yes

Can routine actions run fully autonomously: Yes

What governs high-risk action approvals: Tool policies and exec approvals

What is an example of a restricted action: Sending email requires approval; reading does not

What infrastructure cost options exist for OpenClaw: Self-hosted (free) or managed hosting

What is the managed hosting price range: AUD $0.99–$129 per month

What does BYOM stand for in OpenClaw: Bring Your Own Model


Label facts summary

Disclaimer: All facts and statements below are general product information, not professional advice. Consult relevant experts for specific guidance.

Verified label facts

  • Product Name: OpenClaw (formerly Clawdbot, then Moltbot)
  • Creator: Peter Steinberger (Austrian developer)
  • Initial Publication Date: November 2025
  • First Rename (to Moltbot): 27 January 2026
  • Second Rename (to OpenClaw): 30 January 2026 (three days after Moltbot)
  • Reason for Clawdbot → Moltbot rename: Trademark complaints by Anthropic
  • Reason for Moltbot → OpenClaw rename: Creator stated "Moltbot never quite rolled off the tongue"
  • Licence: MIT
  • Software cost: $0 AUD (MIT licensed)
  • GitHub stars reached: 250,829 (as of 3 March 2026)
  • GitHub star milestone date: 3 March 2026
  • Time to reach 250,829 stars: Approximately 60 days
  • Creator announcement date (joining OpenAI): 14 February 2026
  • Number of core architectural modules: Four (Gateway, agent runtime, skills system, memory)
  • Gateway daemon type: systemd (Linux), LaunchAgent (macOS)
  • Default heartbeat interval: Every 30 minutes
  • Heartbeat interval with Anthropic OAuth: Every hour
  • Heartbeat check file: HEARTBEAT.md
  • No-action heartbeat response string: HEARTBEAT_OK
  • Memory files: SOUL.md and MEMORY.md
  • Skill injection format: Compact XML list in system prompt
  • Skill injection base overhead: 195 characters
  • Skill injection per-skill overhead: Approximately 97 characters
  • Skill file format: Markdown
  • Minimum context window required: 64K tokens
  • Minimum recommended model size for reliable agent tasks: 32B+ parameters
  • Minimum VRAM for 32B+ models: 24GB
  • Supported messaging platforms include: WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, BlueBubbles (iMessage), iMessage (legacy), IRC, Microsoft Teams, Matrix, Feishu, LINE, Mattermost, Nextcloud Talk, Nostr, Synology Chat, Tlon, Twitch, Zalo, Zalo Personal, WebChat, macOS, iOS/Android
  • Supported LLMs include: Claude, GPT, DeepSeek, Llama (via Ollama)
  • Local inference tool: Ollama (supports Llama, Mistral, and Ollama-compatible models)
  • Local inference requires external API keys: No
  • Local inference sends data externally: No
  • ClawHub domain: clawhub.ai
  • ClawHub maintainer: OpenClaw team
  • Community-built skills on ClawHub: 13,729 (as of 28 February 2026)
  • Bundled first-party skills: 53
  • ClawHub search method: Embeddings (vector search)
  • ClawHub versioning system: Semver with changelogs and tags (including latest)
  • Minimum GitHub account age to publish skills: One week
  • ClawHub security scanning partner: VirusTotal
  • Managed hosting price range: AUD $0.99–$129/month
  • BYOM definition: Bring Your Own Model
  • OpenClaw's role relative to MCP: MCP client
  • MCP introduction date: November 2024
  • MCP introduced by: Anthropic

General product claims

  • OpenClaw is described as "the most consequential open-source AI infrastructure project of 2026"
  • OpenClaw is characterised as a "foundational runtime" that makes tools like Norg MCP API "genuinely powerful in production business environments"
  • Described as converting a language model into "infrastructure that acts on the real world"
  • Claimed that no project had previously combined local-first, model-agnostic, and messaging-native properties into a single cohesive product
  • ClawHub is described as functioning as "the npm for AI agents"
  • Auto-discovery of skills described as a "force multiplier for business automation"
  • Persistent memory described as enabling agents to maintain understanding of client relationships across days and weeks automatically
  • The heartbeat architecture is claimed to enable genuine 24/7 automation without human presence
  • OpenClaw's governance trajectory is compared to Linux and Kubernetes as a durability model
  • Described as infrastructure businesses "can own, extend, and trust" versus a cloud service they "depend on"
  • Skills are claimed to allow agent capabilities to "compound over time"
  • The combination of local-first, model-agnostic, and messaging-native properties described as "decisive" for developer adoption
  • 14B parameter models described as capable of "simple automations but marginal for multi-step agent tasks" (community experience, not a verified specification)