How to Run OpenClaw on Your Machine

OpenClaw is one of the more interesting open-source projects to land in the personal AI space recently. It functions as a self-hosted AI assistant gateway, the kind of thing that runs quietly in the background on your own machine and lets you talk to AI models through apps you already use, like Telegram or WhatsApp. No cloud subscription, no vendor lock-in. You set it up once, and it keeps running.

The appeal is simple: full control over your AI stack. You bring your own API key from Anthropic, OpenAI, or Google, and OpenClaw handles the plumbing: routing messages, storing memory across sessions, and executing tasks on your behalf. For anyone who has spent time thinking about personal AI infrastructure, this hits a real need.

This guide walks through everything you need to run OpenClaw, from system requirements to first chat.

What Exactly Is OpenClaw?

OpenClaw is an open-source, self-hosted personal AI assistant gateway. The project lives on GitHub at openclaw/openclaw and is built to connect chat apps to AI agents capable of automation, persistent memory, and tool use.

It runs as a background process (called the Gateway) on port 18789 on your machine. Once active, it handles multi-channel messaging, keeps persistent memory across conversations, and can execute proactive tasks like cron jobs and scheduled reminders, without you having to be present.

The supported AI providers include Anthropic (Claude), OpenAI (GPT models), and Google. You configure the API key during the onboarding process.

On the messaging side, OpenClaw supports Telegram, WhatsApp, Discord, Slack, iMessage, Signal, Matrix, and Microsoft Teams. Additional channel support comes via plugins.

The tools layer is worth calling out separately. Built-in capabilities include browser control, file access, shell commands, and web search. On top of that, there’s a marketplace called Clawhub.ai for community-built custom skills. Agents can also build their own skills, which opens up automation possibilities well beyond what a standard AI chat interface would allow.

A few other things worth knowing: all data stays local by default, the system supports multi-agent setups, and there’s optional iOS/Android node support for camera and voice input.

System Requirements Before You Install

Before running the installer, confirm your system meets these requirements:

  • Node.js 24 is recommended. Node.js 22.14 or newer also works. The installer handles Node.js installation automatically if it’s not already present.
  • Operating systems supported: macOS, Linux (including WSL2), and Windows via PowerShell.
  • You need an active API key from at least one supported AI provider: Anthropic (console.anthropic.com), OpenAI, or Google.
  • For VPS or cloud deployments, a minimum of 4GB RAM is recommended. DigitalOcean, Fly.io, and Hetzner are among the listed compatible providers.

There are no GPU requirements for running OpenClaw itself, since the actual inference happens at the API provider level.
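If you want to confirm the Node.js requirement before running the installer, a quick check like the following works (this is a convenience sketch, not an official OpenClaw command; it compares only the major version, so a v22 install below 22.14 would still need a manual look):

```shell
# Compare the installed Node.js major version against the documented minimum (22.14+).
REQUIRED_MAJOR=22
MAJOR=$(node -p 'process.versions.node.split(".")[0]' 2>/dev/null || echo 0)
if [ "$MAJOR" -ge "$REQUIRED_MAJOR" ]; then
  echo "Node.js v$MAJOR detected: OK"
else
  echo "Node.js missing or older than v$REQUIRED_MAJOR: let the installer handle it, or upgrade"
fi
```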

How to Install OpenClaw

There are a few different ways to install, depending on how much control you want over the setup process.

Option 1, One-liner installer (recommended for most users): This is the fastest path. Run the following in your terminal:

curl -fsSL https://openclaw.ai/install.sh | bash

The script detects your OS, installs Node.js if needed, and sets up OpenClaw automatically.

Option 2, npm install: If you prefer managing things through npm:

npm install -g openclaw@latest

Then run the onboarding wizard:

openclaw onboard --install-daemon

Option 3, Build from source: Clone the GitHub repo and build with pnpm:

git clone https://github.com/openclaw/openclaw.git

Follow the repo’s build instructions using pnpm after cloning.

Option 4, Containers and cloud: Docker, Podman, Nix, and Ansible are all supported for containerized deployments. For cloud hosting, DigitalOcean, Fly.io, and Hetzner all work with OpenClaw.
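For the Docker route, a compose file along these lines is a reasonable starting sketch. The image name, internal config path, and volume layout are assumptions for illustration, not the project's published values; check the repo's container docs before using it. Note that the port is bound to 127.0.0.1 so the Gateway is never exposed publicly:

```yaml
services:
  openclaw:
    image: openclaw/openclaw:latest      # assumed image name; verify against the repo
    restart: unless-stopped
    ports:
      - "127.0.0.1:18789:18789"          # keep the Gateway loopback-only
    volumes:
      - ./openclaw-data:/root/.openclaw  # persist config, credentials, and logs
```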

Walking Through the Onboarding Process

Once installed, the onboarding wizard is where most of the actual configuration happens. Launch it with:

openclaw onboard --install-daemon

The wizard walks you through the following steps in order:

  • Name your assistant
  • Select an AI provider and model (for example, an Anthropic Claude model)
  • Enter your API key
  • Choose which messaging channels to enable
  • Install the daemon for auto-start on system boot

After onboarding completes, OpenClaw creates a configuration folder at ~/.openclaw/ on your machine. Inside it: openclaw.json (the main config file), a credentials store, a workspace directory, and logs.
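The generated openclaw.json is the source of truth for your setup. The exact schema varies by version, so the fragment below is only an illustrative guess at its shape (every field name here is an assumption); open the file on your own machine to see the real keys:

```json
{
  "assistant": { "name": "Claw" },
  "provider": { "vendor": "anthropic" },
  "channels": ["telegram"],
  "gateway": { "host": "127.0.0.1", "port": 18789 }
}
```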

If you want to skip the wizard and configure manually, use the --no-onboard flag.

Verifying the Installation and Running OpenClaw

A few commands to check that everything is working after setup:

  • openclaw --version — confirms the installed version
  • openclaw doctor — runs a diagnostics check
  • openclaw gateway status — shows whether the Gateway process is running

To open the web dashboard, run:

openclaw dashboard

This opens the local UI at http://127.0.0.1:18789. From here you can chat directly in the interface, review logs, and manage connected channels.
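Beyond openclaw gateway status, you can probe the port directly to confirm something is actually listening. This sketch uses bash's /dev/tcp redirection (a bash-ism that won't work in plain sh), so it needs no extra tools:

```shell
# Probe the local Gateway port and report whether anything accepts connections.
PORT=18789
if (exec 3<>"/dev/tcp/127.0.0.1/$PORT") 2>/dev/null; then
  echo "Gateway reachable on 127.0.0.1:$PORT"
else
  echo "Nothing listening on 127.0.0.1:$PORT"
fi
```

The subshell opens (and implicitly closes) a TCP connection; the exit status tells you whether the connect succeeded.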

To connect a channel like Telegram, you’ll need to provide a bot token during the channel setup step. Once connected, you can send messages to your AI assistant from the Telegram app as you normally would; OpenClaw handles the routing in the background.

To restart the Gateway process at any point:

openclaw gateway restart

Common Issues and How to Fix Them

A handful of issues come up fairly often when people are getting started.

“Command not found” after installation: This usually means the npm global bin directory isn’t in your PATH. Add the following to your ~/.zshrc or ~/.bashrc file and restart the terminal (npm 9 removed the old npm bin command, so the directory is derived from npm prefix -g instead):

export PATH="$PATH:$(npm prefix -g)/bin"
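To confirm the change took effect in a fresh shell, list the PATH entries and look for the global bin directory. The directory below is a placeholder standing in for whatever npm prefix -g resolves to on your machine:

```shell
# Append a (placeholder) global bin directory to PATH and verify it is present.
GLOBAL_BIN="/usr/local/lib/node_modules/.bin"   # placeholder; use "$(npm prefix -g)/bin" for real
export PATH="$PATH:$GLOBAL_BIN"
echo "$PATH" | tr ':' '\n' | grep -xF "$GLOBAL_BIN" && echo "PATH entry present"
```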

Updating OpenClaw: Run the same npm command used for installation:

npm install -g openclaw@latest

Alternatively, pull the latest from the GitHub main branch if you installed from source.

Community support: The GitHub issues page is the best place for bug reports. There’s also an active thread at r/AiForSmallBusiness on Reddit where users share setup tips and configurations.

What Can You Actually Do With It?

The use case list is broad because the tool layer is genuinely flexible. A few practical applications for the technically inclined:

  • Automate repetitive tasks with cron jobs: keyword research, content gap analysis, or monitoring a competitor’s website for changes.
  • Integrate with Google Workspace to pull data from Sheets or Docs into automated workflows.
  • Build custom skills for domain-specific tasks: SEO audits, backlink checks, internal tooling.
  • Run multi-agent setups where different agents handle different parts of a workflow and hand off to each other.
  • Use proactive reminders and scheduled agents to execute tasks asynchronously.
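As a concrete sketch of the cron-style monitoring idea above: the core of a “watch for changes” task is just hashing the latest snapshot and comparing it to the previous one. Everything here is illustrative, not OpenClaw-specific code; in a real task, the snapshot would come from fetching the competitor’s page rather than writing a local file:

```shell
# Minimal change-detection loop body, suitable for a cron job or a scheduled agent.
STATE=/tmp/openclaw-watch.hash      # where the last-seen hash is remembered between runs
SNAPSHOT=/tmp/openclaw-watch.html   # stand-in for fetched page content
rm -f "$STATE"                      # fresh start for this demo run only
printf 'page content v1\n' > "$SNAPSHOT"

NEW_HASH=$(sha256sum "$SNAPSHOT" | cut -d' ' -f1)
OLD_HASH=$(cat "$STATE" 2>/dev/null || true)

if [ "$NEW_HASH" != "$OLD_HASH" ]; then
  echo "change detected"            # here an agent would notify you over a channel
  echo "$NEW_HASH" > "$STATE"
else
  echo "no change"
fi

# Example crontab entry to run a script like this every 30 minutes:
# */30 * * * * /path/to/watch.sh
```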

The self-improving agent capability, where agents can build and refine their own skills over time, is the most experimental part of the project, but it’s worth keeping an eye on as the community builds out more of Clawhub.ai.

FAQs

How to run OpenClaw on cmd?

Open Command Prompt or PowerShell as admin, then run openclaw onboard --install-daemon for first-time setup. Start the gateway with openclaw gateway start or openclaw gateway restart. Verify at http://127.0.0.1:18789.

How to safely run OpenClaw?

Install it locally in a VM or Docker container, bind the Gateway to 127.0.0.1 (not public interfaces), start with limited permissions (read-only folders first), avoid running as root, and never expose port 18789 to the internet. Keep your API keys secure.

How to run OpenClaw GUI?

Run openclaw dashboard in a terminal; it auto-opens the web UI at http://127.0.0.1:18789. Or launch openclaw tui for a terminal-based interactive dashboard with logs and controls.

How does OpenClaw work?

OpenClaw runs as a local gateway (port 18789) routing messages from apps like Telegram/WhatsApp to AI models (Claude/GPT). It adds persistent memory, tools (browser/files/shell), multi-agent routing, and cron jobs, all self-hosted for privacy.