saulutions.ca


Running OpenClaw Safely on Proxmox as a systemd Service

6 min read
#openclaw #proxmox #lxc #ai #homelab

So you want to run OpenClaw on your Proxmox homelab. Good choice - I’ve been running mine this way for a few weeks now, and it’s been solid.

This guide walks through the whole setup, from creating an LXC container to getting OpenClaw running with all the bells and whistles. By the end, you’ll have a dedicated container running OpenClaw that can handle multiple agent workspaces without breaking a sweat.

What is OpenClaw anyway?

OpenClaw is an AI coding assistant, but not like the ones you’re probably used to. It’s not just a chatbot that suggests code. Two big things set it apart:

First, it has access to actual tools. And I don’t just mean “it can write code.” OpenClaw can schedule things on your calendar, directly interact with your system, install and run applications, execute commands - basically anything you could do from a terminal, it can do. It’s like giving an AI assistant actual hands instead of just a mouth.

Second, it can be proactive. This is the wild part. OpenClaw doesn’t sit there waiting for you to ask it questions. It can message you first. You set up cron jobs that trigger agentic calls, and OpenClaw runs background tasks without you having to prompt it.

Some examples of what people use this for:

  • Automatically checking your emails and notifying you about important ones based on rules you define
  • Telling you your schedule for the day each morning
  • Giving you a personalized news briefing
  • Running background maintenance tasks on your codebase (updating dependencies, running tests, whatever)

It’s less “AI chatbot” and more “AI employee that works in the background.”

The catch: what makes it powerful also makes it risky

Here’s the thing - everything that makes OpenClaw useful is also what makes it potentially dangerous if not set up properly.

Prompt injection is a real concern. Since OpenClaw has access to external tools and data sources, there’s a risk that malicious input from those sources could manipulate what it does. Imagine it reads an email with carefully crafted text that tricks it into running commands you didn’t intend.

System access is a double-edged sword. You’re giving this thing access to your computer - to install software, run commands, modify files. That’s great when it’s doing what you want. But if someone gains access to your OpenClaw instance, they essentially have access to everything OpenClaw can do. Which is… a lot.

This is why we’re running it in an isolated LXC container with proper sandboxing and security configuration. That’s not optional nice-to-have stuff - it’s critical.

Throughout this guide, especially in Part 4, we’ll focus heavily on mitigating these risks. Proper sandboxing, resource limits, monitoring for suspicious activity, and configuring OpenClaw for safer usage. The goal is to get all the benefits without the nightmare scenarios.

Running it on Proxmox in an LXC container is actually a great first step - it gives you isolation (your AI experiments stay contained), resource control (set hard limits on what it can consume), and easy management (snapshots, backups, migrations - all the Proxmox goodness you already know).
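To make the resource-control point concrete, here's roughly what creating such a container looks like with `pct` on the Proxmox host. The VMID, storage names, bridge, and template are placeholders for your environment; Part 1 walks through the reasoning behind each flag:

```shell
# Sketch only -- adjust the VMID (210), storage, bridge, and template
# to your environment. nesting=1 is what lets Docker run inside the LXC.
pct create 210 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname openclaw \
  --unprivileged 1 \
  --features nesting=1 \
  --cores 4 --memory 8192 \
  --rootfs local-lvm:60 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
```

The `--memory` and `--cores` values are hard ceilings enforced by the host, which is exactly the kind of containment we want around an AI agent that can execute commands.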

Why LXC instead of just Docker or a full VM?

Why not Docker? LXC and Docker containers are built on the same Linux kernel primitives (namespaces and cgroups). You could technically install OpenClaw in a Docker container, but since we're on Proxmox and Proxmox has excellent LXC support built in, there's no reason to add another layer. Running Docker on top of LXC for the gateway would just be nesting containers without gaining anything.

We will still be using Docker inside the LXC container later, but not for the gateway. The OpenClaw gateway runs directly in the LXC container; Docker comes into play for individual agent workspaces, which is where nesting containers actually makes sense. When agents make tool calls, running those inside Docker containers adds an extra sandboxing layer between agents, and between each agent and the Debian host. That's one of our key security measures.
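As a rough illustration of what that per-agent sandboxing can look like (the image name, workspace path, and limits below are placeholders, not something OpenClaw ships), a tool call can run in a throwaway container with tightened defaults:

```shell
# Hypothetical per-agent sandbox invocation -- image and paths are placeholders.
# --network none: no network access unless the agent explicitly needs it
# --read-only + --tmpfs: immutable root filesystem, scratch space only in /tmp
# --memory / --pids-limit: cap what a runaway tool call can consume
# --cap-drop ALL: no Linux capabilities at all
docker run --rm \
  --network none \
  --read-only --tmpfs /tmp \
  --memory 1g --pids-limit 256 \
  --cap-drop ALL \
  -v /opt/openclaw/workspaces/agent1:/workspace \
  debian:12-slim /workspace/task.sh
```

The container is destroyed when the task finishes, so even a compromised tool call leaves nothing behind. We'll refine this pattern in Part 4.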

Why not a VM? VMs are overkill here. As long as you configure your LXC container properly (which we'll do), it can be nearly as well isolated as a full VM, without the overhead: slower startup times, higher RAM usage, and the extra complexity of managing a full virtualized OS just to run a Node.js application.

LXC gives us the security and isolation we need without the baggage.

The plan

This series is split into four parts:

Part 1: Setting up the LXC container (~15 minutes) We’ll create a Debian container with the right settings. This part is mostly about understanding which features to enable and why - things like nesting, TUN/TAP, and resource allocation.

Part 2: Dependencies and post-install stuff (~20 minutes) Getting the container ready with system updates, security basics (because we should at least pretend to care about security), and installing Node.js, Git, Docker, and other tools OpenClaw needs.

Part 3: Installing OpenClaw (~30 minutes) Actually installing OpenClaw, setting up API keys, configuring your first workspace, and making it run as a proper systemd service so it starts on boot.

Part 4: Advanced configuration and security hardening (~45 minutes) This is where we take security seriously. Multiple agents running in parallel, workspace sandboxing to isolate agent sessions from each other, resource limits so a runaway (or compromised) process can’t eat all your RAM, monitoring for suspicious activity, and strategies for safer proactive agent usage.

You can stop after Part 3 and have a functional setup, but Part 4 is where we actually make it safe for production use. Given the risks mentioned above, I’d strongly recommend not skipping this part.
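For a taste of where Part 3 ends up, a minimal systemd unit for the gateway might look something like this. The service user, paths, and start command are assumptions for illustration, not the exact unit from Part 3:

```ini
# /etc/systemd/system/openclaw.service -- illustrative sketch only.
# User, paths, and ExecStart are placeholders for your setup.
[Unit]
Description=OpenClaw gateway
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=openclaw
WorkingDirectory=/opt/openclaw
ExecStart=/usr/bin/node /opt/openclaw/gateway.js
Restart=on-failure
# Basic hardening; Part 4 goes much further
NoNewPrivileges=true
ProtectSystem=full

[Install]
WantedBy=multi-user.target
```

Once a unit like this is in place, `systemctl enable --now openclaw` starts it and makes it survive reboots.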

What you’ll need

  • Proxmox VE up and running (I’m on 8.x but this should work on 7.x too)
  • Comfort with basic Linux command-line work
  • At least 8GB RAM and 60GB storage you can dedicate to this
  • SSH access to your Proxmox node
  • API keys for your LLM of choice (Claude, OpenAI, whatever you’re using)

You don’t need to be a Docker expert or know anything about LXC. I’ll explain everything as we go.

A note on time estimates

Those time estimates above? They’re if everything goes smoothly. Add another 30-60 minutes for troubleshooting if you’re doing this for the first time. We’ve all been there.

Also, you can totally do this in chunks. Each part builds on the previous one, but you can take breaks between sections. I spread the whole thing over multiple days while figuring out the config, so feel free to take it slow.

Why self-host this at all?

Fair question. Cloud-based AI tools are convenient. But there’s something satisfying about running it yourself:

  • Your code stays on your infrastructure
  • No per-token API charges if you’re using local LLMs
  • Full control over what models you use and how they’re configured
  • You learn how this stuff actually works instead of treating it like magic

Plus, if you’re already running Proxmox for other stuff, adding one more container is trivial.

Alright, let’s get started. Head to Part 1 when you’re ready.
