A Cloud Box for AI Agents
I wanted a dedicated Linux box in the cloud where AI agents could run with full permissions, completely separate from my personal machine. What started as a quick Terraform module became a self-managing development environment that runs for about $20 a month.
Here's the journey from first boot to a box that starts itself, does the work, and shuts itself down.
The Premise
Claude Code is most powerful when you run it with --dangerously-skip-permissions. The agent can execute shell commands, edit files, and install packages without stopping to ask for approval on every step. That's great for productivity, and less great when it's happening on the laptop where you keep your personal files and credentials.
Around the same time, OpenClaw was gaining traction as an open-source autonomous agent that runs workflows on a schedule. I wanted to try it, but I didn't want it running on my personal machine all day. I needed an isolated environment that could spin up, do the work, and shut itself down.
First Boot
I wrote a Terraform module that provisions Ubuntu 24.04 with NICE DCV for remote desktop access through a browser, plus Docker, Node.js, and the usual dev tools, all pre-installed via cloud-init. The first version took a few hours to build and about 15 minutes for cloud-init to finish on first launch.
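The cloud-init side can be sketched roughly like this; the package list and the Node.js setup step are illustrative stand-ins, not the module's exact configuration:

```yaml
#cloud-config
# Illustrative first-boot provisioning sketch. Package names and the
# Node.js install command are assumptions; the real module pins versions
# and also installs the NICE DCV server packages.
package_update: true
packages:
  - docker.io
  - build-essential
  - jq
runcmd:
  # Node.js from NodeSource (version is a placeholder)
  - curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
  - apt-get install -y nodejs
  # DCV server install and license setup would follow here
```

The ~15-minute first boot is mostly this step: package downloads plus the DCV desktop stack.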
The initial setup was simple: an EC2 instance behind a security group locked to my machine's IP address, with only SSH and DCV ingress open. I started with spot pricing at about three cents an hour, knowing I'd get the occasional interruption but hoping they wouldn't be too frequent.
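In Terraform, that locked-down security group looks something like this (variable and resource names are illustrative, not the module's actual interface; 8443 is DCV's default web-client port):

```hcl
variable "my_ip" {
  description = "Workstation IP allowed to connect, e.g. 203.0.113.7"
  type        = string
}

resource "aws_security_group" "devbox" {
  name_prefix = "devbox-"

  # SSH, from my workstation only
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.my_ip}/32"]
  }

  # NICE DCV web client (default port 8443)
  ingress {
    from_port   = 8443
    to_port     = 8443
    protocol    = "tcp"
    cidr_blocks = ["${var.my_ip}/32"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```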
They were frequent enough. The module uses persistent spot with stop-on-interruption, so nothing was ever lost, but getting recalled in the middle of a session happened often enough that I switched to on-demand as the default. Spot is still there as an option if you want the savings and can tolerate the interruptions.
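The persistent-spot behavior is a few lines on the instance resource; this is a sketch with illustrative variable names:

```hcl
resource "aws_instance" "devbox" {
  ami           = var.ami_id      # illustrative variables
  instance_type = var.instance_type

  instance_market_options {
    market_type = "spot"
    spot_options {
      # Persistent request + stop (not terminate) on interruption,
      # so the EBS root volume and all state survive a reclaim and
      # the instance comes back when capacity returns.
      spot_instance_type             = "persistent"
      instance_interruption_behavior = "stop"
    }
  }
}
```

Wrapping this in a conditional block is what lets spot stay available as an opt-in toggle.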
Right-Sizing
I originally provisioned a 4 vCPU, 16GB instance because I planned to do heavy development on the box. Then two things shifted.
First, Claude Code shipped a /sandbox command that provides isolation for local development, making it safer to run with permission checks skipped on my own machine. I started doing more development locally again. Second, the primary workload on the EC2 instance shifted toward OpenClaw, which only needs about 4GB of memory when using cloud-hosted models. I dropped to a 2 vCPU, 8GB instance, which gives OpenClaw plenty of room and still leaves headroom for development work when I need it.
The Connection
DCV serves a self-signed certificate by default, which means a browser security warning on every connection. I set up Let's Encrypt with Route53 DNS validation, a dynamic DNS updater script, and Certbot. That worked consistently and gave me valid certificates with automated renewal.
But I was still managing IP addresses. My home IP would change and I'd need to update the security group. I could only connect from the specific machine whose IP was whitelisted. And the EC2's public IP changed on every stop/start cycle, so I had to look it up each time.
Tailscale solved all of this at once. It creates a WireGuard mesh network between your devices. Because the tunnel is established over outbound connections, the EC2 instance needs no ingress rules at all; security groups stay closed. I don't need to track my IP or the instance's IP. I just connect to a stable Tailscale hostname from any machine in my network.
Tailscale also auto-provisions Let's Encrypt certificates, so DCV gets valid TLS without Certbot. One gotcha worth noting is that DCV overwrites its certificates on every restart, so I wrote a small systemd service that re-provisions the Tailscale cert after each boot.
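That service can be a oneshot unit that re-writes the cert after DCV comes up; the cert paths and hostname below are assumptions about this particular install, not universal DCV defaults:

```ini
# /etc/systemd/system/dcv-tailscale-cert.service
[Unit]
Description=Re-provision Tailscale TLS cert for NICE DCV
After=dcvserver.service tailscaled.service

[Service]
Type=oneshot
# Paths and the .ts.net hostname are illustrative; point them at
# wherever your DCV install reads its certificate and key from.
ExecStart=/usr/bin/tailscale cert \
  --cert-file /etc/dcv/dcv.pem \
  --key-file /etc/dcv/dcv.key \
  devbox.example.ts.net
# DCV only reads the cert at startup, so restart it once after writing.
ExecStartPost=/usr/bin/systemctl restart dcvserver

[Install]
WantedBy=multi-user.target
```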
For personal use, Tailscale is free. That made the decision easy.
Scheduling the Work
I don't need OpenClaw running around the clock. It processes email, drafts content, and handles research tasks, so a few windows per day is enough.
EventBridge Scheduler handles the start/stop cycles with native timezone support across three weekday work windows in the morning, midday, and evening. An auto-stop Lambda serves as a fail-safe, so if a scheduled stop fails or I start the instance manually, it catches the oversight and shuts things down after a configurable timeout.
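One of the start schedules, sketched in Terraform; the cron window, timezone, and IAM role reference are illustrative. EventBridge Scheduler's universal target calls the EC2 StartInstances API directly, so no Lambda is needed for the happy path:

```hcl
resource "aws_scheduler_schedule" "morning_start" {
  name = "devbox-morning-start"

  flexible_time_window {
    mode = "OFF"
  }

  # Native timezone support: no UTC math, and DST is handled for you.
  schedule_expression          = "cron(0 7 ? * MON-FRI *)"
  schedule_expression_timezone = "America/New_York"

  target {
    arn      = "arn:aws:scheduler:::aws-sdk:ec2:startInstances"
    role_arn = aws_iam_role.scheduler.arn   # illustrative reference

    input = jsonencode({
      InstanceIds = [aws_instance.devbox.id]
    })
  }
}
```

The matching stop schedules swap in `ec2:stopInstances` at the end of each window.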
Getting the auto-stop timer right required a workaround. EC2's LaunchTime field only updates when an instance is first created, not when it's started after being stopped. So I added an EventBridge rule that fires on every state transition to running and tags the instance with the actual start time. The Lambda reads that tag instead of trusting LaunchTime.
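Once the tag exists, the Lambda's decision reduces to pure logic. A minimal sketch, assuming a hypothetical `ActualStartTime` tag written by the EventBridge rule (the boto3 calls to describe, read tags, and stop the instance are omitted):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical tag written on each transition to "running",
# holding an ISO-8601 timestamp.
START_TAG = "ActualStartTime"

def should_stop(tags: dict, timeout: timedelta, now: datetime) -> bool:
    """Return True if the instance has been running past its timeout.

    Reads the start time from the tag rather than the instance's
    LaunchTime field, which doesn't track stop/start cycles here.
    """
    raw = tags.get(START_TAG)
    if raw is None:
        # No tag yet: fail safe and leave the instance alone.
        return False
    started = datetime.fromisoformat(raw)
    return now - started > timeout

# Example: started 3 hours ago with a 2-hour timeout -> stop it.
now = datetime(2024, 5, 1, 16, 0, tzinfo=timezone.utc)
tags = {START_TAG: "2024-05-01T13:00:00+00:00"}
print(should_stop(tags, timedelta(hours=2), now))  # True
```

Keeping the decision in a pure function like this also makes the Lambda trivially unit-testable without mocking AWS.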
I also added an SQS queue for submitting tasks when the instance is off. Messages persist for up to 14 days. I can queue work from my laptop before bed, and it's waiting when the morning window starts. The module creates the queue and grants the instance permissions to consume it. You bring the consumer service.
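The queue and its consumer permissions are a small amount of Terraform; resource names and the instance-role reference are illustrative:

```hcl
resource "aws_sqs_queue" "tasks" {
  name = "devbox-tasks"

  # Maximum retention: messages queued while the box is off
  # wait up to 14 days (1,209,600 s) for the next work window.
  message_retention_seconds = 1209600
}

# The instance role may receive and delete messages; the consumer
# process itself is left to you.
resource "aws_iam_role_policy" "consume_tasks" {
  name = "devbox-consume-tasks"
  role = aws_iam_role.instance.id   # illustrative reference

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
      ]
      Resource = aws_sqs_queue.tasks.arn
    }]
  })
}
```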
Where It Landed
With scheduling and reasonable instance sizing, the whole setup runs about $12-20 per month. Storage is the biggest line item at $8/month for 100GB. The lifecycle services (EventBridge, SQS, and Lambda) all fall within the free tier.
I built this for OpenClaw and Claude Code, but it's really just a managed cloud desktop. The module handles Ubuntu setup, DCV remote access, Tailscale networking, scheduled start/stop, an auto-stop fail-safe, and the SQS task queue. Everything is optional and toggled with variables. The simplest deployment needs nothing more than your IP address.
If you want an isolated Linux environment that starts itself, does its work, and shuts itself down, it's a good starting point.