I work on AWS infrastructure (ex-Percona, Box, Dropbox, Pinterest). When OpenClaw blew up, I wanted to run it properly on AWS and was surprised by the default deployment story. The Lightsail blueprint shipped with 31 unpatched CVEs. The standard install guide uses three separate curl-pipe-sh patterns as root. Bitsight found 30,000+ exposed instances in two weeks. OpenClaw's own maintainer said "if you can't understand how to run a command line, this is far too dangerous."

So I built a Terraform module that replaces the defaults with what I'd consider production-grade:

* Cognito + ALB instead of a shared gateway token (per-user identity, MFA)
* GPG-verified APT packages instead of curl|bash
* systemd with ProtectHome=tmpfs and BindPaths sandboxing
* Secrets Manager + KMS instead of plaintext API keys
* EFS for persistence across instance replacement
* CloudWatch logging with 365-day retention

Bedrock is the default LLM provider, so it works without any API keys. One `terraform apply`. Full security writeup: https://infrahouse.com/blog/2026-03-09-deploying-openclaw-on...
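For anyone curious what the systemd sandboxing part looks like in practice, here's a rough sketch. The unit name, user, and paths are illustrative, not the module's actual output; the directives themselves are standard systemd:

```ini
# /etc/systemd/system/openclaw.service (names are illustrative)
[Service]
User=openclaw
# Replace /home with an empty tmpfs so the agent can't read real home dirs
ProtectHome=tmpfs
# Then bind back only the one directory it actually needs
BindPaths=/srv/openclaw/data:/home/openclaw/data
# Usual hardening companions
NoNewPrivileges=yes
ProtectSystem=strict
PrivateTmp=yes
```

The ProtectHome=tmpfs / BindPaths combination is the key trick: the agent sees a writable home, but it's an empty tmpfs with exactly one real directory mounted into it.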

I'm sure I've missed things. What would you add or do differently for running an autonomous agent with shell access on a shared server?

Can I deploy it using Skill?

Thanks for the interest! The module is standard Terraform - you'd consume it like any other module from the registry or GitHub source. So anything that can run `terraform apply` should work.
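Roughly, consuming it would look like this (the source path and input names here are hypothetical, check the repo for the real ones):

```hcl
# main.tf -- source path and variable names are hypothetical
module "openclaw" {
  source = "github.com/infrahouse/terraform-aws-openclaw" # illustrative

  environment = "prod"
}
```

Then `terraform init && terraform apply` as usual.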

That said, I'm not 100% sure which "Skill" you mean - is it the Kubiya skill runtime (skill-ai.dev)? If so, it already has Terraform integration, so wrapping this module should be straightforward.

Happy to help if you run into anything.