Open Beta · Free · Apache 2.0

Run Your Entire Backend
on a $20 Server.

You have 40 microservices and a few bare-metal boxes from Hetzner. WarpGrid runs them all—5,000 per node—with one binary and no Kubernetes. Powered by WebAssembly for millisecond cold starts and sandboxing by default.

10s → 1ms
Cold start time
300MB+ → 25MB
Per-node footprint
50 containers → 5,000
Instances per node
8 daemons → 1
Binary to deploy

What Docker actually runs
vs. what you actually need

Every container you deploy carries an invisible tax — seven layers of kernel machinery between your code and the hardware. WarpGrid removes six of them.

Docker / Kubernetes Stack
APP Your Application Code 2-50 MB
LANG Language Runtime 50-800 MB
OS Container OS (Ubuntu/Alpine) 5-200 MB
OCI Image Layers + OverlayFS copy-on-write
RT containerd + runc ~80 MB
NS Linux Namespaces (pid, net, mnt, uts, ipc, user) per container
CG cgroups v2 (cpu, memory, io limits) per container
SEC seccomp + AppArmor / SELinux 300+ syscalls allowed
NET veth pairs + bridge + iptables / CNI per container
K8S kubelet + kube-proxy + CSI + CRI ~200 MB
HOST Linux Kernel required
WarpGrid Stack
APP Your Application Code 1-10 MB
Language Runtime compiled out
Container OS not needed
Image Layers + OverlayFS not needed
containerd + runc not needed
Linux Namespaces not needed
MEM Linear Memory Sandbox 1-10 MB
CAP Capability-based Permissions deny-all default
veth + bridge + iptables not needed
WG warpd (runtime + scheduler + API) ~25 MB
HOST Linux Kernel required

Here's what Docker actually does when you run a container. It asks the Linux kernel to create six isolated namespaces (pid, net, mnt, uts, ipc, user) — each container gets its own view of process IDs, network interfaces, mount points, and users. Then it sets up cgroups v2 to enforce CPU and memory limits. It creates a virtual ethernet pair (veth) and bridges it to the host network through iptables rules. It unpacks a layered filesystem image through OverlayFS. It applies a seccomp profile that filters 300+ system calls. And only then does your 2MB of application code run on top of a 200MB OS image, inside an 80MB container runtime, managed by a 200MB orchestrator stack.

WebAssembly doesn't need any of that. A Wasm module executes inside a linear memory sandbox — a flat byte array that the module can read and write, but cannot escape. There is no filesystem access, no network access, no system calls, and no process table unless the host explicitly provides them through typed capability imports. Isolation isn't enforced by the kernel after the fact — it's a structural property of the bytecode format itself. There is nothing to escape from because there is nothing to escape to.

This is where the density comes from. A Docker container pays a minimum overhead of ~50-100MB for the guest OS, namespace bookkeeping, veth pairs, cgroup tracking, and OverlayFS metadata. A WarpGrid instance pays 1-10MB for the Wasm linear memory — because that overhead simply doesn't exist. The six layers between your code and the hardware are gone, not optimized. That's how one node goes from 50 containers to 5,000 instances. It's not a benchmark trick. It's fewer things.
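The arithmetic is easy to check by hand. A back-of-envelope sketch using this page's own averages (~256 MB per Docker container, ~4 MB per Wasm instance, ~25 MB for warpd); the 32 GB node is an assumption, and real limits also depend on CPU and workload:

```shell
# Density by memory alone, using the averages quoted on this page.
# 32 GB node is an assumed example; CPU and workload also constrain density.
node_mb=$((32 * 1024))
echo "Docker:   $((node_mb / 256)) containers"       # prints 128
echo "WarpGrid: $(( (node_mb - 25) / 4 )) instances"  # prints 8185
```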

6 layers
Removed, not optimized
No container OS. No OverlayFS. No containerd. No namespaces. No veth pairs. No seccomp. They don't exist in the Wasm execution model.
0 syscalls
Available by default
Docker's default seccomp profile still leaves 300+ syscalls allowed. Wasm starts at zero — your module can only call functions the host explicitly provides. The attack surface is what you declare, not what you fail to block.
~1ms
Instance startup
No image pull. No layer unpacking. No filesystem mount. No namespace creation. No network bridge. AOT-compiled Wasm instantiates from a memory-mapped module in under a millisecond.

How much could you save?

Real Hetzner Cloud pricing. Same workload, different approach.

[Interactive calculator: drag the slider from 1 to 100 services to compare a Docker / K8s bill with WarpGrid and see your savings.]

Estimates based on Hetzner Cloud pricing. Average 256 MB per Docker container, 4 MB per Wasm instance.
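The memory side of the estimate can be sanity-checked by hand. At the calculator's upper bound of 100 services, using the averages stated in the note above (plus ~25 MB once per node for warpd):

```shell
# Fleet memory for 100 services, at the page's stated averages:
# 256 MB per Docker container vs 4 MB per Wasm instance (+25 MB for warpd).
services=100
echo "Docker / K8s: $((services * 256)) MB"     # prints 25600
echo "WarpGrid:     $((services * 4 + 25)) MB"  # prints 425
```

Roughly 25 GB of RAM to provision versus well under half a gigabyte, before any CPU or pricing differences.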

See it work in 60 seconds

No Docker. No containers. No signup. Just your terminal.

# Install WarpGrid (10 seconds)
$ curl -fsSL https://warpgrid.dev/install.sh | sh

# Start a local cluster (5 seconds)
$ warpd standalone --port 8443
Cluster ready on :8443
Dashboard at http://localhost:8443/dashboard

# Deploy the hello-world example
$ warp deploy examples/hello.wasm --min 1
Deployed: hello (1 instance, 0.8ms cold start)

# Test it
$ curl http://localhost:8443/r/hello/health
{"status":"ok"}
Download CLI · Or try it live in your browser

Try WarpGrid Live

We'll deploy a hello-world service to our cloud in ~3 seconds. No signup required.

From existing code to running cluster
in three commands

01

Analyze your project

Paste a public GitHub repo and we'll show your WarpGrid compatibility score in seconds. Or run it locally: warp convert analyze ./my-api scans your dependencies, flags anything incompatible, and generates a deployment manifest.

02

Compile to WebAssembly

WarpGrid compiles your project to a Wasm component. Database drivers, DNS, and filesystem calls are transparently shimmed — no code changes needed.

$ warp pack
Built: my-api.wasm (1.8MB)
Shims: postgres, dns, fs
03

Deploy to your cluster

Push to any node running warpd. Autoscaling, health checks, rolling deploys, and canary routing work out of the box.

$ warp deploy --canary 10%
Deployed: v2 (canary)
Instances: 12 healthy
# Start a single-node cluster
$ warpd standalone --port 8443

# Or bootstrap a multi-node cluster
$ warpd control-plane --peers node2:9443,node3:9443
$ warpd agent --join control:9443 # on each worker

# Deploy your first workload
$ warp deploy my-api.wasm --min 2 --max 100
Deployment created: my-api (2 instances running)

Everything you need.
Nothing you don't.

Every feature runs inside the single warpd binary. No sidecars, no operators, no CRDs.

Web Dashboard

Real-time cluster overview, deployment management, node health, and rollout tracking. Server-rendered, no JS framework required.

Metrics-Driven Autoscaling

Scale on RPS, P99 latency, error rate, or memory. Configurable cooldowns prevent flapping. Scale to zero when traffic stops.
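As a sketch, such a policy might be declared in warp.toml along these lines. Every key name below is an illustrative assumption, not documented WarpGrid syntax:

```toml
# Hypothetical autoscaling policy; key names are assumptions, not documented syntax.
[autoscale]
min = 0                 # scale to zero when traffic stops
max = 50
metric = "p99_latency_ms"
target = 150
cooldown_up = "30s"     # cooldowns prevent flapping
cooldown_down = "5m"
```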

Sandbox by Default

Wasm's capability model means zero access unless explicitly granted. No seccomp profiles, no AppArmor, no SELinux configuration.

Transparent Database Proxy

Wire-protocol passthrough for Postgres, MySQL, and Redis. Your existing database drivers work without modification. Connection pooling included.

Deployment Strategies

Rolling updates, canary routing with auto-rollback, and blue-green deployments. Health gates between every batch. Built-in, not a CRD.

Embedded Raft Consensus

Multi-node clusters use embedded Raft — no etcd to install, configure, or monitor. Survives single-node failure. mTLS between all nodes.

How WarpGrid stacks up

| | Docker Compose | Kamal | Kubernetes | Nomad | Fermyon Spin | WarpGrid |
|---|---|---|---|---|---|---|
| Compute primitive | Containers | Containers | Containers | Containers + drivers | Spin components | Wasm components |
| Node overhead | ~100MB (Docker) | ~100MB (Docker) | 300MB+ (8 daemons) | ~100MB | N/A (SaaS) | 25MB (1 binary) |
| Cold start | 1-10 seconds | 1-10 seconds | 1-10 seconds | 1-10 seconds | ~1ms | ~1ms |
| Security model | Kernel namespaces | Kernel namespaces | Namespaces + bolt-ons | Kernel namespaces | Wasm sandbox | Wasm capability sandbox |
| Bare metal native | Yes (Docker req.) | Yes (designed for it) | With kubeadm (complex) | Yes | No (cloud only) | Yes (single binary) |
| Multi-node clustering | No (single host) | No (deployer pushes) | etcd (external) | Raft (built-in) | N/A | Raft (embedded) |
| DB driver support | Native (Linux) | Native (Linux) | Native (Linux) | Native (Linux) | SDK-specific | Wire-protocol proxy |
| Languages | Any (container) | Any (container) | Any (container) | Any (container) | Rust, JS, Python, Go | Rust, Go, TypeScript |
Migration effort ~2 hours ~4 hours ~2 hours ~1 hour

Migration estimates for a typical 3-service backend. Your mileage may vary.

Your language. Your drivers.
No rewrite.

Compile your existing backend code to WebAssembly. Database drivers, DNS, and filesystem calls are transparently shimmed at the protocol level.

Rust · sqlx, tokio-postgres, redis · Ready
Go · pgx, go-redis, net/http · Beta
TypeScript · pg, ioredis, node:fs · Beta
Bun · bun:sql, native APIs · Planned

Runs on your servers. Any Linux box.

WarpGrid is a single static binary. If it runs Linux, it runs WarpGrid. No Docker required.

Hetzner · OVHcloud · DigitalOcean · Vultr · Akamai (Linode) · Scaleway · GCE

Ubuntu · Debian · Fedora · Alpine · x86_64 ✓ · ARM64 ✓

WarpGrid is not for everyone

We'd rather save you time than overpromise. Here's when WarpGrid isn't the right fit.

Won't work

× You need JVM languages (Java, Kotlin, Scala) — JVM doesn't compile to Wasm
× You must run existing Docker images as-is — WarpGrid runs Wasm, not containers
× You depend on native C extensions that can't target Wasm (e.g., ImageMagick, CUDA)
× You need Windows Server as your deployment target

Not yet (in beta)

! Python or C# support — planned but not yet available
! GPU workloads — not supported in the Wasm runtime
! Instances needing >256MB memory — limit being raised
! Multi-cloud auto-failover — single-region clusters today
You might prefer: Kamal for deploying Rails/Django apps · Coolify for a self-hosted Heroku · Nomad for running arbitrary Docker images

Common questions

Do I have to rewrite my application?
No. WarpGrid compiles your existing Rust, Go, or TypeScript code to WebAssembly. Database drivers work through a transparent wire-protocol proxy. DNS and filesystem calls are shimmed at the system level. Run warp convert analyze to see your compatibility score before changing a single line.
Can I run this on my existing bare metal servers?
That's exactly what WarpGrid is designed for. Copy the single warpd binary to each node, start it, and you have a cluster. No Docker, no containerd, no kubelet required. Linux x86_64 and ARM64 are supported.
How does the security model work?
WebAssembly's capability-based sandbox denies all access by default. Your workload cannot touch the filesystem, network, or any system resource unless you explicitly grant it in warp.toml. Unlike a container, a module has no shared-kernel syscall surface to attack: it can only call the functions its host explicitly provides.
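As a hypothetical sketch of what such a grant could look like in warp.toml (the section and key names below are illustrative assumptions, not documented syntax):

```toml
# Illustrative only: key names are assumptions, not documented WarpGrid syntax.
[capabilities]
# Nothing is reachable unless listed here; the default is deny-all.
net.outbound = ["postgres://db.internal:5432"]  # reach the database proxy only
fs.read = ["/etc/myapp/config.toml"]            # one read-only config file
env = ["DATABASE_URL"]                          # a single environment variable
```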
Can I use this alongside Docker Compose?
Yes. WarpGrid runs independently — it doesn't touch Docker or your existing containers. A common migration path: keep your current Docker Compose setup, deploy one new service with WarpGrid, and compare the two side by side.
How much does it cost?
Self-hosted WarpGrid is free forever with no limits — it's Apache 2.0 open source. We're also building WarpGrid Cloud, a hosted platform that's completely free during beta with all features unlocked. We'll figure out fair pricing with our early users before charging anything.
Is WarpGrid production-ready?
WarpGrid is in open beta with 480+ tests and 300+ commits in 2025. The core orchestrator (scheduling, autoscaling, health checks, deployment strategies, multi-node Raft clustering) is complete. The SDK compatibility layer is in beta. We recommend starting with a side project or internal service — when you're confident, migrate production workloads one service at a time.
Why should I trust a beta project for my infrastructure?
Fair question. WarpGrid is actively maintained by the team at dot industries, with 480+ tests, 300+ commits in 3 months, and zero external runtime dependencies. The binary is statically compiled — it has no dependency chain that can break. Start small. Run it on one box. See for yourself.
What's the license?
Apache 2.0. Fully open source. Self-host with zero limits, no feature gates, no phone-home telemetry. During beta, the hosted cloud platform is also completely free.
480+ tests
300+ commits in 2025
0 runtime dependencies
Apache 2.0
Recent Activity
300+ commits in 2025 · Actively maintained · View on GitHub →

Deploy your first workload
in under a minute.

One binary to install. One file to configure. One command to deploy.
No YAML. No containers. No regrets.