Apache 2.0 · Open Source

One Binary.
Thousands of Workloads.
Zero Kubernetes.

WarpGrid is a Wasm-native cluster orchestrator for bare metal. Deploy WebAssembly components that cold-start in milliseconds, sandbox themselves by default, and run 5,000 per node.

10s → 1ms
Cold start time
300MB+ → 25MB
Per-node footprint
50 containers → 5,000
Instances per node
8 daemons → 1
Binary to deploy

Containers solved 2013's problem.
Your workloads have moved on.

Today with Kubernetes

× kubelet + containerd + runc + CNI + CSI + etcd running before your code does
× 500MB container images for a 2MB application binary
× Seconds-long cold starts that kill scale-to-zero
× YAML sprawl across Deployments, Services, Ingresses, ConfigMaps, and Secrets
× seccomp, AppArmor, and SELinux bolted on for security
× 50 containers max per node before memory pressure hits

With WarpGrid

One 25MB binary. That's the entire orchestrator, runtime, and API.
1-10MB Wasm components compiled from your existing code
1ms cold starts — scale to zero is finally real
One warp.toml per service. Scaling, health, shims — all in one file.
Capability-based sandbox is the default. No bolt-ons.
5,000+ instances per node. Each uses 1-10MB of memory.
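As a sketch of what that single file can look like (the field names here are illustrative, not a published schema):

```toml
# Hypothetical warp.toml for one service — every key below is illustrative.
[service]
name = "my-api"
component = "my-api.wasm"

[scaling]
min = 0            # scale to zero when idle
max = 100
target_p99_ms = 50

[health]
path = "/healthz"
interval = "5s"

[shims]
postgres = "db.internal:5432"   # wire-protocol proxy, no driver changes
dns = true
fs = "read-only:/etc/my-api"
```

One file per service: scheduling, scaling, health, and shims together, in place of a Deployment, Service, ConfigMap, and seccomp profile.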

What Docker actually runs
vs. what you actually need

Every container you deploy carries an invisible tax: six layers of kernel machinery sitting between your code and the hardware. WarpGrid removes all six.

Docker / Kubernetes Stack
APP Your Application Code 2-50 MB
LANG Language Runtime 50-800 MB
OS Container OS (Ubuntu/Alpine) 5-200 MB
OCI Image Layers + OverlayFS copy-on-write
RT containerd + runc ~80 MB
NS Linux Namespaces (pid, net, mnt, uts, ipc, user) per container
CG cgroups v2 (cpu, memory, io limits) per container
SEC seccomp + AppArmor / SELinux 300+ syscalls
NET veth pairs + bridge + iptables / CNI per container
K8S kubelet + kube-proxy + CSI + CRI ~200 MB
HOST Linux Kernel required
WarpGrid Stack
APP Your Application Code 1-10 MB
Language Runtime compiled out
Container OS not needed
Image Layers + OverlayFS not needed
containerd + runc not needed
Linux Namespaces not needed
MEM Linear Memory Sandbox 1-10 MB
CAP Capability-based Permissions deny-all default
veth + bridge + iptables not needed
WG warpd (runtime + scheduler + API) ~25 MB
HOST Linux Kernel required

Here's what Docker actually does when you run a container. It asks the Linux kernel to create six isolated namespaces (pid, net, mnt, uts, ipc, user) — each container gets its own view of process IDs, network interfaces, mount points, and users. Then it sets up cgroups v2 to enforce CPU and memory limits. It creates a virtual ethernet pair (veth) and bridges it to the host network through iptables rules. It unpacks a layered filesystem image through OverlayFS. It applies a seccomp profile that filters 300+ system calls. And only then does your 2MB of application code run on top of a 200MB OS image, inside an 80MB container runtime, managed by a 200MB orchestrator stack.

WebAssembly doesn't need any of that. A Wasm module executes inside a linear memory sandbox — a flat byte array that the module can read and write, but cannot escape. There is no filesystem access, no network access, no system calls, and no process table unless the host explicitly provides them through typed capability imports. Isolation isn't enforced by the kernel after the fact — it's a structural property of the bytecode format itself. There is nothing to escape from because there is nothing to escape to.

This is where the density comes from. A Docker container pays a minimum overhead of ~50-100MB for the guest OS, namespace bookkeeping, veth pairs, cgroup tracking, and OverlayFS metadata. A WarpGrid instance pays 1-10MB for the Wasm linear memory — because that overhead simply doesn't exist. The six layers between your code and the hardware are gone, not optimized. That's how one node goes from 50 containers to 5,000 instances. It's not a benchmark trick. It's fewer things.

6 layers
Removed, not optimized
No container OS. No OverlayFS. No containerd. No namespaces. No veth pairs. No seccomp. They don't exist in the Wasm execution model.
0 syscalls
Available by default
Docker's seccomp filters 300+ syscalls after allowing access. Wasm starts at zero — your module can only call functions the host explicitly provides. The attack surface is what you declare, not what you fail to block.
~1ms
Instance startup
No image pull. No layer unpacking. No filesystem mount. No namespace creation. No network bridge. AOT-compiled Wasm instantiates from a memory-mapped module in under a millisecond.

From existing code to running cluster
in three commands

01

Analyze your project

Point WarpGrid at your Rust, Go, or TypeScript project. It scans your dependencies, identifies compatibility, and generates a deployment manifest.

$ warp convert analyze ./my-api
Compatibility: 94%
Generated: warp.toml
02

Compile to WebAssembly

WarpGrid compiles your project to a Wasm component. Database drivers, DNS, and filesystem calls are transparently shimmed — no code changes needed.

$ warp pack
Built: my-api.wasm (1.8MB)
Shims: postgres, dns, fs
03

Deploy to your cluster

Push to any node running warpd. Autoscaling, health checks, rolling deploys, and canary routing work out of the box.

$ warp deploy --canary 10%
Deployed: v2 (canary)
Instances: 12 healthy
# Start a single-node cluster
$ warpd standalone --port 8443

# Or bootstrap a multi-node cluster
$ warpd control-plane --peers node2:9443,node3:9443
$ warpd agent --join control:9443 # on each worker

# Deploy your first workload
$ warp deploy my-api.wasm --min 2 --max 100
Deployment created: my-api (2 instances running)

Everything you need.
Nothing you don't.

Every feature runs inside the single warpd binary. No sidecars, no operators, no CRDs.

Web Dashboard

Real-time cluster overview, deployment management, node health, and rollout tracking. Server-rendered, no JS framework required.

Metrics-Driven Autoscaling

Scale on RPS, P99 latency, error rate, or memory. Configurable cooldowns prevent flapping. Scale to zero when traffic stops.
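A minimal sketch of what such a policy could look like in warp.toml (key names are illustrative, not a documented schema):

```toml
# Hypothetical autoscaling block — keys are illustrative.
[scaling]
min = 0                       # scale to zero when traffic stops
max = 200
metric = "p99_latency_ms"     # or "rps", "error_rate", "memory"
target = 50
scale_up_cooldown = "30s"
scale_down_cooldown = "2m"    # longer cooldown to prevent flapping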

Sandbox by Default

Wasm's capability model means zero access unless explicitly granted. No seccomp profiles, no AppArmor, no SELinux configuration.

Transparent Database Proxy

Wire-protocol passthrough for Postgres, MySQL, and Redis. Your existing database drivers work without modification. Connection pooling included.
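Configuring the proxy might look like this (a sketch with assumed field names, not the actual schema):

```toml
# Hypothetical shim configuration — names are illustrative.
[shims.postgres]
upstream = "pg.internal:5432"   # real database the proxy forwards to
pool_min = 2
pool_max = 20
```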

Deployment Strategies

Rolling updates, canary routing with auto-rollback, and blue-green deployments. Health gates between every batch. Built-in, not a CRD.
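A canary rollout could be declared roughly like this (a hypothetical sketch; the keys below are assumptions, not a published schema):

```toml
# Hypothetical canary strategy — field names are illustrative.
[deploy]
strategy = "canary"
steps = ["10%", "50%", "100%"]
health_gate = "p99_latency_ms < 100"   # gate checked between every batch
auto_rollback = true
```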

Embedded Raft Consensus

Multi-node clusters use embedded Raft — no etcd to install, configure, or monitor. Survives single-node failure. mTLS between all nodes.

How WarpGrid stacks up

|                    | Kubernetes                    | Nomad                | Fermyon Spin         | WarpGrid                |
|--------------------|-------------------------------|----------------------|----------------------|-------------------------|
| Compute primitive  | Containers                    | Containers + drivers | Spin components      | Wasm components         |
| Node overhead      | 300MB+ (8 daemons)            | ~100MB               | N/A (SaaS)           | 25MB (1 binary)         |
| Cold start         | 1-10 seconds                  | 1-10 seconds         | ~1ms                 | ~1ms                    |
| Security model     | Kernel namespaces + bolt-ons  | Kernel namespaces    | Wasm sandbox         | Wasm capability sandbox |
| Bare metal native  | With kubeadm (complex)        | Yes                  | No (cloud only)      | Yes (single binary)     |
| Consensus          | etcd (external)               | Raft (built-in)      | N/A                  | Raft (embedded)         |
| DB driver support  | Native (Linux)                | Native (Linux)       | SDK-specific         | Wire-protocol proxy     |
| Languages          | Any (container)               | Any (container)      | Rust, JS, Python, Go | Rust, Go, TypeScript    |

Your language. Your drivers.
No rewrite.

Compile your existing backend code to WebAssembly. Database drivers, DNS, and filesystem calls are transparently shimmed at the protocol level.

Rust · sqlx, tokio-postgres, redis · Ready
Go · pgx, go-redis, net/http · Beta
TypeScript · pg, ioredis, node:fs · Beta
Bun · bun:sql, native APIs · Planned

Common questions

Do I have to rewrite my application?
No. WarpGrid compiles your existing Rust, Go, or TypeScript code to WebAssembly. Database drivers work through a transparent wire-protocol proxy. DNS and filesystem calls are shimmed at the system level. Run warp convert analyze to see your compatibility score before changing a single line.
Can I run this on my existing bare metal servers?
That's exactly what WarpGrid is designed for. Copy the single warpd binary to each node, start it, and you have a cluster. No Docker, no containerd, no kubelet required. Linux x86_64 and ARM64 are supported.
How does the security model work?
WebAssembly's capability-based sandbox denies all access by default. Your workload cannot touch the filesystem, network, or any system resource unless you explicitly grant it in warp.toml. This is stronger than container isolation — there's no kernel to escape from.
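A grant block could look something like this (a sketch only; the key names below are assumptions, not the documented schema):

```toml
# Hypothetical capability grants — anything not listed stays denied.
[capabilities]
net.outbound = ["api.stripe.com:443"]   # the only host:port the workload may reach
fs.read = ["/etc/my-api"]               # read-only, single directory
env = ["LOG_LEVEL"]                     # one environment variable, nothing else
```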
What about existing Kubernetes workloads?
WarpGrid is a clean break, not a drop-in replacement. The warp convert analyze command assesses your project's compatibility and generates a migration path. We're building Helm chart and Dockerfile transpilers for automated migration.
Is WarpGrid production-ready?
WarpGrid is in active development with 480+ tests across the workspace. The core orchestrator (scheduling, autoscaling, health checks, deployment strategies, multi-node Raft clustering) is complete. The SDK compatibility layer — which makes real-world backends work transparently — is in beta. We recommend it for non-critical workloads and internal services today.
What's the license?
Apache 2.0. Fully open source. No enterprise tier, no feature gates, no phone-home telemetry.

Deploy your first workload
in under a minute.

One binary to install. One file to configure. One command to deploy.
No YAML. No containers. No regrets.