March 2026

What Docker Actually Runs
vs What You Actually Need

Every container stacks invisible layers of kernel machinery between your code and the hardware. Here's what they are, what they cost, and why WebAssembly doesn't need them.

The Docker Tax

When you type docker run, something deceptively simple happens: your application starts. What isn't visible is the machinery that makes it possible. Your code sits at the top of a stack eleven layers deep, each layer solving a real problem that Linux doesn't solve natively. Let's walk through every one of them.

Docker was built to answer a specific question: how do you ship software that works the same on every Linux machine? The answer was to package the entire userspace -- the OS libraries, the language runtime, the application code -- into a portable image, and then use kernel features to isolate each image from the host and from each other. That was a breakthrough in 2013. The problem is that each of those isolation layers has a cost, and the costs compound.

Here's what actually runs when you start a container -- the full stack is laid out layer by layer in the comparison diagram below.

To be clear: every one of these layers exists for a reason. Docker solved real, painful problems in software distribution and deployment. The language runtime exists because your code needs it. The container OS exists because your runtime expects POSIX. The namespaces exist because Linux processes can see each other by default. Seccomp exists because system calls are powerful and dangerous.

The question isn't whether these layers are well-engineered. They are. The question is whether you can skip them entirely by starting from a different set of assumptions.


The Stack Comparison

This is the same diagram from our architecture page, shown here for reference. The left column is what Docker/Kubernetes actually deploys. The right column is what WarpGrid deploys. The removed layers on the right -- marked "not needed" or "compiled out" -- don't exist in the Wasm execution model: they aren't optimized or hidden, they're structurally absent.

Docker / Kubernetes Stack

  APP   Your Application Code                               2-50 MB
  LANG  Language Runtime                                    50-800 MB
  OS    Container OS (Ubuntu/Alpine)                        5-200 MB
  OCI   Image Layers + OverlayFS                            copy-on-write
  RT    containerd + runc                                   ~80 MB
  NS    Linux Namespaces (pid, net, mnt, uts, ipc, user)    per container
  CG    cgroups v2 (cpu, memory, io limits)                 per container
  SEC   seccomp + AppArmor / SELinux                        300+ syscalls
  NET   veth pairs + bridge + iptables / CNI                per container
  K8S   kubelet + kube-proxy + CSI + CRI                    ~200 MB
  HOST  Linux Kernel                                        required

WarpGrid Stack

  APP   Your Application Code                               1-10 MB
  --    Language Runtime                                    compiled out
  --    Container OS                                        not needed
  --    Image Layers + OverlayFS                            not needed
  --    containerd + runc                                   not needed
  --    Linux Namespaces                                    not needed
  MEM   Linear Memory Sandbox                               1-10 MB
  CAP   Capability-based Permissions                        deny-all default
  --    veth + bridge + iptables                            not needed
  WG    warpd (runtime + scheduler + API)                   ~25 MB
  HOST  Linux Kernel                                        required
6 layers -- removed, not optimized
No container OS. No OverlayFS. No containerd. No namespaces. No veth pairs. No seccomp. They don't exist in the Wasm execution model.

0 syscalls -- available by default
Docker's default seccomp profile still allows 300+ syscalls; Wasm starts at zero. Your module can only call functions the host explicitly provides.

~1ms -- instance startup
No image pull. No layer unpacking. No filesystem mount. No namespace creation. No network bridge. AOT-compiled Wasm instantiates in under a millisecond.

Why WebAssembly Doesn't Need Containers

Docker's isolation model starts with a Linux process that has full access to the kernel, then subtracts capabilities. Namespaces hide the host's process table and network. Seccomp blocks dangerous syscalls. AppArmor restricts filesystem paths. Each layer is a filter that tries to prevent the process from doing something it could otherwise do. This is subtractive isolation -- you start with everything and take things away.
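The subtractive model can be caricatured in a few lines of plain Python (a conceptual sketch, not how seccomp filters are actually written; the syscall numbers are the x86-64 ones for ptrace, mount, and reboot): the process begins with the kernel's full syscall surface, and each policy layer can only carve entries away from it.

```python
# Conceptual sketch of subtractive isolation (not a real seccomp filter):
# the process starts with the kernel's entire syscall table, and security
# policy can only remove entries from what is already reachable.
ALL_SYSCALLS = {f"sys_{i}" for i in range(350)}   # ~350 syscalls on x86-64 Linux
blocked = {"sys_101", "sys_165", "sys_169"}       # ptrace, mount, reboot

allowed = ALL_SYSCALLS - blocked                  # everything not filtered remains

print(len(allowed))                               # 347 calls still reachable
```

However long the blocklist grows, the starting point is "everything", so the residual attack surface stays large.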

WebAssembly works in the opposite direction. A Wasm module executes inside a linear memory sandbox -- a contiguous byte array that the module can read and write, but that is the entire extent of its world. There are no file descriptors. There is no network socket API. There are no system calls. There is no process table, no environment variables, no filesystem, and no way to execute arbitrary code. The module literally cannot express these operations in its instruction set.
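The "world is one byte array" idea can be modeled in a short sketch (plain Python for illustration; the LinearMemory class is invented here, not WarpGrid code): the guest can load and store inside a fixed region, every access is bounds-checked, and nothing outside the array is even nameable.

```python
class LinearMemory:
    """Toy model of Wasm linear memory (illustrative, not WarpGrid code).

    The guest's entire world is one byte array: it can load and store
    within the region, and every access is bounds-checked. There is no
    way to express a file, a socket, or a host pointer from in here.
    """

    def __init__(self, size: int):
        self.data = bytearray(size)

    def load(self, addr: int, n: int) -> bytes:
        if addr < 0 or addr + n > len(self.data):
            raise MemoryError("trap: out-of-bounds load")   # like a Wasm trap
        return bytes(self.data[addr:addr + n])

    def store(self, addr: int, payload: bytes) -> None:
        if addr < 0 or addr + len(payload) > len(self.data):
            raise MemoryError("trap: out-of-bounds store")
        self.data[addr:addr + len(payload)] = payload


mem = LinearMemory(64 * 1024)   # one 64 KiB Wasm page
mem.store(0, b"hello")
print(mem.load(0, 5))           # b'hello'
```

A real engine enforces the same bounds in compiled machine code; the point is that an out-of-range access traps rather than reaching anything of the host's.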

If a Wasm module needs to talk to the network or read a file, the host must explicitly provide that capability as a typed function import. This is the capability-based security model: the default is deny-all, and every permission is an opt-in decision by the host, declared in the deployment manifest. You don't filter out what's dangerous -- you grant only what's needed.
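A deny-all import table fits in a few lines (again a toy model in Python -- the Host class and its methods are invented for illustration, not the warpd manifest API): the host starts with an empty set of callable functions, and the guest can invoke only what was explicitly granted.

```python
class Host:
    """Toy capability table (illustrative, not the warpd manifest API).

    Nothing is callable until the host grants it, so the default
    posture is deny-all rather than filter-out-the-dangerous-parts.
    """

    def __init__(self):
        self._imports = {}          # empty: the guest starts with zero capabilities

    def grant(self, name, fn):
        self._imports[name] = fn    # explicit opt-in, one function at a time

    def call(self, name, *args):
        if name not in self._imports:
            raise PermissionError(f"capability not granted: {name}")
        return self._imports[name](*args)


host = Host()
host.grant("log", lambda msg: f"[guest] {msg}")

print(host.call("log", "hello"))             # granted, so it works
try:
    host.call("open_socket", "10.0.0.1:80")  # never granted
except PermissionError as err:
    print(err)                               # capability not granted: open_socket
```

Note the asymmetry with the seccomp approach: there is no master list to subtract from, so forgetting a grant fails closed instead of open.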

This distinction matters because it changes where the security boundary lives. In Docker, the boundary is the kernel -- if you escape the namespace, you own the host. In Wasm, the boundary is the bytecode format itself. There is nothing to escape from because there is nothing to escape to. The module has no concept of "the host" beyond the functions it was given. Isolation isn't enforced after the fact by a security policy. It's a structural property of the execution model.


The Numbers

Here's what this looks like in practice. These are real measurements from WarpGrid running on Hetzner bare metal, compared to the same workloads in Docker containers:

Metric                 Docker                                      WarpGrid
Cold start             200-500ms                                   0.3ms (Rust)
Memory per instance    50+ MB                                      2 MB
Instances per GB       ~20                                         ~500
Deployment artifact    50-800 MB                                   42 KB - 10 MB
Node overhead          300+ MB (kubelet, containerd, kube-proxy)   25 MB (warpd)

The density difference -- 500 instances per GB vs 20 -- isn't a benchmark trick. It's the natural consequence of removing six layers of per-instance overhead. When you don't need a container OS, a layered filesystem, namespace bookkeeping, veth pairs, and cgroup accounting for each workload, the memory that those layers consumed becomes available for actual work.
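The density row follows directly from the memory row; a quick back-of-the-envelope check using the table's own per-instance footprints (treating 1 GB as 1024 MB):

```python
# Back-of-the-envelope check on the density figures in the table above,
# using the per-instance footprints it lists (50+ MB vs 2 MB).
GB_IN_MB = 1024                        # 1 GB of RAM, in MB

docker_density = GB_IN_MB // 50        # ~20 container instances per GB
wasm_density = GB_IN_MB // 2           # ~512 Wasm instances per GB

print(docker_density, wasm_density)    # 20 512
print(wasm_density // docker_density)  # ~25x denser at these footprints
```

At these footprints the ratio is roughly 25x; heavier containers or smaller modules push it toward the upper end of the 10-100x range claimed above.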

For full methodology and reproducible benchmarks, see /benchmarks.


When Containers Still Win

We would be doing you a disservice if we pretended this is a universal solution. There are workloads and situations where Docker and Kubernetes are genuinely the better choice.

The honest assessment is: if you're writing Rust, Go, TypeScript, or Python services and deploying to Linux bare metal or VPS instances, WarpGrid can give you 10-100x better density and sub-millisecond cold starts. If you're running Java monoliths on managed Kubernetes, you're probably fine where you are.


Try It Yourself

The fastest way to see the difference is to run it. This takes about 60 seconds on any Linux or macOS machine:

# Install WarpGrid
$ curl -fsSL https://warpgrid.dev/install.sh | sh

# Start a local cluster
$ warpd standalone --port 8443
Cluster ready on :8443

# Deploy the hello-world example
$ warp deploy examples/hello.wasm --min 1
Deployed: hello (1 instance, 0.8ms cold start)

# Test it
$ curl http://localhost:8443/r/hello/health
{"status":"ok"}

One binary. No containers. No Kubernetes.

WarpGrid is open source under Apache 2.0 and free during beta.

Try WarpGrid Free Star on GitHub