Container Orchestration Beyond Kubernetes
Explore container orchestration options — Docker Swarm, Nomad, ECS, and when Kubernetes is overkill. Practical guidance for choosing the right orchestrator.
Strategic Systems Architect & Enterprise Software Developer
Kubernetes has become synonymous with container orchestration, and that conflation causes real problems. Teams adopt Kubernetes for a three-service application that could run on a single server with Docker Compose. They spend weeks learning CRDs, Helm charts, and ingress controllers for a deployment that needs zero auto-scaling and handles 100 requests per minute. Kubernetes is a powerful tool, but it is not the only tool, and for many workloads it is dramatically more complexity than the problem requires.
Understanding the full landscape of container orchestration helps you choose the right level of abstraction for your actual needs.
Docker Compose: The Underrated Default
For applications with fewer than ten services that run on a single host (or a small number of hosts), Docker Compose is often sufficient. It defines services, networks, and volumes in a single YAML file and orchestrates them with docker compose up.
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://postgres:pass@db:5432/app
    depends_on:
      db:
        condition: service_healthy
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  pgdata:
Docker Compose handles health checks, dependency ordering, restart policies, and basic replication. With the deploy key and docker compose up --scale api=3, you get multiple instances behind a built-in DNS-based load balancer. This is not production-grade auto-scaling, but it covers the requirements of many applications.
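The scaling behavior described above can be exercised directly from the CLI. A quick sketch, assuming the Compose file shown earlier:

```shell
# Start the stack in the background with three api instances
# (--scale overrides the replicas value in the deploy section)
docker compose up -d --scale api=3

# List the running containers for this project
docker compose ps
```

Inside the Compose network, other containers reach the api replicas by service name; Docker's embedded DNS resolves "api" across the running instances.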
The limitation is multi-host orchestration. Docker Compose operates on a single Docker daemon. If you need containers spread across multiple machines for availability or compute capacity, you need a multi-host orchestrator. But be honest about whether you actually need multi-host — many applications run fine on a single well-provisioned server. The Docker fundamentals matter more than the orchestration layer for most teams.
Docker Swarm: Multi-Host Without the Complexity
Docker Swarm is built into the Docker Engine and provides multi-host orchestration with remarkably little configuration. Initialize a swarm on one node, join other nodes, and deploy services that the swarm distributes across available machines.
# Initialize swarm on the first node
docker swarm init

# Join additional nodes
docker swarm join --token SWMTKN-... <manager-ip>:2377

# Deploy a stack
docker stack deploy -c docker-compose.yml myapp
Swarm uses the same Docker Compose file format (with the deploy section) for production deployments. This means the same configuration file works for local development and production — a significant operational simplification.
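The rolling-update behavior mentioned below is configured under that same deploy key. A minimal sketch (the specific values are illustrative, not recommendations):

```yaml
services:
  api:
    image: myapp/api:latest
    deploy:
      replicas: 4
      update_config:
        parallelism: 1          # replace one task at a time
        delay: 10s              # wait between batches
        failure_action: rollback # revert automatically if health checks fail
      rollback_config:
        parallelism: 2
```

The same file still works with plain docker compose up for local development; Compose simply ignores the swarm-only update settings.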
Swarm handles rolling updates, health-based routing, service discovery, and secret management. What it does not handle as well as Kubernetes: custom resource definitions, fine-grained network policies, advanced scheduling constraints, and the ecosystem of operators and extensions that Kubernetes has accumulated.
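Swarm's secret management stores values in the cluster's encrypted Raft log and mounts them into containers as files under /run/secrets. A sketch using a hypothetical db_password secret, created first with echo "s3cret" | docker secret create db_password -:

```yaml
services:
  db:
    image: postgres:16
    secrets:
      - db_password
    environment:
      # The official postgres image reads the password from this file
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    external: true  # created out-of-band, not defined in this file
```

File-based secrets avoid putting credentials in environment variables, where they are visible to anything that can inspect the container.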
For teams that need multi-host container orchestration without the operational overhead of Kubernetes, Swarm remains a legitimate choice. It is not dead — Docker continues to maintain it — but it receives less community investment than Kubernetes, which means fewer third-party integrations and less documentation for advanced use cases.
HashiCorp Nomad: The Flexible Alternative
Nomad takes a different approach to orchestration. Instead of being container-specific, it orchestrates any workload — containers, VMs, Java applications, batch jobs, and system services. This flexibility is valuable for organizations that run mixed workloads.
job "api" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 3

    network {
      port "http" { to = 3000 }
    }

    task "api" {
      driver = "docker"

      config {
        image = "myapp/api:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }

    service {
      name = "api"
      port = "http"

      check {
        type     = "http"
        path     = "/health"
        interval = "10s"
        timeout  = "2s"
      }
    }
  }
}
Nomad is operationally simpler than Kubernetes. It is a single binary with no external dependencies (Kubernetes requires etcd, a control plane, and multiple components). It integrates with Consul for service discovery and Vault for secrets, but these are optional — Nomad works standalone.
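The single-binary claim is easy to verify locally. A sketch, assuming the job above is saved as api.nomad.hcl:

```shell
# Start a throwaway single-node cluster (server and client in one process)
nomad agent -dev

# In another terminal: validate, preview, and submit the job
nomad job validate api.nomad.hcl
nomad job plan api.nomad.hcl
nomad job run api.nomad.hcl

# Inspect placements and allocation health
nomad job status api
```

The -dev agent keeps all state in memory, so it is suitable only for experimentation, but it demonstrates the operational model: one binary, no external datastore.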
The trade-off is ecosystem breadth. Kubernetes has Helm charts, operators, and integrations for nearly every infrastructure tool. Nomad's ecosystem is smaller. If you need a specific Kubernetes operator for your database, message queue, or monitoring stack, Nomad might not have an equivalent.
AWS ECS and Managed Services
Cloud-managed orchestration removes the operational burden of running the orchestrator itself. AWS ECS (Elastic Container Service), Google Cloud Run, and Azure Container Apps manage the control plane, and you define tasks and services through their APIs.
ECS with Fargate eliminates even the compute management — you define CPU and memory requirements, and AWS provisions the underlying infrastructure:
{
  "family": "api",
  "networkMode": "awsvpc",
  "containerDefinitions": [{
    "name": "api",
    "image": "account.dkr.ecr.region.amazonaws.com/api:latest",
    "portMappings": [{ "containerPort": 3000 }],
    "healthCheck": {
      "command": ["CMD-SHELL", "curl -f http://localhost:3000/health || exit 1"]
    }
  }],
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024"
}
The advantage is zero cluster management. No node patching, no etcd backups, no control plane upgrades. The disadvantage is vendor lock-in — your task definitions, service configurations, and networking are tied to the cloud provider's API. Moving from ECS to another orchestrator requires rewriting your deployment configuration entirely.
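Deploying the task definition above is a two-step interaction with the ECS API. A sketch using the AWS CLI (the cluster and service names are hypothetical, and the JSON is assumed to be saved as task-def.json):

```shell
# Register a new revision of the task definition
aws ecs register-task-definition --cli-input-json file://task-def.json

# Point the service at the new revision; ECS performs a rolling deployment
aws ecs update-service \
  --cluster my-cluster \
  --service api \
  --task-definition api \
  --force-new-deployment
```

Note how little of this transfers: both commands, the task-definition format, and the cluster/service model are ECS-specific, which is exactly the lock-in trade-off described above.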
For teams that are committed to a single cloud provider and want to minimize operational overhead, managed container services are often the best choice. The cloud cost implications need evaluation — Fargate charges a premium over self-managed EC2 instances, but the reduced operational burden often justifies the cost.
Choosing the Right Orchestrator
The decision framework is straightforward:
Single host, under 10 services — Docker Compose. It is what you already know, and it works.
Multi-host, straightforward requirements — Docker Swarm or a managed service (ECS, Cloud Run). Low operational overhead, sufficient features for most web applications.
Multi-host, mixed workloads, or existing HashiCorp stack — Nomad. The flexibility and operational simplicity are genuine advantages.
Large-scale, complex requirements, dedicated platform team — Kubernetes. The ecosystem, extensibility, and community support justify the complexity when you have the team to manage it.
The most expensive orchestration mistake is choosing Kubernetes for a workload that does not need it and spending engineering time on cluster management instead of product development. Match the tool to the problem. The principles of containerization transfer across orchestrators — the concepts matter more than the specific tool.