Docker for Developers: From Zero to Production Containers
A practical guide to Docker for developers — from writing your first Dockerfile to running production-grade containers with confidence.

James Ross Jr.
Strategic Systems Architect & Enterprise Software Developer
I still remember the first time a junior developer on my team said "it works on my machine." We had a Node.js API that ran perfectly in development, exploded in staging, and nobody could figure out why. The culprit? A subtle difference in Node versions between the dev's MacBook and the Ubuntu staging server. That was the day I mandated Docker for every project we ship.
Docker is not DevOps magic reserved for platform teams. It is a fundamental skill for any developer who ships software to production, and if you are building anything that runs on a server, you need to understand it. Here is the practical guide I wish I had when I started.
What Docker Actually Solves
The core promise of Docker is deceptively simple: package your application with everything it needs to run, and ship that package anywhere. The container includes your runtime, your dependencies, your environment variables, and your file system configuration. The host machine only needs Docker itself.
This eliminates the "works on my machine" problem at the root. It also means your staging environment can be byte-for-byte identical to production. When you debug a staging issue, you know the environment is not the variable.
Beyond consistency, Docker makes your infrastructure declarative. Your Dockerfile is a script that documents exactly how your application is assembled. That documentation lives in your repository, gets reviewed in pull requests, and is versioned alongside your code.
Writing Your First Dockerfile
Start with the basics. Here is a Dockerfile for a Node.js API:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "src/index.js"]
A few things are worth explaining here. The FROM node:20-alpine instruction pulls the official Node.js 20 image based on Alpine Linux. Alpine-based Node images are small — roughly 50MB compressed — while Debian-based equivalents can balloon to several hundred MB. For production containers, smaller is better: smaller images pull faster, have a smaller attack surface, and cost less to store. (Note that npm deprecated --only=production in favor of --omit=dev; the latter is what current npm versions expect.)
The order of the COPY and RUN instructions matters. Docker caches each layer. By copying package*.json first and running npm ci before copying the rest of your source, you preserve the dependency installation cache. When you change your source code but not your dependencies, Docker reuses the cached node_modules layer. This makes rebuilds dramatically faster.
npm ci over npm install is intentional. ci installs exactly what is in your lockfile, fails if the lockfile is out of sync, and never modifies package.json. It is deterministic and appropriate for CI and production environments.
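To see the layer cache in action, build and run the image. The tag my-api is an arbitrary name chosen for this example, not anything the Dockerfile requires:

```shell
# First build populates the cache; rebuilding after a source-only
# change reuses the cached npm ci layer and finishes much faster
docker build -t my-api .

# Map host port 3000 to the port the container exposes;
# --rm removes the container when it stops
docker run --rm -p 3000:3000 my-api
```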
The Multi-Stage Build Pattern
Production Dockerfiles should use multi-stage builds. This pattern lets you use a heavy build image to compile your application and then copy only the compiled output into a lean runtime image.
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:20-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]
The final image contains no build tools, no dev dependencies, no TypeScript compiler. Just the compiled JavaScript and production dependencies. Your attack surface shrinks and your image size drops significantly.
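A convenient side effect of naming your stages: when a compile fails, you can build just the builder stage and inspect it, then compare final image sizes. The my-api tag is again an assumed name for illustration:

```shell
# Stop at the build stage to debug a failing compile
docker build --target builder -t my-api:build .

# Build the full production image and compare sizes
docker build -t my-api .
docker images my-api
```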
Docker Compose for Local Development
Running a single container is straightforward. Running your API, a PostgreSQL database, a Redis cache, and a background worker together is where docker-compose.yml earns its keep.
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:password@db:5432/myapp
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  cache:
    image: redis:7-alpine

volumes:
  postgres_data:
The depends_on with condition: service_healthy is critical. Without it, your API container starts before PostgreSQL is ready to accept connections, and your application crashes on boot. The healthcheck ensures Postgres is actually accepting queries before your API starts.
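Healthchecks shrink the startup race, but a resilient service also retries its own connections on boot instead of crashing on the first failure. A minimal sketch — the connect function and the timing values here are hypothetical placeholders, not part of the compose file above:

```javascript
// Retry an async connect function with exponential backoff.
// `connect` is whatever your driver exposes (e.g. a pg or redis client connect).
async function connectWithRetry(connect, { retries = 5, baseDelayMs = 100 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect();
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts: surface the error
      const delay = baseDelayMs * 2 ** (attempt - 1); // 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```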
Named volumes like postgres_data persist data between container restarts. Without a named volume, your database resets every time you run docker compose down.
Environment Variables and Secrets
Never bake secrets into your Docker image. Not in the Dockerfile, not in the Compose file checked into git. Use environment variables injected at runtime.
For local development, a .env file works fine:
DATABASE_URL=postgres://postgres:password@localhost:5432/myapp
JWT_SECRET=dev-only-secret-not-for-production
Add .env to your .gitignore immediately. For production, use your platform's secret management: AWS Secrets Manager, Doppler, Vault, or at minimum environment variables set in your deployment configuration, not in your repository.
Docker supports a --env-file flag and Compose supports an env_file key. Use them. Your images should be configuration-agnostic, pulling their runtime values from the environment.
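One way to keep that discipline is a small config module: read process.env once at startup and fail fast when a required value is missing, rather than crashing later at query time. A sketch — the variable names follow the earlier examples, and the required helper is my own invention:

```javascript
// Throw immediately if a required environment variable is absent.
function required(name, env = process.env) {
  const value = env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Build the app's config object from the environment, with defaults
// only for genuinely optional values.
function loadConfig(env = process.env) {
  return {
    databaseUrl: required("DATABASE_URL", env),
    redisUrl: required("REDIS_URL", env),
    port: Number(env.PORT ?? 3000), // optional, with a sane default
  };
}
```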
Common Mistakes I See in Production
Running as root. By default, containers run as root. This is a security problem. Add a non-root user:
RUN addgroup -g 1001 -S nodejs && adduser -S -G nodejs -u 1001 nodejs
USER nodejs
The official Node images also ship a built-in node user, so USER node works as a shortcut if you do not need a specific UID.
No .dockerignore. Without a .dockerignore, you send your entire project directory — including node_modules, .git, and test files — as the build context. Create a .dockerignore:
node_modules
.git
*.log
.env
dist
Storing state in containers. Containers are ephemeral. If your application writes files to the container's filesystem, those files disappear when the container restarts. Put file storage on a mounted volume or an object store like S3.
Not setting resource limits. In production, always set memory and CPU limits. An unbounded container can starve other services on the same host. In Docker Compose:
deploy:
  resources:
    limits:
      cpus: "0.5"
      memory: 512M
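Outside Compose, the same limits can be applied directly with docker run flags. The image name my-api is an assumption carried over from the earlier examples:

```shell
# Cap the container at half a CPU core and 512 MiB of memory
docker run --rm --cpus="0.5" --memory=512m -p 3000:3000 my-api
```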
Health Checks in Production
Every production container should expose a health check. Orchestrators like Kubernetes and ECS use health checks to determine if a container is ready to receive traffic and whether it needs to be restarted.
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
Note that Alpine images do not ship curl; BusyBox wget is built in, so use it here (or install curl explicitly in your Dockerfile).
Your /health endpoint should check actual application health — can it reach the database, is the cache connected — not just return a 200. A container that can't reach its database is not healthy, even if the process is running.
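One way to structure that endpoint is to run every dependency check and aggregate the results into a single status. A sketch — the individual check functions (database, cache) are assumed to be supplied by your application, not defined here:

```javascript
// Run a map of named async dependency checks and aggregate them.
// Returns 200 only when every dependency responded; 503 otherwise.
async function runHealthChecks(checks) {
  const results = {};
  let healthy = true;
  for (const [name, check] of Object.entries(checks)) {
    try {
      await check();
      results[name] = "ok";
    } catch (err) {
      results[name] = `failed: ${err.message}`;
      healthy = false;
    }
  }
  return { status: healthy ? 200 : 503, results };
}
```

In an Express handler you would respond with res.status(result.status).json(result.results), so the orchestrator sees a 503 when the database is unreachable even though the Node process itself is still running.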
The Path Forward
Once you are comfortable with Docker basics, the natural progression is orchestration. Docker Compose handles multi-container local development. For production, you will eventually look at Kubernetes for complex deployments or managed container services like AWS ECS or Google Cloud Run for simpler ones.
But before you get there, internalize the fundamentals: keep images small, use multi-stage builds, never bake secrets into images, run as non-root, and treat containers as ephemeral. Get those right and you will avoid 90% of the production Docker problems I have seen.
Containerization is not optional anymore. It is the baseline expectation for professional software delivery in 2026. Start with one service, get it right, and expand from there.
If you are building production infrastructure and want an experienced eye on your architecture, I would be glad to help. Book a session at https://calendly.com/jamesrossjr.