Container Security: Hardening Docker for Production
A practical guide to Docker container security — non-root users, image scanning, read-only filesystems, network policies, and secrets management in containers.

James Ross Jr.
Strategic Systems Architect & Enterprise Software Developer
The fact that something runs in a container does not make it secure. I have reviewed containerized applications running as root, with the Docker socket mounted inside the container, with secrets baked into the image layers, using base images that were two years out of date and full of known CVEs. Containerization is packaging technology — it provides isolation, not security guarantees, and the isolation is only as strong as the configuration you put around it.
Here is the hardening checklist I apply to every production container.
Never Run as Root
The single most impactful container security change you can make. By default, Docker containers run as root inside the container. If an attacker exploits a vulnerability in your application, they have root access inside the container. If there is any path out of the container — a misconfigured volume mount, a Docker socket, a kernel vulnerability — they have root on the host.
Create a non-root user in your Dockerfile:
FROM node:20-alpine

# Create a non-root user and group
RUN addgroup -g 1001 -S appgroup && \
    adduser -u 1001 -S appuser -G appgroup

WORKDIR /app
COPY --chown=appuser:appgroup package*.json ./
RUN npm ci --omit=dev

COPY --chown=appuser:appgroup . .

# Switch to non-root user before the final CMD
USER appuser
EXPOSE 3000
CMD ["node", "src/index.js"]
The --chown=appuser:appgroup flags on COPY instructions ensure the application files are owned by the non-root user. The USER appuser directive ensures the container process runs as that user, not root.
Verify this is working: docker exec my-container whoami should return appuser, not root.
Use Minimal Base Images
Every layer of your base image is a potential attack surface. The Debian-based node:20 image contains curl, wget, apt, bash, and hundreds of other utilities you do not need in your application container. They exist for developer convenience, and they are available to an attacker who gains container access.
Use Alpine-based images (node:20-alpine) for most applications. Alpine's minimal package set means fewer attack vectors. For applications with native dependencies that do not compile on Alpine's musl libc, use node:20-slim instead — still smaller than the full Debian image.
Better still, use distroless images where your framework supports them. Google's distroless images contain only the application runtime — no shell, no package manager, no utilities. If an attacker gets code execution in a distroless container, they are working in an environment with almost no tools available.
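A sketch of a multi-stage build that ships a distroless runtime image. The gcr.io/distroless/nodejs20-debian12 image is Google's Node 20 runtime; the :nonroot tag also runs as a non-root user, and the paths and entry file below are illustrative:

```dockerfile
# Stage 1: install dependencies with full tooling available
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Stage 2: copy only the app into a distroless runtime
# (no shell, no package manager, runs as a non-root user)
FROM gcr.io/distroless/nodejs20-debian12:nonroot
WORKDIR /app
COPY --from=build /app /app
EXPOSE 3000
# The distroless Node image's entrypoint is node, so CMD is just the script
CMD ["src/index.js"]
```

Note that with no shell in the final image, docker exec into the container will not work, which is exactly the point.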
Scan Images for Vulnerabilities
Build-time scanning catches known CVEs in your base image and installed packages before the image reaches production.
Integrate Trivy into your CI pipeline:
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: "myapp:${{ github.sha }}"
    format: "table"
    exit-code: "1"
    severity: "CRITICAL,HIGH"
    ignore-unfixed: true
The exit-code: "1" with severity: "CRITICAL,HIGH" fails the build when critical or high-severity vulnerabilities are found. ignore-unfixed: true suppresses vulnerabilities that do not yet have a patched version available, so the build only breaks on issues you can actually fix. You cannot fix what has no fix, but you should know about, and be blocked by, the fixable ones.
Run image scans on a schedule against your production images, not just at build time. CVEs are discovered continuously. An image that was clean when built may have known vulnerabilities six months later. Daily scans catch this.
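One way to run this is a cron-triggered workflow that scans whatever image is currently tagged for production. A sketch, where the registry path, tag, and schedule are placeholders to adapt:

```yaml
name: Scheduled image scan
on:
  schedule:
    - cron: "0 6 * * *" # daily at 06:00 UTC
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Scan production image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: "registry.example.com/myapp:production"
          exit-code: "1"
          severity: "CRITICAL,HIGH"
          ignore-unfixed: true
```

A failing scheduled run becomes your alert that a previously clean image now carries a fixable CVE.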
Read-Only Root Filesystem
Mount your container's root filesystem as read-only. This prevents an attacker who achieves code execution from writing malicious files to the filesystem:
# Docker Compose
services:
  api:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
Applications that need to write files — for example, applications that use /tmp for temporary files — need those specific directories mounted as writable tmpfs volumes. This is a much smaller attack surface than a fully writable filesystem.
If your application writes to disk as part of its operation (not just temp files), mount a dedicated volume for that purpose rather than making the entire filesystem writable.
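For deployments that do not use Compose, the same setup with plain docker run looks like this (image name illustrative):

```shell
# Read-only root filesystem with writable tmpfs mounts for /tmp and /var/run
docker run --read-only --tmpfs /tmp --tmpfs /var/run myapp:latest
```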
Restrict Capabilities
Linux capabilities are the mechanism by which root privileges are divided into discrete units. By default, Docker grants containers a set of capabilities that include more privileges than most applications need.
Drop all capabilities and add back only what you need:
services:
  api:
    image: myapp:latest
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE # Only if you need to bind to ports < 1024
Most web applications need no capabilities at all if they run on ports above 1024. An API running on port 3000 as a non-root user needs zero Linux capabilities. Drop them all.
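To confirm the drop took effect, you can inspect the effective capability bitmap of the container's init process (container name illustrative; requires a shell and grep in the image, so this will not work in distroless containers):

```shell
# CapEff is a hex bitmap of effective capabilities; 0000000000000000 means none
docker exec my-container grep CapEff /proc/1/status
```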
Network Segmentation
Do not put all your containers on the same Docker network. An attacker who compromises your frontend container should not be able to reach your database container directly.
services:
  frontend:
    image: frontend:latest
    networks:
      - public
  api:
    image: api:latest
    networks:
      - public
      - internal
  db:
    image: postgres:16-alpine
    networks:
      - internal

networks:
  public:
    driver: bridge
  internal:
    driver: bridge
    internal: true
The internal: true on the internal network prevents containers on that network from reaching the internet directly. Your database and internal services are isolated from outbound network access. The API bridges both networks, acting as the only path between the public-facing services and the database.
Secrets Management: Never Bake Into Images
Secrets embedded in Docker images are a critical vulnerability. Images are stored in registries, passed between environments, shared among team members. Any secret in a Docker layer is accessible to anyone who can pull the image.
The rule is simple: no secrets in Dockerfiles, no secrets in environment variables baked into the image, no secrets in image labels.
Inject secrets at runtime. For Docker Compose:
services:
  api:
    image: api:latest
    secrets:
      - db_password
    environment:
      DB_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    external: true
Docker Swarm and Kubernetes have native secrets mechanisms. For simpler deployments, tools like Doppler or HashiCorp Vault provide secret injection at container startup. At minimum, use environment variables set in your deployment platform's secret store — not in any file that touches your repository.
Limit Resource Usage
An unbounded container is a denial-of-service vector. If an attacker can exhaust a container's resources — through algorithmic complexity attacks, memory leaks, or deliberate resource consumption — they affect other services on the same host.
Set explicit resource limits:
services:
  api:
    image: api:latest
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
The limit prevents the container from consuming more than its allocation. The reservation guarantees it always has the reserved amount available. Size these based on your application's normal usage with headroom for traffic spikes.
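When launching containers directly rather than through Compose or Swarm, the equivalent docker run flags are (values match the example above; image name illustrative):

```shell
docker run --cpus="1.0" --memory="512m" --memory-reservation="128m" api:latest
```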
Keep Images Updated
Outdated base images are the source of most container CVEs. Build new images regularly — weekly at minimum — even when your application code has not changed. A fresh build picks up the latest Alpine or Debian package versions, which include security patches.
Automate this. A GitHub Actions workflow that runs on a weekly schedule, rebuilds your images, scans them, and pushes to your registry keeps your production images current without manual intervention.
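A minimal sketch of such a workflow, reusing the Trivy scan from earlier; the image name and schedule are placeholders, and the push step depends on your registry, so it is left as a comment:

```yaml
name: Weekly image rebuild
on:
  schedule:
    - cron: "0 4 * * 1" # Mondays at 04:00 UTC
jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: "myapp:${{ github.sha }}"
          exit-code: "1"
          severity: "CRITICAL,HIGH"
          ignore-unfixed: true
      # Push to your registry here, only after a clean scan
```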
The Container Security Audit
For existing deployments, audit your running containers with these commands:
# Find containers running as root (an empty User value means root)
docker ps -q | xargs docker inspect --format='{{.Name}}: {{.Config.User}}'
# Find containers with privileged mode enabled
docker ps -q | xargs docker inspect --format='{{.Name}}: {{.HostConfig.Privileged}}'
# Find containers with the Docker socket mounted
docker ps -q | xargs docker inspect --format='{{.Name}}: {{.HostConfig.Binds}}'
If you find containers running privileged or with the Docker socket mounted, treat that as a critical security finding requiring immediate remediation. A container with the Docker socket has effectively unlimited access to the host system.
If you want a security review of your containerized infrastructure, I can help identify gaps and prioritize remediation. Book a session at https://calendly.com/jamesrossjr.