Continuous Deployment: From Code Push to Production in Minutes
Build a continuous deployment pipeline that ships code to production automatically — artifact building, environment promotion, rollback strategies, and deployment verification.

James Ross Jr.
Strategic Systems Architect & Enterprise Software Developer
Continuous deployment is the practice of automatically deploying every change that passes your quality gate directly to production, without human approval for each individual deployment. This sounds risky if you are used to scheduled, manually approved releases. But once you have done it correctly, going back to manual releases feels like working with your hands tied.
The key word is "correctly." Continuous deployment without a solid quality gate is just automated chaos delivery. Continuous deployment with good test coverage, reliable CI, staging environment parity, and automatic rollback is the fastest, safest way to ship software.
Here is how to build the pipeline.
The Prerequisites
Continuous deployment is only safe if:
- You have meaningful automated test coverage (unit + integration at minimum)
- Your CI pipeline catches breaking changes reliably
- You can roll back within minutes when something goes wrong
- You have monitoring that tells you when a deployment caused a regression
- Your deployment process is fast enough that bad deployments are short-lived
If you are missing any of these, build them first. Continuous deployment without them accelerates bad outcomes, not good ones.
The Pipeline Architecture
A CD pipeline for a typical web application has three stages:
Build — compile, bundle, and package your application. The output is a versioned, immutable artifact: a Docker image, a deployment package, a compiled binary.
Test — run all automated tests against the build artifact. This is your quality gate. The artifact does not proceed unless all tests pass.
Deploy — promote the artifact through environments (staging, then production). Each promotion may be automatic or require a deliberate trigger.
The critical property: every stage operates on the same artifact. The Docker image that passed tests in staging is the exact image deployed to production. No rebuilding, no "build for prod" step. If you rebuild for production, you have not tested what you deployed.
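As a sketch, the three stages map onto CI jobs that pass a single SHA-tagged image forward. The job names, registry, and test command below are illustrative, not a prescribed layout:

```yaml
# Illustrative GitHub Actions skeleton -- registry, image name, and test
# command are placeholders; registry login is omitted for brevity.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myregistry/api:${{ github.sha }} .
      - run: docker push myregistry/api:${{ github.sha }}

  test:
    needs: [build]
    runs-on: ubuntu-latest
    steps:
      # Tests run against the artifact built above, not a fresh build
      - run: docker run --rm myregistry/api:${{ github.sha }} npm test

  deploy:
    needs: [test]
    runs-on: ubuntu-latest
    steps:
      # The same SHA-tagged image is promoted; nothing is rebuilt
      - run: kubectl set image deployment/api api=myregistry/api:${{ github.sha }}
```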
Building Immutable Artifacts
For a containerized application, the artifact is a Docker image tagged with an immutable identifier. Use the Git commit SHA:
```yaml
- name: Build and push Docker image
  run: |
    IMAGE_TAG="${{ github.sha }}"
    docker build -t myregistry/api:${IMAGE_TAG} .
    docker push myregistry/api:${IMAGE_TAG}
    # Also tag as latest for human reference
    docker tag myregistry/api:${IMAGE_TAG} myregistry/api:latest
    docker push myregistry/api:latest
```
The image tagged with the commit SHA is immutable — it will always refer to exactly this build. The latest tag is mutable and useful for tooling that expects it, but you should reference the SHA tag in deployment manifests.
For frontend applications deployed to Cloudflare Pages or Vercel, the platform manages artifact creation. Your build output is automatically immutable per deployment.
The Staging Environment as a Quality Gate
Staging should be a production replica. Same infrastructure, same configuration (except pointing to a staging database), same deployment process. If staging differs from production in any meaningful way, staging validation does not actually validate production behavior.
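One way to enforce that parity is to render both environments from the same base manifests, so only the values that must differ can differ. A kustomize sketch, with illustrative paths and names:

```yaml
# overlays/staging/kustomization.yaml -- layout and names are illustrative
resources:
  - ../../base            # the same Deployment/Service manifests production uses
namespace: staging
configMapGenerator:
  - name: api-config
    literals:
      - DATABASE_URL=postgres://staging-db:5432/app   # the one allowed difference
```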
Deploy to staging automatically on merge to main. Run your automated integration tests and end-to-end tests against staging. Only proceed to production if staging deployment and tests pass.
```yaml
deploy-staging:
  runs-on: ubuntu-latest
  needs: [test]
  steps:
    - name: Deploy to staging
      run: |
        kubectl set image deployment/api \
          api=myregistry/api:${{ github.sha }} \
          -n staging
        kubectl rollout status deployment/api -n staging
    - name: Run smoke tests against staging
      run: npm run test:e2e -- --env=staging

deploy-production:
  runs-on: ubuntu-latest
  needs: [deploy-staging]
  environment: production  # Requires a configured deployment environment
  steps:
    - name: Deploy to production
      run: |
        kubectl set image deployment/api \
          api=myregistry/api:${{ github.sha }} \
          -n production
        kubectl rollout status deployment/api -n production
```
The environment: production block can require manual approval before proceeding (configure in GitHub > Settings > Environments > Required reviewers). For fully automated continuous deployment, remove the approval requirement. For continuous delivery with a manual production gate, keep it.
Rolling Deployments
A rolling deployment replaces pods incrementally — new pods come up, old pods come down — without taking the service offline. Kubernetes handles this natively with the rolling update strategy:
```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # No pods may be unavailable during the rollout
    maxSurge: 1         # Create one extra pod at a time
```
maxUnavailable: 0 ensures capacity never drops during a deployment. maxSurge: 1 means one extra pod runs during the transition period. This is the safe default for most applications.
For larger deployments, maxSurge can be set higher to speed up the rollout at the cost of temporarily running more pods.
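A percentage-based surge is one way to do that; this fragment assumes the same Deployment as above:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 25%    # replace up to a quarter of the desired pods in parallel
```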
Deployment Verification
After a deployment completes, verify it. Do not just check that pods are running — verify that the new version is actually serving traffic correctly.
A post-deployment smoke test is the minimum:
```bash
#!/bin/bash
BASE_URL="https://api.production.com"
MAX_RETRIES=5
RETRY_DELAY=10

for i in $(seq 1 $MAX_RETRIES); do
  RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "$BASE_URL/health")
  if [ "$RESPONSE" == "200" ]; then
    echo "Health check passed"
    break
  fi
  echo "Health check failed ($RESPONSE), retrying in ${RETRY_DELAY}s..."
  sleep $RETRY_DELAY
done

if [ "$RESPONSE" != "200" ]; then
  echo "Deployment verification failed after $MAX_RETRIES attempts"
  exit 1
fi
```
A more thorough approach is a canary deployment: route 5% of traffic to the new version, monitor error rate and latency for 10 minutes, then route 100% if metrics look healthy. This requires a load balancer that supports weighted traffic routing (Nginx, Envoy, or cloud load balancers with traffic splitting capabilities).
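The promotion decision itself can be a small script. This sketch assumes you can already query the canary's error rate from your monitoring system (Prometheus, Datadog, or similar); the threshold and sample value are made up:

```shell
#!/bin/bash
# Canary gate sketch: promote only if the observed error rate stays below a
# threshold. How you fetch the rate depends on your monitoring system.
ERROR_THRESHOLD="0.01"   # allow up to 1% errors

canary_healthy() {
  # $1 = observed error rate as a decimal, e.g. "0.004"
  # awk exits 0 (success) when rate < threshold, 1 otherwise
  awk -v rate="$1" -v max="$ERROR_THRESHOLD" 'BEGIN { exit !(rate < max) }'
}

# "0.004" stands in for a value queried from your metrics API
if canary_healthy "0.004"; then
  echo "promote"
else
  echo "rollback"
fi
```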
Automatic Rollback
When deployment verification fails, roll back automatically. Do not require human intervention for a well-defined failure condition.
```yaml
- name: Deploy and verify
  run: |
    kubectl set image deployment/api api=myregistry/api:${{ github.sha }} -n production
    kubectl rollout status deployment/api -n production --timeout=5m || {
      echo "Deployment failed, rolling back"
      kubectl rollout undo deployment/api -n production
      exit 1
    }
- name: Post-deployment smoke test
  run: |
    ./scripts/smoke-test.sh || {
      echo "Smoke test failed, rolling back deployment"
      kubectl rollout undo deployment/api -n production
      exit 1
    }
```
Kubernetes retains the last 10 deployment revisions by default (configurable via revisionHistoryLimit), and kubectl rollout undo reverts to the revision before the current one. Rollback typically completes in under 60 seconds for a Kubernetes rolling deployment.
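If you need to reach further back than the previous revision, both the retention count and the target revision are explicit. The fragment below is illustrative:

```yaml
# Deployment spec fragment -- keep more history if you expect to roll back
# across several releases; 10 is the Kubernetes default.
spec:
  revisionHistoryLimit: 10
```

With history retained, kubectl rollout undo deployment/api --to-revision=3 -n production jumps to a specific revision instead of just the previous one.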
Feature Flags for Safe Deployment
Feature flags let you deploy code to production without exposing it to users. The code is live, but gated. When you are ready to release, flip the flag.
This decouples deployment from release. You can deploy continuously without every deployment being a user-visible release event. It also enables instant rollback of a feature without a code deployment — just disable the flag.
For a simple feature flag implementation, use your environment configuration. For production-grade feature flags with targeting rules (show to 5% of users, show to users in a specific country, show to specific user IDs), use LaunchDarkly, Unleash, or Cloudflare Edge Config.
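A minimal environment-variable flag can be as small as a single check. The flag name here is hypothetical, and this is a sketch of the idea rather than a production implementation:

```shell
#!/bin/bash
# Minimal env-var feature flag sketch. FEATURE_NEW_CHECKOUT is a made-up name;
# set it in your environment configuration per deployment environment.
is_enabled() {
  # Treat "true" or "1" as on; anything else (including unset) as off.
  case "${1:-}" in
    true|1) return 0 ;;
    *)      return 1 ;;
  esac
}

if is_enabled "$FEATURE_NEW_CHECKOUT"; then
  echo "new checkout path"
else
  echo "old checkout path"
fi
```

Releasing the feature is then a configuration change, not a deployment: flip the variable and restart or re-read config.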
Measuring Your Deployment Pipeline
Track two metrics for your CD pipeline:
Deployment frequency — how often you deploy to production, whether daily, several times a day, or weekly. This is one of DORA's four key metrics for engineering performance. Higher frequency generally indicates a healthier, lower-risk deployment process.
Lead time for changes — the time from code commit to running in production. For a well-functioning CD pipeline, this should be under one hour. When it stretches to hours or days, that gap is time during which a fix already exists in the codebase but users are still running the old code.
Measure these monthly. If deployment frequency is dropping or lead time is increasing, your pipeline has a bottleneck worth diagnosing.
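Lead time itself is simple arithmetic once you record a commit timestamp and a deploy timestamp; the values below are made up:

```shell
#!/bin/bash
# Lead-time sketch: in practice, take the commit time from
# `git show -s --format=%ct <sha>` and the deploy time from your CD system.
lead_time_minutes() {
  # $1 = commit unix timestamp, $2 = production deploy unix timestamp
  echo $(( ($2 - $1) / 60 ))
}

# Example with fabricated timestamps 45 minutes apart
lead_time_minutes 1700000000 1700002700   # prints 45
```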
If you want help designing or improving your continuous deployment pipeline, book a session at https://calendly.com/jamesrossjr.