Kubernetes for Application Developers: What You Actually Need to Know
Kubernetes explained for application developers — Pods, Deployments, Services, ConfigMaps, and the concepts you need without the platform engineering rabbit holes.

James Ross Jr.
Strategic Systems Architect & Enterprise Software Developer
Kubernetes has a reputation as the most over-engineered solution to problems that most applications do not have. That reputation is earned, and for most small-to-medium applications, Docker Compose plus a good VPS is genuinely the better choice. I will be honest about that.
But Kubernetes is increasingly the operational environment for enterprise applications. Even if you are not running the cluster yourself, you are likely writing applications that will be deployed onto one. As an application developer working in that context, there is a specific, useful subset of Kubernetes you need to understand — and a larger, irrelevant-to-you set of cluster administration concerns you can ignore. This guide covers the former.
The Mental Model
Kubernetes is a system for running containers at scale with automated scheduling, health management, and scaling. You describe the desired state of your application in YAML files. Kubernetes continuously works to make the actual state match your desired state.
If a container crashes, Kubernetes restarts it. If a node (physical or virtual machine) fails, Kubernetes reschedules the containers that were on it onto healthy nodes. If traffic spikes, Kubernetes can automatically scale up the number of running instances. This is what you get that Docker Compose does not provide.
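That automatic scaling is itself just another piece of desired state. As an illustrative sketch (the Deployment name `api`, the replica bounds, and the 70% CPU target here are example values, not requirements), a HorizontalPodAutoscaler looks like this:

```yaml
# Hypothetical autoscaler: scales a Deployment named "api" between 3 and 10
# replicas, targeting 70% average CPU utilization. All values are examples.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that CPU-based autoscaling only works if your containers declare CPU resource requests — utilization is computed as a percentage of the request.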
Core Concepts You Must Understand
Pod — the smallest deployable unit in Kubernetes. A pod wraps one or more containers that share a network namespace and storage. In practice, most pods contain a single container (your application). Pods are ephemeral — they start, they stop, they get replaced. Never depend on a specific pod being around or having a stable IP address.
Deployment — manages a set of identical pods. You tell a Deployment "I want 3 replicas of this container running at all times." If one pod crashes, the Deployment creates a replacement. When you update your container image, the Deployment performs a rolling update — bringing up new pods before taking down old ones so your service stays available.
Service — gives your pods a stable network endpoint. Since pods are ephemeral with changing IP addresses, a Service sits in front of them with a stable IP and DNS name. Other services communicate with your application through the Service, not directly to individual pods.
ConfigMap — stores non-secret configuration as key-value pairs. Your application reads configuration from a ConfigMap at runtime, keeping configuration separate from your container image.
Secret — like a ConfigMap but for sensitive values. Kubernetes stores secrets base64-encoded, which is encoding, not encryption — encryption at rest requires additional cluster configuration. Secrets are exposed to pods as environment variables or mounted as files.
Namespace — a logical isolation boundary within a cluster. Your staging and production deployments live in different namespaces on the same cluster. Resources in different namespaces do not see each other unless you explicitly configure it.
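A namespace is itself just another resource you can declare. A minimal manifest (the name `staging` is an example) looks like this:

```yaml
# Example namespace definition; "staging" is an illustrative name.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

The same application manifests can then be applied to different namespaces, giving you isolated staging and production copies on one cluster.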
Writing a Real Deployment
Here is a complete Deployment manifest for a Node.js API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
  labels:
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry/api:1.2.3
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: api-secrets
                  key: database-url
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
A few things to notice.
The image tag is a specific version (1.2.3), not latest. Using latest in production means you cannot reliably reproduce what is currently deployed. Tag every release with a specific, immutable identifier.
Resource requests and limits are both set. Requests are the guaranteed allocation — Kubernetes only schedules your pod onto a node with at least that much capacity available. Limits are the ceiling — a container that exceeds its memory limit is OOM-killed, while one that exceeds its CPU limit is throttled rather than killed. Setting requests without limits means your pod can starve neighboring pods during high load.
livenessProbe and readinessProbe serve different purposes. The liveness probe determines whether the container is alive — if it fails, Kubernetes restarts the container. The readiness probe determines whether the container is ready to receive traffic — if it fails, Kubernetes removes the pod from the Service's endpoint list but does not restart it. A pod that is starting up or connecting to its database should fail readiness, not liveness.
The Service That Goes With It
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: production
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
This Service receives traffic on port 80 and forwards it to port 3000 on any pod with the label app: api. The ClusterIP type makes it accessible only within the cluster. To expose it externally, you add an Ingress.
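As a sketch of that external exposure, an Ingress routing a hostname to this Service might look like the following. The host `api.example.com` and the `nginx` ingress class are assumptions — the class depends on which ingress controller your platform team runs:

```yaml
# Hypothetical Ingress routing api.example.com to the api Service on port 80.
# Host and ingressClassName are assumptions about your environment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  namespace: production
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```

Note the Ingress targets the Service's port (80), not the container port — the Service handles that final hop.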
ConfigMaps and Secrets
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
  namespace: production
data:
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
---
apiVersion: v1
kind: Secret
metadata:
  name: api-secrets
  namespace: production
type: Opaque
stringData:
  database-url: "postgres://user:password@host:5432/db"
Reference these in your Deployment:
envFrom:
  - configMapRef:
      name: api-config
  - secretRef:
      name: api-secrets
This injects all ConfigMap and Secret values as environment variables. Individual keys can be referenced selectively, as shown in the earlier secretKeyRef example.
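Environment variables are captured when the container starts, so the alternative mentioned earlier — mounting a ConfigMap as files — is worth knowing. This fragment of the pod spec is a sketch; the mount path `/etc/api-config` is an example:

```yaml
# Pod spec fragment: each key in api-config becomes a file under
# /etc/api-config (e.g. /etc/api-config/LOG_LEVEL). Path is illustrative.
volumes:
  - name: config
    configMap:
      name: api-config
containers:
  - name: api
    volumeMounts:
      - name: config
        mountPath: /etc/api-config
        readOnly: true
```

Mounted files are updated in place when the ConfigMap changes (after a short delay), whereas environment variables require a pod restart to pick up new values.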
Deploying Updates
When you push a new container image, update the Deployment:
kubectl set image deployment/api api=myregistry/api:1.2.4 -n production
Or update the manifest file and apply:
kubectl apply -f deployment.yaml
The rolling update strategy brings up one new pod, waits for it to pass readiness checks, then removes one old pod. This continues until all pods are updated. Your service stays available throughout.
Watch the rollout:
kubectl rollout status deployment/api -n production
If something goes wrong, roll back:
kubectl rollout undo deployment/api -n production
The Workflow You Actually Need Daily
The kubectl commands application developers use most:
# List pods
kubectl get pods -n production
# Check pod logs
kubectl logs -f pod-name -n production
# Check pod logs across all replicas (using a label selector)
kubectl logs -l app=api -n production --all-containers
# Describe a pod (container states, restart counts, events, probe failures)
kubectl describe pod pod-name -n production
# Execute a command in a running pod (for debugging)
kubectl exec -it pod-name -n production -- /bin/sh
# Apply a manifest
kubectl apply -f deployment.yaml
# Check deployment status
kubectl rollout status deployment/api -n production
# Inspect a ConfigMap's values (use `get secret` similarly for secrets)
kubectl get configmap api-config -n production -o yaml
What to Leave to the Platform Team
As an application developer, you do not need to understand RBAC configuration, cluster node provisioning, network plugin selection, storage class configuration, or certificate management at the cluster level. These are infrastructure concerns that your platform engineering team (or a managed Kubernetes service like EKS, GKE, or AKS) handles.
Your responsibility is writing correct Deployment manifests, setting appropriate resource requests and limits, implementing health check endpoints that accurately reflect application health, and understanding how to deploy and roll back your application.
That is a manageable scope, and it is the part that directly affects whether your application runs reliably.
Working on applications targeting a Kubernetes environment and want help with architecture or deployment patterns? Let's talk. Book a session at https://calendly.com/jamesrossjr.