Docker

Container runtime and image format that packages an application with its dependencies so it runs the same way everywhere.

Category: Infrastructure
Difficulty: Beginner
When to use: You need reproducible environments for training, serving, or local development — especially anything involving CUDA, Python versions, or system libraries.
When not to use: You only need a Python virtual environment, or you're on a device where running a daemon isn't acceptable.
Alternatives: Podman, containerd, Buildah, nerdctl

At a glance

Category: Container runtime
Difficulty: Beginner → Intermediate
When to use: Reproducible dev, training, and serving images
When not to use: Trivial scripts where a venv is enough
Alternatives: Podman, containerd, Buildah

What it is

Docker builds images (immutable filesystem snapshots plus metadata) from a Dockerfile and runs them as containers — isolated processes that share the host kernel. “Works on my machine” becomes “works in this image”.

When we reach for it at Ephizen

  • Pinning CUDA, cuDNN, Python, and library versions for reproducible training runs.
  • Shipping FastAPI inference servers to Kubernetes with a known-good environment.
  • Giving new hires a one-command setup for the whole stack.
  • Running a model locally without polluting the host’s Python.
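The "one-command setup" above usually means a Compose file checked into the repo. A minimal sketch — the service names, image tag, and ports here are illustrative, not our actual stack:

```yaml
# docker-compose.yml (hypothetical two-service layout)
services:
  api:
    build: .                # builds from the Dockerfile in this directory
    ports:
      - "8000:8000"
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```

With this in place, `docker compose up` starts the whole stack and `docker compose down` tears it back down.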

Getting started

A minimal Dockerfile:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ ./app/
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Build and run it:

```shell
docker build -t ephizen/infer:latest .
docker run --rm -p 8000:8000 ephizen/infer:latest
```
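`docker build` sends the entire directory to the daemon as the build context, so a `.dockerignore` next to the Dockerfile keeps junk (and secrets) out of the context and the image. A typical starting point for a Python project:

```
.git
__pycache__/
*.pyc
.venv/
.env
```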

Gotchas

  • Image size balloons quickly. Use slim or distroless base images and multi-stage builds.
  • For GPU workloads you need the NVIDIA Container Toolkit and --gpus all.
  • Don’t run as root inside the container — create a non-root user in the Dockerfile.
  • BuildKit cache mounts (--mount=type=cache) speed up pip installs dramatically.
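Putting the size, non-root, and cache-mount tips together: a sketch of a leaner multi-stage Dockerfile. The image layout and user name are assumptions carried over from the example above, not a prescribed standard.

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
# BuildKit cache mount: pip's download cache survives between builds
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --prefix=/install -r requirements.txt

FROM python:3.12-slim
# Copy only the installed packages; build tools stay in the first stage
COPY --from=build /install /usr/local
# Non-root user so the process isn't root inside the container
RUN useradd --create-home appuser
WORKDIR /app
COPY app/ ./app/
USER appuser
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Note that with a cache mount you drop `--no-cache-dir`: the persistent pip cache is the point.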

Related tools