Singularity vs Docker — Containers for Science vs Containers for the Web
Two container runtimes, two very different worlds — here's which one you actually need.
Docker and Singularity both package software into portable containers — but they were built for completely different worlds, and picking the wrong one will cause you real pain.
🐳 What is Docker?
Docker is the de-facto standard for containerizing web apps, microservices, and development environments. You define an image in a Dockerfile, build it, and ship it anywhere that has the Docker daemon running.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
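To turn that Dockerfile into a running container, a typical build-and-run sequence looks like this (the image tag myapp and port 8000 are placeholders, not from the original):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t myapp:latest .

# Run it, removing the container on exit; map a port if app.py serves one
docker run --rm -p 8000:8000 myapp:latest
```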
Docker runs containers via a root-owned daemon (dockerd). That daemon manages networking, volumes, and lifecycle — powerful, but it means any user who can run Docker effectively has root on the host.
TL;DR: Docker = build once, run everywhere. Perfect for apps. Terrible for HPC clusters.
ℹ️ Info — Docker’s Architecture Docker uses a client-server model: the docker CLI talks to dockerd (the daemon), which manages images, containers, and networking. The daemon runs as root, which is a major security concern on shared systems.
🔬 What is Singularity (Apptainer)?
Singularity — now rebranded as Apptainer under the Linux Foundation — was built from the ground up for HPC (High-Performance Computing) and scientific workflows. It solves a core problem: HPC clusters won’t let random users run root-level daemons, but researchers still need reproducible software environments.
Singularity containers are single .sif (Singularity Image Format) files. You can run them without root, and your user identity inside the container is the same as outside.
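You can verify the identity-preservation claim directly. A minimal sketch (image.sif is a placeholder for any pulled container):

```shell
# Your identity on the host
id -un

# Same identity inside the container — no root, no user remapping
singularity exec image.sif id -un
```

Both commands print the same username, which is exactly why HPC admins are comfortable with Singularity: file ownership and permissions behave identically inside and outside the container.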
# Pull a Docker image into Singularity format (name the output explicitly)
singularity pull pytorch_2.0.0.sif docker://pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime
# Run it — no daemon, no root (--nv exposes the host's NVIDIA GPUs)
singularity exec --nv pytorch_2.0.0.sif python train.py
Key superpower: native GPU passthrough and MPI support — critical for deep learning and scientific simulation workloads.
💡 Tip — Singularity reads Docker images You don’t have to rebuild everything. singularity pull docker:// converts any Docker Hub image into a .sif file. Your existing Docker workflows can migrate to HPC with minimal friction.
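The conversion works from a registry or from your local Docker daemon. A sketch of both paths (myapp:latest is a hypothetical locally built image):

```shell
# Convert a public Docker Hub image to a .sif in the current directory
singularity pull docker://python:3.11-slim

# Convert an image that only exists in your local Docker daemon
singularity build myapp.sif docker-daemon://myapp:latest
```

The docker-daemon:// route is handy when your image is built from a private Dockerfile that you haven't pushed anywhere yet.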
⚖️ The Differences — Side by Side
| Feature | Docker | Singularity / Apptainer |
|---|---|---|
| Primary use case | Web apps, microservices, CI/CD | HPC, scientific computing, research |
| Requires root to run? | Yes (daemon runs as root) | No — rootless by design |
| User inside container | Root (by default) | Same as host user |
| Image format | Layered OCI images | Single .sif file |
| GPU support | Via --gpus flag (NVIDIA Container Toolkit) | Via --nv flag — no extra toolkit needed |
| MPI support | Workaround-heavy | First-class support |
| Reads Docker images | Native | Yes (singularity pull docker://) |
| Networking isolation | Full (bridged, host, overlay) | Minimal — shares host network |
| Filesystem access | Volume mounts, explicit bind | Binds home dir + CWD by default |
| Ecosystem | Docker Hub, GitHub Container Registry | Sylabs Cloud, Docker Hub (via pull) |
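The filesystem-access row is worth seeing in practice. A sketch of the equivalent mount syntax on each side (/data/project and image.sif are placeholder paths):

```shell
# Docker: nothing from the host is visible unless you mount it explicitly
docker run --rm -v /data/project:/data ubuntu:22.04 ls /data

# Singularity: $HOME, /tmp, and the current directory are bound by default;
# extra paths use --bind with the same host:container syntax
singularity exec --bind /data/project:/data image.sif ls /data
```

The default binds are why Singularity feels so transparent on a cluster: your scripts, datasets, and results live in your home or scratch directory and are simply there inside the container.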
🔐 The Security Model — Root vs Rootless
This is the crux of the difference.
graph TD
A[User runs docker run] --> B[Docker CLI]
B --> C[dockerd daemon - runs as root]
C --> D[Container - root inside]
style A fill:#4A90D9,color:#fff
style B fill:#4A90D9,color:#fff
style C fill:#D9534F,color:#fff
style D fill:#D9534F,color:#fff
E[User runs singularity exec] --> F[Singularity runtime]
F --> G[Container - same UID as host]
style E fill:#5BA85A,color:#fff
style F fill:#5BA85A,color:#fff
style G fill:#5BA85A,color:#fff
With Docker, a container escape → attacker has root on the host. On a shared HPC cluster with thousands of users, this is unacceptable. Singularity’s model means a container escape is still just a normal unprivileged user — no privilege escalation.
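The difference is easy to demonstrate. A sketch, assuming Docker group membership on one machine and an unprivileged Singularity user on another (image.sif is a placeholder):

```shell
# Docker group membership is root-equivalent: bind-mount the host's root
# filesystem and you can read files only root should see
docker run --rm -v /:/host alpine cat /host/etc/shadow

# Singularity: you are still an unprivileged user — the same read is denied
singularity exec --bind /:/host image.sif cat /host/etc/shadow
```

The first command succeeds for anyone allowed to run Docker; the second fails with a permission error, because Singularity never grants you privileges you didn't already have on the host.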
⚠️ Warning — Docker on HPC clusters Most HPC sysadmins will not install Docker. If you need containers on NSCC, NUS HPC, or any research cluster, assume Singularity/Apptainer is the only option.
🚀 Workflow Comparison — Deep Learning on HPC
Here’s what running a PyTorch training job looks like in each system:
Docker (local workstation):
docker run --gpus all \
-v $(pwd):/workspace \
pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime \
python train.py --epochs 50
Singularity (HPC cluster with SLURM):
#!/bin/bash
#SBATCH --gres=gpu:1
#SBATCH --time=04:00:00
singularity exec --nv \
/scratch/pytorch_2.0.0.sif \
python train.py --epochs 50
The --nv flag passes through NVIDIA GPUs. No extra toolkit needed. The .sif file lives on shared storage — fast, portable, reproducible.
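Submitting and monitoring that job follows the usual SLURM workflow. A sketch, assuming the script above is saved as train.sh (the slurm-<jobid>.out name is SLURM's default output pattern):

```shell
# Submit the batch script to the scheduler
sbatch train.sh

# Check your queue position, then follow the job's output once it starts
squeue -u $USER
tail -f slurm-<jobid>.out
```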
🗺️ When to Use Which
graph LR
A[I need containers] --> B{Running on HPC / shared cluster?}
B -- Yes --> C[Use Singularity / Apptainer]
B -- No --> D{Need microservices / web stack?}
D -- Yes --> E[Use Docker]
D -- No --> F{Need GPU on local machine?}
F -- Yes --> E
F -- No --> G[Either works — Docker is easier to start]
style C fill:#5BA85A,color:#fff
style E fill:#4A90D9,color:#fff
style G fill:#E8A838,color:#fff
“Docker is for shipping software. Singularity is for running science.”
📦 Quick Summary
- Docker → build apps, ship to the cloud, run locally. Root daemon, rich ecosystem, great DX.
- Singularity/Apptainer → run reproducible research on HPC. No root, GPU-native, MPI-friendly, reads Docker images.
- Both coexist — build your environment with Docker, pull it into Singularity for cluster runs. Best of both worlds.
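That hybrid workflow, end to end, might look like this (the ghcr.io/yourlab registry path is a hypothetical example, not from the original):

```shell
# 1. On your workstation: build and publish with Docker
docker build -t ghcr.io/yourlab/train-env:1.0 .
docker push ghcr.io/yourlab/train-env:1.0

# 2. On the cluster: pull the same image into a .sif — no Docker needed there
singularity pull train-env.sif docker://ghcr.io/yourlab/train-env:1.0

# 3. Run under the scheduler with GPU passthrough
singularity exec --nv train-env.sif python train.py
```

You get Docker's build tooling and registry ecosystem during development, and Singularity's rootless, reproducible execution in production on the cluster.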
🎬 Related Videos
If you learn better by watching, these are worth your time:
Docker — Full Beginner Course (3 hrs, TechWorld with Nana) Hands-on walkthrough of Docker concepts, images, volumes, and networking. The gold standard intro.
Apptainer Fundamentals — Building Your First Container Step-by-step intro to Apptainer (formerly Singularity): pulling, building, and managing containers for HPC.
Singularity/Apptainer for HPC — PRACE Workshop A concise conference talk on how Singularity fits into real HPC workflows, with practical examples.
Exploring research computing tools. Next: setting up SLURM job arrays for hyperparameter sweeps.