Docker in Docker with GitLab CI
Ben Bolton
If you’ve worked with GitLab CI for any length of time you’ve probably needed to build a Docker image as part of a pipeline. The go-to solution is Docker-in-Docker (DinD) — running Docker inside a Docker container so your CI job can build and push images. It works, but it comes with enough quirks that it’s worth understanding what’s actually happening under the hood before you commit to it.
What is Docker-in-Docker?
Normally, a GitLab CI job runs inside a container. That container has no Docker daemon — it’s just an isolated environment for running your build steps. If you want to run docker build inside that container, you need a Docker daemon to talk to.
Docker-in-Docker solves this by running a Docker daemon inside the CI container, as a service alongside your job. Your job then connects to that daemon to build images.
The alternative — and often better — approach is to mount the host’s Docker socket into the container. More on the tradeoffs later.
Setting Up DinD in GitLab CI
Runner configuration
Your GitLab Runner needs to use the docker executor and have privileged = true set in its config. DinD requires privileged mode because the inner Docker daemon needs access to the host kernel.
In /etc/gitlab-runner/config.toml:
```toml
[[runners]]
  name = "my-runner"
  executor = "docker"
  [runners.docker]
    privileged = true
    # /certs/client lets the job container read the TLS client certs
    # generated by the dind service
    volumes = ["/certs/client", "/cache"]
```
Without privileged = true the DinD service will fail to start.
Basic pipeline example
```yaml
# .gitlab-ci.yml
build-image:
  image: docker:27
  services:
    - docker:27-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker info
  script:
    - docker build -t my-app:$CI_COMMIT_SHORT_SHA .
    - docker tag my-app:$CI_COMMIT_SHORT_SHA registry.example.com/my-app:$CI_COMMIT_SHORT_SHA
    - docker push registry.example.com/my-app:$CI_COMMIT_SHORT_SHA
```
Breaking this down:
- `image: docker:27` — the job container, which includes the Docker CLI
- `services: docker:27-dind` — a sidecar container running the Docker daemon
- `DOCKER_TLS_CERTDIR: "/certs"` — enables TLS between the CLI and the daemon (required in modern versions)
- `docker info` — a useful sanity check; if this fails the daemon isn’t ready
Authenticating to a registry
To push images you need to authenticate. Use GitLab CI variables to store credentials — never hardcode them.
```yaml
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY --username $CI_REGISTRY_USER --password-stdin
```
If you’re using the GitLab Container Registry (the built-in one), GitLab provides these variables automatically:
| Variable | Value |
|---|---|
| `CI_REGISTRY` | `registry.gitlab.com` |
| `CI_REGISTRY_USER` | `gitlab-ci-token` |
| `CI_REGISTRY_PASSWORD` | Short-lived job token |
For external registries (Docker Hub, ECR, etc.), set the credentials as masked CI/CD variables in Settings → CI/CD → Variables.
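For Docker Hub, for example, a job could look like this. This is a sketch: `DOCKERHUB_USER` and `DOCKERHUB_TOKEN` are hypothetical variable names, standing in for whatever masked variables you configured.

```yaml
# Assumes DOCKERHUB_USER and DOCKERHUB_TOKEN exist as masked CI/CD
# variables (Settings → CI/CD → Variables) — names are illustrative.
push-to-hub:
  image: docker:27
  services:
    - docker:27-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    # --password-stdin keeps the token out of the job log and process list
    - echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USER" --password-stdin
  script:
    - docker push "$DOCKERHUB_USER/my-app:$CI_COMMIT_SHORT_SHA"
```

With no registry host given, `docker login` defaults to Docker Hub.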
A More Complete Pipeline
A realistic pipeline builds, tags, and pushes — but only on the right branches:
```yaml
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  IMAGE_LATEST: $CI_REGISTRY_IMAGE:latest

stages:
  - build
  - push

.docker-base:
  image: docker:27
  services:
    - docker:27-dind
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY --username $CI_REGISTRY_USER --password-stdin

build:
  extends: .docker-base
  stage: build
  script:
    - docker build --pull -t $IMAGE_TAG .
    - docker save $IMAGE_TAG | gzip > image.tar.gz
  artifacts:
    paths:
      - image.tar.gz
    expire_in: 1 hour

push:
  extends: .docker-base
  stage: push
  script:
    - docker load < image.tar.gz
    - docker push $IMAGE_TAG
    - docker tag $IMAGE_TAG $IMAGE_LATEST
    - docker push $IMAGE_LATEST
  only:
    - main
```
A few things worth noting here:
- `--pull` on `docker build` ensures the base image is always fresh, avoiding stale cache issues
- The image is saved as a tarball artifact and loaded in the push stage rather than rebuilt — faster and consistent
- The push to `latest` only happens on `main`, keeping that tag meaningful
DinD vs Docker Socket Binding
DinD isn’t the only option. The alternative is to mount the host’s Docker socket directly into the job container:
```yaml
build-image:
  image: docker:27
  variables:
    DOCKER_HOST: "unix:///var/run/docker.sock"
  script:
    - docker build -t my-app .
```
And in the runner config, mount the socket:
```toml
[runners.docker]
  volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```
Comparison
| | Docker-in-Docker | Socket binding |
|---|---|---|
| Isolation | Full — separate daemon per job | None — shares host daemon |
| Requires privileged | Yes | No |
| Image cache | Not shared between jobs | Shared across all jobs on host |
| Security risk | Lower (isolated) | Higher (full Docker access on host) |
| Complexity | Higher | Lower |
Socket binding is simpler and faster because it reuses the host’s image cache. But it gives every CI job root-equivalent access to the host — a malicious or misconfigured job could affect other jobs running on the same runner, or worse, the host itself.
DinD is slower (cold cache, daemon startup overhead) but properly isolated. For shared runners or multi-tenant environments it’s the safer choice.
At DWP we used DinD for this reason — shared runners across multiple teams means you don’t want one team’s jobs having any visibility of another’s.
Layer Caching with DinD
One of the biggest pain points with DinD is that each job starts with an empty image cache, so every docker build pulls all layers from scratch. On large images this adds significant time.
Option 1 — Registry cache
Use --cache-from to pull the previous image as a cache source before building:
```yaml
script:
  - docker pull $CI_REGISTRY_IMAGE:latest || true
  - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $IMAGE_TAG .
  - docker push $IMAGE_TAG
```
The || true prevents the job from failing if the image doesn’t exist yet (e.g. first run).
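A variation worth considering (my sketch, not part of the pipeline above): try the branch's own previous image first and fall back to `latest`, so long-lived feature branches reuse their own layers. `CI_COMMIT_REF_SLUG` is the branch name sanitised for use in tags.

```yaml
# Sketch: branch-aware cache sources. Either pull may fail on a
# first run, hence the fallbacks.
script:
  - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG || docker pull $CI_REGISTRY_IMAGE:latest || true
  - >
    docker build
    --cache-from $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
    --cache-from $CI_REGISTRY_IMAGE:latest
    --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
  - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
```

`--cache-from` can be passed multiple times; the first matching layer wins.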
Option 2 — BuildKit inline cache
With BuildKit enabled you can embed cache metadata in the image itself:
```yaml
variables:
  DOCKER_BUILDKIT: "1"
script:
  - docker pull $CI_REGISTRY_IMAGE:latest || true
  - docker build --cache-from $CI_REGISTRY_IMAGE:latest --build-arg BUILDKIT_INLINE_CACHE=1 --tag $IMAGE_TAG .
```
This is more cache-efficient, but it requires BuildKit to be enabled on every machine that builds the image.
Common Problems
“Cannot connect to the Docker daemon”
The DinD service takes a second or two to start. If your job’s first command is docker build, it can fail before the daemon is ready. Add a readiness check:
```yaml
before_script:
  - until docker info; do sleep 1; done
```
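The bare `until` loop will spin forever if the daemon never comes up at all, for instance when the runner isn't privileged. A bounded variant, sketched below, fails fast with a clear message instead:

```yaml
# Sketch: give the daemon ~30 seconds, then fail the job explicitly
# rather than hanging until the job timeout.
before_script:
  - |
    i=0
    until docker info > /dev/null 2>&1; do
      i=$((i + 1))
      if [ "$i" -ge 30 ]; then
        echo "Docker daemon did not become ready in time" >&2
        exit 1
      fi
      sleep 1
    done
```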
TLS errors
If you see certificate errors between the CLI and daemon, make sure DOCKER_TLS_CERTDIR is set consistently on both the job and the service:
```yaml
variables:
  DOCKER_TLS_CERTDIR: "/certs"
services:
  - name: docker:27-dind
    variables:
      DOCKER_TLS_CERTDIR: "/certs"
```
Older guides set DOCKER_DRIVER: overlay2 and disabled TLS entirely with DOCKER_TLS_CERTDIR: "". That still works but isn’t recommended for anything beyond local testing.
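For reference, that legacy TLS-disabled setup looks roughly like this. Treat it as a sketch for a throwaway local runner only:

```yaml
# Legacy, TLS-disabled DinD — local experimentation only.
build-image:
  image: docker:27
  services:
    - docker:27-dind
  variables:
    # An empty value disables TLS cert generation; the daemon then
    # listens on plain TCP port 2375 instead of 2376.
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: "tcp://docker:2375"
  script:
    - docker info
```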
Image not found in push stage
If you’re passing images between stages (as in the artifact approach above), make sure the docker save output is listed as an artifact and the docker load happens before any push commands. An easy mistake is referencing the wrong tag after loading.
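One way to catch that early (my addition, not part of the pipeline above) is to verify the expected tag exists immediately after loading, before any push:

```yaml
script:
  - docker load < image.tar.gz
  # Fails fast with a clear error if $IMAGE_TAG wasn't in the tarball
  - docker image inspect $IMAGE_TAG > /dev/null
  - docker push $IMAGE_TAG
```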
Summary
Docker-in-Docker in GitLab CI is well-supported and reliable once you understand the moving parts. The key things to get right:
- Privileged runners — required, no way around it for true DinD
- TLS — leave it enabled, set `DOCKER_TLS_CERTDIR` on both job and service
- Registry auth — use GitLab’s built-in variables or masked CI variables, never plaintext
- Layer caching — use `--cache-from` to avoid rebuilding everything from scratch on every run
- Socket binding — valid alternative for trusted, single-tenant runners where speed matters more than isolation
For a shared platform where you can’t control what other teams are running, DinD with proper isolation is the right default.