Docker CLI Reference Documentation

Uploaded by Jony Nguyễn

Docker CLI (docker)

Command-Line Interfaces (CLIs)


Docker run reference
Docker runs processes in isolated containers. A container is a process which runs on a host. The
host may be local or remote. When an operator executes docker run, the container process that
runs is isolated in that it has its own file system, its own networking, and its own isolated process
tree separate from the host.
This page details how to use the docker run command to define the container’s resources at
runtime.

General form
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

The docker run command must specify an IMAGE to derive the container from. An image developer
can define image defaults related to:

• detached or foreground running
• container identification
• network settings
• runtime constraints on CPU and memory

With the docker run [OPTIONS] an operator can add to or override the image defaults set by a
developer. Additionally, operators can override nearly all the defaults set by the Docker runtime
itself. The operator's ability to override image and Docker runtime defaults is why run has more
options than any other docker command.
To learn how to interpret the types of [OPTIONS], see Option types.
Note: Depending on your Docker system configuration, you may be required to preface the docker
run command with sudo. To avoid having to use sudo with the docker command, your system
administrator can create a Unix group called docker and add users to it. For more information about
this configuration, refer to the Docker installation documentation for your operating system.
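On most Linux systems, that group setup looks roughly like this (a sketch; exact steps vary by distribution, and you must log out and back in for the membership to take effect):

```shell
# Create the docker group (it may already exist) and add the current user to it.
sudo groupadd docker
sudo usermod -aG docker $USER
```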
Operator exclusive options
Only the operator (the person executing docker run) can set the following options.

• Detached vs foreground
  o Detached (-d)
  o Foreground
• Container identification
  o Name (--name)
  o PID equivalent
• IPC settings (--ipc)
• Network settings
• Restart policies (--restart)
• Clean up (--rm)
• Runtime constraints on resources
• Runtime privilege and Linux capabilities

Detached vs foreground
When starting a Docker container, you must first decide if you want to run the container in the
background in a “detached” mode or in the default foreground mode:

-d=false: Detached mode: Run container in the background, print new container id

Detached (-d)
To start a container in detached mode, you use -d=true or just -d option. By design, containers
started in detached mode exit when the root process used to run the container exits, unless you also
specify the --rm option. If you use -d with --rm, the container is removed when it exits or when the
daemon exits, whichever happens first.
Do not pass a service x start command to a detached container. For example, this command
attempts to start the nginx service.
$ docker run -d -p 80:80 my_image service nginx start

This succeeds in starting the nginx service inside the container. However, it defeats the detached-
container paradigm: the root process (service nginx start) returns, and the detached container
stops as designed. As a result, the nginx service is started but cannot be used. Instead,
to start a process such as the nginx web server, do the following:
$ docker run -d -p 80:80 my_image nginx -g 'daemon off;'
To do input/output with a detached container use network connections or shared volumes. These
are required because the container is no longer listening to the command line where docker run was
run.
To reattach to a detached container, use the docker attach command.

Foreground
In foreground mode (the default when -d is not specified), docker run can start the process in the
container and attach the console to the process’s standard input, output, and standard error. It can
even pretend to be a TTY (this is what most command line executables expect) and pass along
signals. All of that is configurable:
-a=[] : Attach to `STDIN`, `STDOUT` and/or `STDERR`
-t : Allocate a pseudo-tty
--sig-proxy=true: Proxy all received signals to the process (non-TTY mode only)
-i : Keep STDIN open even if not attached

If you do not specify -a, then Docker will attach to both stdout and stderr. You can specify to which
of the three standard streams (STDIN, STDOUT, STDERR) you'd like to connect instead, as in:
$ docker run -a stdin -a stdout -i -t ubuntu /bin/bash

For interactive processes (like a shell), you must use -i -t together in order to allocate a tty for the
container process. -i -t is often written -it as you’ll see in later examples. Specifying -t is
forbidden when the client is receiving its standard input from a pipe, as in:
$ echo test | docker run -i busybox cat

Note: A process running as PID 1 inside a container is treated specially by Linux: it ignores any
signal with the default action. So, the process will not terminate on SIGINT or SIGTERM unless it is
coded to do so.

Container identification
Name (--name)
The operator can identify a container in three ways:

Identifier type         Example value
UUID long identifier    "f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778"
UUID short identifier   "f78375b1c487"
Name                    "evil_ptolemy"

The UUID identifiers come from the Docker daemon. If you do not assign a container name with
the --name option, then the daemon generates a random string name for you. Defining a name can be
a handy way to add meaning to a container. If you specify a name, you can use it when referencing
the container within a Docker network. This works for both background and foreground Docker
containers.
Note: Containers on the default bridge network must be linked to communicate by name.

PID equivalent
Finally, to help with automation, you can have Docker write the container ID out to a file of your
choosing. This is similar to how some programs might write out their process ID to a file (you’ve
seen them as PID files):

--cidfile="": Write the container ID to the file
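A minimal sketch of using --cidfile (the path /tmp/web.cid is illustrative):

```shell
# docker run refuses to start if the cidfile already exists, so remove any stale copy.
rm -f /tmp/web.cid
docker run -d --cidfile /tmp/web.cid nginx
# The file now holds the full container ID, which is handy for scripting.
cat /tmp/web.cid
```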

Image[:tag]
While not strictly a means of identifying a container, you can specify a version of an image you’d like
to run the container with by adding image[:tag] to the command. For example, docker run
ubuntu:14.04.

Image[@digest]
Images using the v2 or later image format have a content-addressable identifier called a digest. As
long as the input used to generate the image is unchanged, the digest value is predictable and
referenceable.

The following example runs a container from the alpine image with
the sha256:9cacb71397b640eca97488cf08582ae4e4068513101088e9f96c9814bfda95e0 digest:
$ docker run alpine@sha256:9cacb71397b640eca97488cf08582ae4e4068513101088e9f96c9814bfda95e0 date

PID settings (--pid)


--pid="" : Set the PID (Process) Namespace mode for the container,
'container:<name|id>': joins another container's PID namespace
'host': use the host's PID namespace inside the container

By default, all containers have the PID namespace enabled.

The PID namespace provides separation of processes: it removes the view of the system
processes and allows process IDs to be reused, including PID 1.

In certain cases you want your container to share the host’s process namespace, basically allowing
processes within the container to see all of the processes on the system. For example, you could
build a container with debugging tools like strace or gdb, but want to use these tools when
debugging processes within the container.

Example: run htop inside a container


Create this Dockerfile:

FROM alpine:latest
RUN apk add --update htop && rm -rf /var/cache/apk/*
CMD ["htop"]

Build the Dockerfile and tag the image as myhtop:


$ docker build -t myhtop .

Use the following command to run htop inside a container:


$ docker run -it --rm --pid=host myhtop

Joining another container’s pid namespace can be used for debugging that container.

Example
Start a container running a redis server:
$ docker run --name my-redis -d redis

Debug the redis container by running another container that has strace in it:

$ docker run -it --pid=container:my-redis my_strace_docker_image bash


$ strace -p 1

UTS settings (--uts)


--uts="" : Set the UTS namespace mode for the container,
'host': use the host's UTS namespace inside the container

The UTS namespace is for setting the hostname and the domain that is visible to running processes
in that namespace. By default, all containers, including those with --network=host, have their own
UTS namespace. The host setting will result in the container using the same UTS namespace as the
host. Note that --hostname and --domainname are invalid in host UTS mode.

You may wish to share the UTS namespace with the host if you would like the hostname of the
container to change as the hostname of the host changes. A more advanced use case would be
changing the host’s hostname from a container.
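For example, a quick sketch of the difference (the alpine image is illustrative):

```shell
# Default: the container has its own UTS namespace, so it reports its own hostname.
docker run --rm alpine hostname
# With --uts=host the container reports (and could change) the host's hostname.
docker run --rm --uts=host alpine hostname
```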

IPC settings (--ipc)


--ipc="MODE" : Set the IPC mode for the container

The following values are accepted:

Value                       Description
""                          Use the daemon's default.
"none"                      Own private IPC namespace, with /dev/shm not mounted.
"private"                   Own private IPC namespace.
"shareable"                 Own private IPC namespace, with the possibility to share it with other containers.
"container:<name-or-ID>"    Join another ("shareable") container's IPC namespace.
"host"                      Use the host system's IPC namespace.

If not specified, daemon default is used, which can either be "private" or "shareable", depending
on the daemon version and configuration.

IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments,
semaphores and message queues.

Shared memory segments are used to accelerate inter-process communication at memory speed,
rather than through pipes or through the network stack. Shared memory is commonly used by
databases and custom-built (typically C/OpenMPI, C++/using boost libraries) high performance
applications for scientific computing and financial services industries. If these types of applications
are broken into multiple containers, you might need to share the IPC mechanisms of the containers,
using "shareable" mode for the main (i.e. "donor") container, and "container:<donor-name-or-ID>" for other containers.
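A minimal sketch of that donor pattern (the image and container names are hypothetical):

```shell
# Start the "donor" container with a shareable IPC namespace.
docker run -d --name donor --ipc=shareable my_shm_app
# A second container joins the donor's IPC namespace and sees its shared memory.
docker run -d --name worker --ipc=container:donor my_shm_app
```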

Network settings
--dns=[] : Set custom dns servers for the container
--network="bridge" : Connect a container to a network
'bridge': create a network stack on the default Docker bridge
'none': no networking
'container:<name|id>': reuse another container's network stack
'host': use the Docker host network stack
'<network-name>|<network-id>': connect to a user-defined
network
--network-alias=[] : Add network-scoped alias for the container
--add-host="" : Add a line to /etc/hosts (host:IP)
--mac-address="" : Sets the container's Ethernet device's MAC address
--ip="" : Sets the container's Ethernet device's IPv4 address
--ip6="" : Sets the container's Ethernet device's IPv6 address
--link-local-ip=[] : Sets one or more container's Ethernet device's link local
IPv4/IPv6 addresses

By default, all containers have networking enabled and they can make any outgoing connections.
The operator can completely disable networking with docker run --network none which disables all
incoming and outgoing networking. In cases like this, you would perform I/O through files
or STDIN and STDOUT only.

Publishing ports and linking to other containers only works with the default (bridge). The linking
feature is a legacy feature. You should always prefer using Docker network drivers over linking.

Your container will use the same DNS servers as the host by default, but you can override this
with --dns.

By default, the MAC address is generated using the IP address allocated to the container. You can
set the container's MAC address explicitly by providing a MAC address via the --mac-address
parameter (format: 12:34:56:78:9a:bc). Be aware that Docker does not check if manually
specified MAC addresses are unique.
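For example (reusing the address from the format note above):

```shell
# Assign a fixed MAC address; Docker does not verify it is unique on the network.
docker run -itd --mac-address 12:34:56:78:9a:bc busybox
```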

Supported networks:

Network               Description
none                  No networking in the container.
bridge (default)      Connect the container to the bridge via veth interfaces.
host                  Use the host's network stack inside the container.
container:<name|id>   Use the network stack of another container, specified via its name or id.
NETWORK               Connect the container to a user-created network (using the docker network create command).

NETWORK: NONE
With the network set to none, a container will not have access to any external routes. The container
will still have a loopback interface enabled, but it does not have any routes to external
traffic.

NETWORK: BRIDGE

With the network set to bridge a container will use docker’s default networking setup. A bridge is
setup on the host, commonly named docker0, and a pair of veth interfaces will be created for the
container. One side of the veth pair will remain on the host attached to the bridge while the other
side of the pair will be placed inside the container’s namespaces in addition to
the loopback interface. An IP address will be allocated for containers on the bridge's network and
traffic will be routed through this bridge to the container.

Containers can communicate via their IP addresses by default. To communicate by name, they must
be linked.

NETWORK: HOST

With the network set to host, a container will share the host's network stack and all interfaces from
the host will be available to the container. The container's hostname will match the hostname on the
host system. Note that --mac-address is invalid in host netmode. Even in host network mode a
container has its own UTS namespace by default. As such --hostname and --domainname are
allowed in host network mode and will only change the hostname and domain name inside the
container. Similar to --hostname, the --add-host, --dns, --dns-search, and --dns-option options
can be used in host network mode. These options update /etc/hosts or /etc/resolv.conf inside the
container. No changes are made to /etc/hosts and /etc/resolv.conf on the host.
Compared to the default bridge mode, the host mode gives significantly better networking
performance since it uses the host’s native networking stack whereas the bridge has to go through
one level of virtualization through the docker daemon. It is recommended to run containers in this
mode when their networking performance is critical, for example, a production Load Balancer or a
High Performance Web Server.
Note: --network="host" gives the container full access to local system services such as D-bus and
is therefore considered insecure.
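A quick way to see the effect of host mode (the alpine image is illustrative):

```shell
# The container sees the host's interfaces directly; no port publishing is needed,
# since a server in the container binds straight to the host's network stack.
docker run --rm --network host alpine ip addr
```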

NETWORK: CONTAINER

With the network set to container, a container will share the network stack of another container. The
other container's name must be provided in the format of --network container:<name|id>. Note
that --add-host, --hostname, --dns, --dns-search, --dns-option, and --mac-address are invalid
in container netmode, and --publish, --publish-all, and --expose are also invalid
in container netmode.
Example: run a Redis container with Redis binding to localhost, then run the redis-cli
command and connect to the Redis server over the localhost interface.

$ docker run -d --name redis example/redis --bind 127.0.0.1


$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1

USER-DEFINED NETWORK

You can create a network using a Docker network driver or an external network driver plugin. You
can connect multiple containers to the same network. Once connected to a user-defined network,
the containers can communicate easily using only another container’s IP address or name.

For overlay networks or custom plugins that support multi-host connectivity, containers connected to
the same multi-host network but launched from different Engines can also communicate in this way.
The following example creates a network using the built-in bridge network driver and runs a
container in the created network:
$ docker network create -d bridge my-net
$ docker run --network=my-net -itd --name=container3 busybox

Managing /etc/hosts
Your container will have lines in /etc/hosts which define the hostname of the container itself as well
as localhost and a few other common things. The --add-host flag can be used to add additional
lines to /etc/hosts.
$ docker run -it --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts
172.17.0.22 09d03f76bf2c
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
86.75.30.9 db-static
If a container is connected to the default bridge network and linked with other containers, then the
container’s /etc/hosts file is updated with the linked container’s name.
Note: Since Docker may live-update the container's /etc/hosts file, there may be situations when
processes inside the container end up reading an empty or incomplete /etc/hosts file. In most
cases, retrying the read should fix the problem.

Restart policies (--restart)


Using the --restart flag on docker run you can specify a restart policy for how a container should or
should not be restarted on exit.
When a restart policy is active on a container, it will be shown as either Up or Restarting in docker
ps. It can also be useful to use docker events to see the restart policy in effect.

Docker supports the following restart policies:

Policy                     Result
no                         Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]   Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always                     Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
unless-stopped             Always restart the container regardless of the exit status, including on daemon startup, except if the container was put into a stopped state before the Docker daemon was stopped.

An ever-increasing delay (double the previous delay, starting at 100 milliseconds) is added before
each restart to prevent flooding the server. This means the daemon will wait for 100 ms, then 200
ms, 400, 800, 1600, and so on, until either the on-failure limit is hit, or you docker
stop or docker rm -f the container.

If a container is successfully restarted (the container is started and runs for at least 10 seconds), the
delay is reset to its default value of 100 ms.

You can specify the maximum number of times Docker will try to restart the container when using
the on-failure policy. The default is that Docker will try forever to restart the container. The number
of (attempted) restarts for a container can be obtained via docker inspect. For example, to get the
number of restarts for container "my-container":
$ docker inspect -f "{{ .RestartCount }}" my-container
# 2

Or, to get the last time the container was (re)started;

$ docker inspect -f "{{ .State.StartedAt }}" my-container


# 2015-03-04T23:47:07.691840179Z

Combining --restart (restart policy) with the --rm (clean up) flag results in an error. On container
restart, attached clients are disconnected. See the examples on using the --rm (clean up) flag later in
this page.

Examples
$ docker run --restart=always redis

This will run the redis container with a restart policy of always so that if the container exits, Docker
will restart it.
$ docker run --restart=on-failure:10 redis

This will run the redis container with a restart policy of on-failure and a maximum restart count of
10. If the redis container exits with a non-zero exit status more than 10 times in a row Docker will
abort trying to restart the container. Providing a maximum restart limit is only valid for the on-
failure policy.

Exit Status
The exit code from docker run gives information about why the container failed to run or why it
exited. When docker run exits with a non-zero code, the exit codes follow the chroot standard, see
below:

125 if the error is with Docker daemon itself

$ docker run --foo busybox; echo $?


# flag provided but not defined: --foo
See 'docker run --help'.
125

126 if the contained command cannot be invoked

$ docker run busybox /etc; echo $?


# docker: Error response from daemon: Container command '/etc' could not be invoked.
126

127 if the contained command cannot be found

$ docker run busybox foo; echo $?


# docker: Error response from daemon: Container command 'foo' not found or does not
exist.
127

Exit code of contained command otherwise

$ docker run busybox /bin/sh -c 'exit 3'; echo $?


# 3

Clean up (--rm)
By default a container’s file system persists even after the container exits. This makes debugging a
lot easier (since you can inspect the final state) and you retain all your data by default. But if you are
running short-term foreground processes, these container file systems can really pile up. If instead
you’d like Docker to automatically clean up the container and remove the file system when the
container exits, you can add the --rm flag:
--rm=false: Automatically remove the container when it exits

Note: When you set the --rm flag, Docker also removes the anonymous volumes associated with the
container when the container is removed. This is similar to running docker rm -v my-container.
Only volumes that are specified without a name are removed. For example, with docker run --rm
-v /foo -v awesome:/bar busybox top, the volume for /foo will be removed, but the volume
for /bar will not. Volumes inherited via --volumes-from will be removed with the same logic -- if the
original volume was specified with a name it will not be removed.

Security configuration
--security-opt="label=user:USER" : Set the label user for the container
--security-opt="label=role:ROLE" : Set the label role for the container
--security-opt="label=type:TYPE" : Set the label type for the container
--security-opt="label=level:LEVEL" : Set the label level for the container
--security-opt="label=disable" : Turn off label confinement for the container
--security-opt="apparmor=PROFILE" : Set the apparmor profile to be applied to the
container
--security-opt="no-new-privileges:true|false" : Disable/enable container processes
from gaining new privileges
--security-opt="seccomp=unconfined" : Turn off seccomp confinement for the container
--security-opt="seccomp=profile.json": JSON file of allowed (whitelisted) syscalls to be
used as a seccomp filter

You can override the default labeling scheme for each container by specifying the --security-opt
flag. Specifying the level in the following command allows you to share the same content
between containers.
$ docker run --security-opt label=level:s0:c100,c200 -it fedora bash

Note: Automatic translation of MLS labels is not currently supported.

To disable the security labeling for this container versus running with the --privileged flag, use the
following command:
$ docker run --security-opt label=disable -it fedora bash

If you want a tighter security policy on the processes within a container, you can specify an alternate
type for the container. You could run a container that is only allowed to listen on Apache ports by
executing the following command:

$ docker run --security-opt label=type:svirt_apache_t -it centos bash

Note: You would have to write policy defining a svirt_apache_t type.

If you want to prevent your container processes from gaining additional privileges, you can execute
the following command:

$ docker run --security-opt no-new-privileges -it centos bash


This means that commands that raise privileges, such as su or sudo, will no longer work. It also
causes any seccomp filters to be applied later, after privileges have been dropped, which may mean
you can have a more restrictive set of filters. For more details, see the kernel documentation.

Specify an init process


You can use the --init flag to indicate that an init process should be used as the PID 1 in the
container. Specifying an init process ensures the usual responsibilities of an init system, such as
reaping zombie processes, are performed inside the created container.
The default init process used is the first docker-init executable found in the system path of the
Docker daemon process. This docker-init binary, included in the default installation, is backed
by tini.
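A minimal sketch:

```shell
# docker-init (backed by tini) runs as PID 1 and reaps zombies;
# the workload (here, sleep) runs as its child.
docker run -d --init busybox sleep 3600
```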

Specify custom cgroups


Using the --cgroup-parent flag, you can pass a specific cgroup to run a container in. This allows
you to create and manage cgroups on your own. You can define custom resources for those
cgroups and put containers under a common parent group.
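A minimal sketch (the cgroup name is illustrative):

```shell
# Place the container's cgroups under a custom parent cgroup.
docker run -d --cgroup-parent=/my-parent busybox top
```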

Runtime constraints on resources


The operator can also adjust the performance parameters of the container:

Option                     Description
-m, --memory=""            Memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g. Minimum is 4M.
--memory-swap=""           Total memory limit (memory + swap, format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g.
--memory-reservation=""    Memory soft limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g.
--kernel-memory=""         Kernel memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g. Minimum is 4M.
-c, --cpu-shares=0         CPU shares (relative weight)
--cpus=0.000               Number of CPUs. Number is a fractional number. 0.000 means no limit.
--cpu-period=0             Limit the CPU CFS (Completely Fair Scheduler) period
--cpuset-cpus=""           CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems=""           Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
--cpu-quota=0              Limit the CPU CFS (Completely Fair Scheduler) quota
--cpu-rt-period=0          Limit the CPU real-time period, in microseconds. Requires parent cgroups be set and cannot be higher than parent. Also check rtprio ulimits.
--cpu-rt-runtime=0         Limit the CPU real-time runtime, in microseconds. Requires parent cgroups be set and cannot be higher than parent. Also check rtprio ulimits.
--blkio-weight=0           Block IO weight (relative weight), accepts a weight value between 10 and 1000.
--blkio-weight-device=""   Block IO weight (relative device weight, format: DEVICE_NAME:WEIGHT)
--device-read-bps=""       Limit read rate from a device (format: <device-path>:<number>[<unit>]). Number is a positive integer. Unit can be one of kb, mb, or gb.
--device-write-bps=""      Limit write rate to a device (format: <device-path>:<number>[<unit>]). Number is a positive integer. Unit can be one of kb, mb, or gb.
--device-read-iops=""      Limit read rate (IO per second) from a device (format: <device-path>:<number>). Number is a positive integer.
--device-write-iops=""     Limit write rate (IO per second) to a device (format: <device-path>:<number>). Number is a positive integer.
--oom-kill-disable=false   Whether to disable OOM Killer for the container or not.
--oom-score-adj=0          Tune container's OOM preferences (-1000 to 1000)
--memory-swappiness=""     Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
--shm-size=""              Size of /dev/shm. The format is <number><unit>. Number must be greater than 0. Unit is optional and can be b (bytes), k (kilobytes), m (megabytes), or g (gigabytes). If you omit the unit, the system uses bytes. If you omit the size entirely, the system uses 64m.
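Several of these flags are commonly combined on a single command line. A sketch (the values are illustrative):

```shell
# Cap the container at 512 MB of RAM and 1.5 CPUs, pinned to CPUs 0 and 1.
docker run -it -m 512m --cpus=1.5 --cpuset-cpus=0,1 ubuntu:14.04 /bin/bash
```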

User memory constraints


We have four ways to set user memory usage:

Option                                   Result
memory=inf, memory-swap=inf (default)    There is no memory limit for the container. The container can use as much memory as needed.
memory=L<inf, memory-swap=inf            (specify memory and set memory-swap as -1) The container is not allowed to use more than L bytes of memory, but can use as much swap as is needed (if the host supports swap memory).
memory=L<inf, memory-swap=2*L            (specify memory without memory-swap) The container is not allowed to use more than L bytes of memory; swap plus memory usage is double of that.
memory=L<inf, memory-swap=S<inf, L<=S    (specify both memory and memory-swap) The container is not allowed to use more than L bytes of memory; swap plus memory usage is limited by S.

Examples:

$ docker run -it ubuntu:14.04 /bin/bash

We set nothing about memory: the processes in the container can use as much memory
and swap memory as they need.

$ docker run -it -m 300M --memory-swap -1 ubuntu:14.04 /bin/bash

We set a memory limit and disabled the swap memory limit: the processes in the container can
use 300M of memory and as much swap memory as they need (if the host supports swap memory).

$ docker run -it -m 300M ubuntu:14.04 /bin/bash

We set the memory limit only: the processes in the container can use 300M of memory and
300M of swap memory. By default, the total virtual memory size (--memory-swap) is set to double
the memory limit; in this case, memory + swap would be 2*300M, so processes can use 300M of swap
memory as well.
$ docker run -it -m 300M --memory-swap 1G ubuntu:14.04 /bin/bash

We set both memory and swap memory, so the processes in the container can use 300M of memory
and 700M of swap memory.

Memory reservation is a kind of memory soft limit that allows for greater sharing of memory. Under
normal circumstances, containers can use as much of the memory as needed and are constrained
only by the hard limits set with the -m/--memory option. When memory reservation is set, Docker
detects memory contention or low memory and forces containers to restrict their consumption to a
reservation limit.

Always set the memory reservation value below the hard limit, otherwise the hard limit takes
precedence. A reservation of 0 is the same as setting no reservation. By default (without reservation
set), memory reservation is the same as the hard memory limit.

Memory reservation is a soft-limit feature and does not guarantee the limit won’t be exceeded.
Instead, the feature attempts to ensure that, when memory is heavily contended for, memory is
allocated based on the reservation hints/setup.

The following example limits the memory (-m) to 500M and sets the memory reservation to 200M.
$ docker run -it -m 500M --memory-reservation 200M ubuntu:14.04 /bin/bash

Under this configuration, when the container consumes more than 200M but less than 500M of
memory, the next system memory reclaim attempts to shrink container memory below 200M.

The following example sets the memory reservation to 1G without a hard memory limit.

$ docker run -it --memory-reservation 1G ubuntu:14.04 /bin/bash

The container can use as much memory as it needs. The memory reservation setting ensures the
container doesn't consume too much memory for a long time, because every memory reclaim shrinks
the container's consumption to the reservation.

By default, the kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change
this behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where
you have also set the -m/--memory option. If the -m flag is not set, this can result in the host running
out of memory, requiring the kernel to kill the host's system processes to free memory.

The following example limits the memory to 100M and disables the OOM killer for this container:
$ docker run -it -m 100M --oom-kill-disable ubuntu:14.04 /bin/bash

The following example illustrates a dangerous way to use the flag:

$ docker run -it --oom-kill-disable ubuntu:14.04 /bin/bash

The container has unlimited memory, which can cause the host to run out of memory and require killing
system processes to free memory. The --oom-score-adj parameter adjusts the priority with which
containers are killed when the system is out of memory: negative scores make a container less likely
to be killed, and positive scores more likely.

Kernel memory constraints


Kernel memory is fundamentally different from user memory because kernel memory can't be swapped
out. The inability to swap makes it possible for a container to block system services by consuming
too much kernel memory. Kernel memory includes:

 stack pages
 slab pages
 sockets memory pressure
 tcp memory pressure

You can set a kernel memory limit to constrain these kinds of memory. For example, every process
consumes some stack pages. By limiting kernel memory, you can prevent new processes from being
created when the kernel memory usage is too high.

Kernel memory is never completely independent of user memory. Instead, you limit kernel memory
in the context of the user memory limit. Assume “U” is the user memory limit and “K” the kernel limit.
There are three possible ways to set limits:

Option                       Result

U != 0, K = inf (default)    This is the standard memory limitation mechanism
                             already present before using kernel memory. Kernel
                             memory is completely ignored.

U != 0, K < U                Kernel memory is a subset of the user memory. This
                             setup is useful in deployments where the total
                             amount of memory per-cgroup is overcommitted.
                             Overcommitting kernel memory limits is definitely
                             not recommended, since the box can still run out of
                             non-reclaimable memory. In this case, you can
                             configure K so that the sum of all groups is never
                             greater than the total memory. Then, freely set U
                             at the expense of the system's service quality.

U != 0, K > U                Since kernel memory charges are also fed to the
                             user counter, reclamation is triggered for the
                             container for both kinds of memory. This
                             configuration gives the admin a unified view of
                             memory. It is also useful for people who just want
                             to track kernel memory usage.

Examples:

$ docker run -it -m 500M --kernel-memory 50M ubuntu:14.04 /bin/bash

We set both memory and kernel memory, so the processes in the container can use 500M of memory in
total, of which at most 50M can be kernel memory.

$ docker run -it --kernel-memory 50M ubuntu:14.04 /bin/bash

We set kernel memory without -m, so the processes in the container can use as much memory as
they want, but no more than 50M of it can be kernel memory.
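The three U/K configurations described above can be sketched as a small classifier. This is an illustration, not Docker code; limits are in bytes and `math.inf` stands in for "unlimited".

```python
import math

def kernel_memory_mode(user_limit, kernel_limit):
    """Classify the three U/K configurations described above."""
    if kernel_limit == math.inf:
        return "kernel memory ignored"
    if kernel_limit < user_limit:
        return "kernel memory is a subset of user memory"
    return "unified view of user and kernel memory"

MB = 1024 * 1024
print(kernel_memory_mode(500 * MB, math.inf))  # kernel memory ignored
print(kernel_memory_mode(500 * MB, 50 * MB))   # kernel memory is a subset of user memory
```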

Swappiness constraint
By default, a container’s kernel can swap out a percentage of anonymous pages. To set this
percentage for a container, specify a --memory-swappiness value between 0 and 100. A value of 0
turns off anonymous page swapping. A value of 100 sets all anonymous pages as swappable. By
default, if you do not set --memory-swappiness, the value is inherited from the parent cgroup.

For example, you can set:

$ docker run -it --memory-swappiness=0 ubuntu:14.04 /bin/bash

Setting the --memory-swappiness option is helpful when you want to retain the container’s working
set and to avoid swapping performance penalties.
CPU share constraint
By default, all containers get the same proportion of CPU cycles. This proportion can be modified by
changing the container’s CPU share weighting relative to the weighting of all other running
containers.

To modify the proportion from the default of 1024, use the -c or --cpu-shares flag to set the
weighting to 2 or higher. If 0 is set, the system will ignore the value and use the default of 1024.

The proportion will only apply when CPU-intensive processes are running. When tasks in one
container are idle, other containers can use the left-over CPU time. The actual amount of CPU time
will vary depending on the number of containers running on the system.

For example, consider three containers, one has a cpu-share of 1024 and two others have a cpu-
share setting of 512. When processes in all three containers attempt to use 100% of CPU, the first
container would receive 50% of the total CPU time. If you add a fourth container with a cpu-share of
1024, the first container only gets 33% of the CPU. The remaining containers receive 16.5%, 16.5%
and 33% of the CPU.
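The proportional split described above can be sketched in a few lines. This is an illustration of the arithmetic only, not part of Docker: each container's --cpu-shares weight is divided by the total weight of all busy containers.

```python
def cpu_share_split(shares):
    """Relative CPU time each container receives when all are CPU-bound."""
    total = sum(shares)
    return [s / total for s in shares]

# Three containers with weights 1024, 512, 512:
print(cpu_share_split([1024, 512, 512]))        # [0.5, 0.25, 0.25]
# Adding a fourth container with weight 1024 dilutes every share:
print(cpu_share_split([1024, 512, 512, 1024]))
```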

On a multi-core system, the shares of CPU time are distributed over all CPU cores. Even if a
container is limited to less than 100% of CPU time, it can use 100% of each individual CPU core.

For example, consider a system with more than three cores. If you start one container {C0} with
-c=512 running one process, and another container {C1} with -c=1024 running two processes, this can
result in the following division of CPU shares:
PID container CPU CPU share
100 {C0} 0 100% of CPU0
101 {C1} 1 100% of CPU1
102 {C1} 2 100% of CPU2

CPU period constraint


The default CPU CFS (Completely Fair Scheduler) period is 100ms. Use --cpu-period to set
the period of CPUs to limit the container's CPU usage. --cpu-period is usually used together
with --cpu-quota.

Examples:

$ docker run -it --cpu-period=50000 --cpu-quota=25000 ubuntu:14.04 /bin/bash


If there is 1 CPU, this means the container can get 50% CPU worth of run-time every 50ms.

In addition to using --cpu-period and --cpu-quota for setting CPU period constraints, it is possible to
specify --cpus with a decimal number to achieve the same purpose. For example, if there is 1 CPU,
then --cpus=0.5 will achieve the same result as setting --cpu-period=50000 and --cpu-
quota=25000 (50% CPU).
The default value for --cpus is 0.000, which means there is no limit.

For more information, see the CFS documentation on bandwidth limiting.
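The equivalence between --cpu-period/--cpu-quota and --cpus noted above is just the ratio of quota to period. The helper below is an illustrative restatement, not Docker code:

```python
def cpu_fraction(period_us, quota_us):
    """Effective number of CPUs granted by CFS bandwidth control:
    quota divided by period. A quota of 0 (or less) means no limit."""
    if quota_us <= 0:
        return None
    return quota_us / period_us

print(cpu_fraction(50000, 25000))    # 0.5 -> same effect as --cpus=0.5
print(cpu_fraction(100000, 200000))  # 2.0 -> two full CPUs
```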

Cpuset constraint
We can restrict the set of CPUs on which a container's processes are allowed to execute.

Examples:

$ docker run -it --cpuset-cpus="1,3" ubuntu:14.04 /bin/bash

This means processes in the container can be executed on cpu 1 and cpu 3.

$ docker run -it --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash

This means processes in the container can be executed on cpu 0, cpu 1 and cpu 2.

We can similarly restrict the memory nodes (mems) from which a container is allowed to allocate memory. This is only effective on NUMA systems.

Examples:

$ docker run -it --cpuset-mems="1,3" ubuntu:14.04 /bin/bash

This example restricts the processes in the container to only use memory from memory nodes 1 and
3.

$ docker run -it --cpuset-mems="0-2" ubuntu:14.04 /bin/bash

This example restricts the processes in the container to only use memory from memory nodes 0, 1
and 2.

CPU quota constraint


The --cpu-quota flag limits the container's CPU usage. The default 0 value allows the container to
take 100% of a CPU resource (1 CPU). The CFS (Completely Fair Scheduler) handles resource
allocation for executing processes and is the default Linux scheduler used by the kernel. Set this value
to 50000 to limit the container to 50% of a CPU resource. For multiple CPUs, adjust the --cpu-
quota as necessary. For more information, see the CFS documentation on bandwidth limiting.

Block IO bandwidth (Blkio) constraint


By default, all containers get the same proportion of block IO bandwidth (blkio). This proportion is
500. To modify this proportion, change the container’s blkio weight relative to the weighting of all
other running containers using the --blkio-weight flag.
Note: The blkio weight setting is only available for direct IO. Buffered IO is not currently supported.

The --blkio-weight flag can set the weighting to a value between 10 to 1000. For example, the
commands below create two containers with different blkio weight:
$ docker run -it --name c1 --blkio-weight 300 ubuntu:14.04 /bin/bash
$ docker run -it --name c2 --blkio-weight 600 ubuntu:14.04 /bin/bash

If you do block IO in the two containers at the same time, by, for example:

$ time dd if=/mnt/zerofile of=test.out bs=1M count=1024 oflag=direct

You’ll find that the proportion of time is the same as the proportion of blkio weights of the two
containers.
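As a rough model of the behaviour described above (illustrative only; actual throughput depends on the device and workload), the expected split of direct-IO bandwidth is proportional to the --blkio-weight values:

```python
def blkio_split(weights):
    """Approximate share of direct-IO bandwidth per container,
    proportional to each container's --blkio-weight."""
    total = sum(weights)
    return [w / total for w in weights]

# c1 (weight 300) vs c2 (weight 600): c2 gets roughly twice c1's bandwidth
print(blkio_split([300, 600]))
```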

The --blkio-weight-device="DEVICE_NAME:WEIGHT" flag sets a specific device weight.


The DEVICE_NAME:WEIGHT is a string containing a colon-separated device name and weight. For
example, to set /dev/sda device weight to 200:
$ docker run -it \
--blkio-weight-device "/dev/sda:200" \
ubuntu

If you specify both the --blkio-weight and --blkio-weight-device, Docker uses the --blkio-
weight as the default weight and uses --blkio-weight-device to override this default with a new
value on a specific device. The following example uses a default weight of 300 and overrides this
default on /dev/sda setting that weight to 200:
$ docker run -it \
--blkio-weight 300 \
--blkio-weight-device "/dev/sda:200" \
ubuntu

The --device-read-bps flag limits the read rate (bytes per second) from a device. For example, this
command creates a container and limits the read rate to 1mb per second from /dev/sda:
$ docker run -it --device-read-bps /dev/sda:1mb ubuntu

The --device-write-bps flag limits the write rate (bytes per second) to a device. For example, this
command creates a container and limits the write rate to 1mb per second for /dev/sda:
$ docker run -it --device-write-bps /dev/sda:1mb ubuntu

Both flags take limits in the <device-path>:<limit>[unit] format. Both read and write rates must be
a positive integer. You can specify the rate in kb (kilobytes), mb (megabytes), or gb (gigabytes).
The --device-read-iops flag limits read rate (IO per second) from a device. For example, this
command creates a container and limits the read rate to 1000 IO per second from /dev/sda:
$ docker run -ti --device-read-iops /dev/sda:1000 ubuntu

The --device-write-iops flag limits write rate (IO per second) to a device. For example, this
command creates a container and limits the write rate to 1000 IO per second to /dev/sda:
$ docker run -ti --device-write-iops /dev/sda:1000 ubuntu

Both flags take limits in the <device-path>:<limit> format. Both read and write rates must be a
positive integer.

Additional groups
--group-add: Add additional groups to run as

By default, the docker container process runs with the supplementary groups looked up for the
specified user. To add more groups to that list, use this flag:

$ docker run --rm --group-add audio --group-add nogroup --group-add 777 busybox id
uid=0(root) gid=0(root) groups=10(wheel),29(audio),99(nogroup),777

Runtime privilege and Linux capabilities


--cap-add: Add Linux capabilities
--cap-drop: Drop Linux capabilities
--privileged=false: Give extended privileges to this container
--device=[]: Allows you to run devices inside the container without the --privileged
flag.

By default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon
inside a Docker container. This is because by default a container is not allowed to access any
devices, but a “privileged” container is given access to all devices (see the documentation
on cgroups devices).

When the operator executes docker run --privileged, Docker will enable access to all devices on
the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all
the same access to the host as processes running outside containers on the host. Additional
information about running with --privileged is available on the Docker Blog.
If you want to limit access to a specific device or devices you can use the --device flag. It allows you
to specify one or more devices that will be accessible within the container.
$ docker run --device=/dev/snd:/dev/snd ...

By default, the container will be able to read, write, and mknod these devices. This can be overridden
using a third :rwm set of options to each --device flag:
$ docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc

Command (m for help): q

$ docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc
You will not be able to write the partition table.

Command (m for help): q

$ docker run --device=/dev/sda:/dev/xvdc:w --rm -it ubuntu fdisk /dev/xvdc
crash....

$ docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc
fdisk: unable to open /dev/xvdc: Operation not permitted

In addition to --privileged, the operator can have fine-grained control over capabilities using --
cap-add and --cap-drop. By default, Docker keeps a default list of capabilities. The
following table lists the Linux capability options which are allowed by default and can be dropped.

Capability Key      Capability Description

SETPCAP             Modify process capabilities.
MKNOD               Create special files using mknod(2).
AUDIT_WRITE         Write records to kernel auditing log.
CHOWN               Make arbitrary changes to file UIDs and GIDs (see chown(2)).
NET_RAW             Use RAW and PACKET sockets.
DAC_OVERRIDE        Bypass file read, write, and execute permission checks.
FOWNER              Bypass permission checks on operations that normally require the
                    file system UID of the process to match the UID of the file.
FSETID              Don't clear set-user-ID and set-group-ID permission bits when a
                    file is modified.
KILL                Bypass permission checks for sending signals.
SETGID              Make arbitrary manipulations of process GIDs and supplementary
                    GID list.
SETUID              Make arbitrary manipulations of process UIDs.
NET_BIND_SERVICE    Bind a socket to internet domain privileged ports (port numbers
                    less than 1024).
SYS_CHROOT          Use chroot(2), change root directory.
SETFCAP             Set file capabilities.

The next table shows the capabilities which are not granted by default and may be added.

Capability Key      Capability Description

SYS_MODULE          Load and unload kernel modules.
SYS_RAWIO           Perform I/O port operations (iopl(2) and ioperm(2)).
SYS_PACCT           Use acct(2), switch process accounting on or off.
SYS_ADMIN           Perform a range of system administration operations.
SYS_NICE            Raise process nice value (nice(2), setpriority(2)) and change
                    the nice value for arbitrary processes.
SYS_RESOURCE        Override resource limits.
SYS_TIME            Set system clock (settimeofday(2), stime(2), adjtimex(2)); set
                    real-time (hardware) clock.
SYS_TTY_CONFIG      Use vhangup(2); employ various privileged ioctl(2) operations
                    on virtual terminals.
AUDIT_CONTROL       Enable and disable kernel auditing; change auditing filter
                    rules; retrieve auditing status and filtering rules.
MAC_ADMIN           Allow MAC configuration or state changes. Implemented for the
                    Smack LSM.
MAC_OVERRIDE        Override Mandatory Access Control (MAC). Implemented for the
                    Smack Linux Security Module (LSM).
NET_ADMIN           Perform various network-related operations.
SYSLOG              Perform privileged syslog(2) operations.
DAC_READ_SEARCH     Bypass file read permission checks and directory read and
                    execute permission checks.
LINUX_IMMUTABLE     Set the FS_APPEND_FL and FS_IMMUTABLE_FL i-node flags.
NET_BROADCAST       Make socket broadcasts, and listen to multicasts.
IPC_LOCK            Lock memory (mlock(2), mlockall(2), mmap(2), shmctl(2)).
IPC_OWNER           Bypass permission checks for operations on System V IPC objects.
SYS_PTRACE          Trace arbitrary processes using ptrace(2).
SYS_BOOT            Use reboot(2) and kexec_load(2), reboot and load a new kernel
                    for later execution.
LEASE               Establish leases on arbitrary files (see fcntl(2)).
WAKE_ALARM          Trigger something that will wake up the system.
BLOCK_SUSPEND       Employ features that can block system suspend.
Further reference information is available on the capabilities(7) - Linux man page

Both flags support the value ALL, so if the operator wants to have all capabilities but MKNOD they could
use:
$ docker run --cap-add=ALL --cap-drop=MKNOD ...

For interacting with the network stack, instead of using --privileged they should use --cap-
add=NET_ADMIN to modify the network interfaces.

$ docker run -it --rm ubuntu:14.04 ip link add dummy0 type dummy
RTNETLINK answers: Operation not permitted
$ docker run -it --rm --cap-add=NET_ADMIN ubuntu:14.04 ip link add dummy0 type dummy

To mount a FUSE based filesystem, you need to combine both --cap-add and --device:
$ docker run --rm -it --cap-add SYS_ADMIN sshfs sshfs sven@10.10.10.20:/home/sven /mnt
fuse: failed to open /dev/fuse: Operation not permitted
$ docker run --rm -it --device /dev/fuse sshfs sshfs sven@10.10.10.20:/home/sven /mnt
fusermount: mount failed: Operation not permitted
$ docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs
# sshfs sven@10.10.10.20:/home/sven /mnt
The authenticity of host '10.10.10.20 (10.10.10.20)' can't be established.
ECDSA key fingerprint is 25:34:85:75:25:b0:17:46:05:19:04:93:b5:dd:5f:c6.
Are you sure you want to continue connecting (yes/no)? yes
sven@10.10.10.20's password:
root@30aa0cfaf1b5:/# ls -la /mnt/src/docker
total 1516
drwxrwxr-x 1 1000 1000 4096 Dec 4 06:08 .
drwxrwxr-x 1 1000 1000 4096 Dec 4 11:46 ..
-rw-rw-r-- 1 1000 1000 16 Oct 8 00:09 .dockerignore
-rwxrwxr-x 1 1000 1000 464 Oct 8 00:09 .drone.yml
drwxrwxr-x 1 1000 1000 4096 Dec 4 06:11 .git
-rw-rw-r-- 1 1000 1000 461 Dec 4 06:08 .gitignore
....

Since Docker 1.12, the default seccomp profile adjusts to the selected capabilities, in order to allow
use of facilities permitted by those capabilities, so you should not need to adjust it. In Docker 1.10
and 1.11 this did not happen, and it may be necessary to use a custom seccomp profile or use --
security-opt seccomp=unconfined when adding capabilities.

Logging drivers (--log-driver)


The container can have a different logging driver than the Docker daemon. Use the --log-
driver=VALUE with the docker run command to configure the container’s logging driver. The
following options are supported:
Driver     Description

none       Disables any logging for the container. docker logs won't be available
           with this driver.
json-file  Default logging driver for Docker. Writes JSON messages to file. No
           logging options are supported for this driver.
syslog     Syslog logging driver for Docker. Writes log messages to syslog.
journald   Journald logging driver for Docker. Writes log messages to journald.
gelf       Graylog Extended Log Format (GELF) logging driver for Docker. Writes
           log messages to a GELF endpoint like Graylog or Logstash.
fluentd    Fluentd logging driver for Docker. Writes log messages to fluentd
           (forward input).
awslogs    Amazon CloudWatch Logs logging driver for Docker. Writes log messages
           to Amazon CloudWatch Logs.
splunk     Splunk logging driver for Docker. Writes log messages to splunk using
           HTTP Event Collector.

The docker logs command is available only for the json-file and journald logging drivers. For
detailed information on working with logging drivers, see Configure logging drivers.
Overriding Dockerfile image defaults
When a developer builds an image from a Dockerfile or when she commits it, the developer can set
a number of default parameters that take effect when the image starts up as a container.

Four of the Dockerfile commands cannot be overridden at runtime: FROM, MAINTAINER, RUN, and ADD.
Everything else has a corresponding override in docker run. We’ll go through what the developer
might have set in each Dockerfile instruction and how the operator can override that setting.

 CMD (Default Command or Options)


 ENTRYPOINT (Default Command to Execute at Runtime)
 EXPOSE (Incoming Ports)
 ENV (Environment Variables)
 HEALTHCHECK
 VOLUME (Shared Filesystems)
 USER
 WORKDIR

CMD (default command or options)


Recall the optional COMMAND in the Docker command line:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

This command is optional because the person who created the IMAGE may have already provided a
default COMMAND using the Dockerfile CMD instruction. As the operator (the person running a container
from the image), you can override that CMD instruction just by specifying a new COMMAND.
If the image also specifies an ENTRYPOINT then the CMD or COMMAND get appended as arguments to
the ENTRYPOINT.

ENTRYPOINT (default command to execute at runtime)


--entrypoint="": Overwrite the default entrypoint set by the image

The ENTRYPOINT of an image is similar to a COMMAND because it specifies what executable to run when
the container starts, but it is (purposely) more difficult to override. The ENTRYPOINT gives a container
its default nature or behavior, so that when you set an ENTRYPOINT you can run the container as if it
were that binary, complete with default options, and you can pass in more options via the COMMAND.
But, sometimes an operator may want to run something else inside the container, so you can
override the default ENTRYPOINT at runtime by using a string to specify the new ENTRYPOINT. Here is
an example of how to run a shell in a container that has been set up to automatically run something
else (like /usr/bin/redis-server):
$ docker run -it --entrypoint /bin/bash example/redis

or two examples of how to pass more parameters to that ENTRYPOINT:

$ docker run -it --entrypoint /bin/bash example/redis -c ls -l


$ docker run -it --entrypoint /usr/bin/redis-cli example/redis --help

You can reset a container's entrypoint by passing an empty string, for example:

$ docker run -it --entrypoint="" mysql bash

Note: Passing --entrypoint will clear out any default command set on the image (i.e.
any CMD instruction in the Dockerfile used to build it).

EXPOSE (incoming ports)


The following run command options work with container networking:
--expose=[]: Expose a port or a range of ports inside the container.
             These are additional to those exposed by the `EXPOSE` instruction
-P         : Publish all exposed ports to the host interfaces
-p=[]      : Publish a container's port or a range of ports to the host
             format: ip:hostPort:containerPort | ip::containerPort |
                     hostPort:containerPort | containerPort
             Both hostPort and containerPort can be specified as a
             range of ports. When specifying ranges for both, the
             number of container ports in the range must match the
             number of host ports in the range, for example:
             -p 1234-1236:1234-1236/tcp

             When specifying a range for hostPort only, the
             containerPort must not be a range. In this case the
             container port is published somewhere within the
             specified hostPort range. (e.g., `-p 1234-1236:1234/tcp`)

             (use 'docker port' to see the actual mapping)

--link=""  : Add link to another container (<name or id>:alias or <name or id>)

With the exception of the EXPOSE directive, an image developer hasn’t got much control over
networking. The EXPOSE instruction defines the initial incoming ports that provide services. These
ports are available to processes inside the container. An operator can use the --expose option to
add to the exposed ports.
To expose a container's internal port, an operator can start the container with the -P or -p flag. The
exposed port is accessible on the host and the ports are available to any client that can reach the
host.
The -P option publishes all the ports to the host interfaces. Docker binds each exposed port to a
random port on the host. The range of ports is within an ephemeral port range defined
by /proc/sys/net/ipv4/ip_local_port_range. Use the -p flag to explicitly map a single port or range
of ports.
The port number inside the container (where the service listens) does not need to match the port
number exposed on the outside of the container (where clients connect). For example, inside the
container an HTTP service is listening on port 80 (and so the image developer specifies EXPOSE 80 in
the Dockerfile). At runtime, the port might be bound to 42800 on the host. To find the mapping
between the host ports and the exposed ports, use docker port.
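The range-matching rule for -p (e.g. `-p 1234-1236:1234-1236/tcp`) can be sketched with a small helper. This is an illustrative expansion of the rule, not Docker's implementation; `expand_port_mapping` is a hypothetical name:

```python
def expand_port_mapping(host_range, container_range, proto="tcp"):
    """Expand '-p hostStart-hostEnd:containerStart-containerEnd' into
    individual host:container pairs; both ranges must be the same length."""
    h0, h1 = (int(p) for p in host_range.split("-"))
    c0, c1 = (int(p) for p in container_range.split("-"))
    if h1 - h0 != c1 - c0:
        raise ValueError("host and container port ranges must match in length")
    return [f"{h}:{c}/{proto}"
            for h, c in zip(range(h0, h1 + 1), range(c0, c1 + 1))]

print(expand_port_mapping("1234-1236", "1234-1236"))
# ['1234:1234/tcp', '1235:1235/tcp', '1236:1236/tcp']
```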
If the operator uses --link when starting a new client container in the default bridge network, then
the client container can access the exposed port via a private networking interface. If --link is used
when starting a container in a user-defined network as described in Networking overview, it will
provide a named alias for the container being linked to.

ENV (environment variables)


Docker automatically sets some environment variables when creating a Linux container. Docker
does not set any environment variables when creating a Windows container.

The following environment variables are set for Linux containers:

Variable   Value

HOME       Set based on the value of USER
HOSTNAME   The hostname associated with the container
PATH       Includes popular directories, such as
           /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM       xterm if the container is allocated a pseudo-TTY

Additionally, the operator can set any environment variable in the container by using one or
more -e flags, even overriding those mentioned above, or already defined by the developer with a
Dockerfile ENV. If the operator names an environment variable without specifying a value, then the
current value of the named variable is propagated into the container’s environment:
$ export today=Wednesday
$ docker run -e "deep=purple" -e today --rm alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d2219b854598
deep=purple
today=Wednesday
HOME=/root
The following example shows the environment variables set for a Windows container:

PS C:\> docker run --rm -e "foo=bar" microsoft/nanoserver cmd /s /c set
ALLUSERSPROFILE=C:\ProgramData
APPDATA=C:\Users\ContainerAdministrator\AppData\Roaming
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramW6432=C:\Program Files\Common Files
COMPUTERNAME=C2FAEFCC8253
ComSpec=C:\Windows\system32\cmd.exe
foo=bar
LOCALAPPDATA=C:\Users\ContainerAdministrator\AppData\Local
NUMBER_OF_PROCESSORS=8
OS=Windows_NT
Path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\Wind
owsPowerShell\v1.0\;C:\Users\ContainerAdministrator\AppData\Local\Microsoft\WindowsAp
ps
PATHEXT=.COM;.EXE;.BAT;.CMD
PROCESSOR_ARCHITECTURE=AMD64
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 62 Stepping 4, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=3e04
ProgramData=C:\ProgramData
ProgramFiles=C:\Program Files
ProgramFiles(x86)=C:\Program Files (x86)
ProgramW6432=C:\Program Files
PROMPT=$P$G
PUBLIC=C:\Users\Public
SystemDrive=C:
SystemRoot=C:\Windows
TEMP=C:\Users\ContainerAdministrator\AppData\Local\Temp
TMP=C:\Users\ContainerAdministrator\AppData\Local\Temp
USERDOMAIN=User Manager
USERNAME=ContainerAdministrator
USERPROFILE=C:\Users\ContainerAdministrator
windir=C:\Windows

Similarly, the operator can set the HOSTNAME (Linux) or COMPUTERNAME (Windows) with -h.
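The -e resolution rules above (name=value sets a value; a bare name propagates the variable from the operator's shell) can be sketched as follows. This is an illustration, not Docker's implementation; `build_env` is a hypothetical helper:

```python
def build_env(e_flags, host_env):
    """Resolve a list of -e flag values against the invoking shell's
    environment: 'name=value' sets the value directly, while a bare
    'name' copies the current value from host_env (if set)."""
    env = {}
    for flag in e_flags:
        if "=" in flag:
            name, _, value = flag.partition("=")
            env[name] = value
        elif flag in host_env:
            env[flag] = host_env[flag]
    return env

print(build_env(["deep=purple", "today"], {"today": "Wednesday"}))
# {'deep': 'purple', 'today': 'Wednesday'}
```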

HEALTHCHECK
--health-cmd Command to run to check health
--health-interval Time between running the check
--health-retries Consecutive failures needed to report unhealthy
--health-timeout Maximum time to allow one check to run
--health-start-period Start period for the container to initialize before
starting health-retries countdown
--no-healthcheck Disable any container-specified HEALTHCHECK

Example:

$ docker run --name=test -d \
    --health-cmd='stat /etc/passwd || exit 1' \
    --health-interval=2s \
    busybox sleep 1d
$ sleep 2; docker inspect --format='{{.State.Health.Status}}' test
healthy
$ docker exec test rm /etc/passwd
$ sleep 2; docker inspect --format='{{json .State.Health}}' test
{
"Status": "unhealthy",
"FailingStreak": 3,
"Log": [
{
"Start": "2016-05-25T17:22:04.635478668Z",
"End": "2016-05-25T17:22:04.7272552Z",
"ExitCode": 0,
"Output": " File: /etc/passwd\n Size: 334 \tBlocks: 8 IO
Block: 4096 regular file\nDevice: 32h/50d\tInode: 12 Links: 1\nAccess:
(0664/-rw-rw-r--) Uid: ( 0/ root) Gid: ( 0/ root)\nAccess: 2015-12-05
22:05:32.000000000\nModify: 2015..."
},
{
"Start": "2016-05-25T17:22:06.732900633Z",
"End": "2016-05-25T17:22:06.822168935Z",
"ExitCode": 0,
"Output": " File: /etc/passwd\n Size: 334 \tBlocks: 8 IO
Block: 4096 regular file\nDevice: 32h/50d\tInode: 12 Links: 1\nAccess:
(0664/-rw-rw-r--) Uid: ( 0/ root) Gid: ( 0/ root)\nAccess: 2015-12-05
22:05:32.000000000\nModify: 2015..."
},
{
"Start": "2016-05-25T17:22:08.823956535Z",
"End": "2016-05-25T17:22:08.897359124Z",
"ExitCode": 1,
"Output": "stat: can't stat '/etc/passwd': No such file or directory\n"
},
{
"Start": "2016-05-25T17:22:10.898802931Z",
"End": "2016-05-25T17:22:10.969631866Z",
"ExitCode": 1,
"Output": "stat: can't stat '/etc/passwd': No such file or directory\n"
},
{
"Start": "2016-05-25T17:22:12.971033523Z",
"End": "2016-05-25T17:22:13.082015516Z",
"ExitCode": 1,
"Output": "stat: can't stat '/etc/passwd': No such file or directory\n"
}
]
}

The health status is also displayed in the docker ps output.
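The relationship between the failing streak in the inspect output above and --health-retries can be sketched as a small state function. Illustrative only, not Docker's implementation:

```python
def health_status(exit_codes, retries=3):
    """Derive the health state from successive check exit codes: the
    container turns unhealthy once the failing streak reaches
    --health-retries; a passing check resets the streak."""
    streak = 0
    for code in exit_codes:
        streak = streak + 1 if code != 0 else 0
        if streak >= retries:
            return "unhealthy"
    return "healthy"

print(health_status([0, 0, 1, 1, 1]))  # unhealthy
print(health_status([0, 1, 1, 0]))     # healthy
```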

TMPFS (mount tmpfs filesystems)


--tmpfs=[]: Create a tmpfs mount with: container-dir[:<options>],
where the options are identical to the Linux
'mount -t tmpfs -o' command.

The example below mounts an empty tmpfs into the container with the rw, noexec, nosuid,
and size=65536k options.
$ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image

VOLUME (shared filesystems)


-v, --volume=[host-src:]container-dest[:<options>]: Bind mount a volume.
The comma-delimited `options` are [rw|ro], [z|Z],
[[r]shared|[r]slave|[r]private], and [nocopy].
The 'host-src' is an absolute path or a name value.

If neither 'rw' nor 'ro' is specified then the volume is mounted in
read-write mode.

The `nocopy` mode is used to disable automatically copying the requested volume
path in the container to the volume storage location.
For named volumes, `copy` is the default mode. Copy modes are not supported
for bind-mounted volumes.
--volumes-from="": Mount all volumes from the given container(s)

Note: When using systemd to manage the Docker daemon’s start and stop, in the systemd unit file
there is an option to control mount propagation for the Docker daemon itself, called MountFlags. The
value of this setting may cause Docker to not see mount propagation changes made on the mount
point. For example, if this value is slave, you may not be able to use
the shared or rshared propagation on a volume.

The volumes commands are complex enough to have their own documentation in section Use
volumes. A developer can define one or more VOLUME’s associated with an image, but only the
operator can give access from one container to another (or from a container to a volume mounted on
the host).
The container-dest must always be an absolute path such as /src/docs. The host-src can either
be an absolute path or a name value. If you supply an absolute path for the host-src, Docker bind-
mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
A name value must start with an alphanumeric character, followed by a-z0-9,
_ (underscore), . (period) or - (hyphen). An absolute path starts with a / (forward slash).
For example, you can specify either /foo or foo for a host-src value. If you supply the /foo value,
Docker creates a bind mount. If you supply the foo specification, Docker creates a named volume.

USER
root (id = 0) is the default user within a container. The image developer can create additional users.
Those users are accessible by name. When passing a numeric ID, the user does not have to exist in
the container.
The developer can set a default user to run the first process with the Dockerfile USER instruction.
When starting a container, the operator can override the USER instruction by passing the -u option.
-u="", --user="": Sets the username or UID used and optionally the groupname or GID
for the specified command.

The following examples are all valid:

--user=[ user | user:group | uid | uid:gid | user:gid | uid:group ]

Note: if you pass a numeric uid, it must be in the range of 0-2147483647.
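A small validator for --user values, illustrating the forms and the numeric uid range noted above. This is a hypothetical helper, not Docker code:

```python
def parse_user(spec):
    """Split a --user value of the form user[:group] into its parts and
    validate a numeric uid against the 0-2147483647 range."""
    user, _, group = spec.partition(":")
    if user.isdigit() and not 0 <= int(user) <= 2147483647:
        raise ValueError("numeric uid out of range")
    return user, group or None

print(parse_user("1000:staff"))  # ('1000', 'staff')
print(parse_user("root"))        # ('root', None)
```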

WORKDIR
The default working directory for running binaries within a container is the root directory (/), but the
developer can set a different default with the Dockerfile WORKDIR command. The operator can
override this with:
-w="": Working directory inside the container

Use the Docker command line



docker
To list available commands, either run docker with no parameters or execute docker help:
$ docker
Usage: docker [OPTIONS] COMMAND [ARG...]
docker [ --help | -v | --version ]

A self-sufficient runtime for containers.

Options:
      --config string      Location of client config files (default "/root/.docker")
  -c, --context string     Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
  -D, --debug              Enable debug mode
      --help               Print usage
  -H, --host value         Daemon socket(s) to connect to (default [])
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/root/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/root/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/root/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit
Commands:
attach Attach to a running container
# […]

Description
Depending on your Docker system configuration, you may be required to preface
each docker command with sudo. To avoid having to use sudo with the docker command, your
system administrator can create a Unix group called docker and add users to it.
For more information about installing Docker or sudo configuration, refer to the installation instructions
for your operating system.

Environment variables
For easy reference, the following environment variables are supported by the docker command
line:

- DOCKER_API_VERSION The API version to use (e.g. 1.19)
- DOCKER_CONFIG The location of your client configuration files.
- DOCKER_CERT_PATH The location of your authentication keys.
- DOCKER_CLI_EXPERIMENTAL Enable experimental features for the CLI (e.g. enabled or disabled)
- DOCKER_DRIVER The graph driver to use.
- DOCKER_HOST Daemon socket to connect to.
- DOCKER_NOWARN_KERNEL_VERSION Prevent warnings that your Linux kernel is unsuitable for Docker.
- DOCKER_RAMDISK If set this will disable ‘pivot_root’.
- DOCKER_STACK_ORCHESTRATOR Configure the default orchestrator to use when using docker stack management commands.
- DOCKER_TLS When set Docker uses TLS.
- DOCKER_TLS_VERIFY When set Docker uses TLS and verifies the remote.
- DOCKER_CONTENT_TRUST When set Docker uses notary to sign and verify images. Equates to --disable-content-trust=false for build, create, pull, push, run.
- DOCKER_CONTENT_TRUST_SERVER The URL of the Notary server to use. This defaults to the same URL as the registry.
- DOCKER_HIDE_LEGACY_COMMANDS When set, Docker hides “legacy” top-level commands (such as docker rm, and docker pull) in docker help output, and only Management commands per object-type (e.g., docker container) are printed. This may become the default in a future release, at which point this environment variable will be removed.
- DOCKER_TMPDIR Location for temporary Docker files.
- DOCKER_CONTEXT Specify the context to use (overrides DOCKER_HOST env var and default context set with “docker context use”)
- DOCKER_DEFAULT_PLATFORM Specify the default platform for the commands that take the --platform flag.
Because Docker is developed using Go, you can also use any environment variables used by the
Go runtime. In particular, you may find these useful:

- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY

These Go environment variables are case-insensitive. See the Go specification for details on these
variables.
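For example, the proxy variables can be set per invocation; the proxy address below is an assumption for illustration, and the commands need a Docker daemon:

```shell
# Route the client's registry traffic through a proxy for one command
HTTPS_PROXY=http://proxy.example.com:3128 docker pull alpine

# Exempt specific hosts from proxying
NO_PROXY=localhost,127.0.0.1 docker info
```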

Configuration files
By default, the Docker command line stores its configuration files in a directory called .docker within
your $HOME directory. However, you can specify a different location via
the DOCKER_CONFIG environment variable or the --config command line option. If both are specified,
then the --config option overrides the DOCKER_CONFIG environment variable. For example:
docker --config ~/testconfigs/ ps

Instructs Docker to use the configuration files in your ~/testconfigs/ directory when running
the ps command.
Docker manages most of the files in the configuration directory and you should not modify them.
However, you can modify the config.json file to control certain aspects of how the docker command
behaves.
Currently, you can modify the docker command behavior using environment variables or command-
line options. You can also use options within config.json to modify some of the same behavior.
When using these mechanisms, you must keep in mind the order of precedence among them.
Command line options override environment variables and environment variables override properties
you specify in a config.json file.
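A sketch of this precedence chain, using psFormat and the config directory as examples (directory names are illustrative; a Docker daemon is required):

```shell
# 1. config.json in the default directory may define psFormat
docker ps

# 2. DOCKER_CONFIG points the client at a different config directory
DOCKER_CONFIG=~/testconfigs docker ps

# 3. --config overrides DOCKER_CONFIG ...
docker --config ~/otherconfigs ps

# 4. ... and an explicit flag overrides any configured default
docker ps --format 'table {{.ID}}\t{{.Status}}'
```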
The config.json file stores a JSON encoding of several properties:
The property HttpHeaders specifies a set of headers to include in all messages sent from the Docker
client to the daemon. Docker does not try to interpret or understand these headers; it simply puts
them into the messages. Docker does not allow these headers to change any headers it sets for
itself.
The property psFormat specifies the default format for docker ps output. When the --format flag is
not provided with the docker ps command, Docker’s client uses this property. If this property is not
set, the client falls back to the default table format. For a list of supported formatting directives, see
the Formatting section in the docker ps documentation
The property imagesFormat specifies the default format for docker images output. When the --
format flag is not provided with the docker images command, Docker’s client uses this property. If
this property is not set, the client falls back to the default table format. For a list of supported
formatting directives, see the Formatting section in the docker images documentation.
The property pluginsFormat specifies the default format for docker plugin ls output. When the --
format flag is not provided with the docker plugin ls command, Docker’s client uses this property.
If this property is not set, the client falls back to the default table format. For a list of supported
formatting directives, see the Formatting section in the docker plugin ls documentation
The property servicesFormat specifies the default format for docker service ls output. When the --
format flag is not provided with the docker service ls command, Docker’s client uses this property.
If this property is not set, the client falls back to the default json format. For a list of supported
formatting directives, see the Formatting section in the docker service ls documentation
The property serviceInspectFormat specifies the default format for docker service inspect output.
When the --format flag is not provided with the docker service inspect command, Docker’s client
uses this property. If this property is not set, the client falls back to the default json format. For a list
of supported formatting directives, see the Formatting section in the docker service
inspect documentation
The property statsFormat specifies the default format for docker stats output. When the --
format flag is not provided with the docker stats command, Docker’s client uses this property. If this
property is not set, the client falls back to the default table format. For a list of supported formatting
directives, see Formatting section in the docker stats documentation
The property secretFormat specifies the default format for docker secret ls output. When the --
format flag is not provided with the docker secret ls command, Docker’s client uses this property.
If this property is not set, the client falls back to the default table format. For a list of supported
formatting directives, see the Formatting section in the docker secret ls documentation.
The property nodesFormat specifies the default format for docker node ls output. When the --
format flag is not provided with the docker node ls command, Docker’s client uses the value
of nodesFormat. If the value of nodesFormat is not set, the client uses the default table format. For a
list of supported formatting directives, see the Formatting section in the docker node
ls documentation
The property configFormat specifies the default format for docker config ls output. When the --
format flag is not provided with the docker config ls command, Docker’s client uses this property.
If this property is not set, the client falls back to the default table format. For a list of supported
formatting directives, see the Formatting section in the docker config ls documentation.
The property credsStore specifies an external binary to serve as the default credential store. When
this property is set, docker login will attempt to store credentials in the binary specified by docker-
credential-<value> which is visible on $PATH. If this property is not set, credentials will be stored in
the auths property of the config. For more information, see the Credentials store section in
the docker login documentation
The property credHelpers specifies a set of credential helpers to use preferentially
over credsStore or auths when storing and retrieving credentials for specific registries. If this
property is set, the binary docker-credential-<value> will be used when storing or retrieving
credentials for a specific registry. For more information, see the Credential helpers section in
the docker login documentation
The property stackOrchestrator specifies the default orchestrator to use when running docker
stack management commands. Valid values are "swarm", "kubernetes", and "all". This property
can be overridden with the DOCKER_STACK_ORCHESTRATOR environment variable, or the --
orchestrator flag.
Once attached to a container, users detach from it and leave it running using the CTRL-p CTRL-q key sequence. This detach key sequence is customizable using the detachKeys property. Specify a <sequence> value for the property. The format of the <sequence> is a comma-separated list of either a letter [a-Z], or the ctrl- combined with any of the following:

- a-z (a single lowercase alpha character)
- @ (at sign)
- [ (left bracket)
- \\ (two backward slashes)
- _ (underscore)
- ^ (caret)

Your customization applies to all containers started with your Docker client. Users can override
your custom or the default key sequence on a per-container basis. To do this, the user specifies
the --detach-keys flag with the docker attach, docker exec, docker run or docker start command.
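For example, a user could override the configured sequence for a single container (image is illustrative; requires a Docker daemon):

```shell
# Use ctrl-x,x instead of the configured detachKeys for this container only
docker run -it --detach-keys="ctrl-x,x" ubuntu bash
```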
The property plugins contains settings specific to CLI plugins. The key is the plugin name, while the
value is a further map of options, which are specific to that plugin.
Following is a sample config.json file:

{
  "HttpHeaders": {
    "MyHeader": "MyValue"
  },
  "psFormat": "table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.Labels}}",
  "imagesFormat": "table {{.ID}}\t{{.Repository}}\t{{.Tag}}\t{{.CreatedAt}}",
  "pluginsFormat": "table {{.ID}}\t{{.Name}}\t{{.Enabled}}",
  "statsFormat": "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}",
  "servicesFormat": "table {{.ID}}\t{{.Name}}\t{{.Mode}}",
  "secretFormat": "table {{.ID}}\t{{.Name}}\t{{.CreatedAt}}\t{{.UpdatedAt}}",
  "configFormat": "table {{.ID}}\t{{.Name}}\t{{.CreatedAt}}\t{{.UpdatedAt}}",
  "serviceInspectFormat": "pretty",
  "nodesFormat": "table {{.ID}}\t{{.Hostname}}\t{{.Availability}}",
  "detachKeys": "ctrl-e,e",
  "credsStore": "secretservice",
  "credHelpers": {
    "awesomereg.example.org": "hip-star",
    "unicorn.example.com": "vcbait"
  },
  "stackOrchestrator": "kubernetes",
  "plugins": {
    "plugin1": {
      "option": "value"
    },
    "plugin2": {
      "anotheroption": "anothervalue",
      "athirdoption": "athirdvalue"
    }
  }
}

Notary
If using your own notary server and a self-signed certificate or an internal Certificate Authority, you
need to place the certificate at tls/<registry_url>/ca.crt in your docker config directory.

Alternatively you can trust the certificate globally by adding it to your system’s list of root Certificate
Authorities.
Examples
Display help text
To list the help for any command, run the command followed by the --help option.
$ docker run --help

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

Options:
--add-host value Add a custom host-to-IP mapping (host:ip) (default
[])
-a, --attach value Attach to STDIN, STDOUT or STDERR (default [])
...

Option types
Single character command line options can be combined, so rather than typing docker run -i -t --
name test busybox sh, you can write docker run -it --name test busybox sh.

BOOLEAN

Boolean options take the form -d=false. The value you see in the help text is the default value which
is set if you do not specify that flag. If you specify a Boolean flag without a value, this will set the flag
to true, irrespective of the default value.
For example, running docker run -d will set the value to true, so your container will run in
“detached” mode, in the background.
Options which default to true (e.g., docker build --rm=true) can only be set to the non-default
value by explicitly setting them to false:
$ docker build --rm=false .

MULTI

You can specify options like -a=[] multiple times in a single command line, for example in these
commands:
$ docker run -a stdin -a stdout -i -t ubuntu /bin/bash

$ docker run -a stdin -a stdout -a stderr ubuntu /bin/ls

Sometimes, multiple options can call for a more complex value string as for -v:
$ docker run -v /host:/container example/mysql

Note: Do not use the -t and -a stderr options together due to limitations in the pty implementation.
All stderr in pty mode simply goes to stdout.

STRINGS AND INTEGERS

Options like --name="" expect a string, and they can only be specified once. Options like -c=0 expect
an integer, and they can only be specified once.

Docker (base command)



Description
The base command for the Docker CLI.

Child commands
Command           Description
docker attach     Attach local standard input, output, and error streams to a running container
docker build      Build an image from a Dockerfile
docker builder    Manage builds
docker checkpoint Manage checkpoints
docker commit     Create a new image from a container’s changes
docker config     Manage Docker configs
docker container  Manage containers
docker context    Manage contexts
docker cp         Copy files/folders between a container and the local filesystem
docker create     Create a new container
docker deploy     Deploy a new stack or update an existing stack
docker diff       Inspect changes to files or directories on a container’s filesystem
docker engine     Manage the docker engine
docker events     Get real time events from the server
docker exec       Run a command in a running container
docker export     Export a container’s filesystem as a tar archive
docker history    Show the history of an image
docker image      Manage images
docker images     List images
docker import     Import the contents from a tarball to create a filesystem image
docker info       Display system-wide information
docker inspect    Return low-level information on Docker objects
docker kill       Kill one or more running containers
docker load       Load an image from a tar archive or STDIN
docker login      Log in to a Docker registry
docker logout     Log out from a Docker registry
docker logs       Fetch the logs of a container
docker manifest   Manage Docker image manifests and manifest lists
docker network    Manage networks
docker node       Manage Swarm nodes
docker pause      Pause all processes within one or more containers
docker plugin     Manage plugins
docker port       List port mappings or a specific mapping for the container
docker ps         List containers
docker pull       Pull an image or a repository from a registry
docker push       Push an image or a repository to a registry
docker rename     Rename a container
docker restart    Restart one or more containers
docker rm         Remove one or more containers
docker rmi        Remove one or more images
docker run        Run a command in a new container
docker save       Save one or more images to a tar archive (streamed to STDOUT by default)
docker search     Search the Docker Hub for images
docker secret     Manage Docker secrets
docker service    Manage services
docker stack      Manage Docker stacks
docker start      Start one or more stopped containers
docker stats      Display a live stream of container(s) resource usage statistics
docker stop       Stop one or more running containers
docker swarm      Manage Swarm
docker system     Manage Docker
docker tag        Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
docker top        Display the running processes of a container
docker trust      Manage trust on Docker images
docker unpause    Unpause all processes within one or more containers
docker update     Update configuration of one or more containers
docker version    Show the Docker version information
docker volume     Manage volumes
docker wait       Block until one or more containers stop, then print their exit codes

Docker App
Working with Docker App (experimental)
This is an experimental feature.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Overview
Docker App is a CLI plug-in that introduces a top-level docker app command to bring the container
experience to applications. The following table compares Docker containers with Docker
applications.
Object     Config file   Build with          Execute with          Share with
Container  Dockerfile    docker image build  docker container run  docker image push
App        App Package   docker app bundle   docker app install    docker app push

With Docker App, entire applications can now be managed as easily as images and containers. For
example, Docker App lets you build, validate and deploy applications with the docker app command.
You can even leverage secure supply-chain features such as signed push and pull operations.
NOTE: docker app works with Engine - Community 19.03 or higher and Engine - Enterprise
19.03 or higher.

This guide walks you through two scenarios:

1. Initialize and deploy a new Docker App project from scratch.


2. Convert an existing Compose app into a Docker App project (added later in the beta
process).

The first scenario describes basic components of a Docker App with tools and workflow.

Initialize and deploy a new Docker App project from scratch
This section describes the steps for creating a new Docker App project to familiarize you with the
workflow and most important commands.

1. Prerequisites
2. Initialize a new empty project
3. Populate the project
4. Validate the app
5. Deploy the app
6. Push the app to Docker Hub or Docker Trusted Registry
7. Install the app directly from Docker Hub

Prerequisites
You need at least one Docker node operating in Swarm mode. You also need the latest build of the
Docker CLI with the App CLI plugin included.

Depending on your Linux distribution and your security context, you might need to prepend
commands with sudo.

Initialize a new empty project


The docker app init command is used to initialize a new Docker application project. If you run it on
its own, it initializes a new empty project. If you point it to an existing docker-compose.yml file, it
initializes a new project based on the Compose file.

Use the following command to initialize a new empty project called “hello-world”.

$ docker app init --single-file hello-world


Created "hello-world.dockerapp"

The command produces a single file in your current directory called hello-world.dockerapp. The
file name is the project name with `.dockerapp` appended.
$ ls
hello-world.dockerapp

If you run docker app init without the --single-file flag, you get a new directory containing three
YAML files. The name of the directory is the name of the project with .dockerapp appended, and the
three YAML files are:

 docker-compose.yml
 metadata.yml
 parameters.yml

However, the --single-file option merges the three YAML files into a single YAML file with three
sections. Each of these sections relates to one of the three YAML files mentioned
previously: docker-compose.yml, metadata.yml, and parameters.yml. Using the --single-file option
enables you to share your application using a single configuration file.
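For example, to bootstrap a project seeded from an existing Compose file instead of starting empty, a sketch might look like the following (the --compose-file flag name is an assumption here; verify it against docker app init --help for your version):

```shell
# Initialize a project from an existing docker-compose.yml in the current directory
docker app init my-app --compose-file docker-compose.yml
```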
Inspect the YAML with the following command.

$ cat hello-world.dockerapp
# Application metadata - equivalent to metadata.yml.
version: 0.1.0
name: hello-world
description:
---
# Application services - equivalent to docker-compose.yml.
version: "3.6"
services: {}
---
# Default application parameters - equivalent to parameters.yml.

Your file might be more verbose.

Notice that each of the three sections is separated by a set of three dashes (“---“). Let’s quickly
describe each section.

The first section of the file specifies identification metadata such as name, version, description and
maintainers. It accepts key-value pairs. This part of the file can be a separate file
called metadata.yml
The second section of the file describes the application. It can be a separate file called docker-
compose.yml.
The final section specifies default values for application parameters. It can be a separate file
called parameters.yml

Populate the project


This section describes editing the project YAML file so that it runs a simple web app.

Use your preferred editor to edit the hello-world.dockerapp YAML file and update the application
section with the following information:
version: "3.6"
services:
  hello:
    image: hashicorp/http-echo
    command: ["-text", "${hello.text}"]
    ports:
      - ${hello.port}:5678

Update the Parameters section to the following:

hello:
  port: 8080
  text: Hello world!

The sections of the YAML file are currently order-based. This means it’s important they remain in the
order we’ve explained, with the metadata section being first, the app section being second, and
the parameters section being last. This may change to name-based sections in future releases.

Save the changes.

The application is updated to run a single-container application based on the hashicorp/http-echo web server image. The image runs a single command that displays some text and exposes the service on a network port.
Following best practices, the configuration of the application is decoupled from the application itself
using variables. In this case, the text displayed by the app and the port on which it will be published
are controlled by two variables defined in the Parameters section of the file.
Docker App provides the inspect subcommand to display a prettified summary of the application
configuration. It is a quick way to check how the application is configured before deployment, without
having to read the Compose file. It’s important to note that the application is not running at this point,
and that the inspect operation inspects the configuration file(s).
$ docker app inspect hello-world.dockerapp
hello-world 0.1.0

Service (1)  Replicas  Ports  Image
-----------  --------  -----  -----
hello        1         8080   hashicorp/http-echo

Parameters (2)  Value
--------------  -----
hello.port      8080
hello.text      Hello world!
docker app inspect operations fail if the Parameters section doesn’t specify a default value for every
parameter expressed in the app section.

The application is ready to be validated and rendered.

Validate the app


Docker App provides the validate subcommand to check syntax and other aspects of the
configuration. If the app passes validation, the command reports no errors.
$ docker app validate hello-world.dockerapp
Validated "hello-world.dockerapp"

docker app validate operations fail if the Parameters section doesn’t specify a default value for
every parameter expressed in the app section.
As the validate operation has returned no problems, the app is ready to be deployed.

Deploy the app


There are several options for deploying a Docker App project.

 Deploy as a native Docker App application


 Deploy as a Compose app application
 Deploy as a Docker Stack application

All three options are discussed, starting with deploying as a native Docker App application.

DEPLOY AS A NATIVE DOCKER APP

The process for deploying as a native Docker app is as follows:

Use docker app install to deploy the application.

Use the following command to deploy (install) the application.

$ docker app install hello-world.dockerapp --name my-app


Creating network my-app_default
Creating service my-app_hello
Application "my-app" installed on context "default"

By default, docker app uses the current context to run the installation container and as a target
context to deploy the application. You can override the second context using the flag --target-
context or by using the environment variable DOCKER_TARGET_CONTEXT. This flag is also available for
the commands status, upgrade, and uninstall.
$ docker app install hello-world.dockerapp --name my-app --target-context=my-big-production-cluster
Creating network my-app_default
Creating service my-app_hello
Application "my-app" installed on context "my-big-production-cluster"

Note: Two applications deployed on the same target context cannot share the same name, but this
is valid if they are deployed on different target contexts.

You can check the status of the app with the docker app status <app-name> command.
$ docker app status my-app
INSTALLATION
------------
Name: my-app
Created: 35 seconds
Modified: 31 seconds
Revision: 01DCMY7MWW67AY03B029QATXFF
Last Action: install
Result: SUCCESS
Orchestrator: swarm

APPLICATION
-----------
Name: hello-world
Version: 0.1.0
Reference:

PARAMETERS
----------
hello.port: 8080
hello.text: Hello, World!

STATUS
------
ID NAME MODE REPLICAS IMAGE PORTS
miqdk1v7j3zk my-app_hello replicated 1/1 hashicorp/http-echo:latest
*:8080->5678/tcp

The app is deployed using the stack orchestrator. This means you can also inspect it using the
regular docker stack commands.
$ docker stack ls
NAME SERVICES ORCHESTRATOR
my-app 1 Swarm

Now that the app is running, you can point a web browser at the DNS name or public IP of the
Docker node on port 8080 and see the app. You must ensure traffic to port 8080 is allowed on the
connection from your browser to your Docker host.

Now change the port of the application using docker app upgrade <app-name> command.
$ docker app upgrade my-app --hello.port=8181
Upgrading service my-app_hello
Application "my-app" upgraded on context "default"

You can uninstall the app with docker app uninstall my-app.

DEPLOY AS A DOCKER COMPOSE APP

The process for deploying as a Compose app comprises two major steps:

1. Render the Docker app project as a docker-compose.yml file.


2. Deploy the app using docker-compose up.

You need a recent version of Docker Compose to complete these steps.

Rendering is the process of reading the entire application configuration and outputting it as a
single docker-compose.yml file. This creates a Compose file with hard-coded values wherever a
parameter was specified as a variable.
Use the following command to render the app to a Compose file called docker-compose.ymlin the
current directory.
$ docker app render --output docker-compose.yml hello-world.dockerapp

Check the contents of the resulting docker-compose.yml file.


$ cat docker-compose.yml
version: "3.6"
services:
  hello:
    command:
      - -text
      - Hello world!
    image: hashicorp/http-echo
    ports:
      - mode: ingress
        target: 5678
        published: 8080
        protocol: tcp

Notice that the file contains hard-coded values that were expanded based on the contents of
the Parameters section of the project’s YAML file. For example, ${hello.text} has been expanded
to “Hello world!”.
Note: Almost all the docker app commands support the --set key=value flag to override a default
parameter.

Try to render the application with a different text:

$ docker app render hello-world.dockerapp --set hello.text="Hello whales!"


version: "3.6"
services:
  hello:
    command:
      - -text
      - Hello whales!
    image: hashicorp/http-echo
    ports:
      - mode: ingress
        target: 5678
        published: 8080
        protocol: tcp
Use docker-compose up to deploy the app.
$ docker-compose up --detach
WARNING: The Docker Engine you're using is running in swarm mode.
<Snip>

The application is now running as a Docker Compose app and should be reachable on port 8080 on
your Docker host. You must ensure traffic to port 8080 is allowed on the connection from your
browser to your Docker host.
You can use docker-compose down to stop and remove the application.

DEPLOY AS A DOCKER STACK

Deploying the app as a Docker stack is a two-step process very similar to deploying it as a Docker
Compose app.

1. Render the Docker app project as a docker-compose.yml file.


2. Deploy the app using docker stack deploy.

Complete the steps in the previous section to render the Docker app project as a Compose file and
make sure you’re ready to deploy it as a Docker Stack. Your Docker host must be in Swarm mode.

$ docker stack deploy hello-world-app -c docker-compose.yml


Creating network hello-world-app_default
Creating service hello-world-app_hello

The app is now deployed as a Docker stack and can be reached on port 8080 on your Docker host.
Use the docker stack rm hello-world-app command to stop and remove the stack. You must
ensure traffic to port 8080 is allowed on the connection from your browser to your Docker host.

Push the app to Docker Hub


As mentioned in the introduction, docker app lets you manage entire applications the same way that
you currently manage container images. For example, you can push and pull entire applications from
registries like Docker Hub with docker app push and docker app pull. Other docker
app commands, such as install, upgrade, inspect, and render can be performed directly on
applications while they are stored in a registry.

Push the application to Docker Hub. To complete this step, you need a valid Docker ID and you
must be logged in to the registry to which you are pushing the app.
By default, all platform architectures are pushed to the registry. If you are pushing an official Docker
image as part of your app, you may find your app bundle becomes large with all image architectures
embedded. To just push the architecture required, you can add the --platform flag.
$ docker login

$ docker app push my-app --platform="linux/amd64" --tag <hub-id>/<repo>:0.1.0

Push the app to DTR


Pushing an app to Docker Trusted Registry (DTR) involves the same procedure as pushing an app
to Docker Hub except that you need your DTR user credentials and your DTR repository information.
To use client certificates for DTR authentication, see Enable Client Certificate Authentication.

By default, all platform architectures are pushed to DTR. If you are pushing an official Docker image
as part of your app, you may find your app bundle becomes large with all image architectures
embedded. To just push the architecture required, you can add the --platform flag.
$ docker login dtr.example.com

$ docker app push my-app --platform="linux/amd64" --tag dtr.example.com/<user>/<repo>:0.1.0

Install the app directly from Docker Hub or DTR


Now that the app is pushed to the registry, try an inspect and install command against it. The
location of your app is different from the one provided in the examples.
$ docker app inspect myuser/hello-world:0.1.0
hello-world 0.1.0

Service (1)  Replicas  Ports  Image
-----------  --------  -----  -----
hello        1         8080   myuser/hello-world@sha256:ba27d460cd1f22a1a4331bdf74f4fccbc025552357e8a3249c40ae216275de96

Parameters (2)  Value
--------------  -----
hello.port      8080
hello.text      Hello world!

This action was performed directly against the app in the registry. Note that for DTR, the application
will be prefixed with the Fully Qualified Domain Name (FQDN) of your trusted registry.

Now install it as a native Docker App by referencing the app in the registry, with a different port.

$ docker app install myuser/hello-world:0.1.0 --set hello.port=8181


Creating network hello-world_default
Creating service hello-world_hello
Application "hello-world" installed on context "default"

Test that the app is working.

The app used in these examples is a simple web server that displays the text "Hello world!" on port
8181; your app might be different.

$ curl http://localhost:8181
Hello world!

Uninstall the app.

$ docker app uninstall hello-world


Removing service hello-world_hello
Removing network hello-world_default
Application "hello-world" uninstalled on context "default"

You can see the name of your Docker App with the docker stack ls command.

CLI reference

Description
Docker Application

This command is experimental.


This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
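The config.json edit described above amounts to adding a single key; a minimal sketch of the relevant fragment (in a default installation the file lives at ~/.docker/config.json; treat the exact path as an assumption for your setup):

```json
{
  "experimental": "enabled"
}
```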

Child commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
A tool to build and manage Docker Applications.

docker app bundle



Description
Create a CNAB invocation image and bundle.json for the application
This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app bundle [APP_NAME] [--output OUTPUT_FILE]

Options
Name, shorthand Default Description

--output , -o bundle.json Output file (- for stdout)

Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Examples
$ docker app bundle myapp.dockerapp
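The bundle.json produced by this command follows the CNAB bundle format. As a rough sketch of the shape of such a file (field names follow the CNAB specification; the values here are illustrative assumptions, not actual output of the command):

```json
{
  "name": "myapp",
  "version": "0.1.0",
  "schemaVersion": "v1.0.0",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "myapp:0.1.0-invoc"
    }
  ]
}
```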

docker app completion



Description
Generates completion scripts for the specified shell (bash or zsh)
This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app completion SHELL

Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Extended description

# Load the "docker app" completion code for bash into the current shell
. <(docker app completion bash)

# Set the "docker app" completion code for bash to autoload on startup
# in your ~/.bashrc, ~/.profile or ~/.bash_profile
. <(docker app completion bash)

# Note: bash-completion is needed.

# Load the "docker app" completion code for zsh into the current shell
source <(docker app completion zsh)

# Set the "docker app" completion code for zsh to autoload on startup in your ~/.zshrc
source <(docker app completion zsh)

Examples
$ . <(docker app completion bash)

docker app init



Description
Initialize Docker Application definition
This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app init APP_NAME [--compose-file COMPOSE_FILE] [--description DESCRIPTION] [--maintainer NAME:EMAIL ...] [OPTIONS]

Options
Name, shorthand   Default   Description

--compose-file              Compose file to use as application base (optional)
--description               Human readable description of your application (optional)
--maintainer                Name and email address of person responsible for the application (name:email) (optional)
--single-file               Create a single-file Docker Application definition

Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information


Extended description
Start building a Docker Application package. If there is a docker-compose.yml file in the current
directory, it is copied and used.

Examples
$ docker app init myapp --description "a useful description"

docker app inspect



Description
Shows metadata, parameters and a summary of the Compose file for a given application

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app inspect [APP_NAME] [OPTIONS]
Options
Name, shorthand         Default   Description

--insecure-registries             Use HTTP instead of HTTPS when pulling from/pushing to those registries
--parameters-file                 Override parameters file
--pull                            Pull the bundle
--set , -s                        Override parameter value

Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Examples
$ docker app inspect myapp.dockerapp

docker app install



Description
Install an application

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app install [APP_NAME] [--name INSTALLATION_NAME] [--target-context TARGET_CONTEXT] [OPTIONS]

Options
Name, shorthand          Default   Description

--credential-set                   Use a YAML file containing a credential set or a credential set present in the credential store
--insecure-registries              Use HTTP instead of HTTPS when pulling from/pushing to those registries
--kubernetes-namespace             Kubernetes namespace to install into
--name                             Installation name (defaults to application name)
--orchestrator                     Orchestrator to install on (swarm, kubernetes)
--parameters-file                  Override parameters file
--pull                             Pull the bundle
--set , -s                         Override parameter value
--target-context                   Context on which the application is installed (default: )
--with-registry-auth               Sends registry auth
Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Extended description
Install an application. By default, the application definition in the current directory will be installed.
The APP_NAME can also be:

 a path to a Docker Application definition (.dockerapp) or a CNAB bundle.json


 a registry Application Package reference

Examples
$ docker app install myapp.dockerapp --name myinstallation --target-context=mycontext
$ docker app install myrepo/myapp:mytag --name myinstallation --target-context=mycontext
$ docker app install bundle.json --name myinstallation --credential-set=mycredentials.yml

docker app list



Description
List the installations and their last known installation result

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app list [OPTIONS]

Options
Name, shorthand Default Description

--target-context List installations on this context

Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

docker app merge



Description
Merge a directory format Docker Application definition into a single file

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app merge [APP_NAME] [--output OUTPUT_FILE]

Options
Name, shorthand Default Description

--output , -o Output file (default: in-place)

Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Examples
$ docker app merge myapp.dockerapp --output myapp-single.dockerapp

docker app push



Description
Push an application package to a registry

This command is experimental.


This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app push [APP_NAME] --tag TARGET_REFERENCE [OPTIONS]

Options
Name, shorthand         Default   Description

--insecure-registries             Use HTTP instead of HTTPS when pulling from/pushing to those registries
--platform                        For multi-arch service images, only push the specified platforms
--tag , -t                        Target registry reference (default: : from metadata)

Parent command
Command Description

docker app Docker Application


Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Examples
$ docker app push myapp --tag myrepo/myapp:mytag
docker app render

Description
Render the Compose file for an Application Package

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app render [APP_NAME] [--set KEY=VALUE ...] [--parameters-file PARAMETERS-FILE ...] [OPTIONS]

Options
Name, shorthand         Default   Description

--formatter             yaml      Configure the output format (yaml|json)
--insecure-registries             Use HTTP instead of HTTPS when pulling from/pushing to those registries
--output , -o           -         Output file
--parameters-file                 Override parameters file
--pull                            Pull the bundle
--set , -s                        Override parameter value

Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Examples
$ docker app render myapp.dockerapp --set key=value

docker app split



Description
Split a single-file Docker Application definition into the directory format

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app split [APP_NAME] [--output OUTPUT_DIRECTORY]

Options
Name, shorthand Default Description

--output , -o Output directory (default: in-place)

Parent command
Command Description

docker app Docker Application

Related commands
Command               Description

docker app bundle     Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app init       Initialize Docker Application definition
docker app inspect    Shows metadata, parameters and a summary of the Compose file for a given application
docker app install    Install an application
docker app list       List the installations and their last known installation result
docker app merge      Merge a directory format Docker Application definition into a single file
docker app pull       Pull an application package from a registry
docker app push       Push an application package to a registry
docker app render     Render the Compose file for an Application Package
docker app split      Split a single-file Docker Application definition into the directory format
docker app status     Get the installation status of an application
docker app uninstall  Uninstall an application
docker app upgrade    Upgrade an installed application
docker app validate   Checks the rendered application is syntactically correct
docker app version    Print version information

Examples
$ docker app split myapp.dockerapp --output myapp-directory.dockerapp

docker app status



Description
Get the installation status of an application

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json
file and set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set
experimental to enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app status INSTALLATION_NAME [--target-context TARGET_CONTEXT] [OPTIONS]

Options
Name, shorthand         Default   Description

--credential-set                  Use a YAML file containing a credential set or a credential set present in the credential store
--target-context                  Context on which the application is installed (default: )
--with-registry-auth              Sends registry auth

Parent command
Command Description

docker app Docker Application

Related commands
Command                Description
docker app bundle      Create a CNAB invocation image and bundle.json for the application
docker app completion  Generates completion scripts for the specified shell (bash or zsh)
docker app init        Initialize Docker Application definition
docker app inspect     Shows metadata, parameters and a summary of the Compose file for a given application
docker app install     Install an application
docker app list        List the installations and their last known installation result
docker app merge       Merge a directory format Docker Application definition into a single file
docker app pull        Pull an application package from a registry
docker app push        Push an application package to a registry
docker app render      Render the Compose file for an Application Package
docker app split       Split a single-file Docker Application definition into the directory format
docker app status      Get the installation status of an application
docker app uninstall   Uninstall an application
docker app upgrade     Upgrade an installed application
docker app validate    Checks the rendered application is syntactically correct
docker app version     Print version information

Extended description
Get the installation status of an application. If the installation is a Docker Application, the status
shows the stack services.
Examples
$ docker app status myinstallation --target-context=mycontext

docker app uninstall


Estimated reading time: 3 minutes

Description
Uninstall an application

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app uninstall INSTALLATION_NAME [--target-context TARGET_CONTEXT] [OPTIONS]

Options
Name, shorthand        Default  Description
--credential-set                Use a YAML file containing a credential set or a credential set present in the credential store
--force                         Force removal of installation
--target-context                Context on which the application is installed (default: )
--with-registry-auth            Sends registry auth

Parent command
Command Description

docker app Docker Application

Related commands
Command                Description
docker app bundle      Create a CNAB invocation image and bundle.json for the application
docker app completion  Generates completion scripts for the specified shell (bash or zsh)
docker app init        Initialize Docker Application definition
docker app inspect     Shows metadata, parameters and a summary of the Compose file for a given application
docker app install     Install an application
docker app list        List the installations and their last known installation result
docker app merge       Merge a directory format Docker Application definition into a single file
docker app pull        Pull an application package from a registry
docker app push        Push an application package to a registry
docker app render      Render the Compose file for an Application Package
docker app split       Split a single-file Docker Application definition into the directory format
docker app status      Get the installation status of an application
docker app uninstall   Uninstall an application
docker app upgrade     Upgrade an installed application
docker app validate    Checks the rendered application is syntactically correct
docker app version     Print version information

Examples
$ docker app uninstall myinstallation --target-context=mycontext

docker app upgrade


Estimated reading time: 3 minutes

Description
Upgrade an installed application

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app upgrade INSTALLATION_NAME [--target-context TARGET_CONTEXT] [OPTIONS]

Options
Name, shorthand        Default  Description
--app-name                      Override the installation with another Application Package
--credential-set                Use a YAML file containing a credential set or a credential set present in the credential store
--insecure-registries           Use HTTP instead of HTTPS when pulling from/pushing to those registries
--parameters-file               Override parameters file
--pull                          Pull the bundle
--set , -s                      Override parameter value
--target-context                Context on which the application is installed (default: )
--with-registry-auth            Sends registry auth

Parent command
Command Description

docker app Docker Application


Related commands
Command                Description
docker app bundle      Create a CNAB invocation image and bundle.json for the application
docker app completion  Generates completion scripts for the specified shell (bash or zsh)
docker app init        Initialize Docker Application definition
docker app inspect     Shows metadata, parameters and a summary of the Compose file for a given application
docker app install     Install an application
docker app list        List the installations and their last known installation result
docker app merge       Merge a directory format Docker Application definition into a single file
docker app pull        Pull an application package from a registry
docker app push        Push an application package to a registry
docker app render      Render the Compose file for an Application Package
docker app split       Split a single-file Docker Application definition into the directory format
docker app status      Get the installation status of an application
docker app uninstall   Uninstall an application
docker app upgrade     Upgrade an installed application
docker app validate    Checks the rendered application is syntactically correct
docker app version     Print version information

Examples
$ docker app upgrade myinstallation --target-context=mycontext --set key=value
docker app validate


Estimated reading time: 2 minutes

Description
Checks the rendered application is syntactically correct

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app validate [APP_NAME] [--set KEY=VALUE ...] [--parameters-file
PARAMETERS_FILE]

Options
Name, shorthand    Default  Description
--parameters-file           Override parameters file
--set , -s                  Override parameter value


Parent command
Command Description

docker app Docker Application

Related commands
Command                Description
docker app bundle      Create a CNAB invocation image and bundle.json for the application
docker app completion  Generates completion scripts for the specified shell (bash or zsh)
docker app init        Initialize Docker Application definition
docker app inspect     Shows metadata, parameters and a summary of the Compose file for a given application
docker app install     Install an application
docker app list        List the installations and their last known installation result
docker app merge       Merge a directory format Docker Application definition into a single file
docker app pull        Pull an application package from a registry
docker app push        Push an application package to a registry
docker app render      Render the Compose file for an Application Package
docker app split       Split a single-file Docker Application definition into the directory format
docker app status      Get the installation status of an application
docker app uninstall   Uninstall an application
docker app upgrade     Upgrade an installed application
docker app validate    Checks the rendered application is syntactically correct
docker app version     Print version information

docker app version


Estimated reading time: 2 minutes

Description
Print version information

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker app version

Parent command
Command Description

docker app Docker Application

Related commands
Command                Description
docker app bundle      Create a CNAB invocation image and bundle.json for the application
docker app completion  Generates completion scripts for the specified shell (bash or zsh)
docker app init        Initialize Docker Application definition
docker app inspect     Shows metadata, parameters and a summary of the Compose file for a given application
docker app install     Install an application
docker app list        List the installations and their last known installation result
docker app merge       Merge a directory format Docker Application definition into a single file
docker app pull        Pull an application package from a registry
docker app push        Push an application package to a registry
docker app render      Render the Compose file for an Application Package
docker app split       Split a single-file Docker Application definition into the directory format
docker app status      Get the installation status of an application
docker app uninstall   Uninstall an application
docker app upgrade     Upgrade an installed application
docker app validate    Checks the rendered application is syntactically correct
docker app version     Print version information


Docker Assemble (experimental)
Estimated reading time: 1 minute
This is an experimental feature.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Overview
Docker Assemble (docker assemble) is a plugin that provides a language- and framework-aware
tool for building an application into an optimized Docker container. With Docker Assemble,
users can quickly build Docker images without providing configuration information (like a
Dockerfile), by auto-detecting the required information from existing framework configuration.

Docker Assemble supports the following application frameworks:

 Spring Boot when using the Maven build system

 ASP.NET Core (with C# and F#)

System requirements
Docker Assemble requires Linux, Windows, or macOS Mojave with the Docker Engine installed.

Install
Docker Assemble requires its own buildkit instance to be running in a Docker container on the local
system. You can start and manage the backend using the backend subcommand of docker
assemble.
To start the backend, run:

~$ docker assemble backend start


Pulling image «…»: Success
Started backend container "docker-assemble-backend-username" (3e627bb365a4)

When the backend is running, it can be used for multiple builds and does not need to be restarted.

Note: For instructions on running a remote backend, accessing logs, saving the build cache in a
named volume, accessing a host port, and for information about the buildkit instance, see --help.

For advanced backend user information, see Advanced Backend Management.

Build a Spring Boot project


Estimated reading time: 2 minutes

Ensure you are running the backend before you build any projects using Docker Assemble. For
instructions on running the backend, see Install Docker Assemble.
Clone the git repository you would like to use. The following example uses the docker-
springframework repository.

~$ git clone https://github.com/anokun7/docker-springframework


Cloning into 'docker-springframework'...
«…»

When you build a Spring Boot project, Docker Assemble automatically detects the information it
requires from the pom.xml project file.
Build the project using the docker assemble build command by passing it the path to the source
repository:
~$ docker assemble build docker-springframework
«…»
Successfully built: docker.io/library/hello-boot:1

The resulting image is exported to the local Docker image store using a name and a tag which are
automatically determined by the project metadata.

~$ docker image ls | head -n 2


REPOSITORY TAG IMAGE ID CREATED SIZE
hello-boot 1 00b0fbcf3c40 About a minute ago 97.4MB

An image name consists of «namespace»/«name»:«tag», where «namespace»/ is optional and
defaults to none. If the project metadata does not contain a ‘tag’ (or a version), then latest is used. If
the project metadata does not contain a ‘name’ and it was not provided on the command line, a fatal
error occurs.
Use the --namespace, --name and --tag options to override each element of the image name:
~$ docker assemble build --name testing --tag latest docker-springframework/
«…»
INFO[0007] Successfully built "testing:latest"
~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
testing latest d7f41384814f 32 seconds ago 97.4MB
hello-boot 1 0dbc2c425cff 5 minutes ago 97.4MB

Run the container:

~$ docker run -d --rm -p 8080:8080 hello-boot:1


b2c88bdc35761ba2b99f85ce1f3e3ce9ed98931767b139a0429865cadb46ce13
~$ docker ps
CONTAINER ID IMAGE COMMAND «…» PORTS
NAMES
b2c88bdc3576 hello-boot:1 "java -Djava.securit…" «…» 0.0.0.0:8080->8080/tcp
silly_villani
~$ docker logs b2c88bdc3576

. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.5.2.RELEASE)

«…» : Starting Application v1 on b2c88bdc3576 with PID 1 (/hello-boot-1.jar started


by root in /)
«…»
~$ curl -s localhost:8080
Hello from b2c88bdc3576
~$ docker rm -f b2c88bdc3576

Build a C# ASP.NET Core project


Estimated reading time: 1 minute

Ensure you are running the backend before you build any projects using Docker Assemble. For
instructions on running the backend, see Install Docker Assemble.
Clone the git repository you would like to use. The following example uses the dotnetdemo repository.
~$ git clone https://github.com/mbentley/dotnetdemo
Cloning into 'dotnetdemo'...
«…»

Build the project using the docker assemble build command by passing it the path to the source
repository (or a subdirectory in the following example):
~$ docker assemble build dotnetdemo/dotnetdemo
«…»
Successfully built: docker.io/library/dotnetdemo:latest

The resulting image is exported to the local Docker image store using a name and a tag which are
automatically determined by the project metadata.

~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
dotnetdemo latest a055e61e3a9e 24 seconds ago 349MB

An image name consists of «namespace»/«name»:«tag», where «namespace»/ is optional and
defaults to none. If the project metadata does not contain a ‘tag’ (or a version), then latest is used. If
the project metadata does not contain a ‘name’ and it was not provided on the command line, then a
fatal error occurs.
Use the --namespace, --name and --tag options to override each element of the image name:
~$ docker assemble build --name testing --tag latest dotnetdemo/
«…»
INFO[0007] Successfully built "testing:latest"
~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
testing latest d7f41384814f 32 seconds ago 97.4MB
hello-boot 1 0dbc2c425cff 5 minutes ago 97.4MB

Run the container:

~$ docker run -d --rm -p 8080:80 dotnetdemo:latest


e1c54291e96967dad402a81c4217978a544e4d7b0fdd3c0a2e2cca384c3b4adb
~$ docker ps
CONTAINER ID IMAGE COMMAND «…» PORTS
NAMES
e1c54291e969 dotnetdemo:latest "dotnet dotnetdemo.d…" «…» 0.0.0.0:8080->80/tcp
lucid_murdock
~$ docker logs e1c54291e969
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {11bba23a-71ad-4191-b583-4f974e296033} may be
persisted to storage in unencrypted form.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
~$ curl -s localhost:8080 | grep '<h4>'
<h4>This environment is </h4>
<h4>served from e1c54291e969 at 11/22/2018 16:00:23</h4>
~$ docker rm -f e1c54291e969

Configure Docker Assemble


Estimated reading time: 4 minutes
Although you don’t need to configure anything to build a project using Docker Assemble, you may
wish to override the defaults and, in some cases, add fields that weren’t automatically detected from
the project file. To support this, Docker Assemble allows you to add a file docker-assemble.yaml to
the root of your project. The settings you provide in the docker-assemble.yaml file override any
auto-detection and can themselves be overridden by command-line arguments.
The docker-assemble.yaml file is in YAML syntax and has the following informal schema:
 version: (string) mandatory, must contain 0.2.0
 image: (map) contains options related to the output image.
o platforms: (list of strings) lists the possible platforms which can be built (for
example, linux/amd64, windows/amd64). The default is determined automatically from
the project type and content. Note that by default Docker Assemble will build only
for linux/amd64 unless --push is used. See Building Multiplatform images.
o ports: (list of strings) contains ports to expose from a container running the image,
e.g. 80/tcp or 8080. Default is to automatically determine the set of ports to expose
where possible. To disable this and expose no ports, specify a list containing precisely
one element of none.
o labels: (map) contains labels to write into the image as key-value (string) pairs.
o repository-namespace: (string) the registry and path component of the desired output
image. e.g. docker.io/library or docker.io/user.
o repository-name: (string) the name of the specific image within repository-
namespace. Overrides any name derived from the build system specific configuration.
o tag: (string) the default tag to use. Overrides any version/tag derived from the build
system specific configuration.
o healthcheck: (map) describes how to check a container running the image is healthy.
 kind: (string) sets the type of Healthcheck to perform. Valid values
are none, simple-tcpport-open and springboot. See Health checks.
 interval: (duration) the time to wait between checks.
 timeout: (duration) the time to wait before considering the check to have
hung.
 start-period: (duration) period for the container to initialize before the retries
start to count down.
 retries: (integer) number of consecutive failures needed to consider a
container as unhealthy.
 springboot: (map) if this is a Spring Boot project then contains related configuration options.
o enabled: (boolean) true if this is a springboot project.
o java-version: (string) configures the Java version to use. Valid options are 8 and 10.
o build-image: (string) sets a custom base build image
o runtime-images (map) sets a custom base runtime image by platform. For valid keys,
refer to the Spring Boot section in Custom base images.
 aspnetcore: (map) if this is an ASP.NET Core project then contains related configuration
options.
o enabled: (boolean) true if this is an ASP.NET Core project.
o version: (string) configures the ASP.NET Core version to use. Valid options
are 1.0, 1.1, 2.0 and 2.1.
o build-image: (string) sets a custom base build image
o runtime-images (map) sets a custom base runtime image by platform. For valid keys,
refer to the ASP.NET Core section in Custom base images.
Notes:

 The only mandatory field in docker-assemble.yaml is version. All other parameters are
optional.
 At most one of aspnetcore or springboot can be present in the yaml file.
 Fields of type duration are integers with nanosecond granularity. However, the following units
of time are supported: ns, us (or µs), ms, s, m, h. For example, 25s.
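
Putting the schema together, a complete docker-assemble.yaml might look like the following sketch. All concrete values here (the namespace, name, tag, port, and health-check timings) are illustrative examples, not defaults:

version: "0.2.0"
image:
  repository-namespace: "docker.io/myuser"
  repository-name: "myapp"
  tag: "1.0"
  ports:
    - "8080/tcp"
  healthcheck:
    kind: "simple-tcpport-open"
    interval: "30s"
    timeout: "5s"
    retries: 3
springboot:
  enabled: true
  java-version: "8"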

Each setting in the configuration file has a command line equivalent which can be used with the -o/-
-option argument, which takes a KEY=VALUE string where KEY is constructed by joining each element
of the YAML hierarchy with a period (.).
For example, the image → repository-namespace key in the YAML becomes -o image.repository-
namespace=NAME on the command line and springboot → enabledbecomes -o
springboot.enabled=BOOLEAN.
The following convenience aliases take precedence over the -o/--option equivalents:
 --namespace is an alias for image.repository-namespace;
 --name corresponds to image.repository-name;
 --tag corresponds to image.tag;
 --label corresponds to image.labels (can be used multiple times);
 --port corresponds to image.ports (can be used multiple times)
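
For example, assuming a project at /path/to/my/project and the illustrative image name myapp, the following two invocations are equivalent, one using raw -o options and one using the aliases:
$ docker assemble build -o image.repository-name=myapp -o image.ports=8080 /path/to/my/project
$ docker assemble build --name myapp --port 8080 /path/to/my/project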

Docker Assemble images


Estimated reading time: 3 minutes

Multi-platform images
By default, Docker Assemble builds images for the linux/amd64 platform and exports them to the
local Docker image store. This is also true when running Docker Assemble on Windows or macOS.
For some application frameworks, Docker Assemble can build multi-platform images to support
running on several host platforms. For example, linux/amd64 and windows/amd64.
To support multi-platform images, images must be pushed to a registry instead of the local image
store. This is because the local image store can only import uni-platform images which match its
platform.

To enable the multi-platform mode, use the --push option. For example:
$ docker assemble build --push /path/to/my/project

To push to an insecure (unencrypted) registry, use --push-insecure instead of --push.

Custom base images


Docker Assemble allows you to override the base images for building and running your project. For
example, the following docker-assemble.yaml file defines maven:3-ibmjava-8-alpine as the base
build image and openjdk:8-jre-alpine as the base runtime image (for linux/amd64 platform).
version: "0.2.0"
springboot:
enabled: true
build-image: "maven:3-ibmjava-8-alpine"
runtime-images:
linux/amd64: "openjdk:8-jre-alpine"

Linux-based images must be Debian, Red Hat, or Alpine-based and have a standard environment
with:

 find
 xargs
 grep
 true
 a standard POSIX shell (located at /bin/sh)

These tools are required for internal inspection that Docker Assemble performs on the images.
Depending on the type of your project and your configuration, the base images must meet other
requirements as described in the following sections.

Spring Boot
Install the Java JDK and Maven on the base build image and ensure they are available in $PATH. Install a
Maven settings file as /usr/share/maven/ref/settings-docker.xml (irrespective of the install
location of Maven).
Ensure the base runtime image has the Java JRE installed and available in $PATH. The build and
runtime images must have the same version of Java installed.

Supported build platform:

 linux/amd64

Supported runtime platforms:

 linux/amd64
 windows/amd64

ASP.NET Core
Install .NET Core SDK on the base build image and ensure it includes the .NET Core command-line
interface tools.

Install .NET Core command-line interface tools on the base runtime image.

Supported build platform:

 linux/amd64

Supported runtime platforms:

 linux/amd64
 windows/amd64
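
By analogy with the Spring Boot example above, an ASP.NET Core project could override its base images in docker-assemble.yaml along these lines (the image names are illustrative and must meet the requirements described in this section):

version: "0.2.0"
aspnetcore:
  enabled: true
  version: "2.1"
  build-image: "mcr.microsoft.com/dotnet/core/sdk:2.1"
  runtime-images:
    linux/amd64: "mcr.microsoft.com/dotnet/core/aspnet:2.1"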

Bill of lading
Docker Assemble generates a bill of lading when building an image. This contains information about
the tools, base images, libraries, and packages used by Assemble to build the image and that are
included in the runtime image. The bill of lading has two parts – one for build and one for runtime.

The build part includes:

 The base image used


 A map of packages installed and their versions
 A map of libraries used for the build and their versions
 A map of build tools and their corresponding versions
The runtime part includes:

 The base image used


 A map of packages installed and their versions
 A map of runtime tools and their versions

You can find the bill of lading by inspecting the resulting image. It is stored using the
label com.docker.assemble.bill-of-lading:
$ docker image inspect --format '{{ index .Config.Labels "com.docker.assemble.bill-
of-lading" }}' <image>

Note: The bill of lading is only supported on the linux/amd64 platform and only for images which are
based on Alpine (apk), Red Hat (rpm) or Debian (dpkg-query).

Health checks
Docker Assemble only supports health checks on linux/amd64-based runtime images and requires
certain additional commands to be present, depending on the value of image.healthcheck.kind:

 simple-tcpport-open: requires the nc command


 springboot: requires the curl and jq commands

On Alpine (apk) and Debian (dpkg) based images, these dependencies are installed automatically.
For other base images, you must ensure they are present in the images you specify.
If your base runtime image lacks the necessary commands, you may need to
set image.healthcheck.kind to none in your docker-assemble.yaml file.
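
As a sketch, a Spring Boot project could opt into the springboot health check in its docker-assemble.yaml like this (the interval, timeout, start-period, and retry values are illustrative, not defaults):

version: "0.2.0"
springboot:
  enabled: true
image:
  healthcheck:
    kind: "springboot"
    interval: "30s"
    timeout: "10s"
    start-period: "60s"
    retries: 3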

Advanced backend management


Estimated reading time: 2 minutes

Backend access to host ports


Docker Assemble requires its own buildkit instance to be running in a Docker container on the local
system. You can start and manage the backend using the backend subcommand of docker
assemble. For more information, see Install Docker Assemble.
As the backend runs in a container with its own network namespace, it cannot access host
resources directly. This is most noticeable when trying to push to a local registry as localhost:5000.
The backend supports a sidecar container which proxies ports from within the backend container to
the container’s gateway (which is in effect a host IP). This is sufficient to allow access to host ports
which have been bound to 0.0.0.0 (or to the gateway specifically), but not ones which are bound
to 127.0.0.1.
By default, port 5000 is proxied in this way, allowing access to a local registry on
localhost:5000 (the most common setup). You can proxy other ports using the --allow-host-port
option to docker assemble backend start.
For example, to expose port 6000 instead of port 5000, run:
$ docker assemble backend start --allow-host-port 6000

Notes:

 You can repeat the --allow-host-port option or give it a comma-separated list of ports.
 Passing --allow-host-port 0 disables the default and no ports are exposed. For example:
$ docker assemble backend start --allow-host-port 0

 On Docker Desktop, this functionality allows the backend to access ports on the Docker
Desktop VM host, rather than the Windows or macOS host. To access a Windows or
macOS host port, you can use host.docker.internal as usual.

Backend sub-commands
Info
The info sub-command describes the backend:

~$ docker assemble backend info


ID: 2f03e7d288e6bea770a2acba4c8c918732aefcd1946c94c918e8a54792e4540f (running)
Image: docker/assemble-backend@sha256:«…»

Sidecar containers:
- 0f339c0cc8d7 docker-assemble-backend-username-proxy-port-5000 (running)

Found 1 worker(s):

- 70it95b8x171u5g9jbixkscz9
Platforms:
- linux/amd64
Labels:
- com.docker.assemble.commit: «…»
- org.mobyproject.buildkit.worker.executor: oci
- org.mobyproject.buildkit.worker.hostname: 2f03e7d288e6
- org.mobyproject.buildkit.worker.snapshotter: overlayfs

Build cache contains 54 entries, total size 3.65GB (0B currently in use)

Stop
The stop sub-command destroys the backend container.

~$ docker assemble backend stop

Logs
The logs sub-command displays the backend logs.

~$ docker assemble backend logs

Cache
The build cache is lost when the backend is stopped. To avoid this, you can create a volume
named docker-assemble-backend-cache-«username», and it will automatically be used as the build
cache.

Alternatively you can specify a named docker volume to use for the cache. For example:

~$ docker volume create $USER-assemble-cache


username-assemble-cache
~$ docker assemble backend start --cache-volume=username-assemble-cache
Pulling image «…»: Success
Started container "docker-assemble-backend-username" (74476d3fdea7)

For information regarding the current cache contents, run the command docker assemble backend
cache.
To clean the cache, run docker assemble backend cache purge.
docker assemble


Estimated reading time: 2 minutes

Description
assemble is a high-level build tool

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
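As a sketch of the config.json edit described above, the following writes the `experimental` flag into a scratch HOME so no real configuration is touched (the path ~/.docker/config.json is the CLI's default config location):

```shell
# Illustration only: write an experimental-enabled CLI config into a scratch HOME
export HOME="$(mktemp -d)"
mkdir -p "$HOME/.docker"
cat > "$HOME/.docker/config.json" <<'EOF'
{
  "experimental": "enabled"
}
EOF
# Confirm the flag was written
grep '"experimental"' "$HOME/.docker/config.json"
```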

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Options
Name, shorthand    Default    Description

--addr    docker-container://docker-assemble-backend-root    backend address

--tlscacert        specify CA certificate to use when validating the backend service’s TLS certificate

--tlscert        specify client certificate to use when connecting to backend service

--tlskey        specify client key to use when connecting to backend service

--tlsservername        override server name for validation of the backend service’s TLS certificate

Child commands
Command Description

docker assemble backend Manage build backend service

docker assemble build Build a project into a container

docker assemble version Print the version number of docker assemble

Parent command
Command Description

docker The base command for the Docker CLI.

docker assemble backend


Estimated reading time: 2 minutes

Description
Manage build backend service

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Child commands
Command Description

docker assemble backend cache Manage build cache

docker assemble backend image Print image to be used as backend

docker assemble backend info Print information about build backend service

docker assemble backend logs Show logs for build backend service

docker assemble backend start Start build backend service

docker assemble backend stop Stop build backend service

Parent command
Command Description

docker assemble assemble is a high-level build tool

Related commands
Command Description

docker assemble backend Manage build backend service



docker assemble build Build a project into a container

docker assemble version Print the version number of docker assemble

docker assemble backend cache


Estimated reading time: 2 minutes

Description
Manage build cache

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Child commands
Command Description

docker assemble backend cache purge Purge build cache

docker assemble backend cache usage Show build cache contents


Parent command
Command Description

docker assemble backend Manage build backend service

Related commands
Command Description

docker assemble backend cache Manage build cache

docker assemble backend image Print image to be used as backend

docker assemble backend info Print information about build backend service

docker assemble backend logs Show logs for build backend service

docker assemble backend start Start build backend service

docker assemble backend stop Stop build backend service

docker assemble backend cache purge
Estimated reading time: 1 minute

Description
Purge build cache

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker assemble backend cache purge

Parent command
Command Description

docker assemble backend cache Manage build cache

Related commands
Command Description

docker assemble backend cache purge Purge build cache

docker assemble backend cache usage Show build cache contents

docker assemble backend cache usage
Estimated reading time: 1 minute

Description
Show build cache contents
This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker assemble backend cache usage

Parent command
Command Description

docker assemble backend cache Manage build cache

Related commands
Command Description

docker assemble backend cache purge Purge build cache

docker assemble backend cache usage Show build cache contents

docker assemble backend image


Estimated reading time: 2 minutes

Description
Print image to be used as backend

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker assemble backend image

Parent command
Command Description

docker assemble backend Manage build backend service

Related commands
Command Description

docker assemble backend cache Manage build cache



docker assemble backend image Print image to be used as backend

docker assemble backend info Print information about build backend service

docker assemble backend logs Show logs for build backend service

docker assemble backend start Start build backend service

docker assemble backend stop Stop build backend service

Extended description
Print image to be used as backend.

This can be useful to do:

docker save -o assemble-backend.tar $(docker assemble backend image)

in order to transport “assemble-backend.tar” to an offline system, and then run:

docker load < assemble-backend.tar

docker assemble backend info


Estimated reading time: 1 minute

Description
Print information about build backend service

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker assemble backend info

Parent command
Command Description

docker assemble backend Manage build backend service

Related commands
Command Description

docker assemble backend cache Manage build cache

docker assemble backend image Print image to be used as backend

docker assemble backend info Print information about build backend service

docker assemble backend logs Show logs for build backend service

docker assemble backend start Start build backend service

docker assemble backend stop Stop build backend service

docker assemble backend logs


Estimated reading time: 2 minutes
Description
Show logs for build backend service

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker assemble backend logs

Options
Name, shorthand Default Description

--follow follow log output

--addr docker-container://docker-assemble-backend-root backend address

Parent command
Command Description

docker assemble backend Manage build backend service


Related commands
Command Description

docker assemble backend cache Manage build cache

docker assemble backend image Print image to be used as backend

docker assemble backend info Print information about build backend service

docker assemble backend logs Show logs for build backend service

docker assemble backend start Start build backend service

docker assemble backend stop Stop build backend service

docker assemble backend start


Estimated reading time: 2 minutes

Description
Start build backend service

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble backend start

Options
Name, shorthand    Default    Description

--allow-host-port    [5000]    allow the backend to access a host port by starting a proxy container

--cache-volume        named volume to use as build cache (default “docker-assemble-backend-cache-root” if it exists, otherwise an anonymous volume)

--host-port        host port to expose build service (0 is a random port)

--image    scratch    image to use

--addr    docker-container://docker-assemble-backend-root    backend address

Parent command
Command Description

docker assemble backend Manage build backend service

Related commands
Command Description

docker assemble backend cache Manage build cache

docker assemble backend image Print image to be used as backend

docker assemble backend info Print information about build backend service

docker assemble backend logs Show logs for build backend service

docker assemble backend start Start build backend service

docker assemble backend stop Stop build backend service

docker assemble backend stop


Estimated reading time: 2 minutes

Description
Stop build backend service

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker assemble backend stop

Options
Name, shorthand    Default    Description

--keep        stop but don’t destroy the container

--addr    docker-container://docker-assemble-backend-root    backend address

Parent command
Command Description

docker assemble backend Manage build backend service

Related commands
Command Description

docker assemble backend cache Manage build cache

docker assemble backend image Print image to be used as backend

docker assemble backend info Print information about build backend service

docker assemble backend logs Show logs for build backend service

docker assemble backend start Start build backend service

docker assemble backend stop Stop build backend service

docker assemble build


Estimated reading time: 2 minutes

Description
Build a project into a container

This command is experimental.


This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker assemble build [PATH]

Options
Name, shorthand    Default    Description

--debug-dump-config

--debug-dump-image

--debug-dump-llb

--debug-skip-build

--frontend

--frontend-devel

--label        label to write into the image as KEY=VALUE

--name        build image with repository NAME (default taken from project metadata)

--namespace        build image within repository NAMESPACE (default no namespace)

--option, -o        set an option as OPTION=VALUE

--port        port to expose from container

--progress    auto    set type of progress (auto, plain, tty). Use plain to show container output

--push        push result to registry, not local image store

--push-insecure        push result to insecure (http) registry, not local image store

--tag        tag image with TAG (default taken from project metadata or “latest”)

--addr    docker-container://docker-assemble-backend-root    backend address

Parent command
Command Description

docker assemble assemble is a high-level build tool

Related commands
Command Description

docker assemble backend Manage build backend service

docker assemble build Build a project into a container



docker assemble version Print the version number of docker assemble

docker assemble version


Estimated reading time: 1 minute

Description
Print the version number of docker assemble

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker assemble version

Parent command
Command Description

docker assemble assemble is a high-level build tool

Related commands
Command Description

docker assemble backend Manage build backend service

docker assemble build Build a project into a container

docker assemble version Print the version number of docker assemble

docker attach
Estimated reading time: 5 minutes

Description
Attach local standard input, output, and error streams to a running container

Usage
docker attach [OPTIONS] CONTAINER

Options
Name, shorthand Default Description

--detach-keys Override the key sequence for detaching a container

--no-stdin Do not attach STDIN

--sig-proxy true Proxy all received signals to the process

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Use docker attach to attach your terminal’s standard input, output, and error (or any combination of
the three) to a running container using the container’s ID or name. This allows you to view its
ongoing output or to control it interactively, as though the commands were running directly in your
terminal.
Note: The attach command will display the output of the ENTRYPOINT/CMD process. This can appear
as if the attach command is hung when in fact the process may simply not be interacting with the
terminal at that time.

You can attach to the same contained process multiple times simultaneously, from different sessions
on the Docker host.

To stop a container, use CTRL-c. This key sequence sends SIGKILL to the container. If --sig-proxy is true (the default), CTRL-c sends a SIGINT to the container instead. If the container was run with -i and -t, you can detach from a container and leave it running using the CTRL-p CTRL-q key sequence.
Note: A process running as PID 1 inside a container is treated specially by Linux: it ignores any
signal with the default action. So, the process will not terminate on SIGINT or SIGTERM unless it is
coded to do so.

It is forbidden to redirect the standard input of a docker attach command while attaching to a tty-
enabled container (i.e.: launched with -t).
While a client is connected to container’s stdio using docker attach, Docker uses a ~1MB memory
buffer to maximize the throughput of the application. If this buffer is filled, the speed of the API
connection will start to have an effect on the process output writing speed. This is similar to other
applications like SSH. Because of this, it is not recommended to run performance critical
applications that generate a lot of output in the foreground over a slow client connection. Instead,
users should use the docker logs command to get access to the logs.

Override the detach sequence


If you want, you can configure an override for the Docker key sequence for detach. This is useful if the Docker default sequence conflicts with a key sequence you use for other applications. There are two ways to define your own detach key sequence: as a per-container override, or as a configuration property in your entire configuration.

To override the sequence for an individual container, use the --detach-keys="<sequence>"flag with
the docker attach command. The format of the <sequence> is either a letter [a-Z], or
the ctrl- combined with any of the following:

 a-z (a single lowercase alpha character)
 @ (at sign)
 [ (left bracket)
 \\ (two backward slashes)
 _ (underscore)
 ^ (caret)

The values a, ctrl-a, X, and ctrl-\\ are all examples of valid key sequences. To configure a different default key sequence for all containers, see the Configuration file section.
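The grammar above can be sketched as a small shell check. Here valid_detach_keys is a hypothetical helper for illustration; it is not part of the Docker CLI:

```shell
# Hypothetical helper: accept a sequence that is a single letter,
# or "ctrl-" followed by one of a-z @ [ \ _ ^ (per the list above)
valid_detach_keys() {
  case "$1" in
    [a-zA-Z]) return 0 ;;
    ctrl-[a-z] | ctrl-@ | "ctrl-[" | 'ctrl-\' | ctrl-_ | "ctrl-^") return 0 ;;
    *) return 1 ;;
  esac
}

valid_detach_keys "ctrl-p" && echo "ctrl-p: valid"
valid_detach_keys "ctrl-1" || echo "ctrl-1: invalid"
```

A sequence that passes this check could then be supplied as, for example, `docker attach --detach-keys="ctrl-p" <container>`.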

Examples
Attach to and detach from a running container
$ docker run -d --name topdemo ubuntu /usr/bin/top -b

$ docker attach topdemo

top - 02:05:52 up 3:05, 0 users, load average: 0.01, 0.02, 0.05


Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 373572k total, 355560k used, 18012k free, 27872k buffers
Swap: 786428k total, 0k used, 786428k free, 221740k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND


1 root 20 0 17200 1116 912 R 0 0.3 0:00.03 top

top - 02:05:55 up 3:05, 0 users, load average: 0.01, 0.02, 0.05


Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 373572k total, 355244k used, 18328k free, 27872k buffers
Swap: 786428k total, 0k used, 786428k free, 221776k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND


1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top

top - 02:05:58 up 3:06, 0 users, load average: 0.01, 0.02, 0.05


Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.3%sy, 0.0%ni, 99.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 373572k total, 355780k used, 17792k free, 27880k buffers
Swap: 786428k total, 0k used, 786428k free, 221776k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND


1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top
^C$

$ echo $?
0
$ docker ps -a | grep topdemo

7998ac8581f9 ubuntu:14.04 "/usr/bin/top -b" 38 seconds ago


Exited (0) 21 seconds ago topdemo

Get the exit code of the container’s command


In this second example, you can see that the exit code returned by the bash process is also returned by the docker attach command to its caller:
$ docker run --name test -d -it debian

275c44472aebd77c926d4527885bb09f2f6db21d878c75f0a1c212c03d3bcfab

$ docker attach test

root@f38c87f2a42d:/# exit 13

exit

$ echo $?

13

$ docker ps -a | grep test

275c44472aeb debian:7 "/bin/bash" 26 seconds ago
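
The exit-status propagation shown here is the ordinary shell convention; a docker-free sketch:

```shell
# A foreground process's exit code is surfaced to the caller via $?,
# just as docker attach surfaces the container process's exit code.
bash -c 'exit 13'
echo "exit code: $?"
# → exit code: 13
```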

docker build
Estimated reading time: 22 minutes

Description
Build an image from a Dockerfile

Usage
docker build [OPTIONS] PATH | URL | -

Options
Name, shorthand    Default    Description

--add-host        Add a custom host-to-IP mapping (host:ip)

--build-arg        Set build-time variables

--cache-from        Images to consider as cache sources

--cgroup-parent        Optional parent cgroup for the container

--compress        Compress the build context using gzip

--cpu-period        Limit the CPU CFS (Completely Fair Scheduler) period

--cpu-quota        Limit the CPU CFS (Completely Fair Scheduler) quota

--cpu-shares, -c        CPU shares (relative weight)

--cpuset-cpus        CPUs in which to allow execution (0-3, 0,1)

--cpuset-mems        MEMs in which to allow execution (0-3, 0,1)

--disable-content-trust    true    Skip image verification

--file, -f        Name of the Dockerfile (Default is ‘PATH/Dockerfile’)

--force-rm        Always remove intermediate containers

--iidfile        Write the image ID to the file

--isolation        Container isolation technology

--label        Set metadata for an image

--memory, -m        Memory limit

--memory-swap        Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap

--network        API 1.25+ Set the networking mode for the RUN instructions during build

--no-cache        Do not use cache when building the image

--output, -o        API 1.40+ Output destination (format: type=local,dest=path)

--platform        experimental (daemon) API 1.32+ Set platform if server is multi-platform capable

--progress    auto    Set type of progress output (auto, plain, tty). Use plain to show container output

--pull        Always attempt to pull a newer version of the image

--quiet, -q        Suppress the build output and print image ID on success

--rm    true    Remove intermediate containers after a successful build

--secret        API 1.39+ Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret

--security-opt        Security options

--shm-size        Size of /dev/shm

--squash        experimental (daemon) API 1.25+ Squash newly built layers into a single new layer

--ssh        API 1.39+ SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|<id>[=<socket>|<key>[,<key>]])

--stream        experimental (daemon) API 1.31+ Stream attaches to server to negotiate build context

--tag, -t        Name and optionally a tag in the ‘name:tag’ format

--target        Set the target build stage to build.

--ulimit        Ulimit options
Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s
context is the set of files located in the specified PATH or URL. The build process can refer to any of
the files in the context. For example, your build can use a COPY instruction to reference a file in the
context.
The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball
contexts and plain text files.

Git repositories
When the URL parameter points to the location of a Git repository, the repository acts as the build
context. The system recursively fetches the repository and its submodules. The commit history is not
preserved. A repository is first pulled into a temporary directory on your local host. After that
succeeds, the directory is sent to the Docker daemon as the context. Local copy gives you the ability
to access private repositories using local user credentials, VPN’s, and so forth.
Note: If the URL parameter contains a fragment, the system will recursively clone the repository and its submodules using a git clone --recursive command.

Git URLs accept context configuration in their fragment section, separated by a colon :. The first part
represents the reference that Git will check out, and can be either a branch, a tag, or a remote
reference. The second part represents a subdirectory inside the repository that will be used as a
build context.
For example, run this command to use a directory called docker in the branch container:
$ docker build https://github.com/docker/rootfs.git#container:docker

The following table represents all the valid suffixes with their build contexts:

Build Syntax Suffix Commit Used Build Context Used

myrepo.git refs/heads/master /

myrepo.git#mytag refs/tags/mytag /

myrepo.git#mybranch refs/heads/mybranch /

myrepo.git#pull/42/head refs/pull/42/head /

myrepo.git#:myfolder refs/heads/master /myfolder

myrepo.git#master:myfolder refs/heads/master /myfolder

myrepo.git#mytag:myfolder refs/tags/mytag /myfolder

myrepo.git#mybranch:myfolder refs/heads/mybranch /myfolder
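The ref/subdirectory split shown in the table can be sketched with plain shell parameter expansion (illustrative only; the actual parsing happens in the builder):

```shell
# Split a build URL's fragment into <ref> and <subdirectory> (sketch)
url='https://github.com/docker/rootfs.git#container:docker'
frag="${url#*#}"     # container:docker
ref="${frag%%:*}"    # container  (an empty ref would mean the default branch)
dir="${frag#*:}"     # docker
echo "ref=$ref dir=$dir"
# → ref=container dir=docker
```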

Tarball contexts
If you pass a URL to a remote tarball, the URL itself is sent to the daemon:

$ docker build http://server/context.tar.gz

The download operation will be performed on the host the Docker daemon is running on, which is
not necessarily the same host from which the build command is being issued. The Docker daemon
will fetch context.tar.gz and use it as the build context. Tarball contexts must be tar archives
conforming to the standard tar UNIX format and can be compressed with any one of the ‘xz’, ‘bzip2’,
‘gzip’ or ‘identity’ (no compression) formats.
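A tarball of the required shape can be produced with standard tar; the file names here are illustrative:

```shell
# Build a gzip-compressed context tarball rooted at the context directory
mkdir -p ctx
printf 'FROM busybox\nCMD ["echo", "hello"]\n' > ctx/Dockerfile
tar -czf context.tar.gz -C ctx .
tar -tzf context.tar.gz   # the Dockerfile must sit at the archive root
```

Serving context.tar.gz over HTTP would then let the daemon fetch it as in the example above.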

Text files
Instead of specifying a context, you can pass a single Dockerfile in the URL or pipe the file in
via STDIN. To pipe a Dockerfile from STDIN:
$ docker build - < Dockerfile

With Powershell on Windows, you can run:

Get-Content Dockerfile | docker build -

If you use STDIN or specify a URL pointing to a plain text file, the system places the contents into a file
called Dockerfile, and any -f, --file option is ignored. In this scenario, there is no context.
By default the docker build command will look for a Dockerfile at the root of the build context.
The -f, --file, option lets you specify the path to an alternative file to use instead. This is useful in
cases where the same set of files are used for multiple builds. The path must be to a file within the
build context. If a relative path is specified then it is interpreted as relative to the root of the context.
In most cases, it’s best to put each Dockerfile in an empty directory. Then, add to that directory only
the files needed for building the Dockerfile. To increase the build’s performance, you can exclude
files and directories by adding a .dockerignore file to that directory as well. For information on
creating one, see the .dockerignore file.
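A minimal .dockerignore might look like the following; the patterns are illustrative, not prescriptive:

```shell
# Write a .dockerignore that keeps bulky, build-irrelevant paths out of the context
cat > .dockerignore <<'EOF'
.git
node_modules
*.log
EOF
cat .dockerignore
```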
If the Docker client loses connection to the daemon, the build is canceled. This happens if you
interrupt the Docker client with CTRL-c or if the Docker client is killed for any reason. If the build
initiated a pull which is still running at the time the build is cancelled, the pull is cancelled as well.

Examples
Build with PATH
$ docker build .

Uploading context 10240 bytes


Step 1/3 : FROM busybox
Pulling repository busybox
---> e9aa60c60128MB/2.284 MB (100%) endpoint: https://cdn-registry-1.docker.io/v1/
Step 2/3 : RUN ls -lh /
---> Running in 9c9e81692ae9
total 24
drwxr-xr-x 2 root root 4.0K Mar 12 2013 bin
drwxr-xr-x 5 root root 4.0K Oct 19 00:19 dev
drwxr-xr-x 2 root root 4.0K Oct 19 00:19 etc
drwxr-xr-x 2 root root 4.0K Nov 15 23:34 lib
lrwxrwxrwx 1 root root 3 Mar 12 2013 lib64 -> lib
dr-xr-xr-x 116 root root 0 Nov 15 23:34 proc
lrwxrwxrwx 1 root root 3 Mar 12 2013 sbin -> bin
dr-xr-xr-x 13 root root 0 Nov 15 23:34 sys
drwxr-xr-x 2 root root 4.0K Mar 12 2013 tmp
drwxr-xr-x 2 root root 4.0K Nov 15 23:34 usr
---> b35f4035db3f
Step 3/3 : CMD echo Hello world
---> Running in 02071fceb21b
---> f52f38b7823e
Successfully built f52f38b7823e
Removing intermediate container 9c9e81692ae9
Removing intermediate container 02071fceb21b

This example specifies that the PATH is ., and so all the files in the local directory get tarred and sent
to the Docker daemon. The PATH specifies where to find the files for the “context” of the build on the
Docker daemon. Remember that the daemon could be running on a remote machine and that no
parsing of the Dockerfile happens at the client side (where you’re running docker build). That
means that all the files at PATH get sent, not just the ones referenced by ADD in the Dockerfile.
The transfer of context from the local machine to the Docker daemon is what the docker client means
when you see the “Sending build context” message.
If you wish to keep the intermediate containers after the build is complete, you must use --rm=false.
This does not affect the build cache.

Build with URL


$ docker build github.com/creack/docker-firefox

This will clone the GitHub repository and use the cloned repository as context. The Dockerfile at the
root of the repository is used as Dockerfile. You can specify an arbitrary Git repository by using
the git:// or git@ scheme.
$ docker build -f ctx/Dockerfile http://server/ctx.tar.gz

Downloading context: http://server/ctx.tar.gz [===================>] 240 B/240 B


Step 1/3 : FROM busybox
---> 8c2e06607696
Step 2/3 : ADD ctx/container.cfg /
---> e7829950cee3
Removing intermediate container b35224abf821
Step 3/3 : CMD /bin/ls
---> Running in fbc63d321d73
---> 3286931702ad
Removing intermediate container fbc63d321d73
Successfully built 377c409b35e4
This sends the URL http://server/ctx.tar.gz to the Docker daemon, which downloads and
extracts the referenced tarball. The -f ctx/Dockerfile parameter specifies a path
inside ctx.tar.gz to the Dockerfile that is used to build the image. Any ADD commands in
that Dockerfile that refer to local paths must be relative to the root of the contents
inside ctx.tar.gz. In the example above, the tarball contains a directory ctx/, so the ADD
ctx/container.cfg / operation works as expected.
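A tarball context with that layout can be assembled locally; a minimal sketch, where the contents of container.cfg are hypothetical placeholders matching the file names in the example above:

```shell
# Build a tarball context with the layout the example expects:
# a ctx/ directory holding the Dockerfile and the file it ADDs.
mkdir -p ctx
cat > ctx/Dockerfile <<'EOF'
FROM busybox
ADD ctx/container.cfg /
CMD /bin/ls
EOF
# container.cfg contents are a hypothetical placeholder.
echo "setting=value" > ctx/container.cfg
tar czf ctx.tar.gz ctx
tar tzf ctx.tar.gz
```

Serving ctx.tar.gz over HTTP would then let a command of the form docker build -f ctx/Dockerfile http://server/ctx.tar.gz consume it as in the example above.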

Build with -
$ docker build - < Dockerfile

This will read a Dockerfile from STDIN without context. Due to the lack of a context, no contents of
any local directory will be sent to the Docker daemon. Since there is no context, a
Dockerfile ADD only works if it refers to a remote URL.
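For instance, a Dockerfile piped via STDIN with no context can still fetch files, provided ADD points at a remote URL; the server and file name below are hypothetical placeholders:

```
FROM busybox
ADD http://server/app.conf /etc/app.conf
CMD cat /etc/app.conf
```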
$ docker build - < context.tar.gz

This will build an image for a compressed context read from STDIN. Supported formats are: bzip2,
gzip and xz.

Use a .dockerignore file


$ docker build .

Uploading context 18.829 MB


Uploading context
Step 1/2 : FROM busybox
---> 769b9341d937
Step 2/2 : CMD echo Hello world
---> Using cache
---> 99cc1ad10469
Successfully built 99cc1ad10469
$ echo ".git" > .dockerignore
$ docker build .
Uploading context 6.76 MB
Uploading context
Step 1/2 : FROM busybox
---> 769b9341d937
Step 2/2 : CMD echo Hello world
---> Using cache
---> 99cc1ad10469
Successfully built 99cc1ad10469

This example shows the use of the .dockerignore file to exclude the .git directory from the context.
Its effect can be seen in the changed size of the uploaded context. The builder reference contains
detailed information on creating a .dockerignore file.

Tag an image (-t)


$ docker build -t vieux/apache:2.0 .

This will build like the previous example, but it will then tag the resulting image. The repository name
will be vieux/apache and the tag will be 2.0. Read more about valid tags.
You can apply multiple tags to an image. For example, you can apply the latest tag to a newly built
image and add another tag that references a specific version. For example, to tag an image both
as whenry/fedora-jboss:latest and whenry/fedora-jboss:v2.1, use the following:
$ docker build -t whenry/fedora-jboss:latest -t whenry/fedora-jboss:v2.1 .

Specify a Dockerfile (-f)


$ docker build -f Dockerfile.debug .

This will use a file called Dockerfile.debug for the build instructions instead of Dockerfile.
$ curl example.com/remote/Dockerfile | docker build -f - .

The above command will use the current directory as the build context and read a Dockerfile from
STDIN.

$ docker build -f dockerfiles/Dockerfile.debug -t myapp_debug .


$ docker build -f dockerfiles/Dockerfile.prod -t myapp_prod .

The above commands will build the current build context (as specified by the .) twice, once using a
debug version of a Dockerfile and once using a production version.
$ cd /home/me/myapp/some/dir/really/deep
$ docker build -f /home/me/myapp/dockerfiles/debug /home/me/myapp
$ docker build -f ../../../../dockerfiles/debug /home/me/myapp

These two docker build commands do the exact same thing. They both use the contents of
the debug file instead of looking for a Dockerfile and will use /home/me/myapp as the root of the build
context. Note that debug is in the directory structure of the build context, regardless of how you refer
to it on the command line.
Note: docker build will return a no such file or directory error if the file or directory does not
exist in the uploaded context. This may happen if there is no context, or if you specify a file that is
elsewhere on the host system. The context is limited to the current directory (and its children) for
security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason
why ADD ../file will not work.

Use a custom parent cgroup (--cgroup-parent)


When docker build is run with the --cgroup-parent option, the containers used in the build are run
with the corresponding docker run flag.
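For example, assuming a pre-existing cgroup named build-cgroup (a hypothetical name), the build containers can be placed under it:

```
$ docker build --cgroup-parent build-cgroup .
```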

Set ulimits in container (--ulimit)


Using the --ulimit option with docker build will cause each build step’s container to be started
using those --ulimit flag values.
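For example, to cap the open-file limit for each build step’s container (the values here are illustrative):

```
$ docker build --ulimit nofile=1024:1024 .
```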

Set build-time variables (--build-arg)


You can use ENV instructions in a Dockerfile to define variable values. These values persist in the
built image. However, often persistence is not what you want. Users want to specify variables
differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction
lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 --build-arg
FTP_PROXY=http://40.50.60.5:4567 .

This flag allows you to pass the build-time variables that are accessed like regular environment
variables in the RUN instruction of the Dockerfile. Also, these values don’t persist in the intermediate
or final images like ENV values do. You must add --build-arg for each build argument.
Using this flag will not alter the output you see when the ARG lines from the Dockerfile are echoed
during the build process.
For detailed information on using ARG and ENV instructions, see the Dockerfile reference.
You may also use the --build-arg flag without a value, in which case the value from the local
environment will be propagated into the Docker container being built:
$ export HTTP_PROXY=http://10.20.30.2:1234
$ docker build --build-arg HTTP_PROXY .

This is similar to how docker run -e works. Refer to the docker run documentation for more
information.
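A Dockerfile consuming the proxy build args above might look like the following minimal sketch; the ARG names match the --build-arg flags shown earlier, and the RUN line is illustrative:

```
FROM busybox
ARG HTTP_PROXY
ARG FTP_PROXY
# ARG values are visible to RUN steps during the build,
# but unlike ENV they do not persist in the final image.
RUN echo "building with proxy: ${HTTP_PROXY:-none}"
```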

Optional security options (--security-opt)


This flag is only supported on a daemon running on Windows, and only supports
the credentialspec option. The credentialspec must be in the
format file://spec.txt or registry://keyname.

Specify isolation technology for container (--isolation)


This option is useful in situations where you are running Docker containers on Windows. The
--isolation=<value> option sets a container’s isolation technology. On Linux, the only supported
value is default, which uses Linux namespaces. On Microsoft Windows, you can specify these
values:
Value     Description

default   Use the value specified by the Docker daemon’s --exec-opt. If the daemon does not
          specify an isolation technology, Microsoft Windows uses process as its default value.

process   Namespace isolation only.

hyperv    Hyper-V hypervisor partition-based isolation.

Specifying the --isolation flag without a value is the same as setting --isolation="default".

Add entries to container hosts file (--add-host)


You can add other hosts into a container’s /etc/hosts file by using one or more --add-hostflags.
This example adds a static address for a host named docker:
$ docker build --add-host=docker:10.180.0.1 .

Specifying target build stage (--target)


When building a Dockerfile with multiple build stages, --target can be used to specify an
intermediate build stage by name as a final stage for the resulting image. Commands after the target
stage will be skipped.
FROM debian AS build-env
...

FROM alpine AS production-env


...
$ docker build -t mybuildimage --target build-env .

Squash an image’s layers (--squash) (experimental)


OVERVIEW

Once the image is built, squash the new layers into a new image with a single new layer. Squashing
does not destroy any existing image, rather it creates a new image with the content of the squashed
layers. This effectively makes it look like all Dockerfile commands were created with a single layer.
The build cache is preserved with this method.
The --squash option is an experimental feature, and should not be considered stable.

Squashing layers can be beneficial if your Dockerfile produces multiple layers modifying the same
files, for example, files that are created in one step, and removed in another step. For other use-
cases, squashing images may actually have a negative impact on performance; when pulling an
image consisting of multiple layers, layers can be pulled in parallel, and allows sharing layers
between images (saving space).

For most use cases, multi-stage builds are a better alternative, as they give more fine-grained
control over your build, and can take advantage of future optimizations in the builder. Refer to
the use multi-stage builds section in the userguide for more information.

KNOWN LIMITATIONS

The --squash option has a number of known limitations:

 When squashing layers, the resulting image cannot take advantage of layer sharing with
other images, and may use significantly more space. Sharing the base image is still
supported.
 When using this option you may see significantly more space used due to storing two copies
of the image, one for the build cache with all the cache layers intact, and one for the
squashed version.
 While squashing layers may produce smaller images, it may have a negative impact on
performance, as a single layer takes longer to extract, and downloading a single layer cannot
be parallelized.
 When attempting to squash an image that does not make changes to the filesystem (for
example, the Dockerfile only contains ENV instructions), the squash step will fail (see issue
#33823).

PREREQUISITES

The example on this page uses experimental mode in Docker 1.13.

Experimental mode can be enabled by using the --experimental flag when starting the Docker
daemon or setting experimental: true in the daemon.json configuration file.
By default, experimental mode is disabled. To see the current configuration, use the docker
version command.

Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 092cba3
Built: Wed Feb 8 06:35:24 2017
OS/Arch: linux/amd64
Experimental: false

[...]

To enable experimental mode, users need to restart the docker daemon with the experimental flag
enabled.

ENABLE DOCKER EXPERIMENTAL

Experimental features are now included in the standard Docker binaries as of version 1.13.0. To
enable experimental features, you need to start the Docker daemon with the --experimental flag. You
can also enable the daemon flag via /etc/docker/daemon.json, e.g.:
{
"experimental": true
}

Then make sure the experimental flag is enabled:


$ docker version -f '{{.Server.Experimental}}'
true

BUILD AN IMAGE WITH --SQUASH ARGUMENT


The following is an example of docker build with the --squash argument:
FROM busybox
RUN echo hello > /hello
RUN echo world >> /hello
RUN touch remove_me /remove_me
ENV HELLO world
RUN rm /remove_me

An image named test is built with the --squash argument.


$ docker build --squash -t test .

[...]

If everything is right, the history will look like this:

$ docker history test

IMAGE          CREATED         CREATED BY                                      SIZE       COMMENT
4e10cb5b4cac   3 seconds ago                                                   12 B       merge sha256:88a7b0112a41826885df0e7072698006ee8f621c6ab99fca7fe9151d7b599702 to sha256:47bcc53f74dc94b1920f0b34f6036096526296767650f223433fe65c35f149eb
<missing>      5 minutes ago   /bin/sh -c rm /remove_me                        0 B
<missing>      5 minutes ago   /bin/sh -c #(nop) ENV HELLO=world               0 B
<missing>      5 minutes ago   /bin/sh -c touch remove_me /remove_me           0 B
<missing>      5 minutes ago   /bin/sh -c echo world >> /hello                 0 B
<missing>      6 minutes ago   /bin/sh -c echo hello > /hello                  0 B
<missing>      7 weeks ago     /bin/sh -c #(nop) CMD ["sh"]                    0 B
<missing>      7 weeks ago     /bin/sh -c #(nop) ADD file:47ca6e777c36a4cfff   1.113 MB

Note that all the intermediate layers are listed as <missing>, and that there is a new layer with the
COMMENT merge.
Test the image: check that /remove_me is gone, that /hello contains hello\nworld, and that the
HELLO environment variable’s value is world.
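Those checks can be run with docker run against the test image; a sketch, where the expected outputs assume the Dockerfile from this example:

```
$ docker run --rm test ls /remove_me
ls: /remove_me: No such file or directory
$ docker run --rm test cat /hello
hello
world
$ docker run --rm test sh -c 'echo $HELLO'
world
```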

docker builder
Estimated reading time: 1 minute

Description
Manage builds

API 1.31+ The client and daemon API must both be at least 1.31 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker builder COMMAND

Child commands
Command Description

docker builder build Build an image from a Dockerfile

docker builder prune Remove build cache

Parent command
Command Description

docker The base command for the Docker CLI.


docker builder build
Estimated reading time: 4 minutes

Description
Build an image from a Dockerfile

API 1.31+ The client and daemon API must both be at least 1.31 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker builder build [OPTIONS] PATH | URL | -

Options
Name, shorthand           Default   Description

--add-host                          Add a custom host-to-IP mapping (host:ip)
--build-arg                         Set build-time variables
--cache-from                        Images to consider as cache sources
--cgroup-parent                     Optional parent cgroup for the container
--compress                          Compress the build context using gzip
--cpu-period                        Limit the CPU CFS (Completely Fair Scheduler) period
--cpu-quota                         Limit the CPU CFS (Completely Fair Scheduler) quota
--cpu-shares , -c                   CPU shares (relative weight)
--cpuset-cpus                       CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems                       MEMs in which to allow execution (0-3, 0,1)
--disable-content-trust   true      Skip image verification
--file , -f                         Name of the Dockerfile (Default is ‘PATH/Dockerfile’)
--force-rm                          Always remove intermediate containers
--iidfile                           Write the image ID to the file
--isolation                         Container isolation technology
--label                             Set metadata for an image
--memory , -m                       Memory limit
--memory-swap                       Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap
--network                           API 1.25+ Set the networking mode for the RUN instructions during build
--no-cache                          Do not use cache when building the image
--output , -o                       API 1.40+ Output destination (format: type=local,dest=path)
--platform                          experimental (daemon) API 1.32+ Set platform if server is multi-platform capable
--progress                auto      Set type of progress output (auto, plain, tty). Use plain to show container output
--pull                              Always attempt to pull a newer version of the image
--quiet , -q                        Suppress the build output and print image ID on success
--rm                      true      Remove intermediate containers after a successful build
--secret                            API 1.39+ Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret
--security-opt                      Security options
--shm-size                          Size of /dev/shm
--squash                            experimental (daemon) API 1.25+ Squash newly built layers into a single new layer
--ssh                               API 1.39+ SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|<id>[=<socket>|<key>[,<key>]])
--stream                            experimental (daemon) API 1.31+ Stream attaches to server to negotiate build context
--tag , -t                          Name and optionally a tag in the ‘name:tag’ format
--target                            Set the target build stage to build
--ulimit                            Ulimit options

Parent command
Command Description

docker builder Manage builds

Related commands
Command Description

docker builder build Build an image from a Dockerfile

docker builder prune Remove build cache

docker builder prune


Estimated reading time: 1 minute

Description
Remove build cache

API 1.39+ The client and daemon API must both be at least 1.39 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker builder prune

Options
Name, shorthand Default Description

--all , -a Remove all unused images, not just dangling ones

--filter Provide filter values (e.g. ‘unused-for=24h’)

--force , -f Do not prompt for confirmation

--keep-storage Amount of disk space to keep for cache
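
Combining these flags, a typical invocation might look like the following; the filter and size values are illustrative:

```
$ docker builder prune --force --filter unused-for=24h --keep-storage 512MB
```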

Parent command
Command Description

docker builder Manage builds

Related commands
Command Description

docker builder build Build an image from a Dockerfile

docker builder prune Remove build cache

docker buildx
Estimated reading time: 2 minutes
Description
Build with BuildKit

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Child commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance


Command Description

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

Parent command
Command Description

docker The base command for the Docker CLI.

docker buildx bake


Estimated reading time: 2 minutes

Description
Build from a file

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx bake [OPTIONS] [TARGET...]
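
The build definition file is typically an HCL or JSON file such as docker-bake.hcl; a minimal sketch, where the target name app, the Dockerfile path, and the image tag are hypothetical:

```
# docker-bake.hcl (hypothetical example)
group "default" {
  targets = ["app"]
}

target "app" {
  dockerfile = "Dockerfile"
  tags       = ["myorg/app:latest"]
}
```

With such a file in place, docker buildx bake --print shows the resolved options without building, and --set app.tags=myorg/app:dev overrides the tag at invocation time.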

Options
Name, shorthand   Default   Description

--file , -f                 Build definition file
--no-cache                  Do not use cache when building the image
--print                     Print the options without building
--progress        auto      Set type of progress output (auto, plain, tty). Use plain to show container output
--pull                      Always attempt to pull a newer version of the image
--set                       Override target value (eg: target.key=value)

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance


Command Description

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx build


Estimated reading time: 4 minutes

Description
Start a build

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx build [OPTIONS] PATH | URL | -

Options
Name, shorthand     Default   Description

--add-host                    Add a custom host-to-IP mapping (host:ip)
--build-arg                   Set build-time variables
--cache-from                  External cache sources (eg. user/app:cache, type=local,src=path/to/dir)
--cache-to                    Cache export destinations (eg. user/app:cache, type=local,dest=path/to/dir)
--cgroup-parent               Optional parent cgroup for the container
--compress                    Compress the build context using gzip
--cpu-period                  Limit the CPU CFS (Completely Fair Scheduler) period
--cpu-quota                   Limit the CPU CFS (Completely Fair Scheduler) quota
--cpu-shares , -c             CPU shares (relative weight)
--cpuset-cpus                 CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems                 MEMs in which to allow execution (0-3, 0,1)
--file , -f                   Name of the Dockerfile (Default is ‘PATH/Dockerfile’)
--force-rm                    Always remove intermediate containers
--iidfile                     Write the image ID to the file
--isolation                   Container isolation technology
--label                       Set metadata for an image
--load                        Shorthand for --output=type=docker
--memory , -m                 Memory limit
--memory-swap                 Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap
--network                     Set the networking mode for the RUN instructions during build
--no-cache                    Do not use cache when building the image
--output , -o                 Output destination (format: type=local,dest=path)
--platform                    Set target platform for build
--progress          auto      Set type of progress output (auto, plain, tty). Use plain to show container output
--pull                        Always attempt to pull a newer version of the image
--push                        Shorthand for --output=type=registry
--quiet , -q                  Suppress the build output and print image ID on success
--rm                true      Remove intermediate containers after a successful build
--secret                      Secret file to expose to the build: id=mysecret,src=/local/secret
--security-opt                Security options
--shm-size                    Size of /dev/shm
--squash                      Squash newly built layers into a single new layer
--ssh                         SSH agent socket or keys to expose to the build (format: default|<id>[=<socket>|<key>[,<key>]])
--tag , -t                    Name and optionally a tag in the ‘name:tag’ format
--target                      Set the target build stage to build
--ulimit                      Ulimit options

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx create


Estimated reading time: 2 minutes
Description
Create a new builder instance

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]

Options
Name, shorthand   Default   Description

--append                    Append a node to builder instead of changing it
--driver                    Driver to use (available: [])
--leave                     Remove a node from builder instead of changing it
--name                      Builder instance name
--node                      Create/modify node with given name
--platform                  Fixed platforms for current node
--use                       Set the current builder instance

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx imagetools


Estimated reading time: 2 minutes
Description
Commands to work on images in registry

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Child commands
Command Description

docker buildx imagetools create Create a new image based on source images

docker buildx imagetools inspect Show details of image in the registry

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx imagetools create


Estimated reading time: 2 minutes

Description
Create a new image based on source images

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx imagetools create [OPTIONS] [SOURCE] [SOURCE...]

Options
Name, shorthand Default Description

--append Append to existing manifest

--dry-run Show final image instead of pushing

--file , -f Read source descriptor from file

--tag , -t Set reference for new image
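
For example, to combine per-architecture images into one multi-arch manifest; the image names here are hypothetical placeholders:

```
$ docker buildx imagetools create -t myorg/app:latest \
    myorg/app:amd64 myorg/app:arm64
```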

Parent command
Command Description

docker buildx imagetools Commands to work on images in registry

Related commands
Command Description

docker buildx imagetools create Create a new image based on source images

docker buildx imagetools inspect Show details of image in the registry

docker buildx imagetools inspect


Estimated reading time: 1 minute

Description
Show details of image in the registry

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx imagetools inspect [OPTIONS] NAME

Options
Name, shorthand Default Description

--raw Show original JSON manifest

Parent command
Command Description

docker buildx imagetools Commands to work on images in registry


Related commands
Command Description

docker buildx imagetools create Create a new image based on source images

docker buildx imagetools inspect Show details of image in the registry

docker buildx inspect



Description
Inspect current builder instance

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx inspect [NAME]
Options
Name, shorthand Default Description

--bootstrap Ensure builder has booted before inspecting

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx ls
Description
List builder instances

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx ls

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build


docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx rm

Description
Remove a builder instance

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx rm [NAME]

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx stop



Description
Stop builder instance

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx stop [NAME]

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file


docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx use



Description
Set the current builder instance

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx use [OPTIONS] NAME

Options
Name, shorthand Default Description

--default Set builder as default for current context

--global Builder persists context changes
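Taken together, the buildx subcommands listed here form a builder lifecycle. A minimal sketch, where the builder name `mybuilder` is illustrative:

```shell
docker buildx create --name mybuilder   # create a new builder instance
docker buildx use mybuilder             # make it the current builder
docker buildx inspect --bootstrap       # boot the builder and show its details
docker buildx ls                        # list builder instances; * marks the current one
docker buildx stop mybuilder            # stop the builder when idle
docker buildx rm mybuilder              # remove it entirely
```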

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance


docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker buildx version



Description
Show buildx version information

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker buildx version

Parent command
Command Description

docker buildx Build with BuildKit

Related commands
Command Description

docker buildx bake Build from a file

docker buildx build Start a build

docker buildx create Create a new builder instance

docker buildx imagetools Commands to work on images in registry

docker buildx inspect Inspect current builder instance

docker buildx ls List builder instances

docker buildx rm Remove a builder instance

docker buildx stop Stop builder instance

docker buildx use Set the current builder instance

docker buildx version Show buildx version information

docker checkpoint

Description
Manage checkpoints
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.

This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker checkpoint COMMAND

Child commands
Command Description

docker checkpoint create Create a checkpoint from a running container

docker checkpoint ls List checkpoints for a container

docker checkpoint rm Remove a checkpoint

Parent command
Command Description

docker The base command for the Docker CLI.


docker checkpoint create

Description
Create a checkpoint from a running container

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.

This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker checkpoint create [OPTIONS] CONTAINER CHECKPOINT

Options
Name, shorthand Default Description

--checkpoint-dir Use a custom checkpoint storage directory

--leave-running Leave the container running after checkpoint
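A minimal checkpoint/restore round trip might look like the following. It assumes an experimental daemon with CRIU installed; the container name `looper` and checkpoint name `checkpoint1` are illustrative:

```shell
# Start a container that prints a counter once per second
docker run -d --name looper busybox \
    /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'

# Checkpoint it (the container stops unless --leave-running is given)
docker checkpoint create looper checkpoint1

# List checkpoints, then resume the container from the saved state
docker checkpoint ls looper
docker start --checkpoint checkpoint1 looper

# Remove the checkpoint once it is no longer needed
docker checkpoint rm looper checkpoint1
```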


Parent command
Command Description

docker checkpoint Manage checkpoints

Related commands
Command Description

docker checkpoint create Create a checkpoint from a running container

docker checkpoint ls List checkpoints for a container

docker checkpoint rm Remove a checkpoint

docker checkpoint ls

Description
List checkpoints for a container

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.

This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker checkpoint ls [OPTIONS] CONTAINER

Options
Name, shorthand Default Description

--checkpoint-dir Use a custom checkpoint storage directory

Parent command
Command Description

docker checkpoint Manage checkpoints

Related commands
Command Description

docker checkpoint create Create a checkpoint from a running container

docker checkpoint ls List checkpoints for a container

docker checkpoint rm Remove a checkpoint

docker checkpoint rm
Description
Remove a checkpoint

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.

This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker checkpoint rm [OPTIONS] CONTAINER CHECKPOINT

Options
Name, shorthand Default Description

--checkpoint-dir Use a custom checkpoint storage directory

Parent command
Command Description

docker checkpoint Manage checkpoints

Related commands
Command Description

docker checkpoint create Create a checkpoint from a running container

docker checkpoint ls List checkpoints for a container

docker checkpoint rm Remove a checkpoint

docker cluster

Description
Docker Cluster

Options
Name, shorthand Default Description

--dry-run Skip provisioning resources

--log-level warn Set the logging level (“trace”|”debug”|”info”|”warn”|”error”|”fatal”)

Child commands
Command Description

docker cluster backup Backup a running cluster

docker cluster create Create a new Docker Cluster


docker cluster inspect Display detailed information about a cluster

docker cluster ls List all available clusters

docker cluster restore Restore a cluster from a backup

docker cluster rm Remove a cluster

docker cluster update Update a running cluster’s desired state

docker cluster version Print Version, Commit, and Build type

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
A tool to build and manage Docker Clusters.

docker cluster backup



Description
Backup a running cluster

Usage
docker cluster backup [OPTIONS] cluster

Options
Name,
Default Description
shorthand

--env , -e Set environment variables

--file backup.tar.gz Cluster backup filename

--passphrase Cluster backup passphrase

--dry-run Skip provisioning resources

Set the logging level


--log-level warn
(“trace”|”debug”|”info”|”warn”|”error”|”fatal”)

Parent command
Command Description

docker cluster Docker Cluster

Related commands
Command Description

docker cluster backup Backup a running cluster

docker cluster create Create a new Docker Cluster

docker cluster inspect Display detailed information about a cluster

docker cluster ls List all available clusters

docker cluster restore Restore a cluster from a backup

docker cluster rm Remove a cluster

docker cluster update Update a running cluster’s desired state

docker cluster version Print Version, Commit, and Build type


docker cluster create

Description
Create a new Docker Cluster

Usage
docker cluster create [OPTIONS]

Options
Name, shorthand Default Description

--env , -e Set environment variables

--example aws Display an example cluster declaration

--file , -f cluster.yml Cluster declaration

--name , -n Name for the cluster

--switch-context , -s Switch context after cluster create

--dry-run Skip provisioning resources

--log-level warn Set the logging level (“trace”|”debug”|”info”|”warn”|”error”|”fatal”)

Parent command
Command Description

docker cluster Docker Cluster


Related commands
Command Description

docker cluster backup Backup a running cluster

docker cluster create Create a new Docker Cluster

docker cluster inspect Display detailed information about a cluster

docker cluster ls List all available clusters

docker cluster restore Restore a cluster from a backup

docker cluster rm Remove a cluster

docker cluster update Update a running cluster’s desired state

docker cluster version Print Version, Commit, and Build type

docker cluster inspect



Description
Display detailed information about a cluster

Usage
docker cluster inspect [OPTIONS] cluster

Options
Name, shorthand Default Description

--all , -a Display complete info about cluster

--dry-run Skip provisioning resources


--log-level warn Set the logging level (“trace”|”debug”|”info”|”warn”|”error”|”fatal”)

Parent command
Command Description

docker cluster Docker Cluster

Related commands
Command Description

docker cluster backup Backup a running cluster

docker cluster create Create a new Docker Cluster

docker cluster inspect Display detailed information about a cluster

docker cluster ls List all available clusters

docker cluster restore Restore a cluster from a backup

docker cluster rm Remove a cluster

docker cluster update Update a running cluster’s desired state

docker cluster version Print Version, Commit, and Build type

docker cluster ls

Description
List all available clusters
Usage
docker cluster ls [OPTIONS]

Options
Name, shorthand Default Description

--quiet , -q Only display numeric IDs

--dry-run Skip provisioning resources

--log-level warn Set the logging level (“trace”|”debug”|”info”|”warn”|”error”|”fatal”)

Parent command
Command Description

docker cluster Docker Cluster

Related commands
Command Description

docker cluster backup Backup a running cluster

docker cluster create Create a new Docker Cluster

docker cluster inspect Display detailed information about a cluster

docker cluster ls List all available clusters

docker cluster restore Restore a cluster from a backup

docker cluster rm Remove a cluster

docker cluster update Update a running cluster’s desired state

docker cluster version Print Version, Commit, and Build type


docker cluster restore

Description
Restore a cluster from a backup

Usage
docker cluster restore [OPTIONS] cluster

Options
Name, shorthand Default Description

--env , -e Set environment variables

--file backup.tar.gz Cluster backup filename

--passphrase Cluster backup passphrase

--dry-run Skip provisioning resources

--log-level warn Set the logging level (“trace”|”debug”|”info”|”warn”|”error”|”fatal”)

Parent command
Command Description

docker cluster Docker Cluster

Related commands
Command Description

docker cluster backup Backup a running cluster

docker cluster create Create a new Docker Cluster

docker cluster inspect Display detailed information about a cluster

docker cluster ls List all available clusters

docker cluster restore Restore a cluster from a backup

docker cluster rm Remove a cluster

docker cluster update Update a running cluster’s desired state

docker cluster version Print Version, Commit, and Build type

docker cluster rm

Description
Remove a cluster

Usage
docker cluster rm [OPTIONS] cluster

Options
Name, shorthand Default Description

--env , -e Set environment variables

--force , -f Force removal of the cluster files

--dry-run Skip provisioning resources


--log-level warn Set the logging level (“trace”|”debug”|”info”|”warn”|”error”|”fatal”)

Parent command
Command Description

docker cluster Docker Cluster

Related commands
Command Description

docker cluster backup Backup a running cluster

docker cluster create Create a new Docker Cluster

docker cluster inspect Display detailed information about a cluster

docker cluster ls List all available clusters

docker cluster restore Restore a cluster from a backup

docker cluster rm Remove a cluster

docker cluster update Update a running cluster’s desired state

docker cluster version Print Version, Commit, and Build type

docker cluster update



Description
Update a running cluster’s desired state
Usage
docker cluster update [OPTIONS] cluster

Options
Name, shorthand Default Description

--env , -e Set environment variables

--file , -f Cluster definition

--dry-run Skip provisioning resources

--log-level warn Set the logging level (“trace”|”debug”|”info”|”warn”|”error”|”fatal”)

Parent command
Command Description

docker cluster Docker Cluster

Related commands
Command Description

docker cluster backup Backup a running cluster

docker cluster create Create a new Docker Cluster

docker cluster inspect Display detailed information about a cluster

docker cluster ls List all available clusters

docker cluster restore Restore a cluster from a backup

docker cluster rm Remove a cluster

docker cluster update Update a running cluster’s desired state


docker cluster version Print Version, Commit, and Build type

docker commit

Description
Create a new image from a container’s changes

Usage
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

Options
Name, shorthand Default Description

--author , -a Author (e.g., “John Hannibal Smith [email protected]”)

--change , -c Apply Dockerfile instruction to the created image

--message , -m Commit message

--pause , -p true Pause container during commit

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
It can be useful to commit a container’s file changes or settings into a new image. This allows you to
debug a container by running an interactive shell, or to export a working dataset to another server.
Generally, it is better to use Dockerfiles to manage your images in a documented and maintainable
way. Read more about valid image names and tags.

The commit operation will not include any data contained in volumes mounted inside the container.

By default, the container being committed and its processes will be paused while the image is
committed. This reduces the likelihood of encountering data corruption during the process of
creating the commit. If this behavior is undesired, set the --pause option to false.
The --change option will apply Dockerfile instructions to the image that is created.
Supported Dockerfile instructions:CMD|ENTRYPOINT|ENV|EXPOSE|LABEL|ONBUILD|USER|VOLUME|WORKDIR

Examples
Commit a container
$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
c3f279d17e0a        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours                             desperate_dubinsky
197387f1b436        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours                             focused_hamilton

$ docker commit c3f279d17e0a svendowideit/testimage:version3

f5283438590d

$ docker images

REPOSITORY                TAG                 ID                  CREATED             SIZE
svendowideit/testimage    version3            f5283438590d        16 seconds ago      335.7 MB
Commit a container with new configurations
$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
c3f279d17e0a        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours                             desperate_dubinsky
197387f1b436        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours                             focused_hamilton

$ docker inspect -f "{{ .Config.Env }}" c3f279d17e0a

[HOME=/ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin]

$ docker commit --change "ENV DEBUG true" c3f279d17e0a svendowideit/testimage:version3

f5283438590d

$ docker inspect -f "{{ .Config.Env }}" f5283438590d

[HOME=/ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin DEBUG=true]

Commit a container with new CMD and EXPOSE instructions


$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
c3f279d17e0a        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours                             desperate_dubinsky
197387f1b436        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours                             focused_hamilton

$ docker commit --change='CMD ["apachectl", "-DFOREGROUND"]' -c "EXPOSE 80" c3f279d17e0a svendowideit/testimage:version4

f5283438590d

$ docker run -d svendowideit/testimage:version4

89373736e2e7f00bc149bd783073ac43d0507da250e999f3f1036e0db60817c0

$ docker ps

CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS               NAMES
89373736e2e7        testimage:version4      "apachectl -DFOREGROU"   3 seconds ago       Up 2 seconds        80/tcp              distracted_fermat
c3f279d17e0a        ubuntu:12.04            /bin/bash                7 days ago          Up 25 hours                             desperate_dubinsky
197387f1b436        ubuntu:12.04            /bin/bash                7 days ago          Up 25 hours                             focused_hamilton

docker config

Description
Manage Docker configs

API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker config COMMAND

Child commands
Command Description

docker config create Create a config from a file or STDIN

docker config inspect Display detailed information on one or more configs

docker config ls List configs

docker config rm Remove one or more configs

Parent command
Command Description

docker The base command for the Docker CLI.

More info
Store configuration data using Docker Configs

docker config create



Description
Create a config from a file or STDIN

API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker config create [OPTIONS] CONFIG file|-

Options
Name, shorthand Default Description

--label , -l Config labels

--template-driver Template driver (API 1.37+)
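For example, configs can be created from STDIN or from a file. The names below are illustrative, and the commands must run against a Swarm-mode manager:

```shell
# Create a config from STDIN (the trailing "-" means read standard input)
echo "port=8080" | docker config create app_config -

# Create a config from a file, attaching a label
docker config create --label env=prod site_config ./site.conf
```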

Parent command
Command Description

docker config Manage Docker configs

Related commands
Command Description

docker config create Create a config from a file or STDIN

docker config inspect Display detailed information on one or more configs

docker config ls List configs

docker config rm Remove one or more configs

docker config inspect



Description
Display detailed information on one or more configs

API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker config inspect [OPTIONS] CONFIG [CONFIG...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

--pretty Print the information in a human friendly format
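As a sketch (assuming a config named `app_config` already exists), the default output is a JSON array, and `--format` extracts individual fields with a Go template:

```shell
# Full JSON description of the config
docker config inspect app_config

# Extract a single field with a Go template
docker config inspect --format '{{ .Spec.Name }}' app_config

# Human-friendly summary
docker config inspect --pretty app_config
```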

Parent command
Command Description

docker config Manage Docker configs

Related commands
Command Description

docker config create Create a config from a file or STDIN

docker config inspect Display detailed information on one or more configs

docker config ls List configs

docker config rm Remove one or more configs

docker config ls

Description
List configs

API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Swarm This command works with the Swarm orchestrator.

Usage
docker config ls [OPTIONS]

Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Pretty-print configs using a Go template

--quiet , -q Only display IDs

Parent command
Command Description

docker config Manage Docker configs

Related commands
Command Description

docker config create Create a config from a file or STDIN

docker config inspect Display detailed information on one or more configs

docker config ls List configs

docker config rm Remove one or more configs

docker config rm
Description
Remove one or more configs

API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker config rm CONFIG [CONFIG...]

Parent command
Command Description

docker config Manage Docker configs

Related commands
Command Description

docker config create Create a config from a file or STDIN

docker config inspect Display detailed information on one or more configs

docker config ls List configs

docker config rm Remove one or more configs

docker container

Description
Manage containers
Usage
docker container COMMAND

Child commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container

docker container commit Create a new image from a container’s changes

docker container cp Copy files/folders between a container and the local filesystem

docker container create Create a new container

docker container diff Inspect changes to files or directories on a container’s filesystem

docker container exec Run a command in a running container

docker container export Export a container’s filesystem as a tar archive

docker container
Display detailed information on one or more containers
inspect

docker container kill Kill one or more running containers

docker container logs Fetch the logs of a container

docker container ls List containers

docker container pause Pause all processes within one or more containers

docker container port List port mappings or a specific mapping for the container

docker container prune Remove all stopped containers

docker container
Rename a container
rename
Command Description

docker container restart Restart one or more containers

docker container rm Remove one or more containers

docker container run Run a command in a new container

docker container start Start one or more stopped containers

docker container stats Display a live stream of container(s) resource usage statistics

docker container stop Stop one or more running containers

docker container top Display the running processes of a container

docker container
Unpause all processes within one or more containers
unpause

docker container update Update configuration of one or more containers

docker container wait Block until one or more containers stop, then print their exit codes

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage containers.

docker container attach



Description
Attach local standard input, output, and error streams to a running container
Usage
docker container attach [OPTIONS] CONTAINER
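For example, to attach to a hypothetical running container named web and override the detach sequence:

$ docker container attach --detach-keys="ctrl-x" web

Pressing ctrl-x then detaches from the container without stopping it.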

Options
Name, shorthand Default Description

--detach-keys Override the key sequence for detaching a container

--no-stdin Do not attach STDIN

--sig-proxy true Proxy all received signals to the process

Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container commit



Description
Create a new image from a container’s changes

Usage
docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
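For example, to commit a hypothetical container named web to a new image, recording an author and overriding the default command:

$ docker container commit --author "Jane Doe" --change 'CMD ["nginx", "-g", "daemon off;"]' web webimage:v2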

Options
Name, shorthand Default Description

--author , -a Author (e.g., “John Hannibal Smith [email protected]”)

--change , -c Apply Dockerfile instruction to the created image

--message , -m Commit message

--pause , -p true Pause container during commit

Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container cp

Description
Copy files/folders between a container and the local filesystem

Usage
docker container cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
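For example, to copy a file out of a hypothetical container named web, and another file into it:

$ docker container cp web:/etc/nginx/nginx.conf ./nginx.conf
$ docker container cp ./index.html web:/usr/share/nginx/html/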

Options
Name, shorthand Default Description

--archive , -a Archive mode (copy all uid/gid information)

--follow-link , -L Always follow symbolic links in SRC_PATH

Parent command
Command Description

docker container Manage containers


Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

Extended description
Copy files/folders between a container and the local filesystem

Use '-' as the source to read a tar archive from STDIN and extract it to a directory destination in a
container. Use '-' as the destination to stream a tar archive of a container source to STDOUT.

docker container create



Description
Create a new container

Usage
docker container create [OPTIONS] IMAGE [COMMAND] [ARG...]
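For example, to create (but not start) a container from the nginx image with a name and a published port, then start it later:

$ docker container create --name web -p 8080:80 nginx
$ docker container start web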

Options
Name, shorthand Default Description

--add-host Add a custom host-to-IP mapping (host:ip)
--attach , -a Attach to STDIN, STDOUT or STDERR
--blkio-weight Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
--blkio-weight-device Block IO weight (relative device weight)
--cap-add Add Linux capabilities
--cap-drop Drop Linux capabilities
--cgroup-parent Optional parent cgroup for the container
--cidfile Write the container ID to the file
--cpu-count CPU count (Windows only)
--cpu-percent CPU percent (Windows only)
--cpu-period Limit CPU CFS (Completely Fair Scheduler) period
--cpu-quota Limit CPU CFS (Completely Fair Scheduler) quota
--cpu-rt-period Limit CPU real-time period in microseconds (API 1.25+)
--cpu-rt-runtime Limit CPU real-time runtime in microseconds (API 1.25+)
--cpu-shares , -c CPU shares (relative weight)
--cpus Number of CPUs (API 1.25+)
--cpuset-cpus CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems MEMs in which to allow execution (0-3, 0,1)
--device Add a host device to the container
--device-cgroup-rule Add a rule to the cgroup allowed devices list
--device-read-bps Limit read rate (bytes per second) from a device
--device-read-iops Limit read rate (IO per second) from a device
--device-write-bps Limit write rate (bytes per second) to a device
--device-write-iops Limit write rate (IO per second) to a device
--disable-content-trust true Skip image verification
--dns Set custom DNS servers
--dns-opt Set DNS options
--dns-option Set DNS options
--dns-search Set custom DNS search domains
--domainname Container NIS domain name
--entrypoint Overwrite the default ENTRYPOINT of the image
--env , -e Set environment variables
--env-file Read in a file of environment variables
--expose Expose a port or a range of ports
--gpus GPU devices to add to the container (‘all’ to pass all GPUs) (API 1.40+)
--group-add Add additional groups to join
--health-cmd Command to run to check health
--health-interval Time between running the check (ms|s|m|h) (default 0s)
--health-retries Consecutive failures needed to report unhealthy
--health-start-period Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s) (API 1.29+)
--health-timeout Maximum time to allow one check to run (ms|s|m|h) (default 0s)
--help Print usage
--hostname , -h Container host name
--init Run an init inside the container that forwards signals and reaps processes (API 1.25+)
--interactive , -i Keep STDIN open even if not attached
--io-maxbandwidth Maximum IO bandwidth limit for the system drive (Windows only)
--io-maxiops Maximum IOps limit for the system drive (Windows only)
--ip IPv4 address (e.g., 172.30.100.104)
--ip6 IPv6 address (e.g., 2001:db8::33)
--ipc IPC mode to use
--isolation Container isolation technology
--kernel-memory Kernel memory limit
--label , -l Set meta data on a container
--label-file Read in a line delimited file of labels
--link Add link to another container
--link-local-ip Container IPv4/IPv6 link-local addresses
--log-driver Logging driver for the container
--log-opt Log driver options
--mac-address Container MAC address (e.g., 92:d0:c6:0a:29:33)
--memory , -m Memory limit
--memory-reservation Memory soft limit
--memory-swap Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap
--memory-swappiness -1 Tune container memory swappiness (0 to 100)
--mount Attach a filesystem mount to the container
--name Assign a name to the container
--net Connect a container to a network
--net-alias Add network-scoped alias for the container
--network Connect a container to a network
--network-alias Add network-scoped alias for the container
--no-healthcheck Disable any container-specified HEALTHCHECK
--oom-kill-disable Disable OOM Killer
--oom-score-adj Tune host’s OOM preferences (-1000 to 1000)
--pid PID namespace to use
--pids-limit Tune container pids limit (set -1 for unlimited)
--platform Set platform if server is multi-platform capable (experimental (daemon), API 1.32+)
--privileged Give extended privileges to this container
--publish , -p Publish a container’s port(s) to the host
--publish-all , -P Publish all exposed ports to random ports
--read-only Mount the container’s root filesystem as read only
--restart no Restart policy to apply when a container exits
--rm Automatically remove the container when it exits
--runtime Runtime to use for this container
--security-opt Security Options
--shm-size Size of /dev/shm
--stop-signal SIGTERM Signal to stop a container
--stop-timeout Timeout (in seconds) to stop a container (API 1.25+)
--storage-opt Storage driver options for the container
--sysctl Sysctl options
--tmpfs Mount a tmpfs directory
--tty , -t Allocate a pseudo-TTY
--ulimit Ulimit options
--user , -u Username or UID (format: <name|uid>[:<group|gid>])
--userns User namespace to use
--uts UTS namespace to use
--volume , -v Bind mount a volume
--volume-driver Optional volume driver for the container
--volumes-from Mount volumes from the specified container(s)
--workdir , -w Working directory inside the container

Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container diff



Description
Inspect changes to files or directories on a container’s filesystem
Usage
docker container diff CONTAINER
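For example, against a hypothetical container named web:

$ docker container diff web

Each line of output is prefixed with A (added), D (deleted), or C (changed).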

Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container exec



Description
Run a command in a running container

Usage
docker container exec [OPTIONS] CONTAINER COMMAND [ARG...]
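For example, to open an interactive shell in a hypothetical running container named web, or run a one-off command with an extra environment variable and working directory:

$ docker container exec -it web sh
$ docker container exec -e DEBUG=1 -w /tmp web env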

Options
Name, shorthand Default Description

--detach , -d Detached mode: run command in the background

--detach-keys Override the key sequence for detaching a container

--env , -e Set environment variables (API 1.25+)

--interactive , -i Keep STDIN open even if not attached

--privileged Give extended privileges to the command

--tty , -t Allocate a pseudo-TTY

--user , -u Username or UID (format: <name|uid>[:<group|gid>])

--workdir , -w Working directory inside the container (API 1.35+)

Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container export



Description
Export a container’s filesystem as a tar archive

Usage
docker container export [OPTIONS] CONTAINER
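For example, to export the filesystem of a hypothetical container named web to a tar file:

$ docker container export --output web.tar web

The archive contains the container’s filesystem only; the contents of volumes are not included.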

Options
Name, shorthand Default Description

--output , -o Write to a file, instead of STDOUT

Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container inspect



Description
Display detailed information on one or more containers

Usage
docker container inspect [OPTIONS] CONTAINER [CONTAINER...]
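For example, to extract a single field with a Go template from a hypothetical container named web:

$ docker container inspect --format '{{.State.Status}}' web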

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

--size , -s Display total file sizes

Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container kill



Description
Kill one or more running containers

Usage
docker container kill [OPTIONS] CONTAINER [CONTAINER...]
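For example, to send SIGHUP instead of the default KILL signal to a hypothetical container named web:

$ docker container kill --signal=SIGHUP web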

Options
Name, shorthand Default Description

--signal , -s KILL Signal to send to the container


Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container logs



Description
Fetch the logs of a container

Usage
docker container logs [OPTIONS] CONTAINER
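For example, to follow the last 100 lines of logs from a hypothetical container named web, or show only entries from the past 10 minutes:

$ docker container logs --follow --tail 100 web
$ docker container logs --since 10m web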

Options
Name, shorthand Default Description

--details Show extra details provided to logs
--follow , -f Follow log output
--since Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)
--tail all Number of lines to show from the end of the logs
--timestamps , -t Show timestamps
--until Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes) (API 1.35+)

Parent command
Command Description

docker container Manage containers

Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container ls

Description
List containers

Usage
docker container ls [OPTIONS]
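For example, to list all containers (including stopped ones) that have exited, or only the IDs of running containers:

$ docker container ls --all --filter status=exited
$ docker container ls --quiet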

Options
Name, shorthand Default Description

--all , -a Show all containers (default shows just running)

--filter , -f Filter output based on conditions provided

--format Pretty-print containers using a Go template

--last , -n -1 Show n last created containers (includes all states)

--latest , -l Show the latest created container (includes all states)

--no-trunc Don’t truncate output

--quiet , -q Only display numeric IDs

--size , -s Display total file sizes

Parent command
Command Description

docker container Manage containers


Related commands
Command Description

docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes

docker container pause



Description
Pause all processes within one or more containers

Usage
docker container pause CONTAINER [CONTAINER...]
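For example, to suspend all processes in a hypothetical container named web and later resume them:

$ docker container pause web
$ docker container unpause web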


docker container port



Description
List port mappings or a specific mapping for the container

Usage
docker container port CONTAINER [PRIVATE_PORT[/PROTO]]
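For example, assuming a container with published ports (the name `web` and port numbers are illustrative):

```shell
# Run nginx with a published port
docker run -d --name web -p 8080:80 nginx

# List all port mappings for the container
docker container port web

# Query one private port; the protocol suffix is optional
docker container port web 80/tcp
```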


docker container prune



Description
Remove all stopped containers

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker container prune [OPTIONS]

Options
Name, shorthand    Default    Description

--filter                      Provide filter values (e.g. ‘until=<timestamp>')

--force , -f                  Do not prompt for confirmation



Extended description
Removes all stopped containers.

Examples
Prune containers
$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
4a7f7eebae0f63178aff7eb0aa39cd3f0627a203ab2df258c1a00b456cf20063
f98f9c2aa1eaf727e4ec9c0283bc7d4aa4762fbdba7f26191f26c97f64090360

Total reclaimed space: 212 B

Filtering
The filtering flag (--filter) format is “key=value”. If there is more than one filter, pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 until (<timestamp>) - only remove containers created before given timestamp


 label (label=<key>, label=<key>=<value>, label!=<key>, or label!=<key>=<value>) - only
remove containers with (or without, in case label!=... is used) the specified labels.

The until filter can be Unix timestamps, date-formatted timestamps, or Go duration strings
(e.g. 10m, 1h30m) computed relative to the daemon machine’s time. Supported formats for
date-formatted timestamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the daemon will
be used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp.
When providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of
seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds
(aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more
than nine digits long.
The label filter accepts two formats. One is the label=... (label=<key> or label=<key>=<value>),
which removes containers with the specified labels. The other format is
the label!=... (label!=<key> or label!=<key>=<value>), which removes containers without the
specified labels.
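A label filter can also be combined with --force to skip the confirmation prompt (the label names below are illustrative):

```shell
# Remove stopped containers that carry the label "env=test"
docker container prune --force --filter "label=env=test"

# Remove stopped containers EXCEPT those carrying the label "keep"
docker container prune --force --filter "label!=keep"
```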

The following removes containers created more than 5 minutes ago:

$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}'

CONTAINER ID        IMAGE       COMMAND     CREATED AT                      STATUS
61b9efa71024        busybox     "sh"        2017-01-04 13:23:33 -0800 PST   Exited (0) 41 seconds ago
53a9bc23a516        busybox     "sh"        2017-01-04 13:11:59 -0800 PST   Exited (0) 12 minutes ago

$ docker container prune --force --filter "until=5m"

Deleted Containers:
53a9bc23a5168b6caa2bfbefddf1b30f93c7ad57f3dec271fd32707497cb9369

Total reclaimed space: 25 B

$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}'

CONTAINER ID        IMAGE       COMMAND     CREATED AT                      STATUS
61b9efa71024        busybox     "sh"        2017-01-04 13:23:33 -0800 PST   Exited (0) 44 seconds ago

The following removes containers created before 2017-01-04T13:10:00:

$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}'

CONTAINER ID        IMAGE       COMMAND     CREATED AT                      STATUS
53a9bc23a516        busybox     "sh"        2017-01-04 13:11:59 -0800 PST   Exited (0) 7 minutes ago
4a75091a6d61        busybox     "sh"        2017-01-04 13:09:53 -0800 PST   Exited (0) 9 minutes ago

$ docker container prune --force --filter "until=2017-01-04T13:10:00"

Deleted Containers:
4a75091a6d618526fcd8b33ccd6e5928ca2a64415466f768a6180004b0c72c6c

Total reclaimed space: 27 B

$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}'

CONTAINER ID        IMAGE       COMMAND     CREATED AT                      STATUS
53a9bc23a516        busybox     "sh"        2017-01-04 13:11:59 -0800 PST   Exited (0) 9 minutes ago
docker container rename

Description
Rename a container

Usage
docker container rename CONTAINER NEW_NAME
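For example (both container names below are illustrative):

```shell
# Replace an auto-generated name with a descriptive one
docker container rename condescending_ptolemy payments-api

# The container is now addressable by its new name
docker container logs payments-api
```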


docker container restart



Description
Restart one or more containers

Usage
docker container restart [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--time , -t 10 Seconds to wait for stop before killing the container
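For example (the container name `web` is illustrative):

```shell
# Restart with the default 10-second grace period before SIGKILL
docker container restart web

# Give the main process up to 30 seconds to exit cleanly
docker container restart --time 30 web
```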

docker container rm

Description
Remove one or more containers

Usage
docker container rm [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--force , -f Force the removal of a running container (uses SIGKILL)

--link , -l Remove the specified link

--volumes , -v Remove the volumes associated with the container
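For example (the container name `old-job` is illustrative):

```shell
# Remove a stopped container
docker container rm old-job

# Force-remove a running container (sends SIGKILL)
# and also remove its anonymous volumes
docker container rm --force --volumes old-job
```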


docker container run



Description
Run a command in a new container

Usage
docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]

Options
Name, shorthand           Default   Description

--add-host                          Add a custom host-to-IP mapping (host:ip)

--attach , -a                       Attach to STDIN, STDOUT or STDERR

--blkio-weight                      Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)

--blkio-weight-device               Block IO weight (relative device weight)

--cap-add                           Add Linux capabilities

--cap-drop                          Drop Linux capabilities

--cgroup-parent                     Optional parent cgroup for the container

--cidfile                           Write the container ID to the file

--cpu-count                         CPU count (Windows only)

--cpu-percent                       CPU percent (Windows only)

--cpu-period                        Limit CPU CFS (Completely Fair Scheduler) period

--cpu-quota                         Limit CPU CFS (Completely Fair Scheduler) quota

--cpu-rt-period                     API 1.25+ Limit CPU real-time period in microseconds

--cpu-rt-runtime                    API 1.25+ Limit CPU real-time runtime in microseconds

--cpu-shares , -c                   CPU shares (relative weight)

--cpus                              API 1.25+ Number of CPUs

--cpuset-cpus                       CPUs in which to allow execution (0-3, 0,1)

--cpuset-mems                       MEMs in which to allow execution (0-3, 0,1)

--detach , -d                       Run container in background and print container ID

--detach-keys                       Override the key sequence for detaching a container

--device                            Add a host device to the container

--device-cgroup-rule                Add a rule to the cgroup allowed devices list

--device-read-bps                   Limit read rate (bytes per second) from a device

--device-read-iops                  Limit read rate (IO per second) from a device

--device-write-bps                  Limit write rate (bytes per second) to a device

--device-write-iops                 Limit write rate (IO per second) to a device

--disable-content-trust   true      Skip image verification

--dns                               Set custom DNS servers

--dns-opt                           Set DNS options

--dns-option                        Set DNS options

--dns-search                        Set custom DNS search domains

--domainname                        Container NIS domain name

--entrypoint                        Overwrite the default ENTRYPOINT of the image

--env , -e                          Set environment variables

--env-file                          Read in a file of environment variables

--expose                            Expose a port or a range of ports

--gpus                              API 1.40+ GPU devices to add to the container (‘all’ to pass all GPUs)

--group-add                         Add additional groups to join

--health-cmd                        Command to run to check health

--health-interval                   Time between running the check (ms|s|m|h) (default 0s)

--health-retries                    Consecutive failures needed to report unhealthy

--health-start-period               API 1.29+ Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s)

--health-timeout                    Maximum time to allow one check to run (ms|s|m|h) (default 0s)

--help                              Print usage

--hostname , -h                     Container host name

--init                              API 1.25+ Run an init inside the container that forwards signals and reaps processes

--interactive , -i                  Keep STDIN open even if not attached

--io-maxbandwidth                   Maximum IO bandwidth limit for the system drive (Windows only)

--io-maxiops                        Maximum IOps limit for the system drive (Windows only)

--ip                                IPv4 address (e.g., 172.30.100.104)

--ip6                               IPv6 address (e.g., 2001:db8::33)

--ipc                               IPC mode to use

--isolation                         Container isolation technology

--kernel-memory                     Kernel memory limit

--label , -l                        Set meta data on a container

--label-file                        Read in a line delimited file of labels

--link                              Add link to another container

--link-local-ip                     Container IPv4/IPv6 link-local addresses

--log-driver                        Logging driver for the container

--log-opt                           Log driver options

--mac-address                       Container MAC address (e.g., 92:d0:c6:0a:29:33)

--memory , -m                       Memory limit

--memory-reservation                Memory soft limit

--memory-swap                       Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap

--memory-swappiness       -1        Tune container memory swappiness (0 to 100)

--mount                             Attach a filesystem mount to the container

--name                              Assign a name to the container

--net                               Connect a container to a network

--net-alias                         Add network-scoped alias for the container

--network                           Connect a container to a network

--network-alias                     Add network-scoped alias for the container

--no-healthcheck                    Disable any container-specified HEALTHCHECK

--oom-kill-disable                  Disable OOM Killer

--oom-score-adj                     Tune host’s OOM preferences (-1000 to 1000)

--pid                               PID namespace to use

--pids-limit                        Tune container pids limit (set -1 for unlimited)

--platform                          experimental (daemon) API 1.32+ Set platform if server is multi-platform capable

--privileged                        Give extended privileges to this container

--publish , -p                      Publish a container’s port(s) to the host

--publish-all , -P                  Publish all exposed ports to random ports

--read-only                         Mount the container’s root filesystem as read only

--restart                 no        Restart policy to apply when a container exits

--rm                                Automatically remove the container when it exits

--runtime                           Runtime to use for this container

--security-opt                      Security Options

--shm-size                          Size of /dev/shm

--sig-proxy               true      Proxy received signals to the process

--stop-signal             SIGTERM   Signal to stop a container

--stop-timeout                      API 1.25+ Timeout (in seconds) to stop a container

--storage-opt                       Storage driver options for the container

--sysctl                            Sysctl options

--tmpfs                             Mount a tmpfs directory

--tty , -t                          Allocate a pseudo-TTY

--ulimit                            Ulimit options

--user , -u                         Username or UID (format: <name|uid>[:<group|gid>])

--userns                            User namespace to use

--uts                               UTS namespace to use

--volume , -v                       Bind mount a volume

--volume-driver                     Optional volume driver for the container

--volumes-from                      Mount volumes from the specified container(s)

--workdir , -w                      Working directory inside the container
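As a sketch of how several of these options combine in practice (the name, ports, path, and limits below are all illustrative):

```shell
# Detached web server with a published port, an environment variable,
# a read-only bind mount, a restart policy, and a memory cap
docker container run \
  --detach \
  --name web \
  --publish 8080:80 \
  --env NGINX_HOST=example.com \
  --volume "$PWD/site:/usr/share/nginx/html:ro" \
  --restart unless-stopped \
  --memory 256m \
  nginx

# Or an interactive, throwaway shell that is removed on exit
docker container run --rm -it busybox sh
```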


docker container start



Description
Start one or more stopped containers

Usage
docker container start [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand       Default   Description

--attach , -a                   Attach STDOUT/STDERR and forward signals

--checkpoint                    experimental (daemon) Restore from this checkpoint

--checkpoint-dir                experimental (daemon) Use a custom checkpoint storage directory

--detach-keys                   Override the key sequence for detaching a container

--interactive , -i              Attach container’s STDIN
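For example (the container name `web` is illustrative):

```shell
# Start a stopped container in the background
docker container start web

# Start it attached, forwarding STDOUT/STDERR, signals, and STDIN
docker container start --attach --interactive web
```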


docker container stats



Description
Display a live stream of container(s) resource usage statistics

Usage
docker container stats [OPTIONS] [CONTAINER...]

Options
Name, shorthand Default Description

--all , -a Show all containers (default shows just running)

--format Pretty-print images using a Go template

--no-stream Disable streaming stats and only pull the first result

--no-trunc Do not truncate output
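For example, to take a one-shot snapshot rather than a live stream (the template columns shown are standard docker stats placeholders):

```shell
# Print one round of stats for all running containers, then exit
docker container stats --no-stream

# Select custom columns via a Go template
docker container stats --no-stream \
  --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```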

docker container stop

Description
Stop one or more running containers

Usage
docker container stop [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--time , -t 10 Seconds to wait for stop before killing it
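For example (the container name `web` is illustrative):

```shell
# Send the stop signal (SIGTERM by default), then SIGKILL after 10 seconds
docker container stop web

# Allow a slow service a longer grace period to shut down
docker container stop --time 60 web
```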


docker container top



Description
Display the running processes of a container

Usage
docker container top CONTAINER [ps OPTIONS]
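For example, trailing arguments are passed through to ps on the host (the container name `web` is illustrative):

```shell
# Default process listing for the container
docker container top web

# Pass ps options through, e.g. the BSD-style "aux" output
docker container top web aux
```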


docker container unpause


Estimated reading time: 2 minutes

Description
Unpause all processes within one or more containers

Usage
docker container unpause CONTAINER [CONTAINER...]
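For example, a paused container (the name worker here is hypothetical) can be frozen and resumed as a pair:

```shell
# Freeze all processes in the container, then resume them
$ docker container pause worker
$ docker container unpause worker
```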

Parent command
Command Description

docker container Manage containers

Related commands
Command                      Description

docker container attach      Attach local standard input, output, and error streams to a running container
docker container commit      Create a new image from a container’s changes
docker container cp          Copy files/folders between a container and the local filesystem
docker container create      Create a new container
docker container diff        Inspect changes to files or directories on a container’s filesystem
docker container exec        Run a command in a running container
docker container export      Export a container’s filesystem as a tar archive
docker container inspect     Display detailed information on one or more containers
docker container kill        Kill one or more running containers
docker container logs        Fetch the logs of a container
docker container ls          List containers
docker container pause       Pause all processes within one or more containers
docker container port        List port mappings or a specific mapping for the container
docker container prune       Remove all stopped containers
docker container rename      Rename a container
docker container restart     Restart one or more containers
docker container rm          Remove one or more containers
docker container run         Run a command in a new container
docker container start       Start one or more stopped containers
docker container stats       Display a live stream of container(s) resource usage statistics
docker container stop        Stop one or more running containers
docker container top         Display the running processes of a container
docker container unpause     Unpause all processes within one or more containers
docker container update      Update configuration of one or more containers
docker container wait        Block until one or more containers stop, then print their exit codes

docker container update


Estimated reading time: 3 minutes

Description
Update configuration of one or more containers

Usage
docker container update [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand           Default   Description

--blkio-weight                      Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
--cpu-period                        Limit CPU CFS (Completely Fair Scheduler) period
--cpu-quota                         Limit CPU CFS (Completely Fair Scheduler) quota
--cpu-rt-period                     API 1.25+ Limit the CPU real-time period in microseconds
--cpu-rt-runtime                    API 1.25+ Limit the CPU real-time runtime in microseconds
--cpu-shares , -c                   CPU shares (relative weight)
--cpus                              API 1.29+ Number of CPUs
--cpuset-cpus                       CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems                       MEMs in which to allow execution (0-3, 0,1)
--kernel-memory                     Kernel memory limit
--memory , -m                       Memory limit
--memory-reservation                Memory soft limit
--memory-swap                       Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap
--pids-limit                        API 1.40+ Tune container pids limit (set -1 for unlimited)
--restart                           Restart policy to apply when a container exits
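As a sketch of typical usage (the container name my-app is hypothetical), several limits can be raised in one call on a running container:

```shell
# Raise the memory limit and CPU allowance of a running container.
# If a swap limit was set, --memory-swap should be updated alongside --memory.
$ docker container update --memory 512m --memory-swap 1g --cpus 1.5 my-app
```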

Parent command
Command Description

docker container Manage containers

Related commands
Command                      Description

docker container attach      Attach local standard input, output, and error streams to a running container
docker container commit      Create a new image from a container’s changes
docker container cp          Copy files/folders between a container and the local filesystem
docker container create      Create a new container
docker container diff        Inspect changes to files or directories on a container’s filesystem
docker container exec        Run a command in a running container
docker container export      Export a container’s filesystem as a tar archive
docker container inspect     Display detailed information on one or more containers
docker container kill        Kill one or more running containers
docker container logs        Fetch the logs of a container
docker container ls          List containers
docker container pause       Pause all processes within one or more containers
docker container port        List port mappings or a specific mapping for the container
docker container prune       Remove all stopped containers
docker container rename      Rename a container
docker container restart     Restart one or more containers
docker container rm          Remove one or more containers
docker container run         Run a command in a new container
docker container start       Start one or more stopped containers
docker container stats       Display a live stream of container(s) resource usage statistics
docker container stop        Stop one or more running containers
docker container top         Display the running processes of a container
docker container unpause     Unpause all processes within one or more containers
docker container update      Update configuration of one or more containers
docker container wait        Block until one or more containers stop, then print their exit codes

docker container wait
Estimated reading time: 2 minutes

Description
Block until one or more containers stop, then print their exit codes

Usage
docker container wait CONTAINER [CONTAINER...]
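For example (the container name my-app is hypothetical), the command blocks until the container stops and then prints its exit code:

```shell
# Blocks until my-app exits, then prints its exit code (0 on a clean exit)
$ docker container wait my-app
```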

Parent command
Command Description

docker container Manage containers

Related commands
Command                      Description

docker container attach      Attach local standard input, output, and error streams to a running container
docker container commit      Create a new image from a container’s changes
docker container cp          Copy files/folders between a container and the local filesystem
docker container create      Create a new container
docker container diff        Inspect changes to files or directories on a container’s filesystem
docker container exec        Run a command in a running container
docker container export      Export a container’s filesystem as a tar archive
docker container inspect     Display detailed information on one or more containers
docker container kill        Kill one or more running containers
docker container logs        Fetch the logs of a container
docker container ls          List containers
docker container pause       Pause all processes within one or more containers
docker container port        List port mappings or a specific mapping for the container
docker container prune       Remove all stopped containers
docker container rename      Rename a container
docker container restart     Restart one or more containers
docker container rm          Remove one or more containers
docker container run         Run a command in a new container
docker container start       Start one or more stopped containers
docker container stats       Display a live stream of container(s) resource usage statistics
docker container stop        Stop one or more running containers
docker container top         Display the running processes of a container
docker container unpause     Unpause all processes within one or more containers
docker container update      Update configuration of one or more containers
docker container wait        Block until one or more containers stop, then print their exit codes

docker context
Estimated reading time: 1 minute

Description
Manage contexts

Usage
docker context COMMAND

Child commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file

docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts

docker context update Update a context

docker context use Set the current docker context

Parent command
Command Description

docker The base command for the Docker CLI.

docker context create


Estimated reading time: 3 minutes
Description
Create a context

Usage
docker context create [OPTIONS] CONTEXT

Options
Name, shorthand                Default   Description

--default-stack-orchestrator             Default orchestrator for stack operations to use with this context (swarm|kubernetes|all)
--description                            Description of the context
--docker                                 Set the docker endpoint
--from                                   Create context from a named context
--kubernetes                             Set the kubernetes endpoint

Parent command
Command Description

docker context Manage contexts

Related commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file



docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts

docker context update Update a context

docker context use Set the current docker context

Extended description
Creates a new context. This allows you to quickly switch the CLI configuration to connect to different clusters or single nodes.
To create a context from scratch provide the docker and, if required, kubernetes options. The
example below creates the context my-context with a docker endpoint of /var/run/docker.sock and
a kubernetes configuration sourced from the file /home/me/my-kube-config:
$ docker context create my-context \
--docker host=/var/run/docker.sock \
--kubernetes config-file=/home/me/my-kube-config

Use the --from=<context-name> option to create a new context from an existing context. The example below creates a new context named my-context from the existing context existing-context:

$ docker context create my-context --from existing-context

If the --from option is not set, the context is created from the current context:
$ docker context create my-context

This can be used to create a context out of an existing DOCKER_HOST based script:
$ source my-setup-script.sh
$ docker context create my-context

To source only the docker endpoint configuration from an existing context, use the --docker from=<context-name> option. The example below creates a new context named my-context using the docker endpoint configuration from the existing context existing-context and a kubernetes configuration sourced from the file /home/me/my-kube-config:
$ docker context create my-context \
--docker from=existing-context \
--kubernetes config-file=/home/me/my-kube-config

To source only the kubernetes configuration from an existing context, use the --kubernetes from=<context-name> option. The example below creates a new context named my-context using the kubernetes configuration from the existing context existing-context and a docker endpoint of /var/run/docker.sock:
$ docker context create my-context \
--docker host=/var/run/docker.sock \
--kubernetes from=existing-context

Docker and Kubernetes endpoint configurations, as well as the default stack orchestrator and description, can be modified with docker context update.

docker context export


Estimated reading time: 1 minute

Description
Export a context to a tar or kubeconfig file

Usage
docker context export [OPTIONS] CONTEXT [FILE|-]

Options
Name, shorthand Default Description

--kubeconfig Export as a kubeconfig file


Parent command
Command Description

docker context Manage contexts

Related commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file

docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts

docker context update Update a context

docker context use Set the current docker context

Extended description
Exports a context to a file that can then be used with docker context import (or with kubectl if --kubeconfig is set). The default output filename is <CONTEXT>.dockercontext, or <CONTEXT>.kubeconfig if --kubeconfig is set. To export to STDOUT, run docker context export my-context -.
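For instance, assuming a context named my-context already exists:

```shell
# Writes my-context.dockercontext in the current directory
$ docker context export my-context

# Writes my-context.kubeconfig instead, for use with kubectl
$ docker context export my-context --kubeconfig
```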

docker context import


Estimated reading time: 1 minute

Description
Import a context from a tar or zip file

Usage
docker context import CONTEXT FILE|-

Parent command
Command Description

docker context Manage contexts

Related commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file

docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts

docker context update Update a context

docker context use Set the current docker context

Extended description
Imports a context previously exported with docker context export. To import from stdin, use a
hyphen (-) as filename.
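Combined with the STDOUT mode of docker context export, this allows duplicating a context in a single pipeline (both context names here are illustrative):

```shell
# Copy an existing context under a new name without a temporary file
$ docker context export my-context - | docker context import my-copy -
```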

docker context inspect


Estimated reading time: 2 minutes

Description
Display detailed information on one or more contexts

Usage
docker context inspect [OPTIONS] [CONTEXT] [CONTEXT...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

Parent command
Command Description

docker context Manage contexts

Related commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file

docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts



docker context update Update a context

docker context use Set the current docker context

Extended description
Inspects one or more contexts.

Examples
Inspect a context by name
$ docker context inspect "local+aks"

[
    {
        "Name": "local+aks",
        "Metadata": {
            "Description": "Local Docker Engine + Azure AKS endpoint",
            "StackOrchestrator": "kubernetes"
        },
        "Endpoints": {
            "docker": {
                "Host": "npipe:////./pipe/docker_engine",
                "SkipTLSVerify": false
            },
            "kubernetes": {
                "Host": "https://simon-aks-***.hcp.uksouth.azmk8s.io:443",
                "SkipTLSVerify": false,
                "DefaultNamespace": "default"
            }
        },
        "TLSMaterial": {
            "kubernetes": [
                "ca.pem",
                "cert.pem",
                "key.pem"
            ]
        },
        "Storage": {
            "MetadataPath": "C:\\Users\\simon\\.docker\\contexts\\meta\\cb6d08c0a1bfa5fe6f012e61a442788c00bed93f509141daff05f620fc54ddee",
            "TLSPath": "C:\\Users\\simon\\.docker\\contexts\\tls\\cb6d08c0a1bfa5fe6f012e61a442788c00bed93f509141daff05f620fc54ddee"
        }
    }
]

docker context ls
Estimated reading time: 1 minute

Description
List contexts

Usage
docker context ls [OPTIONS]

Options
Name, shorthand Default Description

--format Pretty-print contexts using a Go template

--quiet , -q Only show context names


Parent command
Command Description

docker context Manage contexts

Related commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file

docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts

docker context update Update a context

docker context use Set the current docker context

docker context rm
Estimated reading time: 1 minute

Description
Remove one or more contexts

Usage
docker context rm CONTEXT [CONTEXT...]
Options
Name, shorthand Default Description

--force , -f Force the removal of a context in use

Parent command
Command Description

docker context Manage contexts

Related commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file

docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts

docker context update Update a context

docker context use Set the current docker context

docker context update


Estimated reading time: 1 minute

Description
Update a context
Usage
docker context update [OPTIONS] CONTEXT

Options
Name, shorthand                Default   Description

--default-stack-orchestrator             Default orchestrator for stack operations to use with this context (swarm|kubernetes|all)
--description                            Description of the context
--docker                                 Set the docker endpoint
--kubernetes                             Set the kubernetes endpoint

Parent command
Command Description

docker context Manage contexts

Related commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file

docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts



docker context update Update a context

docker context use Set the current docker context

Extended description
Updates an existing context. See docker context create.

docker context use


Estimated reading time: 1 minute

Description
Set the current docker context

Usage
docker context use CONTEXT

Parent command
Command Description

docker context Manage contexts

Related commands
Command Description

docker context create Create a context

docker context export Export a context to a tar or kubeconfig file

docker context import Import a context from a tar or zip file



docker context inspect Display detailed information on one or more contexts

docker context ls List contexts

docker context rm Remove one or more contexts

docker context update Update a context

docker context use Set the current docker context

Extended description
Set the default context to use, when DOCKER_HOST, DOCKER_CONTEXT environment variables and --host, --context global options are not set. To disable usage of contexts, you can use the special default context.

docker cp
Estimated reading time: 5 minutes

Description
Copy files/folders between a container and the local filesystem

Usage
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH

Options
Name, shorthand Default Description

--archive , -a Archive mode (copy all uid/gid information)

--follow-link , -L Always follow symbol link in SRC_PATH


Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker cp utility copies the contents of SRC_PATH to the DEST_PATH. You can copy from the
container’s file system to the local machine or the reverse, from the local filesystem to the container.
If - is specified for either the SRC_PATH or DEST_PATH, you can also stream a tar archive from STDIN or
to STDOUT. The CONTAINER can be a running or stopped container. The SRC_PATH or DEST_PATH can be
a file or directory.
The docker cp command assumes container paths are relative to the container’s / (root) directory. This means supplying the initial forward slash is optional; the command sees compassionate_darwin:/tmp/foo/myfile.txt and compassionate_darwin:tmp/foo/myfile.txt as identical. Local machine paths can be an absolute or relative value. The command interprets a local machine’s relative paths as relative to the current working directory where docker cp is run.
The cp command behaves like the Unix cp -a command in that directories are copied recursively with permissions preserved if possible. Ownership is set to the user and primary group at the destination. For example, files copied to a container are created with UID:GID of the root user. Files copied to the local machine are created with the UID:GID of the user which invoked the docker cp command. However, if you specify the -a option, docker cp sets the ownership to the user and primary group at the source. If you specify the -L option, docker cp follows any symbolic link in the SRC_PATH. docker cp does not create parent directories for DEST_PATH if they do not exist.
Assuming a path separator of /, a first argument of SRC_PATH and second argument of DEST_PATH, the
behavior is as follows:

 SRC_PATH specifies a file


o DEST_PATH does not exist
 the file is saved to a file created at DEST_PATH
o DEST_PATH does not exist and ends with /
 Error condition: the destination directory must exist.
o DEST_PATH exists and is a file
 the destination is overwritten with the source file’s contents
o DEST_PATH exists and is a directory
 the file is copied into this directory using the basename from SRC_PATH
 SRC_PATH specifies a directory
o DEST_PATH does not exist
 DEST_PATH is created as a directory and the contents of the source directory
are copied into this directory
o DEST_PATH exists and is a file
 Error condition: cannot copy a directory to a file
o DEST_PATH exists and is a directory
 SRC_PATH does not end with /. (that is: slash followed by dot)
 the source directory is copied into this directory
 SRC_PATH does end with /. (that is: slash followed by dot)
 the content of the source directory is copied into this directory

The command requires SRC_PATH and DEST_PATH to exist according to the above rules. If SRC_PATH is
local and is a symbolic link, the symbolic link, not the target, is copied by default. To copy the link
target and not the link, specify the -L option.
A colon (:) is used as a delimiter between CONTAINER and its path. You can also use : when specifying paths to a SRC_PATH or DEST_PATH on a local machine, for example file:name.txt. If you use a : in a local machine path, you must be explicit with a relative or absolute path, for example:
`/path/to/file:name.txt` or `./file:name.txt`

It is not possible to copy certain system files such as resources under /proc, /sys, /dev, tmpfs, and
mounts created by the user in the container. However, you can still copy such files by manually
running tar in docker exec. Both of the following examples do the same thing in different ways
(consider SRC_PATH and DEST_PATH are directories):
$ docker exec CONTAINER tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | tar Cxf DEST_PATH -
$ tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | docker exec -i CONTAINER tar Cxf DEST_PATH -

Using - as the SRC_PATH streams the contents of STDIN as a tar archive. The command extracts the
content of the tar to the DEST_PATH in container’s filesystem. In this case, DEST_PATH must specify a
directory. Using - as the DEST_PATH streams the contents of the resource as a tar archive to STDOUT.
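Putting the rules above together, the two common directions look like this (the container name web and all paths are hypothetical):

```shell
# Copy a local file into the container; /etc/app must already exist,
# since docker cp does not create parent directories
$ docker cp ./config.yml web:/etc/app/config.yml

# Copy a log file out of the container into the current directory
$ docker cp web:/var/log/app.log ./app.log
```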

docker create
Estimated reading time: 12 minutes

Description
Create a new container
Usage
docker create [OPTIONS] IMAGE [COMMAND] [ARG...]

Options
Name, shorthand           Default   Description

--add-host                          Add a custom host-to-IP mapping (host:ip)
--attach , -a                       Attach to STDIN, STDOUT or STDERR
--blkio-weight                      Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
--blkio-weight-device               Block IO weight (relative device weight)
--cap-add                           Add Linux capabilities
--cap-drop                          Drop Linux capabilities
--cgroup-parent                     Optional parent cgroup for the container
--cidfile                           Write the container ID to the file
--cpu-count                         CPU count (Windows only)
--cpu-percent                       CPU percent (Windows only)
--cpu-period                        Limit CPU CFS (Completely Fair Scheduler) period
--cpu-quota                         Limit CPU CFS (Completely Fair Scheduler) quota
--cpu-rt-period                     API 1.25+ Limit CPU real-time period in microseconds
--cpu-rt-runtime                    API 1.25+ Limit CPU real-time runtime in microseconds
--cpu-shares , -c                   CPU shares (relative weight)
--cpus                              API 1.25+ Number of CPUs
--cpuset-cpus                       CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems                       MEMs in which to allow execution (0-3, 0,1)
--device                            Add a host device to the container
--device-cgroup-rule                Add a rule to the cgroup allowed devices list
--device-read-bps                   Limit read rate (bytes per second) from a device
--device-read-iops                  Limit read rate (IO per second) from a device
--device-write-bps                  Limit write rate (bytes per second) to a device
--device-write-iops                 Limit write rate (IO per second) to a device
--disable-content-trust   true      Skip image verification
--dns                               Set custom DNS servers
--dns-opt                           Set DNS options
--dns-option                        Set DNS options
--dns-search                        Set custom DNS search domains
--domainname                        Container NIS domain name
--entrypoint                        Overwrite the default ENTRYPOINT of the image
--env , -e                          Set environment variables
--env-file                          Read in a file of environment variables
--expose                            Expose a port or a range of ports
--gpus                              API 1.40+ GPU devices to add to the container (‘all’ to pass all GPUs)
--group-add                         Add additional groups to join
--health-cmd                        Command to run to check health
--health-interval                   Time between running the check (ms|s|m|h) (default 0s)
--health-retries                    Consecutive failures needed to report unhealthy
--health-start-period               API 1.29+ Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s)
--health-timeout                    Maximum time to allow one check to run (ms|s|m|h) (default 0s)
--help                              Print usage
--hostname , -h                     Container host name
--init                              API 1.25+ Run an init inside the container that forwards signals and reaps processes
--interactive , -i                  Keep STDIN open even if not attached
--io-maxbandwidth                   Maximum IO bandwidth limit for the system drive (Windows only)
--io-maxiops                        Maximum IOps limit for the system drive (Windows only)
--ip                                IPv4 address (e.g., 172.30.100.104)
--ip6                               IPv6 address (e.g., 2001:db8::33)
--ipc                               IPC mode to use
--isolation                         Container isolation technology
--kernel-memory                     Kernel memory limit
--label , -l                        Set meta data on a container
--label-file                        Read in a line delimited file of labels
--link                              Add link to another container
--link-local-ip                     Container IPv4/IPv6 link-local addresses
--log-driver                        Logging driver for the container
--log-opt                           Log driver options
--mac-address                       Container MAC address (e.g., 92:d0:c6:0a:29:33)
--memory , -m                       Memory limit
--memory-reservation                Memory soft limit
--memory-swap                       Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap
--memory-swappiness       -1        Tune container memory swappiness (0 to 100)
--mount                             Attach a filesystem mount to the container
--name                              Assign a name to the container
--net                               Connect a container to a network
--net-alias                         Add network-scoped alias for the container
--network                           Connect a container to a network
--network-alias                     Add network-scoped alias for the container
--no-healthcheck                    Disable any container-specified HEALTHCHECK
--oom-kill-disable                  Disable OOM Killer
--oom-score-adj                     Tune host’s OOM preferences (-1000 to 1000)
--pid                               PID namespace to use
--pids-limit                        Tune container pids limit (set -1 for unlimited)
--platform                          experimental (daemon) API 1.32+ Set platform if server is multi-platform capable
--privileged                        Give extended privileges to this container
--publish , -p                      Publish a container’s port(s) to the host
--publish-all , -P                  Publish all exposed ports to random ports
--read-only                         Mount the container’s root filesystem as read only
--restart                 no        Restart policy to apply when a container exits
--rm                                Automatically remove the container when it exits
--runtime                           Runtime to use for this container
--security-opt                      Security Options
--shm-size                          Size of /dev/shm
--stop-signal             SIGTERM   Signal to stop a container
--stop-timeout                      API 1.25+ Timeout (in seconds) to stop a container
--storage-opt                       Storage driver options for the container
--sysctl                            Sysctl options
--tmpfs                             Mount a tmpfs directory
--tty , -t                          Allocate a pseudo-TTY
--ulimit                            Ulimit options
--user , -u                         Username or UID (format: <name|uid>[:<group|gid>])
--userns                            User namespace to use
--uts                               UTS namespace to use
--volume , -v                       Bind mount a volume
--volume-driver                     Optional volume driver for the container
--volumes-from                      Mount volumes from the specified container(s)
--workdir , -w                      Working directory inside the container

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker create command creates a writeable container layer over the specified image and
prepares it for running the specified command. The container ID is then printed to STDOUT. This is
similar to docker run -d except the container is never started. You can then use the docker start
<container_id> command to start the container at any point.
This is useful when you want to set up a container configuration ahead of time so that it is ready to
start when you need it. The initial status of the new container is created.

Please see the run command section and the Docker run reference for more details.

Examples
Create and start a container
$ docker create -t -i fedora bash

6d8af538ec541dd581ebc2a24153a28329acb5268abe5ef868c1f1a261221752
$ docker start -a -i 6d8af538ec5

bash-4.2#

Initialize volumes
As of v1.4.0 container volumes are initialized during the docker create phase (i.e., docker run too).
For example, this allows you to create the data volume container, and then use it from another
container:
$ docker create -v /data --name data ubuntu

240633dfbb98128fa77473d3d9018f6123b99c454b3251427ae190a7d951ad57

$ docker run --rm --volumes-from data ubuntu ls -la /data

total 8
drwxr-xr-x 2 root root 4096 Dec 5 04:10 .
drwxr-xr-x 48 root root 4096 Dec 5 04:11 ..

Similarly, create a host directory bind mounted volume container, which can then be used from the
subsequent container:
$ docker create -v /home/docker:/docker --name docker ubuntu

9aa88c08f319cd1e4515c3c46b0de7cc9aa75e878357b1e96f91e2c773029f03

$ docker run --rm --volumes-from docker ubuntu ls -la /docker

total 20
drwxr-sr-x 5 1000 staff 180 Dec 5 04:00 .
drwxr-xr-x 48 root root 4096 Dec 5 04:13 ..
-rw-rw-r-- 1 1000 staff 3833 Dec 5 04:01 .ash_history
-rw-r--r-- 1 1000 staff 446 Nov 28 11:51 .ashrc
-rw-r--r-- 1 1000 staff 25 Dec 5 04:00 .gitconfig
drwxr-sr-x 3 1000 staff 60 Dec 1 03:28 .local
-rw-r--r-- 1 1000 staff 920 Nov 28 11:51 .profile
drwx--S--- 2 1000 staff 460 Dec 5 00:51 .ssh
drwxr-xr-x 32 1000 staff 1140 Dec 5 04:01 docker

Set storage driver options per container.

$ docker create -it --storage-opt size=120G fedora /bin/bash

This (size) allows you to set the container rootfs size to 120G at creation time. This option is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers. For the devicemapper, btrfs, windowsfilter and zfs graph drivers, the user cannot pass a size less than the Default BaseFS Size. For the overlay2 storage driver, the size option is only available if the backing fs is xfs and mounted with the pquota mount option. Under these conditions, the user can pass any size less than the backing fs size.

Specify isolation technology for container (--isolation)


This option is useful in situations where you are running Docker containers on Windows. The --
isolation=<value> option sets a container’s isolation technology. On Linux, the only supported is
the default option which uses Linux namespaces. On Microsoft Windows, you can specify these
values:
Value     Description

default   Use the value specified by the Docker daemon’s --exec-opt. If the daemon does not specify an isolation technology, Microsoft Windows uses process as its default value if the daemon is running on Windows server, or hyperv if running on Windows client.

process   Namespace isolation only.

hyperv    Hyper-V hypervisor partition-based isolation.

Specifying the --isolation flag without a value is the same as setting --isolation="default".

Dealing with dynamically created devices (--device-cgroup-rule)


Devices available to a container are assigned at creation time. The assigned devices will both be added to the cgroup.allow file and created into the container once it is run. This poses a problem when a new device needs to be added to a running container.

One solution is to add a more permissive rule to a container, allowing it access to a wider range of devices. For example, supposing our container needs access to a character device with major 42 and any number of minor numbers (added as new devices appear), the following rule would be added:
docker create --device-cgroup-rule='c 42:* rmw' --name my-container my-image

Then, a user could ask udev to execute a script that runs docker exec my-container mknod
newDevX c 42 <minor> to create the required device node when it is added.
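The helper that udev invokes could look something like the sketch below. The container name, device path, and the final echo (a dry-run guard so the sketch works without a Docker daemon) are all illustrative; drop the echo to execute the command for real.

```shell
#!/bin/sh
# Illustrative udev helper: create a newly appeared major-42 character
# device inside the container created with --device-cgroup-rule above.
container=my-container    # assumed name from the docker create example
minor=${1:-0}             # minor number passed in by the udev rule (assumption)
cmd="docker exec $container mknod /dev/newDev$minor c 42 $minor"
echo "$cmd"               # dry run; remove the echo to actually execute
```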

NOTE: initially present devices still need to be explicitly added to the create/run command.

docker deploy
Estimated reading time: 4 minutes

Description
Deploy a new stack or update an existing stack

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.

This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker deploy [OPTIONS] STACK

Options
Name, shorthand Default Description

--bundle-file experimental (daemon) Swarm
Path to a Distributed Application Bundle file

--compose-file , -c API 1.25+
Path to a Compose file, or “-“ to read from stdin

--namespace Kubernetes
Kubernetes namespace to use

--prune API 1.27+ Swarm
Prune services that are no longer referenced

--resolve-image always API 1.30+ Swarm
Query the registry to resolve image digest and supported platforms (“always”|”changed”|”never”)

--with-registry-auth Swarm
Send registry authentication details to Swarm agents

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Create and update a stack from a Compose or a DAB file on the swarm. This command must be run
against a manager node.
Examples
Compose file
The deploy command supports compose file version 3.0 and above.
$ docker stack deploy --compose-file docker-compose.yml vossibility

Ignoring unsupported options: links

Creating network vossibility_vossibility


Creating network vossibility_default
Creating service vossibility_nsqd
Creating service vossibility_logstash
Creating service vossibility_elasticsearch
Creating service vossibility_kibana
Creating service vossibility_ghollector
Creating service vossibility_lookupd

You can verify that the services were correctly created:

$ docker service ls

ID NAME MODE REPLICAS IMAGE


29bv0vnlm903 vossibility_lookupd replicated 1/1
nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
4awt47624qwh vossibility_nsqd replicated 1/1
nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
4tjx9biia6fs vossibility_elasticsearch replicated 1/1
elasticsearch@sha256:12ac7c6af55d001f71800b83ba91a04f716e58d82e748fa6e5a7359eed2301aa
7563uuzr9eys vossibility_kibana replicated 1/1
kibana@sha256:6995a2d25709a62694a937b8a529ff36da92ebee74bafd7bf00e6caf6db2eb03
9gc5m4met4he vossibility_logstash replicated 1/1
logstash@sha256:2dc8bddd1bb4a5a34e8ebaf73749f6413c101b2edef6617f2f7713926d2141fe
axqh55ipl40h vossibility_vossibility-collector replicated 1/1
icecrime/vossibility-
collector@sha256:f03f2977203ba6253988c18d04061c5ec7aab46bca9dfd89a9a1fa4500989fba
docker diff
Estimated reading time: 1 minute

Description
Inspect changes to files or directories on a container’s filesystem

Usage
docker diff CONTAINER

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
List the changed files and directories in a container’s filesystem since the container was created.
Three different types of change are tracked:

Symbol Description

A A file or directory was added

D A file or directory was deleted

C A file or directory was changed

You can use the full or shortened container ID, or the container name set using the docker run --
name option.

Examples
Inspect the changes to an nginx container:
$ docker diff 1fdfd1f54c1b
C /dev
C /dev/console
C /dev/core
C /dev/stdout
C /dev/fd
C /dev/ptmx
C /dev/stderr
C /dev/stdin
C /run
A /run/nginx.pid
C /var/lib/nginx/tmp
A /var/lib/nginx/tmp/client_body
A /var/lib/nginx/tmp/fastcgi
A /var/lib/nginx/tmp/proxy
A /var/lib/nginx/tmp/scgi
A /var/lib/nginx/tmp/uwsgi
C /var/log/nginx
A /var/log/nginx/access.log
A /var/log/nginx/error.log

docker events
Estimated reading time: 12 minutes

Description
Get real time events from the server

Usage
docker events [OPTIONS]
Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Format the output using the given Go template

--since Show all events created since timestamp

--until Stream events until this timestamp

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Use docker events to get real-time events from the server. These events differ per Docker object
type.

Object types
CONTAINERS

Docker containers report the following events:

 attach
 commit
 copy
 create
 destroy
 detach
 die
 exec_create
 exec_detach
 exec_die
 exec_start
 export
 health_status
 kill
 oom
 pause
 rename
 resize
 restart
 start
 stop
 top
 unpause
 update

IMAGES

Docker images report the following events:

 delete
 import
 load
 pull
 push
 save
 tag
 untag

PLUGINS

Docker plugins report the following events:

 enable
 disable
 install
 remove

VOLUMES

Docker volumes report the following events:

 create
 destroy
 mount
 unmount

NETWORKS

Docker networks report the following events:

 create
 connect
 destroy
 disconnect
 remove

DAEMONS

Docker daemons report the following events:

 reload

SERVICES

Docker services report the following events:

 create
 remove
 update

NODES

Docker nodes report the following events:

 create
 remove
 update

SECRETS

Docker secrets report the following events:

 create
 remove
 update

CONFIGS

Docker configs report the following events:

 create
 remove
 update

Limiting, filtering, and formatting the output


LIMIT EVENTS BY TIME
The --since and --until parameters can be Unix timestamps, date formatted timestamps, or Go
duration strings (e.g. 10m, 1h30m) computed relative to the client machine’s time. If you do not provide
the --since option, the command returns only new and/or live events. Supported formats for date
formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05, 2006-01-
02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the client will be
used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp. When
providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of seconds
that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds (aka Unix
epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more than nine
digits long.
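For example, the sketch below shows two equivalent ways to ask for the last ten minutes of events. It assumes GNU date for the -d flag (BSD/macOS date spells this -v-10M), and only prints the invocations so it runs without a daemon:

```shell
# A Unix timestamp computed on the client and a Go duration string are
# interchangeable here (GNU date assumed for -d).
unix_ts=$(date -d '10 minutes ago' +%s)
echo "docker events --since $unix_ts"
echo "docker events --since 10m"
```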

FILTERING

The filtering flag (-f or --filter) format is “key=value”. If you would like to use multiple filters,
pass multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").
Using the same filter multiple times is handled as an OR; for example, --filter
container=588a23dac085 --filter container=a8f7720b8c22 displays events for container
588a23dac085 OR container a8f7720b8c22.
Using multiple filters is handled as an AND; for example, --filter container=588a23dac085 --
filter event=start displays events for container 588a23dac085 AND the event type
start.

The currently supported filters are:

 config (config=<name or id>)


 container (container=<name or id>)
 daemon (daemon=<name or id>)
 event (event=<event action>)
 image (image=<repository or tag>)
 label (label=<key> or label=<key>=<value>)
 network (network=<name or id>)
 node (node=<id>)
 plugin (plugin=<name or id>)
 scope (scope=<local or swarm>)
 secret (secret=<name or id>)
 service (service=<name or id>)
 type (type=<container or image or volume or network or daemon or plugin or service
or node or secret or config>)
 volume (volume=<name>)
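The OR/AND behavior can be seen in how the flags compose. This sketch only assembles and prints the invocations (the container names are made up), so it runs without a daemon:

```shell
# Same key twice = OR; two different keys = AND.
or_cmd="docker events --filter container=web --filter container=db"
and_cmd="docker events --filter container=web --filter event=start"
echo "$or_cmd"    # events from container web OR container db
echo "$and_cmd"   # only start events, and only from container web
```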

FORMAT
If a format (--format) is specified, the given template will be executed instead of the default format.
Go’s text/template package describes all the details of the format.
If a format is set to {{json .}}, the events are streamed as valid JSON Lines. For information about
JSON Lines, refer to http://jsonlines.org/.
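Because every line is a standalone JSON object, each one can be parsed independently. The sketch below parses a captured sample line (not live docker events output) with python3:

```shell
# One JSON Lines record, as docker events --format '{{json .}}' would emit it.
sample='{"Type":"network","Action":"connect","Actor":{"ID":"1b50a5bf755f"}}'
parsed=$(printf '%s\n' "$sample" | python3 -c 'import json, sys; e = json.load(sys.stdin); print(e["Type"], e["Action"])')
echo "$parsed"
```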

Examples
Basic example
You’ll need two shells for this example.

Shell 1: Listening for events:

$ docker events

Shell 2: Start and Stop containers:

$ docker create --name test alpine:latest top


$ docker start test
$ docker stop test

Shell 1: (again, now showing events):

2017-01-05T00:35:58.859401177+08:00 container create 0fdb48addc82871eb34eb23a847cfd033dedd1a0a37bef2e6d9eb3870fc7ff37 (image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect
e2e1f5ceda09d4300f3a846f0acfaa9a8bb0d89e775eb744c5acecd60e0529e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest,
name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest,
name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143,
image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest,
name=test)

To exit the docker events command, use CTRL+C.


Filter events by time
You can filter the output by an absolute timestamp or relative time on the host machine, using the
following different time syntaxes:

$ docker events --since 1483283804


2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70
(image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest,
name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest,
name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143,
image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest,
name=test)

$ docker events --since '2017-01-05'


2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70
(image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest,
name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest,
name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143,
image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest,
name=test)

$ docker events --since '2013-09-03T15:49:29'


2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70
(image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest,
name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest,
name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143,
image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest,
name=test)

$ docker events --since '10m'


2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70
(image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest,
name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest,
name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143,
image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest,
name=test)

$ docker events --since '2017-01-05T00:35:30' --until '2017-01-05T00:36:05'


2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70
(image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2
(container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest,
name=test)

Filter events by criteria


The following commands show several different ways to filter the docker events output.
$ docker events --filter 'event=stop'

2017-01-05T00:40:22.880175420+08:00 container stop 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:41:17.888104182+08:00 container stop 2a8f...4e78 (image=alpine,
name=kickass_brattain)

$ docker events --filter 'image=alpine'

2017-01-05T00:41:55.784240236+08:00 container create d9cd...4d70 (image=alpine, name=happy_meitner)
2017-01-05T00:41:55.913156783+08:00 container start d9cd...4d70 (image=alpine,
name=happy_meitner)
2017-01-05T00:42:01.106875249+08:00 container kill d9cd...4d70 (image=alpine,
name=happy_meitner, signal=15)
2017-01-05T00:42:11.111934041+08:00 container kill d9cd...4d70 (image=alpine,
name=happy_meitner, signal=9)
2017-01-05T00:42:11.119578204+08:00 container die d9cd...4d70 (exitCode=137,
image=alpine, name=happy_meitner)
2017-01-05T00:42:11.173276611+08:00 container stop d9cd...4d70 (image=alpine,
name=happy_meitner)

$ docker events --filter 'container=test'

2017-01-05T00:43:00.139719934+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:43:09.259951086+08:00 container kill 0fdb...ff37 (image=alpine:latest,
name=test, signal=15)
2017-01-05T00:43:09.270102715+08:00 container die 0fdb...ff37 (exitCode=143,
image=alpine:latest, name=test)
2017-01-05T00:43:09.312556440+08:00 container stop 0fdb...ff37 (image=alpine:latest,
name=test)
$ docker events --filter 'container=test' --filter 'container=d9cdb1525ea8'

2017-01-05T00:44:11.517071981+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:44:17.685870901+08:00 container start d9cd...4d70 (image=alpine,
name=happy_meitner)
2017-01-05T00:44:29.757658470+08:00 container kill 0fdb...ff37 (image=alpine:latest,
name=test, signal=9)
2017-01-05T00:44:29.767718510+08:00 container die 0fdb...ff37 (exitCode=137,
image=alpine:latest, name=test)
2017-01-05T00:44:29.815798344+08:00 container destroy 0fdb...ff37
(image=alpine:latest, name=test)

$ docker events --filter 'container=test' --filter 'event=stop'

2017-01-05T00:46:13.664099505+08:00 container stop a9d1...e130 (image=alpine, name=test)

$ docker events --filter 'type=volume'

2015-12-23T21:05:28.136212689Z volume create test-event-volume-local (driver=local)


2015-12-23T21:05:28.383462717Z volume mount test-event-volume-local (read/write=true,
container=562f...5025, destination=/foo, driver=local, propagation=rprivate)
2015-12-23T21:05:28.650314265Z volume unmount test-event-volume-local
(container=562f...5025, driver=local)
2015-12-23T21:05:28.716218405Z volume destroy test-event-volume-local (driver=local)

$ docker events --filter 'type=network'

2015-12-23T21:38:24.705709133Z network create 8b11...2c5b (name=test-event-network-local, type=bridge)
2015-12-23T21:38:25.119625123Z network connect 8b11...2c5b (name=test-event-network-local, container=b4be...c54e, type=bridge)

$ docker events --filter 'container=container_1' --filter 'container=container_2'

2014-09-03T15:49:29.999999999Z07:00 container die 4386fb97867d (image=ubuntu-1:14.04)
2014-05-10T17:42:14.999999999Z07:00 container stop 4386fb97867d (image=ubuntu-1:14.04)
2014-05-10T17:42:14.999999999Z07:00 container die 7805c1d35632 (image=redis:2.8)
2014-09-03T15:49:29.999999999Z07:00 container stop 7805c1d35632 (image=redis:2.8)


$ docker events --filter 'type=plugin'

2016-07-25T17:30:14.825557616Z plugin pull ec7b87f2ce84330fe076e666f17dfc049d2d7ae0b8190763de94e1f2d105993f (name=tiborvass/sample-volume-plugin:latest)
2016-07-25T17:30:14.888127370Z plugin enable ec7b87f2ce84330fe076e666f17dfc049d2d7ae0b8190763de94e1f2d105993f (name=tiborvass/sample-volume-plugin:latest)

$ docker events -f type=service


2017-07-12T06:34:07.999446625Z service create wj64st89fzgchxnhiqpn8p4oj
(name=reverent_albattani)
2017-07-12T06:34:21.405496207Z service remove wj64st89fzgchxnhiqpn8p4oj
(name=reverent_albattani)

$ docker events -f type=node

2017-07-12T06:21:51.951586759Z node update 3xyz5ttp1a253q74z1thwywk9 (name=ip-172-31-23-42, state.new=ready, state.old=unknown)

$ docker events -f type=secret

2017-07-12T06:32:13.915704367Z secret create s8o6tmlnndrgzbmdilyy5ymju (name=new_secret)
2017-07-12T06:32:37.052647783Z secret remove s8o6tmlnndrgzbmdilyy5ymju
(name=new_secret)

$ docker events -f type=config


2017-07-12T06:44:13.349037127Z config create u96zlvzdfsyb9sg4mhyxfh3rl (name=abc)
2017-07-12T06:44:36.327694184Z config remove u96zlvzdfsyb9sg4mhyxfh3rl (name=abc)

$ docker events --filter 'scope=swarm'

2017-07-10T07:46:50.250024503Z service create m8qcxu8081woyof7w3jaax6gk (name=affectionate_wilson)
2017-07-10T07:47:31.093797134Z secret create 6g5pufzsv438p9tbvl9j94od4
(name=new_secret)

Format the output


$ docker events --filter 'type=container' --format 'Type={{.Type}}
Status={{.Status}} ID={{.ID}}'

Type=container Status=create
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=attach
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=start
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=resize
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=die
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=destroy
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26

FORMAT AS JSON

$ docker events --format '{{json .}}'

{"status":"create","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
{"status":"attach","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
{"Type":"network","Action":"connect","Actor":{"ID":"1b50a5bf755f6021dfa78e..
{"status":"start","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f42..
{"status":"resize","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..

docker exec
Estimated reading time: 3 minutes

Description
Run a command in a running container

Usage
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Options
Name, shorthand Default Description

--detach , -d Detached mode: run command in the background

--detach-keys Override the key sequence for detaching a container

--env , -e API 1.25+ Set environment variables

--interactive , -i Keep STDIN open even if not attached

--privileged Give extended privileges to the command

--tty , -t Allocate a pseudo-TTY

--user , -u Username or UID (format: <name|uid>[:<group|gid>])

--workdir , -w API 1.35+ Working directory inside the container

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker exec command runs a new command in a running container.
The command started using docker exec only runs while the container’s primary process (PID 1) is
running, and it is not restarted if the container is restarted.

COMMAND will run in the default directory of the container. If the underlying image has a custom
directory specified with the WORKDIR directive in its Dockerfile, this will be used instead.

COMMAND should be an executable; a chained or quoted command will not work.


Example: docker exec -ti my_container "echo a && echo b" will not work, but docker exec -ti
my_container sh -c "echo a && echo b" will.
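The same rule applies to any exec-style runner, which makes it easy to demonstrate without a container; here env stands in for docker exec CONTAINER:

```shell
# Passing a chained command as one argument fails: there is no executable
# literally named "echo a && echo b".
env "echo a && echo b" 2>/dev/null || echo "rejected"
# Wrapping it in a shell works, exactly as with docker exec ... sh -c.
chained=$(env sh -c "echo a && echo b")
echo "$chained"
```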

Examples
Run docker exec on a running container
First, start a container.

$ docker run --name ubuntu_bash --rm -i -t ubuntu bash

This will create a container named ubuntu_bash and start a Bash session.

Next, execute a command on the container.

$ docker exec -d ubuntu_bash touch /tmp/execWorks

This will create a new file /tmp/execWorks inside the running container ubuntu_bash, in the
background.
Next, execute an interactive bash shell on the container.
$ docker exec -it ubuntu_bash bash

This will create a new Bash session in the container ubuntu_bash.

Next, set an environment variable in the current bash session.

$ docker exec -it -e VAR=1 ubuntu_bash bash

This will create a new Bash session in the container ubuntu_bash with environment variable $VAR set
to “1”. Note that this environment variable will only be valid on the current Bash session.
By default, the docker exec command runs in the same working directory that was set when the container
was created.
$ docker exec -it ubuntu_bash pwd
/

You can select a different working directory for the command to execute in:

$ docker exec -it -w /root ubuntu_bash pwd


/root

Try to run docker exec on a paused container


If the container is paused, then the docker exec command will fail with an error:
$ docker pause test

test
$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ae3b36715d2 ubuntu:latest "bash" 17 seconds ago Up 16 seconds (Paused) test

$ docker exec test ls

FATA[0000] Error response from daemon: Container test is paused, unpause the
container before exec

$ echo $?
1

docker export
Estimated reading time: 1 minute

Description
Export a container’s filesystem as a tar archive

Usage
docker export [OPTIONS] CONTAINER

Options
Name, shorthand Default Description

--output , -o Write to a file, instead of STDOUT

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker export command does not export the contents of volumes associated with the
container. If a volume is mounted on top of an existing directory in the container, docker export will
export the contents of the underlying directory, not the contents of the volume.

Refer to Backup, restore, or migrate data volumes in the user guide for examples on exporting data
in a volume.

Examples
Each of these commands has the same result.

$ docker export red_panda > latest.tar


$ docker export --output="latest.tar" red_panda

docker history
Estimated reading time: 2 minutes

Description
Show the history of an image

Usage
docker history [OPTIONS] IMAGE

Options
Name, shorthand Default Description

--format Pretty-print images using a Go template

--human , -H true Print sizes and dates in human readable format

--no-trunc Don’t truncate output

--quiet , -q Only show numeric IDs

Parent command
Command Description

docker The base command for the Docker CLI.

Examples
To see how the docker:latest image was built:
$ docker history docker

IMAGE CREATED CREATED BY


SIZE COMMENT
3e23a5875458 8 days ago /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8
0 B
8578938dd170 8 days ago /bin/sh -c dpkg-reconfigure locales && loc
1.245 MB
be51b77efb42 8 days ago /bin/sh -c apt-get update && apt-get install
338.3 MB
4b137612be55 6 weeks ago /bin/sh -c #(nop) ADD jessie.tar.xz in /
121 MB
750d58736b4b 6 weeks ago /bin/sh -c #(nop) MAINTAINER Tianon Gravi <ad
0 B
511136ea3c5a 9 months ago
0 B Imported from -

To see how the docker:apache image was added to a container’s base image:
$ docker history docker:scm
IMAGE CREATED CREATED BY
SIZE COMMENT
2ac9d1098bf1 3 months ago /bin/bash
241.4 MB Added Apache to Fedora base image
88b42ffd1f7c 5 months ago /bin/sh -c #(nop) ADD file:1fd8d7f9f6557cafc7
373.7 MB
c69cab00d6ef 5 months ago /bin/sh -c #(nop) MAINTAINER Lokesh Mandvekar
0 B
511136ea3c5a 19 months ago
0 B Imported from -

Format the output


The formatting option (--format) pretty-prints history output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Image ID

.CreatedSince Elapsed time since the image was created if --human=true, otherwise timestamp of when image was created

.CreatedAt Timestamp of when image was created

.CreatedBy Command that was used to create the image

.Size Image disk size

.Comment Comment for image

When using the --format option, the history command will either output the data exactly as the
template declares or, when using the table directive, will include column headers as well.
The following example uses a template without headers and outputs the ID and CreatedSince entries
separated by a colon for the busybox image:
$ docker history --format "{{.ID}}: {{.CreatedSince}}" busybox

f6e427c148a7: 4 weeks ago


<missing>: 4 weeks ago
docker engine
Estimated reading time: 1 minute

Description
Manage the docker engine

Usage
docker engine COMMAND

Child commands
Command Description

docker engine activate Activate Enterprise Edition

docker engine check Check for available engine updates

docker engine update Update a local engine

Parent command
Command Description

docker The base command for the Docker CLI.

docker engine activate


Estimated reading time: 1 minute

Description
Activate Enterprise Edition
Usage
docker engine activate [OPTIONS]

Options
Name, shorthand Default Description

--containerd Override default location of containerd endpoint

--display-only Only display license information and exit

--engine-image Specify engine image

--format Pretty-print licenses using a Go template

--license License File

--quiet Only display available licenses by ID

--registry-prefix docker.io/store/docker Override the default location where engine images are pulled

--version Specify engine version (default is to use currently running version)

Parent command
Command Description

docker engine Manage the docker engine

Related commands
Command Description

docker engine activate Activate Enterprise Edition


Command Description

docker engine check Check for available engine updates

docker engine update Update a local engine

Extended description
Activate Enterprise Edition.

With this command you may apply an existing Docker enterprise license, or interactively download
one from Docker. In the interactive exchange, you can sign up for a new trial, or download an
existing license. If you are currently running a Community Edition engine, the daemon will be
updated to the Enterprise Edition Docker engine with additional capabilities and long term support.

For more information about different Docker Enterprise license types visit
https://www.docker.com/licenses

For non-interactive scriptable deployments, download your license from https://hub.docker.com/ then
specify the file with the ‘--license’ flag.

docker engine check


Estimated reading time: 1 minute

Description
Check for available engine updates

Usage
docker engine check [OPTIONS]

Options
Name, shorthand Default Description

--containerd Override default location of containerd endpoint

--downgrades Report downgrades (default omits older versions)

--engine-image Specify engine image (default uses the same image as currently running)

--format Pretty-print updates using a Go template

--pre-releases Include pre-release versions

--quiet , -q Only display available versions

--registry-prefix docker.io/store/docker Override the existing location where engine images are pulled

--upgrades true Report available upgrades

Parent command
Command Description

docker engine Manage the docker engine

Related commands
Command Description

docker engine activate Activate Enterprise Edition

docker engine check Check for available engine updates

docker engine update Update a local engine

docker engine update


Estimated reading time: 1 minute
Description
Update a local engine

Usage
docker engine update [OPTIONS]

Options
Name, shorthand Default Description

--containerd Override default location of containerd endpoint

--engine-image Specify engine image (default uses the same image as currently running)

--registry-prefix docker.io/store/docker Override the current location where engine images are pulled

--version Specify engine version

Parent command
Command Description

docker engine Manage the docker engine

Related commands
Command Description

docker engine activate Activate Enterprise Edition

docker engine check Check for available engine updates

docker engine update Update a local engine


docker image
Estimated reading time: 1 minute

Description
Manage images

Usage
docker image COMMAND

Child commands
Command Description

docker image build Build an image from a Dockerfile

docker image
Show the history of an image
history

docker image import Import the contents from a tarball to create a filesystem image

docker image
Display detailed information on one or more images
inspect

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage images.

docker image build


Estimated reading time: 4 minutes

Description
Build an image from a Dockerfile

Usage
docker image build [OPTIONS] PATH | URL | -

Options
Name, shorthand Default Description

--add-host Add a custom host-to-IP mapping (host:ip)

--build-arg Set build-time variables

--cache-from Images to consider as cache sources

--cgroup-parent Optional parent cgroup for the container



--compress Compress the build context using gzip

--cpu-period Limit the CPU CFS (Completely Fair Scheduler) period

--cpu-quota Limit the CPU CFS (Completely Fair Scheduler) quota

--cpu-shares , -c CPU shares (relative weight)

--cpuset-cpus CPUs in which to allow execution (0-3, 0,1)

--cpuset-mems MEMs in which to allow execution (0-3, 0,1)

--disable-content-trust true Skip image verification

--file , -f Name of the Dockerfile (Default is ‘PATH/Dockerfile’)

--force-rm Always remove intermediate containers

--iidfile Write the image ID to the file

--isolation Container isolation technology

--label Set metadata for an image

--memory , -m Memory limit

--memory-swap Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap

--network API 1.25+ Set the networking mode for the RUN instructions during build

--no-cache Do not use cache when building the image

--output , -o API 1.40+ Output destination (format: type=local,dest=path)

--platform experimental (daemon) API 1.32+ Set platform if server is multi-platform capable

--progress auto Set type of progress output (auto, plain, tty). Use plain to show container output

--pull Always attempt to pull a newer version of the image

--quiet , -q Suppress the build output and print image ID on success

--rm true Remove intermediate containers after a successful build

--secret API 1.39+ Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret

--security-opt Security options

--shm-size Size of /dev/shm

--squash experimental (daemon) API 1.25+ Squash newly built layers into a single new layer

--ssh API 1.39+ SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|&lt;id&gt;[=&lt;socket&gt;|&lt;key&gt;[,&lt;key&gt;]])

--stream experimental (daemon) API 1.31+ Stream attaches to server to negotiate build context

--tag , -t Name and optionally a tag in the ‘name:tag’ format

--target Set the target build stage to build.

--ulimit Ulimit options
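
As an illustration (not from the original reference), a typical build tags the result and passes a build-time variable; "myapp" and the VERSION build-arg are hypothetical names:

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it and passing a build-time variable.
docker image build -t myapp:1.0 --build-arg VERSION=1.0 .
```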

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image history


Estimated reading time: 1 minute

Description
Show the history of an image

Usage
docker image history [OPTIONS] IMAGE
Options
Name, shorthand Default Description

--format Pretty-print images using a Go template

--human , -H true Print sizes and dates in human readable format

--no-trunc Don’t truncate output

--quiet , -q Only show numeric IDs
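
For example (an illustrative invocation, not from the original text), showing each layer's full creating instruction without truncation:

```shell
# Show the full layer history of an image, untruncated.
docker image history --no-trunc alpine:latest
```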

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry



docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image import


Estimated reading time: 2 minutes

Description
Import the contents from a tarball to create a filesystem image

Usage
docker image import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]

Options
Name, shorthand Default Description

--change , -c Apply Dockerfile instruction to the created image

--message , -m Set commit message for imported image

--platform experimental (daemon) API 1.32+ Set platform if server is multi-platform capable
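
A hedged sketch of a typical import (the tarball and repository names are hypothetical, not from the original reference):

```shell
# Create an image from a root filesystem tarball, applying a
# Dockerfile instruction and a commit message on the way in.
docker image import --change "ENV DEBUG=true" -m "imported from tarball" rootfs.tar example/imported:latest
```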

Parent command
Command Description

docker image Manage images


Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image inspect


Estimated reading time: 1 minute

Description
Display detailed information on one or more images

Usage
docker image inspect [OPTIONS] IMAGE [IMAGE...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template
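
For instance (an illustrative template, not from the original text), a Go template can extract single fields from the inspect output:

```shell
# Print only the OS and architecture of an image.
docker image inspect --format '{{.Os}}/{{.Architecture}}' alpine:latest
```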

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images


docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image load


Estimated reading time: 1 minute

Description
Load an image from a tar archive or STDIN

Usage
docker image load [OPTIONS]

Options
Name, shorthand Default Description

--input , -i Read from tar archive file, instead of STDIN

--quiet , -q Suppress the load output
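
As an example (the archive name is hypothetical), restoring images previously exported with docker image save:

```shell
# Load images from a tar archive instead of STDIN.
docker image load --input images.tar
```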

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image ls
Estimated reading time: 1 minute

Description
List images

Usage
docker image ls [OPTIONS] [REPOSITORY[:TAG]]
Options
Name, shorthand Default Description

--all , -a Show all images (default hides intermediate images)

--digests Show digests

--filter , -f Filter output based on conditions provided

--format Pretty-print images using a Go template

--no-trunc Don’t truncate output

--quiet , -q Only show numeric IDs
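
For example (an illustrative invocation), combining a filter with quiet output to list only the IDs of dangling images:

```shell
# List only the IDs of dangling (untagged) images.
docker image ls --filter "dangling=true" --quiet
```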

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images



docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image prune


Estimated reading time: 7 minutes

Description
Remove unused images

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker image prune [OPTIONS]

Options
Name, shorthand Default Description

--all , -a Remove all unused images, not just dangling ones

--filter Provide filter values (e.g. ‘until=&lt;timestamp&gt;')

--force , -f Do not prompt for confirmation


Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Extended description
Remove all dangling images. If -a is specified, all images not referenced by any container are also removed.
Examples
Example output:

$ docker image prune -a

WARNING! This will remove all images without at least one container associated to
them.
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: alpine:latest
untagged:
alpine@sha256:3dcdb92d7432d56604d4545cbd324b14e647b313626d99b889d0626de158f73a
deleted: sha256:4e38e38c8ce0b8d9041a9c4fefe786631d1416225e13b0bfe8cfa2321aec4bba
deleted: sha256:4fe15f8d0ae69e169824f25f1d4da3015a48feeeeebb265cd2e328e15c6a869f
untagged: alpine:3.3
untagged:
alpine@sha256:4fa633f4feff6a8f02acfc7424efd5cb3e76686ed3218abf4ca0fa4a2a358423
untagged: my-jq:latest
deleted: sha256:ae67841be6d008a374eff7c2a974cde3934ffe9536a7dc7ce589585eddd83aff
deleted: sha256:34f6f1261650bc341eb122313372adc4512b4fceddc2a7ecbb84f0958ce5ad65
deleted: sha256:cf4194e8d8db1cb2d117df33f2c75c0369c3a26d96725efb978cc69e046b87e7
untagged: my-curl:latest
deleted: sha256:b2789dd875bf427de7f9f6ae001940073b3201409b14aba7e5db71f408b8569e
deleted: sha256:96daac0cb203226438989926fc34dd024f365a9a8616b93e168d303cfe4cb5e9
deleted: sha256:5cbd97a14241c9cd83250d6b6fc0649833c4a3e84099b968dd4ba403e609945e
deleted: sha256:a0971c4015c1e898c60bf95781c6730a05b5d8a2ae6827f53837e6c9d38efdec
deleted: sha256:d8359ca3b681cc5396a4e790088441673ed3ce90ebc04de388bfcd31a0716b06
deleted: sha256:83fc9ba8fb70e1da31dfcc3c88d093831dbd4be38b34af998df37e8ac538260c
deleted: sha256:ae7041a4cc625a9c8e6955452f7afe602b401f662671cea3613f08f3d9343b35
deleted: sha256:35e0f43a37755b832f0bbea91a2360b025ee351d7309dae0d9737bc96b6d0809
deleted: sha256:0af941dd29f00e4510195dd00b19671bc591e29d1495630e7e0f7c44c1e6a8c0
deleted: sha256:9fc896fc2013da84f84e45b3096053eb084417b42e6b35ea0cce5a3529705eac
deleted: sha256:47cf20d8c26c46fff71be614d9f54997edacfe8d46d51769706e5aba94b16f2b
deleted: sha256:2c675ee9ed53425e31a13e3390bf3f539bf8637000e4bcfbb85ee03ef4d910a1
Total reclaimed space: 16.43 MB

Filtering
The filtering flag (--filter) expects a “key=value” format. To apply more than one filter, pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 until (<timestamp>) - only remove images created before given timestamp


 label (label=<key>, label=<key>=<value>, label!=<key>, or label!=<key>=<value>) - only
remove images with (or without, in case label!=... is used) the specified labels.

The until filter can be Unix timestamps, date formatted timestamps, or Go duration strings
(e.g. 10m, 1h30m) computed relative to the daemon machine’s time. Supported formats for date
formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05, 2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the daemon will
be used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp.
When providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of
seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds
(aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more
than nine digits long.
The label filter accepts two formats. One is the label=... (label=<key> or label=<key>=<value>),
which removes images with the specified labels. The other format is
the label!=... (label!=<key> or label!=<key>=<value>), which removes images without the
specified labels.
Predicting what will be removed

If you are using positive filtering (testing for the existence of a label or that a label has a specific
value), you can use docker image ls with the same filtering syntax to see which images match your
filter.
However, if you are using negative filtering (testing for the absence of a label or that a label
does not have a specific value), this type of filter does not work with docker image ls so you cannot
easily predict which images will be removed. In addition, the confirmation prompt for docker image
prune always warns that all dangling images will be removed, even if you are using --filter.
The following removes images created before 2017-01-04T00:00:00:
$ docker images --format 'table
{{.Repository}}\t{{.Tag}}\t{{.ID}}\t{{.CreatedAt}}\t{{.Size}}'
REPOSITORY TAG IMAGE ID CREATED AT
SIZE
foo latest 2f287ac753da 2017-01-04 13:42:23 -0800
PST 3.98 MB
alpine latest 88e169ea8f46 2016-12-27 10:17:25 -0800
PST 3.98 MB
busybox latest e02e811dd08f 2016-10-07 14:03:58 -0700
PDT 1.09 MB

$ docker image prune -a --force --filter "until=2017-01-04T00:00:00"

Deleted Images:
untagged: alpine:latest
untagged:
alpine@sha256:dfbd4a3a8ebca874ebd2474f044a0b33600d4523d03b0df76e5c5986cb02d7e8
untagged: busybox:latest
untagged:
busybox@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
deleted: sha256:e02e811dd08fd49e7f6032625495118e63f597eb150403d02e3238af1df240ba
deleted: sha256:e88b3f82283bc59d5e0df427c824e9f95557e661fcb0ea15fb0fb6f97760f9d9

Total reclaimed space: 1.093 MB

$ docker images --format 'table


{{.Repository}}\t{{.Tag}}\t{{.ID}}\t{{.CreatedAt}}\t{{.Size}}'

REPOSITORY TAG IMAGE ID CREATED AT


SIZE
foo latest 2f287ac753da 2017-01-04 13:42:23 -0800
PST 3.98 MB

The following removes images created more than 10 days (240h) ago:
$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE


foo latest 2f287ac753da 14 seconds ago 3.98
MB
alpine latest 88e169ea8f46 8 days ago 3.98
MB
debian jessie 7b0a06c805e8 2 months ago 123
MB
busybox latest e02e811dd08f 2 months ago 1.09
MB
golang 1.7.0 138c2e655421 4 months ago 670
MB

$ docker image prune -a --force --filter "until=240h"

Deleted Images:
untagged: golang:1.7.0
untagged:
golang@sha256:6765038c2b8f407fd6e3ecea043b44580c229ccfa2a13f6d85866cf2b4a9628e
deleted: sha256:138c2e6554219de65614d88c15521bfb2da674cbb0bf840de161f89ff4264b96
deleted: sha256:ec353c2e1a673f456c4b78906d0d77f9d9456cfb5229b78c6a960bfb7496b76a
deleted: sha256:fe22765feaf3907526b4921c73ea6643ff9e334497c9b7e177972cf22f68ee93
deleted: sha256:ff845959c80148421a5c3ae11cc0e6c115f950c89bc949646be55ed18d6a2912
deleted: sha256:a4320831346648c03db64149eafc83092e2b34ab50ca6e8c13112388f25899a7
deleted: sha256:4c76020202ee1d9709e703b7c6de367b325139e74eebd6b55b30a63c196abaf3
deleted: sha256:d7afd92fb07236c8a2045715a86b7d5f0066cef025018cd3ca9a45498c51d1d6
deleted: sha256:9e63c5bce4585dd7038d830a1f1f4e44cb1a1515b00e620ac718e934b484c938
untagged: debian:jessie
untagged:
debian@sha256:c1af755d300d0c65bb1194d24bce561d70c98a54fb5ce5b1693beb4f7988272f
deleted: sha256:7b0a06c805e8f23807fb8856621c60851727e85c7bcb751012c813f122734c8d
deleted: sha256:f96222d75c5563900bc4dd852179b720a0885de8f7a0619ba0ac76e92542bbc8

Total reclaimed space: 792.6 MB

$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE


foo latest 2f287ac753da About a minute ago 3.98
MB
alpine latest 88e169ea8f46 8 days ago 3.98
MB
busybox latest e02e811dd08f 2 months ago 1.09
MB

The following example removes images with the label deprecated:


$ docker image prune --filter="label=deprecated"

The following example removes images with the label maintainer set to john:
$ docker image prune --filter="label=maintainer=john"

This example removes images which have no maintainer label:


$ docker image prune --filter="label!=maintainer"

This example removes images which have a maintainer label not set to john:
$ docker image prune --filter="label!=maintainer=john"

Note: You are prompted for confirmation before the prune removes anything, but you are not shown
a list of what will potentially be removed. In addition, docker image ls does not support negative
filtering, so it is difficult to predict which images will actually be removed.

docker image pull


Estimated reading time: 2 minutes

Description
Pull an image or a repository from a registry

Usage
docker image pull [OPTIONS] NAME[:TAG|@DIGEST]

Options
Name, shorthand Default Description

--all-tags , -a Download all tagged images in the repository

--disable-content-trust true Skip image verification

--platform experimental (daemon) API 1.32+ Set platform if server is multi-platform capable

--quiet , -q Suppress verbose output
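
For example (an illustrative pull), requesting a specific tag rather than the default latest:

```shell
# Pull a specific tag of an image from the default registry.
docker image pull alpine:3.14
```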

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry



docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image push


Estimated reading time: 1 minute

Description
Push an image or a repository to a registry

Usage
docker image push [OPTIONS] NAME[:TAG]

Options
Name, shorthand Default Description

--disable-content-trust true Skip image signing
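
A common workflow (sketched here with a hypothetical registry host and repository path) is to tag a local image for a registry before pushing:

```shell
# Tag a local image for a registry, then push it.
docker image tag myapp:1.0 registry.example.com/team/myapp:1.0
docker image push registry.example.com/team/myapp:1.0
```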

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image rm
Estimated reading time: 1 minute

Description
Remove one or more images

Usage
docker image rm [OPTIONS] IMAGE [IMAGE...]
Options
Name, shorthand Default Description

--force , -f Force removal of the image

--no-prune Do not delete untagged parents
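
As an illustration (not from the original reference), rm can be combined with a filtered listing to force-remove all dangling images in one pass:

```shell
# Force-remove all dangling images by ID.
docker image rm --force $(docker image ls --quiet --filter "dangling=true")
```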

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images



docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image save


Estimated reading time: 1 minute

Description
Save one or more images to a tar archive (streamed to STDOUT by default)

Usage
docker image save [OPTIONS] IMAGE [IMAGE...]

Options
Name, shorthand Default Description

--output , -o Write to a file, instead of STDOUT
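
For example (an illustrative invocation), writing an image to an archive instead of STDOUT; the archive can later be restored with docker image load:

```shell
# Save an image to a tar archive instead of streaming to STDOUT.
docker image save --output alpine.tar alpine:latest
```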

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile



docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker image tag


Estimated reading time: 1 minute

Description
Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Usage
docker image tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
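
As a sketch (both image names are hypothetical), creating an additional tag that points at the same image ID:

```shell
# Create a second name for an existing image; no data is copied.
docker image tag httpd:2.4 registry.example.com/web/httpd:v1
```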

Parent command
Command Description

docker image Manage images

Related commands
Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Pull an image or a repository from a registry

docker image push Push an image or a repository to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker images
Estimated reading time: 10 minutes

Description
List images

Usage
docker images [OPTIONS] [REPOSITORY[:TAG]]

Options
Name, shorthand Default Description

--all , -a Show all images (default hides intermediate images)

--digests Show digests

--filter , -f Filter output based on conditions provided

--format Pretty-print images using a Go template

--no-trunc Don’t truncate output

--quiet , -q Only show numeric IDs

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
By default, docker images shows all top-level images, along with their repository, tags, and size.
Docker images have intermediate layers that increase reusability, decrease disk usage, and speed
up docker build by allowing each step to be cached. These intermediate layers are not shown by
default.
The SIZE is the cumulative space taken up by the image and all its parent images. This is also the
disk space used by the contents of the Tar file created when you docker save an image.
An image will be listed more than once if it has multiple repository names or tags. This single image
(identifiable by its matching IMAGE ID) uses up the SIZE listed only once.
Examples
List the most recently created images
$ docker images

REPOSITORY TAG IMAGE ID CREATED


SIZE
<none> <none> 77af4d6b9913 19 hours ago
1.089 GB
committ latest b6fa739cedf5 19 hours ago
1.089 GB
<none> <none> 78a85c484f71 19 hours ago
1.089 GB
docker latest 30557a29d5ab 20 hours ago
1.089 GB
<none> <none> 5ed6274db6ce 24 hours ago
1.089 GB
postgres 9 746b819f315e 4 days ago
213.4 MB
postgres 9.3 746b819f315e 4 days ago
213.4 MB
postgres 9.3.5 746b819f315e 4 days ago
213.4 MB
postgres latest 746b819f315e 4 days ago
213.4 MB

List images by name and tag


The docker images command takes an optional [REPOSITORY[:TAG]] argument that restricts the list
to images that match the argument. If you specify REPOSITORY but no TAG, the docker
images command lists all images in the given repository.

For example, to list all images in the “java” repository, run this command:

$ docker images java

REPOSITORY TAG IMAGE ID CREATED SIZE


java 8 308e519aac60 6 days ago 824.5
MB
java 7 493d82594c15 3 months ago 656.3
MB
java latest 2711b1d6f3aa 5 months ago 603.9
MB

The [REPOSITORY[:TAG]] value must be an “exact match”. This means that, for example, docker
images jav does not match the image java.
If both REPOSITORY and TAG are provided, only images matching that repository and tag are listed. To
find all local images in the “java” repository with tag “8” you can use:
$ docker images java:8

REPOSITORY TAG IMAGE ID CREATED SIZE


java 8 308e519aac60 6 days ago 824.5
MB

If nothing matches REPOSITORY[:TAG], the list is empty.


$ docker images java:0

REPOSITORY TAG IMAGE ID CREATED SIZE

List the full length image IDs


$ docker images --no-trunc

REPOSITORY TAG IMAGE ID


CREATED SIZE
<none> <none>
sha256:77af4d6b9913e693e8d0b4b294fa62ade6054e6b2f1ffb617ac955dd63fb0182 19 hours
ago 1.089 GB
committest latest
sha256:b6fa739cedf5ea12a620a439402b6004d057da800f91c7524b5086a5e4749c9f 19 hours
ago 1.089 GB
<none> <none>
sha256:78a85c484f71509adeaace20e72e941f6bdd2b25b4c75da8693efd9f61a37921 19 hours
ago 1.089 GB
docker latest
sha256:30557a29d5abc51e5f1d5b472e79b7e296f595abcf19fe6b9199dbbc809c6ff4 20 hours
ago 1.089 GB
<none> <none>
sha256:0124422dd9f9cf7ef15c0617cda3931ee68346455441d66ab8bdc5b05e9fdce5 20 hours
ago 1.089 GB
<none> <none>
sha256:18ad6fad340262ac2a636efd98a6d1f0ea775ae3d45240d3418466495a19a81b 22 hours
ago 1.082 GB
<none> <none>
sha256:f9f1e26352f0a3ba6a0ff68167559f64f3e21ff7ada60366e2d44a04befd1d3a 23 hours
ago 1.089 GB
tryout latest
sha256:2629d1fa0b81b222fca63371ca16cbf6a0772d07759ff80e8d1369b926940074 23 hours
ago 131.5 MB
<none> <none>
sha256:5ed6274db6ceb2397844896966ea239290555e74ef307030ebb01ff91b1914df 24 hours
ago 1.089 GB

List image digests


Images that use the v2 or later format have a content-addressable identifier called a digest. As long
as the input used to generate the image is unchanged, the digest value is predictable. To list image
digest values, use the --digests flag:
$ docker images --digests
REPOSITORY TAG DIGEST
IMAGE ID CREATED SIZE
localhost:5000/test/busybox <none>
sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
4986bf8c1536 9 weeks ago 2.43 MB

When pushing or pulling to a 2.0 registry, the push or pull command output includes the image
digest. You can pull using a digest value. You can also reference by digest in create, run,
and rmi commands, as well as the FROM image reference in a Dockerfile.

Filtering
The filtering flag (-f or --filter) expects a “key=value” format. To apply more than one filter, pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 dangling (boolean - true or false)


 label (label=<key> or label=<key>=<value>)
 before (<image-name>[:<tag>], <image id> or <image@digest>) - filter images created before
given id or references
 since (<image-name>[:<tag>], <image id> or <image@digest>) - filter images created since
given id or references
 reference (pattern of an image reference) - filter images whose reference matches the
specified pattern

SHOW UNTAGGED IMAGES (DANGLING)

$ docker images --filter "dangling=true"

REPOSITORY TAG IMAGE ID CREATED SIZE


<none> <none> 8abc22fbb042 4 weeks ago 0 B
<none> <none> 48e5f45168b9 4 weeks ago 2.489
MB
<none> <none> bf747efa0e2f 4 weeks ago 0 B
<none> <none> 980fe10e5736 12 weeks ago 101.4
MB
<none> <none> dea752e4e117 12 weeks ago 101.4
MB
<none> <none> 511136ea3c5a 8 months ago 0 B

This will display untagged images that are the leaves of the images tree (not intermediary layers).
These images occur when a new build of an image takes the repo:tag away from the image ID,
leaving it as <none>:<none> or untagged. A warning is issued if you try to remove an image while
a container is presently using it. This flag makes batch cleanup straightforward.
You can use this in conjunction with docker rmi ...:
$ docker rmi $(docker images -f "dangling=true" -q)

8abc22fbb042
48e5f45168b9
bf747efa0e2f
980fe10e5736
dea752e4e117
511136ea3c5a

Note: Docker warns you if any containers exist that are using these untagged images.

SHOW IMAGES WITH A GIVEN LABEL

The label filter matches images based on the presence of a label alone or a label and a value.
The following filter matches images with the com.example.version label regardless of its value.
$ docker images --filter "label=com.example.version"

REPOSITORY TAG IMAGE ID CREATED SIZE


match-me-1 latest eeae25ada2aa About a minute ago
188.3 MB
match-me-2 latest dea752e4e117 About a minute ago
188.3 MB

The following filter matches images with the com.example.version label with the 1.0 value.
$ docker images --filter "label=com.example.version=1.0"

REPOSITORY TAG IMAGE ID CREATED SIZE


match-me latest 511136ea3c5a About a minute ago
188.3 MB

In this example, with the 0.1 value, it returns an empty set because no matches were found.
$ docker images --filter "label=com.example.version=0.1"
REPOSITORY TAG IMAGE ID CREATED SIZE

FILTER IMAGES BY TIME

The before filter shows only images created before the image with given id or reference. For
example, having these images:
$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE


image1 latest eeae25ada2aa 4 minutes ago
188.3 MB
image2 latest dea752e4e117 9 minutes ago
188.3 MB
image3 latest 511136ea3c5a 25 minutes ago
188.3 MB

Filtering with before would give:


$ docker images --filter "before=image1"
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
image2       latest   dea752e4e117   9 minutes ago    188.3 MB
image3       latest   511136ea3c5a   25 minutes ago   188.3 MB

The since filter shows only images created after the image with the given id or reference.
Filtering with since would give:
$ docker images --filter "since=image3"
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
image1       latest   eeae25ada2aa   4 minutes ago   188.3 MB
image2       latest   dea752e4e117   9 minutes ago   188.3 MB

FILTER IMAGES BY REFERENCE

The reference filter shows only images whose reference matches the specified pattern.
$ docker images

REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
busybox      latest   e02e811dd08f   5 weeks ago   1.09 MB
busybox      uclibc   e02e811dd08f   5 weeks ago   1.09 MB
busybox      musl     733eb3059dce   5 weeks ago   1.21 MB
busybox      glibc    21c16b6787c6   5 weeks ago   4.19 MB

Filtering with reference would give:


$ docker images --filter=reference='busy*:*libc'

REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
busybox      uclibc   e02e811dd08f   5 weeks ago   1.09 MB
busybox      glibc    21c16b6787c6   5 weeks ago   4.19 MB

Filtering with multiple reference patterns combines them with OR, matching images that satisfy either pattern:


$ docker images --filter=reference='busy*:uclibc' --filter=reference='busy*:glibc'

REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
busybox      uclibc   e02e811dd08f   5 weeks ago   1.09 MB
busybox      glibc    21c16b6787c6   5 weeks ago   4.19 MB

Format the output


The formatting option (--format) pretty-prints image output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder     Description

.ID             Image ID
.Repository     Image repository
.Tag            Image tag
.Digest         Image digest
.CreatedSince   Elapsed time since the image was created
.CreatedAt      Time when the image was created
.Size           Image disk size

When using the --format option, the image command will either output the data exactly as the
template declares or, when using the table directive, will include column headers as well.
The following example uses a template without headers and outputs the ID and Repository entries
separated by a colon for all images:
$ docker images --format "{{.ID}}: {{.Repository}}"

77af4d6b9913: <none>
b6fa739cedf5: committ
78a85c484f71: <none>
30557a29d5ab: docker
5ed6274db6ce: <none>
746b819f315e: postgres
746b819f315e: postgres
746b819f315e: postgres
746b819f315e: postgres

To list all images with their repository and tag in a table format you can use:

$ docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"

IMAGE ID REPOSITORY TAG


77af4d6b9913 <none> <none>
b6fa739cedf5 committ latest
78a85c484f71 <none> <none>
30557a29d5ab docker latest
5ed6274db6ce <none> <none>
746b819f315e postgres 9
746b819f315e postgres 9.3
746b819f315e postgres 9.3.5
746b819f315e postgres latest

docker import
Estimated reading time: 2 minutes

Description
Import the contents from a tarball to create a filesystem image

Usage
docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]

Options
Name, shorthand Default Description

--change , -c Apply Dockerfile instruction to the created image

--message , -m Set commit message for imported image

--platform Set platform if server is multi-platform capable (experimental (daemon), API 1.32+)

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
You can specify a URL or - (dash) to take data directly from STDIN. The URL can point to an archive
(.tar, .tar.gz, .tgz, .bzip, .tar.xz, or .txz) containing a filesystem or to an individual file on the Docker
host. If you specify an archive, Docker untars it in the container relative to the / (root). If you specify
an individual file, you must specify the full path within the host. To import from a remote location,
specify a URI that begins with the http:// or https:// protocol.
The --change option will apply Dockerfile instructions to the image that is created.
Supported Dockerfile instructions: CMD|ENTRYPOINT|ENV|EXPOSE|ONBUILD|USER|VOLUME|WORKDIR

Examples
Import from a remote location
This will create a new untagged image.

$ docker import http://example.com/exampleimage.tgz

Import from a local file


Import to docker via pipe and STDIN:

$ cat exampleimage.tgz | docker import - exampleimagelocal:new

Import with a commit message:

$ cat exampleimage.tgz | docker import --message "New image imported from tarball" - exampleimagelocal:new

Import to docker from a local archive:

$ docker import /path/to/exampleimage.tgz

Import from a local directory


$ sudo tar -c . | docker import - exampleimagedir

Import from a local directory with new configurations


$ sudo tar -c . | docker import --change "ENV DEBUG true" - exampleimagedir

Note the sudo in this example: you must preserve the ownership of the files (especially root
ownership) during the archiving with tar. If you do not run tar as root (or via sudo), the
ownership might not be preserved.
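To see why this matters, note that tar records each entry’s owner and group in the archive, and these are what end up in the imported filesystem. A quick local sketch (the paths and file name are illustrative, and no Docker daemon is needed):

```shell
# Create a scratch directory and archive it; tar records owner/group per entry.
mkdir -p /tmp/imgroot
echo hello > /tmp/imgroot/etc-motd
tar -C /tmp/imgroot -cf /tmp/imgroot.tar .

# The verbose listing shows the recorded ownership for each entry.
# Archiving as an unprivileged user stamps every entry with that user,
# which is why preserving root ownership requires running tar via sudo.
tar -tvf /tmp/imgroot.tar
```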

docker info
Estimated reading time: 5 minutes

Description
Display system-wide information

Usage
docker info [OPTIONS]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
This command displays system-wide information regarding the Docker installation. Information
displayed includes the kernel version and the number of containers and images. The number of
images shown is the number of unique images. The same image tagged under different names is
counted only once.

If a format is specified, the given template will be executed instead of the default format.
Go’s text/template package describes all the details of the format.

Depending on the storage driver in use, additional information can be shown, such as pool name,
data file, metadata file, data space used, total data space, metadata space used, and total metadata
space.

The data file is where the images are stored and the metadata file is where the metadata regarding
those images is stored. When run for the first time, Docker allocates a certain amount of data space
and metadata space from the space available on the volume where /var/lib/docker is mounted.

Examples
Show output
The example below shows the output for a daemon running on Red Hat Enterprise Linux, using
the devicemapper storage driver. As can be seen in the output, additional information about
the devicemapper storage driver is shown:
$ docker info
Client:
Debug Mode: false

Server:
Containers: 14
Running: 3
Paused: 1
Stopped: 10
Images: 52
Server Version: 1.10.3
Storage Driver: devicemapper
Pool Name: docker-202:2-25583803-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.68 GB
Data Space Total: 107.4 GB
Data Space Available: 7.548 GB
Metadata Space Used: 2.322 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 991.7 MiB
Name: ip-172-30-0-91.ec2.internal
ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: gordontheturtle
Registry: https://index.docker.io/v1/
Insecure registries:
myinsecurehost:5000
127.0.0.0/8

Show debugging output


Here is a sample output for a daemon running on Ubuntu, using the overlay2 storage driver and a
node that is part of a 2-node swarm:

$ docker -D info
Client:
Debug Mode: true

Server:
Containers: 14
Running: 3
Paused: 1
Stopped: 10
Images: 52
Server Version: 1.13.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: rdjq45w1op418waxlairloqbm
Is Manager: true
ClusterID: te8kdyw33n36fqiz74bfjeixd
Managers: 1
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Root Rotation In Progress: false
Node Address: 172.16.66.128 172.16.66.129
Manager Addresses:
172.16.66.128:2477
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8517738ba4b82aff5662c97ca4627e7e4d03b531
runc version: ac031b5bf1cc92239461125f4c1ffb760522bbf2
init version: N/A (expected: v0.13.0)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-31-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.937 GiB
Name: ubuntu
ID: H52R:7ZR6:EIIA:76JG:ORIY:BVKF:GSFU:HNPG:B5MK:APSC:SZ3Q:N326
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 30
Goroutines: 123
System Time: 2016-11-12T17:24:37.955404361-08:00
EventsListeners: 0
Http Proxy: http://test:[email protected]:8080
Https Proxy: https://test:[email protected]:8080
No Proxy: localhost,127.0.0.1,docker-registry.somecorporation.com
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
storage=ssd
staging=true
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
http://192.168.1.2/
http://registry-mirror.example.com:5000/
Live Restore Enabled: false

The global -D option causes all docker commands to output debug information.

Format the output


You can also specify the output format:

$ docker info --format '{{json .}}'

{"ID":"I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S","Containers":14,
...}

Run docker info on Windows


Here is a sample output for a daemon running on Windows Server 2016:

E:\docker>docker info
Client:
Debug Mode: false

Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 17
Server Version: 1.13.0
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: nat null overlay
Swarm: inactive
Default Isolation: process
Kernel Version: 10.0 14393 (14393.206.amd64fre.rs1_release.160912-1937)
Operating System: Windows Server 2016 Datacenter
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 3.999 GiB
Name: WIN-V0V70C0LU5P
ID: NYMS:B5VK:UMSL:FVDZ:EWB5:FKVK:LPFL:FJMQ:H6FT:BZJ6:L2TD:XH62
Docker Root Dir: C:\control
Debug Mode: false
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
http://192.168.1.2/
http://registry-mirror.example.com:5000/
Live Restore Enabled: false

Warnings about kernel support


If your operating system does not enable certain capabilities, you may see warnings such as one of
the following, when you run docker info:
WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.
WARNING: No swap limit support

You can ignore these warnings unless you actually need the ability to limit these resources, in which
case you should consult your operating system’s documentation for enabling them. Learn more.

docker inspect
Estimated reading time: 2 minutes

Description
Return low-level information on Docker objects

Usage
docker inspect [OPTIONS] NAME|ID [NAME|ID...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

--size , -s Display total file sizes if the type is container

--type Return JSON for specified type

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Docker inspect provides detailed information on constructs controlled by Docker.

By default, docker inspect will render results in a JSON array.

Examples
Get an instance’s IP address
For the most part, you can pick out any field from the JSON in a fairly straightforward manner.

$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $INSTANCE_ID

Get an instance’s MAC address


$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.MacAddress}}{{end}}'
$INSTANCE_ID

Get an instance’s log path


$ docker inspect --format='{{.LogPath}}' $INSTANCE_ID

Get an instance’s image name


$ docker inspect --format='{{.Config.Image}}' $INSTANCE_ID

List all port bindings


You can loop over arrays and maps in the results to produce simple text output:

$ docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID

Find a specific port mapping


The .Field syntax doesn’t work when the field name begins with a number, but the template
language’s index function does. The .NetworkSettings.Ports section contains a map of the internal
port mappings to a list of external address/port objects. To grab just the numeric public port, you
use index to find the specific port map, and then index 0 contains the first object inside of that. Then
we ask for the HostPort field to get the public address.
$ docker inspect --format='{{(index (index .NetworkSettings.Ports "8787/tcp")
0).HostPort}}' $INSTANCE_ID

Get a subsection in JSON format


If you request a field which is itself a structure containing other fields, by default you get a Go-style
dump of the inner values. Docker adds a template function, json, which can be applied to get results
in JSON format.
$ docker inspect --format='{{json .Config}}' $INSTANCE_ID

docker kill
Estimated reading time: 1 minute
Description
Kill one or more running containers

Usage
docker kill [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--signal , -s KILL Signal to send to the container

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker kill subcommand kills one or more containers. The main process inside the container
is sent the SIGKILL signal (default), or the signal that is specified with the --signal option. You can
kill a container using the container’s ID, ID-prefix, or name.
Note: ENTRYPOINT and CMD in the shell form run as a subcommand of /bin/sh -c, which does not
pass signals. This means that the executable is not the container’s PID 1 and does not receive Unix
signals.

Examples
Send a KILL signal to a container
The following example sends the default KILL signal to the container named my_container:
$ docker kill my_container
Send a custom signal to a container
The following example sends a SIGHUP signal to the container named my_container:
$ docker kill --signal=SIGHUP my_container

You can specify a custom signal either by name, or number. The SIG prefix is optional, so the
following examples are equivalent:
$ docker kill --signal=SIGHUP my_container
$ docker kill --signal=HUP my_container
$ docker kill --signal=1 my_container

Refer to the signal(7) man-page for a list of standard Linux signals.
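Since --signal accepts the same names and numbers as the system’s signal table, you can check the mapping locally with the shell’s kill builtin (a convenience check on the host, not a docker command):

```shell
# Map signal number 1 back to its name; on Linux this is HUP,
# so --signal=SIGHUP, --signal=HUP, and --signal=1 are equivalent.
kill -l 1

# List all signal names known to the shell:
kill -l
```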

docker load
Estimated reading time: 1 minute

Description
Load an image from a tar archive or STDIN

Usage
docker load [OPTIONS]

Options
Name, shorthand Default Description

--input , -i Read from tar archive file, instead of STDIN

--quiet , -q Suppress the load output

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Load an image or repository from a tar archive (even if compressed with gzip, bzip2, or xz) from a
file or STDIN. It restores both images and tags.

Examples
$ docker image ls

REPOSITORY TAG IMAGE ID CREATED SIZE

$ docker load < busybox.tar.gz

Loaded image: busybox:latest


$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
busybox      latest   769b9341d937   7 weeks ago   2.489 MB

$ docker load --input fedora.tar

Loaded image: fedora:rawhide

Loaded image: fedora:20

$ docker images

REPOSITORY   TAG         IMAGE ID       CREATED       SIZE
busybox      latest      769b9341d937   7 weeks ago   2.489 MB
fedora       rawhide     0d20aec6529d   7 weeks ago   387 MB
fedora       20          58394af37342   7 weeks ago   385.5 MB
fedora       heisenbug   58394af37342   7 weeks ago   385.5 MB
fedora       latest      58394af37342   7 weeks ago   385.5 MB

docker login
Estimated reading time: 6 minutes

Description
Log in to a Docker registry

Usage
docker login [OPTIONS] [SERVER]

Options
Name, shorthand Default Description

--password , -p Password

--password-stdin Take the password from stdin

--username , -u Username

Parent command
Command Description

docker The base command for the Docker CLI.


Extended description
Log in to a registry.

Login to a self-hosted registry


If you want to log in to a self-hosted registry, you can specify this by adding the server name.

$ docker login localhost:8080

Provide a password using STDIN


To run the docker login command non-interactively, you can set the --password-stdin flag to
provide a password through STDIN. Using STDIN prevents the password from ending up in the shell’s
history or log files.
The following example reads a password from a file, and passes it to the docker login command
using STDIN:
$ cat ~/my_password.txt | docker login --username foo --password-stdin
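If you keep the password in a file like this, it is worth restricting the file’s permissions first. A minimal sketch (the path, password, and username below are placeholders; the docker line is commented out since it needs a running daemon):

```shell
# Create the password file readable only by the current user.
umask 077
printf '%s' 'example-password' > /tmp/my_password.txt
chmod 600 /tmp/my_password.txt
ls -l /tmp/my_password.txt

# Then log in non-interactively (username is a placeholder):
# docker login --username foo --password-stdin < /tmp/my_password.txt
```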

Privileged user requirement


docker login requires the user to use sudo or be root, except when:

1. connecting to a remote daemon, such as a docker-machine provisioned docker engine.
2. the user is added to the docker group. This will impact the security of your system;
the docker group is root equivalent. See Docker Daemon Attack Surface for details.

You can log into any public or private repository for which you have credentials. When you log in, the
command stores credentials in $HOME/.docker/config.json on Linux
or %USERPROFILE%/.docker/config.json on Windows, via the procedure described below.

Credentials store
The Docker Engine can keep user credentials in an external credentials store, such as the native
keychain of the operating system. Using an external store is more secure than storing credentials in
the Docker configuration file.

To use a credentials store, you need an external helper program to interact with a specific keychain
or external store. Docker requires the helper program to be in the client’s host $PATH.
This is the list of currently available credentials helpers and where you can download them from:

- D-Bus Secret Service: https://github.com/docker/docker-credential-helpers/releases
- Apple macOS keychain: https://github.com/docker/docker-credential-helpers/releases
- Microsoft Windows Credential Manager: https://github.com/docker/docker-credential-helpers/releases
- pass: https://github.com/docker/docker-credential-helpers/releases

CONFIGURE THE CREDENTIALS STORE

You need to specify the credentials store in $HOME/.docker/config.json to tell the docker engine to
use it. The value of the credsStore property should be the suffix of the program to use (i.e. everything
after docker-credential-). For example, to use docker-credential-osxkeychain:
{
"credsStore": "osxkeychain"
}

If you are currently logged in, run docker logout to remove the credentials from the file and
run docker login again.

DEFAULT BEHAVIOR

By default, Docker looks for the native binary on each of the platforms, i.e. “osxkeychain” on macOS,
“wincred” on windows, and “pass” on Linux. A special case is that on Linux, Docker will fall back to
the “secretservice” binary if it cannot find the “pass” binary. If none of these binaries are present, it
stores the credentials (i.e. password) in base64 encoding in the config files described above.
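The base64 fallback is an encoding, not encryption: the auth value stored in the config file is simply username:password base64-encoded. A sketch with dummy credentials (GNU coreutils base64 syntax):

```shell
# The "auth" field in config.json is base64("username:password"), fully reversible.
printf '%s' 'david:passw0rd1' | base64
# → ZGF2aWQ6cGFzc3cwcmQx

# Decoding recovers the plaintext, which is why a credentials store is preferred:
printf '%s' 'ZGF2aWQ6cGFzc3cwcmQx' | base64 -d
# → david:passw0rd1
```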

CREDENTIAL HELPER PROTOCOL

Credential helpers can be any program or script that follows a very simple protocol. This protocol is
heavily inspired by Git, but it differs in the information shared.

The helpers always use the first argument in the command to identify the action. There are only
three possible values for that argument: store, get, and erase.
The store command takes a JSON payload from the standard input. That payload carries the server
address, to identify the credential, the user name, and either a password or an identity token.
{
"ServerURL": "https://index.docker.io/v1",
"Username": "david",
"Secret": "passw0rd1"
}

If the secret being stored is an identity token, the Username should be set to <token>.
The store command can write error messages to STDOUT that the docker engine will show if there
was an issue.
The get command takes a string payload from the standard input. That payload carries the server
address that the docker engine needs credentials for. This is an example of that
payload: https://index.docker.io/v1.
The get command writes a JSON payload to STDOUT. Docker reads the user name and password
from this payload:
{
"Username": "david",
"Secret": "passw0rd1"
}

The erase command takes a string payload from STDIN. That payload carries the server address that
the docker engine wants to remove credentials for. This is an example of that
payload: https://index.docker.io/v1.
The erase command can write error messages to STDOUT that the docker engine will show if there
was an issue.
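As a sketch of this protocol, the toy helper below implements store, get, and erase against a flat file. It is for illustration only: a real helper would talk to a keychain, and this get echoes the whole stored payload rather than the trimmed {Username, Secret} JSON Docker expects.

```shell
# Toy credential helper following the store/get/erase protocol (illustration only).
cat > /tmp/docker-credential-demo <<'EOF'
#!/bin/sh
DB=/tmp/demo-creds.jsonl
touch "$DB"
case "$1" in
  store)
    # JSON payload arrives on STDIN; append it to the flat file.
    cat >> "$DB"; echo >> "$DB" ;;
  get)
    # STDIN carries just the server address; print the matching payload.
    server=$(cat)
    grep -F "$server" "$DB" | tail -n 1 ;;
  erase)
    # STDIN carries the server address whose credentials should be removed.
    server=$(cat)
    grep -Fv "$server" "$DB" > "$DB.tmp"; mv "$DB.tmp" "$DB" ;;
  *)
    echo "usage: $0 store|get|erase" >&2; exit 1 ;;
esac
EOF
chmod +x /tmp/docker-credential-demo

# Exercise it the way the docker engine would:
printf '{"ServerURL":"https://index.docker.io/v1","Username":"david","Secret":"passw0rd1"}' \
  | /tmp/docker-credential-demo store
printf 'https://index.docker.io/v1' | /tmp/docker-credential-demo get
```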

Credential helpers
Credential helpers are similar to the credential store above, but act as the designated programs to
handle credentials for specific registries. The default credential store (credsStore or the config file
itself) will not be used for operations concerning credentials of the specified registries.

CONFIGURE CREDENTIAL HELPERS

If you are currently logged in, run docker logout to remove the credentials from the default store.
Credential helpers are specified in a similar way to credsStore, but allow for multiple helpers to be
configured at a time. Keys specify the registry domain, and values specify the suffix of the program
to use (i.e. everything after docker-credential-). For example:
{
"credHelpers": {
"registry.example.com": "registryhelper",
"awesomereg.example.org": "hip-star",
"unicorn.example.io": "vcbait"
}
}

docker logout
Estimated reading time: 1 minute

Description
Log out from a Docker registry

Usage
docker logout [SERVER]

Parent command
Command Description

docker The base command for the Docker CLI.

Examples
$ docker logout localhost:8080

docker logs
Estimated reading time: 3 minutes

Description
Fetch the logs of a container

Usage
docker logs [OPTIONS] CONTAINER

Options
Name, shorthand Default Description

--details Show extra details provided to logs

--follow , -f Follow log output

--since Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)

--tail all Number of lines to show from the end of the logs

--timestamps , -t Show timestamps

--until Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes) (API 1.35+)

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker logs command batch-retrieves logs present at the time of execution.
Note: this command is only functional for containers that are started with the json-file
or journald logging driver.

For more information about selecting and configuring logging drivers, refer to Configure logging
drivers.

The docker logs --follow command will continue streaming the new output from the
container’s STDOUT and STDERR.
Passing a negative number or a non-integer to --tail is invalid and the value is set to all in that
case.
The docker logs --timestamps command will add an RFC3339Nano timestamp, for example
2014-09-16T06:17:46.000000000Z, to each log entry. To ensure that the timestamps are aligned, the
nanosecond part of the timestamp will be padded with zero when necessary.
The docker logs --details command will add on extra attributes, such as environment variables
and labels, provided to --log-opt when creating the container.
The --since option shows only the container logs generated after a given date. You can specify the
date as an RFC 3339 date, a UNIX timestamp, or a Go duration string (e.g. 1m30s, 3h). Besides the
RFC3339 date format you may also use RFC3339Nano, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the client will be
used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp. When
providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of seconds
that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds (aka Unix
epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more than nine
digits long. You can combine the --since option with either or both of the --follow or
--tail options.
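The accepted timestamp forms are easiest to see by generating them. A sketch using date (GNU syntax, with a BSD fallback); the docker logs line is commented out because it needs a running container, and my_container is a placeholder name:

```shell
# RFC 3339-style timestamp for "1 hour ago", suitable for --since/--until.
since=$(date -u -d '1 hour ago' '+%Y-%m-%dT%H:%M:%S' 2>/dev/null \
  || date -u -v-1H '+%Y-%m-%dT%H:%M:%S')
echo "$since"

# The Unix-timestamp form (seconds since the epoch) is also accepted.
date +%s

# Combined with --tail and --follow (container name is a placeholder):
# docker logs --since "$since" --tail 50 --follow my_container
```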

Examples
Retrieve logs until a specific point in time
In order to retrieve logs before a specific point in time, run:

$ docker run --name test -d busybox sh -c "while true; do $(echo date); sleep 1; done"
$ date
Tue 14 Nov 2017 16:40:00 CET
$ docker logs -f --until=2s test
Tue 14 Nov 2017 16:40:00 CET
Tue 14 Nov 2017 16:40:01 CET
Tue 14 Nov 2017 16:40:02 CET

docker manifest
Estimated reading time: 8 minutes
Description
Manage Docker image manifests and manifest lists

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker manifest COMMAND

Child commands
Command Description

docker manifest annotate Add additional information to a local image manifest

docker manifest create Create a local manifest list for annotating and pushing to a registry

docker manifest inspect Display an image manifest, or manifest list

docker manifest push Push a manifest list to a repository

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker manifest command by itself performs no action. In order to operate on a manifest or
manifest list, one of the subcommands must be used.

A single manifest is information about an image, such as layers, size, and digest. The docker
manifest command also gives users additional information such as the os and architecture an image
was built for.

A manifest list is a list of image layers that is created by specifying one or more (ideally more than
one) image names. It can then be used in the same way as an image name in docker
pull and docker run commands, for example.

Ideally a manifest list is created from images that are identical in function for different os/arch
combinations. For this reason, manifest lists are often referred to as “multi-arch images”. However, a
user could create a manifest list that points to two images -- one for windows on amd64, and one for
darwin on amd64.

manifest inspect
$ docker manifest inspect --help

Usage: docker manifest inspect [OPTIONS] [MANIFEST_LIST] MANIFEST

Display an image manifest, or manifest list

Options:
--help Print usage
--insecure Allow communication with an insecure registry
-v, --verbose Output additional info including layers and platform

manifest create
Usage: docker manifest create MANIFEST_LIST MANIFEST [MANIFEST...]
Create a local manifest list for annotating and pushing to a registry

Options:
-a, --amend Amend an existing manifest list
--insecure Allow communication with an insecure registry
--help Print usage

manifest annotate
Usage: docker manifest annotate [OPTIONS] MANIFEST_LIST MANIFEST

Add additional information to a local image manifest

Options:
--arch string Set architecture
--help Print usage
--os string Set operating system
--os-features stringSlice Set operating system feature
--variant string Set architecture variant

manifest push
Usage: docker manifest push [OPTIONS] MANIFEST_LIST

Push a manifest list to a repository

Options:
--help Print usage
--insecure Allow push to an insecure registry
-p, --purge Remove the local manifest list after push

Working with insecure registries


The manifest command interacts solely with a Docker registry. Because of this, it has no way to
query the engine for the list of allowed insecure registries. To allow the CLI to interact with an
insecure registry, some docker manifest commands have an --insecure flag. For each transaction,
such as a create, which queries a registry, the --insecure flag must be specified. This flag tells the
CLI that this registry call may ignore security concerns like missing or self-signed certificates.
Likewise, on a manifest push to an insecure registry, the --insecure flag must be specified. If this is
not used with an insecure registry, the manifest command fails to find a registry that meets the
default requirements.

Examples
Inspect an image’s manifest object
$ docker manifest inspect hello-world
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1520,
"digest":
"sha256:1815c82652c03bfd8644afda26fb184f2ed891d921b20a0703b46768f9755c57"
},
"layers": [
{
"mediaType":
"application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 972,
"digest":
"sha256:b04784fba78d739b526e27edc02a5a8cd07b1052e9283f5fc155828f4b614c28"
}
]
}

Inspect an image’s manifest and get the os/arch info


The docker manifest inspect command takes an optional --verbose flag that gives you the image’s
name (Ref), and architecture and os (Platform).

Just as with other docker commands that take image names, you can refer to an image with or
without a tag, or by digest (e.g.
hello-world@sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f).

Here is an example of inspecting an image’s manifest with the --verbose flag:


$ docker manifest inspect --verbose hello-world
{
"Ref": "docker.io/library/hello-world:latest",
"Digest":
"sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f",
"SchemaV2Manifest": {
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType":
"application/vnd.docker.container.image.v1+json",
"size": 1520,
"digest":
"sha256:1815c82652c03bfd8644afda26fb184f2ed891d921b20a0703b46768f9755c57"
},
"layers": [
{
"mediaType":
"application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 972,
"digest":
"sha256:b04784fba78d739b526e27edc02a5a8cd07b1052e9283f5fc155828f4b614c28"
}
]
},
"Platform": {
"architecture": "amd64",
"os": "linux"
}
}

Create and push a manifest list


To create a manifest list, you first create the manifest list locally by specifying the constituent images
you would like to have included in your manifest list. Keep in mind that this is pushed to a registry, so
if you want to push to a registry other than the docker registry, you need to create your manifest list
with the registry name or IP and port. This is similar to tagging an image and pushing it to a foreign
registry.
After you have created your local copy of the manifest list, you may optionally annotate it.
Annotations allowed are the architecture and operating system (overriding the image’s current
values), os features, and an architecture variant.
Finally, you need to push your manifest list to the desired registry. Below are descriptions of these
three commands, and an example putting them all together.
$ docker manifest create 45.55.81.106:5000/coolapp:v1 \
45.55.81.106:5000/coolapp-ppc64le-linux:v1 \
45.55.81.106:5000/coolapp-arm-linux:v1 \
45.55.81.106:5000/coolapp-amd64-linux:v1 \
45.55.81.106:5000/coolapp-amd64-windows:v1
Created manifest list 45.55.81.106:5000/coolapp:v1
$ docker manifest annotate 45.55.81.106:5000/coolapp:v1 45.55.81.106:5000/coolapp-
arm-linux --arch arm
$ docker manifest push 45.55.81.106:5000/coolapp:v1
Pushed manifest
45.55.81.106:5000/coolapp@sha256:9701edc932223a66e49dd6c894a11db8c2cf4eccd1414f1ec105
a623bf16b426 with digest:
sha256:f67dcc5fc786f04f0743abfe0ee5dae9bd8caf8efa6c8144f7f2a43889dc513b
Pushed manifest
45.55.81.106:5000/coolapp@sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d722
61b0d26ff74f with digest:
sha256:b64ca0b60356a30971f098c92200b1271257f100a55b351e6bbe985638352f3a
Pushed manifest
45.55.81.106:5000/coolapp@sha256:39dc41c658cf25f33681a41310372f02728925a54aac3598310b
fb1770615fc9 with digest:
sha256:df436846483aff62bad830b730a0d3b77731bcf98ba5e470a8bbb8e9e346e4e8
Pushed manifest
45.55.81.106:5000/coolapp@sha256:f91b1145cd4ac800b28122313ae9e88ac340bb3f1e3a4cd3e59a
3648650f3275 with digest:
sha256:5bb8e50aa2edd408bdf3ddf61efb7338ff34a07b762992c9432f1c02fc0e5e62
sha256:050b213d49d7673ba35014f21454c573dcbec75254a08f4a7c34f66a47c06aba

Inspect a manifest list


$ docker manifest inspect coolapp:v1
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 424,
"digest":
"sha256:f67dcc5fc786f04f0743abfe0ee5dae9bd8caf8efa6c8144f7f2a43889dc513b",
"platform": {
"architecture": "arm",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 424,
"digest":
"sha256:b64ca0b60356a30971f098c92200b1271257f100a55b351e6bbe985638352f3a",
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 425,
"digest":
"sha256:df436846483aff62bad830b730a0d3b77731bcf98ba5e470a8bbb8e9e346e4e8",
"platform": {
"architecture": "ppc64le",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 425,
"digest":
"sha256:5bb8e50aa2edd408bdf3ddf61efb7338ff34a07b762992c9432f1c02fc0e5e62",
"platform": {
"architecture": "s390x",
"os": "linux"
}
}
]
}
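A manifest list like the one above is plain JSON, so the per-platform entries can be extracted programmatically. A minimal Python sketch over a trimmed-down copy of that output (digests truncated for brevity):

```python
import json

# Trimmed-down sample of a manifest list, as printed by
# `docker manifest inspect` above. Digests are truncated.
manifest_list = json.loads("""
{
  "schemaVersion": 2,
  "manifests": [
    {"digest": "sha256:f67d...", "platform": {"architecture": "arm", "os": "linux"}},
    {"digest": "sha256:b64c...", "platform": {"architecture": "amd64", "os": "linux"}}
  ]
}
""")

# Print one os/architecture pair per constituent manifest.
for entry in manifest_list["manifests"]:
    platform = entry["platform"]
    print(platform["os"] + "/" + platform["architecture"], entry["digest"])
```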

Push to an insecure registry


Here is an example of creating and pushing a manifest list using a known insecure registry.

$ docker manifest create --insecure myprivateregistry.mycompany.com/repo/image:1.0 \


myprivateregistry.mycompany.com/repo/image-linux-ppc64le:1.0 \
myprivateregistry.mycompany.com/repo/image-linux-s390x:1.0 \
myprivateregistry.mycompany.com/repo/image-linux-arm:1.0 \
myprivateregistry.mycompany.com/repo/image-linux-armhf:1.0 \
myprivateregistry.mycompany.com/repo/image-windows-amd64:1.0 \
myprivateregistry.mycompany.com/repo/image-linux-amd64:1.0
$ docker manifest push --insecure myprivateregistry.mycompany.com/repo/image:1.0

Note that the --insecure flag is not required to annotate a manifest list, since annotations are applied
to a locally stored copy of a manifest list. You can also skip the --insecure flag if you are performing
a docker manifest inspect on a locally stored manifest list. Keep in mind that locally stored manifest
lists are never used by the engine on a docker pull.
docker manifest annotate

Description
Add additional information to a local image manifest

This command is experimental.

This command is experimental on the Docker client and should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only, as they may change between releases without warning or be
removed entirely from a future release. Docker does not offer support for experimental features. For
more information, see Experimental features.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker manifest annotate [OPTIONS] MANIFEST_LIST MANIFEST

Options
Name, shorthand Default Description

--arch Set architecture

--os Set operating system

--os-features Set operating system feature



--variant Set architecture variant

Parent command
Command Description

docker manifest Manage Docker image manifests and manifest lists

Related commands
Command Description

docker manifest annotate Add additional information to a local image manifest

docker manifest create Create a local manifest list for annotating and pushing to a registry

docker manifest inspect Display an image manifest, or manifest list

docker manifest push Push a manifest list to a repository

docker manifest create



Description
Create a local manifest list for annotating and pushing to a registry

This command is experimental.

This command is experimental on the Docker client and should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only, as they may change between releases without warning or be
removed entirely from a future release. Docker does not offer support for experimental features. For
more information, see Experimental features.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker manifest create MANIFEST_LIST MANIFEST [MANIFEST...]

Options
Name, shorthand Default Description

--amend , -a Amend an existing manifest list

--insecure Allow communication with an insecure registry

Parent command
Command Description

docker manifest Manage Docker image manifests and manifest lists

Related commands
Command Description

docker manifest annotate Add additional information to a local image manifest

docker manifest create Create a local manifest list for annotating and pushing to a registry

docker manifest inspect Display an image manifest, or manifest list

docker manifest push Push a manifest list to a repository


docker manifest inspect

Description
Display an image manifest, or manifest list

This command is experimental.

This command is experimental on the Docker client and should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only, as they may change between releases without warning or be
removed entirely from a future release. Docker does not offer support for experimental features. For
more information, see Experimental features.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker manifest inspect [OPTIONS] [MANIFEST_LIST] MANIFEST

Options
Name, shorthand Default Description

--insecure Allow communication with an insecure registry

--verbose , -v Output additional info including layers and platform


Parent command
Command Description

docker manifest Manage Docker image manifests and manifest lists

Related commands
Command Description

docker manifest annotate Add additional information to a local image manifest

docker manifest create Create a local manifest list for annotating and pushing to a registry

docker manifest inspect Display an image manifest, or manifest list

docker manifest push Push a manifest list to a repository

docker manifest push



Description
Push a manifest list to a repository

This command is experimental.

This command is experimental on the Docker client and should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only, as they may change between releases without warning or be
removed entirely from a future release. Docker does not offer support for experimental features. For
more information, see Experimental features.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker manifest push [OPTIONS] MANIFEST_LIST

Options
Name, shorthand Default Description

--insecure Allow push to an insecure registry

--purge , -p Remove the local manifest list after push

Parent command
Command Description

docker manifest Manage Docker image manifests and manifest lists

Related commands
Command Description

docker manifest annotate Add additional information to a local image manifest

docker manifest create Create a local manifest list for annotating and pushing to a registry

docker manifest inspect Display an image manifest, or manifest list

docker manifest push Push a manifest list to a repository

docker network

Description
Manage networks

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker network COMMAND

Child commands
Command Description

docker network connect Connect a container to a network

docker network create Create a network

docker network disconnect Disconnect a container from a network

docker network inspect Display detailed information on one or more networks

docker network ls List networks

docker network prune Remove all unused networks

docker network rm Remove one or more networks

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage networks. You can use subcommands to create, inspect, list, remove, prune, connect, and
disconnect networks.

docker network connect



Description
Connect a container to a network

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker network connect [OPTIONS] NETWORK CONTAINER

Options
Name, shorthand Default Description

--alias Add network-scoped alias for the container

--driver-opt Driver options for the network

--ip IPv4 address (e.g., 172.30.100.104)

--ip6 IPv6 address (e.g., 2001:db8::33)

--link Add link to another container

--link-local-ip Add a link-local address for the container

Parent command
Command Description

docker network Manage networks

Related commands
Command Description

docker network connect Connect a container to a network

docker network create Create a network

docker network disconnect Disconnect a container from a network

docker network inspect Display detailed information on one or more networks

docker network ls List networks

docker network prune Remove all unused networks

docker network rm Remove one or more networks

Extended description
Connects a container to a network. You can connect a container by name or by ID. Once connected,
the container can communicate with other containers in the same network.

Examples
Connect a running container to a network
$ docker network connect multi-host-network container1

Connect a container to a network when it starts


You can also use the docker run --network=<network-name> option to start a container and
immediately connect it to a network.
$ docker run -itd --network=multi-host-network busybox
Specify the IP address a container will use on a given network
You can specify the IP address you want to be assigned to the container’s interface.

$ docker network connect --ip 10.10.36.122 multi-host-network container2

Use the legacy --link option


You can use the --link option to link another container with a preferred alias:
$ docker network connect --link container1:c1 multi-host-network container2

Create a network alias for a container


The --alias option can be used to resolve the container by another name in the network being
connected to:
$ docker network connect --alias db --alias mysql multi-host-network container2

Network implications of stopping, pausing, or restarting containers
You can pause, restart, and stop containers that are connected to a network. A container connects
to its configured networks when it runs.

If specified, the container’s IP address(es) are reapplied when a stopped container is restarted. If the
IP address is no longer available, the container fails to start. One way to guarantee that the IP
address is available is to specify an --ip-range when creating the network, and choose the static IP
address(es) from outside that range. This ensures that the IP address is not given to another
container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-
network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
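The relationship between the subnet, the dynamic --ip-range, and the static address in the example above can be verified with Python’s standard ipaddress module (a local sanity check, not part of the Docker CLI):

```python
import ipaddress

subnet = ipaddress.ip_network("172.20.0.0/16")
ip_range = ipaddress.ip_network("172.20.240.0/20")
static_ip = ipaddress.ip_address("172.20.128.2")

# The static address must belong to the network's subnet...
print(static_ip in subnet)    # True
# ...but lie outside the range Docker allocates from dynamically,
# so it can never be handed out to another container.
print(static_ip in ip_range)  # False
```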

To verify the container is connected, use the docker network inspect command. Use docker
network disconnect to remove a container from the network.
Once connected to a network, containers can communicate using only another container’s IP address
or name. For overlay networks or custom plugins that support multi-host connectivity, containers
connected to the same multi-host network but launched from different Engines can also
communicate in this way.
You can connect a container to one or more networks. The networks need not be the same type. For
example, you can connect a single container to bridge and overlay networks.

docker network create



Description
Create a network

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker network create [OPTIONS] NETWORK

Options
Name, shorthand Default Description

API 1.25+
--attachable
Enable manual container attachment

--aux-address Auxiliary IPv4 or IPv6 addresses used by Network driver

API 1.30+
--config-from
The network from which to copy the configuration

API 1.30+
--config-only
Create a configuration only network

--driver , -d bridge Driver to manage the Network

--gateway IPv4 or IPv6 Gateway for the master subnet

API 1.29+
--ingress
Create swarm routing-mesh network

--internal Restrict external access to the network



--ip-range Allocate container IP from a sub-range

--ipam-driver IP Address Management Driver

--ipam-opt Set IPAM driver specific options

--ipv6 Enable IPv6 networking

--label Set metadata on a network

--opt , -o Set driver specific options

API 1.30+
--scope
Control the network’s scope

--subnet Subnet in CIDR format that represents a network segment

Parent command
Command Description

docker network Manage networks

Related commands
Command Description

docker network connect Connect a container to a network

docker network create Create a network

docker network disconnect Disconnect a container from a network

docker network inspect Display detailed information on one or more networks

docker network ls List networks

docker network prune Remove all unused networks



docker network rm Remove one or more networks

Extended description
Creates a new network. The DRIVER accepts bridge or overlay which are the built-in network drivers.
If you have installed a third party or your own custom network driver you can specify
that DRIVER here also. If you don’t specify the --driver option, the command automatically creates
a bridge network for you. When you install Docker Engine it creates a bridge network automatically.
This network corresponds to the docker0 bridge that Engine has traditionally relied on. When you
launch a new container with docker run it automatically connects to this bridge network. You cannot
remove this default bridge network, but you can create new ones using the network
create command.

$ docker network create -d bridge my-bridge-network

Bridge networks are isolated networks on a single Engine installation. If you want to create a
network that spans multiple Docker hosts each running an Engine, you must create
an overlay network. Unlike bridge networks, overlay networks require some pre-existing conditions
before you can create one. These conditions are:

 Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (Distributed
store) key-value stores.
 A cluster of hosts with connectivity to the key-value store.
 A properly configured Engine daemon on each host in the cluster.

The dockerd options that support the overlay network are:

 --cluster-store
 --cluster-store-opt
 --cluster-advertise

To read more about these options and how to configure them, see “Get started with multi-host
network”.

While not required, it is a good idea to install Docker Swarm to manage the cluster that makes up
your network. Swarm provides sophisticated discovery and server management tools that can assist
your implementation.
Once you have prepared the overlay network prerequisites you simply choose a Docker host in the
cluster and issue the following to create the network:
$ docker network create -d overlay my-multihost-network

Network names must be unique. The Docker daemon attempts to identify naming conflicts but this is
not guaranteed. It is the user’s responsibility to avoid name conflicts.

Overlay network limitations


You should create overlay networks with /24 blocks (the default), which limits you to 256 IP
addresses, when you create networks using the default VIP-based endpoint-mode. This
recommendation addresses limitations with swarm mode. If you need more than 256 IP addresses,
do not increase the IP block size. You can either use dnsrr endpoint mode with an external load
balancer, or use multiple smaller overlay networks. See Configure service discovery for more
information about different endpoint modes.

Examples
Connect containers
When you start a container, use the --network flag to connect it to a network. This example adds
the busybox container to the mynet network:
$ docker run -itd --network=mynet busybox

If you want to add a container to a network after the container is already running, use the docker
network connect subcommand.
You can connect multiple containers to the same network. Once connected, the containers can
communicate using only another container’s IP address or name. For overlay networks or custom
plugins that support multi-host connectivity, containers connected to the same multi-host network but
launched from different Engines can also communicate in this way.
You can disconnect a container from a network using the docker network disconnect command.

Specify advanced options


When you create a network, Engine creates a non-overlapping subnetwork for the network by
default. This subnetwork is not a subdivision of an existing network. It is purely for IP-addressing
purposes. You can override this default and specify subnetwork values directly using the --
subnet option. On a bridge network you can only create a single subnet:

$ docker network create --driver=bridge --subnet=192.168.0.0/16 br0

Additionally, you can also specify the --gateway, --ip-range, and --aux-address options.
$ docker network create \
--driver=bridge \
--subnet=172.28.0.0/16 \
--ip-range=172.28.5.0/24 \
--gateway=172.28.5.254 \
br0

If you omit the --gateway flag the Engine selects one for you from inside a preferred pool.
For overlay networks and for network driver plugins that support it you can create multiple
subnetworks. This example uses two /25 subnets, following the current guidance of not having more
than 256 IPs in a single overlay network. Each of the subnetworks has 126 usable addresses.
$ docker network create -d overlay \
--subnet=192.168.1.0/25 \
--subnet=192.170.2.0/25 \
--gateway=192.168.1.100 \
--gateway=192.170.2.100 \
--aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \
--aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \
my-multihost-network

Be sure that your subnetworks do not overlap. If they do, the network create fails and Engine returns
an error.
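Both constraints above — per-subnet capacity and non-overlap — can be sanity-checked locally with Python’s standard ipaddress module before running the command (a sketch; Docker performs its own overlap check when the network is created):

```python
import ipaddress

subnets = [
    ipaddress.ip_network("192.168.1.0/25"),
    ipaddress.ip_network("192.170.2.0/25"),
]

# A /25 block holds 128 addresses; subtracting the network and
# broadcast addresses leaves 126 usable addresses per subnetwork.
for subnet in subnets:
    print(subnet, subnet.num_addresses - 2)

# The two subnetworks must not overlap.
print(subnets[0].overlaps(subnets[1]))  # False
```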

Bridge driver options


When creating a custom network, the default network driver (i.e. bridge) has additional options that
can be passed. The following are those options and the equivalent docker daemon flags used for
docker0 bridge:
Option Equivalent Description

com.docker.network.bridge.name - Bridge name to be used when creating the Linux bridge

com.docker.network.bridge.enable_ip_masquerade --ip-masq Enable IP masquerading

com.docker.network.bridge.enable_icc --icc Enable or disable inter-container connectivity

com.docker.network.bridge.host_binding_ipv4 --ip Default IP when binding container ports

com.docker.network.driver.mtu --mtu Set the containers network MTU

The following arguments can be passed to docker network create for any network driver, again with
their approximate equivalents to docker daemon.
Argument Equivalent Description

--gateway - IPv4 or IPv6 Gateway for the master subnet

--ip-range --fixed-cidr Allocate IPs from a range

--internal - Restrict external access to the network

--ipv6 --ipv6 Enable IPv6 networking

--subnet --bip Subnet for network

For example, use the -o or --opt option to specify an IP address binding when publishing ports:
$ docker network create \
-o "com.docker.network.bridge.host_binding_ipv4"="172.19.0.1" \
simple-network

Network internal mode


By default, when you connect a container to an overlay network, Docker also connects a bridge
network to it to provide external connectivity. If you want to create an externally
isolated overlay network, you can specify the --internal option.

Network ingress mode


You can create the network which will be used to provide the routing-mesh in the swarm cluster. You
do so by specifying --ingress when creating the network. Only one ingress network can be created
at a time. The network can be removed only if no services depend on it. Any option available when
creating an overlay network is also available when creating the ingress network, except for the
--attachable option.

$ docker network create -d overlay \


--subnet=10.11.0.0/16 \
--ingress \
--opt com.docker.network.driver.mtu=9216 \
--opt encrypted=true \
my-ingress-network

docker network disconnect



Description
Disconnect a container from a network

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker network disconnect [OPTIONS] NETWORK CONTAINER

Options
Name, shorthand Default Description

--force , -f Force the container to disconnect from a network

Parent command
Command Description

docker network Manage networks

Related commands
Command Description

docker network connect Connect a container to a network

docker network create Create a network

docker network disconnect Disconnect a container from a network

docker network inspect Display detailed information on one or more networks

docker network ls List networks

docker network prune Remove all unused networks

docker network rm Remove one or more networks

Extended description
Disconnects a container from a network. The container must be running to disconnect it from the
network.

Examples
$ docker network disconnect multi-host-network container1

docker network inspect



Description
Display detailed information on one or more networks
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker network inspect [OPTIONS] NETWORK [NETWORK...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

--verbose , -v Verbose output for diagnostics

Parent command
Command Description

docker network Manage networks

Related commands
Command Description

docker network connect Connect a container to a network

docker network create Create a network

docker network disconnect Disconnect a container from a network

docker network inspect Display detailed information on one or more networks

docker network ls List networks

docker network prune Remove all unused networks

docker network rm Remove one or more networks


Extended description
Returns information about one or more networks. By default, this command renders all results in a
JSON object.

docker network ls

Description
List networks

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker network ls [OPTIONS]

Options
Name, shorthand Default Description

--filter , -f Provide filter values (e.g. ‘driver=bridge’)

--format Pretty-print networks using a Go template

--no-trunc Do not truncate the output

--quiet , -q Only display network IDs

Parent command
Command Description

docker network Manage networks


Related commands
Command Description

docker network connect Connect a container to a network

docker network create Create a network

docker network disconnect Disconnect a container from a network

docker network inspect Display detailed information on one or more networks

docker network ls List networks

docker network prune Remove all unused networks

docker network rm Remove one or more networks

Extended description
Lists all the networks the Engine daemon knows about. This includes the networks that span across
multiple hosts in a cluster.

Examples
List all networks
$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
7fca4eb8c647 bridge bridge local
9f904ee27bf5 none null local
cf03ee007fb4 host host local
78b03ee04fc4 multi-host overlay swarm

Use the --no-trunc option to display the full network id:


$ docker network ls --no-trunc
NETWORK ID NAME
DRIVER SCOPE
18a2866682b85619a026c81b98a5e375bd33e1b0936a26cc497c283d27bae9b3 none
null local
c288470c46f6c8949c5f7e5099b5b7947b07eabe8d9a27d79a9cbf111adcbf47 host
host local
7b369448dccbf865d397c8d2be0cda7cf7edc6b0945f77d2529912ae917a0185 bridge
bridge local
95e74588f40db048e86320c6526440c504650a1ff3e9f7d60a497c4d2163e5bd foo
bridge local
63d1ff1f77b07ca51070a8c227e962238358bd310bde1529cf62e6c307ade161 dev
bridge local

Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz"). Multiple filter flags are combined
as an OR filter. For example, -f type=custom -f type=builtin returns
both custom and builtin networks.

The currently supported filters are:

 driver
 id (network’s id)
 label (label=<key> or label=<key>=<value>)
 name (network’s name)
 scope (swarm|global|local)
 type (custom|builtin)
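The OR semantics of repeated filter flags can be sketched in a few lines of Python (an illustration of the behaviour described above, not Docker’s actual implementation):

```python
def or_filter(networks, key, values):
    """Keep networks whose `key` matches ANY of the given values,
    mirroring how repeated --filter flags for one key are OR-combined."""
    return [n for n in networks if n.get(key) in values]

networks = [
    {"name": "bridge", "type": "builtin"},
    {"name": "dev", "type": "custom"},
    {"name": "foo", "type": "custom"},
]

# Equivalent of: docker network ls -f type=custom -f type=builtin
print([n["name"] for n in or_filter(networks, "type", {"custom", "builtin"})])
```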

DRIVER

The driver filter matches networks based on their driver.


The following example matches networks with the bridge driver:
$ docker network ls --filter driver=bridge
NETWORK ID NAME DRIVER SCOPE
db9db329f835 test1 bridge local
f6e212da9dfd test2 bridge local

ID

The id filter matches on all or part of a network’s ID.


The following filter matches all networks with an ID containing the 63d1ff1f77b0... string.
$ docker network ls --filter
id=63d1ff1f77b07ca51070a8c227e962238358bd310bde1529cf62e6c307ade161
NETWORK ID NAME DRIVER SCOPE
63d1ff1f77b0 dev bridge local

You can also filter for a substring in an ID as this shows:

$ docker network ls --filter id=95e74588f40d


NETWORK ID NAME DRIVER SCOPE
95e74588f40d foo bridge local

$ docker network ls --filter id=95e


NETWORK ID NAME DRIVER SCOPE
95e74588f40d foo bridge local

LABEL

The label filter matches networks based on the presence of a label alone or a label and a value.
The following filter matches networks with the usage label regardless of its value.
$ docker network ls -f "label=usage"
NETWORK ID NAME DRIVER SCOPE
db9db329f835 test1 bridge local
f6e212da9dfd test2 bridge local

The following filter matches networks with the usage label with the prod value.
$ docker network ls -f "label=usage=prod"
NETWORK ID NAME DRIVER SCOPE
f6e212da9dfd test2 bridge local

NAME

The name filter matches on all or part of a network’s name.


The following filter matches all networks with a name containing the foobar string.
$ docker network ls --filter name=foobar
NETWORK ID NAME DRIVER SCOPE
06e7eef0a170 foobar bridge local
You can also filter for a substring in a name as this shows:

$ docker network ls --filter name=foo


NETWORK ID NAME DRIVER SCOPE
95e74588f40d foo bridge local
06e7eef0a170 foobar bridge local

SCOPE

The scope filter matches networks based on their scope.


The following example matches networks with the swarm scope:
$ docker network ls --filter scope=swarm
NETWORK ID NAME DRIVER SCOPE
xbtm0v4f1lfh ingress overlay swarm
ic6r88twuu92 swarmnet overlay swarm

The following example matches networks with the local scope:


$ docker network ls --filter scope=local
NETWORK ID NAME DRIVER SCOPE
e85227439ac7 bridge bridge local
0ca0e19443ed host host local
ca13cc149a36 localnet bridge local
f9e115d2de35 none null local

TYPE

The type filter supports two values: builtin displays predefined networks (bridge, none, host),
whereas custom displays user defined networks.

The following filter matches all user defined networks:

$ docker network ls --filter type=custom


NETWORK ID NAME DRIVER SCOPE
95e74588f40d foo bridge local
63d1ff1f77b0 dev bridge local

This flag allows for batch cleanup. For example, use this filter to delete all user defined
networks:
$ docker network rm `docker network ls --filter type=custom -q`

A warning will be issued when trying to remove a network that has containers attached.

Formatting
The formatting options (--format) pretty-prints networks output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Network ID

.Name Network name

.Driver Network driver

.Scope Network scope (local, global)

.IPv6 Whether IPv6 is enabled on the network or not.

.Internal Whether the network is internal or not.

.Labels All labels assigned to the network.

.Label Value of a specific label for this network. For example {{.Label "project.version"}}

.CreatedAt Time when the network was created

When using the --format option, the network ls command will either output the data exactly as the
template declares or, when using the table directive, include column headers as well.
The following example uses a template without headers and outputs the ID and Driver entries
separated by a colon for all networks:
$ docker network ls --format "{{.ID}}: {{.Driver}}"
afaaab448eb2: bridge
d1584f8dc718: host
391df270dc66: null
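With the table directive mentioned above, the same placeholders produce a headed listing; a minimal sketch (the column choice is illustrative):

```shell
# Print a table of network names and scopes, with column headers.
docker network ls --format "table {{.Name}}\t{{.Scope}}"
```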

docker network prune



Description
Remove all unused networks

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker network prune [OPTIONS]

Options
Name, shorthand Default Description

--filter Provide filter values (e.g. 'until=&lt;timestamp&gt;')

--force , -f Do not prompt for confirmation

Parent command
Command Description

docker network Manage networks

Related commands
Command Description

docker network connect Connect a container to a network

docker network create Create a network

docker network disconnect Disconnect a container from a network

docker network inspect Display detailed information on one or more networks



docker network ls List networks

docker network prune Remove all unused networks

docker network rm Remove one or more networks

Extended description
Remove all unused networks. Unused networks are those which are not referenced by any
containers.

Examples
$ docker network prune

WARNING! This will remove all networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
n1
n2

Filtering
The filtering flag (--filter) format is "key=value". If there is more than one filter, pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 until (<timestamp>) - only remove networks created before given timestamp


 label (label=<key>, label=<key>=<value>, label!=<key>, or label!=<key>=<value>) - only
remove networks with (or without, in case label!=... is used) the specified labels.

The until filter accepts Unix timestamps, date-formatted timestamps, or Go duration strings
(e.g. 10m, 1h30m) computed relative to the daemon machine's time. Supported formats for
date-formatted timestamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the daemon
is used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp.
When providing Unix timestamps, enter seconds[.nanoseconds], where seconds is the number of
seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds
(aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more
than nine digits long.
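The three accepted forms of the until filter might look like this (the timestamp values are illustrative):

```shell
# Go duration string: prune networks created more than 90 minutes ago.
docker network prune --filter until=1h30m

# Date-formatted timestamp (daemon-local time unless an offset is given).
docker network prune --filter until=2017-01-04T13:10:00

# Unix timestamp with an optional fractional-second part.
docker network prune --filter until=1483535462.080456
```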
The label filter accepts two formats. One is the label=... (label=<key> or label=<key>=<value>),
which removes networks with the specified labels. The other format is
the label!=... (label!=<key> or label!=<key>=<value>), which removes networks without the
specified labels.
The following removes networks created more than 5 minutes ago. Note that system networks such
as bridge, host, and none will never be pruned:
$ docker network ls

NETWORK ID NAME DRIVER SCOPE


7430df902d7a bridge bridge local
ea92373fd499 foo-1-day-ago bridge local
ab53663ed3c7 foo-1-min-ago bridge local
97b91972bc3b host host local
f949d337b1f5 none null local

$ docker network prune --force --filter until=5m

Deleted Networks:
foo-1-day-ago

$ docker network ls

NETWORK ID NAME DRIVER SCOPE


7430df902d7a bridge bridge local
ab53663ed3c7 foo-1-min-ago bridge local
97b91972bc3b host host local
f949d337b1f5 none null local
docker network rm

Description
Remove one or more networks

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker network rm NETWORK [NETWORK...]

Parent command
Command Description

docker network Manage networks

Related commands
Command Description

docker network connect Connect a container to a network

docker network create Create a network

docker network disconnect Disconnect a container from a network

docker network inspect Display detailed information on one or more networks

docker network ls List networks

docker network prune Remove all unused networks

docker network rm Remove one or more networks


Extended description
Removes one or more networks by name or identifier. To remove a network, you must first
disconnect any containers connected to it.

Examples
Remove a network
To remove the network named ‘my-network’:

$ docker network rm my-network

Remove multiple networks


To delete multiple networks in a single docker network rm command, provide multiple network
names or ids. The following example deletes a network with id 3695c422697f and a network
named my-network:
$ docker network rm 3695c422697f my-network

When you specify multiple networks, the command attempts to delete each in turn. If the deletion of
one network fails, the command continues to the next on the list and tries to delete that. The
command reports success or failure for each deletion.

docker node

Description
Manage Swarm nodes

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker node COMMAND

Child commands
Command Description

docker node demote Demote one or more nodes from manager in the swarm

docker node inspect Display detailed information on one or more nodes

docker node ls List nodes in the swarm

docker node promote Promote one or more nodes to manager in the swarm

docker node ps List tasks running on one or more nodes, defaults to current node

docker node rm Remove one or more nodes from the swarm

docker node update Update a node

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage nodes.

docker node demote



Description
Demote one or more nodes from manager in the swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker node demote NODE [NODE...]

Parent command
Command Description

docker node Manage Swarm nodes

Related commands
Command Description

docker node demote Demote one or more nodes from manager in the swarm

docker node inspect Display detailed information on one or more nodes

docker node ls List nodes in the swarm

docker node promote Promote one or more nodes to manager in the swarm

docker node ps List tasks running on one or more nodes, defaults to current node

docker node rm Remove one or more nodes from the swarm

docker node update Update a node

Extended description
Demotes an existing manager so that it is no longer a manager. This command targets a docker
engine that is a manager in the swarm.
Examples
$ docker node demote <node name>

docker node inspect



Description
Display detailed information on one or more nodes

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker node inspect [OPTIONS] self|NODE [NODE...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

--pretty Print the information in a human friendly format

Parent command
Command Description

docker node Manage Swarm nodes

Related commands
Command Description

docker node demote Demote one or more nodes from manager in the swarm

docker node inspect Display detailed information on one or more nodes

docker node ls List nodes in the swarm

docker node promote Promote one or more nodes to manager in the swarm

docker node ps List tasks running on one or more nodes, defaults to current node

docker node rm Remove one or more nodes from the swarm

docker node update Update a node

Extended description
Returns information about a node. By default, this command renders all results in a JSON array. You
can specify an alternate format to execute a given template for each result.
Go's text/template package describes all the details of the format.
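Because the template executes against the JSON structure shown in the example below, nested fields are reachable with the usual dotted paths; a sketch (the field paths are taken from the sample output):

```shell
# Print just the hostname and role of the current node.
docker node inspect --format '{{ .Description.Hostname }}: {{ .Spec.Role }}' self
```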

Examples
Inspect a node
$ docker node inspect swarm-manager

[
{
"ID": "e216jshn25ckzbvmwlnh5jr3g",
"Version": {
"Index": 10
},
"CreatedAt": "2017-05-16T22:52:44.9910662Z",
"UpdatedAt": "2017-05-16T22:52:45.230878043Z",
"Spec": {
"Role": "manager",
"Availability": "active"
},
"Description": {
"Hostname": "swarm-manager",
"Platform": {
"Architecture": "x86_64",
"OS": "linux"
},
"Resources": {
"NanoCPUs": 1000000000,
"MemoryBytes": 1039843328
},
"Engine": {
"EngineVersion": "17.06.0-ce",
"Plugins": [
{
"Type": "Volume",
"Name": "local"
},
{
"Type": "Network",
"Name": "overlay"
},
{
"Type": "Network",
"Name": "null"
},
{
"Type": "Network",
"Name": "host"
},
{
"Type": "Network",
"Name": "bridge"
},
{
"Type": "Network",
"Name": "overlay"
}
]
},
"TLSInfo": {
"TrustRoot": "-----BEGIN CERTIFICATE-----
\nMIIBazCCARCgAwIBAgIUOzgqU4tA2q5Yv1HnkzhSIwGyIBswCgYIKoZIzj0EAwIw\nEzERMA8GA1UEAxMIc
3dhcm0tY2EwHhcNMTcwNTAyMDAyNDAwWhcNMzcwNDI3MDAy\nNDAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZ
MBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABMbiAmET+HZyve35ujrnL2kOLBEQhFDZ5MhxAuYs96n796sFlfx
TxC1lM/2g\nAh8DI34pm3JmHgZxeBPKUURJHKWjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB\nAf8EBTAD
AQH/MB0GA1UdDgQWBBS3sjTJOcXdkls6WSY2rTx1KIJueTAKBggqhkjO\nPQQDAgNJADBGAiEAoeVWkaXgSUA
ucQmZ3Yhmx22N/cq1EPBgYHOBZmHt0NkCIQC3\nzONcJ/+WA21OXtb+vcijpUOXtNjyHfcox0N8wsLDqQ==\n
-----END CERTIFICATE-----\n",
"CertIssuerSubject": "MBMxETAPBgNVBAMTCHN3YXJtLWNh",
"CertIssuerPublicKey":
"MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExuICYRP4dnK97fm6OucvaQ4sERCEUNnkyHEC5iz3qfv3qwWV
/FPELWUz/aACHwMjfimbcmYeBnF4E8pRREkcpQ=="
}
},
"Status": {
"State": "ready",
"Addr": "168.0.32.137"
},
"ManagerStatus": {
"Leader": true,
"Reachability": "reachable",
"Addr": "168.0.32.137:2377"
}
}
]

Specify an output format


$ docker node inspect --format '{{ .ManagerStatus.Leader }}' self

false

$ docker node inspect --pretty self


ID: e216jshn25ckzbvmwlnh5jr3g
Hostname: swarm-manager
Joined at: 2017-05-16 22:52:44.9910662 +0000 utc
Status:
State: Ready
Availability: Active
Address: 172.17.0.2
Manager Status:
Address: 172.17.0.2:2377
Raft Status: Reachable
Leader: Yes
Platform:
Operating System: linux
Architecture: x86_64
Resources:
CPUs: 4
Memory: 7.704 GiB
Plugins:
Network: overlay, bridge, null, host, overlay
Volume: local
Engine Version: 17.06.0-ce
TLS Info:
TrustRoot:
-----BEGIN CERTIFICATE-----
MIIBazCCARCgAwIBAgIUOzgqU4tA2q5Yv1HnkzhSIwGyIBswCgYIKoZIzj0EAwIw
EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNTAyMDAyNDAwWhcNMzcwNDI3MDAy
NDAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABMbiAmET+HZyve35ujrnL2kOLBEQhFDZ5MhxAuYs96n796sFlfxTxC1lM/2g
Ah8DI34pm3JmHgZxeBPKUURJHKWjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
Af8EBTADAQH/MB0GA1UdDgQWBBS3sjTJOcXdkls6WSY2rTx1KIJueTAKBggqhkjO
PQQDAgNJADBGAiEAoeVWkaXgSUAucQmZ3Yhmx22N/cq1EPBgYHOBZmHt0NkCIQC3
zONcJ/+WA21OXtb+vcijpUOXtNjyHfcox0N8wsLDqQ==
-----END CERTIFICATE-----

Issuer Public Key:


MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExuICYRP4dnK97fm6OucvaQ4sERCEUNnkyHEC5iz3
qfv3qwWV/FPELWUz/aACHwMjfimbcmYeBnF4E8pRREkcpQ==
Issuer Subject: MBMxETAPBgNVBAMTCHN3YXJtLWNh

docker node ls

Description
List nodes in the swarm

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker node ls [OPTIONS]

Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Pretty-print nodes using a Go template

--quiet , -q Only display IDs


Parent command
Command Description

docker node Manage Swarm nodes

Related commands
Command Description

docker node demote Demote one or more nodes from manager in the swarm

docker node inspect Display detailed information on one or more nodes

docker node ls List nodes in the swarm

docker node promote Promote one or more nodes to manager in the swarm

docker node ps List tasks running on one or more nodes, defaults to current node

docker node rm Remove one or more nodes from the swarm

docker node update Update a node

Extended description
Lists all the nodes that the Docker Swarm manager knows about. You can filter using the -f or --
filter flag. Refer to the filtering section for more information about available filter options.

Examples
$ docker node ls

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS


1bcef6utixb0l0ca7gxuivsj0 swarm-worker2 Ready Active
38ciaotwjuritcdtn9npbnkuz swarm-worker1 Ready Active
e216jshn25ckzbvmwlnh5jr3g * swarm-manager1 Ready Active Leader
Note: In the above example output, there is a hidden column of .Self that indicates if the node is the
same node as the current docker daemon. A * (e.g., e216jshn25ckzbvmwlnh5jr3g *) means this
node is the current docker daemon.

Filtering
The filtering flag (-f or --filter) format is "key=value". If there is more than one filter, pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 id
 label
 membership
 name
 role

ID

The id filter matches all or part of a node’s id.


$ docker node ls -f id=1

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS


1bcef6utixb0l0ca7gxuivsj0 swarm-worker2 Ready Active

LABEL

The label filter matches nodes based on engine labels and on the presence of a label alone or
a label and a value. Node labels are currently not used for filtering.
The following filter matches nodes with the foo label regardless of its value.
$ docker node ls -f "label=foo"

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS


1bcef6utixb0l0ca7gxuivsj0 swarm-worker2 Ready Active

MEMBERSHIP

The membership filter matches nodes based on the presence of a membership value, accepted
or pending.
The following filter matches nodes with the membership of accepted.
$ docker node ls -f "membership=accepted"

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS


1bcef6utixb0l0ca7gxuivsj0 swarm-worker2 Ready Active
38ciaotwjuritcdtn9npbnkuz swarm-worker1 Ready Active

NAME

The name filter matches on all or part of a node hostname.


The following filter matches the nodes with a name equal to the swarm-manager1 string.
$ docker node ls -f name=swarm-manager1

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS


e216jshn25ckzbvmwlnh5jr3g * swarm-manager1 Ready Active Leader

ROLE

The role filter matches nodes based on the presence of a role and a value worker or manager.
The following filter matches nodes with the manager role.
$ docker node ls -f "role=manager"

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS


e216jshn25ckzbvmwlnh5jr3g * swarm-manager1 Ready Active Leader

Formatting
The formatting option (--format) pretty-prints nodes output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Node ID

.Self Node of the daemon (true/false); true indicates that the node is the same as the
current docker daemon

.Hostname Node hostname



.Status Node status

.Availability Node availability (“active”, “pause”, or “drain”)

.ManagerStatus Manager status of the node

.TLSStatus TLS status of the node ("Ready", or "Needs Rotation" if the node has a TLS
certificate signed by an old CA)

.EngineVersion Engine version

When using the --format option, the node ls command will either output the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the ID, Hostname, and TLS
Status entries separated by a colon for all nodes:

$ docker node ls --format "{{.ID}}: {{.Hostname}} {{.TLSStatus}}"


e216jshn25ckzbvmwlnh5jr3g: swarm-manager1 Ready
35o6tiywb700jesrt3dmllaza: swarm-worker1 Needs Rotation
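Using the table directive instead produces the same columns with headers; a sketch (the placeholder choice is illustrative):

```shell
# Headed table of node IDs, hostnames, and TLS status.
docker node ls --format "table {{.ID}}\t{{.Hostname}}\t{{.TLSStatus}}"
```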

docker node promote



Description
Promote one or more nodes to manager in the swarm

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker node promote NODE [NODE...]

Parent command
Command Description

docker node Manage Swarm nodes

Related commands
Command Description

docker node demote Demote one or more nodes from manager in the swarm

docker node inspect Display detailed information on one or more nodes

docker node ls List nodes in the swarm

docker node promote Promote one or more nodes to manager in the swarm

docker node ps List tasks running on one or more nodes, defaults to current node

docker node rm Remove one or more nodes from the swarm

docker node update Update a node

Extended description
Promotes a node to manager. This command can only be executed on a manager node.

Examples
$ docker node promote <node name>

docker node ps

Description
List tasks running on one or more nodes, defaults to current node
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker node ps [OPTIONS] [NODE...]

Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Pretty-print tasks using a Go template

--no-resolve Do not map IDs to Names

--no-trunc Do not truncate output

--quiet , -q Only display task IDs

Parent command
Command Description

docker node Manage Swarm nodes

Related commands
Command Description

docker node demote Demote one or more nodes from manager in the swarm

docker node inspect Display detailed information on one or more nodes

docker node ls List nodes in the swarm



docker node promote Promote one or more nodes to manager in the swarm

docker node ps List tasks running on one or more nodes, defaults to current node

docker node rm Remove one or more nodes from the swarm

docker node update Update a node

Extended description
Lists all the tasks on a Node that Docker knows about. You can filter using the -f or --filter flag.
Refer to the filtering section for more information about available filter options.

Examples
$ docker node ps swarm-manager1
NAME IMAGE NODE DESIRED STATE
CURRENT STATE
redis.1.7q92v0nr1hcgts2amcjyqg3pq redis:3.0.6 swarm-manager1 Running
Running 5 hours
redis.6.b465edgho06e318egmgjbqo4o redis:3.0.6 swarm-manager1 Running
Running 29 seconds
redis.7.bg8c07zzg87di2mufeq51a2qp redis:3.0.6 swarm-manager1 Running
Running 5 seconds
redis.9.dkkual96p4bb3s6b10r7coxxt redis:3.0.6 swarm-manager1 Running
Running 5 seconds
redis.10.0tgctg8h8cech4w0k0gwrmr23 redis:3.0.6 swarm-manager1 Running
Running 5 seconds

Filtering
The filtering flag (-f or --filter) format is "key=value". If there is more than one filter, pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 name
 id
 label
 desired-state

NAME

The name filter matches on all or part of a task’s name.


The following filter matches all tasks with a name containing the redis string.
$ docker node ps -f name=redis swarm-manager1

NAME IMAGE NODE DESIRED STATE


CURRENT STATE
redis.1.7q92v0nr1hcgts2amcjyqg3pq redis:3.0.6 swarm-manager1 Running
Running 5 hours
redis.6.b465edgho06e318egmgjbqo4o redis:3.0.6 swarm-manager1 Running
Running 29 seconds
redis.7.bg8c07zzg87di2mufeq51a2qp redis:3.0.6 swarm-manager1 Running
Running 5 seconds
redis.9.dkkual96p4bb3s6b10r7coxxt redis:3.0.6 swarm-manager1 Running
Running 5 seconds
redis.10.0tgctg8h8cech4w0k0gwrmr23 redis:3.0.6 swarm-manager1 Running
Running 5 seconds

ID

The id filter matches a task’s id.


$ docker node ps -f id=bg8c07zzg87di2mufeq51a2qp swarm-manager1

NAME IMAGE NODE DESIRED STATE


CURRENT STATE
redis.7.bg8c07zzg87di2mufeq51a2qp redis:3.0.6 swarm-manager1 Running
Running 5 seconds

LABEL

The label filter matches tasks based on the presence of a label alone or a label and a value.
The following filter matches tasks with the usage label regardless of its value.
$ docker node ps -f "label=usage"

NAME IMAGE NODE DESIRED STATE


CURRENT STATE
redis.6.b465edgho06e318egmgjbqo4o redis:3.0.6 swarm-manager1 Running
Running 10 minutes
redis.7.bg8c07zzg87di2mufeq51a2qp redis:3.0.6 swarm-manager1 Running
Running 9 minutes

DESIRED-STATE

The desired-state filter can take the values running, shutdown, or accepted.

Formatting
The formatting option (--format) pretty-prints tasks output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Task ID

.Name Task name

.Image Task image

.Node Node ID

.DesiredState Desired state of the task (running, shutdown, or accepted)

.CurrentState Current state of the task

.Error Error

.Ports Task published ports

When using the --format option, the node ps command will either output the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the Name and Image entries
separated by a colon for all tasks:
$ docker node ps --format "{{.Name}}: {{.Image}}"
top.1: busybox
top.2: busybox
top.3: busybox
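As with the other list commands, the table directive adds column headers; a sketch (the placeholder choice is illustrative):

```shell
# Headed table of task names, images, and current state.
docker node ps --format "table {{.Name}}\t{{.Image}}\t{{.CurrentState}}"
```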
docker node rm

Description
Remove one or more nodes from the swarm

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker node rm [OPTIONS] NODE [NODE...]

Options
Name, shorthand Default Description

--force , -f Force remove a node from the swarm

Parent command
Command Description

docker node Manage Swarm nodes

Related commands
Command Description

docker node demote Demote one or more nodes from manager in the swarm

docker node inspect Display detailed information on one or more nodes



docker node ls List nodes in the swarm

docker node promote Promote one or more nodes to manager in the swarm

docker node ps List tasks running on one or more nodes, defaults to current node

docker node rm Remove one or more nodes from the swarm

docker node update Update a node

Extended description
When run from a manager node, removes the specified nodes from a swarm.

Examples
Remove a stopped node from the swarm
$ docker node rm swarm-node-02

Node swarm-node-02 removed from swarm

Attempt to remove a running node from a swarm


Removes the specified nodes from the swarm, but only if the nodes are in the down state. If you
attempt to remove an active node, you will receive an error:

$ docker node rm swarm-node-03

Error response from daemon: rpc error: code = 9 desc = node swarm-node-03 is not
down and can't be removed

Forcibly remove an inaccessible node from a swarm


If you lose access to a worker node or need to shut it down because it has been compromised or is
not behaving as expected, you can use the --force option. This may cause transient errors or
interruptions, depending on the type of task being run on the node.
$ docker node rm --force swarm-node-03

Node swarm-node-03 removed from swarm

A manager node must be demoted to a worker node (using docker node demote) before you can
remove it from the swarm.

docker node update



Description
Update a node

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker node update [OPTIONS] NODE

Options
Name, shorthand Default Description

--availability Availability of the node ("active"|"pause"|"drain")

--label-add Add or update a node label (key=value)

--label-rm Remove a node label if exists

--role Role of the node ("worker"|"manager")


Parent command
Command Description

docker node Manage Swarm nodes

Related commands
Command Description

docker node demote Demote one or more nodes from manager in the swarm

docker node inspect Display detailed information on one or more nodes

docker node ls List nodes in the swarm

docker node promote Promote one or more nodes to manager in the swarm

docker node ps List tasks running on one or more nodes, defaults to current node

docker node rm Remove one or more nodes from the swarm

docker node update Update a node

Extended description
Update metadata about a node, such as its availability, labels, or roles.

Examples
Add label metadata to a node
Add metadata to a swarm node using node labels. You can specify a node label as a key with an
empty value:

$ docker node update --label-add foo worker1

To add multiple labels to a node, pass the --label-add flag for each label:
$ docker node update --label-add foo --label-add bar worker1
When you create a service, you can use node labels as a constraint. A constraint limits the nodes
where the scheduler deploys tasks for a service.

For example, to add a type label to identify nodes where the scheduler should deploy message
queue service tasks:
$ docker node update --label-add type=queue worker1
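A service could then be pinned to those labeled nodes with a scheduling constraint; a sketch (the service name and image are illustrative):

```shell
# Only schedule tasks for this service on nodes carrying the type=queue label.
docker service create \
  --name queue-worker \
  --constraint 'node.labels.type == queue' \
  redis:3.0.6
```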

The labels you set for nodes using docker node update apply only to the node entity within the
swarm. Do not confuse them with the docker daemon labels for dockerd.

For more information about labels, refer to apply custom metadata.

docker pause

Description
Pause all processes within one or more containers

Usage
docker pause CONTAINER [CONTAINER...]

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker pause command suspends all processes in the specified containers. On Linux, this uses
the cgroups freezer. Traditionally, when suspending a process the SIGSTOP signal is used, which is
observable by the process being suspended. With the cgroups freezer, the process is unaware of,
and unable to capture, the fact that it is being suspended and subsequently resumed. On Windows,
only Hyper-V containers can be paused.
See the cgroups freezer documentation for further details.

Examples
$ docker pause my_container
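To confirm the container is frozen and later resume it, something like the following works (the container name is illustrative):

```shell
docker pause my_container

# Paused containers are listed with a (Paused) status.
docker ps --filter status=paused

# Resume all processes in the container.
docker unpause my_container
```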

docker plugin

Description
Manage plugins

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin COMMAND

Child commands
Command Description

docker plugin create Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.

docker plugin disable Disable a plugin

docker plugin enable Enable a plugin

docker plugin inspect Display detailed information on one or more plugins

docker plugin install Install a plugin

docker plugin ls List plugins

docker plugin push Push a plugin to a registry

docker plugin rm Remove one or more plugins

docker plugin set Change settings for a plugin

docker plugin upgrade Upgrade an existing plugin

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage plugins.

docker plugin create



Description
Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and
rootfs directory.

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin create [OPTIONS] PLUGIN PLUGIN-DATA-DIR

Options
Name, shorthand Default Description

--compress Compress the context using gzip

Parent command
Command Description

docker plugin Manage plugins

Related commands
Command Description

docker plugin create Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.

docker plugin disable Disable a plugin

docker plugin enable Enable a plugin

docker plugin inspect Display detailed information on one or more plugins

docker plugin install Install a plugin

docker plugin ls List plugins

docker plugin push Push a plugin to a registry

docker plugin rm Remove one or more plugins

docker plugin set Change settings for a plugin

docker plugin upgrade Upgrade an existing plugin

Extended description
Creates a plugin. Before creating the plugin, prepare the plugin's root filesystem as well as the
config.json.

Examples
The following example shows how to create a sample plugin.
$ ls -ls /home/pluginDir

total 4
4 -rw-r--r-- 1 root root 431 Nov 7 01:40 config.json
0 drwxr-xr-x 19 root root 420 Nov 7 01:40 rootfs

$ docker plugin create plugin /home/pluginDir

plugin

$ docker plugin ls

ID NAME TAG DESCRIPTION


ENABLED
672d8144ec02 plugin latest A sample plugin for
Docker false

The plugin can subsequently be enabled for local use or pushed to the public registry.

docker plugin disable



Description
Disable a plugin

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin disable [OPTIONS] PLUGIN

Options
Name, shorthand Default Description

--force , -f Force the disable of an active plugin

Parent command
Command Description

docker plugin Manage plugins

Related commands
Command Description

docker plugin create Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.

docker plugin disable Disable a plugin

docker plugin enable Enable a plugin

docker plugin inspect Display detailed information on one or more plugins

docker plugin install Install a plugin

docker plugin ls List plugins

docker plugin push Push a plugin to a registry

docker plugin rm Remove one or more plugins

docker plugin set Change settings for a plugin

docker plugin upgrade Upgrade an existing plugin

Extended description
Disables a plugin. The plugin must be installed before it can be disabled; see docker plugin
install. Without the -f option, a plugin that has references (e.g., volumes, networks) cannot be
disabled.

Examples
The following example shows that the sample-volume-plugin plugin is installed and enabled:
$ docker plugin ls

ID NAME TAG DESCRIPTION


ENABLED
69553ca1d123 tiborvass/sample-volume-plugin latest A test
plugin for Docker true

To disable the plugin, use the following command:

$ docker plugin disable tiborvass/sample-volume-plugin


tiborvass/sample-volume-plugin

$ docker plugin ls

ID NAME TAG DESCRIPTION


ENABLED
69553ca1d123 tiborvass/sample-volume-plugin latest A test
plugin for Docker false

docker plugin enable



Description
Enable a plugin

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin enable [OPTIONS] PLUGIN

Options
Name, shorthand Default Description

--timeout 30 HTTP client timeout (in seconds)

Parent command
Command Description

docker plugin Manage plugins


Related commands
Command Description

docker plugin create Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.

docker plugin disable Disable a plugin

docker plugin enable Enable a plugin

docker plugin inspect Display detailed information on one or more plugins

docker plugin install Install a plugin

docker plugin ls List plugins

docker plugin push Push a plugin to a registry

docker plugin rm Remove one or more plugins

docker plugin set Change settings for a plugin

docker plugin upgrade Upgrade an existing plugin

Extended description
Enables a plugin. The plugin must be installed before it can be enabled, see docker plugin
install.

Examples
The following example shows that the sample-volume-plugin plugin is installed, but disabled:
$ docker plugin ls
ID NAME TAG DESCRIPTION
ENABLED
69553ca1d123 tiborvass/sample-volume-plugin latest A test
plugin for Docker false

To enable the plugin, use the following command:

$ docker plugin enable tiborvass/sample-volume-plugin

tiborvass/sample-volume-plugin

$ docker plugin ls

ID NAME TAG DESCRIPTION


ENABLED
69553ca1d123 tiborvass/sample-volume-plugin latest A test
plugin for Docker true

docker plugin inspect


Estimated reading time: 3 minutes

Description
Display detailed information on one or more plugins

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin inspect [OPTIONS] PLUGIN [PLUGIN...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

Parent command
Command Description

docker plugin Manage plugins

Related commands
Command                  Description

docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin ls         List plugins
docker plugin push       Push a plugin to a registry
docker plugin rm         Remove one or more plugins
docker plugin set        Change settings for a plugin
docker plugin upgrade    Upgrade an existing plugin
Extended description
Returns information about a plugin. By default, this command renders all results in a JSON array.

Examples
$ docker plugin inspect tiborvass/sample-volume-plugin:latest

{
"Id": "8c74c978c434745c3ade82f1bc0acf38d04990eaf494fa507c16d9f1daa99c21",
"Name": "tiborvass/sample-volume-plugin:latest",
        "PluginReference": "tiborvass/sample-volume-plugin:latest",
"Enabled": true,
"Config": {
"Mounts": [
{
"Name": "",
"Description": "",
"Settable": null,
"Source": "/data",
"Destination": "/data",
"Type": "bind",
"Options": [
"shared",
"rbind"
]
},
{
"Name": "",
"Description": "",
"Settable": null,
"Source": null,
"Destination": "/foobar",
"Type": "tmpfs",
"Options": null
}
],
"Env": [
"DEBUG=1"
],
"Args": null,
"Devices": null
},
"Manifest": {
"ManifestVersion": "v0",
"Description": "A test plugin for Docker",
"Documentation": "https://docs.docker.com/engine/extend/plugins/",
"Interface": {
"Types": [
"docker.volumedriver/1.0"
],
"Socket": "plugins.sock"
},
"Entrypoint": [
"plugin-sample-volume-plugin",
"/data"
],
"Workdir": "",
"User": {
},
"Network": {
"Type": "host"
},
"Capabilities": null,
"Mounts": [
{
"Name": "",
"Description": "",
"Settable": null,
"Source": "/data",
"Destination": "/data",
"Type": "bind",
"Options": [
"shared",
"rbind"
]
},
{
"Name": "",
"Description": "",
"Settable": null,
"Source": null,
"Destination": "/foobar",
"Type": "tmpfs",
"Options": null
}
],
"Devices": [
{
"Name": "device",
"Description": "a host device to mount",
"Settable": null,
"Path": "/dev/cpu_dma_latency"
}
],
"Env": [
{
"Name": "DEBUG",
"Description": "If set, prints debug messages",
"Settable": null,
"Value": "1"
}
],
"Args": {
"Name": "args",
"Description": "command line arguments",
"Settable": null,
"Value": [

]
}
}
}

(output formatted for readability)

Formatting the output


$ docker plugin inspect -f '{{.Id}}' tiborvass/sample-volume-plugin:latest

8c74c978c434745c3ade82f1bc0acf38d04990eaf494fa507c16d9f1daa99c21
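Because docker plugin inspect emits a JSON array by default, its output is straightforward to post-process. The following is a minimal Python sketch using a trimmed copy of the sample output above; in practice you would capture the output with subprocess rather than hard-coding it.

```python
import json

# Trimmed sample of `docker plugin inspect` output (a JSON array,
# one object per plugin). In practice, capture it with e.g.:
#   subprocess.run(["docker", "plugin", "inspect", name], capture_output=True)
raw = """
[
  {
    "Id": "8c74c978c434745c3ade82f1bc0acf38d04990eaf494fa507c16d9f1daa99c21",
    "Name": "tiborvass/sample-volume-plugin:latest",
    "Enabled": true,
    "Config": {"Env": ["DEBUG=1"]}
  }
]
"""

plugins = json.loads(raw)
for p in plugins:
    state = "enabled" if p["Enabled"] else "disabled"
    # Print a short ls-like summary line per plugin
    print(f'{p["Id"][:12]}  {p["Name"]}  ({state})')
```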

docker plugin install


Estimated reading time: 2 minutes

Description
Install a plugin

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin install [OPTIONS] PLUGIN [KEY=VALUE...]

Options
Name, shorthand Default Description

--alias Local name for plugin

--disable Do not enable the plugin on install

--disable-content-trust true Skip image verification

--grant-all-permissions Grant all permissions necessary to run the plugin

Parent command
Command Description

docker plugin Manage plugins

Related commands
Command                  Description

docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin ls         List plugins
docker plugin push       Push a plugin to a registry
docker plugin rm         Remove one or more plugins
docker plugin set        Change settings for a plugin
docker plugin upgrade    Upgrade an existing plugin

Extended description
Installs and enables a plugin. Docker looks first for the plugin on your Docker host. If the plugin does
not exist locally, it is pulled from the registry. Note that the minimum registry version required to
distribute plugins is 2.3.0.

Examples
The following example installs the vieux/sshfs plugin and sets its DEBUG environment variable to 1.
Installing the plugin pulls it from Docker Hub, prompts the user to accept the list of privileges that the
plugin needs, sets the plugin’s parameters, and enables the plugin.
$ docker plugin install vieux/sshfs DEBUG=1

Plugin "vieux/sshfs" is requesting the following privileges:


- network: [host]
- device: [/dev/fuse]
- capabilities: [CAP_SYS_ADMIN]
Do you grant the above permissions? [y/N] y
vieux/sshfs

After the plugin is installed, it appears in the list of plugins:

$ docker plugin ls

ID NAME TAG DESCRIPTION


ENABLED
69553ca1d123 vieux/sshfs latest sshFS plugin for Docker
true

docker plugin ls
Estimated reading time: 3 minutes

Description
List plugins

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin ls [OPTIONS]

Options
Name, shorthand Default Description

--filter , -f Provide filter values (e.g. ‘enabled=true’)

--format Pretty-print plugins using a Go template

--no-trunc Don’t truncate output

--quiet , -q Only display plugin IDs

Parent command
Command Description

docker plugin Manage plugins

Related commands
Command                  Description

docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin ls         List plugins
docker plugin push       Push a plugin to a registry
docker plugin rm         Remove one or more plugins
docker plugin set        Change settings for a plugin
docker plugin upgrade    Upgrade an existing plugin

Extended description
Lists all the plugins that are currently installed. You can install plugins using the docker plugin
install command. You can also filter using the -f or --filter flag. Refer to the filtering section for
more information about available filter options.

Examples
$ docker plugin ls

ID NAME TAG DESCRIPTION


ENABLED
69553ca1d123 tiborvass/sample-volume-plugin latest A test
plugin for Docker true

Filtering
The filtering flag (-f or --filter) format is of “key=value”. If there is more than one filter, pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 enabled (boolean - true or false, 0 or 1)
 capability (string - currently volumedriver, networkdriver, ipamdriver, logdriver, metricscollector, or authz)

ENABLED

The enabled filter matches on plugins enabled or disabled.

CAPABILITY

The capability filter matches on plugin capabilities. One plugin might have multiple capabilities.
Currently volumedriver, networkdriver, ipamdriver, logdriver, metricscollector, and authz are
supported capabilities.
$ docker plugin install --disable vieux/sshfs

Installed plugin vieux/sshfs

$ docker plugin ls --filter enabled=true

NAME TAG DESCRIPTION ENABLED
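When driving the CLI from a script, each filter becomes its own --filter flag. The following is a minimal Python sketch that assembles such an argument list; plugin_ls_args is a hypothetical helper, not part of the Docker CLI.

```python
def plugin_ls_args(filters):
    """Build a `docker plugin ls` argument list, passing one
    --filter flag per key=value pair, as the CLI expects.
    Keys are sorted only to make the result deterministic."""
    args = ["docker", "plugin", "ls"]
    for key, value in sorted(filters.items()):
        args += ["--filter", f"{key}={value}"]
    return args

print(plugin_ls_args({"enabled": "true", "capability": "volumedriver"}))
```

The resulting list can be handed to subprocess.run directly, avoiding shell quoting issues.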

Formatting
The formatting options (--format) pretty-prints plugins output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder       Description

.ID               Plugin ID
.Name             Plugin name
.Description      Plugin description
.Enabled          Whether plugin is enabled or not
.PluginReference  The reference used to push/pull from a registry

When using the --format option, the plugin ls command will either output the data exactly as the
template declares or, when using the table directive, include column headers as well.
The following example uses a template without headers and outputs the ID and Name entries
separated by a colon for all plugins:
$ docker plugin ls --format "{{.ID}}: {{.Name}}"

4be01827a72e: vieux/sshfs:latest

docker plugin rm
Estimated reading time: 2 minutes

Description
Remove one or more plugins

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin rm [OPTIONS] PLUGIN [PLUGIN...]

Options
Name, shorthand Default Description

--force , -f Force the removal of an active plugin


Parent command
Command Description

docker plugin Manage plugins

Related commands
Command                  Description

docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin ls         List plugins
docker plugin push       Push a plugin to a registry
docker plugin rm         Remove one or more plugins
docker plugin set        Change settings for a plugin
docker plugin upgrade    Upgrade an existing plugin

Extended description
Removes a plugin. You cannot remove a plugin if it is enabled; you must first disable it using
docker plugin disable before removing it (or use --force; forcing removal is not recommended,
since it can affect the functioning of running containers that use the plugin).

Examples
The following example disables and removes the sample-volume-plugin:latest plugin:
$ docker plugin disable tiborvass/sample-volume-plugin

tiborvass/sample-volume-plugin

$ docker plugin rm tiborvass/sample-volume-plugin:latest

tiborvass/sample-volume-plugin

docker plugin set


Estimated reading time: 3 minutes

Description
Change settings for a plugin

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin set PLUGIN KEY=VALUE [KEY=VALUE...]

Parent command
Command Description

docker plugin Manage plugins


Related commands
Command                  Description

docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin ls         List plugins
docker plugin push       Push a plugin to a registry
docker plugin rm         Remove one or more plugins
docker plugin set        Change settings for a plugin
docker plugin upgrade    Upgrade an existing plugin

Extended description
Change settings for a plugin. The plugin must be disabled.

The settings currently supported are:

 env variables
 source of mounts
 path of devices
 args
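The settings above are supplied to docker plugin set as KEY=VALUE pairs. A small Python sketch of that parsing rule follows; parse_settings is a hypothetical helper, not part of the Docker CLI.

```python
def parse_settings(pairs):
    """Split KEY=VALUE arguments the way `docker plugin set` receives them.
    Values may themselves contain '=', so split only on the first one."""
    settings = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"not a KEY=VALUE pair: {pair!r}")
        settings[key] = value
    return settings

print(parse_settings(["DEBUG=1", "mymount.source=/bar"]))
# {'DEBUG': '1', 'mymount.source': '/bar'}
```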

Examples
Change an environment variable
The following example changes the DEBUG environment variable on the sample-volume-plugin plugin.
$ docker plugin inspect -f {{.Settings.Env}} tiborvass/sample-volume-plugin
[DEBUG=0]

$ docker plugin set tiborvass/sample-volume-plugin DEBUG=1

$ docker plugin inspect -f {{.Settings.Env}} tiborvass/sample-volume-plugin


[DEBUG=1]

Change the source of a mount

The following example changes the source of the mymount mount on the myplugin plugin.
$ docker plugin inspect -f '{{with $mount := index .Settings.Mounts 0}}{{$mount.Source}}{{end}}' myplugin

/foo

$ docker plugin set myplugin mymount.source=/bar

$ docker plugin inspect -f '{{with $mount := index .Settings.Mounts 0}}{{$mount.Source}}{{end}}' myplugin

/bar

Note: Since only source is settable in mymount, docker plugin set myplugin mymount=/bar would
work too.

Change a device path

The following example changes the path of the mydevice device on the myplugin plugin.
$ docker plugin inspect -f '{{with $device := index .Settings.Devices 0}}{{$device.Path}}{{end}}' myplugin

/dev/foo

$ docker plugin set myplugin mydevice.path=/dev/bar

$ docker plugin inspect -f '{{with $device := index .Settings.Devices 0}}{{$device.Path}}{{end}}' myplugin

/dev/bar

Note: Since only path is settable in mydevice, docker plugin set myplugin mydevice=/dev/bar
would work too.

Change the arguments

The following example changes the arguments of the myplugin plugin.
$ docker plugin inspect -f '{{.Settings.Args}}' myplugin

["foo", "bar"]

$ docker plugin set myplugin myargs="foo bar baz"

$ docker plugin inspect -f '{{.Settings.Args}}' myplugin

["foo", "bar", "baz"]

docker plugin upgrade


Estimated reading time: 3 minutes

Description
Upgrade an existing plugin

API 1.26+ The client and daemon API must both be at least 1.26 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker plugin upgrade [OPTIONS] PLUGIN [REMOTE]
Options
Name, shorthand           Default  Description

--disable-content-trust   true     Skip image verification
--grant-all-permissions            Grant all permissions necessary to run the plugin
--skip-remote-check                Do not check if specified remote plugin matches existing plugin image

Parent command
Command Description

docker plugin Manage plugins

Related commands
Command                  Description

docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin ls         List plugins
docker plugin push       Push a plugin to a registry
docker plugin rm         Remove one or more plugins
docker plugin set        Change settings for a plugin
docker plugin upgrade    Upgrade an existing plugin

Extended description
Upgrades an existing plugin to the specified remote plugin image. If no remote is specified, Docker
will re-pull the current image and use the updated version. All existing references to the plugin will
continue to work. The plugin must be disabled before running the upgrade.

Examples
The following example installs the vieux/sshfs plugin, uses it to create and use a volume, then
upgrades the plugin.
$ docker plugin install vieux/sshfs DEBUG=1

Plugin "vieux/sshfs:next" is requesting the following privileges:


- network: [host]
- device: [/dev/fuse]
- capabilities: [CAP_SYS_ADMIN]
Do you grant the above permissions? [y/N] y
vieux/sshfs:next

$ docker volume create -d vieux/sshfs:next -o [email protected]:/tmp/shared -o password=XXX sshvolume

sshvolume

$ docker run -it -v sshvolume:/data alpine sh -c "touch /data/hello"

$ docker plugin disable -f vieux/sshfs:next

vieux/sshfs:next

# Here docker volume ls doesn't show 'sshvolume', since the plugin is disabled
$ docker volume ls

DRIVER VOLUME NAME

$ docker plugin upgrade vieux/sshfs:next vieux/sshfs:next

Plugin "vieux/sshfs:next" is requesting the following privileges:


- network: [host]
- device: [/dev/fuse]
- capabilities: [CAP_SYS_ADMIN]
Do you grant the above permissions? [y/N] y
Upgrade plugin vieux/sshfs:next to vieux/sshfs:next

$ docker plugin enable vieux/sshfs:next

vieux/sshfs:next

$ docker volume ls

DRIVER              VOLUME NAME
vieux/sshfs:next    sshvolume

$ docker run -it -v sshvolume:/data alpine sh -c "ls /data"

hello

docker port
Estimated reading time: 1 minute
Description
List port mappings or a specific mapping for the container

Usage
docker port CONTAINER [PRIVATE_PORT[/PROTO]]

Parent command
Command Description

docker The base command for the Docker CLI.

Examples
Show all mapped ports
You can find out all the ports mapped by not specifying a PRIVATE_PORT, or just a specific mapping:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
b650456536c7 busybox:latest top 54 minutes ago Up 54
minutes 0.0.0.0:1234->9876/tcp, 0.0.0.0:4321->7890/tcp test
$ docker port test
7890/tcp -> 0.0.0.0:4321
9876/tcp -> 0.0.0.0:1234
$ docker port test 7890/tcp
0.0.0.0:4321
$ docker port test 7890/udp
2014/06/24 11:53:36 Error: No public port '7890/udp' published for test
$ docker port test 7890
0.0.0.0:4321
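The arrow-separated lines that docker port prints are easy to consume from a script. Below is a minimal Python sketch using the sample output above; parse_port_output is a hypothetical helper, not part of the Docker CLI.

```python
def parse_port_output(text):
    """Parse `docker port CONTAINER` output lines such as
    '7890/tcp -> 0.0.0.0:4321' into a {private: public} mapping."""
    mappings = {}
    for line in text.strip().splitlines():
        private, _, public = line.partition(" -> ")
        mappings[private.strip()] = public.strip()
    return mappings

sample = """7890/tcp -> 0.0.0.0:4321
9876/tcp -> 0.0.0.0:1234"""
print(parse_port_output(sample))
# {'7890/tcp': '0.0.0.0:4321', '9876/tcp': '0.0.0.0:1234'}
```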
docker ps
Estimated reading time: 14 minutes

Description
List containers

Usage
docker ps [OPTIONS]

Options
Name, shorthand Default Description

--all , -a Show all containers (default shows just running)

--filter , -f Filter output based on conditions provided

--format Pretty-print containers using a Go template

--last , -n -1 Show n last created containers (includes all states)

--latest , -l Show the latest created container (includes all states)

--no-trunc Don’t truncate output

--quiet , -q Only display numeric IDs

--size , -s Display total file sizes

Parent command
Command Description

docker The base command for the Docker CLI.


Examples
Prevent truncating output
Running docker ps --no-trunc showing 2 linked containers.
$ docker ps

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
4c01db0b339c ubuntu:12.04 bash 17 seconds
ago Up 16 seconds 3300-3310/tcp webapp
d7886598dbe2 crosbymichael/redis:latest /redis-server --dir 33 minutes
ago Up 33 minutes 6379/tcp redis,webapp/db

Show both running and stopped containers


The docker ps command only shows running containers by default. To see all containers, use
the -a (or --all) flag:

$ docker ps -a

docker ps groups exposed ports into a single range if possible. E.g., a container that exposes TCP
ports 100, 101, 102 displays 100-102/tcp in the PORTS column.

Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz")

The currently supported filters are:

Filter             Description

id                 Container’s ID
name               Container’s name
label              An arbitrary string representing either a key or a key-value pair. Expressed as <key> or <key>=<value>
exited             An integer representing the container’s exit code. Only useful with --all.
status             One of created, restarting, running, removing, paused, exited, or dead
ancestor           Filters containers which share a given image as an ancestor. Expressed as <image-name>[:<tag>], <image id>, or <image@digest>
before or since    Filters containers created before or after a given container ID or name
volume             Filters running containers which have mounted a given volume or bind mount.
network            Filters running containers connected to a given network.
publish or expose  Filters containers which publish or expose a given port. Expressed as <port>[/<proto>] or <startport-endport>/[<proto>]
health             Filters containers based on their healthcheck status. One of starting, healthy, unhealthy or none.
isolation          Windows daemon only. One of default, process, or hyperv.
is-task            Filters containers that are a “task” for a service. Boolean option (true or false)

LABEL

The label filter matches containers based on the presence of a label alone or a labeland a value.
The following filter matches containers with the color label regardless of its value.
$ docker ps --filter "label=color"

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
673394ef1d4c busybox "top" 47 seconds ago Up 45
seconds nostalgic_shockley
d85756f57265 busybox "top" 52 seconds ago Up 51
seconds high_albattani

The following filter matches containers with the color label with the blue value.
$ docker ps --filter "label=color=blue"
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
d85756f57265 busybox "top" About a minute ago Up
About a minute high_albattani

NAME

The name filter matches on all or part of a container’s name.

The following filter matches all containers with a name containing the nostalgic_stallman string.
$ docker ps --filter "name=nostalgic_stallman"

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
9b6247364a03 busybox "top" 2 minutes ago Up 2
minutes nostalgic_stallman

You can also filter for a substring in a name as this shows:

$ docker ps --filter "name=nostalgic"

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
715ebfcee040 busybox "top" 3 seconds ago Up 1
second i_am_nostalgic
9b6247364a03 busybox "top" 7 minutes ago Up 7
minutes nostalgic_stallman
673394ef1d4c busybox "top" 38 minutes ago Up 38
minutes nostalgic_shockley

EXITED

The exited filter matches containers by exit status code. For example, to filter for containers that
have exited successfully:
$ docker ps -a --filter 'exited=0'

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
ea09c3c82f6e registry:latest /srv/run.sh 2 weeks ago
Exited (0) 2 weeks ago 127.0.0.1:5000->5000/tcp desperate_leakey
106ea823fe4e fedora:latest /bin/sh -c 'bash -l' 2 weeks ago
Exited (0) 2 weeks ago determined_albattani
48ee228c9464 fedora:20 bash 2 weeks ago
Exited (0) 2 weeks ago tender_torvalds

FILTER BY EXIT SIGNAL

You can use a filter to locate containers that exited with a status of 137, meaning a SIGKILL(9) killed
them.
$ docker ps -a --filter 'exited=137'

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
b3e1c0ed5bfe ubuntu:latest "sleep 1000" 12 seconds ago
Exited (137) 5 seconds ago grave_kowalevski
a2eb5558d669 redis:latest "/entrypoint.sh redi 2 hours ago
Exited (137) 2 hours ago sharp_lalande

Any of these events result in a 137 status:

 the init process of the container is killed manually


 docker kill kills the container
 Docker daemon restarts which kills all running containers

STATUS

The status filter matches containers by status. You can filter
using created, restarting, running, removing, paused, exited and dead. For example, to filter
for running containers:
$ docker ps --filter status=running

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
715ebfcee040 busybox "top" 16 minutes ago Up
16 minutes i_am_nostalgic
d5c976d3c462 busybox "top" 23 minutes ago Up
23 minutes top
9b6247364a03 busybox "top" 24 minutes ago Up
24 minutes nostalgic_stallman

To filter for paused containers:


$ docker ps --filter status=paused

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
673394ef1d4c busybox "top" About an hour ago Up
About an hour (Paused) nostalgic_shockley

ANCESTOR

The ancestor filter matches containers based on their image or a descendant of it. The filter supports
the following image representations:

 image
 image:tag
 image:tag@digest
 short-id
 full-id

If you don’t specify a tag, the latest tag is used. For example, to filter for containers that use the
latest ubuntu image:
$ docker ps --filter ancestor=ubuntu

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
919e1179bdb8 ubuntu-c1 "top" About a minute ago Up
About a minute admiring_lovelace
5d1e4a540723 ubuntu-c2 "top" About a minute ago Up
About a minute admiring_sammet
82a598284012 ubuntu "top" 3 minutes ago Up 3
minutes sleepy_bose
bab2a34ba363 ubuntu "top" 3 minutes ago Up 3
minutes focused_yonath

Match containers based on the ubuntu-c1 image which, in this case, is a child of ubuntu:
$ docker ps --filter ancestor=ubuntu-c1

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
919e1179bdb8 ubuntu-c1 "top" About a minute ago Up
About a minute admiring_lovelace
Match containers based on the ubuntu version 12.04.5 image:
$ docker ps --filter ancestor=ubuntu:12.04.5

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
82a598284012 ubuntu:12.04.5 "top" 3 minutes ago Up 3
minutes sleepy_bose

The following matches containers based on the layer d0e008c6cf02 or an image that has this layer
in its layer stack.
$ docker ps --filter ancestor=d0e008c6cf02

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
82a598284012 ubuntu:12.04.5 "top" 3 minutes ago Up 3
minutes sleepy_bose

CREATE TIME

before

The before filter shows only containers created before the container with given id or name. For
example, having these containers created:
$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS


PORTS NAMES
9c3527ed70ce busybox "top" 14 seconds ago Up 15 seconds
desperate_dubinsky
4aace5031105 busybox "top" 48 seconds ago Up 49 seconds
focused_hamilton
6e63f6ff38b0 busybox "top" About a minute ago Up About a minute
distracted_fermat

Filtering with before would give:


$ docker ps -f before=9c3527ed70ce

CONTAINER ID IMAGE COMMAND CREATED STATUS


PORTS NAMES
4aace5031105 busybox "top" About a minute ago Up About a minute
focused_hamilton
6e63f6ff38b0 busybox "top" About a minute ago Up About a minute
distracted_fermat

since

The since filter shows only containers created since the container with given id or name. For
example, with the same containers as in before filter:
$ docker ps -f since=6e63f6ff38b0

CONTAINER ID IMAGE COMMAND CREATED STATUS


PORTS NAMES
9c3527ed70ce busybox "top" 10 minutes ago Up 10 minutes
desperate_dubinsky
4aace5031105 busybox "top" 10 minutes ago Up 10 minutes
focused_hamilton

VOLUME

The volume filter shows only containers that mount a specific volume or have a volume mounted in a
specific path:
$ docker ps --filter volume=remote-volume --format "table {{.ID}}\t{{.Mounts}}"
CONTAINER ID MOUNTS
9c3527ed70ce remote-volume

$ docker ps --filter volume=/data --format "table {{.ID}}\t{{.Mounts}}"


CONTAINER ID MOUNTS
9c3527ed70ce remote-volume

NETWORK

The network filter shows only containers that are connected to a network with a given name or id.
The following filter matches all containers that are connected to a network with a name
containing net1.
$ docker run -d --net=net1 --name=test1 ubuntu top
$ docker run -d --net=net2 --name=test2 ubuntu top
$ docker ps --filter network=net1

CONTAINER ID IMAGE COMMAND CREATED STATUS


PORTS NAMES
9d4893ed80fe ubuntu "top" 10 minutes ago Up 10 minutes
test1

The network filter matches on both the network’s name and id. The following example shows all
containers that are attached to the net1 network, using the network id as a filter:
$ docker network inspect --format "{{.ID}}" net1

8c0b4110ae930dbe26b258de9bc34a03f98056ed6f27f991d32919bfe401d7c5

$ docker ps --filter
network=8c0b4110ae930dbe26b258de9bc34a03f98056ed6f27f991d32919bfe401d7c5

CONTAINER ID IMAGE COMMAND CREATED STATUS


PORTS NAMES
9d4893ed80fe ubuntu "top" 10 minutes ago Up 10 minutes
test1

PUBLISH AND EXPOSE

The publish and expose filters show only containers that have published or exposed a port with a
given port number, port range, and/or protocol. The default protocol is tcp when not specified.

The following filter matches all containers that have published port of 80:

$ docker run -d --publish=80 busybox top


$ docker run -d --expose=8080 busybox top

$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
9833437217a5 busybox "top" 5 seconds ago Up 4
seconds 8080/tcp dreamy_mccarthy
fc7e477723b7 busybox "top" 50 seconds ago Up 50
seconds 0.0.0.0:32768->80/tcp admiring_roentgen
$ docker ps --filter publish=80

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
fc7e477723b7 busybox "top" About a minute ago Up
About a minute 0.0.0.0:32768->80/tcp admiring_roentgen

The following filter matches all containers that have exposed TCP port in the range of 8000-8080:
$ docker ps --filter expose=8000-8080/tcp

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
9833437217a5 busybox "top" 21 seconds ago Up 19
seconds 8080/tcp dreamy_mccarthy

The following filter matches all containers that have published UDP port 80:
$ docker ps --filter publish=80/udp

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES
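A publish/expose filter value therefore has the shape <port>[-<endport>][/<proto>]. The following is a small Python sketch of that parsing rule; parse_port_filter is a hypothetical helper, not part of the Docker CLI.

```python
def parse_port_filter(expr):
    """Parse a publish/expose filter value like '80', '80/udp' or
    '8000-8080/tcp' into (start_port, end_port, protocol).
    The protocol defaults to tcp when omitted, as docker ps does."""
    ports, _, proto = expr.partition("/")
    proto = proto or "tcp"
    start, _, end = ports.partition("-")
    return int(start), int(end or start), proto

print(parse_port_filter("8000-8080/tcp"))  # (8000, 8080, 'tcp')
print(parse_port_filter("80/udp"))         # (80, 80, 'udp')
print(parse_port_filter("80"))             # (80, 80, 'tcp')
```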

Formatting
The formatting option (--format) pretty-prints container output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder       Description

.ID               Container ID
.Image            Image ID
.Command          Quoted command
.CreatedAt        Time when the container was created.
.RunningFor       Elapsed time since the container was started.
.Ports            Exposed ports.
.Status           Container status.
.Size             Container disk size.
.Names            Container names.
.Labels           All labels assigned to the container.
.Label            Value of a specific label for this container. For example '{{.Label "com.docker.swarm.cpu"}}'
.Mounts           Names of the volumes mounted in this container.
.Networks         Names of the networks attached to this container.

When using the --format option, the ps command will either output the data exactly as the template
declares or, when using the table directive, include column headers as well.
The following example uses a template without headers and outputs the ID and Command entries
separated by a colon for all running containers:
$ docker ps --format "{{.ID}}: {{.Command}}"

a87ecb4f327c: /bin/sh -c #(nop) MA


01946d9d34d8: /bin/sh -c #(nop) MA
c1d3b0166030: /bin/sh -c yum -y up
41d50ecd2f57: /bin/sh -c #(nop) MA

To list all running containers with their labels in a table format you can use:

$ docker ps --format "table {{.ID}}\t{{.Labels}}"

CONTAINER ID LABELS
a87ecb4f327c com.docker.swarm.node=ubuntu,com.docker.swarm.storage=ssd
01946d9d34d8
c1d3b0166030 com.docker.swarm.node=debian,com.docker.swarm.cpu=6
41d50ecd2f57
com.docker.swarm.node=fedora,com.docker.swarm.cpu=3,com.docker.swarm.storage=ssd
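Beyond the table directive, the Go template function json (as in --format '{{json .}}') renders each container as one JSON object per line, which is convenient for scripts. Below is a minimal Python sketch that consumes such output; the sample lines are illustrative, not real output.

```python
import json

# Each line of `docker ps --format '{{json .}}'` is one container rendered
# as a JSON object. The sample below stands in for captured CLI output.
sample = "\n".join([
    '{"ID":"a87ecb4f327c","Names":"webapp","Labels":"com.docker.swarm.node=ubuntu"}',
    '{"ID":"c1d3b0166030","Names":"db","Labels":"com.docker.swarm.cpu=6"}',
])

containers = [json.loads(line) for line in sample.splitlines()]
print([c["ID"] for c in containers])
# ['a87ecb4f327c', 'c1d3b0166030']
```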
docker pull
Estimated reading time: 8 minutes

Description
Pull an image or a repository from a registry

Usage
docker pull [OPTIONS] NAME[:TAG|@DIGEST]

Options
Name, shorthand           Default  Description

--all-tags , -a                    Download all tagged images in the repository
--disable-content-trust   true     Skip image verification
--platform                         experimental (daemon) API 1.32+. Set platform if server is multi-platform capable
--quiet , -q                       Suppress verbose output

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Most of your images will be created on top of a base image from the Docker Hub registry.

Docker Hub contains many pre-built images that you can pull and try without needing to define and
configure your own.
To download a particular image, or set of images (i.e., a repository), use docker pull.
Proxy configuration
If you are behind an HTTP proxy server, for example in corporate settings, you may need to configure
the Docker daemon’s proxy settings before it can connect to a registry, using
the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables. To set these environment
variables on a host using systemd, refer to control and configure Docker with systemd.
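On a systemd host, these proxy variables are typically supplied through a drop-in unit file for the Docker service, then activated with a daemon reload and a service restart. The following is a sketch; the proxy address is a placeholder to substitute with your own.

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# (hypothetical proxy address -- substitute your own)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After saving the file, run sudo systemctl daemon-reload followed by sudo systemctl restart docker for the settings to take effect.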

Concurrent downloads
By default the Docker daemon pulls three layers of an image at a time. If you are on a low-bandwidth
connection this may cause timeout issues, and you may want to lower the limit with the
--max-concurrent-downloads daemon option. See the daemon documentation for more details.
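As a sketch, the same limit can also be set persistently in the daemon configuration file (assuming the default location, /etc/docker/daemon.json on Linux; restart the daemon for the change to take effect):

```json
{
  "max-concurrent-downloads": 1
}
```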

Examples
Pull an image from Docker Hub
To download a particular image, or set of images (i.e., a repository), use docker pull. If no tag is
provided, Docker Engine uses the :latest tag as a default. This command pulls
the debian:latest image:
$ docker pull debian

Using default tag: latest


latest: Pulling from library/debian
fdd5d7827f33: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:e7d38b3517548a1c71e41bffe9c8ae6d6d29546ce46bf62159837aad072c90aa
Status: Downloaded newer image for debian:latest

Docker images can consist of multiple layers. In the example above, the image consists of two
layers: fdd5d7827f33 and a3ed95caeb02.
Layers can be reused by images. For example, the debian:jessie image shares both layers
with debian:latest. Pulling the debian:jessie image therefore only pulls its metadata, but not its
layers, because all layers are already present locally:
$ docker pull debian:jessie
jessie: Pulling from library/debian
fdd5d7827f33: Already exists
a3ed95caeb02: Already exists
Digest: sha256:a9c958be96d7d40df920e7041608f2f017af81800ca5ad23e327bc402626b58e
Status: Downloaded newer image for debian:jessie

To see which images are present locally, use the docker images command:
$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE


debian jessie f50f9524513f 5 days ago 125.1 MB
debian latest f50f9524513f 5 days ago 125.1 MB

Docker uses a content-addressable image store, and the image ID is a SHA256 digest covering the
image’s configuration and layers. In the example above, debian:jessie and debian:latest have the
same image ID because they are actually the same image tagged with different names. Because
they are the same image, their layers are stored only once and do not consume extra disk space.

For more information about images, layers, and the content-addressable store, refer to understand
images, containers, and storage drivers.

Pull an image by digest (immutable identifier)


So far, you’ve pulled images by their name (and “tag”). Using names and tags is a convenient way to
work with images. When using tags, you can docker pull an image again to make sure you have
the most up-to-date version of that image. For example, docker pull ubuntu:14.04 pulls the latest
version of the Ubuntu 14.04 image.

In some cases you don’t want images to be updated to newer versions, but prefer to use a fixed
version of an image. Docker enables you to pull an image by its digest. When pulling an image by
digest, you specify exactly which version of an image to pull. Doing so, allows you to “pin” an image
to that version, and guarantee that the image you’re using is always the same.

To know the digest of an image, pull the image first. Let’s pull the latest ubuntu:14.04 image from
Docker Hub:
$ docker pull ubuntu:14.04
14.04: Pulling from library/ubuntu
5a132a7e7af1: Pull complete
fd2731e4c50c: Pull complete
28a2f68d1120: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
Status: Downloaded newer image for ubuntu:14.04

Docker prints the digest of the image after the pull has finished. In the example above, the digest of
the image is:

sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2

Docker also prints the digest of an image when pushing to a registry. This may be useful if you want
to pin to a version of the image you just pushed.

A digest takes the place of the tag when pulling an image, for example, to pull the above image by
digest, run the following command:

$ docker pull ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2

sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2: Pulling from library/ubuntu
5a132a7e7af1: Already exists
fd2731e4c50c: Already exists
28a2f68d1120: Already exists
a3ed95caeb02: Already exists
Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
Status: Downloaded newer image for ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2

A digest can also be used in the FROM instruction of a Dockerfile, for example:


FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
MAINTAINER some maintainer <[email protected]>
Note: Using this feature “pins” an image to a specific version in time. Docker will therefore not pull
updated versions of an image, which may include security updates. If you want to pull an updated
image, you need to change the digest accordingly.

Pull from a different registry


By default, docker pull pulls images from Docker Hub. It is also possible to manually specify the
path of a registry to pull from. For example, if you have set up a local registry, you can specify its
path to pull from it. A registry path is similar to a URL, but does not contain a protocol specifier
(https://).
The following command pulls the testing/test-image image from a local registry listening on port
5000 (myregistry.local:5000):
$ docker pull myregistry.local:5000/testing/test-image

Registry credentials are managed by docker login.

Docker uses the https:// protocol to communicate with a registry, unless the registry is allowed to
be accessed over an insecure connection. Refer to the insecure registries section for more
information.
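As a sketch, allowing an insecure (plain-HTTP) local registry is done in the daemon configuration file; the registry address below is an assumption matching the example above, and the daemon must be restarted afterwards:

```json
{
  "insecure-registries": ["myregistry.local:5000"]
}
```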

Pull a repository with multiple images


By default, docker pull pulls a single image from the registry. A repository can contain multiple
images. To pull all images from a repository, provide the -a (or --all-tags) option when
using docker pull.
This command pulls all images from the fedora repository:
$ docker pull --all-tags fedora

Pulling repository fedora


ad57ef8d78d7: Download complete
105182bb5e8b: Download complete
511136ea3c5a: Download complete
73bd853d2ea5: Download complete
....

Status: Downloaded newer image for fedora


After the pull has completed use the docker images command to see the images that were pulled.
The example below shows all the fedora images that are present locally:
$ docker images fedora

REPOSITORY TAG IMAGE ID CREATED SIZE


fedora rawhide ad57ef8d78d7 5 days ago 359.3 MB
fedora 20 105182bb5e8b 5 days ago 372.7 MB
fedora heisenbug 105182bb5e8b 5 days ago 372.7 MB
fedora latest 105182bb5e8b 5 days ago 372.7 MB

Cancel a pull
Killing the docker pull process, for example by pressing CTRL-c while it is running in a terminal, will
terminate the pull operation.
$ docker pull fedora

Using default tag: latest


latest: Pulling from library/fedora
a3ed95caeb02: Pulling fs layer
236608c7b546: Pulling fs layer
^C

Note: Technically, the Engine terminates a pull operation when the connection between the Docker
Engine daemon and the Docker Engine client initiating the pull is lost. If the connection with the
Engine daemon is lost for reasons other than a manual interruption, the pull is also aborted.

docker push

Description
Push an image or a repository to a registry

Usage
docker push [OPTIONS] NAME[:TAG]

Options
Name, shorthand Default Description

--disable-content-trust true Skip image signing

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Use docker push to share your images to the Docker Hub registry or to a self-hosted one.
Refer to the docker tag reference for more information about valid image and tag names.
Killing the docker push process, for example by pressing CTRL-c while it is running in a terminal,
terminates the push operation.

Progress bars shown during docker push display the uncompressed size. The actual amount of
data that’s pushed is compressed before sending, so the uploaded size is not reflected by the
progress bar.

Registry credentials are managed by docker login.

Concurrent uploads
By default the Docker daemon pushes five layers of an image at a time. If you are on a low-bandwidth
connection this may cause timeout issues, and you may want to lower the limit with the
--max-concurrent-uploads daemon option. See the daemon documentation for more details.

Examples
Push a new image to a registry
First save the new image by finding the container ID (using docker ps) and then committing it to a
new image name. Note that only a-z0-9-_. are allowed when naming images:
$ docker commit c16378f943fe rhel-httpd

Now, push the image to the registry using the image ID. In this example the registry is on a host
named registry-host and listening on port 5000. To do this, tag the image with the host name or IP
address, and the port of the registry:
$ docker tag rhel-httpd registry-host:5000/myadmin/rhel-httpd

$ docker push registry-host:5000/myadmin/rhel-httpd

Check that this worked by running:

$ docker images

You should see both rhel-httpd and registry-host:5000/myadmin/rhel-httpd listed.

docker registry

Description
Manage Docker registries

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
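A minimal config.json sketch (typically located at ~/.docker/config.json; keep any other fields you already have):

```json
{
  "experimental": "enabled"
}
```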
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker registry COMMAND

Child commands
Command Description

docker registry events List registry events (DTR Only)

docker registry history Inspect registry image history (DTR Only)

docker registry info Display information about a registry (DTR Only)

docker registry inspect Inspect registry image

docker registry joblogs List registry job logs (DTR Only)

docker registry jobs List registry jobs (DTR Only)

docker registry ls List registry images

docker registry rmi Remove a registry image (DTR Only)

Parent command
Command Description

docker The base command for the Docker CLI.

docker registry events



Description
List registry events (DTR Only)

This command is experimental.


Usage
docker registry events HOST | REPOSITORY [OPTIONS]

Options
Name, shorthand   Default   Description

--format                    Pretty-print output using a Go template

--limit           50        Specify the number of event results

--no-trunc                  Don’t truncate output

--object-type               Specify the type of event target object [REPOSITORY | TAG | BLOB |
                            MANIFEST | WEBHOOK | URI | PROMOTION | PUSH_MIRRORING | POLL_MIRRORING]

--type                      Specify the type of event [CREATE | GET | DELETE | UPDATE | SEND | FAIL]
Parent command
Command Description

docker registry Manage Docker registries

Related commands
Command Description

docker registry events List registry events (DTR Only)

docker registry history Inspect registry image history (DTR Only)

docker registry info Display information about a registry (DTR Only)

docker registry inspect Inspect registry image

docker registry joblogs List registry job logs (DTR Only)

docker registry jobs List registry jobs (DTR Only)

docker registry ls List registry images

docker registry rmi Remove a registry image (DTR Only)

Extended description
List registry events (Only supported by Docker Trusted Registry)

docker registry history



Description
Inspect registry image history (DTR Only)

This command is experimental.



Usage
docker registry history IMAGE [OPTIONS]

Options
Name, shorthand Default Description

--format Pretty-print history using a Go template

--human , -H true Print sizes and dates in human readable format

--no-trunc Don’t truncate output

Parent command
Command Description

docker registry Manage Docker registries

Related commands
Command Description

docker registry events List registry events (DTR Only)

docker registry history Inspect registry image history (DTR Only)

docker registry info Display information about a registry (DTR Only)

docker registry inspect Inspect registry image

docker registry joblogs List registry job logs (DTR Only)

docker registry jobs List registry jobs (DTR Only)

docker registry ls List registry images

docker registry rmi Remove a registry image (DTR Only)

docker registry info



Description
Display information about a registry (DTR Only)

This command is experimental.


Usage
docker registry info HOST [OPTIONS]

Options
Name, shorthand Default Description

--format Pretty-print output using a Go template

Parent command
Command Description

docker registry Manage Docker registries

Related commands
Command Description

docker registry events List registry events (DTR Only)

docker registry history Inspect registry image history (DTR Only)

docker registry info Display information about a registry (DTR Only)

docker registry inspect Inspect registry image

docker registry joblogs List registry job logs (DTR Only)

docker registry jobs List registry jobs (DTR Only)

docker registry ls List registry images

docker registry rmi Remove a registry image (DTR Only)


Extended description
Display information about a registry (Only supported by Docker Trusted Registry and must be
authenticated as an admin user)

docker registry inspect



Description
Inspect registry image

This command is experimental.


Usage
docker registry inspect IMAGE [OPTIONS]

Options
Name, shorthand Default Description

--format Pretty-print output using a Go template

Parent command
Command Description

docker registry Manage Docker registries

Related commands
Command Description

docker registry events List registry events (DTR Only)

docker registry history Inspect registry image history (DTR Only)

docker registry info Display information about a registry (DTR Only)

docker registry inspect Inspect registry image

docker registry joblogs List registry job logs (DTR Only)

docker registry jobs List registry jobs (DTR Only)

docker registry ls List registry images

docker registry rmi Remove a registry image (DTR Only)

docker registry joblogs



Description
List registry job logs (DTR Only)

This command is experimental.



Usage
docker registry joblogs HOST JOB_ID [OPTIONS]

Options
Name, shorthand Default Description

--format Pretty-print output using a Go template

Parent command
Command Description

docker registry Manage Docker registries

Related commands
Command Description

docker registry events List registry events (DTR Only)



docker registry history Inspect registry image history (DTR Only)

docker registry info Display information about a registry (DTR Only)

docker registry inspect Inspect registry image

docker registry joblogs List registry job logs (DTR Only)

docker registry jobs List registry jobs (DTR Only)

docker registry ls List registry images

docker registry rmi Remove a registry image (DTR Only)

docker registry ls

Description
List registry images

This command is experimental.

Usage
docker registry ls REPOSITORY[:TAG] [OPTIONS]

Options
Name, shorthand Default Description

--digests Show digests

--format Pretty-print output using a Go template

--quiet , -q Only display image names

--verbose Display verbose image information

Parent command
Command Description

docker registry Manage Docker registries

Related commands
Command Description

docker registry events List registry events (DTR Only)

docker registry history Inspect registry image history (DTR Only)

docker registry info Display information about a registry (DTR Only)

docker registry inspect Inspect registry image

docker registry joblogs List registry job logs (DTR Only)

docker registry jobs List registry jobs (DTR Only)

docker registry ls List registry images

docker registry rmi Remove a registry image (DTR Only)

docker registry rmi



Description
Remove a registry image (DTR Only)

This command is experimental.


Usage
docker registry rmi REPOSITORY:TAG [OPTIONS]

Parent command
Command Description

docker registry Manage Docker registries

Related commands
Command Description

docker registry events List registry events (DTR Only)



docker registry history Inspect registry image history (DTR Only)

docker registry info Display information about a registry (DTR Only)

docker registry inspect Inspect registry image

docker registry joblogs List registry job logs (DTR Only)

docker registry jobs List registry jobs (DTR Only)

docker registry ls List registry images

docker registry rmi Remove a registry image (DTR Only)

docker rename

Description
Rename a container

Usage
docker rename CONTAINER NEW_NAME

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker rename command renames a container.
Examples
$ docker rename my_container my_new_container

docker restart

Description
Restart one or more containers

Usage
docker restart [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--time , -t 10 Seconds to wait for stop before killing the container

Parent command
Command Description

docker The base command for the Docker CLI.

Examples
$ docker restart my_container

docker rm
Description
Remove one or more containers

Usage
docker rm [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--force , -f Force the removal of a running container (uses SIGKILL)

--link , -l Remove the specified link

--volumes , -v Remove the volumes associated with the container

Parent command
Command Description

docker The base command for the Docker CLI.

Examples
Remove a container
This will remove the container referenced under the link /redis.
$ docker rm /redis

/redis

Remove a link specified with --link on the default bridge network
This will remove the underlying link between /webapp and the /redis containers on the default bridge
network, removing all network communication between the two containers. This does not apply
when --link is used with user-specified networks.
$ docker rm --link /webapp/redis

/webapp/redis

Force-remove a running container


This command will force-remove a running container.

$ docker rm --force redis

redis

The main process inside the container referenced under the link redis will receive SIGKILL, then the
container will be removed.

Remove all stopped containers


$ docker rm $(docker ps -a -q)

This command will delete all stopped containers. The command docker ps -a -q will return all
existing container IDs and pass them to the rm command which will delete them. Any running
containers will not be deleted.

Remove a container and its volumes


$ docker rm -v redis
redis

This command will remove the container and any volumes associated with it. Note that if a volume
was specified with a name, it will not be removed.

Remove a container and selectively remove volumes


$ docker create -v awesome:/foo -v /bar --name hello redis
hello
$ docker rm -v hello

In this example, the volume for /foo will remain intact, but the volume for /bar will be removed. The
same behavior holds for volumes inherited with --volumes-from.

docker rmi

Description
Remove one or more images

Usage
docker rmi [OPTIONS] IMAGE [IMAGE...]

Options
Name, shorthand Default Description

--force , -f Force removal of the image

--no-prune Do not delete untagged parents

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Removes (and un-tags) one or more images from the host node. If an image has multiple tags, using
this command with the tag as a parameter only removes the tag. If the tag is the only one for the
image, both the image and the tag are removed.
This does not remove images from a registry. You cannot remove an image of a running container
unless you use the -f option. To see all images on a host use the docker image ls command.

Examples
You can remove an image using its short or long ID, its tag, or its digest. If an image has one or
more tags referencing it, you must remove all of them before the image is removed. Digest
references are removed automatically when an image is removed by tag.

$ docker images

REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
test1        latest   fd484f19954f   23 seconds ago   7 B (virtual 4.964 MB)
test         latest   fd484f19954f   23 seconds ago   7 B (virtual 4.964 MB)
test2        latest   fd484f19954f   23 seconds ago   7 B (virtual 4.964 MB)

$ docker rmi fd484f19954f

Error: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories, use -f to force
2013/12/11 05:47:16 Error: failed to remove one or more images

$ docker rmi test1:latest

Untagged: test1:latest

$ docker rmi test2:latest

Untagged: test2:latest

$ docker images

REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
test         latest   fd484f19954f   23 seconds ago   7 B (virtual 4.964 MB)

$ docker rmi test:latest

Untagged: test:latest
Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8

If you use the -f flag and specify the image’s short or long ID, then this command untags and
removes all images that match the specified ID.
$ docker images

REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
test1        latest   fd484f19954f   23 seconds ago   7 B (virtual 4.964 MB)
test         latest   fd484f19954f   23 seconds ago   7 B (virtual 4.964 MB)
test2        latest   fd484f19954f   23 seconds ago   7 B (virtual 4.964 MB)

$ docker rmi -f fd484f19954f

Untagged: test1:latest
Untagged: test:latest
Untagged: test2:latest
Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8

An image pulled by digest has no tag associated with it:

$ docker images --digests

REPOSITORY                    TAG      DIGEST                                                                    IMAGE ID       CREATED       SIZE
localhost:5000/test/busybox   <none>   sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf   4986bf8c1536   9 weeks ago   2.43 MB

To remove an image using its digest:

$ docker rmi localhost:5000/test/busybox@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf

Untagged: localhost:5000/test/busybox@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
Deleted: 4986bf8c15363d1c5d15512d5266f8777bfba4974ac56e3270e7760f6f0a8125
Deleted: ea13149945cb6b1e746bf28032f02e9b5a793523481a0a18645fc77ad53c4ea2
Deleted: df7546f9f060a2268024c8a230d8639878585defcc1bc6f79d2728a13957871b

docker run

Description
Run a command in a new container

Usage
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Options
Name, shorthand           Default   Description

--add-host                          Add a custom host-to-IP mapping (host:ip)

--attach , -a                       Attach to STDIN, STDOUT or STDERR

--blkio-weight                      Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)

--blkio-weight-device               Block IO weight (relative device weight)

--cap-add                           Add Linux capabilities

--cap-drop                          Drop Linux capabilities

--cgroup-parent                     Optional parent cgroup for the container

--cidfile                           Write the container ID to the file

--cpu-count                         CPU count (Windows only)

--cpu-percent                       CPU percent (Windows only)

--cpu-period                        Limit CPU CFS (Completely Fair Scheduler) period

--cpu-quota                         Limit CPU CFS (Completely Fair Scheduler) quota

--cpu-rt-period                     API 1.25+ Limit CPU real-time period in microseconds

--cpu-rt-runtime                    API 1.25+ Limit CPU real-time runtime in microseconds

--cpu-shares , -c                   CPU shares (relative weight)

--cpus                              API 1.25+ Number of CPUs

--cpuset-cpus                       CPUs in which to allow execution (0-3, 0,1)

--cpuset-mems                       MEMs in which to allow execution (0-3, 0,1)

--detach , -d                       Run container in background and print container ID

--detach-keys                       Override the key sequence for detaching a container

--device                            Add a host device to the container

--device-cgroup-rule                Add a rule to the cgroup allowed devices list

--device-read-bps                   Limit read rate (bytes per second) from a device

--device-read-iops                  Limit read rate (IO per second) from a device

--device-write-bps                  Limit write rate (bytes per second) to a device

--device-write-iops                 Limit write rate (IO per second) to a device

--disable-content-trust   true      Skip image verification

--dns                               Set custom DNS servers

--dns-opt                           Set DNS options

--dns-option                        Set DNS options

--dns-search                        Set custom DNS search domains

--domainname                        Container NIS domain name

--entrypoint                        Overwrite the default ENTRYPOINT of the image

--env , -e                          Set environment variables

--env-file                          Read in a file of environment variables

--expose                            Expose a port or a range of ports

--gpus                              API 1.40+ GPU devices to add to the container (‘all’ to pass all GPUs)

--group-add                         Add additional groups to join

--health-cmd                        Command to run to check health

--health-interval                   Time between running the check (ms|s|m|h) (default 0s)

--health-retries                    Consecutive failures needed to report unhealthy

--health-start-period               API 1.29+ Start period for the container to initialize before starting
                                    health-retries countdown (ms|s|m|h) (default 0s)

--health-timeout                    Maximum time to allow one check to run (ms|s|m|h) (default 0s)

--help                              Print usage

--hostname , -h                     Container host name

--init                              API 1.25+ Run an init inside the container that forwards signals and
                                    reaps processes

--interactive , -i                  Keep STDIN open even if not attached

--io-maxbandwidth                   Maximum IO bandwidth limit for the system drive (Windows only)

--io-maxiops                        Maximum IOps limit for the system drive (Windows only)

--ip                                IPv4 address (e.g., 172.30.100.104)

--ip6                               IPv6 address (e.g., 2001:db8::33)

--ipc                               IPC mode to use

--isolation                         Container isolation technology

--kernel-memory                     Kernel memory limit

--label , -l                        Set meta data on a container

--label-file                        Read in a line delimited file of labels

--link                              Add link to another container

--link-local-ip                     Container IPv4/IPv6 link-local addresses

--log-driver                        Logging driver for the container

--log-opt                           Log driver options

--mac-address                       Container MAC address (e.g., 92:d0:c6:0a:29:33)

--memory , -m                       Memory limit

--memory-reservation                Memory soft limit

--memory-swap                       Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap

--memory-swappiness       -1        Tune container memory swappiness (0 to 100)

--mount                             Attach a filesystem mount to the container

--name                              Assign a name to the container

--net                               Connect a container to a network

--net-alias                         Add network-scoped alias for the container

--network                           Connect a container to a network

--network-alias                     Add network-scoped alias for the container

--no-healthcheck                    Disable any container-specified HEALTHCHECK

--oom-kill-disable                  Disable OOM Killer

--oom-score-adj                     Tune host’s OOM preferences (-1000 to 1000)

--pid                               PID namespace to use

--pids-limit                        Tune container pids limit (set -1 for unlimited)

--platform                          experimental (daemon) API 1.32+ Set platform if server is
                                    multi-platform capable

--privileged                        Give extended privileges to this container

--publish , -p                      Publish a container’s port(s) to the host

--publish-all , -P                  Publish all exposed ports to random ports

--read-only                         Mount the container’s root filesystem as read only

--restart                 no        Restart policy to apply when a container exits

--rm Automatically remove the container when it exits

--runtime Runtime to use for this container

--security-opt Security Options

--shm-size Size of /dev/shm

--sig-proxy true Proxy received signals to the process

--stop-signal SIGTERM Signal to stop a container

API 1.25+
--stop-timeout
Timeout (in seconds) to stop a container

--storage-opt Storage driver options for the container

--sysctl Sysctl options

--tmpfs Mount a tmpfs directory

--tty , -t Allocate a pseudo-TTY

--ulimit Ulimit options

--user , -u Username or UID (format: <name|uid>[:<group|gid>])

--userns User namespace to use

--uts UTS namespace to use

--volume , -v Bind mount a volume


Name, shorthand Default Description

--volume-driver Optional volume driver for the container

--volumes-from Mount volumes from the specified container(s)

--workdir , -w Working directory inside the container

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker run command first creates a writeable container layer over the specified image, and
then starts it using the specified command. That is, docker run is equivalent to the
API /containers/create then /containers/(id)/start. A stopped container can be restarted with
all its previous changes intact using docker start. See docker ps -a to view a list of all containers.
The docker run command can be used in combination with docker commit to change the command
that a container runs. There is additional detailed information about docker run in the Docker run
reference.

For information on connecting a container to a network, see the “Docker network overview”.

Examples
Assign name and allocate pseudo-TTY (--name, -it)
$ docker run --name test -it debian

root@d6c0fe130dba:/# exit 13
$ echo $?
13
$ docker ps -a | grep test
d6c0fe130dba debian:7 "/bin/bash" 26 seconds ago
Exited (13) 17 seconds ago test

This example runs a container named test using the debian:latest image. The -it instructs Docker
to allocate a pseudo-TTY connected to the container’s stdin, creating an interactive bash shell in the
container. In the example, the bash shell is quit by entering exit 13. This exit code is passed on to
the caller of docker run, and is recorded in the test container’s metadata.

Capture container ID (--cidfile)


$ docker run --cidfile /tmp/docker_test.cid ubuntu echo "test"

This will create a container and print test to the console. The cidfile flag makes Docker attempt to
create a new file and write the container ID to it. If the file exists already, Docker will return an error.
Docker will close this file when docker run exits.
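
One way to use the recorded ID, sketched here with the paths from the example above, is to feed it to a later command, for instance to remove the container once it has finished:

$ docker rm $(cat /tmp/docker_test.cid)
$ rm /tmp/docker_test.cid

Removing the cidfile afterwards avoids the error Docker raises when the file already exists on the next run.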

Full container capabilities (--privileged)


$ docker run -t -i --rm ubuntu bash
root@bc338942ef20:/# mount -t tmpfs none /mnt
mount: permission denied

This will not work, because by default, most potentially dangerous kernel capabilities are dropped;
including cap_sys_admin (which is required to mount filesystems). However, the --privileged flag
will allow it to run:
$ docker run -t -i --privileged ubuntu bash
root@50e3f57e16e6:/# mount -t tmpfs none /mnt
root@50e3f57e16e6:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 1.9G 0 1.9G 0% /mnt

The --privileged flag gives all capabilities to the container, and it also lifts all the limitations
enforced by the device cgroup controller. In other words, the container can then do almost
everything that the host can do. This flag exists to allow special use-cases, like running Docker
within Docker.

Set working directory (-w)


$ docker run -w /path/to/dir/ -i -t ubuntu pwd

The -w option runs the command inside the given directory, here /path/to/dir/. If the path
does not exist, it is created inside the container.

Set storage driver options per container


$ docker run -it --storage-opt size=120G fedora /bin/bash

This (size) allows you to set the container rootfs size to 120G at creation time. This option is only
available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers. For
the devicemapper, btrfs, windowsfilter and zfs graph drivers, the user cannot pass a size less than the
Default BaseFS Size. For the overlay2 storage driver, the size option is only available if the backing
fs is xfs and mounted with the pquota mount option. Under these conditions, the user can pass any size
less than the backing fs size.

Mount tmpfs (--tmpfs)


$ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image

The --tmpfs flag mounts an empty tmpfs into the container with
the rw, noexec, nosuid, size=65536k options.

Mount volume (-v, --read-only)


$ docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd

The -v flag mounts the current working directory into the container. The -w option then runs the
command inside that directory, by changing into the directory returned by pwd. So this combination
executes the command in the container, but inside the host’s current working directory.
$ docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash

When the host directory of a bind-mounted volume doesn’t exist, Docker will automatically create
this directory on the host for you. In the example above, Docker will create the /doesnt/exist folder
before starting your container.
$ docker run --read-only -v /icanwrite busybox touch /icanwrite/here
Volumes can be used in combination with --read-only to control where a container writes files.
The --read-only flag mounts the container’s root filesystem as read only prohibiting writes to
locations other than the specified volumes for the container.
$ docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v /path/to/static-
docker-binary:/usr/bin/docker busybox sh

By bind-mounting the docker unix socket and statically linked docker binary (refer to get the linux
binary), you give the container full access to create and manipulate the host’s Docker daemon.

On Windows, the paths must be specified using Windows-style semantics.

PS C:\> docker run -v c:\foo:c:\dest microsoft/nanoserver cmd /s /c type c:\dest\somefile.txt
Contents of file

PS C:\> docker run -v c:\foo:d: microsoft/nanoserver cmd /s /c type d:\somefile.txt
Contents of file

The following examples will fail when using Windows-based containers, as the destination of a
volume or bind mount inside the container must be one of: a non-existing or empty directory; or a
drive other than C:. Further, the source of a bind mount must be a local directory, not a file.

net use z: \\remotemachine\share


docker run -v z:\foo:c:\dest ...
docker run -v \\uncpath\to\directory:c:\dest ...
docker run -v c:\foo\somefile.txt:c:\dest ...
docker run -v c:\foo:c: ...
docker run -v c:\foo:c:\existing-directory-with-contents ...

For in-depth information about volumes, refer to manage data in containers

Add bind mounts or volumes using the --mount flag


The --mount flag allows you to mount volumes, host-directories and tmpfs mounts in a container.
The --mount flag supports most options that are supported by the -v or the --volumeflag, but uses a
different syntax. For in-depth information on the --mount flag, and a comparison between --
volume and --mount, refer to the service create command reference.
Even though there is no plan to deprecate --volume, usage of --mount is recommended.
Examples:

$ docker run --read-only --mount type=volume,target=/icanwrite busybox touch /icanwrite/here
$ docker run -t -i --mount type=bind,src=/data,dst=/data busybox sh

Publish or expose port (-p, --expose)


$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash

This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also
specify udp and sctp ports. The Docker User Guide explains in detail how to manipulate ports in
Docker.
$ docker run --expose 80 ubuntu bash

This exposes port 80 of the container without publishing the port to the host system’s interfaces.

Set environment variables (-e, --env, --env-file)


$ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash

Use the -e, --env, and --env-file flags to set simple (non-array) environment variables in the
container you’re running, or overwrite variables that are defined in the Dockerfile of the image you’re
running.

You can define the variable and its value when running the container:

$ docker run --env VAR1=value1 --env VAR2=value2 ubuntu env | grep VAR
VAR1=value1
VAR2=value2

You can also use variables that you’ve exported to your local environment:

$ export VAR1=value1
$ export VAR2=value2

$ docker run --env VAR1 --env VAR2 ubuntu env | grep VAR
VAR1=value1
VAR2=value2

When running the command, the Docker CLI client checks the value the variable has in your local
environment and passes it to the container. If no = is provided and that variable is not exported in
your local environment, the variable won’t be set in the container.
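
To illustrate that behavior, the following sketch passes a variable name that is not exported locally (NOTSET is a hypothetical name used only for this example):

$ unset NOTSET
$ docker run --env NOTSET ubuntu env | grep NOTSET

The grep prints nothing, because NOTSET had no local value and was therefore never set in the container.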
You can also load the environment variables from a file. This file should use the
syntax <variable>=value (which sets the variable to the given value) or <variable> (which takes the
value from the local environment), and # for comments.
$ cat env.list
# This is a comment
VAR1=value1
VAR2=value2
USER

$ docker run --env-file env.list ubuntu env | grep VAR


VAR1=value1
VAR2=value2
USER=denis

Set metadata on container (-l, --label, --label-file)


A label is a key=value pair that applies metadata to a container. To label a container with two labels:
$ docker run -l my-label --label com.example.foo=bar ubuntu bash

The my-label key doesn’t specify a value so the label defaults to an empty string (""). To add
multiple labels, repeat the label flag (-l or --label).
The key=value must be unique to avoid overwriting the label value. If you specify labels with identical
keys but different values, each subsequent value overwrites the previous. Docker uses the
last key=value you supply.
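
For example, the following sketch repeats the same key and then reads the surviving value back with docker inspect (the container name labeled and the key are placeholders for this illustration):

$ docker run --label com.example.foo=first --label com.example.foo=last --name labeled ubuntu true
$ docker inspect --format '{{ index .Config.Labels "com.example.foo" }}' labeled
last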
Use the --label-file flag to load multiple labels from a file. Delimit each label in the file with an
EOL mark. The example below loads labels from a labels file in the current directory:
$ docker run --label-file ./labels ubuntu bash
The label-file format is similar to the format for loading environment variables. (Unlike environment
variables, labels are not visible to processes running inside a container.) The following example
illustrates a label-file format:

com.example.label1="a label"

# this is a comment
com.example.label2=another\ label
com.example.label3

You can load multiple label-files by supplying multiple --label-file flags.

For additional information on working with labels, see Labels - custom metadata in Docker in the
Docker User Guide.

Connect a container to a network (--network)


When you start a container use the --network flag to connect it to a network. This adds
the busybox container to the my-net network.
$ docker run -itd --network=my-net busybox

You can also choose the IP addresses for the container with --ip and --ip6 flags when you start the
container on a user-defined network.
$ docker run -itd --network=my-net --ip=10.10.9.75 busybox

If you want to add a running container to a network, use the docker network connect subcommand.
You can connect multiple containers to the same network. Once connected, the containers can
communicate using only another container’s IP address or name. For overlay networks or
custom plugins that support multi-host connectivity, containers connected to the same multi-host
network but launched from different Engines can also communicate in this way.
Note: Service discovery is unavailable on the default bridge network. Containers can communicate
via their IP addresses by default. To communicate by name, they must be linked.
You can disconnect a container from a network using the docker network disconnect command.
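
As a small sketch, assuming a running container named app and an existing network my-net (both placeholder names):

$ docker network connect my-net app
$ docker network disconnect my-net app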

Mount volumes from container (--volumes-from)


$ docker run --volumes-from 777f7dc92da7 --volumes-from ba8c0c54f0f2:ro -i -t ubuntu pwd
The --volumes-from flag mounts all the defined volumes from the referenced containers. Containers
can be specified by repetitions of the --volumes-from argument. The container ID may be optionally
suffixed with :ro or :rw to mount the volumes in read-only or read-write mode, respectively. By
default, the volumes are mounted in the same mode (read write or read only) as the reference
container.

Labeling systems like SELinux require that proper labels are placed on volume content mounted into
a container. Without a label, the security system might prevent the processes running inside the
container from using the content. By default, Docker does not change the labels set by the OS.

To change the label in the container context, you can add either of two suffixes :z or :Z to the
volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option
tells Docker that two containers share the volume content. As a result, Docker labels the content
with a shared content label. Shared volume labels allow all containers to read/write content.
The Z option tells Docker to label the content with a private unshared label. Only the current
container can use a private volume.
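
For example, to share a host directory with relabeling (the paths and image here are illustrative):

$ docker run -v /var/db:/var/db:z -i -t fedora bash

The z suffix relabels /var/db with a shared label so multiple containers can read and write it; using :Z instead restricts the content to this one container.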

Attach to STDIN/STDOUT/STDERR (-a)


The -a flag tells docker run to bind to the container’s STDIN, STDOUT or STDERR. This makes it possible
to manipulate the output and input as needed.
$ echo "test" | docker run -i -a stdin ubuntu cat -

This pipes data into a container and prints the container’s ID by attaching only to the
container’s STDIN.
$ docker run -a stderr ubuntu echo test

This isn’t going to print anything unless there’s an error because we’ve only attached to the STDERR of
the container. The container’s logs still store what’s been written to STDERR and STDOUT.
$ cat somefile | docker run -i -a stdin mybuilder dobuild

This is how piping a file into a container could be done for a build. The container’s ID will be printed
after the build is done and the build logs could be retrieved using docker logs. This is useful if you
need to pipe a file or something else into a container and retrieve the container’s ID once the
container has finished running.

Add host device to container (--device)


$ docker run --device=/dev/sdc:/dev/xvdc \
--device=/dev/sdd --device=/dev/zero:/dev/nulo \
-i -t \
ubuntu ls -l /dev/{xvdc,sdd,nulo}

brw-rw---- 1 root disk 8, 2 Feb 9 16:05 /dev/xvdc
brw-rw---- 1 root disk 8, 3 Feb 9 16:05 /dev/sdd
crw-rw-rw- 1 root root 1, 5 Feb 9 16:05 /dev/nulo

It is often necessary to directly expose devices to a container. The --device option enables that. For
example, a specific block storage device or loop device or audio device can be added to an
otherwise unprivileged container (without the --privileged flag) and have the application directly
access it.
By default, the container will be able to read, write and mknod these devices. This can be overridden
using a third :rwm set of options to each --device flag:
$ docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc

Command (m for help): q

$ docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc
You will not be able to write the partition table.

Command (m for help): q

$ docker run --device=/dev/sda:/dev/xvdc:rw --rm -it ubuntu fdisk /dev/xvdc

Command (m for help): q

$ docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc
fdisk: unable to open /dev/xvdc: Operation not permitted

Note: --device cannot be safely used with ephemeral devices. Block devices that may be removed
should not be added to untrusted containers with --device.
For Windows, the format of the string passed to the --device option is --device=<IdType>/<Id>.
Beginning with Windows Server 2019 and Windows 10 October 2018 Update, Windows only supports
an IdType of class and the Id as a device interface class GUID. Refer to the table defined in the
Windows container docs for a list of container-supported device interface class GUIDs.

If this option is specified for a process-isolated Windows container, all devices that implement the
requested device interface class GUID are made available in the container. For example, the
command below makes all COM ports on the host visible in the container.

PS C:\> docker run --device=class/86E0D1E0-8089-11D0-9CE4-08003E301F73 mcr.microsoft.com/windows/servercore:ltsc2019

Note: the --device option is only supported on process-isolated Windows containers. This option
fails if the container isolation is hyperv or when running Linux Containers on Windows (LCOW).

Access an NVIDIA GPU


The --gpus flag allows you to access NVIDIA GPU resources. First you need to install nvidia-
container-runtime. Visit Specify a container’s resources for more information.
To use --gpus, specify which GPUs (or all) to use. If no value is provided, all available GPUs are
used. The example below exposes all available GPUs.
$ docker run -it --rm --gpus all ubuntu nvidia-smi

Use the device option to specify GPUs. The example below exposes a specific GPU.
$ docker run -it --rm --gpus device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a ubuntu
nvidia-smi

The example below exposes the first and third GPUs.

$ docker run -it --rm --gpus '"device=0,2"' ubuntu nvidia-smi

Restart policies (--restart)


Use Docker’s --restart to specify a container’s restart policy. A restart policy controls whether the
Docker daemon restarts a container after exit. Docker supports the following restart policies:

Policy Result

no Do not automatically restart the container when it exits. This is the default.

on-failure[:max-retries] Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.

unless-stopped Restart the container unless it is explicitly stopped or Docker itself is stopped or restarted.

always Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.

$ docker run --restart=always redis

This will run the redis container with a restart policy of always so that if the container exits, Docker
will restart it.

More detailed information on restart policies can be found in the Restart Policies (--restart)section of
the Docker run reference page.

Add entries to container hosts file (--add-host)


You can add other hosts into a container’s /etc/hosts file by using one or more --add-hostflags.
This example adds a static address for a host named docker:
$ docker run --add-host=docker:10.180.0.1 --rm -it debian

root@f38c87f2a42d:/# ping docker


PING docker (10.180.0.1): 48 data bytes
56 bytes from 10.180.0.1: icmp_seq=0 ttl=254 time=7.600 ms
56 bytes from 10.180.0.1: icmp_seq=1 ttl=254 time=30.705 ms
^C--- docker ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 7.600/19.152/30.705/11.553 ms

Sometimes you need to connect to the Docker host from within your container. To enable this, pass
the Docker host’s IP address to the container using the --add-host flag. To find the host’s address,
use the ip addr show command.
The flags you pass to ip addr show depend on whether you are using IPv4 or IPv6 networking in
your containers. Use the following flags for IPv4 address retrieval for a network device named eth0:
$ HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut
-d / -f 1`
$ docker run --add-host=docker:${HOSTIP} --rm -it debian

For IPv6 use the -6 flag instead of the -4 flag. For other network devices, replace eth0with the
correct device name (for example docker0 for the bridge device).

Set ulimits in container (--ulimit)


Since setting ulimit settings in a container requires extra privileges not available in the default
container, you can set these using the --ulimit flag. --ulimit is specified with a soft and hard limit
as such: <type>=<soft limit>[:<hard limit>], for example:
$ docker run --ulimit nofile=1024:1024 --rm debian sh -c "ulimit -n"
1024

Note: If you do not provide a hard limit, the soft limit is used for both values. If
no ulimits are set, they are inherited from the default ulimits set on the daemon. The as option
is disabled now; in other words, the following script is not supported:
$ docker run -it --ulimit as=1024 fedora /bin/bash

The values are sent to the appropriate syscall as they are set. Docker doesn’t perform any byte
conversion. Take this into account when setting the values.
FOR NPROC USAGE
Be careful setting nproc with the ulimit flag as nproc is designed by Linux to set the maximum
number of processes available to a user, not to a container. For example, start four containers
with daemon user:
$ docker run -d -u daemon --ulimit nproc=3 busybox top

$ docker run -d -u daemon --ulimit nproc=3 busybox top

$ docker run -d -u daemon --ulimit nproc=3 busybox top

$ docker run -d -u daemon --ulimit nproc=3 busybox top


The 4th container fails and reports “[8] System error: resource temporarily unavailable”. It
fails because the caller set nproc=3, resulting in the first three containers using up the
three-process quota set for the daemon user.

Stop container with signal (--stop-signal)


The --stop-signal flag sets the system call signal that will be sent to the container to exit. This
signal can be a valid unsigned number that matches a position in the kernel’s syscall table, for
instance 9, or a signal name in the format SIGNAME, for instance SIGKILL.
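
For example, to have the container’s main process receive SIGINT instead of the default SIGTERM when it is stopped:

$ docker run -d --stop-signal SIGINT busybox top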

Optional security options (--security-opt)


On Windows, this flag can be used to specify the credentialspec option. The credentialspec must
be in the format file://spec.txt or registry://keyname.
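
As a sketch on Windows (spec.txt is a placeholder credential spec file name):

PS C:\> docker run --security-opt "credentialspec=file://spec.txt" microsoft/nanoserver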

Stop container with timeout (--stop-timeout)


The --stop-timeout flag sets the timeout (in seconds) to wait for the container to exit after the
pre-defined (see --stop-signal) system call signal has been sent to it. After the timeout elapses, the
container is killed with SIGKILL.
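
For example, to give a container 30 seconds to shut down cleanly before it is killed:

$ docker run -d --stop-timeout 30 busybox top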

Specify isolation technology for container (--isolation)


This option is useful in situations where you are running Docker containers on Windows. The --
isolation <value> option sets a container’s isolation technology. On Linux, the only supported
value is default, which uses Linux namespaces. These two commands are equivalent on Linux:
$ docker run -d busybox top
$ docker run -d --isolation default busybox top

On Windows, --isolation can take one of these values:

Value Description

default Use the value specified by the Docker daemon’s --exec-opt or system default (see below).

process Shared-kernel namespace isolation (not supported on Windows client operating systems older than Windows 10 1809).

hyperv Hyper-V hypervisor partition-based isolation.


The default isolation on Windows server operating systems is process. The default isolation on
Windows client operating systems is hyperv. An attempt to start a container on a client operating
system older than Windows 10 1809 with --isolation process will fail.
On Windows server, assuming the default configuration, these commands are equivalent and result
in process isolation:
PS C:\> docker run -d microsoft/nanoserver powershell echo process
PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo
process
PS C:\> docker run -d --isolation process microsoft/nanoserver powershell echo
process

If you have set the --exec-opt isolation=hyperv option on the Docker daemon, or are running
against a Windows client-based daemon, these commands are equivalent and result
in hyperv isolation:
PS C:\> docker run -d microsoft/nanoserver powershell echo hyperv
PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo hyperv
PS C:\> docker run -d --isolation hyperv microsoft/nanoserver powershell echo hyperv

Specify hard limits on memory available to containers (-m, --memory)
These parameters always set an upper limit on the memory available to the container. On Linux, this
is set on the cgroup and applications in a container can query it
at /sys/fs/cgroup/memory/memory.limit_in_bytes.

On Windows, this will affect containers differently depending on what type of isolation is used.

 With process isolation, Windows will report the full memory of the host system, not the limit
to applications running inside the container.

PS C:\> docker run -it -m 2GB --isolation=process microsoft/nanoserver powershell Get-ComputerInfo *memory*

CsTotalPhysicalMemory : 17064509440
CsPhyicallyInstalledMemory : 16777216
OsTotalVisibleMemorySize : 16664560
OsFreePhysicalMemory : 14646720
OsTotalVirtualMemorySize : 19154928
OsFreeVirtualMemory : 17197440
OsInUseVirtualMemory : 1957488
OsMaxProcessMemorySize : 137438953344

 With hyperv isolation, Windows will create a utility VM that is big enough to hold the memory
limit, plus the minimal OS needed to host the container. That size is reported as “Total
Physical Memory.”

PS C:\> docker run -it -m 2GB --isolation=hyperv microsoft/nanoserver powershell Get-ComputerInfo *memory*

CsTotalPhysicalMemory : 2683355136
CsPhyicallyInstalledMemory :
OsTotalVisibleMemorySize : 2620464
OsFreePhysicalMemory : 2306552
OsTotalVirtualMemorySize : 2620464
OsFreeVirtualMemory : 2356692
OsInUseVirtualMemory : 263772
OsMaxProcessMemorySize : 137438953344

Configure namespaced kernel parameters (sysctls) at runtime


The --sysctl option sets namespaced kernel parameters (sysctls) in the container. For example, to turn on
IP forwarding in the container’s network namespace, run this command:
$ docker run --sysctl net.ipv4.ip_forward=1 someimage

Note: Not all sysctls are namespaced. Docker does not support changing sysctls inside of a
container that also modify the host system. As the kernel evolves we expect to see more sysctls
become namespaced.

CURRENTLY SUPPORTED SYSCTLS

 IPC Namespace:

kernel.msgmax, kernel.msgmnb, kernel.msgmni, kernel.sem, kernel.shmall, kernel.shmmax, kernel.shmmni, kernel.shm_rmid_forced

Sysctls beginning with fs.mqueue.*

If you use the --ipc=host option, these sysctls will not be allowed.

 Network Namespace:

Sysctls beginning with net.*

If you use the --network=host option, these sysctls will not be allowed.

docker save

Description
Save one or more images to a tar archive (streamed to STDOUT by default)

Usage
docker save [OPTIONS] IMAGE [IMAGE...]

Options
Name, shorthand Default Description

--output , -o Write to a file, instead of STDOUT

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Produces a tarred repository to the standard output stream, containing all parent layers and all tags
and versions, or the specified repo:tag, for each argument provided.

Examples
Create a backup that can then be used with docker load.
$ docker save busybox > busybox.tar

$ ls -sh busybox.tar

2.7M busybox.tar

$ docker save --output busybox.tar busybox

$ ls -sh busybox.tar

2.7M busybox.tar

$ docker save -o fedora-all.tar fedora

$ docker save -o fedora-latest.tar fedora:latest

Save an image to a tar.gz file using gzip


You can use gzip to save the image file and make the backup smaller.

$ docker save myimage:latest | gzip > myimage_latest.tar.gz
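
The compressed archive can later be restored by piping it back through gunzip into docker load (myimage_latest.tar.gz as in the example above):

$ gunzip -c myimage_latest.tar.gz | docker load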

Cherry-pick particular tags


You can even cherry-pick particular tags of an image repository.

$ docker save -o ubuntu.tar ubuntu:lucid ubuntu:saucy

docker search

Description
Search the Docker Hub for images
Usage
docker search [OPTIONS] TERM

Options
Name, shorthand Default Description

deprecated
--automated
Only show automated builds

--filter , -f Filter output based on conditions provided

--format Pretty-print search using a Go template

--limit 25 Max number of search results

--no-trunc Don’t truncate output

deprecated
--stars , -s
Only displays with at least x stars

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Search Docker Hub for images

See Find Public Images on Docker Hub for more details on finding shared images from the
command line.

Note: Search queries return a maximum of 25 results.

Examples
Search images by name
This example displays images with a name containing ‘busybox’:

$ docker search busybox

NAME DESCRIPTION STARS OFFICIAL AUTOMATED
busybox Busybox base image. 316
[OK]
progrium/busybox 50
[OK]
radial/busyboxplus Full-chain, Internet enabled, busybox made... 8
[OK]
odise/busybox-python 2
[OK]
azukiapp/busybox This image is meant to be used as the base... 2
[OK]
ofayau/busybox-jvm Prepare busybox to install a 32 bits JVM. 1
[OK]
shingonoide/archlinux-busybox Arch Linux, a lightweight and flexible Lin... 1
[OK]
odise/busybox-curl 1
[OK]
ofayau/busybox-libc32 Busybox with 32 bits (and 64 bits) libs 1
[OK]
peelsky/zulu-openjdk-busybox 1
[OK]
skomma/busybox-data Docker image suitable for data volume cont... 1
[OK]
elektritter/busybox-teamspeak Lightweight teamspeak3 container based on... 1
[OK]
socketplane/busybox 1
[OK]
oveits/docker-nginx-busybox This is a tiny NginX docker image based on... 0
[OK]
ggtools/busybox-ubuntu Busybox ubuntu version with extra goodies 0
[OK]
nikfoundas/busybox-confd Minimal busybox based distribution of confd 0
[OK]
openshift/busybox-http-app 0
[OK]
jllopis/busybox 0
[OK]
swyckoff/busybox 0
[OK]
powellquiring/busybox 0
[OK]
williamyeh/busybox-sh Docker image for BusyBox's sh 0
[OK]
simplexsys/busybox-cli-powered Docker busybox images, with a few often us... 0
[OK]
fhisamoto/busybox-java Busybox java 0
[OK]
scottabernethy/busybox 0
[OK]
marclop/busybox-solr

Display non-truncated description (--no-trunc)


This example displays images with a name containing ‘busybox’ and at least 3 stars, without
truncating the description in the output:

$ docker search --stars=3 --no-trunc busybox


NAME DESCRIPTION STARS OFFICIAL AUTOMATED
busybox Busybox base image.
325 [OK]
progrium/busybox
50 [OK]
radial/busyboxplus Full-chain, Internet enabled, busybox made from scratch. Comes
in git and cURL flavors. 8 [OK]

Limit search results (--limit)


The --limit flag sets the maximum number of results returned by a search. The value must be between 1 and 100. The default is 25.

Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz")
The currently supported filters are:

 stars (int - number of stars the image has)
 is-automated (boolean - true or false) - is the image automated or not
 is-official (boolean - true or false) - is the image official or not

STARS

This example displays images with a name containing ‘busybox’ and at least 3 stars:

$ docker search --filter stars=3 busybox

NAME                 DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
busybox              Busybox base image.                             325     [OK]
progrium/busybox                                                     50                 [OK]
radial/busyboxplus   Full-chain, Internet enabled, busybox made...   8                  [OK]

IS-AUTOMATED

This example displays images with a name containing ‘busybox’ that are automated builds:

$ docker search --filter is-automated busybox

NAME                 DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
progrium/busybox                                                     50                 [OK]
radial/busyboxplus   Full-chain, Internet enabled, busybox made...   8                  [OK]

IS-OFFICIAL

This example displays images with a name containing ‘busybox’, with at least 3 stars, that are official builds:

$ docker search --filter "is-official=true" --filter "stars=3" busybox

NAME      DESCRIPTION           STARS   OFFICIAL   AUTOMATED
busybox   Busybox base image.   325     [OK]

Format the output


The formatting option (--format) pretty-prints search output using a Go template.

Valid placeholders for the Go template are:

Placeholder Description

.Name Image Name

.Description Image description

.StarCount Number of stars for the image

.IsOfficial “OK” if image is official

.IsAutomated “OK” if image build was automated

When you use the --format option, the search command will output the data exactly as the template
declares. If you use the table directive, column headers are included as well.
The following example uses a template without headers and outputs the Name and StarCount entries
separated by a colon for all images:
$ docker search --format "{{.Name}}: {{.StarCount}}" nginx

nginx: 5441
jwilder/nginx-proxy: 953
richarvey/nginx-php-fpm: 353
million12/nginx-php: 75
webdevops/php-nginx: 70
h3nrik/nginx-ldap: 35
bitnami/nginx: 23
evild/alpine-nginx: 14
million12/nginx: 9
maxexcloo/nginx: 7

This example outputs a table format:

$ docker search --format "table {{.Name}}\t{{.IsAutomated}}\t{{.IsOfficial}}" nginx

NAME AUTOMATED OFFICIAL


nginx [OK]
jwilder/nginx-proxy [OK]
richarvey/nginx-php-fpm [OK]
jrcs/letsencrypt-nginx-proxy-companion [OK]
million12/nginx-php [OK]
webdevops/php-nginx [OK]

docker secret
Estimated reading time: 1 minute

Description
Manage Docker secrets

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker secret COMMAND

Child commands
Command Description

docker secret create Create a secret from a file or STDIN as content

docker secret inspect Display detailed information on one or more secrets

docker secret ls List secrets

docker secret rm Remove one or more secrets

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage secrets.

docker secret create


Estimated reading time: 2 minutes

Description
Create a secret from a file or STDIN as content

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker secret create [OPTIONS] SECRET [file|-]

Options
Name, shorthand Default Description

API 1.31+
--driver , -d
Secret driver

--label , -l Secret labels

API 1.37+
--template-driver
Template driver

Parent command
Command Description

docker secret Manage Docker secrets

Related commands
Command Description

docker secret create Create a secret from a file or STDIN as content

docker secret inspect Display detailed information on one or more secrets

docker secret ls List secrets

docker secret rm Remove one or more secrets

Extended description
Creates a secret using standard input or from a file for the secret content. You must run this
command on a manager node.

For detailed information about using secrets, refer to manage sensitive data with Docker secrets.

Examples
Create a secret
$ printf <secret> | docker secret create my_secret -

onakdyv307se2tl7nl20anokv

$ docker secret ls

ID NAME CREATED UPDATED


onakdyv307se2tl7nl20anokv my_secret 6 seconds ago 6 seconds ago
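The example above pipes the secret with printf rather than echo: echo appends a trailing newline, which would be stored as part of the secret's bytes. A quick illustration in plain shell (no Docker needed; p@ssw0rd is just a placeholder value):

```shell
# Byte counts differ: printf emits exactly the characters given, while echo
# appends a trailing newline that would become part of the stored secret.
printf 'p@ssw0rd' | wc -c | tr -d ' '   # 8
echo 'p@ssw0rd' | wc -c | tr -d ' '     # 9
```

For this reason, prefer printf (or echo -n where supported) when piping secret material to docker secret create.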

Create a secret with a file


$ docker secret create my_secret ./secret.json

dg426haahpi5ezmkkj5kyl3sn

$ docker secret ls

ID NAME CREATED UPDATED


dg426haahpi5ezmkkj5kyl3sn my_secret 7 seconds ago 7 seconds ago

Create a secret with labels


$ docker secret create --label env=dev \
--label rev=20170324 \
my_secret ./secret.json

eo7jnzguqgtpdah3cm5srfb97
$ docker secret inspect my_secret

[
{
"ID": "eo7jnzguqgtpdah3cm5srfb97",
"Version": {
"Index": 17
},
"CreatedAt": "2017-03-24T08:15:09.735271783Z",
"UpdatedAt": "2017-03-24T08:15:09.735271783Z",
"Spec": {
"Name": "my_secret",
"Labels": {
"env": "dev",
"rev": "20170324"
}
}
}
]

docker secret inspect


Estimated reading time: 2 minutes

Description
Display detailed information on one or more secrets

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker secret inspect [OPTIONS] SECRET [SECRET...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template


Name, shorthand Default Description

--pretty Print the information in a human friendly format

Parent command
Command Description

docker secret Manage Docker secrets

Related commands
Command Description

docker secret create Create a secret from a file or STDIN as content

docker secret inspect Display detailed information on one or more secrets

docker secret ls List secrets

docker secret rm Remove one or more secrets

Extended description
Inspects the specified secret. This command has to be run targeting a manager node.

By default, this renders all results in a JSON array. If a format is specified, the given template will be
executed for each result.

Go’s text/template package describes all the details of the format.

For detailed information about using secrets, refer to manage sensitive data with Docker secrets.

Examples
Inspect a secret by name or ID
You can inspect a secret by either its name or ID.
For example, given the following secret:

$ docker secret ls

ID NAME CREATED UPDATED


eo7jnzguqgtpdah3cm5srfb97 my_secret 3 minutes ago 3 minutes ago
$ docker secret inspect my_secret

[
{
"ID": "eo7jnzguqgtpdah3cm5srfb97",
"Version": {
"Index": 17
},
"CreatedAt": "2017-03-24T08:15:09.735271783Z",
"UpdatedAt": "2017-03-24T08:15:09.735271783Z",
"Spec": {
"Name": "my_secret",
"Labels": {
"env": "dev",
"rev": "20170324"
}
}
}
]

Formatting
You can use the --format option to obtain specific information about a secret. The following example
command outputs the creation time of the secret.

$ docker secret inspect --format='{{.CreatedAt}}' eo7jnzguqgtpdah3cm5srfb97

2017-03-24 08:15:09.735271783 +0000 UTC


docker secret ls
Estimated reading time: 4 minutes

Description
List secrets

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker secret ls [OPTIONS]

Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Pretty-print secrets using a Go template

--quiet , -q Only display IDs

Parent command
Command Description

docker secret Manage Docker secrets

Related commands
Command Description

docker secret create Create a secret from a file or STDIN as content

docker secret inspect Display detailed information on one or more secrets

docker secret ls List secrets

docker secret rm Remove one or more secrets

Extended description
Run this command on a manager node to list the secrets in the swarm.

For detailed information about using secrets, refer to manage sensitive data with Docker secrets.

Examples
$ docker secret ls

ID NAME CREATED UPDATED


6697bflskwj1998km1gnnjr38   q5s5570vtvnimefos1fyeo2u2   6 weeks ago     6 weeks ago
9u9hk4br2ej0wgngkga6rp4hq   my_secret                   5 weeks ago     5 weeks ago
mem02h8n73mybpgqjf0kfi1n0   test_secret                 3 seconds ago   3 seconds ago

Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")

The currently supported filters are:

 id (secret’s ID)
 label (label=<key> or label=<key>=<value>)
 name (secret’s name)

ID
The id filter matches all or a prefix of a secret's ID.
$ docker secret ls -f "id=6697bflskwj1998km1gnnjr38"

ID NAME CREATED UPDATED


6697bflskwj1998km1gnnjr38   q5s5570vtvnimefos1fyeo2u2   6 weeks ago   6 weeks ago

LABEL

The label filter matches secrets based on the presence of a label alone or a label and a value.
The following filter matches all secrets with a project label regardless of its value:
$ docker secret ls --filter label=project

ID NAME CREATED UPDATED


mem02h8n73mybpgqjf0kfi1n0   test_secret   About an hour ago   About an hour ago

The following filter matches only secrets with the project label set to the test value.
$ docker secret ls --filter label=project=test

ID NAME CREATED UPDATED


mem02h8n73mybpgqjf0kfi1n0   test_secret   About an hour ago   About an hour ago

NAME

The name filter matches on all or a prefix of a secret's name.

The following filter matches secrets with a name starting with test.
$ docker secret ls --filter name=test_secret

ID NAME CREATED UPDATED


mem02h8n73mybpgqjf0kfi1n0   test_secret   About an hour ago   About an hour ago

Format the output


The formatting option (--format) pretty prints secrets output using a Go template.
Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Secret ID

.Name Secret name

.CreatedAt Time when the secret was created

.UpdatedAt Time when the secret was updated

.Labels All labels assigned to the secret

.Label Value of a specific label for this secret. For example {{.Label "secret.ssh.key"}}

When using the --format option, the secret ls command will either output the data exactly as the
template declares or, when using the table directive, will include column headers as well.
The following example uses a template without headers and outputs the ID and Name entries separated by a colon for all secrets:
$ docker secret ls --format "{{.ID}}: {{.Name}}"

77af4d6b9913: secret-1
b6fa739cedf5: secret-2
78a85c484f71: secret-3

To list all secrets with their name and created date in a table format you can use:

$ docker secret ls --format "table {{.ID}}\t{{.Name}}\t{{.CreatedAt}}"

ID NAME CREATED
77af4d6b9913 secret-1 5 minutes ago
b6fa739cedf5 secret-2 3 hours ago
78a85c484f71 secret-3 10 days ago

docker secret rm
Estimated reading time: 1 minute
Description
Remove one or more secrets

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker secret rm SECRET [SECRET...]

Parent command
Command Description

docker secret Manage Docker secrets

Related commands
Command Description

docker secret create Create a secret from a file or STDIN as content

docker secret inspect Display detailed information on one or more secrets

docker secret ls List secrets

docker secret rm Remove one or more secrets

Extended description
Removes the specified secrets from the swarm. This command has to be run targeting a manager
node.

For detailed information about using secrets, refer to manage sensitive data with Docker secrets.
Examples
This example removes a secret:

$ docker secret rm secret.json


sapth4csdo5b6wz2p5uimh5xg

Warning: Unlike docker rm, this command does not ask for confirmation before removing a secret.

docker service
Estimated reading time: 1 minute

Description
Manage services

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service COMMAND

Child commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services

docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services


Command Description

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage services.

docker service create


Estimated reading time: 35 minutes

Description
Create a new service

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]

Options
Name, shorthand Default Description

API 1.30+
--config
Specify configurations to expose to the service

--constraint Placement constraints

--container-label Container labels

API 1.29+
--credential-spec Credential spec for managed service account (Windows
only)

API 1.29+
--detach , -d Exit immediately instead of waiting for the service to
converge

API 1.25+
--dns
Set custom DNS servers

API 1.25+
--dns-option
Set DNS options

API 1.25+
--dns-search
Set custom DNS search domains

--endpoint-mode vip Endpoint mode (vip or dnsrr)

--entrypoint Overwrite the default ENTRYPOINT of the image

--env , -e Set environment variables

--env-file Read in a file of environment variables

--generic-resource User defined resources

API 1.25+
--group Set one or more supplementary user groups for the
container

API 1.25+
--health-cmd
Command to run to check health

API 1.25+
--health-interval
Time between running the check (ms|s|m|h)
API 1.25+
--health-retries
Consecutive failures needed to report unhealthy

API 1.29+
--health-start-period
Start period for the container to initialize before counting retries towards unstable (ms|s|m|h)

API 1.25+
--health-timeout
Maximum time to allow one check to run (ms|s|m|h)

API 1.25+
--host
Set one or more custom host-to-IP mappings (host:ip)

API 1.25+
--hostname
Container hostname

API 1.37+
--init Use an init inside each service container to forward
signals and reap processes

API 1.35+
--isolation
Service container isolation mode

--label , -l Service labels

--limit-cpu Limit CPUs

--limit-memory Limit Memory

--log-driver Logging driver for service

--log-opt Logging driver options

--mode replicated Service mode (replicated or global)

--mount Attach a filesystem mount to the service

--name Service name

--network Network attachments

API 1.25+
--no-healthcheck
Disable any container-specified HEALTHCHECK

API 1.30+
--no-resolve-image Do not query the registry to resolve image digest and
supported platforms

API 1.28+
--placement-pref
Add a placement preference

--publish , -p Publish a port as a node port

--quiet , -q Suppress progress output

API 1.28+
--read-only
Mount the container’s root filesystem as read only

--replicas Number of tasks

API 1.40+
--replicas-max-per-node
Maximum number of tasks per node (default 0 = unlimited)

--reserve-cpu Reserve CPUs

--reserve-memory Reserve Memory

--restart-condition Restart when condition is met (“none”|”on-failure”|”any”) (default “any”)

--restart-delay Delay between restart attempts (ns|us|ms|s|m|h) (default 5s)

--restart-max-attempts Maximum number of restarts before giving up

--restart-window Window used to evaluate the restart policy (ns|us|ms|s|m|h)

API 1.28+
--rollback-delay
Delay between task rollbacks (ns|us|ms|s|m|h) (default 0s)

API 1.28+
--rollback-failure-action
Action on rollback failure (“pause”|”continue”) (default “pause”)

API 1.28+
--rollback-max-failure-ratio
Failure rate to tolerate during a rollback (default 0)

API 1.28+
--rollback-monitor
Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h) (default 5s)

API 1.29+
--rollback-order
Rollback order (“start-first”|”stop-first”) (default “stop-first”)

API 1.28+
--rollback-parallelism 1
Maximum number of tasks rolled back simultaneously (0 to roll back all at once)

API 1.25+
--secret
Specify secrets to expose to the service

--stop-grace-period Time to wait before force killing a container (ns|us|ms|s|m|h) (default 10s)

API 1.28+
--stop-signal
Signal to stop the container

API 1.40+
--sysctl
Sysctl options

API 1.25+
--tty , -t
Allocate a pseudo-TTY

--update-delay Delay between updates (ns|us|ms|s|m|h) (default 0s)

--update-failure-action Action on update failure (“pause”|”continue”|”rollback”) (default “pause”)

API 1.25+
--update-max-failure-ratio
Failure rate to tolerate during an update (default 0)

API 1.25+
--update-monitor
Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 5s)

API 1.29+
--update-order
Update order (“start-first”|”stop-first”) (default “stop-first”)

--update-parallelism 1 Maximum number of tasks updated simultaneously (0 to update all at once)

--user , -u Username or UID (format: <name|uid>[:<group|gid>])

--with-registry-auth Send registry authentication details to swarm agents

--workdir , -w Working directory inside the container

Parent command
Command Description

docker service Manage services

Related commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services

docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service


Extended description
Creates a service as described by the specified parameters. You must run this command on a
manager node.

Examples
Create a service
$ docker service create --name redis redis:3.0.6

dmu1ept4cxcfe8k8lhtux3ro3

$ docker service create --mode global --name redis2 redis:3.0.6

a8q9dasaafudfs8q8w32udass

$ docker service ls

ID NAME MODE REPLICAS IMAGE


dmu1ept4cxcf redis replicated 1/1 redis:3.0.6
a8q9dasaafud redis2 global 1/1 redis:3.0.6

CREATE A SERVICE USING AN IMAGE ON A PRIVATE REGISTRY

If your image is available on a private registry which requires login, use the --with-registry-auth flag with docker service create, after logging in. If your image is stored on registry.example.com, which is a private registry, use a command like the following:
$ docker login registry.example.com

$ docker service create \


--with-registry-auth \
--name my_service \
registry.example.com/acme/my_image:latest
This passes the login token from your local client to the swarm nodes where the service is deployed,
using the encrypted WAL logs. With this information, the nodes are able to log into the registry and
pull the image.

Create a service with 5 replica tasks (--replicas)


Use the --replicas flag to set the number of replica tasks for a replicated service. The following
command creates a redis service with 5 replica tasks:
$ docker service create --name redis --replicas=5 redis:3.0.6

4cdgfyky7ozwh3htjfw0d12qv

The above command sets the desired number of tasks for the service. Even though the command
returns immediately, actual scaling of the service may take some time. The REPLICAS column shows
both the actual and desired number of replica tasks for the service.
In the following example the desired state is 5 replicas, but the current number of RUNNING tasks is 3:
$ docker service ls

ID NAME MODE REPLICAS IMAGE


4cdgfyky7ozw redis replicated 3/5 redis:3.0.7

Once all the tasks are created and RUNNING, the actual number of tasks is equal to the desired
number:
$ docker service ls

ID NAME MODE REPLICAS IMAGE


4cdgfyky7ozw redis replicated 5/5 redis:3.0.7
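When scripting against docker service ls output, the REPLICAS value can be split with plain shell parameter expansion. A sketch (the 3/5 value here is assumed, standing in for a captured column):

```shell
# REPLICAS reads "<running>/<desired>"; strip from the "/" in each direction.
REPLICAS="3/5"                 # e.g. captured from `docker service ls` output
RUNNING="${REPLICAS%%/*}"      # text before the slash -> 3
DESIRED="${REPLICAS##*/}"      # text after the slash  -> 5
if [ "$RUNNING" = "$DESIRED" ]; then STATE="converged"; else STATE="scaling"; fi
echo "running=$RUNNING desired=$DESIRED state=$STATE"
```

This avoids spawning awk or cut for a simple two-field split.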

Create a service with secrets


Use the --secret flag to give a container access to a secret.

Create a service specifying a secret:

$ docker service create --name redis --secret secret.json redis:3.0.6


4cdgfyky7ozwh3htjfw0d12qv

Create a service specifying the secret, target, user/group ID, and mode:

$ docker service create --name redis \


--secret source=ssh-key,target=ssh \
--secret source=app-key,target=app,uid=1000,gid=1001,mode=0400 \
redis:3.0.6

4cdgfyky7ozwh3htjfw0d12qv

To grant a service access to multiple secrets, use multiple --secret flags.


Secrets are located in /run/secrets in the container. If no target is specified, the name of the secret is used as the in-memory file name in the container. If a target is specified, that is used as the filename. In the example above, two files are created: /run/secrets/ssh and /run/secrets/app, one for each of the secret targets specified.
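Inside the container, an application or entrypoint script simply reads these files. A sketch of that pattern, using a temp directory to stand in for /run/secrets so the snippet runs without Docker (the secret value hunter2 is a placeholder):

```shell
# Sketch of how an entrypoint script might consume a mounted secret file.
SECRET_DIR="$(mktemp -d)"              # real containers read from /run/secrets
printf 'hunter2' > "$SECRET_DIR/app"   # simulates the mounted target "app"
APP_KEY="$(cat "$SECRET_DIR/app")"     # load the secret into a variable
echo "loaded secret of length ${#APP_KEY}"
rm -rf "$SECRET_DIR"
```

Reading the file at startup, rather than passing the value via an environment variable, keeps the secret out of `docker inspect` output and process listings.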

Create a service with a rolling update policy


$ docker service create \
--replicas 10 \
--name redis \
--update-delay 10s \
--update-parallelism 2 \
redis:3.0.6

When you run a service update, the scheduler updates a maximum of 2 tasks at a time,
with 10s between updates. For more information, refer to the rolling updates tutorial.
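A back-of-envelope calculation shows the minimum time such a rollout spends waiting: 10 replicas updated 2 at a time gives 5 batches, with the configured delay between consecutive batches. This counts only the delays, not task stop/start or monitoring time, so treat it as a lower bound:

```shell
# Lower bound on rollout wait time for the policy above.
REPLICAS=10; PARALLELISM=2; DELAY=10   # DELAY in seconds (--update-delay 10s)
BATCHES=$(( (REPLICAS + PARALLELISM - 1) / PARALLELISM ))   # ceiling division
MIN_DELAY_TOTAL=$(( (BATCHES - 1) * DELAY ))                # delays between batches
echo "batches=$BATCHES min_total_delay=${MIN_DELAY_TOTAL}s"
```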

Set environment variables (-e, --env)


This sets an environment variable for all tasks in a service. For example:

$ docker service create \


--name redis_2 \
--replicas 5 \
--env MYVAR=foo \
redis:3.0.6

To specify multiple environment variables, specify multiple --env flags, each with a separate key-
value pair.
$ docker service create \
--name redis_2 \
--replicas 5 \
--env MYVAR=foo \
--env MYVAR2=bar \
redis:3.0.6

Create a service with specific hostname (--hostname)


This option sets the hostname of the service's containers to a specific string. For example:

$ docker service create --name redis --hostname myredis redis:3.0.6

Set metadata on a service (-l, --label)


A label is a key=value pair that applies metadata to a service. To label a service with two labels:
$ docker service create \
--name redis_2 \
--label com.example.foo="bar" \
--label bar=baz \
redis:3.0.6

For more information about labels, refer to apply custom metadata.

Add bind mounts, volumes or memory filesystems


Docker supports four different kinds of mounts, which allow containers to read from or write to files or directories, either on the host operating system, or on memory filesystems. These types are data volumes (often referred to simply as volumes), bind mounts, tmpfs mounts, and named pipes.

A bind mount makes a file or directory on the host available to the container it is mounted within. A
bind mount may be either read-only or read-write. For example, a container might share its host’s
DNS information by means of a bind mount of the host’s /etc/resolv.conf or a container might write
logs to its host’s /var/log/myContainerLogs directory. If you use bind mounts and your host and
containers have different notions of permissions, access controls, or other such details, you will run
into portability issues.

A named volume is a mechanism for decoupling persistent data needed by your container from the
image used to create the container and from the host machine. Named volumes are created and
managed by Docker, and a named volume persists even when no container is currently using it.
Data in named volumes can be shared between a container and the host machine, as well as
between multiple containers. Docker uses a volume driver to create, manage, and mount volumes.
You can back up or restore volumes using Docker commands.

A tmpfs mounts a tmpfs inside a container for volatile data.

A npipe mounts a named pipe from the host into the container.

Consider a situation where your image starts a lightweight web server. You could use that image as
a base image, copy in your website’s HTML files, and package that into another image. Each time
your website changed, you’d need to build a new image and redeploy all of the containers
serving your website. A better solution is to store the website in a named volume which is attached
to each of your web server containers when they start. To update the website, you just update the
named volume.

For more information about named volumes, see Data Volumes.

The following table describes options which apply to both bind mounts and named volumes in a
service:

Option Required Description

type
The type of mount, can be either volume, bind, tmpfs, or npipe. Defaults to volume if no type is specified.

 volume: mounts a managed volume into the container.
 bind: bind-mounts a directory or file from the host into the container.
 tmpfs: mounts a tmpfs in the container.
 npipe: mounts a named pipe from the host into the container (Windows containers only).

src or source
Required for type=bind and type=npipe.

 type=volume: src is an optional way to specify the name of the volume (for example, src=my-volume). If the named volume does not exist, it is automatically created. If no src is specified, the volume is assigned a random name which is guaranteed to be unique on the host, but may not be unique cluster-wide. A randomly-named volume has the same lifecycle as its container and is destroyed when the container is destroyed (which is upon service update, or when scaling or re-balancing the service).
 type=bind: src is required, and specifies an absolute path to the file or directory to bind-mount (for example, src=/path/on/host/). An error is produced if the file or directory does not exist.
 type=tmpfs: src is not supported.

dst or destination or target
Required. Mount path inside the container, for example /some/path/in/container/. If the path does not exist in the container's filesystem, the Engine creates a directory at the specified location before mounting the volume or bind mount.

readonly or ro
The Engine mounts binds and volumes read-write unless the readonly option is given when mounting the bind or volume. Note that setting readonly for a bind-mount does not make its submounts readonly on the current Linux implementation. See also bind-nonrecursive.

 true or 1 or no value: Mounts the bind or volume read-only.
 false or 0: Mounts the bind or volume read-write.
OPTIONS FOR BIND MOUNTS

The following options can only be used for bind mounts (type=bind):
Option Description

bind-propagation
See the bind propagation section.

consistency
The consistency requirements for the mount; one of

 default: Equivalent to consistent.
 consistent: Full consistency. The container runtime and the host maintain an identical view of the mount at all times.
 cached: The host's view of the mount is authoritative. There may be delays before updates made on the host are visible within a container.
 delegated: The container runtime's view of the mount is authoritative. There may be delays before updates made in a container are visible on the host.

bind-nonrecursive
By default, submounts are recursively bind-mounted as well. However, this behavior can be confusing when a bind mount is configured with the readonly option, because submounts are not mounted as read-only. Set bind-nonrecursive to disable recursive bind-mount.

A value is optional:

 true or 1: Disables recursive bind-mount.
 false or 0: Default if you do not provide a value. Enables recursive bind-mount.

Bind propagation

Bind propagation refers to whether or not mounts created within a given bind mount or named volume can be propagated to replicas of that mount. Consider a mount point /mnt, which is also mounted on /tmp. The propagation settings control whether a mount on /tmp/a would also be available on /mnt/a. Each propagation setting has a recursive counterpart. In the case of recursion, consider that /tmp/a is also mounted as /foo. The propagation settings control whether /mnt/a and/or /tmp/a would exist.
The bind-propagation option defaults to rprivate for both bind mounts and volume mounts, and is only configurable for bind mounts. In other words, named volumes do not support bind propagation.

 shared: Sub-mounts of the original mount are exposed to replica mounts, and sub-mounts of
replica mounts are also propagated to the original mount.
 slave: similar to a shared mount, but only in one direction. If the original mount exposes a
sub-mount, the replica mount can see it. However, if the replica mount exposes a sub-mount,
the original mount cannot see it.
 private: The mount is private. Sub-mounts within it are not exposed to replica mounts, and
sub-mounts of replica mounts are not exposed to the original mount.
 rshared: The same as shared, but the propagation also extends to and from mount points
nested within any of the original or replica mount points.
 rslave: The same as slave, but the propagation also extends to and from mount points
nested within any of the original or replica mount points.
 rprivate: The default. The same as private, meaning that no mount points anywhere within
the original or replica mount points propagate in either direction.

For more information about bind propagation, see the Linux kernel documentation for shared
subtree.

OPTIONS FOR NAMED VOLUMES

The following options can only be used for named volumes (type=volume):
Option Description

volume-driver
Name of the volume-driver plugin to use for the volume. Defaults to "local", to use the local volume driver to create the volume if the volume does not exist.

volume-label
One or more custom metadata ("labels") to apply to the volume upon creation. For example, volume-label=mylabel=hello-world,my-other-label=hello-mars. For more information about labels, refer to apply custom metadata.

volume-nocopy
By default, if you attach an empty volume to a container, and files or directories already existed at the mount-path in the container (dst), the Engine copies those files and directories into the volume, allowing the host to access them. Set volume-nocopy to disable copying files from the container's filesystem to the volume and mount the empty volume.

A value is optional:

 true or 1: Default if you do not provide a value. Disables copying.
 false or 0: Enables copying.

volume-opt
Options specific to a given volume driver, which will be passed to the driver when creating the volume. Options are provided as a comma-separated list of key/value pairs, for example, volume-opt=some-option=some-value,volume-opt=some-other-option=some-other-value. For available options for a given driver, refer to that driver's documentation.
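As one illustration, a volume-opt list can ask the default local driver to back a volume with an NFS export. The server name, volume name, and export path below are hypothetical placeholders, and the exact options the driver accepts are driver-specific; this only sketches how the spec string is composed:

```shell
# Compose a --mount spec that passes NFS options through volume-opt pairs.
NFS_SERVER="nfs.example.com"   # hypothetical NFS host
MOUNT_SPEC="type=volume,src=nfsvol,dst=/data,volume-opt=type=nfs,volume-opt=device=${NFS_SERVER}:/exports/data"
echo "$MOUNT_SPEC"
```

The resulting string would then be given to docker service create as the value of --mount.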

OPTIONS FOR TMPFS

The following options can only be used for tmpfs mounts (type=tmpfs):

Option       Description

tmpfs-size   Size of the tmpfs mount in bytes. Unlimited by default in Linux.

tmpfs-mode   File mode of the tmpfs in octal (e.g. "700" or "0700"). Defaults
             to "1777" in Linux.
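For illustration, a tmpfs mount using both options might look like the following sketch (the service name and the size and mode values are arbitrary):

```shell
# Sketch: a 100 MB tmpfs at /app/cache with mode 1770; values are illustrative.
$ docker service create \
  --name my-tmpfs-service \
  --mount type=tmpfs,destination=/app/cache,tmpfs-size=100000000,tmpfs-mode=1770 \
  nginx:alpine
```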

DIFFERENCES BETWEEN “--mount” AND “--volume”

The --mount flag supports most options that are supported by the -v or --volume flag for docker run,
with some important exceptions:
 The --mount flag allows you to specify a volume driver and volume driver options per
volume, without creating the volumes in advance. In contrast, docker run allows you to
specify a single volume driver which is shared by all volumes, using the --volume-
driver flag.
 The --mount flag allows you to specify custom metadata (“labels”) for a volume, before the
volume is created.
 When you use --mount with type=bind, the host-path must refer to an existing path on the
host. The path will not be created for you and the service will fail with an error if the path
does not exist.
 The --mount flag does not allow you to relabel a volume with Z or z flags, which are used
for selinux labeling.

CREATE A SERVICE USING A NAMED VOLUME

The following example creates a service that uses a named volume:

$ docker service create \
  --name my-service \
  --replicas 3 \
  --mount type=volume,source=my-volume,destination=/path/in/container,volume-label="color=red",volume-label="shape=round" \
  nginx:alpine

For each replica of the service, the engine requests a volume named “my-volume” from the default
(“local”) volume driver where the task is deployed. If the volume does not exist, the engine creates a
new volume and applies the “color” and “shape” labels.

When the task is started, the volume is mounted on /path/in/container/ inside the container.

Be aware that the default (“local”) volume is a locally scoped volume driver. This means that
depending on where a task is deployed, either that task gets a new volume named “my-volume”, or
shares the same “my-volume” with other tasks of the same service. Multiple containers writing to a
single shared volume can cause data corruption if the software running inside the container is not
designed to handle concurrent processes writing to the same location. Also take into account that
containers can be re-scheduled by the Swarm orchestrator and be deployed on a different node.

CREATE A SERVICE THAT USES AN ANONYMOUS VOLUME

The following command creates a service with three replicas with an anonymous volume
on /path/in/container:
$ docker service create \
--name my-service \
--replicas 3 \
--mount type=volume,destination=/path/in/container \
nginx:alpine
In this example, no name (source) is specified for the volume, so a new volume is created for each
task. This guarantees that each task gets its own volume, and volumes are not shared between
tasks. Anonymous volumes are removed after the task using them is complete.

CREATE A SERVICE THAT USES A BIND-MOUNTED HOST DIRECTORY

The following example bind-mounts a host directory at /path/in/container in the containers backing
the service:
$ docker service create \
--name my-service \
--mount type=bind,source=/path/on/host,destination=/path/in/container \
nginx:alpine

Set service mode (--mode)


The service mode determines whether this is a replicated service or a global service. A replicated
service runs as many tasks as specified, while a global service runs on each active node in the
swarm.

The following command creates a global service:

$ docker service create \
  --name redis_2 \
  --mode global \
  redis:3.0.6

Specify service constraints (--constraint)


You can limit the set of nodes where a task can be scheduled by defining constraint expressions.
Multiple constraints find nodes that satisfy every expression (AND match). Constraints can match
node or Docker Engine labels as follows:

node attribute   matches                   example

node.id          Node ID                   node.id==2ivku8v2gvtg4

node.hostname    Node hostname             node.hostname!=node-2

node.role        Node role                 node.role==manager

node.labels      user-defined node labels  node.labels.security==high

engine.labels    Docker Engine's labels    engine.labels.operatingsystem==ubuntu 14.04

engine.labels apply to Docker Engine labels like operating system, drivers, etc. Swarm
administrators add node.labels for operational purposes by using the docker node
update command.
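As a sketch, an administrator could add the node label used in the next example with docker node update (the node name worker1 is an assumption):

```shell
# Hypothetical: label the node "worker1" so that constraints on
# node.labels.type can match it.
$ docker node update --label-add type=queue worker1
```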

For example, the following limits tasks for the redis service to nodes where the node type label
equals queue:

$ docker service create \
  --name redis_2 \
  --constraint 'node.labels.type == queue' \
  redis:3.0.6

Specify service placement preferences (--placement-pref)


You can set up the service to divide tasks evenly over different categories of nodes. One example of
where this can be useful is to balance tasks over a set of datacenters or availability zones. The
example below illustrates this:

$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref 'spread=node.labels.datacenter' \
  redis:3.0.6

This uses --placement-pref with a spread strategy (currently the only supported strategy) to spread
tasks evenly over the values of the datacenter node label. In this example, we assume that every
node has a datacenter node label attached to it. If there are three different values of this label
among nodes in the swarm, one third of the tasks will be placed on the nodes associated with each
value. This is true even if there are more nodes with one value than another. For example, consider
the following set of nodes:

 Three nodes with node.labels.datacenter=east
 Two nodes with node.labels.datacenter=south
 One node with node.labels.datacenter=west

Since we are spreading over the values of the datacenter label and the service has 9 replicas, 3
replicas will end up in each datacenter. There are three nodes associated with the value east, so
each one will get one of the three replicas reserved for this value. There are two nodes with the
value south, and the three replicas for this value will be divided between them, with one receiving
two replicas and another receiving just one. Finally, west has a single node that will get all three
replicas reserved for west.
If the nodes in one category (for example, those with node.labels.datacenter=south) can’t handle
their fair share of tasks due to constraints or resource limitations, the extra tasks will be assigned to
other nodes instead, if possible.
Both engine labels and node labels are supported by placement preferences. The example above
uses a node label, because the label is referenced with node.labels.datacenter. To spread over the
values of an engine label, use --placement-pref spread=engine.labels.<labelname>.
It is possible to add multiple placement preferences to a service. This establishes a hierarchy of
preferences, so that tasks are first divided over one category, and then further divided over
additional categories. One example of where this may be useful is dividing tasks fairly between
datacenters, and then splitting the tasks within each datacenter over a choice of racks. To add
multiple placement preferences, specify the --placement-pref flag multiple times. The order is
significant, and the placement preferences will be applied in the order given when making
scheduling decisions.

The following example sets up a service with multiple placement preferences. Tasks are spread first
over the various datacenters, and then over racks (as indicated by the respective labels):

$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref 'spread=node.labels.datacenter' \
  --placement-pref 'spread=node.labels.rack' \
  redis:3.0.6

When updating a service with docker service update, --placement-pref-add appends a new
placement preference after all existing placement preferences. --placement-pref-rm removes an
existing placement preference that matches the argument.
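A minimal sketch of both update flags, applied to the redis_2 service from the examples above:

```shell
# Append a new preference, then remove it again (sketch).
$ docker service update --placement-pref-add 'spread=node.labels.rack' redis_2
$ docker service update --placement-pref-rm 'spread=node.labels.rack' redis_2
```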

Specify maximum replicas per node (--replicas-max-per-node)


Use the --replicas-max-per-node flag to set the maximum number of replica tasks that can run on a
node. The following command creates an nginx service with 2 replica tasks but only one replica task
per node.
One example where this can be useful is to balance tasks over a set of datacenters together with
--placement-pref, and to let the --replicas-max-per-node setting ensure that replicas are not
migrated to another datacenter during maintenance or datacenter failure.

The example below illustrates this:

$ docker service create \
  --name nginx \
  --replicas 2 \
  --replicas-max-per-node 1 \
  --placement-pref 'spread=node.labels.datacenter' \
  nginx

Attach a service to an existing network (--network)


You can use overlay networks to connect one or more services within the swarm.

First, create an overlay network on a manager node using the docker network create command:

$ docker network create --driver overlay my-network

etjpu59cykrptrgw0z0hk5snf

After you create an overlay network in swarm mode, all manager nodes have access to the network.

When you create a service, pass the --network flag to attach the service to the overlay network:
$ docker service create \
--replicas 3 \
--network my-network \
--name my-web \
nginx

716thylsndqma81j6kkkb5aus

The swarm extends my-network to each node running the service.


Containers on the same network can access each other using service discovery.

The long form syntax of --network allows you to specify a list of aliases and driver options:
--network name=my-network,alias=web1,driver-opt=field1=value1

Publish service ports externally to the swarm (-p, --publish)


You can publish service ports to make them available externally to the swarm using the --
publish flag. The --publish flag can take two different styles of arguments. The short version is
positional, and allows you to specify the published port and target port separated by a colon.
$ docker service create --name my_web --replicas 3 --publish 8080:80 nginx

There is also a long format, which is easier to read and allows you to specify more options. The long
format is preferred. You cannot specify the service’s mode when using the short format. Here is an
example of using the long format for the same service as above:

$ docker service create --name my_web --replicas 3 --publish published=8080,target=80 nginx

The options you can specify are:

published and target port
  Short syntax: --publish 8080:80
  Long syntax:  --publish published=8080,target=80
  Description:  The target port within the container and the port to map it to
                on the nodes, using the routing mesh (ingress) or host-level
                networking. More options are available, later in this listing.
                The key-value syntax is preferred, because it is somewhat
                self-documenting.

mode
  Short syntax: Not possible to set using short syntax.
  Long syntax:  --publish published=8080,target=80,mode=host
  Description:  The mode to use for binding the port, either ingress or host.
                Defaults to ingress to use the routing mesh.

protocol
  Short syntax: --publish 8080:80/tcp
  Long syntax:  --publish published=8080,target=80,protocol=tcp
  Description:  The protocol to use, tcp, udp, or sctp. Defaults to tcp. To
                bind a port for both protocols, specify the -p or --publish
                flag twice.

When you publish a service port using ingress mode, the swarm routing mesh makes the service
accessible at the published port on every node, regardless of whether a task for the service is
running on the node. If you use host mode, the port is only bound on nodes where the service is
running, and a given port on a node can only be bound once. You can only set the publication mode
using the long syntax. For more information, refer to Use swarm mode routing mesh.
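As an illustration of host-mode publishing with the long syntax (same service shape as the earlier my_web example):

```shell
# Sketch: bind port 8080 only on nodes that run a task of the service.
$ docker service create \
  --name my_web \
  --replicas 3 \
  --publish published=8080,target=80,mode=host \
  nginx
```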

Provide credential specs for managed service accounts


(Windows only)
This option is only used for services using Windows containers. The --credential-spec must be in
the format file://<filename> or registry://<value-name>.
When using the file://<filename> format, the referenced file must be present in
the CredentialSpecs subdirectory in the docker data directory, which defaults
to C:\ProgramData\Docker\ on Windows. For example,
specifying file://spec.json loads C:\ProgramData\Docker\CredentialSpecs\spec.json.
When using the registry://<value-name> format, the credential spec is read from the Windows
registry on the daemon’s host. The specified registry value must be located in:
HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Virtualization\Containers\CredentialSpecs
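A hedged sketch of the file-based form; the spec filename and the Windows base image are assumptions:

```shell
# Hypothetical: loads C:\ProgramData\Docker\CredentialSpecs\spec.json on the
# Windows node running the task.
$ docker service create \
  --name my-win-service \
  --credential-spec file://spec.json \
  microsoft/windowsservercore
```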

Create services using templates


You can use templates for some flags of service create, using the syntax provided by
Go’s text/template package.
The following flags are supported:

 --hostname
 --mount
 --env

Valid placeholders for the Go template are listed below:

Placeholder Description

.Service.ID Service ID

.Service.Name Service name

.Service.Labels Service labels

.Node.ID Node ID

.Node.Hostname Node Hostname

.Task.ID Task ID

.Task.Name Task name

.Task.Slot Task slot

TEMPLATE EXAMPLE

In this example, the hostname of each created container is set from a template combining the
service’s name with the ID and hostname of the node where the task runs.

$ docker service create --name hosttempl \
  --hostname="{{.Node.Hostname}}-{{.Node.ID}}-{{.Service.Name}}" \
  busybox top

va8ew30grofhjoychbr6iot8c

$ docker service ps va8ew30grofhjoychbr6iot8c

ID            NAME         IMAGE                                                                                    NODE          DESIRED STATE  CURRENT STATE               ERROR  PORTS
wo41w8hg8qan  hosttempl.1  busybox:latest@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912  2e7a8a9c4da2  Running        Running about a minute ago

$ docker inspect --format="{{.Config.Hostname}}" 2e7a8a9c4da2-wo41w8hg8qanxwjwsg4kxpprj-hosttempl

x3ti0erg11rjpg64m75kej2mz-hosttempl

Specify isolation mode (Windows)


By default, tasks scheduled on Windows nodes are run using the default isolation mode configured
for this particular node. To force a specific isolation mode, you can use the --isolation flag:
$ docker service create --name myservice --isolation=process microsoft/nanoserver

Supported isolation modes on Windows are:

 default: use default settings specified on the node running the task
 process: use process isolation (Windows server only)
 hyperv: use Hyper-V isolation

Create services requesting Generic Resources


You can narrow the kind of nodes your task can land on by using the --generic-resource
flag (if the nodes advertise these resources):

$ docker service create --name cuda \
  --generic-resource "NVIDIA-GPU=2" \
  --generic-resource "SSD=1" \
  nvidia/cuda

docker service inspect



Description
Display detailed information on one or more services
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service inspect [OPTIONS] SERVICE [SERVICE...]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

--pretty Print the information in a human friendly format

Parent command
Command Description

docker service Manage services

Related commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services

docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services

docker service rm Remove one or more services


Command Description

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Extended description
Inspects the specified service. This command has to be run targeting a manager node.

By default, this renders all results in a JSON array. If a format is specified, the given template will be
executed for each result.

Go’s text/template package describes all the details of the format.

Examples
Inspect a service by name or ID
You can inspect a service either by its name or by its ID.

For example, given the following service:

$ docker service ls
ID NAME MODE REPLICAS IMAGE
dmu1ept4cxcf redis replicated 3/3 redis:3.0.6

Both docker service inspect redis, and docker service inspect dmu1ept4cxcf produce the same
result:
$ docker service inspect redis

[
    {
        "ID": "dmu1ept4cxcfe8k8lhtux3ro3",
        "Version": {
            "Index": 12
        },
        "CreatedAt": "2016-06-17T18:44:02.558012087Z",
        "UpdatedAt": "2016-06-17T18:44:02.558012087Z",
        "Spec": {
            "Name": "redis",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "redis:3.0.6"
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {}
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {},
            "EndpointSpec": {
                "Mode": "vip"
            }
        },
        "Endpoint": {
            "Spec": {}
        }
    }
]
$ docker service inspect dmu1ept4cxcf

[
    {
        "ID": "dmu1ept4cxcfe8k8lhtux3ro3",
        "Version": {
            "Index": 12
        },
        ...
    }
]

Formatting
You can print the inspect output in a human-readable format instead of the default JSON output, by
using the --pretty option:
$ docker service inspect --pretty frontend

ID:             c8wgl7q4ndfd52ni6qftkvnnp
Name:           frontend
Labels:
 - org.example.projectname=demo-app
Service Mode:   REPLICATED
 Replicas:      5
Placement:
UpdateConfig:
 Parallelism:   0
 On failure:    pause
 Max failure ratio: 0
ContainerSpec:
 Image:         nginx:alpine
Resources:
Networks:       net1
Endpoint Mode:  vip
Ports:
 PublishedPort = 4443
  Protocol = tcp
  TargetPort = 443
  PublishMode = ingress

You can also use --format pretty for the same effect.

FIND THE NUMBER OF TASKS RUNNING AS PART OF A SERVICE

The --format option can be used to obtain specific information about a service. For example, the
following command outputs the number of replicas of the “redis” service.
$ docker service inspect --format='{{.Spec.Mode.Replicated.Replicas}}' redis

10
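Similarly, other fields from the JSON output shown earlier can be extracted; for example, the service's image (a non-authoritative sketch using the field path visible in that output):

```shell
$ docker service inspect --format='{{.Spec.TaskTemplate.ContainerSpec.Image}}' redis
```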

docker service logs



Description
Fetch the logs of a service or task

API 1.29+ The client and daemon API must both be at least 1.29 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service logs [OPTIONS] SERVICE|TASK

Options
Name, shorthand    Default   Description

--details                    API 1.30+ Show extra details provided to logs

--follow , -f                Follow log output

--no-resolve                 Do not map IDs to Names in output

--no-task-ids                Do not include task IDs in output

--no-trunc                   Do not truncate output

--raw                        API 1.30+ Do not neatly format logs

--since                      Show logs since timestamp (e.g. 2013-01-02T13:23:37)
                             or relative (e.g. 42m for 42 minutes)

--tail             all       Number of lines to show from the end of the logs

--timestamps , -t            Show timestamps

Parent command
Command Description

docker service Manage services

Related commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services

docker service logs Fetch the logs of a service or task

docker service ls List services


Command Description

docker service ps List the tasks of one or more services

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Extended description
The docker service logs command batch-retrieves logs present at the time of execution.
The docker service logs command can be used with either the name or ID of a service, or with the
ID of a task. If a service is passed, it will display logs for all of the containers in that service. If a task
is passed, it will only display logs from that particular task.
Note: This command is only functional for services that are started with the json-file
or journald logging driver.

For more information about selecting and configuring logging drivers, refer to Configure logging
drivers.

The docker service logs --follow command will continue streaming the new output from the
service’s STDOUT and STDERR.
Passing a negative number or a non-integer to --tail is invalid and the value is set to all in that
case.
The docker service logs --timestamps command will add an RFC3339Nano timestamp, for
example 2014-09-16T06:17:46.000000000Z, to each log entry. To ensure that the timestamps are
aligned, the nano-second part of the timestamp will be padded with zero when necessary.
The docker service logs --details command will add on extra attributes, such as environment
variables and labels, provided to --log-opt when creating the service.
The --since option shows only the service logs generated after a given date. You can specify the
date as an RFC 3339 date, a UNIX timestamp, or a Go duration string (e.g. 1m30s, 3h). Besides
RFC3339 date format you may also use RFC3339Nano, 2006-01-02T15:04:05,2006-01-
02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the client will be
used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp. When
providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of seconds
that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds (aka Unix
epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more than nine
digits long. You can combine the --since option with either or both of the --follow or --
tail options.
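Combining these options can be sketched as follows (the service name my-web is hypothetical):

```shell
# Stream logs from the last 30 minutes and keep following new output.
$ docker service logs --since 30m --follow my-web
```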

docker service ls

Description
List services

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service ls [OPTIONS]

Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Pretty-print services using a Go template

--quiet , -q Only display IDs

Parent command
Command Description

docker service Manage services


Related commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services

docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Extended description
When run targeting a manager node, this command lists the services running in the swarm.

Examples
On a manager node:

$ docker service ls

ID            NAME      MODE        REPLICAS  IMAGE
c8wgl7q4ndfd  frontend  replicated  5/5       nginx:alpine
dmu1ept4cxcf  redis     replicated  3/3       redis:3.0.6
iwe3278osahj  mongo     global      7/7       mongo:3.3

The REPLICAS column shows both the actual and desired number of tasks for the service.
Filtering
The filtering flag (-f or --filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 id
 label
 mode
 name

ID

The id filter matches all or part of a service’s id.


$ docker service ls -f "id=0bcjw"
ID NAME MODE REPLICAS IMAGE
0bcjwfh8ychr redis replicated 1/1 redis:3.0.6

LABEL

The label filter matches services based on the presence of a label alone or a label and a value.
The following filter matches all services with a project label regardless of its value:
$ docker service ls --filter label=project
ID NAME MODE REPLICAS IMAGE
01sl1rp6nj5u frontend2 replicated 1/1 nginx:alpine
36xvvwwauej0 frontend replicated 5/5 nginx:alpine
74nzcxxjv6fq backend replicated 3/3 redis:3.0.6

The following filter matches only services with the project label with the project-a value.
$ docker service ls --filter label=project=project-a
ID NAME MODE REPLICAS IMAGE
36xvvwwauej0 frontend replicated 5/5 nginx:alpine
74nzcxxjv6fq backend replicated 3/3 redis:3.0.6

MODE

The mode filter matches on the mode (either replicated or global) of a service.
The following filter matches only global services.
$ docker service ls --filter mode=global
ID            NAME  MODE    REPLICAS  IMAGE
w7y0v2yrn620  top   global  1/1       busybox

NAME

The name filter matches on all or part of a service’s name.


The following filter matches services with a name containing redis.
$ docker service ls --filter name=redis
ID NAME MODE REPLICAS IMAGE
0bcjwfh8ychr redis replicated 1/1 redis:3.0.6

Formatting
The formatting options (--format) pretty-prints services output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Service ID

.Name Service name

.Mode Service mode (replicated, global)

.Replicas Service replicas

.Image Service image

.Ports Service ports published in ingress mode

When using the --format option, the service ls command will either output the data exactly as the
template declares or, when using the table directive, include column headers as well.
The following example uses a template without headers and outputs the ID, Mode,
and Replicas entries separated by a colon for all services:
$ docker service ls --format "{{.ID}}: {{.Mode}} {{.Replicas}}"

0zmvwuiu3vue: replicated 10/10
fm6uf97exkul: global 5/5

docker service ps

Description
List the tasks of one or more services

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service ps [OPTIONS] SERVICE [SERVICE...]

Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Pretty-print tasks using a Go template

--no-resolve Do not map IDs to Names

--no-trunc Do not truncate output

--quiet , -q Only display task IDs

Parent command
Command Description

docker service Manage services


Related commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services

docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Extended description
Lists the tasks that are running as part of the specified services. This command has to be run
targeting a manager node.

Examples
List the tasks that are part of a service
The following command shows all the tasks that are part of the redis service:
$ docker service ps redis

ID            NAME      IMAGE        NODE      DESIRED STATE  CURRENT STATE      ERROR  PORTS
0qihejybwf1x redis.1 redis:3.0.5 manager1 Running Running 8 seconds
bk658fpbex0d redis.2 redis:3.0.5 worker2 Running Running 9 seconds
5ls5s5fldaqg redis.3 redis:3.0.5 worker1 Running Running 9 seconds
8ryt076polmc redis.4 redis:3.0.5 worker1 Running Running 9 seconds
1x0v8yomsncd redis.5 redis:3.0.5 manager1 Running Running 8 seconds
71v7je3el7rr redis.6 redis:3.0.5 worker2 Running Running 9 seconds
4l3zm9b7tfr7 redis.7 redis:3.0.5 worker2 Running Running 9 seconds
9tfpyixiy2i7 redis.8 redis:3.0.5 worker1 Running Running 9 seconds
3w1wu13yupln redis.9 redis:3.0.5 manager1 Running Running 8 seconds
8eaxrb2fqpbn redis.10 redis:3.0.5 manager1 Running Running 8 seconds

In addition to running tasks, the output also shows the task history. For example, after updating the
service to use the redis:3.0.6 image, the output may look like this:
$ docker service ps redis

ID            NAME        IMAGE        NODE      DESIRED STATE  CURRENT STATE                ERROR  PORTS
50qe8lfnxaxk redis.1 redis:3.0.6 manager1 Running Running 6 seconds
ago
ky2re9oz86r9 \_ redis.1 redis:3.0.5 manager1 Shutdown Shutdown 8 seconds
ago
3s46te2nzl4i redis.2 redis:3.0.6 worker2 Running Running less than a
second ago
nvjljf7rmor4 \_ redis.2 redis:3.0.6 worker2 Shutdown Rejected 23 seconds
ago "No such image: redis@sha256:6…"
vtiuz2fpc0yb \_ redis.2 redis:3.0.5 worker2 Shutdown Shutdown 1 second
ago
jnarweeha8x4 redis.3 redis:3.0.6 worker1 Running Running 3 seconds
ago
vs448yca2nz4 \_ redis.3 redis:3.0.5 worker1 Shutdown Shutdown 4 seconds
ago
jf1i992619ir redis.4 redis:3.0.6 worker1 Running Running 10 seconds
ago
blkttv7zs8ee \_ redis.4 redis:3.0.5 worker1 Shutdown Shutdown 11 seconds
ago

The number of items in the task history is determined by the --task-history-limit option that was
set when initializing the swarm. You can change the task history retention limit using the docker
swarm update command.
When deploying a service, docker resolves the digest for the service’s image, and pins the service to
that digest. The digest is not shown by default, but is printed if --no-trunc is used. The --no-trunc
option also shows the non-truncated task ID and error messages, as can be seen below:

$ docker service ps --no-trunc redis

ID NAME IMAGE
NODE DESIRED STATE CURRENT STATE ERROR
PORTS
50qe8lfnxaxksi9w2a704wkp7 redis.1
redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842
manager1 Running Running 5 minutes ago
ky2re9oz86r9556i2szb8a8af \_ redis.1
redis:3.0.5@sha256:f8829e00d95672c48c60f468329d6693c4bdd28d1f057e755f8ba8b40008682e
worker2 Shutdown Shutdown 5 minutes ago
bk658fpbex0d57cqcwoe3jthu redis.2
redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842
worker2 Running Running 5 seconds
nvjljf7rmor4htv7l8rwcx7i7 \_ redis.2
redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842
worker2 Shutdown Rejected 5 minutes ago "No such image:
redis@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842"

Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz"). Multiple filter flags are combined
as an OR filter. For example, -f name=redis.1 -f name=redis.7 returns
both redis.1 and redis.7 tasks.
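The OR combination described above can be written out as:

```shell
# Returns both the redis.1 and redis.7 tasks.
$ docker service ps -f "name=redis.1" -f "name=redis.7" redis
```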

The currently supported filters are:

 id
 name
 node
 desired-state

ID

The id filter matches on all or a prefix of a task’s ID.


$ docker service ps -f "id=8" redis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
ERROR PORTS
8ryt076polmc redis.4 redis:3.0.6 worker1 Running Running 9 seconds
8eaxrb2fqpbn redis.10 redis:3.0.6 manager1 Running Running 8 seconds

NAME

The name filter matches on task names.


$ docker service ps -f "name=redis.1" redis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
PORTS
qihejybwf1x5 redis.1 redis:3.0.6 manager1 Running Running 8 seconds

NODE

The node filter matches on a node name or a node ID.


$ docker service ps -f "node=manager1" redis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
ERROR PORTS
0qihejybwf1x redis.1 redis:3.0.6 manager1 Running Running 8 seconds
1x0v8yomsncd redis.5 redis:3.0.6 manager1 Running Running 8 seconds
3w1wu13yupln redis.9 redis:3.0.6 manager1 Running Running 8 seconds
8eaxrb2fqpbn redis.10 redis:3.0.6 manager1 Running Running 8 seconds

DESIRED-STATE

The desired-state filter can take the values running, shutdown, or accepted.
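For example (a sketch against the redis service used in the earlier examples), only tasks whose desired state is running:

```shell
$ docker service ps -f "desired-state=running" redis
```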

Formatting
The formatting options (--format) pretty-prints tasks output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Task ID

.Name Task name


Placeholder Description

.Image Task image

.Node Node ID

.DesiredState Desired state of the task (running, shutdown, or accepted)

.CurrentState Current state of the task

.Error Error

.Ports Task published ports

When using the --format option, the service ps command will either output the data exactly as the
template declares or, when using the table directive, include column headers as well.
The following example uses a template without headers and outputs the Name and Image entries
separated by a colon for all tasks:
$ docker service ps --format "{{.Name}}: {{.Image}}" top
top.1: busybox
top.2: busybox
top.3: busybox
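With the table directive, the same placeholders also print a header row. A sketch (using the same hypothetical top service; column spacing will vary):

```shell
# Print a header row plus one tab-separated line per task.
# (Illustrative only; requires a swarm with a service named "top".)
docker service ps --format "table {{.Name}}\t{{.CurrentState}}" top
```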

docker service rollback



Description
Revert changes to a service’s configuration

API 1.31+ The client and daemon API must both be at least 1.31 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service rollback [OPTIONS] SERVICE
Options
Name, shorthand Default Description

--detach , -d Exit immediately instead of waiting for the service to converge (API 1.29+)

--quiet , -q Suppress progress output

Parent command
Command Description

docker service Manage services

Related commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services

docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Extended description
Roll back a specified service to its previous version from the swarm. This command must be run
targeting a manager node.

Examples
Roll back to the previous version of a service
Use the docker service rollback command to roll back to the previous version of a service. After
executing this command, the service is reverted to the configuration that was in place before the
most recent docker service update command.

The following example creates a service with a single replica, updates the service to use three
replicas, and then rolls back the service to the previous version, having one replica.

Create a service with a single replica:

$ docker service create --name my-service -p 8080:80 nginx:alpine

Confirm that the service is running with a single replica:

$ docker service ls

ID NAME MODE REPLICAS IMAGE PORTS
xbw728mf6q0d my-service replicated 1/1 nginx:alpine *:8080->80/tcp

Update the service to use three replicas:

$ docker service update --replicas=3 my-service

$ docker service ls

ID NAME MODE REPLICAS IMAGE PORTS
xbw728mf6q0d my-service replicated 3/3 nginx:alpine *:8080->80/tcp

Now roll back the service to its previous version, and confirm it is running a single replica again:
$ docker service rollback my-service

$ docker service ls

ID NAME MODE REPLICAS IMAGE PORTS
xbw728mf6q0d my-service replicated 1/1 nginx:alpine *:8080->80/tcp

docker service rm

Description
Remove one or more services

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service rm SERVICE [SERVICE...]

Parent command
Command Description

docker service Manage services

Related commands
Command Description

docker service create Create a new service



docker service inspect Display detailed information on one or more services

docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Extended description
Removes the specified services from the swarm. This command has to be run targeting a manager
node.

Examples
Remove the redis service:
$ docker service rm redis

redis

$ docker service ls

ID NAME MODE REPLICAS IMAGE

Warning: Unlike docker rm, this command does not ask for confirmation before removing a running
service.
docker service scale

Description
Scale one or multiple replicated services

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service scale SERVICE=REPLICAS [SERVICE=REPLICAS...]

Options
Name, shorthand Default Description

--detach , -d Exit immediately instead of waiting for the service to converge (API 1.29+)

Parent command
Command Description

docker service Manage services

Related commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services



docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Extended description
The scale command enables you to scale one or more replicated services either up or down to the
desired number of replicas. This command cannot be applied to services in global mode.
The command returns immediately, but the actual scaling of the service may take some time. To
stop all replicas of a service while keeping the service active in the swarm, set the scale to 0.
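For example, to stop all replicas of a hypothetical frontend service while keeping the service defined in the swarm, following the same pattern as the examples below:

```shell
# Scale to zero: no tasks run, but the service definition remains in the swarm.
docker service scale frontend=0

# Later, scale back up to resume serving.
docker service scale frontend=50
```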

Examples
Scale a single service
The following command scales the “frontend” service to 50 tasks.

$ docker service scale frontend=50

frontend scaled to 50

The following command tries to scale a global service to 10 tasks and returns an error.

$ docker service create --mode global --name backend backend:latest

b4g08uwuairexjub6ome6usqh
$ docker service scale backend=10

backend: scale can only be used with replicated mode

Directly after scaling, run docker service ls to see the actual number of replicas:
$ docker service ls --filter name=frontend

ID NAME MODE REPLICAS IMAGE
3pr5mlvu3fh9 frontend replicated 15/50 nginx:alpine

You can also scale a service using the docker service update command. The following commands
are equivalent:
$ docker service scale frontend=50
$ docker service update --replicas=50 frontend

Scale multiple services


The docker service scale command allows you to set the desired number of tasks for multiple
services at once. The following example scales both the backend and frontend services:
$ docker service scale backend=3 frontend=5

backend scaled to 3
frontend scaled to 5

$ docker service ls

ID NAME MODE REPLICAS IMAGE
3pr5mlvu3fh9 frontend replicated 5/5 nginx:alpine
74nzcxxjv6fq backend replicated 3/3 redis:3.0.6

docker service update


Description
Update a service

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker service update [OPTIONS] SERVICE

Options
Name, shorthand Default Description

--args Service command args

--config-add Add or update a config file on a service (API 1.30+)

--config-rm Remove a configuration file (API 1.30+)

--constraint-add Add or update a placement constraint

--constraint-rm Remove a constraint

--container-label-add Add or update a container label

--container-label-rm Remove a container label by its key

--credential-spec Credential spec for managed service account (Windows only) (API 1.29+)

--detach , -d Exit immediately instead of waiting for the service to converge (API 1.29+)

--dns-add Add or update a custom DNS server (API 1.25+)

--dns-option-add Add or update a DNS option (API 1.25+)

--dns-option-rm Remove a DNS option (API 1.25+)

--dns-rm Remove a custom DNS server (API 1.25+)

--dns-search-add Add or update a custom DNS search domain (API 1.25+)

--dns-search-rm Remove a DNS search domain (API 1.25+)

--endpoint-mode Endpoint mode (vip or dnsrr)

--entrypoint Overwrite the default ENTRYPOINT of the image

--env-add Add or update an environment variable

--env-rm Remove an environment variable

--force Force update even if no changes require it (API 1.25+)

--generic-resource-add Add a Generic resource

--generic-resource-rm Remove a Generic resource

--group-add Add an additional supplementary user group to the container (API 1.25+)

--group-rm Remove a previously added supplementary user group from the container (API 1.25+)

--health-cmd Command to run to check health (API 1.25+)

--health-interval Time between running the check (ms|s|m|h) (API 1.25+)

--health-retries Consecutive failures needed to report unhealthy (API 1.25+)

--health-start-period Start period for the container to initialize before counting retries towards unstable (ms|s|m|h) (API 1.29+)

--health-timeout Maximum time to allow one check to run (ms|s|m|h) (API 1.25+)

--host-add Add a custom host-to-IP mapping (host:ip) (API 1.32+)

--host-rm Remove a custom host-to-IP mapping (host:ip) (API 1.25+)

--hostname Container hostname (API 1.25+)

--image Service image tag

--init Use an init inside each service container to forward signals and reap processes (API 1.37+)

--isolation Service container isolation mode (API 1.35+)

--label-add Add or update a service label

--label-rm Remove a label by its key

--limit-cpu Limit CPUs

--limit-memory Limit Memory

--log-driver Logging driver for service

--log-opt Logging driver options

--mount-add Add or update a mount on a service

--mount-rm Remove a mount by its target path

--network-add Add a network (API 1.29+)

--network-rm Remove a network (API 1.29+)

--no-healthcheck Disable any container-specified HEALTHCHECK (API 1.25+)

--no-resolve-image Do not query the registry to resolve image digest and supported platforms (API 1.30+)

--placement-pref-add Add a placement preference (API 1.28+)

--placement-pref-rm Remove a placement preference (API 1.28+)

--publish-add Add or update a published port

--publish-rm Remove a published port by its target port

--quiet , -q Suppress progress output

--read-only Mount the container’s root filesystem as read only (API 1.28+)

--replicas Number of tasks

--replicas-max-per-node Maximum number of tasks per node (default 0 = unlimited) (API 1.40+)

--reserve-cpu Reserve CPUs

--reserve-memory Reserve Memory

--restart-condition Restart when condition is met (“none”|”on-failure”|”any”)

--restart-delay Delay between restart attempts (ns|us|ms|s|m|h)

--restart-max-attempts Maximum number of restarts before giving up

--restart-window Window used to evaluate the restart policy (ns|us|ms|s|m|h)

--rollback Rollback to previous specification (API 1.25+)

--rollback-delay Delay between task rollbacks (ns|us|ms|s|m|h) (API 1.28+)

--rollback-failure-action Action on rollback failure (“pause”|”continue”) (API 1.28+)

--rollback-max-failure-ratio Failure rate to tolerate during a rollback (API 1.28+)

--rollback-monitor Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h) (API 1.28+)

--rollback-order Rollback order (“start-first”|”stop-first”) (API 1.29+)

--rollback-parallelism Maximum number of tasks rolled back simultaneously (0 to roll back all at once) (API 1.28+)

--secret-add Add or update a secret on a service (API 1.25+)

--secret-rm Remove a secret (API 1.25+)

--stop-grace-period Time to wait before force killing a container (ns|us|ms|s|m|h)

--stop-signal Signal to stop the container (API 1.28+)

--sysctl-add Add or update a Sysctl option (API 1.40+)

--sysctl-rm Remove a Sysctl option (API 1.40+)

--tty , -t Allocate a pseudo-TTY (API 1.25+)

--update-delay Delay between updates (ns|us|ms|s|m|h)

--update-failure-action Action on update failure (“pause”|”continue”|”rollback”)

--update-max-failure-ratio Failure rate to tolerate during an update (API 1.25+)

--update-monitor Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (API 1.25+)

--update-order Update order (“start-first”|”stop-first”) (API 1.29+)

--update-parallelism Maximum number of tasks updated simultaneously (0 to update all at once)

--user , -u Username or UID (format: <name|uid>[:<group|gid>])

--with-registry-auth Send registry authentication details to swarm agents

--workdir , -w Working directory inside the container

Parent command
Command Description

docker service Manage services

Related commands
Command Description

docker service create Create a new service

docker service inspect Display detailed information on one or more services



docker service logs Fetch the logs of a service or task

docker service ls List services

docker service ps List the tasks of one or more services

docker service rm Remove one or more services

docker service rollback Revert changes to a service’s configuration

docker service scale Scale one or multiple replicated services

docker service update Update a service

Extended description
Updates a service as described by the specified parameters. This command has to be run targeting
a manager node. The parameters are the same as docker service create. Please look at the
description there for further information.
Normally, updating a service will only cause the service’s tasks to be replaced with new ones if a
change to the service requires recreating the tasks for it to take effect. For example, only changing
the --update-parallelism setting will not recreate the tasks, because the individual tasks are not
affected by this setting. However, the --force flag will cause the tasks to be recreated anyway. This
can be used to perform a rolling restart without any changes to the service parameters.

Examples
Update a service
$ docker service update --limit-cpu 2 redis

Perform a rolling restart with no parameter changes


$ docker service update --force --update-parallelism 1 --update-delay 30s redis
In this example, the --force flag causes the service’s tasks to be shut down and replaced with new
ones even though none of the other parameters would normally cause that to happen. The --
update-parallelism 1 setting ensures that only one task is replaced at a time (this is the default
behavior). The --update-delay 30s setting introduces a 30 second delay between tasks, so that the
rolling restart happens gradually.

Add or remove mounts


Use the --mount-add or --mount-rm options to add or remove a service’s bind mounts or volumes.
The following example creates a service which mounts the test-data volume to /somewhere. The
next step updates the service to also mount the other-volume volume to /somewhere-else.
The last step unmounts the /somewhere mount point, effectively removing the test-data volume.
Each command returns the service name.
 The --mount-add flag takes the same parameters as the --mount flag on service create.
Refer to the volumes and bind mounts section in the service create reference for details.
 The --mount-rm flag takes the target path of the mount.
$ docker service create \
    --name=myservice \
    --mount type=volume,source=test-data,target=/somewhere \
    nginx:alpine \
    myservice

myservice

$ docker service update \
    --mount-add type=volume,source=other-volume,target=/somewhere-else \
    myservice

myservice

$ docker service update --mount-rm /somewhere myservice


myservice

Add or remove published service ports


Use the --publish-add or --publish-rm flags to add or remove a published port for a service. You
can use the short or long syntax discussed in the docker service create reference.

The following example adds a published service port to an existing service.

$ docker service update \
    --publish-add published=8080,target=80 \
    myservice

Add or remove network


Use the --network-add or --network-rm flags to add or remove a network for a service. You can use
the short or long syntax discussed in the docker service create reference.

The following example adds a new alias name to an existing service already connected to network
my-network:

$ docker service update \
    --network-rm my-network \
    --network-add name=my-network,alias=web1 \
    myservice

Roll back to the previous version of a service


Use the --rollback option to roll back to the previous version of the service.
This will revert the service to the configuration that was in place before the most recent docker
service update command.

The following example updates the number of replicas for the service from 4 to 5, and then rolls back
to the previous configuration.

$ docker service update --replicas=5 web

web
$ docker service ls

ID NAME MODE REPLICAS IMAGE
80bvrzp6vxf3 web replicated 0/5 nginx:alpine

Roll back the web service...


$ docker service update --rollback web

web

$ docker service ls

ID NAME MODE REPLICAS IMAGE
80bvrzp6vxf3 web replicated 0/4 nginx:alpine

Other options can be combined with --rollback as well, for example, --update-delay 0s to execute
the rollback without a delay between tasks:
$ docker service update \
    --rollback \
    --update-delay 0s \
    web

web

Services can also be set up to roll back to the previous version automatically when an update fails.
To set up a service for automatic rollback, use --update-failure-action=rollback. A rollback will
be triggered if the fraction of the tasks which failed to update successfully exceeds the value given
with --update-max-failure-ratio.

The rate, parallelism, and other parameters of a rollback operation are determined by the values
passed with the following flags:

 --rollback-delay
 --rollback-failure-action
 --rollback-max-failure-ratio
 --rollback-monitor
 --rollback-parallelism

For example, a service set up with --update-parallelism 1 --rollback-parallelism 3 will update


one task at a time during a normal update, but during a rollback, 3 tasks at a time will get rolled
back. These rollback parameters are respected both during automatic rollbacks and for rollbacks
initiated manually using --rollback.
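Putting these flags together, the following sketch configures an update to roll back automatically (the service name web, the image, and the ratio value are illustrative, not from the reference):

```shell
# Update the image; if more than 20% of updated tasks fail to come up,
# automatically roll back, three tasks at a time.
docker service update \
    --update-failure-action=rollback \
    --update-max-failure-ratio=0.2 \
    --rollback-parallelism=3 \
    --image=nginx:alpine \
    web
```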

Add or remove secrets


Use the --secret-add or --secret-rm options to add or remove a service’s secrets.
The following example adds a secret named ssh-2 and removes ssh-1:
$ docker service update \
--secret-add source=ssh-2,target=ssh-2 \
--secret-rm ssh-1 \
myservice

Update services using templates


Some flags of service update support the use of templating. See service create for the reference.
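As one sketch of templating (the service name myservice is hypothetical, and the placeholders follow the service create reference), the container hostname can be derived from task placement:

```shell
# Give each task a hostname built from the node's hostname and the task slot.
docker service update \
    --hostname="{{.Node.Hostname}}-{{.Task.Slot}}" \
    myservice
```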

Specify isolation mode (Windows)


service update supports the same --isolation flag as service create. See the service create
reference for details.

docker stack

Description
Manage Docker stacks

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker stack [OPTIONS] COMMAND

Options
Name, shorthand Default Description

--kubeconfig Kubernetes config file (Kubernetes)

--orchestrator Orchestrator to use (swarm|kubernetes|all)

Child commands
Command Description

docker stack deploy Deploy a new stack or update an existing stack

docker stack ls List stacks

docker stack ps List the tasks in the stack

docker stack rm Remove one or more stacks

docker stack services List the services in the stack

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage stacks.
docker stack deploy

Description
Deploy a new stack or update an existing stack

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker stack deploy [OPTIONS] STACK

Options
Name, shorthand Default Description

--bundle-file Path to a Distributed Application Bundle file (experimental (daemon), Swarm)

--compose-file , -c Path to a Compose file, or “-“ to read from stdin (API 1.25+)

--namespace Kubernetes namespace to use (Kubernetes)

--prune Prune services that are no longer referenced (API 1.27+, Swarm)

--resolve-image always Query the registry to resolve image digest and supported platforms (“always”|”changed”|”never”) (API 1.30+, Swarm)

--with-registry-auth Send registry authentication details to Swarm agents (Swarm)

--kubeconfig Kubernetes config file (Kubernetes)

--orchestrator Orchestrator to use (swarm|kubernetes|all)


Parent command
Command Description

docker stack Manage Docker stacks

Related commands
Command Description

docker stack deploy Deploy a new stack or update an existing stack

docker stack ls List stacks

docker stack ps List the tasks in the stack

docker stack rm Remove one or more stacks

docker stack services List the services in the stack

Extended description
Create and update a stack from a compose or a dab file on the swarm. This command has to be run
targeting a manager node.

Examples
Compose file
The deploy command supports compose file version 3.0 and above.
$ docker stack deploy --compose-file docker-compose.yml vossibility

Ignoring unsupported options: links

Creating network vossibility_vossibility


Creating network vossibility_default
Creating service vossibility_nsqd
Creating service vossibility_logstash
Creating service vossibility_elasticsearch
Creating service vossibility_kibana
Creating service vossibility_ghollector
Creating service vossibility_lookupd

The Compose file can also be provided as standard input with --compose-file -:
$ cat docker-compose.yml | docker stack deploy --compose-file - vossibility

Ignoring unsupported options: links

Creating network vossibility_vossibility


Creating network vossibility_default
Creating service vossibility_nsqd
Creating service vossibility_logstash
Creating service vossibility_elasticsearch
Creating service vossibility_kibana
Creating service vossibility_ghollector
Creating service vossibility_lookupd

If your configuration is split between multiple Compose files, e.g. a base configuration and
environment-specific overrides, you can provide multiple --compose-file flags.
$ docker stack deploy --compose-file docker-compose.yml -c docker-compose.prod.yml
vossibility

Ignoring unsupported options: links

Creating network vossibility_vossibility


Creating network vossibility_default
Creating service vossibility_nsqd
Creating service vossibility_logstash
Creating service vossibility_elasticsearch
Creating service vossibility_kibana
Creating service vossibility_ghollector
Creating service vossibility_lookupd

You can verify that the services were correctly created:

$ docker service ls

ID NAME MODE REPLICAS IMAGE
29bv0vnlm903 vossibility_lookupd replicated 1/1 nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
4awt47624qwh vossibility_nsqd replicated 1/1 nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
4tjx9biia6fs vossibility_elasticsearch replicated 1/1 elasticsearch@sha256:12ac7c6af55d001f71800b83ba91a04f716e58d82e748fa6e5a7359eed2301aa
7563uuzr9eys vossibility_kibana replicated 1/1 kibana@sha256:6995a2d25709a62694a937b8a529ff36da92ebee74bafd7bf00e6caf6db2eb03
9gc5m4met4he vossibility_logstash replicated 1/1 logstash@sha256:2dc8bddd1bb4a5a34e8ebaf73749f6413c101b2edef6617f2f7713926d2141fe
axqh55ipl40h vossibility_vossibility-collector replicated 1/1 icecrime/vossibility-collector@sha256:f03f2977203ba6253988c18d04061c5ec7aab46bca9dfd89a9a1fa4500989fba

DAB file
$ docker stack deploy --bundle-file vossibility-stack.dab vossibility

Loading bundle from vossibility-stack.dab


Creating service vossibility_elasticsearch
Creating service vossibility_kibana
Creating service vossibility_logstash
Creating service vossibility_lookupd
Creating service vossibility_nsqd
Creating service vossibility_vossibility-collector

You can verify that the services were correctly created:

$ docker service ls

ID NAME MODE REPLICAS IMAGE
29bv0vnlm903 vossibility_lookupd replicated 1/1 nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
4awt47624qwh vossibility_nsqd replicated 1/1 nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
4tjx9biia6fs vossibility_elasticsearch replicated 1/1 elasticsearch@sha256:12ac7c6af55d001f71800b83ba91a04f716e58d82e748fa6e5a7359eed2301aa
7563uuzr9eys vossibility_kibana replicated 1/1 kibana@sha256:6995a2d25709a62694a937b8a529ff36da92ebee74bafd7bf00e6caf6db2eb03
9gc5m4met4he vossibility_logstash replicated 1/1 logstash@sha256:2dc8bddd1bb4a5a34e8ebaf73749f6413c101b2edef6617f2f7713926d2141fe
axqh55ipl40h vossibility_vossibility-collector replicated 1/1 icecrime/vossibility-collector@sha256:f03f2977203ba6253988c18d04061c5ec7aab46bca9dfd89a9a1fa4500989fba

docker stack ps

Description
List the tasks in the stack

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker stack ps [OPTIONS] STACK

Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Pretty-print tasks using a Go template

--namespace Kubernetes namespace to use (Kubernetes)

--no-resolve Do not map IDs to Names

--no-trunc Do not truncate output

--quiet , -q Only display task IDs

--kubeconfig Kubernetes config file (Kubernetes)

--orchestrator Orchestrator to use (swarm|kubernetes|all)

Parent command
Command Description

docker stack Manage Docker stacks

Related commands
Command Description

docker stack deploy Deploy a new stack or update an existing stack

docker stack ls List stacks

docker stack ps List the tasks in the stack

docker stack rm Remove one or more stacks

docker stack services List the services in the stack

Extended description
Lists the tasks that are running as part of the specified stack. This command has to be run targeting
a manager node.

Examples
List the tasks that are part of a stack
The following command shows all the tasks that are part of the voting stack:
$ docker stack ps voting
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xim5bcqtgk1b voting_worker.1 dockersamples/examplevotingapp_worker:latest node2 Running Running 2 minutes ago
q7yik0ks1in6 voting_result.1 dockersamples/examplevotingapp_result:before node1 Running Running 2 minutes ago
rx5yo0866nfx voting_vote.1 dockersamples/examplevotingapp_vote:before node3 Running Running 2 minutes ago
tz6j82jnwrx7 voting_db.1 postgres:9.4 node1 Running Running 2 minutes ago
w48spazhbmxc voting_redis.1 redis:alpine node2 Running Running 3 minutes ago
6jj1m02freg1 voting_visualizer.1 dockersamples/visualizer:stable node1 Running Running 2 minutes ago
kqgdmededccb voting_vote.2 dockersamples/examplevotingapp_vote:before node2 Running Running 2 minutes ago
t72q3z038jeh voting_redis.2 redis:alpine node3 Running Running 3 minutes ago

Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz"). Multiple filter flags are combined
as an OR filter. For example, -f name=redis.1 -f name=redis.7 returns
both redis.1 and redis.7 tasks.

The currently supported filters are:

 id
 name
 node
 desired-state

ID

The id filter matches on all or a prefix of a task’s ID.


$ docker stack ps -f "id=t" voting
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
tz6j82jnwrx7 voting_db.1 postgres:9.4 node1 Running Running 14 minutes ago
t72q3z038jeh voting_redis.2 redis:alpine node3 Running Running 14 minutes ago

NAME

The name filter matches on task names.


$ docker stack ps -f "name=voting_redis" voting
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
w48spazhbmxc voting_redis.1 redis:alpine node2 Running Running 17 minutes ago
t72q3z038jeh voting_redis.2 redis:alpine node3 Running Running 17 minutes ago

NODE

The node filter matches on a node name or a node ID.


$ docker stack ps -f "node=node1" voting
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
q7yik0ks1in6 voting_result.1 dockersamples/examplevotingapp_result:before node1 Running Running 18 minutes ago
tz6j82jnwrx7 voting_db.1 postgres:9.4 node1 Running Running 18 minutes ago
6jj1m02freg1 voting_visualizer.1 dockersamples/visualizer:stable node1 Running Running 18 minutes ago

DESIRED-STATE

The desired-state filter can take the values running, shutdown, or accepted.
$ docker stack ps -f "desired-state=running" voting
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xim5bcqtgk1b voting_worker.1 dockersamples/examplevotingapp_worker:latest node2 Running Running 21 minutes ago
q7yik0ks1in6 voting_result.1 dockersamples/examplevotingapp_result:before node1 Running Running 21 minutes ago
rx5yo0866nfx voting_vote.1 dockersamples/examplevotingapp_vote:before node3 Running Running 21 minutes ago
tz6j82jnwrx7 voting_db.1 postgres:9.4 node1 Running Running 21 minutes ago
w48spazhbmxc voting_redis.1 redis:alpine node2 Running Running 21 minutes ago
6jj1m02freg1 voting_visualizer.1 dockersamples/visualizer:stable node1 Running Running 21 minutes ago
kqgdmededccb voting_vote.2 dockersamples/examplevotingapp_vote:before node2 Running Running 21 minutes ago
t72q3z038jeh voting_redis.2 redis:alpine node3 Running Running 21 minutes ago

Formatting
The formatting option (--format) pretty-prints task output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Task ID

.Name Task name

.Image Task image

.Node Node ID

.DesiredState Desired state of the task (running, shutdown, or accepted)

.CurrentState Current state of the task

.Error Error

.Ports Task published ports


When using the --format option, the stack ps command either outputs the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the Name and Image entries
separated by a colon for all tasks:
$ docker stack ps --format "{{.Name}}: {{.Image}}" voting
voting_worker.1: dockersamples/examplevotingapp_worker:latest
voting_result.1: dockersamples/examplevotingapp_result:before
voting_vote.1: dockersamples/examplevotingapp_vote:before
voting_db.1: postgres:9.4
voting_redis.1: redis:alpine
voting_visualizer.1: dockersamples/visualizer:stable
voting_vote.2: dockersamples/examplevotingapp_vote:before
voting_redis.2: redis:alpine

Do not map IDs to Names


The --no-resolve option shows IDs for task names, without mapping IDs to Names.
$ docker stack ps --no-resolve voting
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xim5bcqtgk1b 10z9fjfqzsxnezo4hb81p8mqg.1 dockersamples/examplevotingapp_worker:latest qaqt4nrzo775jrx6detglho01 Running Running 30 minutes ago
q7yik0ks1in6 hbxltua1na7mgqjnidldv5m65.1 dockersamples/examplevotingapp_result:before mxpaef1tlh23s052erw88a4w5 Running Running 30 minutes ago
rx5yo0866nfx qyprtqw1g5nrki557i974ou1d.1 dockersamples/examplevotingapp_vote:before kanqcxfajd1r16wlnqcblobmm Running Running 31 minutes ago
tz6j82jnwrx7 122f0xxngg17z52be7xspa72x.1 postgres:9.4 mxpaef1tlh23s052erw88a4w5 Running Running 31 minutes ago
w48spazhbmxc tg61x8myx563ueo3urmn1ic6m.1 redis:alpine qaqt4nrzo775jrx6detglho01 Running Running 31 minutes ago
6jj1m02freg1 8cqlyi444kzd3panjb7edh26v.1 dockersamples/visualizer:stable mxpaef1tlh23s052erw88a4w5 Running Running 31 minutes ago
kqgdmededccb qyprtqw1g5nrki557i974ou1d.2 dockersamples/examplevotingapp_vote:before qaqt4nrzo775jrx6detglho01 Running Running 31 minutes ago
t72q3z038jeh tg61x8myx563ueo3urmn1ic6m.2 redis:alpine kanqcxfajd1r16wlnqcblobmm Running Running 31 minutes ago

Do not truncate output


When deploying a service, docker resolves the digest for the service’s image, and pins the service to
that digest. The digest is not shown by default, but is printed if --no-trunc is used. The --no-
trunc option also shows the non-truncated task IDs and error messages, as can be seen below:

$ docker stack ps --no-trunc voting


ID                         NAME                 IMAGE                                                                                                                        NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
xim5bcqtgk1bxqz91jzo4a1s5  voting_worker.1      dockersamples/examplevotingapp_worker:latest@sha256:3e4ddf59c15f432280a2c0679c4fc5a2ee5a797023c8ef0d3baf7b1385e9fed          node2  Running        Running 32 minutes ago
q7yik0ks1in6kv32gg6y6yjf7  voting_result.1      dockersamples/examplevotingapp_result:before@sha256:83b56996e930c292a6ae5187fda84dd6568a19d97cdb933720be15c757b7463          node1  Running        Running 32 minutes ago
rx5yo0866nfxc58zf4irsss6n  voting_vote.1        dockersamples/examplevotingapp_vote:before@sha256:8e64b182c87de902f2b72321c89b4af4e2b942d76d0b772532ff27ec4c6ebf6            node3  Running        Running 32 minutes ago
tz6j82jnwrx7n2offljp3mn03  voting_db.1          postgres:9.4@sha256:6046af499eae34d2074c0b53f9a8b404716d415e4a03e68bc1d2f8064f2b027                                          node1  Running        Running 32 minutes ago
w48spazhbmxcmbjfi54gs7x90  voting_redis.1       redis:alpine@sha256:9cd405cd1ec1410eaab064a1383d0d8854d1ef74a54e1e4a92fb4ec7bdc3ee7                                          node2  Running        Running 32 minutes ago
6jj1m02freg1n3z9n1evrzsbl  voting_visualizer.1  dockersamples/visualizer:stable@sha256:f924ad66c8e94b10baaf7bdb9cd491ef4e982a1d048a56a17e02bf5945401e5                       node1  Running        Running 32 minutes ago
kqgdmededccbhz2wuc0e9hx7g  voting_vote.2        dockersamples/examplevotingapp_vote:before@sha256:8e64b182c87de902f2b72321c89b4af4e2b942d76d0b772532ff27ec4c6ebf6            node2  Running        Running 32 minutes ago
t72q3z038jehe1wbh9gdum076  voting_redis.2       redis:alpine@sha256:9cd405cd1ec1410eaab064a1383d0d8854d1ef74a54e1e4a92fb4ec7bdc3ee7                                          node3  Running        Running 32 minutes ago

Only display task IDs


The -q or --quiet option only shows IDs of the tasks in the stack. This example outputs all task IDs
of the “voting” stack:
$ docker stack ps -q voting
xim5bcqtgk1b
q7yik0ks1in6
rx5yo0866nfx
tz6j82jnwrx7
w48spazhbmxc
6jj1m02freg1
kqgdmededccb
t72q3z038jeh

This option can be used to perform batch operations. For example, you can use the task IDs as input
for other commands, such as docker inspect. The following example inspects all tasks of the
“voting” stack:
$ docker inspect $(docker stack ps -q voting)

[
{
"ID": "xim5bcqtgk1b1gk0krq1",
"Version": {
(...)
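The plumbing behind this batch pattern is ordinary argument expansion: `-q` emits one ID per line, and command substitution or xargs turns those lines into a single argument list. A minimal sketch, using the task IDs listed above and `echo` as a stand-in for `docker inspect` so it runs without a swarm:

```shell
#!/bin/sh
# `docker stack ps -q` emits one task ID per line; xargs (or $(...) expansion)
# turns the lines into arguments for a single follow-up command. `echo` stands
# in for `docker inspect` here so the sketch is runnable anywhere.
ids='xim5bcqtgk1b
q7yik0ks1in6
rx5yo0866nfx'

printf '%s\n' "$ids" | xargs echo docker inspect
# prints: docker inspect xim5bcqtgk1b q7yik0ks1in6 rx5yo0866nfx
```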

docker stack rm
Estimated reading time: 2 minutes

Description
Remove one or more stacks

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker stack rm [OPTIONS] STACK [STACK...]
Options
Name, shorthand Default Description

--namespace  Kubernetes namespace to use (Kubernetes)

--kubeconfig  Kubernetes config file (Kubernetes)

--orchestrator  Orchestrator to use (swarm|kubernetes|all)

Parent command
Command Description

docker stack Manage Docker stacks

Related commands
Command Description

docker stack deploy Deploy a new stack or update an existing stack

docker stack ls List stacks

docker stack ps List the tasks in the stack

docker stack rm Remove one or more stacks

docker stack services List the services in the stack

Extended description
Remove the stack from the swarm. This command has to be run targeting a manager node.

Examples
Remove a stack
This will remove the stack with the name myapp. Services, networks, and secrets associated with the
stack will be removed.
$ docker stack rm myapp

Removing service myapp_redis


Removing service myapp_web
Removing service myapp_lb
Removing network myapp_default
Removing network myapp_frontend

Remove multiple stacks


This will remove all the specified stacks, myapp and vossibility. Services, networks, and secrets
associated with all the specified stacks will be removed.
$ docker stack rm myapp vossibility

Removing service myapp_redis


Removing service myapp_web
Removing service myapp_lb
Removing network myapp_default
Removing network myapp_frontend
Removing service vossibility_nsqd
Removing service vossibility_logstash
Removing service vossibility_elasticsearch
Removing service vossibility_kibana
Removing service vossibility_ghollector
Removing service vossibility_lookupd
Removing network vossibility_default
Removing network vossibility_vossibility

docker stack services


Estimated reading time: 3 minutes
Description
List the services in the stack

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker stack services [OPTIONS] STACK

Options
Name, shorthand Default Description

--filter , -f  Filter output based on conditions provided

--format  Pretty-print services using a Go template

--namespace  Kubernetes namespace to use (Kubernetes)

--quiet , -q  Only display IDs

--kubeconfig  Kubernetes config file (Kubernetes)

--orchestrator  Orchestrator to use (swarm|kubernetes|all)

Parent command
Command Description

docker stack Manage Docker stacks

Related commands
Command Description

docker stack deploy Deploy a new stack or update an existing stack

docker stack ls List stacks

docker stack ps List the tasks in the stack

docker stack rm Remove one or more stacks

docker stack services List the services in the stack

Extended description
Lists the services that are running as part of the specified stack. This command has to be run
targeting a manager node.

Examples
The following command shows all services in the myapp stack:
$ docker stack services myapp

ID            NAME       REPLICAS  IMAGE                                                                          COMMAND
7be5ei6sqeye  myapp_web  1/1       nginx@sha256:23f809e7fd5952e7d5be065b4d3643fbbceccd349d537b62a123ef2201bc886f
dn7m7nhhfb9y  myapp_db   1/1       mysql@sha256:a9a5b559f8821fe73d58c3606c812d1c044868d42c63817fa5125fd9d8b7b539

Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz"). Multiple filter flags are combined
as an OR filter.
The following command shows both the web and db services:
$ docker stack services --filter name=myapp_web --filter name=myapp_db myapp
ID            NAME       REPLICAS  IMAGE                                                                          COMMAND
7be5ei6sqeye  myapp_web  1/1       nginx@sha256:23f809e7fd5952e7d5be065b4d3643fbbceccd349d537b62a123ef2201bc886f
dn7m7nhhfb9y  myapp_db   1/1       mysql@sha256:a9a5b559f8821fe73d58c3606c812d1c044868d42c63817fa5125fd9d8b7b539

The currently supported filters are:

 id / ID (--filter id=7be5ei6sqeye, or --filter ID=7be5ei6sqeye)
o Swarm: supported
o Kubernetes: not supported
 label (--filter label=key=value)
o Swarm: supported
o Kubernetes: supported
 mode (--filter mode=replicated, or --filter mode=global)
o Swarm: not supported
o Kubernetes: supported
 name (--filter name=myapp_web)
o Swarm: supported
o Kubernetes: supported
 node (--filter node=mynode)
o Swarm: not supported
o Kubernetes: supported
 service (--filter service=web)
o Swarm: not supported
o Kubernetes: supported
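The OR semantics of repeated filters can be pictured as alternation in a pattern match over the service list. A sketch over a captured name list (myapp_cache is a hypothetical third service that the two filters do not select):

```shell
#!/bin/sh
# Each repeated --filter name=... widens the match, like an extra alternative
# in a regular expression. myapp_cache is a hypothetical service that the two
# filters below do NOT select.
services='myapp_web
myapp_db
myapp_cache'

# Rough equivalent of: --filter name=myapp_web --filter name=myapp_db
printf '%s\n' "$services" | grep -E '^(myapp_web|myapp_db)$'
```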

Formatting
The formatting options (--format) pretty-prints services output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.ID Service ID

.Name Service name

.Mode Service mode (replicated, global)

.Replicas Service replicas

.Image Service image


When using the --format option, the stack services command either outputs the data exactly as
the template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the ID, Mode,
and Replicas entries separated by a colon for all services:
$ docker stack services --format "{{.ID}}: {{.Mode}} {{.Replicas}}"

0zmvwuiu3vue: replicated 10/10


fm6uf97exkul: global 5/5

docker start
Estimated reading time: 1 minute

Description
Start one or more stopped containers

Usage
docker start [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--attach , -a Attach STDOUT/STDERR and forward signals

--checkpoint Restore from this checkpoint (experimental (daemon))

--checkpoint-dir Use a custom checkpoint storage directory (experimental (daemon))

--detach-keys Override the key sequence for detaching a container

--interactive , -i Attach container’s STDIN


Parent command
Command Description

docker The base command for the Docker CLI.

Examples
$ docker start my_container

docker stats
Estimated reading time: 6 minutes

Description
Display a live stream of container(s) resource usage statistics

Usage
docker stats [OPTIONS] [CONTAINER...]

Options
Name, shorthand Default Description

--all , -a Show all containers (default shows just running)

--format Pretty-print container statistics using a Go template

--no-stream Disable streaming stats and only pull the first result

--no-trunc Do not truncate output

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker stats command returns a live data stream for running containers. To limit data to one or
more specific containers, specify a list of container names or ids separated by a space. You can
specify a stopped container but stopped containers do not return any data.
If you want more detailed information about a container’s resource usage, use
the /containers/(id)/stats API endpoint.
Note: On Linux, the Docker CLI reports memory usage by subtracting page cache usage from the
total memory usage. The API does not perform such a calculation but rather provides the total
memory usage and the amount from the page cache so that clients can use the data as needed.
Note: The PIDS column contains the number of processes and kernel threads created by that
container. Threads is the term used by Linux kernel. Other equivalent terms are “lightweight
process” or “kernel task”, etc. A large number in the PIDS column combined with a small number of
processes (as reported by ps or top) may indicate that something in the container is creating many
threads.

Examples
Running docker stats on all running containers against a Linux daemon.
$ docker stats

CONTAINER ID  NAME                                   CPU %  MEM USAGE / LIMIT    MEM %  NET I/O      BLOCK I/O    PIDS
b95a83497c91  awesome_brattain                       0.28%  5.629MiB / 1.952GiB  0.28%  916B / 0B    147kB / 0B   9
67b2525d8ad1  foobar                                 0.00%  1.727MiB / 1.952GiB  0.09%  2.48kB / 0B  4.11MB / 0B  2
e5c383697914  test-1951.1.kay7x1lh1twk9c0oig50sd5tr  0.00%  196KiB / 1.952GiB    0.01%  71.2kB / 0B  770kB / 0B   1
4bda148efbc0  random.1.vnc8on831idyr42slu578u3cr     0.00%  1.672MiB / 1.952GiB  0.08%  110kB / 0B   578kB / 0B   2

If you don’t specify a format string using --format, the following columns are shown.
Column name            Description

CONTAINER ID and Name  the ID and name of the container

CPU % and MEM %        the percentage of the host’s CPU and memory the container is using

MEM USAGE / LIMIT      the total memory the container is using, and the total amount of memory it is allowed to use

NET I/O                the amount of data the container has sent and received over its network interface

BLOCK I/O              the amount of data the container has read from and written to block devices on the host

PIDs                   the number of processes or threads the container has created
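Because each column is also available as a template placeholder, a one-shot snapshot is easy to post-process in the shell. The sketch below ranks containers by the MEM % sample values from the output above; with a live daemon you would pipe `docker stats --no-stream --format '{{.Name}} {{.MemPerc}}'` (with the trailing % stripped) instead of the captured sample:

```shell
#!/bin/sh
# Find the container using the largest share of memory from a one-shot stats
# snapshot. The sample reproduces the MEM % column of the example above,
# already stripped of its % sign.
snapshot='awesome_brattain 0.28
foobar 0.09
test-1951.1.kay7x1lh1twk9c0oig50sd5tr 0.01
random.1.vnc8on831idyr42slu578u3cr 0.08'

# Sort numerically on field 2, highest first; head picks the top consumer.
printf '%s\n' "$snapshot" | sort -k2 -rn | head -n 1
# prints: awesome_brattain 0.28
```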

Running docker stats on multiple containers by name and id against a Linux daemon.
$ docker stats awesome_brattain 67b2525d8ad1

CONTAINER ID  NAME              CPU %  MEM USAGE / LIMIT    MEM %  NET I/O      BLOCK I/O    PIDS
b95a83497c91  awesome_brattain  0.28%  5.629MiB / 1.952GiB  0.28%  916B / 0B    147kB / 0B   9
67b2525d8ad1  foobar            0.00%  1.727MiB / 1.952GiB  0.09%  2.48kB / 0B  4.11MB / 0B  2

Running docker stats with customized format on all (Running and Stopped) containers.
$ docker stats --all --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
fervent_panini 5acfcb1b4fd1 drunk_visvesvaraya big_heisenberg

CONTAINER           CPU %  MEM USAGE / LIMIT
fervent_panini      0.00%  56KiB / 15.57GiB
5acfcb1b4fd1        0.07%  32.86MiB / 15.57GiB
drunk_visvesvaraya  0.00%  0B / 0B
big_heisenberg      0.00%  0B / 0B

drunk_visvesvaraya and big_heisenberg are stopped containers in the above example.


Running docker stats on all running containers against a Windows daemon.
PS E:\> docker stats
CONTAINER ID  CPU %  PRIV WORKING SET  NET I/O            BLOCK I/O
09d3bb5b1604  6.61%  38.21 MiB         17.1 kB / 7.73 kB  10.7 MB / 3.57 MB
9db7aa4d986d  9.19%  38.26 MiB         15.2 kB / 7.65 kB  10.6 MB / 3.3 MB
3f214c61ad1d  0.00%  28.64 MiB         64 kB / 6.84 kB    4.42 MB / 6.93 MB

Running docker stats on multiple containers by name and id against a Windows daemon.
PS E:\> docker ps -a
CONTAINER ID  NAME              IMAGE              COMMAND  CREATED        STATUS        PORTS  NAMES
3f214c61ad1d  awesome_brattain  nanoserver         "cmd"    2 minutes ago  Up 2 minutes         big_minsky
9db7aa4d986d  mad_wilson        windowsservercore  "cmd"    2 minutes ago  Up 2 minutes         mad_wilson
09d3bb5b1604  fervent_panini    windowsservercore  "cmd"    2 minutes ago  Up 2 minutes         affectionate_easley

PS E:\> docker stats 3f214c61ad1d mad_wilson


CONTAINER ID  NAME              CPU %  PRIV WORKING SET  NET I/O            BLOCK I/O
3f214c61ad1d  awesome_brattain  0.00%  46.25 MiB         76.3 kB / 7.92 kB  10.3 MB / 14.7 MB
9db7aa4d986d  mad_wilson        9.59%  40.09 MiB         27.6 kB / 8.81 kB  17 MB / 20.1 MB

Formatting
The formatting option (--format) pretty prints container output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.Container Container name or ID (user input)

.Name Container name

.ID Container ID

.CPUPerc CPU percentage

.MemUsage Memory usage

.NetIO Network IO

.BlockIO Block IO

.MemPerc Memory percentage (Not available on Windows)

.PIDs Number of PIDs (Not available on Windows)

When using the --format option, the stats command either outputs the data exactly as the template
declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs
the Container and CPUPerc entries separated by a colon for all containers:
$ docker stats --format "{{.Container}}: {{.CPUPerc}}"

09d3bb5b1604: 6.61%
9db7aa4d986d: 9.19%
3f214c61ad1d: 0.00%

To list all containers statistics with their name, CPU percentage and memory usage in a table format
you can use:

$ docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"

CONTAINER CPU % PRIV WORKING SET


1285939c1fd3 0.07% 796 KiB / 64 MiB
9c76f7834ae2 0.07% 2.746 MiB / 64 MiB
d1ea048f04e4 0.03% 4.583 MiB / 64 MiB

The default format is as follows:

On Linux:
"table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}\t{{.NetIO}}\t{{.BlockIO}}\t{{.PIDs}}"

On Windows:

"table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}"

Note: On Docker 17.09 and older, the {{.Container}} column was used, instead
of {{.ID}}\t{{.Name}}.

docker stop
Estimated reading time: 1 minute

Description
Stop one or more running containers

Usage
docker stop [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--time , -t 10 Seconds to wait for stop before killing it

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The main process inside the container will receive SIGTERM, and after a grace period, SIGKILL.

Examples
$ docker stop my_container

docker swarm
Estimated reading time: 1 minute

Description
Manage Swarm

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker swarm COMMAND

Child commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm

docker swarm unlock Unlock swarm


Command Description

docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage the swarm.

docker swarm ca
Estimated reading time: 4 minutes

Description
Display and rotate the root CA

API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker swarm ca [OPTIONS]

Options
Name, shorthand Default Description

--ca-cert  Path to the PEM-formatted root CA certificate to use for the new cluster

--ca-key  Path to the PEM-formatted root CA key to use for the new cluster

--cert-expiry 2160h0m0s Validity period for node certificates (ns|us|ms|s|m|h)

--detach , -d  Exit immediately instead of waiting for the root rotation to converge

--external-ca  Specifications of one or more certificate signing endpoints

--quiet , -q  Suppress progress output

--rotate  Rotate the swarm CA - if no certificate or key are provided, new ones will be generated

Parent command
Command Description

docker swarm Manage Swarm

Related commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm


Command Description

docker swarm unlock Unlock swarm

docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm

Extended description
View or rotate the current swarm CA certificate. This command must target a manager node.

Examples
Run the docker swarm ca command without any options to view the current root CA certificate in
PEM format.
$ docker swarm ca
-----BEGIN CERTIFICATE-----
MIIBazCCARCgAwIBAgIUJPzo67QC7g8Ebg2ansjkZ8CbmaswCgYIKoZIzj0EAwIw
EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNTAzMTcxMDAwWhcNMzcwNDI4MTcx
MDAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABKL6/C0sihYEb935wVPRA8MqzPLn3jzou0OJRXHsCLcVExigrMdgmLCC+Va4
+sJ+SLVO1eQbvLHH8uuDdF/QOU6jQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
Af8EBTADAQH/MB0GA1UdDgQWBBSfUy5bjUnBAx/B0GkOBKp91XvxzjAKBggqhkjO
PQQDAgNJADBGAiEAnbvh0puOS5R/qvy1PMHY1iksYKh2acsGLtL/jAIvO4ACIQCi
lIwQqLkJ48SQqCjG1DBTSBsHmMSRT+6mE2My+Z3GKA==
-----END CERTIFICATE-----

Pass the --rotate flag (and optionally a --ca-cert, along with a --ca-key or --external-ca
flag) to rotate the current swarm root CA.

$ docker swarm ca --rotate


desired root digest:
sha256:05da740cf2577a25224c53019e2cce99bcc5ba09664ad6bb2a9425d9ebd1b53e
rotated TLS certificates: [=========================>                         ] 1/2 nodes
rotated CA certificates:  [>                                                  ] 0/2 nodes

Once the rotation is finished (all the progress bars have completed) the now-current CA certificate
will be printed:

$ docker swarm ca --rotate


desired root digest:
sha256:05da740cf2577a25224c53019e2cce99bcc5ba09664ad6bb2a9425d9ebd1b53e
rotated TLS certificates: [==================================================>] 2/2 nodes
rotated CA certificates:  [==================================================>] 2/2 nodes
-----BEGIN CERTIFICATE-----
MIIBazCCARCgAwIBAgIUFynG04h5Rrl4lKyA4/E65tYKg8IwCgYIKoZIzj0EAwIw
EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNTE2MDAxMDAwWhcNMzcwNTExMDAx
MDAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABC2DuNrIETP7C7lfiEPk39tWaaU0I2RumUP4fX4+3m+87j0DU0CsemUaaOG6
+PxHhGu2VXQ4c9pctPHgf7vWeVajQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
Af8EBTADAQH/MB0GA1UdDgQWBBSEL02z6mCI3SmMDmITMr12qCRY2jAKBggqhkjO
PQQDAgNJADBGAiEA263Eb52+825EeNQZM0AME+aoH1319Zp9/J5ijILW+6ACIQCg
gyg5u9Iliel99l7SuMhNeLkrU7fXs+Of1nTyyM73ig==
-----END CERTIFICATE-----

--rotate
Root CA Rotation is recommended if one or more of the swarm managers have been compromised,
so that those managers can no longer connect to or be trusted by any other node in the cluster.

Alternately, root CA rotation can be used to give control of the swarm CA to an external CA, or to
take control back from an external CA.

The --rotate flag does not require any parameters to do a rotation, but you can optionally specify a
certificate and key, or a certificate and external CA URL, and those will be used instead of an
automatically-generated certificate/key pair.

Because the root CA key should be kept secret, if provided it will not be visible when viewing any
swarm information via the CLI or API.
The root CA rotation will not be completed until all registered nodes have rotated their TLS
certificates. If the rotation does not complete within a reasonable amount of time, try running docker
node ls --format '{{.ID}} {{.Hostname}} {{.Status}} {{.TLSStatus}}' to see if any nodes are
down or otherwise unable to rotate their TLS certificates.
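That node-status check is easy to narrow down to the stragglers. A minimal sketch over captured output of the docker node ls command suggested above (the two sample rows are hypothetical; with a live swarm you would pipe the command output directly):

```shell
#!/bin/sh
# Filter `docker node ls --format '{{.ID}} {{.Hostname}} {{.Status}} {{.TLSStatus}}'`
# output down to nodes that have not finished rotating. The sample rows below
# are hypothetical stand-ins for real node IDs and hostnames.
nodes='dkp8vy1dq1kx manager2 Ready Ready
dvfxp4zseq4s manager1 Down Needs Rotation'

# Keep only rows whose trailing TLSStatus field is not "Ready".
printf '%s\n' "$nodes" | grep -v ' Ready$'
```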
--detach
Initiate the root CA rotation, but do not wait for the completion of or display the progress of the
rotation.

docker swarm init


Estimated reading time: 8 minutes

Description
Initialize a swarm

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker swarm init [OPTIONS]

Options
Name, shorthand Default Description

--advertise-addr  Advertised address (format: <ip|interface>[:port])

--autolock  Enable manager autolocking (requiring an unlock key to start a stopped manager)

--availability active Availability of the node (“active”|”pause”|”drain”)

--cert-expiry 2160h0m0s Validity period for node certificates (ns|us|ms|s|m|h)

--data-path-addr  API 1.31+ Address or interface to use for data path traffic (format: <ip|interface>)

--data-path-port  API 1.40+ Port number to use for data path traffic (1024 - 49151). If no value is set or is set to 0, the default port (4789) is used.

--default-addr-pool  API 1.39+ Default address pool in CIDR format

--default-addr-pool-mask-length 24 API 1.39+ Default address pool subnet mask length

--dispatcher-heartbeat 5s Dispatcher heartbeat period (ns|us|ms|s|m|h)

--external-ca  Specifications of one or more certificate signing endpoints

--force-new-cluster  Force create a new cluster from current state

--listen-addr 0.0.0.0:2377 Listen address (format: <ip|interface>[:port])

--max-snapshots  API 1.25+ Number of additional Raft snapshots to retain

--snapshot-interval 10000 API 1.25+ Number of log entries between Raft snapshots

--task-history-limit 5 Task history retention limit

Parent command
Command Description

docker swarm Manage Swarm

Related commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm

docker swarm unlock Unlock swarm

docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm

Extended description
Initialize a swarm. The docker engine targeted by this command becomes a manager in the newly
created single-node swarm.

Examples
$ docker swarm init --advertise-addr 192.168.99.121
Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join \
    --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx \
    172.17.0.2:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the
instructions.
docker swarm init generates two random tokens, a worker token and a manager token. When you
join a new node to the swarm, the node joins as a worker or manager node based upon the token
you pass to swarm join.

After you create the swarm, you can display or rotate the token using swarm join-token.

--autolock
This flag enables automatic locking of managers with an encryption key. The private keys and data
stored by all managers will be protected by the encryption key printed in the output, and will not be
accessible without it. Thus, it is very important to store this key in order to activate a manager after it
restarts. The key can be passed to docker swarm unlock to reactivate the manager. Autolock can be
disabled by running docker swarm update --autolock=false. After disabling it, the encryption key is
no longer required to start the manager, and it will start up on its own without user intervention.
--cert-expiry
This flag sets the validity period for node certificates.

--dispatcher-heartbeat
This flag sets the frequency at which nodes are told to report their health.

--external-ca
This flag sets up the swarm to use an external CA to issue node certificates. The value takes the
form protocol=X,url=Y. The value for protocol specifies what protocol should be used to send
signing requests to the external CA. Currently, the only supported value is cfssl. The URL specifies
the endpoint where signing requests should be submitted.
--force-new-cluster
This flag forces an existing node that was part of a quorum that was lost to restart as a single node
Manager without losing its data.

--listen-addr
The node listens for inbound swarm manager traffic on this address. The default is to listen on
0.0.0.0:2377. It is also possible to specify a network interface to listen on that interface’s address; for
example --listen-addr eth0:2377.

Specifying a port is optional. If the value is a bare IP address or interface name, the default port
2377 will be used.

--advertise-addr
This flag specifies the address that will be advertised to other members of the swarm for API access
and overlay networking. If unspecified, Docker will check if the system has a single IP address, and
use that IP address with the listening port (see --listen-addr). If the system has multiple IP
addresses, --advertise-addr must be specified so that the correct address is chosen for inter-
manager communication and overlay networking.
It is also possible to specify a network interface to advertise that interface’s address; for example --
advertise-addr eth0:2377.

Specifying a port is optional. If the value is a bare IP address or interface name, the default port
2377 will be used.

--data-path-addr
This flag specifies the address that global scope network drivers will publish towards other nodes in
order to reach the containers running on this node. Using this parameter it is then possible to
separate the container’s data traffic from the management traffic of the cluster. If unspecified,
Docker will use the same IP address or interface that is used for the advertise address.

--data-path-port
This flag allows you to configure the UDP port number to use for data path traffic. The provided port
number must be within the 1024 - 49151 range. If this flag is not set or is set to 0, the default port
number 4789 is used. The data path port can only be configured when initializing the swarm, and
applies to all nodes that join the swarm. The following example initializes a new Swarm, and
configures the data path port to UDP port 7777:

docker swarm init --data-path-port=7777

After the swarm is initialized, use the docker info command to verify that the port is configured:
docker info
...
ClusterID: 9vs5ygs0gguyyec4iqf2314c0
Managers: 1
Nodes: 1
Data Path Port: 7777
...

--default-addr-pool
This flag specifies default subnet pools for global scope networks. For example: --default-addr-pool 30.30.0.0/16 --default-addr-pool 40.40.0.0/16.
--default-addr-pool-mask-length
This flag specifies the default subnet pool mask length for --default-addr-pool. For example: --default-addr-pool-mask-length 24.
--task-history-limit
This flag sets the task history retention limit.

--max-snapshots
This flag sets the number of old Raft snapshots to retain in addition to the current Raft snapshots. By
default, no old snapshots are retained. This option may be used for debugging, or to store old
snapshots of the swarm state for disaster recovery purposes.

--snapshot-interval
This flag specifies how many log entries to allow in between Raft snapshots. Setting this to a higher
number will trigger snapshots less frequently. Snapshots compact the Raft log and allow for more
efficient transfer of the state to new managers. However, there is a performance cost to taking
snapshots frequently.

--availability
This flag specifies the availability of the node at the time it joins the swarm. Possible
availability values are active, pause, or drain.
This flag is useful in certain situations. For example, a cluster may want dedicated manager
nodes that do not serve as worker nodes. This can be achieved by passing --
availability=drain to docker swarm init.

docker swarm join-token


Estimated reading time: 1 minute

Description
Manage join tokens

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Swarm This command works with the Swarm orchestrator.

Usage
docker swarm join-token [OPTIONS] (worker|manager)

Options
Name, shorthand Default Description

--quiet , -q Only display token

--rotate Rotate join token

Parent command
Command Description

docker swarm Manage Swarm

Related commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm

docker swarm unlock Unlock swarm

docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm


docker swarm join
Estimated reading time: 5 minutes

Description
Join a swarm as a node and/or manager

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker swarm join [OPTIONS] HOST:PORT

Options
Name, shorthand Default Description

--advertise-addr  Advertised address (format: <ip|interface>[:port])

--availability active Availability of the node (“active”|”pause”|”drain”)

--data-path-addr  API 1.31+ Address or interface to use for data path traffic (format: <ip|interface>)

--listen-addr 0.0.0.0:2377 Listen address (format: <ip|interface>[:port])

--token  Token for entry into the swarm

Parent command
Command Description

docker swarm Manage Swarm


Related commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm

docker swarm unlock Unlock swarm

docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm

Extended description
Join a node to a swarm. The node joins as a manager node or worker node based upon the token
you pass with the --token flag. If you pass a manager token, the node joins as a manager. If you
pass a worker token, the node joins as a worker.

Examples
Join a node to swarm as a manager
The example below demonstrates joining a manager node using a manager token.

$ docker swarm join --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2 192.168.99.121:2377

This node joined a swarm as a manager.
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
dkp8vy1dq1kxleu9g4u78tlag * manager2 Ready Active Reachable
dvfxp4zseq4s0rih1selh0d20 manager1 Ready Active Leader

A cluster should only have 3-7 managers at most, because a majority of managers must be available
for the cluster to function. Nodes that aren’t meant to participate in this management quorum should
join as workers instead. Managers should be stable hosts that have static IP addresses.

Join a node to swarm as a worker


The example below demonstrates joining a worker node using a worker token.

$ docker swarm join --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx 192.168.99.121:2377

This node joined a swarm as a worker.

$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
7ln70fl22uw2dvjn2ft53m3q5     worker2    Ready    Active
dkp8vy1dq1kxleu9g4u78tlag     worker1    Ready    Active         Reachable
dvfxp4zseq4s0rih1selh0d20 *   manager1   Ready    Active         Leader

--listen-addr value
If the node is a manager, it will listen for inbound swarm manager traffic on this address. The default
is to listen on 0.0.0.0:2377. It is also possible to specify a network interface to listen on that
interface’s address; for example --listen-addr eth0:2377.

Specifying a port is optional. If the value is a bare IP address, or interface name, the default port
2377 will be used.

This flag is generally not necessary when joining an existing swarm.

--advertise-addr value
This flag specifies the address that will be advertised to other members of the swarm for API access.
If unspecified, Docker will check if the system has a single IP address, and use that IP address with
the listening port (see --listen-addr). If the system has multiple IP addresses, --advertise-addr
must be specified so that the correct address is chosen for inter-manager communication and
overlay networking.

It is also possible to specify a network interface to advertise that interface's address; for example
--advertise-addr eth0:2377.

Specifying a port is optional. If the value is a bare IP address or interface name, the default port
2377 will be used.

This flag is generally not necessary when joining an existing swarm. If you’re joining new nodes
through a load balancer, you should use this flag to ensure the node advertises its IP address and
not the IP address of the load balancer.
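A sketch of the load-balancer case described above. The interface name eth0, the SWARM_TOKEN
variable, and lb.example.com are illustrative assumptions, not values from this page:

```shell
# Hypothetical: nodes reach the swarm through a load balancer, but each
# node must advertise its own address (here, eth0's address), not the
# balancer's. The token is supplied via an environment variable.
docker swarm join \
  --token "$SWARM_TOKEN" \
  --advertise-addr eth0:2377 \
  lb.example.com:2377
```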

--data-path-addr
This flag specifies the address that global scope network drivers will publish towards other nodes in
order to reach the containers running on this node. Using this parameter, it is possible to separate
the container data traffic from the management traffic of the cluster. If unspecified, Docker uses the
same IP address or interface that is used for the advertise address.

--token string
Secret value required for nodes to join the swarm

--availability
This flag specifies the availability of the node at the time it joins the swarm. Possible
availability values are active, pause, or drain.

This flag is useful in certain situations. For example, a cluster may want to have dedicated
manager nodes that do not also serve as worker nodes. This can be achieved by passing
--availability=drain to docker swarm join.
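For example, a dedicated manager could be joined like this. The MANAGER_TOKEN variable is a
placeholder; the address is the one used in the examples above:

```shell
# Hypothetical: join a dedicated manager that should never run service
# tasks. "drain" keeps the scheduler from assigning work to this node.
docker swarm join \
  --token "$MANAGER_TOKEN" \
  --availability drain \
  192.168.99.121:2377
```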

docker swarm leave



Description
Leave the swarm

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker swarm leave [OPTIONS]

Options
Name, shorthand Default Description

--force , -f Force this node to leave the swarm, ignoring warnings

Parent command
Command Description

docker swarm Manage Swarm

Related commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm

docker swarm unlock Unlock swarm

docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm

Extended description
When you run this command on a worker, that worker leaves the swarm.
You can use the --force option on a manager to remove it from the swarm. However, this does not
reconfigure the swarm to ensure that there are enough managers to maintain a quorum. The safe
way to remove a manager from a swarm is to demote it to a worker and then direct it to leave the
swarm without using --force. Only use --force in situations where the swarm will no longer be
used after the manager leaves, such as in a single-node swarm.
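The safe removal sequence might look like this. The node name manager3 is hypothetical:

```shell
# Step 1: from any remaining manager, demote the node out of the Raft quorum.
docker node demote manager3

# Step 2: on manager3 itself, leave cleanly as a worker (no --force needed).
docker swarm leave
```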

Examples
Consider the following swarm, as seen from the manager:

$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
7ln70fl22uw2dvjn2ft53m3q5 worker2 Ready Active
dkp8vy1dq1kxleu9g4u78tlag worker1 Ready Active
dvfxp4zseq4s0rih1selh0d20 * manager1 Ready Active Leader

To remove worker2, issue the following command from worker2 itself:


$ docker swarm leave
Node left the default swarm.

The node will still appear in the node list, marked as down. It no longer affects swarm operation,
but a long list of down nodes can clutter the node list. To remove an inactive node from the list, use
the node rm command.

docker swarm unlock-key



Description
Manage the unlock key

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.


Usage
docker swarm unlock-key [OPTIONS]

Options
Name, shorthand Default Description

--quiet , -q Only display token

--rotate Rotate unlock key

Parent command
Command Description

docker swarm Manage Swarm

Related commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm

docker swarm unlock Unlock swarm

docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm


docker swarm unlock

Description
Unlock swarm

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker swarm unlock

Parent command
Command Description

docker swarm Manage Swarm

Related commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm

docker swarm unlock Unlock swarm



docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm

Extended description
Unlocks a locked manager using a user-supplied unlock key. This command must be used to
reactivate a manager after its Docker daemon restarts if the autolock setting is turned on. The unlock
key is printed at the time when autolock is enabled, and is also available from the docker swarm
unlock-key command.
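A typical recovery sequence, sketched with the commands from this page (run the first command
on a manager that is still unlocked):

```shell
# On a still-unlocked manager: print the current unlock key.
docker swarm unlock-key -q

# On the restarted, locked manager: unlock it, pasting the key at the prompt.
docker swarm unlock
```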

Examples
$ docker swarm unlock
Please enter unlock key:

docker swarm update



Description
Update the swarm

API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Swarm This command works with the Swarm orchestrator.

Usage
docker swarm update [OPTIONS]

Options
Name, shorthand          Default     Description

--autolock                           Change manager autolocking setting (true|false)

--cert-expiry            2160h0m0s   Validity period for node certificates (ns|us|ms|s|m|h)

--dispatcher-heartbeat   5s          Dispatcher heartbeat period (ns|us|ms|s|m|h)

--external-ca                        Specifications of one or more certificate signing endpoints

--max-snapshots                      API 1.25+ Number of additional Raft snapshots to retain

--snapshot-interval      10000       API 1.25+ Number of log entries between Raft snapshots

--task-history-limit     5           Task history retention limit

Parent command
Command Description

docker swarm Manage Swarm

Related commands
Command Description

docker swarm ca Display and rotate the root CA

docker swarm init Initialize a swarm

docker swarm join Join a swarm as a node and/or manager

docker swarm join-token Manage join tokens

docker swarm leave Leave the swarm

docker swarm unlock Unlock swarm



docker swarm unlock-key Manage the unlock key

docker swarm update Update the swarm

Extended description
Updates a swarm with new parameter values. This command must target a manager node.

Examples
$ docker swarm update --cert-expiry 720h

docker system

Description
Manage Docker

Usage
docker system COMMAND

Child commands
Command Description

docker system df Show docker disk usage

docker system events Get real time events from the server

docker system info Display system-wide information

docker system prune Remove unused data


Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
Manage Docker.

docker system df

Description
Show docker disk usage

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker system df [OPTIONS]

Options
Name, shorthand Default Description

--format Pretty-print images using a Go template

--verbose , -v Show detailed information on space usage

Parent command
Command Description

docker system Manage Docker

Related commands
Command Description

docker system df Show docker disk usage

docker system events Get real time events from the server

docker system info Display system-wide information

docker system prune Remove unused data

Extended description
The docker system df command displays information regarding the amount of disk space used by
the docker daemon.

Examples
By default the command will just show a summary of the data used:

$ docker system df

TYPE            TOTAL   ACTIVE   SIZE       RECLAIMABLE
Images          5       2        16.43 MB   11.63 MB (70%)
Containers      2       0        212 B      212 B (100%)
Local Volumes   2       1        36 B       0 B (0%)

A more detailed view can be requested using the -v, --verbose flag:
$ docker system df -v

Images space usage:

REPOSITORY   TAG      IMAGE ID       CREATED         SIZE       SHARED SIZE   UNIQUE SIZE   CONTAINERS
my-curl      latest   b2789dd875bf   6 minutes ago   11 MB      11 MB         5 B           0
my-jq        latest   ae67841be6d0   6 minutes ago   9.623 MB   8.991 MB      632.1 kB      0
<none>       <none>   a0971c4015c1   6 minutes ago   11 MB      11 MB         0 B           0
alpine       latest   4e38e38c8ce0   9 weeks ago     4.799 MB   0 B           4.799 MB      1
alpine       3.3      47cf20d8c26c   9 weeks ago     4.797 MB   4.797 MB      0 B           1

Containers space usage:

CONTAINER ID   IMAGE           COMMAND   LOCAL VOLUMES   SIZE    CREATED          STATUS                      NAMES
4a7f7eebae0f   alpine:latest   "sh"      1               0 B     16 minutes ago   Exited (0) 5 minutes ago    hopeful_yalow
f98f9c2aa1ea   alpine:3.3      "sh"      1               212 B   16 minutes ago   Exited (0) 48 seconds ago   anon-vol

Local Volumes space usage:

NAME                                                               LINKS   SIZE
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e   2       36 B
my-named-vol                                                       0       0 B

 SHARED SIZE is the amount of space that an image shares with another one (i.e. their
common data)
 UNIQUE SIZE is the amount of space that is only used by a given image
 SIZE is the virtual size of the image; it is the sum of SHARED SIZE and UNIQUE SIZE

Note: Network information is not shown, because networks don't consume disk space.
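The SIZE relationship can be checked against the my-jq row in the -v output above: 8.991 MB
shared plus 632.1 kB (0.6321 MB) unique gives the reported 9.623 MB:

```shell
# SIZE = SHARED SIZE + UNIQUE SIZE, using the my-jq row from the
# verbose output (632.1 kB expressed as 0.6321 MB).
awk 'BEGIN { printf "%.3f MB\n", 8.991 + 0.6321 }'
```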
docker system events

Description
Get real time events from the server

Usage
docker system events [OPTIONS]

Options
Name, shorthand Default Description

--filter , -f Filter output based on conditions provided

--format Format the output using the given Go template

--since Show all events created since timestamp

--until Stream events until this timestamp

Parent command
Command Description

docker system Manage Docker

Related commands
Command Description

docker system df Show docker disk usage

docker system events Get real time events from the server

docker system info Display system-wide information

docker system prune Remove unused data

Extended description
Use docker system events to get real-time events from the server. These events differ per Docker
object type.

Object types
CONTAINERS

Docker containers report the following events:

 attach
 commit
 copy
 create
 destroy
 detach
 die
 exec_create
 exec_detach
 exec_start
 export
 health_status
 kill
 oom
 pause
 rename
 resize
 restart
 start
 stop
 top
 unpause
 update

IMAGES

Docker images report the following events:

 delete
 import
 load
 pull
 push
 save
 tag
 untag

PLUGINS

Docker plugins report the following events:

 install
 enable
 disable
 remove

VOLUMES

Docker volumes report the following events:

 create
 mount
 unmount
 destroy

NETWORKS

Docker networks report the following events:

 create
 connect
 disconnect
 destroy

DAEMONS

Docker daemons report the following events:

 reload

Limiting, filtering, and formatting the output


LIMIT EVENTS BY TIME

The --since and --until parameters can be Unix timestamps, date formatted timestamps, or Go
duration strings (e.g. 10m, 1h30m) computed relative to the client machine's time. If you do not
provide the --since option, the command returns only new and/or live events. Supported formats
for date formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the
client will be used if you do not provide either a Z or a +-00:00 timezone offset at the end of the
timestamp. When providing Unix timestamps enter seconds[.nanoseconds], where seconds is the
number of seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting
leap seconds (aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a
second no more than nine digits long.
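For example, a Unix timestamp for --since can be computed on the client with date (GNU
coreutils assumed; BSD/macOS date uses different flags):

```shell
# Seconds since the Unix epoch, 10 minutes ago (GNU date).
since=$(date -d '10 minutes ago' +%s)

# The resulting command line; equivalently, --since 10m does the same thing.
echo "docker system events --since $since --until $(date +%s)"
```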

FILTERING

The filtering flag (-f or --filter) format is “key=value”. If you would like to use multiple filters,
pass multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

Using the same filter multiple times is handled as an OR; for example, --filter
container=588a23dac085 --filter container=a8f7720b8c22 displays events for container
588a23dac085 OR container a8f7720b8c22.

Using multiple filters is handled as an AND; for example, --filter container=588a23dac085
--filter event=start displays events for container 588a23dac085 AND the event type start.

The currently supported filters are:

 container (container=<name or id>)


 daemon (daemon=<name or id>)
 event (event=<event action>)
 image (image=<tag or id>)
 label (label=<key> or label=<key>=<value>)
 network (network=<name or id>)
 plugin (plugin=<name or id>)
 type (type=<container or image or volume or network or daemon or plugin> )
 volume (volume=<name or id>)

FORMAT

If a format (--format) is specified, the given template will be executed instead of the default format.
Go’s text/template package describes all the details of the format.
If a format is set to {{json .}}, the events are streamed as valid JSON Lines. For information about
JSON Lines, please refer to http://jsonlines.org/ .
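Because each event is a self-contained JSON object on its own line, the stream can be processed
with ordinary line-oriented tools. A small sketch on canned input (the two sample events below are
made up for illustration):

```shell
# Two fabricated event lines, counted with a line-oriented filter;
# in practice the input would come from:
#   docker system events --format '{{json .}}'
printf '%s\n' \
  '{"Type":"container","Action":"start","Actor":{"ID":"0fdb"}}' \
  '{"Type":"network","Action":"connect","Actor":{"ID":"e2e1"}}' |
  grep -c '"Action"'
```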

Examples
Basic example
You’ll need two shells for this example.

Shell 1: Listening for events:

$ docker system events

Shell 2: Start and Stop containers:

$ docker create --name test alpine:latest top
$ docker start test
$ docker stop test

Shell 1: (Again .. now showing events):

2017-01-05T00:35:58.859401177+08:00 container create 0fdb48addc82871eb34eb23a847cfd033dedd1a0a37bef2e6d9eb3870fc7ff37 (image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1f5ceda09d4300f3a846f0acfaa9a8bb0d89e775eb744c5acecd60e0529e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest, name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143, image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest, name=test)

To exit the docker system events command, use CTRL+C.

Filter events by time


You can filter the output by an absolute timestamp or relative time on the host machine, using the
following different time syntaxes:

$ docker system events --since 1483283804

2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70 (image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest, name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143, image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest, name=test)

$ docker system events --since '2017-01-05'

2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70 (image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest, name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143, image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest, name=test)

$ docker system events --since '2013-09-03T15:49:29'

2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70 (image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest, name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143, image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest, name=test)

$ docker system events --since '10m'

2017-01-05T00:35:41.241772953+08:00 volume create testVol (driver=local)
2017-01-05T00:35:58.859401177+08:00 container create d9cd...4d70 (image=alpine:latest, name=test)
2017-01-05T00:36:04.703631903+08:00 network connect e2e1...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:04.795031609+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:36:09.830268747+08:00 container kill 0fdb...ff37 (image=alpine:latest, name=test, signal=15)
2017-01-05T00:36:09.840186338+08:00 container die 0fdb...ff37 (exitCode=143, image=alpine:latest, name=test)
2017-01-05T00:36:09.880113663+08:00 network disconnect e2e...29e2 (container=0fdb...ff37, name=bridge, type=bridge)
2017-01-05T00:36:09.890214053+08:00 container stop 0fdb...ff37 (image=alpine:latest, name=test)

Filter events by criteria


The following commands show several different ways to filter the docker event output.
$ docker system events --filter 'event=stop'

2017-01-05T00:40:22.880175420+08:00 container stop 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:41:17.888104182+08:00 container stop 2a8f...4e78 (image=alpine, name=kickass_brattain)

$ docker system events --filter 'image=alpine'

2017-01-05T00:41:55.784240236+08:00 container create d9cd...4d70 (image=alpine, name=happy_meitner)
2017-01-05T00:41:55.913156783+08:00 container start d9cd...4d70 (image=alpine, name=happy_meitner)
2017-01-05T00:42:01.106875249+08:00 container kill d9cd...4d70 (image=alpine, name=happy_meitner, signal=15)
2017-01-05T00:42:11.111934041+08:00 container kill d9cd...4d70 (image=alpine, name=happy_meitner, signal=9)
2017-01-05T00:42:11.119578204+08:00 container die d9cd...4d70 (exitCode=137, image=alpine, name=happy_meitner)
2017-01-05T00:42:11.173276611+08:00 container stop d9cd...4d70 (image=alpine, name=happy_meitner)

$ docker system events --filter 'container=test'

2017-01-05T00:43:00.139719934+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:43:09.259951086+08:00 container kill 0fdb...ff37 (image=alpine:latest, name=test, signal=15)
2017-01-05T00:43:09.270102715+08:00 container die 0fdb...ff37 (exitCode=143, image=alpine:latest, name=test)
2017-01-05T00:43:09.312556440+08:00 container stop 0fdb...ff37 (image=alpine:latest, name=test)

$ docker system events --filter 'container=test' --filter 'container=d9cdb1525ea8'

2017-01-05T00:44:11.517071981+08:00 container start 0fdb...ff37 (image=alpine:latest, name=test)
2017-01-05T00:44:17.685870901+08:00 container start d9cd...4d70 (image=alpine, name=happy_meitner)
2017-01-05T00:44:29.757658470+08:00 container kill 0fdb...ff37 (image=alpine:latest, name=test, signal=9)
2017-01-05T00:44:29.767718510+08:00 container die 0fdb...ff37 (exitCode=137, image=alpine:latest, name=test)
2017-01-05T00:44:29.815798344+08:00 container destroy 0fdb...ff37 (image=alpine:latest, name=test)

$ docker system events --filter 'container=test' --filter 'event=stop'

2017-01-05T00:46:13.664099505+08:00 container stop a9d1...e130 (image=alpine, name=test)

$ docker system events --filter 'type=volume'

2015-12-23T21:05:28.136212689Z volume create test-event-volume-local (driver=local)
2015-12-23T21:05:28.383462717Z volume mount test-event-volume-local (read/write=true, container=562f...5025, destination=/foo, driver=local, propagation=rprivate)
2015-12-23T21:05:28.650314265Z volume unmount test-event-volume-local (container=562f...5025, driver=local)
2015-12-23T21:05:28.716218405Z volume destroy test-event-volume-local (driver=local)

$ docker system events --filter 'type=network'

2015-12-23T21:38:24.705709133Z network create 8b11...2c5b (name=test-event-network-local, type=bridge)
2015-12-23T21:38:25.119625123Z network connect 8b11...2c5b (name=test-event-network-local, container=b4be...c54e, type=bridge)

$ docker system events --filter 'container=container_1' --filter 'container=container_2'

2014-09-03T15:49:29.999999999Z07:00 container die 4386fb97867d (image=ubuntu-1:14.04)
2014-05-10T17:42:14.999999999Z07:00 container stop 4386fb97867d (image=ubuntu-1:14.04)
2014-05-10T17:42:14.999999999Z07:00 container die 7805c1d35632 (image=redis:2.8)
2014-09-03T15:49:29.999999999Z07:00 container stop 7805c1d35632 (image=redis:2.8)


$ docker system events --filter 'type=plugin'

2016-07-25T17:30:14.825557616Z plugin pull ec7b87f2ce84330fe076e666f17dfc049d2d7ae0b8190763de94e1f2d105993f (name=tiborvass/sample-volume-plugin:latest)
2016-07-25T17:30:14.888127370Z plugin enable ec7b87f2ce84330fe076e666f17dfc049d2d7ae0b8190763de94e1f2d105993f (name=tiborvass/sample-volume-plugin:latest)

Format the output


$ docker system events --filter 'type=container' --format 'Type={{.Type}}  Status={{.Status}}  ID={{.ID}}'

Type=container  Status=create   ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container  Status=attach   ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container  Status=start    ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container  Status=resize   ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container  Status=die      ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container  Status=destroy  ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26

FORMAT AS JSON
$ docker system events --format '{{json .}}'

{"status":"create","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
{"status":"attach","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
{"Type":"network","Action":"connect","Actor":{"ID":"1b50a5bf755f6021dfa78e..
{"status":"start","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f42..
{"status":"resize","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..

docker system info



Description
Display system-wide information

Usage
docker system info [OPTIONS]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

Parent command
Command Description

docker system Manage Docker

Related commands
Command Description

docker system df Show docker disk usage

docker system events Get real time events from the server

docker system info Display system-wide information

docker system prune Remove unused data

docker system prune



Description
Remove unused data

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker system prune [OPTIONS]

Options
Name, shorthand   Default   Description

--all , -a                  Remove all unused images, not just dangling ones

--filter                    API 1.28+ Provide filter values (e.g. 'label=<key>=<value>')

--force , -f                Do not prompt for confirmation

--volumes                   Prune volumes


Parent command
Command Description

docker system Manage Docker

Related commands
Command Description

docker system df Show docker disk usage

docker system events Get real time events from the server

docker system info Display system-wide information

docker system prune Remove unused data

Extended description
Remove all unused containers, networks, images (both dangling and unreferenced), and optionally,
volumes.

Examples
$ docker system prune

WARNING! This will remove:


- all stopped containers
- all networks not used by at least one container
- all dangling images
- all build cache
Are you sure you want to continue? [y/N] y

Deleted Containers:
f44f9b81948b3919590d5f79a680d8378f1139b41952e219830a33027c80c867
792776e68ac9d75bce4092bc1b5cc17b779bc926ab04f4185aec9bf1c0d4641f

Deleted Networks:
network1
network2

Deleted Images:
untagged: hello-
world@sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f
deleted: sha256:1815c82652c03bfd8644afda26fb184f2ed891d921b20a0703b46768f9755c57
deleted: sha256:45761469c965421a92a69cc50e92c01e0cfa94fe026cdd1233445ea00e96289a

Total reclaimed space: 1.84kB

By default, volumes are not removed to prevent important data from being deleted if there is
currently no container using the volume. Use the --volumes flag when running the command to
prune volumes as well:
$ docker system prune -a --volumes

WARNING! This will remove:


- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
- all images without at least one container associated to them
- all build cache
Are you sure you want to continue? [y/N] y

Deleted Containers:
0998aa37185a1a7036b0e12cf1ac1b6442dcfa30a5c9650a42ed5010046f195b
73958bfb884fa81fa4cc6baf61055667e940ea2357b4036acbbe25a60f442a4d

Deleted Networks:
my-network-a
my-network-b
Deleted Volumes:
named-vol

Deleted Images:
untagged: my-curl:latest
deleted: sha256:7d88582121f2a29031d92017754d62a0d1a215c97e8f0106c586546e7404447d
deleted: sha256:dd14a93d83593d4024152f85d7c63f76aaa4e73e228377ba1d130ef5149f4d8b
untagged: alpine:3.3
deleted: sha256:695f3d04125db3266d4ab7bbb3c6b23aa4293923e762aa2562c54f49a28f009f
untagged: alpine:latest
deleted: sha256:ee4603260daafe1a8c2f3b78fd760922918ab2441cbb2853ed5c439e59c52f96
deleted: sha256:9007f5987db353ec398a223bc5a135c5a9601798ba20a1abba537ea2f8ac765f
deleted: sha256:71fa90c8f04769c9721459d5aa0936db640b92c8c91c9b589b54abd412d120ab
deleted: sha256:bb1c3357b3c30ece26e6604aea7d2ec0ace4166ff34c3616701279c22444c0f3
untagged: my-jq:latest
deleted: sha256:6e66d724542af9bc4c4abf4a909791d7260b6d0110d8e220708b09e4ee1322e1
deleted: sha256:07b3fa89d4b17009eb3988dfc592c7d30ab3ba52d2007832dffcf6d40e3eda7f
deleted: sha256:3a88a5c81eb5c283e72db2dbc6d65cbfd8e80b6c89bb6e714cfaaa0eed99c548

Total reclaimed space: 13.5 MB

Note: The --volumes option was added in Docker 17.06.1. Older versions of Docker prune volumes
by default, along with other Docker objects. On older versions, run docker container prune, docker
network prune, and docker image prune separately to remove unused containers, networks, and
images, without removing volumes.

Filtering
The filtering flag (--filter) format is “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 until (<timestamp>) - only remove containers, images, and networks created before given
timestamp
 label (label=<key>, label=<key>=<value>, label!=<key>, or label!=<key>=<value>) - only
remove containers, images, networks, and volumes with (or without, in case label!=... is
used) the specified labels.

The until filter can be Unix timestamps, date formatted timestamps, or Go duration strings
(e.g. 10m, 1h30m) computed relative to the daemon machine's time. Supported formats for date
formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the
daemon will be used if you do not provide either a Z or a +-00:00 timezone offset at the end of the
timestamp. When providing Unix timestamps enter seconds[.nanoseconds], where seconds is the
number of seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting
leap seconds (aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a
second no more than nine digits long.
The label filter accepts two formats. One is the label=... (label=<key> or label=<key>=<value>),
which removes containers, images, networks, and volumes with the specified labels. The other
format is the label!=... (label!=<key> or label!=<key>=<value>), which removes containers,
images, networks, and volumes without the specified labels.
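For example, a sketch combining both filter types (the label name deprecated is hypothetical):

```shell
# Prune only objects carrying the label "deprecated", and only those
# created more than 24 hours ago, skipping the confirmation prompt.
docker system prune --force \
  --filter 'label=deprecated' \
  --filter 'until=24h'
```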

docker tag

Description
Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Usage
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

Parent command
Command Description

docker The base command for the Docker CLI.


Extended description
An image name is made up of slash-separated name components, optionally prefixed by a registry
hostname. The hostname must comply with standard DNS rules, but may not contain underscores. If
a hostname is present, it may optionally be followed by a port number in the format :8080. If not
present, the command uses Docker’s public registry located at registry-1.docker.io by default.
Name components may contain lowercase letters, digits and separators. A separator is defined as a
period, one or two underscores, or one or more dashes. A name component may not start or end
with a separator.

A tag name must be valid ASCII and may contain lowercase and uppercase letters, digits,
underscores, periods and dashes. A tag name may not start with a period or a dash and may contain
a maximum of 128 characters.
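The tag rules above can be approximated by a regular expression for quick local validation; this is a sketch mirroring the prose rules, not an official Docker check:

```shell
# Valid tag: ASCII letters, digits, underscores, periods and dashes;
# no leading period or dash; at most 128 characters in total.
is_valid_tag() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_][A-Za-z0-9_.-]{0,127}$'
}

is_valid_tag "version1.0" && echo "version1.0: valid"
is_valid_tag ".hidden" || echo ".hidden: invalid (leading period)"
```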

You can group your images together using names and tags, and then upload them to Docker Hub to
share images via repositories.

Examples
Tag an image referenced by ID
To tag a local image with ID “0e5574283393” into the “fedora” repository with “version1.0”:

$ docker tag 0e5574283393 fedora/httpd:version1.0

Tag an image referenced by Name


To tag a local image with name “httpd” into the “fedora” repository with “version1.0”:

$ docker tag httpd fedora/httpd:version1.0

Note that since the tag name is not specified, the alias is created for an existing local
version httpd:latest.

Tag an image referenced by Name and Tag


To tag a local image with name “httpd” and tag “test” into the “fedora” repository with
“version1.0.test”:
$ docker tag httpd:test fedora/httpd:version1.0.test

Tag an image for a private repository


To push an image to a private registry and not the central Docker registry you must tag it with the
registry hostname and port (if needed).

$ docker tag 0e5574283393 myregistryhost:5000/fedora/httpd:version1.0
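The registry host in the name is what routes a later docker push to the private registry rather than Docker Hub. A sketch building the reference once and reusing it (the host and repository are the illustrative values from above):

```shell
registry="myregistryhost:5000"
image="${registry}/fedora/httpd:version1.0"
# With a docker daemon available, the pair of commands would be:
#
#   docker tag 0e5574283393 "$image"
#   docker push "$image"
#
echo "$image"
```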

docker template
Estimated reading time: 1 minute

Description
Use templates to quickly create new services

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Child commands
Command Description

docker template config Modify docker template configuration

docker template inspect Inspect service templates or application templates

docker template list List available templates with their information

docker template scaffold Choose an application template or service template(s) and scaffold a new project

docker template version Print version information

Parent command
Command Description

docker The base command for the Docker CLI.

docker template config


Estimated reading time: 2 minutes

Description
Modify docker template configuration

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Child commands
Command Description

docker template config set set default values for docker template

docker template config view view default values for docker template

Parent command
Command Description

docker template Use templates to quickly create new services

Related commands
Command Description

docker template config Modify docker template configuration

docker template inspect Inspect service templates or application templates

docker template list List available templates with their information

docker template scaffold Choose an application template or service template(s) and scaffold a new project

docker template version Print version information

docker template config set
Estimated reading time: 2 minutes

Description
set default values for docker template

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker template config set

Options
Name, shorthand Default Description

--feedback Send anonymous feedback about usage (performance, failure status, os, version)

--no-feedback Don’t send anonymous feedback

--org Set default organization / docker hub user

--server Set default registry server (host[:port])

Parent command
Command Description

docker template config Modify docker template configuration

Related commands
Command Description

docker template config set set default values for docker template

docker template config view view default values for docker template

docker template config view


Estimated reading time: 1 minute

Description
view default values for docker template

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker template config view

Options
Name, shorthand Default Description

--format yaml Configure the output format (json|yaml)

Parent command
Command Description

docker template config Modify docker template configuration

Related commands
Command Description

docker template config set set default values for docker template

docker template config view view default values for docker template

docker template inspect


Estimated reading time: 2 minutes
Description
Inspect service templates or application templates

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker template inspect <service or application>

Options
Name, shorthand Default Description

--format pretty Configure the output format (pretty|json|yaml)

Parent command
Command Description

docker template Use templates to quickly create new services


Related commands
Command Description

docker template config Modify docker template configuration

docker template inspect Inspect service templates or application templates

docker template list List available templates with their information

docker template scaffold Choose an application template or service template(s) and scaffold a new project

docker template version Print version information

docker template list


Estimated reading time: 2 minutes

Description
List available templates with their information

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker template list

Options
Name, shorthand Default Description

--format pretty Configure the output format (pretty|json|yaml)

--type all Filter by type (application|service|all)

Parent command
Command Description

docker template Use templates to quickly create new services

Related commands
Command Description

docker template config Modify docker template configuration

docker template inspect Inspect service templates or application templates

docker template list List available templates with their information

docker template scaffold Choose an application template or service template(s) and scaffold a new project

docker template version Print version information
docker template scaffold
Estimated reading time: 2 minutes

Description
Choose an application template or service template(s) and scaffold a new project

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker template scaffold application [<alias=service>...] OR scaffold [alias=]service
[<[alias=]service>...]

Options
Name, shorthand Default Description

--name Application name

--org Deploy to a specific organization / docker hub user (if not specified, it will use your current hub login)

--path Deploy to a specific path

--platform linux Target platform (linux|windows)

--server Deploy to a specific registry server (host[:port])

--set , -s Override parameters values (service.name=value)

Parent command
Command Description

docker template Use templates to quickly create new services

Related commands
Command Description

docker template config Modify docker template configuration

docker template inspect Inspect service templates or application templates

docker template list List available templates with their information

docker template scaffold Choose an application template or service template(s) and scaffold a new project

docker template version Print version information

Examples
$ docker template scaffold react-java-mysql -s back.java=10 -s front.externalPort=80
$ docker template scaffold react-java-mysql java=back reactjs=front -s reactjs.externalPort=80
$ docker template scaffold back=spring front=react -s back.externalPort=9000
$ docker template scaffold react-java-mysql --server=myregistry:5000 --org=myorg

docker template version


Estimated reading time: 1 minute

Description
Print version information

This command is experimental.

This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json file and
set experimental to enabled.

Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.

To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.

To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.

Usage
docker template version

Parent command
Command Description

docker template Use templates to quickly create new services

Related commands
Command Description

docker template config Modify docker template configuration

docker template inspect Inspect service templates or application templates

docker template list List available templates with their information

docker template scaffold Choose an application template or service template(s) and scaffold a new project

docker template version Print version information

docker top
Estimated reading time: 1 minute

Description
Display the running processes of a container

Usage
docker top CONTAINER [ps OPTIONS]
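Everything after the container name is handed to ps on the host, so ordinary ps options work. A sketch (the container name web is illustrative):

```shell
# Passing BSD-style ps options through docker top:
#
#   docker top web aux
#
# The same options behave identically when run against the host's ps:
ps aux | head -n 3
```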

Parent command
Command Description

docker The base command for the Docker CLI.

docker trust
Estimated reading time: 1 minute

Description
Manage trust on Docker images

Usage
docker trust COMMAND

Child commands
Command Description

docker trust inspect Return low-level information about keys and signatures

docker trust key Manage keys for signing Docker images

docker trust revoke Remove trust for an image

docker trust sign Sign an image

docker trust signer Manage entities who can sign Docker images

Parent command
Command Description

docker The base command for the Docker CLI.

docker trust inspect


Estimated reading time: 9 minutes

Description
Return low-level information about keys and signatures

Usage
docker trust inspect IMAGE[:TAG] [IMAGE[:TAG]...]

Options
Name, shorthand Default Description

--pretty Print the information in a human friendly format

Parent command
Command Description

docker trust Manage trust on Docker images

Related commands
Command Description

docker trust inspect Return low-level information about keys and signatures

docker trust key Manage keys for signing Docker images

docker trust revoke Remove trust for an image

docker trust sign Sign an image

docker trust signer Manage entities who can sign Docker images

Extended description
docker trust inspect provides low-level JSON information on signed repositories. This includes all
image tags that are signed, who signed them, and who can sign new tags.
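Since the output is plain JSON, it can be post-processed with standard tools. A sketch extracting just the signed tag names; jq is an assumption here (it is not part of docker trust), and the dependency-free variant below runs against a stand-in for real output:

```shell
# With jq installed:
#
#   docker trust inspect alpine:latest | jq -r '.[].SignedTags[].SignedTag'
#
# Dependency-free equivalent, demonstrated on a minimal sample document
# (real docker output pretty-prints with spaces; jq handles either form):
sample='[{"Name":"alpine:latest","SignedTags":[{"SignedTag":"latest"}]}]'
printf '%s' "$sample" | grep -o '"SignedTag":"[^"]*"' | cut -d'"' -f4
```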

Examples
Get low-level details about signatures for a single image tag
Use docker trust inspect to get trust information about an image. The following example prints
trust information for the alpine:latest image:
$ docker trust inspect alpine:latest
[
{
"Name": "alpine:latest",
"SignedTags": [
{
"SignedTag": "latest",
"Digest": "d6bfc3baf615dc9618209a8d607ba2a8103d9c8a405b3bd8741d88b4bef36478",
"Signers": [
"Repo Admin"
]
}
],
"Signers": [],
"AdministrativeKeys": [
{
"Name": "Repository",
"Keys": [
{
"ID":
"5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd"
}
]
},
{
"Name": "Root",
"Keys": [
{
"ID":
"a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce"
}
]
}
]
}
]

The SignedTags key will list the SignedTag name, its Digest, and the Signers responsible for the
signature.
AdministrativeKeys will list the Repository and Root keys.
If signers are set up for the repository via other docker trust commands, docker trust
inspect includes a Signers key:

$ docker trust inspect my-image:purple


[
{
"Name": "my-image:purple",
"SignedTags": [
{
"SignedTag": "purple",
"Digest": "941d3dba358621ce3c41ef67b47cf80f701ff80cdf46b5cc86587eaebfe45557",
"Signers": [
"alice",
"bob",
"carol"
]
}
],
"Signers": [
{
"Name": "alice",
"Keys": [
{
"ID":
"04dd031411ed671ae1e12f47ddc8646d98f135090b01e54c3561e843084484a3"
},
{
"ID":
"6a11e4898a4014d400332ab0e096308c844584ff70943cdd1d6628d577f45fd8"
}
]
},
{
"Name": "bob",
"Keys": [
{
"ID":
"433e245c656ae9733cdcc504bfa560f90950104442c4528c9616daa45824ccba"
}
]
},
{
"Name": "carol",
"Keys": [
{
"ID":
"d32fa8b5ca08273a2880f455fcb318da3dc80aeae1a30610815140deef8f30d9"
},
{
"ID":
"9a8bbec6ba2af88a5fad6047d428d17e6d05dbdd03d15b4fc8a9a0e8049cd606"
}
]
}
],
"AdministrativeKeys": [
{
"Name": "Repository",
"Keys": [
{
"ID":
"27df2c8187e7543345c2e0bf3a1262e0bc63a72754e9a7395eac3f747ec23a44"
}
]
},
{
"Name": "Root",
"Keys": [
{
"ID":
"40b66ccc8b176be8c7d365a17f3e046d1c3494e053dd57cfeacfe2e19c4f8e8f"
}
]
}
]
}
]

If the image tag is unsigned or unavailable, docker trust inspect does not display any signed tags.
$ docker trust inspect unsigned-img
No signatures or cannot access unsigned-img

However, if other tags are signed in the same image repository, docker trust inspect reports
relevant key information:
$ docker trust inspect alpine:unsigned
[
{
"Name": "alpine:unsigned",
"Signers": [],
"AdministrativeKeys": [
{
"Name": "Repository",
"Keys": [
{
"ID":
"5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd"
}
]
},
{
"Name": "Root",
"Keys": [
{
"ID":
"a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce"
}
]
}
]
}
]

Get details about signatures for all image tags in a repository


If no tag is specified, docker trust inspect will report details for all signed tags in the repository:
$ docker trust inspect alpine
[
{
"Name": "alpine",
"SignedTags": [
{
"SignedTag": "3.5",
"Digest":
"b007a354427e1880de9cdba533e8e57382b7f2853a68a478a17d447b302c219c",
"Signers": [
"Repo Admin"
]
},
{
"SignedTag": "3.6",
"Digest":
"d6bfc3baf615dc9618209a8d607ba2a8103d9c8a405b3bd8741d88b4bef36478",
"Signers": [
"Repo Admin"
]
},
{
"SignedTag": "edge",
"Digest":
"23e7d843e63a3eee29b6b8cfcd10e23dd1ef28f47251a985606a31040bf8e096",
"Signers": [
"Repo Admin"
]
},
{
"SignedTag": "latest",
"Digest":
"d6bfc3baf615dc9618209a8d607ba2a8103d9c8a405b3bd8741d88b4bef36478",
"Signers": [
"Repo Admin"
]
}
],
"Signers": [],
"AdministrativeKeys": [
{
"Name": "Repository",
"Keys": [
{
"ID":
"5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd"
}
]
},
{
"Name": "Root",
"Keys": [
{
"ID":
"a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce"
}
]
}
]
}
]

Get details about signatures for multiple images


docker trust inspect can take multiple repositories and images as arguments, and reports the
results in an ordered list:
$ docker trust inspect alpine notary
[
{
"Name": "alpine",
"SignedTags": [
{
"SignedTag": "3.5",
"Digest":
"b007a354427e1880de9cdba533e8e57382b7f2853a68a478a17d447b302c219c",
"Signers": [
"Repo Admin"
]
},
{
"SignedTag": "3.6",
"Digest":
"d6bfc3baf615dc9618209a8d607ba2a8103d9c8a405b3bd8741d88b4bef36478",
"Signers": [
"Repo Admin"
]
},
{
"SignedTag": "edge",
"Digest":
"23e7d843e63a3eee29b6b8cfcd10e23dd1ef28f47251a985606a31040bf8e096",
"Signers": [
"Repo Admin"
]
},
{
"SignedTag": "integ-test-base",
"Digest":
"3952dc48dcc4136ccdde37fbef7e250346538a55a0366e3fccc683336377e372",
"Signers": [
"Repo Admin"
]
},
{
"SignedTag": "latest",
"Digest":
"d6bfc3baf615dc9618209a8d607ba2a8103d9c8a405b3bd8741d88b4bef36478",
"Signers": [
"Repo Admin"
]
}
],
"Signers": [],
"AdministrativeKeys": [
{
"Name": "Repository",
"Keys": [
{
"ID":
"5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd"
}
]
},
{
"Name": "Root",
"Keys": [
{
"ID":
"a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce"
}
]
}
]
},
{
"Name": "notary",
"SignedTags": [
{
"SignedTag": "server",
"Digest":
"71f64ab718a3331dee103bc5afc6bc492914738ce37c2d2f127a8133714ecf5c",
"Signers": [
"Repo Admin"
]
},
{
"SignedTag": "signer",
"Digest":
"a6122d79b1e74f70b5dd933b18a6d1f99329a4728011079f06b245205f158fe8",
"Signers": [
"Repo Admin"
]
}
],
"Signers": [],
"AdministrativeKeys": [
{
"Name": "Root",
"Keys": [
{
"ID":
"8cdcdef5bd039f4ab5a029126951b5985eebf57cabdcdc4d21f5b3be8bb4ce92"
}
]
},
{
"Name": "Repository",
"Keys": [
{
"ID":
"85bfd031017722f950d480a721f845a2944db26a3dc084040a70f1b0d9bbb3df"
}
]
}
]
}
]

Formatting
You can print the inspect output in a human-readable format instead of the default JSON output
by using the --pretty option:

Get details about signatures for a single image tag


$ docker trust inspect --pretty alpine:latest

SIGNED TAG   DIGEST                                                             SIGNERS
latest       1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe   (Repo Admin)

Administrative keys for alpine:latest:


Repository Key: 5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd
Root Key: a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce

The SIGNED TAG is the signed image tag with a unique content-addressable DIGEST. SIGNERS lists all
entities who have signed.

The administrative keys listed specify the root key of trust, as well as the administrative repository
key. These keys are responsible for modifying signers, and rotating keys for the signed repository.

If signers are set up for the repository via other docker trust commands, docker trust inspect --
pretty displays them as SIGNERS and specifies their KEYS:

$ docker trust inspect --pretty my-image:purple


SIGNED TAG   DIGEST                                                             SIGNERS
purple       941d3dba358621ce3c41ef67b47cf80f701ff80cdf46b5cc86587eaebfe45557   alice, bob, carol

List of signers and their keys:

SIGNER KEYS
alice 47caae5b3e61, a85aab9d20a4
bob 034370bcbd77, 82a66673242c
carol b6f9f8e1aab0

Administrative keys for my-image:


Repository Key: 27df2c8187e7543345c2e0bf3a1262e0bc63a72754e9a7395eac3f747ec23a44
Root Key: 40b66ccc8b176be8c7d365a17f3e046d1c3494e053dd57cfeacfe2e19c4f8e8f
However, if other tags are signed in the same image repository, docker trust inspect reports
relevant key information.
$ docker trust inspect --pretty alpine:unsigned

No signatures for alpine:unsigned

Administrative keys for alpine:unsigned:


Repository Key: 5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd
Root Key: a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce

Get details about signatures for all image tags in a repository


$ docker trust inspect --pretty alpine
SIGNED TAG   DIGEST                                                             SIGNERS
2.6          9ace551613070689a12857d62c30ef0daa9a376107ec0fff0e34786cedb3399b   (Repo Admin)
2.7          9f08005dff552038f0ad2f46b8e65ff3d25641747d3912e3ea8da6785046561a   (Repo Admin)
3.1          d9477888b78e8c6392e0be8b2e73f8c67e2894ff9d4b8e467d1488fcceec21c8   (Repo Admin)
3.2          19826d59171c2eb7e90ce52bfd822993bef6a6fe3ae6bb4a49f8c1d0a01e99c7   (Repo Admin)
3.3          8fd4b76819e1e5baac82bd0a3d03abfe3906e034cc5ee32100d12aaaf3956dc7   (Repo Admin)
3.4          833ad81ace8277324f3ca8c91c02bdcf1d13988d8ecf8a3f97ecdd69d0390ce9   (Repo Admin)
3.5          af2a5bd2f8de8fc1ecabf1c76611cdc6a5f1ada1a2bdd7d3816e121b70300308   (Repo Admin)
3.6          1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe   (Repo Admin)
edge         79d50d15bd7ea48ea00cf3dd343b0e740c1afaa8e899bee475236ef338e1b53b   (Repo Admin)
latest       1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe   (Repo Admin)

Administrative keys for alpine:


Repository Key: 5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd
Root Key: a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce

Here’s an example with signers that are set up by docker trust commands:
$ docker trust inspect --pretty my-image
SIGNED TAG   DIGEST                                                             SIGNERS
red          852cc04935f930a857b630edc4ed6131e91b22073bcc216698842e44f64d2943   alice
blue         f1c38dbaeeb473c36716f6494d803fbfbe9d8a76916f7c0093f227821e378197   alice, bob
green        cae8fedc840f90c8057e1c24637d11865743ab1e61a972c1c9da06ec2de9a139   alice, bob
yellow       9cc65fc3126790e683d1b92f307a71f48f75fa7dd47a7b03145a123eaf0b45ba   carol
purple       941d3dba358621ce3c41ef67b47cf80f701ff80cdf46b5cc86587eaebfe45557   alice, bob, carol
orange       d6c271baa6d271bcc24ef1cbd65abf39123c17d2e83455bdab545a1a9093fc1c   alice

List of signers and their keys for my-image:

SIGNER KEYS
alice 47caae5b3e61, a85aab9d20a4
bob 034370bcbd77, 82a66673242c
carol b6f9f8e1aab0

Administrative keys for my-image:


Repository Key: 27df2c8187e7543345c2e0bf3a1262e0bc63a72754e9a7395eac3f747ec23a44
Root Key: 40b66ccc8b176be8c7d365a17f3e046d1c3494e053dd57cfeacfe2e19c4f8e8f

docker trust key


Estimated reading time: 1 minute

Description
Manage keys for signing Docker images

Usage
docker trust key COMMAND

Child commands
Command Description

docker trust key generate Generate and load a signing key-pair

docker trust key load Load a private key file for signing

Parent command
Command Description

docker trust Manage trust on Docker images

Related commands
Command Description

docker trust inspect Return low-level information about keys and signatures

docker trust key Manage keys for signing Docker images

docker trust revoke Remove trust for an image

docker trust sign Sign an image

docker trust signer Manage entities who can sign Docker images

docker trust key generate


Estimated reading time: 1 minute
Description
Generate and load a signing key-pair

Usage
docker trust key generate NAME

Options
Name, shorthand Default Description

--dir Directory to generate key in, defaults to current directory
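A typical invocation (the signer name alice and the directory ./keys are illustrative): the command prompts for a passphrase, writes the public key into the target directory, and loads the private key into the local trust store.

```shell
# Interactive (passphrase prompt), so shown rather than executed:
#
#   mkdir -p ./keys
#   docker trust key generate alice --dir ./keys
#   # expected result: ./keys/alice.pub plus an encrypted private key
#   # under ~/.docker/trust/private
#
# Signer names are lowercase; a quick local sanity check:
name="alice"
printf '%s' "$name" | grep -Eq '^[a-z0-9][a-z0-9_.-]*$' && echo "ok: ${name}"
```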

Parent command
Command Description

docker trust key Manage keys for signing Docker images

Related commands
Command Description

docker trust key generate Generate and load a signing key-pair

docker trust key load Load a private key file for signing

docker trust key load


Estimated reading time: 1 minute

Description
Load a private key file for signing
Usage
docker trust key load [OPTIONS] KEYFILE

Options
Name, shorthand Default Description

--name signer Name for the loaded key

Parent command
Command Description

docker trust key Manage keys for signing Docker images

Related commands
Command Description

docker trust key generate Generate and load a signing key-pair

docker trust key load Load a private key file for signing

docker trust revoke


Estimated reading time: 3 minutes

Description
Remove trust for an image

Usage
docker trust revoke [OPTIONS] IMAGE[:TAG]
Options
Name, shorthand Default Description

--yes , -y Do not prompt for confirmation

Parent command
Command Description

docker trust Manage trust on Docker images

Related commands
Command Description

docker trust inspect Return low-level information about keys and signatures

docker trust key Manage keys for signing Docker images

docker trust revoke Remove trust for an image

docker trust sign Sign an image

docker trust signer Manage entities who can sign Docker images

Extended description
docker trust revoke removes signatures from tags in signed repositories.

Examples
Revoke signatures from a signed tag
Here’s an example of a repo with two signed tags:

$ docker trust view example/trust-demo


SIGNED TAG   DIGEST                                                             SIGNERS
red          852cc04935f930a857b630edc4ed6131e91b22073bcc216698842e44f64d2943   alice
blue         f1c38dbaeeb473c36716f6494d803fbfbe9d8a76916f7c0093f227821e378197   alice, bob

List of signers and their keys for example/trust-demo:

SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2

Administrative keys for example/trust-demo:


Repository Key: ecc457614c9fc399da523a5f4e24fe306a0a6ee1cc79a10e4555b3c6ab02f71e
Root Key: 3cb2228f6561e58f46dbc4cda4fcaff9d5ef22e865a94636f82450d1d2234949

When alice, one of the signers, runs docker trust revoke:


$ docker trust revoke example/trust-demo:red
Enter passphrase for delegation key with ID 27d42a8:
Successfully deleted signature for example/trust-demo:red

After revocation, the tag is removed from the list of released tags:

$ docker trust view example/trust-demo


SIGNED TAG   DIGEST                                                             SIGNERS
blue         f1c38dbaeeb473c36716f6494d803fbfbe9d8a76916f7c0093f227821e378197   alice, bob

List of signers and their keys for example/trust-demo:

SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2
Administrative keys for example/trust-demo:
Repository Key: ecc457614c9fc399da523a5f4e24fe306a0a6ee1cc79a10e4555b3c6ab02f71e
Root Key: 3cb2228f6561e58f46dbc4cda4fcaff9d5ef22e865a94636f82450d1d2234949

Revoke signatures on all tags in a repository


When no tag is specified, docker trust revoke removes all signatures that you have a signing key for.
$ docker trust view example/trust-demo
SIGNED TAG   DIGEST                                                             SIGNERS
red          852cc04935f930a857b630edc4ed6131e91b22073bcc216698842e44f64d2943   alice
blue         f1c38dbaeeb473c36716f6494d803fbfbe9d8a76916f7c0093f227821e378197   alice, bob

List of signers and their keys for example/trust-demo:

SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2

Administrative keys for example/trust-demo:


Repository Key: ecc457614c9fc399da523a5f4e24fe306a0a6ee1cc79a10e4555b3c6ab02f71e
Root Key: 3cb2228f6561e58f46dbc4cda4fcaff9d5ef22e865a94636f82450d1d2234949

When alice, one of the signers, runs docker trust revoke:


$ docker trust revoke example/trust-demo
Please confirm you would like to delete all signature data for example/trust-demo?
[y/N] y
Enter passphrase for delegation key with ID 27d42a8:
Successfully deleted signature for example/trust-demo

All tags that have alice’s signature on them are removed from the list of released tags:
$ docker trust view example/trust-demo

No signatures for example/trust-demo


List of signers and their keys for example/trust-demo:

SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2

Administrative keys for example/trust-demo:


Repository Key: ecc457614c9fc399da523a5f4e24fe306a0a6ee1cc79a10e4555b3c6ab02f71e
Root Key: 3cb2228f6561e58f46dbc4cda4fcaff9d5ef22e865a94636f82450d1d2234949

docker trust sign


Estimated reading time: 3 minutes

Description
Sign an image

Usage
docker trust sign IMAGE:TAG

Options
Name, shorthand Default Description

--local Sign a locally tagged image

Parent command
Command Description

docker trust Manage trust on Docker images

Related commands
Command Description

docker trust inspect Return low-level information about keys and signatures

docker trust key Manage keys for signing Docker images

docker trust revoke Remove trust for an image

docker trust sign Sign an image

docker trust signer Manage entities who can sign Docker images

Extended description
docker trust sign adds signatures to tags to create signed repositories.

Examples
Sign a tag as a repo admin
Given an image:

$ docker trust view example/trust-demo


SIGNED TAG DIGEST
SIGNERS
v1 c24134c079c35e698060beabe110bb83ab285d0d978de7d92fed2c8c83570a41
(Repo Admin)

Administrative keys for example/trust-demo:


Repository Key: 36d4c3601102fa7c5712a343c03b94469e5835fb27c191b529c06fd19c14a942
Root Key: 246d360f7c53a9021ee7d4259e3c5692f3f1f7ad4737b1ea8c7b8da741ad980b

Sign a new tag with docker trust sign:


$ docker trust sign example/trust-demo:v2
Signing and pushing trust metadata for example/trust-demo:v2
The push refers to a repository [docker.io/example/trust-demo]
eed4e566104a: Layer already exists
77edfb6d1e3c: Layer already exists
c69f806905c2: Layer already exists
582f327616f1: Layer already exists
a3fbb648f0bd: Layer already exists
5eac2de68a97: Layer already exists
8d4d1ab5ff74: Layer already exists
v2: digest: sha256:8f6f460abf0436922df7eb06d28b3cdf733d2cac1a185456c26debbff0839c56 size: 1787
Signing and pushing trust metadata
Enter passphrase for repository key with ID 36d4c36:
Successfully signed docker.io/example/trust-demo:v2

docker trust view lists the new signature:

$ docker trust view example/trust-demo


SIGNED TAG DIGEST
SIGNERS
v1 c24134c079c35e698060beabe110bb83ab285d0d978de7d92fed2c8c83570a41
(Repo Admin)
v2 8f6f460abf0436922df7eb06d28b3cdf733d2cac1a185456c26debbff0839c56
(Repo Admin)

Administrative keys for example/trust-demo:


Repository Key: 36d4c3601102fa7c5712a343c03b94469e5835fb27c191b529c06fd19c14a942
Root Key: 246d360f7c53a9021ee7d4259e3c5692f3f1f7ad4737b1ea8c7b8da741ad980b

Sign a tag as a signer


Given an image:

$ docker trust view example/trust-demo

No signatures for example/trust-demo


List of signers and their keys for example/trust-demo:

SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2

Administrative keys for example/trust-demo:


Repository Key: ecc457614c9fc399da523a5f4e24fe306a0a6ee1cc79a10e4555b3c6ab02f71e
Root Key: 3cb2228f6561e58f46dbc4cda4fcaff9d5ef22e865a94636f82450d1d2234949

Sign a new tag with docker trust sign:


$ docker trust sign example/trust-demo:v1
Signing and pushing trust metadata for example/trust-demo:v1
The push refers to a repository [docker.io/example/trust-demo]
26b126eb8632: Layer already exists
220d34b5f6c9: Layer already exists
8a5132998025: Layer already exists
aca233ed29c3: Layer already exists
e5d2f035d7a4: Layer already exists
v1: digest: sha256:74d4bfa917d55d53c7df3d2ab20a8d926874d61c3da5ef6de15dd2654fc467c4 size: 1357
Signing and pushing trust metadata
Enter passphrase for delegation key with ID 27d42a8:
Successfully signed docker.io/example/trust-demo:v1

docker trust view lists the new signature:

$ docker trust view example/trust-demo


SIGNED TAG DIGEST
SIGNERS
v1 74d4bfa917d55d53c7df3d2ab20a8d926874d61c3da5ef6de15dd2654fc467c4
alice

List of signers and their keys for example/trust-demo:


SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2

Administrative keys for example/trust-demo:


Repository Key: ecc457614c9fc399da523a5f4e24fe306a0a6ee1cc79a10e4555b3c6ab02f71e
Root Key: 3cb2228f6561e58f46dbc4cda4fcaff9d5ef22e865a94636f82450d1d2234949

docker trust signer


Estimated reading time: 1 minute

Description
Manage entities who can sign Docker images

Usage
docker trust signer COMMAND

Child commands
Command Description

docker trust signer add Add a signer

docker trust signer remove Remove a signer

Parent command
Command Description

docker trust Manage trust on Docker images


Related commands
Command Description

docker trust inspect Return low-level information about keys and signatures

docker trust key Manage keys for signing Docker images

docker trust revoke Remove trust for an image

docker trust sign Sign an image

docker trust signer Manage entities who can sign Docker images

docker trust signer add


Estimated reading time: 1 minute

Description
Add a signer

Usage
docker trust signer add OPTIONS NAME REPOSITORY [REPOSITORY...]

Options
Name, shorthand Default Description

--key Path to the signer’s public key file

Parent command
Command Description

docker trust signer Manage entities who can sign Docker images
Related commands
Command Description

docker trust signer add Add a signer

docker trust signer remove Remove a signer
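
Example
The command takes a signer name, the path to the signer’s public key, and one or more repositories. A minimal sketch is shown below; the key file name, signer name, and repository are placeholders, and the output shown is illustrative:

```
$ docker trust signer add --key alice.pub alice example/trust-demo
Adding signer "alice" to example/trust-demo...
Successfully added signer: alice to example/trust-demo
```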

docker trust signer remove


Estimated reading time: 1 minute

Description
Remove a signer

Usage
docker trust signer remove [OPTIONS] NAME REPOSITORY [REPOSITORY...]

Options
Name, shorthand Default Description

--force , -f Do not prompt for confirmation before removing the most recent signer

Parent command
Command Description

docker trust signer Manage entities who can sign Docker images

Related commands
Command Description

docker trust signer add Add a signer

docker trust signer remove Remove a signer
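
Example
A minimal sketch of removing a signer follows; the signer name and repository are placeholders, and the output shown is illustrative:

```
$ docker trust signer remove alice example/trust-demo
Removing signer "alice" from image example/trust-demo...
Successfully removed alice from example/trust-demo
```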

docker unpause
Estimated reading time: 1 minute

Description
Unpause all processes within one or more containers

Usage
docker unpause CONTAINER [CONTAINER...]

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker unpause command un-suspends all processes in the specified containers. On Linux, it
does this using the cgroups freezer.

See the cgroups freezer documentation for further details.

Examples
$ docker unpause my_container
my_container
docker update
Estimated reading time: 4 minutes

Description
Update configuration of one or more containers

Usage
docker update [OPTIONS] CONTAINER [CONTAINER...]

Options
Name, shorthand Default Description

--blkio-weight Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)

--cpu-period Limit CPU CFS (Completely Fair Scheduler) period

--cpu-quota Limit CPU CFS (Completely Fair Scheduler) quota

API 1.25+
--cpu-rt-period
Limit the CPU real-time period in microseconds

API 1.25+
--cpu-rt-runtime
Limit the CPU real-time runtime in microseconds

--cpu-shares , -c CPU shares (relative weight)

API 1.29+
--cpus
Number of CPUs

--cpuset-cpus CPUs in which to allow execution (0-3, 0,1)

--cpuset-mems MEMs in which to allow execution (0-3, 0,1)

--kernel-memory Kernel memory limit

--memory , -m Memory limit



--memory-reservation Memory soft limit

--memory-swap Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap

API 1.40+
--pids-limit
Tune container pids limit (set -1 for unlimited)

--restart Restart policy to apply when a container exits

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
The docker update command dynamically updates container configuration. You can use this
command to prevent containers from consuming too many resources from their Docker host. With a
single command, you can place limits on a single container or on many. To specify more than one
container, provide a space-separated list of container names or IDs.
With the exception of the --kernel-memory option, you can specify these options on a running or a
stopped container. On kernel versions older than 4.6, you can only update --kernel-memory on a
stopped container or on a running container with kernel memory initialized.
Warning: The docker update and docker container update commands are not supported for
Windows containers.

Examples
The following sections illustrate ways to use this command.

Update a container’s cpu-shares


To limit a container’s cpu-shares to 512, first identify the container name or ID. You can use docker
ps to find these values. You can also use the ID returned from the docker run command. Then, do
the following:
$ docker update --cpu-shares 512 abebf7571666

Update a container with cpu-shares and memory


To update multiple resource configurations for multiple containers:

$ docker update --cpu-shares 512 -m 300M abebf7571666 hopeful_morse

Update a container’s kernel memory constraints


You can update a container’s kernel memory limit using the --kernel-memory option. On kernel
versions older than 4.6, this option can be updated on a running container only if the container was
started with --kernel-memory. If the container was started without --kernel-memory, you need to
stop the container before updating kernel memory.

For example, if you started a container with this command:

$ docker run -dit --name test --kernel-memory 50M ubuntu bash

You can update kernel memory while the container is running:

$ docker update --kernel-memory 80M test

If you started a container without kernel memory initialized:

$ docker run -dit --name test2 --memory 300M ubuntu bash

Updating kernel memory of the running container test2 will fail; you need to stop the container
before updating the --kernel-memory setting. The next time you start it, the container uses the new
value. Kernel versions 4.6 and newer do not have this limitation, so you can use --kernel-memory
the same way as the other options.

Update a container’s restart policy


You can change a container’s restart policy on a running container. The new restart policy takes
effect instantly after you run docker update on a container.
To update restart policy for one or more containers:

$ docker update --restart=on-failure:3 abebf7571666 hopeful_morse

Note that if the container is started with the --rm flag, you cannot update its restart policy.
AutoRemove and RestartPolicy are mutually exclusive for a container.
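
A sketch of what this conflict looks like in practice follows; the container name is a placeholder, and the exact error text may vary by Docker version:

```
$ docker run -d --rm --name temp busybox sleep 300
$ docker update --restart=always temp
Error response from daemon: Restart policy cannot be updated because AutoRemove is enabled for the container
```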

docker version
Estimated reading time: 1 minute

Description
Show the Docker version information

Usage
docker version [OPTIONS]

Options
Name, shorthand Default Description

--format , -f Format the output using the given Go template

Kubernetes
--kubeconfig
Kubernetes config file

Parent command
Command Description

docker The base command for the Docker CLI.

Extended description
By default, this renders all version information in an easy-to-read layout. If a format is specified,
the given template is executed instead.

Go’s text/template package describes all the details of the format.

Examples
Default output
$ docker version

Client:
Version: 1.8.0
API version: 1.20
Go version: go1.4.2
Git commit: f5bae0a
Built: Tue Jun 23 17:56:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.0
API version: 1.20
Go version: go1.4.2
Git commit: f5bae0a
Built: Tue Jun 23 17:56:00 UTC 2015
OS/Arch: linux/amd64

Get the server version


$ docker version --format '{{.Server.Version}}'

1.8.0

Dump raw JSON data


$ docker version --format '{{json .}}'

{"Client":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f5bae0a","GoVersion":"go1.4.2","Os":"linux","Arch":"amd64","BuildTime":"Tue Jun 23 17:56:00 UTC 2015"},"ServerOK":true,"Server":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f5bae0a","GoVersion":"go1.4.2","Os":"linux","Arch":"amd64","KernelVersion":"3.13.2-gentoo","BuildTime":"Tue Jun 23 17:56:00 UTC 2015"}}

docker volume create


Estimated reading time: 4 minutes

Description
Create a volume

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker volume create [OPTIONS] [VOLUME]

Options
Name, shorthand Default Description

--driver , -d local Specify volume driver name

--label Set metadata for a volume

--name Specify volume name

--opt , -o Set driver specific options

Parent command
Command Description

docker volume Manage volumes

Related commands
Command Description

docker volume create Create a volume

docker volume inspect Display detailed information on one or more volumes

docker volume ls List volumes

docker volume prune Remove all unused local volumes

docker volume rm Remove one or more volumes

Extended description
Creates a new volume that containers can consume and store data in. If a name is not specified,
Docker generates a random name.

Examples
Create a volume and then configure the container to use it:

$ docker volume create hello

hello

$ docker run -d -v hello:/world busybox ls /world

The mount is created inside the container’s /world directory. Docker does not support relative paths
for mount points inside the container.
Multiple containers can use the same volume at the same time. This is useful if two containers need
access to shared data: for example, if one container writes and the other reads the data.

Volume names must be unique among drivers. This means you cannot use the same volume name
with two different drivers. If you attempt this, Docker returns an error:
A volume named "hello" already exists with the "some-other" driver. Choose a
different volume name.

If you specify a volume name already in use on the current driver, Docker assumes you want to
reuse the existing volume and does not return an error.
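
For example, creating a volume whose name is already in use on the local driver simply returns the existing name (output shown is illustrative):

```
$ docker volume create hello
hello

$ docker volume create hello
hello
```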

Driver-specific options
Some volume drivers may take options to customize the volume creation. Use the -o or --opt flags
to pass driver options:
$ docker volume create --driver fake \
--opt tardis=blue \
--opt timey=wimey \
foo

These options are passed directly to the volume driver. Options for different volume drivers may do
different things (or nothing at all).

The built-in local driver on Windows does not support any options.
The built-in local driver on Linux accepts options similar to the Linux mount command. You can
provide multiple options by passing the --opt flag multiple times. Some mount options (such as
the o option) can take a comma-separated list of options. The complete list of available mount
options can be found in the mount(8) man page.
For example, the following creates a tmpfs volume called foo with a size of 100 MB and a uid of
1000:
$ docker volume create --driver local \
--opt type=tmpfs \
--opt device=tmpfs \
--opt o=size=100m,uid=1000 \
foo
Another example that uses btrfs:
$ docker volume create --driver local \
--opt type=btrfs \
--opt device=/dev/sda2 \
foo

Another example that uses nfs to mount /path/to/dir in rw mode from 192.168.1.1:
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo

docker volume inspect


Estimated reading time: 1 minute

Description
Display detailed information on one or more volumes

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker volume inspect [OPTIONS] VOLUME [VOLUME...]

Options

Name, shorthand Default Description

--format , -f Format the output using the given Go template


Parent command
Command Description

docker volume Manage volumes

Related commands

Command Description

docker volume create Create a volume

docker volume inspect Display detailed information on one or more volumes

docker volume ls List volumes

docker volume prune Remove all unused local volumes

docker volume rm Remove one or more volumes

Extended description
Returns information about a volume. By default, this command renders all results in a JSON array.
You can specify an alternate format to execute a given template for each result.
Go’stext/template package describes all the details of the format.

Examples
$ docker volume create
85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d
$ docker volume inspect
85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d
[
{
"Name": "85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d/_data",
"Status": null
}
]

$ docker volume inspect --format '{{ .Mountpoint }}' 85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d

/var/lib/docker/volumes/85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d/_data

docker volume ls
Estimated reading time: 5 minutes

Description
List volumes

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker volume ls [OPTIONS]

Options
Name, shorthand Default Description

--filter , -f Provide filter values (e.g. ‘dangling=true’)

--format Pretty-print volumes using a Go template

--quiet , -q Only display volume names

Parent command
Command Description

docker volume Manage volumes

Related commands
Command Description

docker volume create Create a volume

docker volume inspect Display detailed information on one or more volumes

docker volume ls List volumes

docker volume prune Remove all unused local volumes

docker volume rm Remove one or more volumes

Extended description
List all the volumes known to Docker. You can filter using the -f or --filter flag. Refer to
the filtering section for more information about available filter options.
Examples
Create a volume
$ docker volume create rosemary

rosemary

$ docker volume create tyler

tyler

$ docker volume ls

DRIVER VOLUME NAME


local rosemary
local tyler

Filtering
The filtering flag (-f or --filter) format is "key=value". If there is more than one filter, pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").

The currently supported filters are:

 dangling (boolean - true or false, 0 or 1)


 driver (a volume driver’s name)
 label (label=<key> or label=<key>=<value>)
 name (a volume’s name)

DANGLING

The dangling filter matches all volumes not referenced by any container:
$ docker run -d -v tyler:/tmpwork busybox

f86a7dd02898067079c99ceacd810149060a70528eff3754d0b0f1a93bd0af18
$ docker volume ls -f dangling=true
DRIVER VOLUME NAME
local rosemary

DRIVER

The driver filter matches volumes based on their driver.


The following example matches volumes that are created with the local driver:
$ docker volume ls -f driver=local

DRIVER VOLUME NAME


local rosemary
local tyler

LABEL

The label filter matches volumes based on the presence of a label alone, or on a label and a value.

First, let’s create some volumes to illustrate this:

$ docker volume create the-doctor --label is-timelord=yes

the-doctor
$ docker volume create daleks --label is-timelord=no

daleks

The following example filter matches volumes with the is-timelord label regardless of its value.
$ docker volume ls --filter label=is-timelord

DRIVER VOLUME NAME


local daleks
local the-doctor

As the above example demonstrates, both volumes with is-timelord=yes and is-timelord=no are
returned.
Filtering on both the key and value of the label produces the expected result:
$ docker volume ls --filter label=is-timelord=yes
DRIVER VOLUME NAME
local the-doctor

Specifying multiple label filters produces an “and” search; all conditions must be met:

$ docker volume ls --filter label=is-timelord=yes --filter label=is-timelord=no

DRIVER VOLUME NAME

NAME

The name filter matches on all or part of a volume’s name.


The following filter matches all volumes with a name containing the rose string.
$ docker volume ls -f name=rose

DRIVER VOLUME NAME


local rosemary

Formatting
The formatting options (--format) pretty-prints volumes output using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.Name Volume name

.Driver Volume driver

.Scope Volume scope (local, global)

.Mountpoint The mount point of the volume on the host

.Labels All labels assigned to the volume

.Label Value of a specific label for this volume. For example {{.Label "project.version"}}
When using the --format option, the volume ls command either outputs the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the Name and Driver entries
separated by a colon for all volumes:
$ docker volume ls --format "{{.Name}}: {{.Driver}}"

vol1: local
vol2: local
vol3: local

docker volume prune


Estimated reading time: 1 minute

Description
Remove all unused local volumes

API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker volume prune [OPTIONS]

Options
Name, shorthand Default Description

--filter Provide filter values (e.g. ‘label=’)

--force , -f Do not prompt for confirmation

Parent command
Command Description

docker volume Manage volumes

Related commands
Command Description

docker volume create Create a volume

docker volume inspect Display detailed information on one or more volumes

docker volume ls List volumes

docker volume prune Remove all unused local volumes

docker volume rm Remove one or more volumes

Extended description
Remove all unused local volumes. Unused local volumes are those that are not referenced by any
containers.

Examples
$ docker volume prune

WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e
my-named-vol

Total reclaimed space: 36 B
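
You can limit which volumes are pruned with a label filter. As a sketch (the label name keep is a placeholder), the following removes only unused volumes that do not carry that label:

```
$ docker volume prune --filter "label!=keep"
```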

docker volume rm
Estimated reading time: 1 minute

Description
Remove one or more volumes

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.

Usage
docker volume rm [OPTIONS] VOLUME [VOLUME...]

Options
Name, shorthand Default Description

API 1.25+
--force , -f
Force the removal of one or more volumes

Parent command
Command Description

docker volume Manage volumes

Related commands
Command Description

docker volume create Create a volume

docker volume inspect Display detailed information on one or more volumes

docker volume ls List volumes

docker volume prune Remove all unused local volumes

docker volume rm Remove one or more volumes


Extended description
Remove one or more volumes. You cannot remove a volume that is in use by a container.

Examples
$ docker volume rm hello
hello

docker wait
Estimated reading time: 1 minute

Description
Block until one or more containers stop, then print their exit codes

Usage
docker wait CONTAINER [CONTAINER...]

Parent command
Command Description

docker The base command for the Docker CLI.

Examples
Start a container in the background.

$ docker run -dit --name=my_container ubuntu bash

Run docker wait, which should block until the container exits.
$ docker wait my_container
In another terminal, stop the first container. The docker wait command above returns the exit code.
$ docker stop my_container

This is the same docker wait command from above, but it now exits, returning 0.
$ docker wait my_container

Daemon CLI (dockerd)


Usage: dockerd COMMAND

A self-sufficient runtime for containers.

Options:
--add-runtime runtime Register an additional OCI compatible
runtime (default [])
--allow-nondistributable-artifacts list Push nondistributable artifacts to
specified registries (default [])
--api-cors-header string Set CORS headers in the Engine API
--authorization-plugin list Authorization plugins to load (default
[])
--bip string Specify network bridge IP
-b, --bridge string Attach containers to a network bridge
--cgroup-parent string Set parent cgroup for all containers
--cluster-advertise string Address or interface name to advertise
--cluster-store string URL of the distributed storage backend
--cluster-store-opt map Set cluster store options (default
map[])
--config-file string Daemon configuration file (default
"/etc/docker/daemon.json")
--containerd string Path to containerd socket
--cpu-rt-period int Limit the CPU real-time period in
microseconds
--cpu-rt-runtime int Limit the CPU real-time runtime in
microseconds
--data-root string Root directory of persistent Docker
state (default "/var/lib/docker")
-D, --debug Enable debug mode
--default-gateway ip Container default gateway IPv4 address
--default-gateway-v6 ip Container default gateway IPv6 address
--default-address-pool Set the default address pool for local
node networks
--default-runtime string Default OCI runtime for containers
(default "runc")
--default-ulimit ulimit Default ulimits for containers (default
[])
--dns list DNS server to use (default [])
--dns-opt list DNS options to use (default [])
--dns-search list DNS search domains to use (default [])
--exec-opt list Runtime execution options (default [])
--exec-root string Root directory for execution state
files (default "/var/run/docker")
--experimental Enable experimental features
--fixed-cidr string IPv4 subnet for fixed IPs
--fixed-cidr-v6 string IPv6 subnet for fixed IPs
-G, --group string Group for the unix socket (default
"docker")
--help Print usage
-H, --host list Daemon socket(s) to connect to (default
[])
--icc Enable inter-container communication
(default true)
--init Run an init in the container to forward
signals and reap processes
--init-path string Path to the docker-init binary
--insecure-registry list Enable insecure registry communication
(default [])
--ip ip Default IP when binding container ports
(default 0.0.0.0)
--ip-forward Enable net.ipv4.ip_forward (default
true)
--ip-masq Enable IP masquerading (default true)
--iptables Enable addition of iptables rules
(default true)
--ipv6 Enable IPv6 networking
--label list Set key=value labels to the daemon
(default [])
--live-restore Enable live restore of docker when
containers are still running
--log-driver string Default driver for container logs
(default "json-file")
-l, --log-level string Set the logging level ("debug", "info",
"warn", "error", "fatal") (default "info")
--log-opt map Default log driver options for
containers (default map[])
--max-concurrent-downloads int Set the max concurrent downloads for
each pull (default 3)
--max-concurrent-uploads int Set the max concurrent uploads for each
push (default 5)
--metrics-addr string Set default address and port to serve
the metrics api on
--mtu int Set the containers network MTU
--node-generic-resources list Advertise user-defined resource
--no-new-privileges Set no-new-privileges by default for
new containers
--oom-score-adjust int Set the oom_score_adj for the daemon
(default -500)
-p, --pidfile string Path to use for daemon PID file
(default "/var/run/docker.pid")
--raw-logs Full timestamps without ANSI coloring
--registry-mirror list Preferred Docker registry mirror
(default [])
--seccomp-profile string Path to seccomp profile
--selinux-enabled Enable selinux support
--shutdown-timeout int Set the default shutdown timeout
(default 15)
-s, --storage-driver string Storage driver to use
--storage-opt list Storage driver options (default [])
--swarm-default-advertise-addr string Set default address or interface for
swarm advertised address
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA
(default "~/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default
"~/.docker/cert.pem")
--tlskey string Path to TLS key file (default
~/.docker/key.pem")
--tlsverify Use TLS and verify the remote
--userland-proxy Use userland proxy for loopback traffic
(default true)
--userland-proxy-path string Path to the userland proxy binary
--userns-remap string User/Group setting for user namespaces
-v, --version Print version information and quit

Options with [] may be specified multiple times.

Description
dockerd is the persistent process that manages containers. Docker uses different binaries for the
daemon and client. To run the daemon, type dockerd.
To run the daemon with debug output, use dockerd -D or add "debug": true to the daemon.json file.
Note: In Docker 1.13 and higher, enable experimental features by starting dockerd with the --
experimental flag or adding "experimental": true to the daemon.json file. In earlier Docker versions,
a different build was required to enable experimental features.
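
For example, a minimal /etc/docker/daemon.json enabling both debug output and experimental features could look like this (both keys are documented daemon configuration options):

```json
{
  "debug": true,
  "experimental": true
}
```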

Examples
Daemon socket option
The Docker daemon can listen for Docker Engine API requests via three different types of
socket: unix, tcp, and fd.
By default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock, requiring
either root permission or docker group membership.
If you need to access the Docker daemon remotely, you need to enable the tcp socket. Beware that
the default setup provides un-encrypted and un-authenticated direct access to the Docker daemon,
and should be secured either using the built-in HTTPS encrypted socket or by putting a secure web
proxy in front of it. You can listen on port 2375 on all network interfaces with -H tcp://0.0.0.0:2375,
or on a particular network interface using its IP address: -H tcp://192.168.59.103:2375. It is
conventional to use port 2375 for un-encrypted, and port 2376 for encrypted, communication with the
daemon.
Note: If you’re using an HTTPS encrypted socket, keep in mind that only TLS 1.0 and greater are
supported. Protocols SSLv3 and under are no longer supported for security reasons.
On systemd-based systems, you can communicate with the daemon via systemd socket activation
by using dockerd -H fd://. Using fd:// will work for most setups, but you can also specify
individual sockets: dockerd -H fd://3. If the specified socket-activated files aren’t found, Docker
will exit. You can find examples of using systemd socket activation with Docker and systemd in the
Docker source tree.
You can configure the Docker daemon to listen to multiple sockets at the same time using multiple -
H options:

# listen using the default unix socket, and on 2 specific IP addresses on this host.

$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H


tcp://10.10.10.2

The Docker client will honor the DOCKER_HOST environment variable to set the -H flag for the client.
Use one of the following commands:
$ docker -H tcp://0.0.0.0:2375 ps
$ export DOCKER_HOST="tcp://0.0.0.0:2375"

$ docker ps

Setting the DOCKER_TLS_VERIFY environment variable to any value other than the empty string is
equivalent to setting the --tlsverify flag. The following are equivalent:
$ docker --tlsverify ps
# or
$ export DOCKER_TLS_VERIFY=1
$ docker ps

The Docker client will honor the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables (or
the lowercase versions thereof). HTTPS_PROXY takes precedence over HTTP_PROXY.

Starting with Docker 18.09, the Docker client supports connecting to a remote daemon via SSH:

$ docker -H ssh://[email protected]:22 ps
$ docker -H ssh://[email protected] ps
$ docker -H ssh://example.com ps

To use an SSH connection, you need to set up ssh so that it can reach the remote host with public key
authentication. Password authentication is not supported. If your key is protected with a passphrase,
you need to set up ssh-agent.
Also, you need to have the docker binary 18.09 or later on the daemon host.

BIND DOCKER TO ANOTHER HOST/PORT OR A UNIX SOCKET

Warning: Changing the default docker daemon binding to a TCP port or Unix docker user group will
increase your security risks by allowing non-root users to gain root access on the host. Make sure
you control access to docker. If you are binding to a TCP port, anyone with access to that port has
full Docker access; so it is not advisable on an open network.
With -H it is possible to make the Docker daemon listen on a specific IP and port. By default, it will
listen on unix:///var/run/docker.sock to allow only local connections by the root user.
You could set it to 0.0.0.0:2375 or a specific host IP to give access to everybody, but that is not
recommended because then it is trivial for someone to gain root access to the host where the
daemon is running.
Similarly, the Docker client can use -H to connect to a custom port. The Docker client will default to
connecting to unix:///var/run/docker.sock on Linux, and tcp://127.0.0.1:2376on Windows.
-H accepts host and port assignment in the following format:

tcp://[host]:[port][path] or unix://path

For example:

 tcp:// -> TCP connection to 127.0.0.1 on either port 2376 when TLS encryption is on, or port 2375 when communication is in plain text.
 tcp://host:2375 -> TCP connection on host:2375
 tcp://host:2375/path -> TCP connection on host:2375 and prepend path to all requests
 unix://path/to/socket -> Unix socket located at path/to/socket

-H, when empty, will default to the same value as when no -H was passed in.
-H also accepts short form for TCP bindings: host: or host:port or :port

Run Docker in daemon mode:

$ sudo <path to>/dockerd -H 0.0.0.0:5555 &

Download an ubuntu image:


$ docker -H :5555 pull ubuntu

You can use multiple -H, for example, if you want to listen on both TCP and a Unix socket
# Run docker in daemon mode
$ sudo <path to>/dockerd -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock &
# Download an ubuntu image, use default Unix socket
$ docker pull ubuntu
# OR use the TCP port
$ docker -H tcp://127.0.0.1:2375 pull ubuntu
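The same multi-socket setup can also be expressed in the daemon configuration file (/etc/docker/daemon.json on Linux). A minimal sketch; note that -H flags on the command line cannot be combined with a hosts entry in the file:

```json
{
  "hosts": ["tcp://127.0.0.1:2375", "unix:///var/run/docker.sock"]
}
```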

Daemon storage-driver
On Linux, the Docker daemon has support for several different image layer storage
drivers: aufs, devicemapper, btrfs, zfs, overlay and overlay2.
The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged
into the main kernel. It is also known to cause some serious kernel crashes.
However aufs allows containers to share executable and shared library memory, so is a useful
choice when running thousands of containers with the same program or libraries.
The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each
devicemapper graph location – typically /var/lib/docker/devicemapper – a thin pool is created
based on two block devices, one for data and one for metadata. By default, these block devices are
created automatically by using loopback mounts of automatically created sparse files. Refer
to Devicemapper options below for a way to customize this setup. The jpetazzo/Resizing Docker
containers with the Device Mapper plugin article explains how to tune your existing setup without the
use of options.
The btrfs driver is very fast for docker build - but like devicemapper does not share executable
memory between devices. Use dockerd -s btrfs -g /mnt/btrfs_partition.
The zfs driver is probably not as fast as btrfs but has a longer track record on stability. Thanks
to Single Copy ARC shared blocks between clones will be cached only once. Use dockerd -s zfs.
To select a different zfs filesystem set zfs.fsname option as described in ZFS options.
The overlay driver is a very fast union filesystem. It is merged in the main Linux kernel as
of 3.18.0. overlay also supports page cache sharing, which means multiple containers accessing the
same file can share a single page cache entry (or entries); this makes overlay as efficient with memory
as the aufs driver. Call dockerd -s overlay to use it.
Note: As promising as overlay is, the feature is still quite young and should not be used in
production. Most notably, using overlay can cause excessive inode consumption (especially as the
number of images grows), as well as being incompatible with the use of RPMs.
The overlay2 driver uses the same fast union filesystem but takes advantage of additional features added
in Linux kernel 4.0 to avoid excessive inode consumption. Call dockerd -s overlay2 to use it.
Note: Both overlay and overlay2 are currently unsupported on btrfs or any Copy on Write
filesystem and should only be used over ext4 partitions.
On Windows, the Docker daemon supports a single image layer storage driver depending on the
image platform: windowsfilter for Windows images, and lcow for Linux containers on Windows.
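The storage driver can also be selected in the daemon configuration file instead of with the -s/--storage-driver flag. A minimal daemon.json sketch:

```json
{
  "storage-driver": "overlay2"
}
```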

Options per storage driver


A particular storage driver can be configured with options specified with --storage-opt flags. Options
for devicemapper are prefixed with dm, options for zfs start with zfs, options for btrfs start
with btrfs, and options for lcow start with lcow.

DEVICEMAPPER OPTIONS

This is an example of the configuration file for devicemapper on Linux:

{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.thinpooldev=/dev/mapper/thin-pool",
"dm.use_deferred_deletion=true",
"dm.use_deferred_removal=true"
]
}

dm.thinpooldev

Specifies a custom block storage device to use for the thin pool.

If using a block device for device mapper storage, it is best to use lvm to create and manage the
thin-pool volume. This volume is then handed to Docker to exclusively create snapshot volumes
needed for images and containers.

Managing the thin-pool outside of Engine makes for the most feature-rich method of having Docker
utilize device mapper thin provisioning as the backing storage for Docker containers. The highlights
of the lvm-based thin-pool management feature include: automatic or interactive thin-pool resize
support, dynamically changing thin-pool features, automatic thinp metadata checking when lvm
activates the thin-pool, etc.

As a fallback if no thin pool is provided, loopback files are created. Loopback is very slow, but can be
used without any pre-configuration of storage. It is strongly recommended that you do not use
loopback in production. Ensure your Engine daemon has a --storage-opt dm.thinpooldev argument
provided.

Example:

$ sudo dockerd --storage-opt dm.thinpooldev=/dev/mapper/thin-pool

dm.directlvm_device

As an alternative to providing a thin pool as above, Docker can set up a block device for you.

Example:

$ sudo dockerd --storage-opt dm.directlvm_device=/dev/xvdf

dm.thinp_percent

Sets the percentage of passed in block device to use for storage.

Example:

$ sudo dockerd --storage-opt dm.thinp_percent=95

dm.thinp_metapercent

Sets the percentage of the passed in block device to use for metadata storage.

Example:

$ sudo dockerd --storage-opt dm.thinp_metapercent=1

dm.thinp_autoextend_threshold
Sets the value of the percentage of space used before lvm attempts to autoextend the available
space [100 = disabled]
Example:

$ sudo dockerd --storage-opt dm.thinp_autoextend_threshold=80

dm.thinp_autoextend_percent
Sets the percentage value to increase the thin pool by when lvm attempts to autoextend the
available space [100 = disabled]

Example:

$ sudo dockerd --storage-opt dm.thinp_autoextend_percent=20

dm.basesize

Specifies the size to use when creating the base device, which limits the size of images and
containers. The default value is 10G. Note, thin devices are inherently “sparse”, so a 10G device
which is mostly empty doesn’t use 10 GB of space on the pool. However, the larger the device is, the
more space the filesystem will use even when empty.

The base device size can be increased at daemon restart which will allow all future images and
containers (based on those new images) to be of the new base device size.

Examples

$ sudo dockerd --storage-opt dm.basesize=50G

This will increase the base device size to 50G. The Docker daemon will throw an error if existing
base device size is larger than 50G. A user can use this option to expand the base device size
however shrinking is not permitted.

This value affects the system-wide “base” empty filesystem that may already be initialized and
inherited by pulled images. Typically, a change to this value requires additional steps to take effect:

$ sudo service docker stop

$ sudo rm -rf /var/lib/docker

$ sudo service docker start


dm.loopdatasize
Note: This option configures devicemapper loopback, which should not be used in production.

Specifies the size to use when creating the loopback file for the “data” device which is used for the
thin pool. The default size is 100G. The file is sparse, so it will not initially take up this much space.

Example

$ sudo dockerd --storage-opt dm.loopdatasize=200G

dm.loopmetadatasize
Note: This option configures devicemapper loopback, which should not be used in production.

Specifies the size to use when creating the loopback file for the “metadata” device which is used for
the thin pool. The default size is 2G. The file is sparse, so it will not initially take up this much space.

Example

$ sudo dockerd --storage-opt dm.loopmetadatasize=4G

dm.fs

Specifies the filesystem type to use for the base device. The supported options are “ext4” and “xfs”.
The default is “xfs”.

Example

$ sudo dockerd --storage-opt dm.fs=ext4

dm.mkfsarg

Specifies extra mkfs arguments to be used when creating the base device.

Example

$ sudo dockerd --storage-opt "dm.mkfsarg=-O ^has_journal"

dm.mountopt

Specifies extra mount options used when mounting the thin devices.
Example

$ sudo dockerd --storage-opt dm.mountopt=nodiscard

dm.datadev
(Deprecated, use dm.thinpooldev)

Specifies a custom blockdevice to use for data for the thin pool.

If using a block device for device mapper storage, ideally both datadev and metadatadev should be
specified to completely avoid using the loopback device.

Example

$ sudo dockerd \
--storage-opt dm.datadev=/dev/sdb1 \
--storage-opt dm.metadatadev=/dev/sdc1

dm.metadatadev
(Deprecated, use dm.thinpooldev)

Specifies a custom blockdevice to use for metadata for the thin pool.

For best performance the metadata should be on a different spindle than the data, or even better on
an SSD.

If setting up a new metadata pool it is required to be valid. This can be achieved by zeroing the first
4k to indicate empty metadata, like this:

$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1

Example

$ sudo dockerd \
--storage-opt dm.datadev=/dev/sdb1 \
--storage-opt dm.metadatadev=/dev/sdc1

dm.blocksize

Specifies a custom blocksize to use for the thin pool. The default blocksize is 64K.
Example

$ sudo dockerd --storage-opt dm.blocksize=512K

dm.blkdiscard
Enables or disables the use of blkdiscard when removing devicemapper devices. This is enabled by
default (only) if using loopback devices and is required to resparsify the loopback file on
image/container removal.
Disabling this on loopback can lead to much faster container removal times, but the space used in
the /var/lib/docker directory will not be returned to the system for other use when containers are
removed.

Examples

$ sudo dockerd --storage-opt dm.blkdiscard=false

dm.override_udev_sync_check
Overrides the udev synchronization checks between devicemapper and udev. udev is the device
manager for the Linux kernel.
To view the udev sync support of a Docker daemon that is using the devicemapper driver, run:
$ docker info
[...]
Udev Sync Supported: true
[...]

When udev sync support is true, then devicemapper and udev can coordinate the activation and
deactivation of devices for containers.
When udev sync support is false, a race condition occurs between the devicemapper and udev during
create and cleanup. The race condition results in errors and failures. (For information on these
failures, see docker#4036)
To allow the docker daemon to start, regardless of udev sync not being supported,
set dm.override_udev_sync_check to true:
$ sudo dockerd --storage-opt dm.override_udev_sync_check=true

When this value is true, the devicemapper continues and simply warns you when the errors happen.
Note: The ideal is to pursue a docker daemon and environment that does support synchronizing
with udev. For further discussion on this topic, see docker#4036. Otherwise, set this flag for migrating
existing Docker daemons to a daemon with a supported environment.
dm.use_deferred_removal
Enables use of deferred device removal if libdm and the kernel driver support the mechanism.

Deferred device removal means that if a device is busy when devices are being removed/deactivated,
a deferred removal is scheduled on that device. The device is removed automatically when its last
user exits.

For example, when a container exits, its associated thin device is removed. If that device has leaked
into some other mount namespace and can’t be removed, the container exit still succeeds and this
option causes the system to schedule the device for deferred removal. It does not wait in a loop
trying to remove a busy device.

Example

$ sudo dockerd --storage-opt dm.use_deferred_removal=true

dm.use_deferred_deletion

Enables use of deferred device deletion for thin pool devices. By default, thin pool device deletion is
synchronous. Before a container is deleted, the Docker daemon removes any associated devices. If
the storage driver cannot remove a device, the container deletion fails and the daemon returns an error:

Error deleting container: Error response from daemon: Cannot destroy container

To avoid this failure, enable both deferred device deletion and deferred device removal on the
daemon.

$ sudo dockerd \
--storage-opt dm.use_deferred_deletion=true \
--storage-opt dm.use_deferred_removal=true

With these two options enabled, if a device is busy when the driver is deleting a container, the driver
marks the device as deleted. Later, when the device isn’t in use, the driver deletes it.

In general it should be safe to enable this option by default. It helps when mount points
unintentionally leak across multiple mount namespaces.
dm.min_free_space

Specifies the minimum free space percentage in a thin pool required for new device creation to
succeed. This check applies to both free data space and free metadata space. Valid values are from
0% to 99%. A value of 0% disables the free space checking logic. If the user does not specify a value
for this option, the Engine uses a default value of 10%.

Whenever a new thin pool device is created (during docker pull or during container creation), the
Engine checks if the minimum free space is available. If sufficient space is unavailable, then device
creation fails and any relevant docker operation fails.

To recover from this error, you must create more free space in the thin pool. You can create free
space by deleting some images and containers from the thin pool. You can also add more storage to
the thin pool.

To add more space to an LVM (logical volume management) thin pool, just add more storage to the
volume group containing the thin pool; this should automatically resolve any errors. If your configuration
uses loop devices, then stop the Engine daemon, grow the size of the loop files and restart the daemon
to resolve the issue.

Example

$ sudo dockerd --storage-opt dm.min_free_space=10%

dm.xfs_nospace_max_retries

Specifies the maximum number of retries XFS should attempt to complete IO when ENOSPC (no
space) error is returned by underlying storage device.

By default XFS retries infinitely for IO to finish, which can result in an unkillable process. To change
this behavior, set xfs_nospace_max_retries to, say, 0: XFS will then not retry IO after getting
ENOSPC and will shut down the filesystem.

Example

$ sudo dockerd --storage-opt dm.xfs_nospace_max_retries=0

dm.libdm_log_level
Specifies the maximum libdm log level that will be forwarded to the dockerd log (as specified by --
log-level). This option is primarily intended for debugging problems involving libdm. Using values
other than the defaults may cause false-positive warnings to be logged.
Values specified must fall within the range of valid libdm log levels. At the time of writing, the
following is the list of libdm log levels as well as their corresponding levels when output by dockerd.

libdm Level    Value   --log-level
_LOG_FATAL     2       error
_LOG_ERR       3       error
_LOG_WARN      4       warn
_LOG_NOTICE    5       info
_LOG_INFO      6       info
_LOG_DEBUG     7       debug

Example

$ sudo dockerd \
--log-level debug \
--storage-opt dm.libdm_log_level=7
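The same pairing can be sketched in the daemon configuration file; both keys shown here are standard daemon.json options:

```json
{
  "log-level": "debug",
  "storage-opts": ["dm.libdm_log_level=7"]
}
```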

ZFS OPTIONS

zfs.fsname
Sets the zfs filesystem under which docker will create its own datasets. By default docker will pick
the zfs filesystem where the docker graph (/var/lib/docker) is located.

Example

$ sudo dockerd -s zfs --storage-opt zfs.fsname=zroot/docker

BTRFS OPTIONS

btrfs.min_space

Specifies the minimum size to use when creating the subvolume which is used for containers. If the
user uses disk quota for btrfs when creating or running a container with the --storage-opt size option,
docker ensures the size cannot be smaller than btrfs.min_space.
Example

$ sudo dockerd -s btrfs --storage-opt btrfs.min_space=10G
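An equivalent daemon.json sketch for the flags above:

```json
{
  "storage-driver": "btrfs",
  "storage-opts": ["btrfs.min_space=10G"]
}
```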

OVERLAY2 OPTIONS

overlay2.override_kernel_check

Overrides the Linux kernel version check allowing overlay2. Support for specifying multiple lower
directories needed by overlay2 was added to the Linux kernel in 4.0.0. However, some older kernel
versions may be patched to add multiple lower directory support for OverlayFS. This option should
only be used after verifying this support exists in the kernel. Applying this option on a kernel without
this support will cause failures on mount.

overlay2.size
Sets the default max size of the container. It is supported only when the backing fs is xfs and is
mounted with the pquota mount option. Under these conditions the user can pass any size less than
the backing fs size.

Example

$ sudo dockerd -s overlay2 --storage-opt overlay2.size=1G
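The same setup in daemon.json form (a sketch; as noted above, the size cap requires an xfs backing filesystem mounted with the pquota option):

```json
{
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.size=1G"]
}
```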

WINDOWSFILTER OPTIONS

size

Specifies the size to use when creating the sandbox which is used for containers. Defaults to 20G.

Example

C:\> dockerd --storage-opt size=40G

LCOW (LINUX CONTAINERS ON WINDOWS) OPTIONS

lcow.globalmode

Specifies whether the daemon instantiates utility VM instances as required (recommended, and the
default if omitted), or uses a single global utility VM (better performance, but with security implications;
not recommended for production deployments).
Example

C:\> dockerd --storage-opt lcow.globalmode=false

lcow.kirdpath
Specifies the folder path to the location of a pair of kernel and initrd files used for booting a utility
VM. Defaults to %ProgramFiles%\Linux Containers.

Example

C:\> dockerd --storage-opt lcow.kirdpath=c:\path\to\files

lcow.kernel
Specifies the filename of a kernel file located in the lcow.kirdpath path. Defaults to bootx64.efi.

Example

C:\> dockerd --storage-opt lcow.kernel=kernel.efi

lcow.initrd
Specifies the filename of an initrd file located in the lcow.kirdpath path. Defaults to initrd.img.

Example

C:\> dockerd --storage-opt lcow.initrd=myinitrd.img

lcow.bootparameters

Specifies additional boot parameters for booting utility VMs when in kernel/ initrd mode. Ignored if
the utility VM is booting from VHD. These settings are kernel specific.

Example

C:\> dockerd --storage-opt "lcow.bootparameters='option=value'"

lcow.vhdx
Specifies a custom VHDX to boot a utility VM, as an alternate to kernel and initrd booting. Defaults
to uvm.vhdx under lcow.kirdpath.
Example

C:\> dockerd --storage-opt lcow.vhdx=custom.vhdx

lcow.timeout

Specifies the timeout for utility VM operations in seconds. Defaults to 300.

Example

C:\> dockerd --storage-opt lcow.timeout=240

lcow.sandboxsize

Specifies the size in GB to use when creating the sandbox which is used for containers. Defaults to
20. Cannot be less than 20.

Example

C:\> dockerd --storage-opt lcow.sandboxsize=40
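On Windows, these storage options can also be placed in the daemon configuration file (C:\ProgramData\docker\config\daemon.json); a sketch combining two of the options above:

```json
{
  "storage-opts": [
    "lcow.globalmode=false",
    "lcow.timeout=240"
  ]
}
```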

Docker runtime execution options


The Docker daemon relies on an OCI compliant runtime (invoked via the containerd daemon) as its
interface to the Linux kernel namespaces, cgroups, and SELinux.
By default, the Docker daemon automatically starts containerd. If you want to
control containerd startup, manually start containerd and pass the path to the containerd socket
using the --containerd flag. For example:
$ sudo dockerd --containerd /var/run/dev/docker-containerd.sock

Runtimes can be registered with the daemon either via the configuration file or using the
--add-runtime command line argument.

The following is an example adding 2 runtimes via the configuration:

{
"default-runtime": "runc",
"runtimes": {
"runc": {
"path": "runc"
},
"custom": {
"path": "/usr/local/bin/my-runc-replacement",
"runtimeArgs": [
"--debug"
]
}
}
}

This is the same example via the command line:

$ sudo dockerd --add-runtime runc=runc --add-runtime custom=/usr/local/bin/my-runc-replacement

Note: Defining runtime arguments via the command line is not supported.

OPTIONS FOR THE RUNTIME

You can configure the runtime using options specified with the --exec-opt flag. All the flag’s options
have the native prefix. A single native.cgroupdriver option is available.
The native.cgroupdriver option specifies the management of the container’s cgroups. You can only
specify cgroupfs or systemd. If you specify systemd and it is not available, the system errors out. If
you omit the native.cgroupdriver option, cgroupfs is used.
This example sets the cgroupdriver to systemd:
$ sudo dockerd --exec-opt native.cgroupdriver=systemd

Setting this option applies to all containers the daemon launches.

Windows containers also make use of --exec-opt for a special purpose: a Docker user can specify
the default container isolation technology with it. For example:
> dockerd --exec-opt isolation=hyperv

This makes hyperv the default isolation technology on Windows. If no isolation value is specified on
daemon start, the default on Windows client is hyperv, and on Windows server it
is process.
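The cgroup driver example above has an equivalent daemon.json form; a minimal sketch:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```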

Daemon DNS options


To set the DNS server for all Docker containers, use:

$ sudo dockerd --dns 8.8.8.8

To set the DNS search domain for all Docker containers, use:

$ sudo dockerd --dns-search example.com
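Both DNS settings can also be expressed in the daemon configuration file; a sketch:

```json
{
  "dns": ["8.8.8.8"],
  "dns-search": ["example.com"]
}
```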

Allow push of nondistributable artifacts


Some images (e.g., Windows base images) contain artifacts whose distribution is restricted by
license. When these images are pushed to a registry, restricted artifacts are not included.

To override this behavior for specific registries, use the --allow-nondistributable-artifacts option
in one of the following forms:

 --allow-nondistributable-artifacts myregistry:5000 tells the Docker daemon to push
nondistributable artifacts to myregistry:5000.
 --allow-nondistributable-artifacts 10.1.0.0/16 tells the Docker daemon to push
nondistributable artifacts to all registries whose resolved IP address is within the subnet
described by the CIDR syntax.

This option can be used multiple times.

This option is useful when pushing images containing nondistributable artifacts to a registry on an
air-gapped network so hosts on that network can pull the images without connecting to another
server.

Warning: Nondistributable artifacts typically have restrictions on how and where they can be
distributed and shared. Only use this feature to push artifacts to private registries and ensure that
you are in compliance with any terms that cover redistributing nondistributable artifacts.
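The flag has an equivalent key in the daemon configuration file; a sketch:

```json
{
  "allow-nondistributable-artifacts": ["myregistry:5000"]
}
```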

Insecure registries
Docker considers a private registry either secure or insecure. In the rest of this section, registry is
used for private registry, and myregistry:5000 is a placeholder example for a private registry.
A secure registry uses TLS and a copy of its CA certificate is placed on the Docker host
at /etc/docker/certs.d/myregistry:5000/ca.crt. An insecure registry is either not using TLS (i.e.,
listening on plain text HTTP), or is using TLS with a CA certificate not known by the Docker daemon.
The latter can happen when the certificate was not found
under /etc/docker/certs.d/myregistry:5000/, or if the certificate verification failed (i.e., wrong CA).
By default, Docker assumes all registries but local ones (see local registries below) are secure.
Communicating with an insecure registry is not possible if Docker assumes that registry is secure. In
order to communicate with an insecure registry, the Docker daemon requires
--insecure-registry in one of the following two forms:

 --insecure-registry myregistry:5000 tells the Docker daemon that myregistry:5000 should
be considered insecure.
 --insecure-registry 10.1.0.0/16 tells the Docker daemon that all registries whose domain
resolves to an IP address within the subnet described by the CIDR syntax should be
considered insecure.

The flag can be used multiple times to allow multiple registries to be marked as insecure.

If an insecure registry is not marked as insecure, docker pull, docker push, and docker search will
result in an error message prompting the user to either secure the registry or pass the
--insecure-registry flag to the Docker daemon as described above.

Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as
insecure as of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future.

Enabling --insecure-registry, i.e., allowing un-encrypted and/or untrusted communication, can be
useful when running a local registry. However, because its use creates security vulnerabilities it
should ONLY be enabled for testing purposes. For increased security, users should add their CA to
their system’s list of trusted CAs instead of enabling --insecure-registry.
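For testing, the flag can also be set in the daemon configuration file; a sketch:

```json
{
  "insecure-registries": ["myregistry:5000"]
}
```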

LEGACY REGISTRIES

Starting with Docker 17.12, operations against registries supporting only the legacy v1 protocol are
no longer supported. Specifically, the daemon will not attempt push, pull and login to v1 registries.
The exception to this is search which can still be performed on v1 registries.
The disable-legacy-registry configuration option has been removed and, when used, will produce
an error on daemon startup.

Running a Docker daemon behind an HTTPS_PROXY


When running inside a LAN that uses an HTTPS proxy, the Docker Hub certificates will be replaced by
the proxy’s certificates. These certificates need to be added to your Docker host’s configuration:

1. Install the ca-certificates package for your distribution.
2. Ask your network admin for the proxy’s CA certificate and append them
to /etc/pki/tls/certs/ca-bundle.crt.
3. Then start your Docker daemon with HTTPS_PROXY=http://username:password@proxy:port/
dockerd. The username: and password@ are optional - and are only needed if your proxy is set
up to require authentication.

This will only add the proxy and authentication to the Docker daemon’s requests - your docker
builds and running containers will need extra configuration to use the proxy.
Default ulimit settings
--default-ulimit allows you to set the default ulimit options to use for all containers. It takes the
same options as --ulimit for docker run. If these defaults are not set, ulimit settings will be
inherited, if not set on docker run, from the Docker daemon. Any --ulimit options passed to docker
run will overwrite these defaults.
Be careful setting nproc with the ulimit flag as nproc is designed by Linux to set the maximum
number of processes available to a user, not to a container. For details please check
the run reference.
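A daemon.json sketch setting a default open-files limit for all containers (the numeric values here are only illustrative):

```json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
```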

Node discovery
The --cluster-advertise option specifies the host:port or interface:port combination that this
particular daemon instance should use when advertising itself to the cluster. The daemon is reached
by remote hosts through this value. If you specify an interface, make sure it includes the IP address
of the actual Docker host. For Engine installation created through docker-machine, the interface is
typically eth1.
The daemon uses libkv to advertise the node within the cluster. Some key-value backends support
mutual TLS. The client TLS settings used by the daemon can be configured using the
--cluster-store-opt flag, specifying the paths to PEM encoded files. For example:

$ sudo dockerd \
--cluster-advertise 192.168.1.2:2376 \
--cluster-store etcd://192.168.1.2:2379 \
--cluster-store-opt kv.cacertfile=/path/to/ca.pem \
--cluster-store-opt kv.certfile=/path/to/cert.pem \
--cluster-store-opt kv.keyfile=/path/to/key.pem

The currently supported cluster store options are:

discovery.heartbeat
    Specifies the heartbeat timer in seconds which is used by the daemon as a keepalive
    mechanism to make sure the discovery module treats the node as alive in the cluster. If
    not configured, the default value is 20 seconds.

discovery.ttl
    Specifies the TTL (time-to-live) in seconds which is used by the discovery module to
    time out a node if a valid heartbeat is not received within the configured ttl value. If not
    configured, the default value is 60 seconds.

kv.cacertfile
    Specifies the path to a local file with PEM encoded CA certificates to trust.

kv.certfile
    Specifies the path to a local file with a PEM encoded certificate. This certificate is used
    as the client cert for communication with the Key/Value store.

kv.keyfile
    Specifies the path to a local file with a PEM encoded private key. This private key is
    used as the client key for communication with the Key/Value store.

kv.path
    Specifies the path in the Key/Value store. If not configured, the default value
    is ‘docker/nodes’.

Access authorization
Docker’s access authorization can be extended by authorization plugins that your organization can
purchase or build themselves. You can install one or more authorization plugins when you start the
Docker daemon using the --authorization-plugin=PLUGIN_ID option.
$ sudo dockerd --authorization-plugin=plugin1 --authorization-plugin=plugin2,...

The PLUGIN_ID value is either the plugin’s name or a path to its specification file. The plugin’s
implementation determines whether you can specify a name or path. Consult with your Docker
administrator to get information about the plugins available to you.
Once a plugin is installed, requests made to the daemon through the command line or Docker’s
Engine API are allowed or denied by the plugin. If you have multiple plugins installed, each plugin, in
order, must allow the request for it to complete.
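The plugins can also be listed in the daemon configuration file; a sketch:

```json
{
  "authorization-plugins": ["plugin1", "plugin2"]
}
```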

For information about how to create an authorization plugin, see authorization plugin section in the
Docker extend section of this documentation.

Daemon user namespace options


The Linux kernel user namespace support provides additional security by enabling a process, and
therefore a container, to have a unique range of user and group IDs which are outside the traditional
user and group range utilized by the host system. Potentially the most important security
improvement is that, by default, container processes running as the root user will have expected
administrative privilege (with some restrictions) inside the container but will effectively be mapped to
an unprivileged uid on the host.

For details about how to use this feature, as well as limitations, see Isolate containers with a user
namespace.

Miscellaneous options
IP masquerading uses address translation to allow containers without a public IP to talk to other
machines on the Internet. This may interfere with some network topologies and can be disabled
with --ip-masq=false.
Docker supports softlinks for the Docker data directory (/var/lib/docker) and
for /var/lib/docker/tmp. The DOCKER_TMPDIR and the data directory can be set like this:
DOCKER_TMPDIR=/mnt/disk2/tmp /usr/local/bin/dockerd -D -g /var/lib/docker -H unix:// > /var/lib/docker-machine/docker.log 2>&1
# or
export DOCKER_TMPDIR=/mnt/disk2/tmp
/usr/local/bin/dockerd -D -g /var/lib/docker -H unix:// > /var/lib/docker-machine/docker.log 2>&1

DEFAULT CGROUP PARENT

The --cgroup-parent option allows you to set the default cgroup parent to use for containers. If this
option is not set, it defaults to /docker for fs cgroup driver and system.slice for systemd cgroup
driver.
If the cgroup has a leading forward slash (/), the cgroup is created under the root cgroup, otherwise
the cgroup is created under the daemon cgroup.
Assuming the daemon is running in cgroup daemoncgroup, --cgroup-parent=/foobar creates a
cgroup in /sys/fs/cgroup/memory/foobar, whereas using --cgroup-parent=foobar creates the
cgroup in /sys/fs/cgroup/memory/daemoncgroup/foobar.
The systemd cgroup driver has different rules for --cgroup-parent. Systemd represents hierarchy by
slice and the name of the slice encodes the location in the tree. So --cgroup-parent for systemd
cgroups should be a slice name. A name can consist of a dash-separated series of names, which
describes the path to the slice from the root slice. For example, --cgroup-parent=user-a-
b.slice means the memory cgroup for the container is created
in /sys/fs/cgroup/memory/user.slice/user-a.slice/user-a-b.slice/docker-<id>.scope.
This setting can also be set per container, using the --cgroup-parent option on docker
create and docker run, and takes precedence over the --cgroup-parent option on the daemon.
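The same default can also be set in the daemon configuration file instead of as a flag; a minimal sketch, where the /docker-custom value is purely illustrative:

```json
{
  "cgroup-parent": "/docker-custom"
}
```

With the leading slash, containers are then placed under the root cgroup (e.g. /sys/fs/cgroup/memory/docker-custom/ with the cgroupfs driver), per the rule described above.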

DAEMON METRICS

The --metrics-addr option takes a tcp address to serve the metrics API on. This feature is still
experimental; the daemon must be running in experimental mode for it to work.
To serve the metrics API on localhost:9323 you would specify --metrics-addr 127.0.0.1:9323,
allowing you to make requests to the API at 127.0.0.1:9323/metrics to receive metrics in
the Prometheus format.
Port 9323 is the default port associated with Docker metrics, chosen to avoid collisions with other
Prometheus exporters and services.

If you are running a Prometheus server, you can add this address to your scrape configs to have
Prometheus collect metrics on Docker. For more information, see the Prometheus website.

scrape_configs:
  - job_name: 'docker'
    static_configs:
      - targets: ['127.0.0.1:9323']

Please note that this feature is still marked as experimental; metrics and metric names may
change while it remains experimental. Please provide feedback on what you would like to
see collected in the API.
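Both requirements, experimental mode and a metrics address, can also be expressed in daemon.json rather than as flags; a minimal sketch:

```json
{
  "experimental": true,
  "metrics-addr": "127.0.0.1:9323"
}
```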

NODE GENERIC RESOURCES

The --node-generic-resources option takes a list of key-value pairs (key=value) that allows you to
advertise user-defined resources in a swarm cluster.
The current expected use case is to advertise NVIDIA GPUs so that services requesting NVIDIA-
GPU=[0-16] can land on a node that has enough GPUs for the task to run.

Example of usage:

{
  "node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"]
}

Daemon configuration file


The --config-file option allows you to set any configuration option for the daemon in a JSON
format. This file uses the same flag names as keys, except for flags that allow several entries, where
it uses the plural of the flag name, e.g., labels for the label flag.
The options set in the configuration file must not conflict with options set via flags. The docker
daemon fails to start if an option is duplicated between the file and the flags, regardless of its value.
We do this to avoid silently ignoring changes introduced in configuration reloads. For example, the
daemon fails to start if you set daemon labels in the configuration file and also set daemon labels via
the --label flag. Options that are not present in the file are ignored when the daemon starts.
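As a concrete sketch of this rule, suppose daemon.json contains (label value illustrative):

```json
{
  "labels": ["env=staging"]
}
```

Starting the daemon with dockerd --label env=staging then fails with a duplicate-option error, even though the flag and the file agree on the value.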

On Linux

The default location of the configuration file on Linux is /etc/docker/daemon.json. The --config-
file flag can be used to specify a non-default location.

This is a full example of the allowed configuration options on Linux:

{
  "authorization-plugins": [],
  "data-root": "",
  "dns": [],
  "dns-opts": [],
  "dns-search": [],
  "exec-opts": [],
  "exec-root": "",
  "experimental": false,
  "features": {},
  "storage-driver": "",
  "storage-opts": [],
  "labels": [],
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5",
    "labels": "somelabel",
    "env": "os,customer"
  },
  "mtu": 0,
  "pidfile": "",
  "cluster-store": "",
  "cluster-store-opts": {},
  "cluster-advertise": "",
  "max-concurrent-downloads": 3,
  "max-concurrent-uploads": 5,
  "default-shm-size": "64M",
  "shutdown-timeout": 15,
  "debug": true,
  "hosts": [],
  "log-level": "",
  "tls": true,
  "tlsverify": true,
  "tlscacert": "",
  "tlscert": "",
  "tlskey": "",
  "swarm-default-advertise-addr": "",
  "api-cors-header": "",
  "selinux-enabled": false,
  "userns-remap": "",
  "group": "",
  "cgroup-parent": "",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  },
  "init": false,
  "init-path": "/usr/libexec/docker-init",
  "ipv6": false,
  "iptables": false,
  "ip-forward": false,
  "ip-masq": false,
  "userland-proxy": false,
  "userland-proxy-path": "/usr/libexec/docker-proxy",
  "ip": "0.0.0.0",
  "bridge": "",
  "bip": "",
  "fixed-cidr": "",
  "fixed-cidr-v6": "",
  "default-gateway": "",
  "default-gateway-v6": "",
  "icc": false,
  "raw-logs": false,
  "allow-nondistributable-artifacts": [],
  "registry-mirrors": [],
  "seccomp-profile": "",
  "insecure-registries": [],
  "no-new-privileges": false,
  "default-runtime": "runc",
  "oom-score-adjust": -500,
  "node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"],
  "runtimes": {
    "cc-runtime": {
      "path": "/usr/bin/cc-runtime"
    },
    "custom": {
      "path": "/usr/local/bin/my-runc-replacement",
      "runtimeArgs": [
        "--debug"
      ]
    }
  },
  "default-address-pools": [
    {"base": "172.80.0.0/16", "size": 24},
    {"base": "172.90.0.0/16", "size": 24}
  ]
}

Note: You cannot set options in daemon.json that have already been set on daemon startup as a
flag. On systems that use systemd to start the Docker daemon, -H is already set, so you cannot use
the hosts key in daemon.json to add listening addresses. See
https://docs.docker.com/engine/admin/systemd/#custom-docker-daemon-options for how to
accomplish this task with a systemd drop-in file.
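A sketch of such a drop-in, e.g. /etc/systemd/system/docker.service.d/docker.conf (the path and the extra tcp address are illustrative); the empty ExecStart= line clears the unit's original command before redefining it with the desired -H flags:

```ini
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
```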

On Windows

The default location of the configuration file on Windows
is %programdata%\docker\config\daemon.json. The --config-file flag can be used to specify a non-
default location.

This is a full example of the allowed configuration options on Windows:

{
  "authorization-plugins": [],
  "data-root": "",
  "dns": [],
  "dns-opts": [],
  "dns-search": [],
  "exec-opts": [],
  "experimental": false,
  "features": {},
  "storage-driver": "",
  "storage-opts": [],
  "labels": [],
  "log-driver": "",
  "mtu": 0,
  "pidfile": "",
  "cluster-store": "",
  "cluster-advertise": "",
  "max-concurrent-downloads": 3,
  "max-concurrent-uploads": 5,
  "shutdown-timeout": 15,
  "debug": true,
  "hosts": [],
  "log-level": "",
  "tlsverify": true,
  "tlscacert": "",
  "tlscert": "",
  "tlskey": "",
  "swarm-default-advertise-addr": "",
  "group": "",
  "default-ulimits": {},
  "bridge": "",
  "fixed-cidr": "",
  "raw-logs": false,
  "allow-nondistributable-artifacts": [],
  "registry-mirrors": [],
  "insecure-registries": []
}

FEATURE OPTIONS

The optional field features in daemon.json allows users to enable or disable specific daemon
features. For example, {"features":{"buildkit": true}} enables buildkit as the default docker
image builder.

The list of currently supported feature options:


 buildkit: enables BuildKit as the default builder when set to true, and disables it when set
to false. Note that if this option is not explicitly set in the daemon config file, then it is up to
the CLI to determine which builder to invoke.

CONFIGURATION RELOAD BEHAVIOR

Some options can be reconfigured while the daemon is running, without restarting the
process. We use the SIGHUP signal on Linux to reload, and a global event on Windows with the
key Global\docker-daemon-config-$PID. The options can be modified in the configuration file, but the
daemon still checks for conflicts with the provided flags. The daemon fails to reconfigure itself if there
are conflicts, but it does not stop execution.

The following options can currently be reconfigured:

 debug: it changes the daemon to debug mode when set to true.
 cluster-store: it reloads the discovery store with the new address.
 cluster-store-opts: it uses the new options to reload the discovery store.
 cluster-advertise: it modifies the address advertised after reloading.
 labels: it replaces the daemon labels with a new set of labels.
 live-restore: Enables keeping containers alive during daemon downtime.
 max-concurrent-downloads: it updates the max concurrent downloads for each pull.
 max-concurrent-uploads: it updates the max concurrent uploads for each push.
 default-runtime: it updates the runtime to be used if none is specified at container creation. It
defaults to “default”, which is the runtime shipped with the official docker packages.
 runtimes: it updates the list of available OCI runtimes that can be used to run containers.
 authorization-plugin: it specifies the authorization plugins to use.
 allow-nondistributable-artifacts: Replaces the set of registries to which the daemon will
push nondistributable artifacts with a new set of registries.
 insecure-registries: it replaces the daemon insecure registries with a new set of insecure
registries. If some existing insecure registries in the daemon’s configuration are not in the newly
reloaded insecure registries, those existing ones are removed from the daemon’s config.
 registry-mirrors: it replaces the daemon registry mirrors with a new set of registry mirrors.
If some existing registry mirrors in daemon’s configuration are not in newly reloaded registry
mirrors, these existing ones will be removed from daemon’s config.
 shutdown-timeout: it replaces the daemon’s existing configuration timeout with a new
timeout for shutting down all containers.
 features: it explicitly enables or disables specific features.
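The reload flow can be sketched as follows; the file path is illustrative (the real default is /etc/docker/daemon.json), and sending the signal to a real daemon requires root:

```shell
# Write a config containing only reloadable options (illustrative values).
conf=/tmp/daemon.json
printf '%s\n' '{ "debug": true, "max-concurrent-downloads": 6 }' > "$conf"
echo "wrote $conf"

# Ask the running daemon to re-read it without restarting
# (uncomment on a real host; needs root):
# kill -SIGHUP "$(cat /var/run/docker.pid)"
```

If the new values conflict with flags the daemon was started with, the reload is rejected but the daemon keeps running, as described above.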

Updating and reloading the cluster configurations such as --cluster-store, --cluster-advertise
and --cluster-store-opts will take effect only if these configurations were not previously
configured. If --cluster-store has been provided in flags and cluster-advertise not, cluster-
advertise can be added in the configuration file without being accompanied by --cluster-store.
Configuration reload logs a warning message if it detects a change in previously configured
cluster configurations.

Run multiple daemons


Note: Running multiple daemons on a single host is considered “experimental”. The user should
be aware of unsolved problems, and this solution may not work properly in some cases. Solutions are
currently under development and will be delivered in the near future.

This section describes how to run multiple Docker daemons on a single host. To run multiple
daemons, you must configure each daemon so that it does not conflict with other daemons on the
same host. You can set these options either by providing them as flags, or by using a daemon
configuration file.

The following daemon options must be configured for each daemon:

-b, --bridge=                           Attach containers to a network bridge
--exec-root=/var/run/docker             Root of the Docker execdriver
--data-root=/var/lib/docker             Root of persisted Docker data
-p, --pidfile=/var/run/docker.pid       Path to use for daemon PID file
-H, --host=[]                           Daemon socket(s) to connect to
--iptables=true                         Enable addition of iptables rules
--config-file=/etc/docker/daemon.json   Daemon configuration file
--tlscacert="~/.docker/ca.pem"          Trust certs signed only by this CA
--tlscert="~/.docker/cert.pem"          Path to TLS certificate file
--tlskey="~/.docker/key.pem"            Path to TLS key file

When your daemons use different values for these flags, you can run them on the same host without
any problems. It is very important to properly understand the meaning of those options and to use
them correctly.

 The -b, --bridge= flag defaults to the docker0 bridge network, which is created automatically
when you install Docker. If you are not using the default, you must create and configure the
bridge manually, or simply set it to ‘none’: --bridge=none
 --exec-root is the path where the container state is stored. The default value
is /var/run/docker. Specify the path for your running daemon here.
 --data-root is the path where persisted data such as images, volumes, and cluster state are
stored. The default value is /var/lib/docker. To avoid any conflict with other daemons, set
this parameter separately for each daemon.
 -p, --pidfile=/var/run/docker.pid is the path where the process ID of the daemon is
stored. Specify the path for your pid file here.
 --host=[] specifies where the Docker daemon will listen for client connections. If
unspecified, it defaults to /var/run/docker.sock.
 --iptables=false prevents the Docker daemon from adding iptables rules. If multiple
daemons manage iptables rules, they may overwrite rules set by another daemon. Be aware
that disabling this option requires you to manually add iptables rules to expose container
ports. If you prevent Docker from adding iptables rules, Docker will also not add IP
masquerading rules, even if you set --ip-masq to true. Without IP masquerading rules,
Docker containers will not be able to connect to external hosts or the internet when using a
network other than the default bridge.
 --config-file=/etc/docker/daemon.json is the path where the configuration file is stored. You
can use it instead of daemon flags. Specify the path for each daemon.
 --tls*: the Docker daemon supports --tlsverify mode, which enforces encrypted and
authenticated remote connections. The --tls* options enable use of specific certificates for
individual daemons.

Example script for a separate “bootstrap” instance of the Docker daemon without network:

$ sudo dockerd \
-H unix:///var/run/docker-bootstrap.sock \
-p /var/run/docker-bootstrap.pid \
--iptables=false \
--ip-masq=false \
--bridge=none \
--data-root=/var/lib/docker-bootstrap \
--exec-root=/var/run/docker-bootstrap
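Equivalently, the bootstrap instance above can be described with its own configuration file; a sketch, assuming it is started with dockerd --config-file /etc/docker/daemon-bootstrap.json (the file path is illustrative):

```json
{
  "hosts": ["unix:///var/run/docker-bootstrap.sock"],
  "pidfile": "/var/run/docker-bootstrap.pid",
  "iptables": false,
  "ip-masq": false,
  "bridge": "none",
  "data-root": "/var/lib/docker-bootstrap",
  "exec-root": "/var/run/docker-bootstrap"
}
```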

Docker Machine command-line reference
Estimated reading time: 1 minute

 active
 config
 create
 env
 help
 inspect
 ip
 kill
 ls
 mount
 provision
 regenerate-certs
 restart
 rm
 scp
 ssh
 start
 status
 stop
 upgrade
 url

Machine command-line completion


Estimated reading time: 1 minute

Docker Machine comes with command completion for the bash and zsh shells.

Installing Command Completion


Bash
Make sure bash completion is installed. If you are using a current version of Linux in a non-minimal
installation, bash completion should be available.

On a Mac, install with brew install bash-completion.


Place the completion script in /etc/bash_completion.d/ as follows:

 On a Mac:

  sudo curl -L https://raw.githubusercontent.com/docker/machine/v0.16.0/contrib/completion/bash/docker-machine.bash -o `brew --prefix`/etc/bash_completion.d/docker-machine

 On a standard Linux installation:

  sudo curl -L https://raw.githubusercontent.com/docker/machine/v0.16.0/contrib/completion/bash/docker-machine.bash -o /etc/bash_completion.d/docker-machine

Completion is available upon next login.

Zsh
Place the completion script in a completion directory within your ZSH configuration, such
as ~/.zsh/completion/.
mkdir -p ~/.zsh/completion
curl -L https://raw.githubusercontent.com/docker/machine/v0.16.0/contrib/completion/zsh/_docker-machine > ~/.zsh/completion/_docker-machine
Include the directory in your $fpath by adding a line like the following to the ~/.zshrc configuration file.
fpath=(~/.zsh/completion $fpath)

Make sure compinit is loaded, or load it by adding the following to ~/.zshrc:


autoload -Uz compinit && compinit -i

Then reload your shell:

exec $SHELL -l

Available completions
Depending on what you typed on the command line so far, it completes:

 commands and their options
 container IDs and names
 image repositories and image tags
 file paths

docker-machine active
Estimated reading time: 1 minute

See which machine is “active” (a machine is considered active if the DOCKER_HOST environment
variable points to it).
$ docker-machine ls

NAME      ACTIVE   DRIVER         STATE     URL
dev       -        virtualbox     Running   tcp://192.168.99.103:2376
staging   *        digitalocean   Running   tcp://203.0.113.81:2376

$ echo $DOCKER_HOST
tcp://203.0.113.81:2376

$ docker-machine active
staging
docker-machine config
Estimated reading time: 1 minute

Usage: docker-machine config [OPTIONS] [arg...]

Print the connection config for machine

Description:
Argument is a machine name.

Options:

--swarm Display the Swarm config instead of the Docker daemon

For example:

$ docker-machine config dev \
--tlsverify \
--tlscacert="/Users/ehazlett/.docker/machines/dev/ca.pem" \
--tlscert="/Users/ehazlett/.docker/machines/dev/cert.pem" \
--tlskey="/Users/ehazlett/.docker/machines/dev/key.pem" \
-H tcp://192.168.99.103:2376

docker-machine create
Estimated reading time: 9 minutes

Create a machine. Requires the --driver flag to indicate which provider (VirtualBox, DigitalOcean,
AWS, etc.) the machine should be created on, and an argument to indicate the name of the created
machine.
Looking for the full list of available drivers?

For a full list of drivers that work with docker-machine create and information on how to use them,
see Machine drivers.
Example
Here is an example of using the virtualbox driver to create a machine called dev.
$ docker-machine create --driver virtualbox dev
Creating CA: /home/username/.docker/machine/certs/ca.pem
Creating client certificate: /home/username/.docker/machine/certs/cert.pem
Image cache does not exist, creating it at /home/username/.docker/machine/cache...
No default boot2docker iso found locally, downloading the latest release...
Downloading https://github.com/boot2docker/boot2docker/releases/download/v1.6.2/boot2docker.iso to /home/username/.docker/machine/cache/boot2docker.iso...
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env dev

Accessing driver-specific flags in the help text


The docker-machine create command has some flags which apply to all drivers. These largely
control aspects of Machine’s provisioning process (including the creation of Docker Swarm
containers) that the user may wish to customize.
$ docker-machine create
Docker Machine Version: 0.5.0 (45e3688)
Usage: docker-machine create [OPTIONS] [arg...]

Create a machine.

Run 'docker-machine create --driver name' to include the create flags for that driver
in the help text.

Options:

--driver, -d "none"
Driver to create machine with.
--engine-install-url "https://get.docker.com"
Custom URL to use for engine installation [$MACHINE_DOCKER_INSTALL_URL]
--engine-opt [--engine-opt option --engine-opt option]
Specify arbitrary flags to include with the created engine in the form flag=value
--engine-insecure-registry [--engine-insecure-registry option --engine-insecure-
registry option] Specify insecure registries to allow with the created engine
--engine-registry-mirror [--engine-registry-mirror option --engine-registry-mirror
option] Specify registry mirrors to use [$ENGINE_REGISTRY_MIRROR]
--engine-label [--engine-label option --engine-label option]
Specify labels for the created engine
--engine-storage-driver
Specify a storage driver to use with the engine
--engine-env [--engine-env option --engine-env option]
Specify environment variables to set in the engine
--swarm
Configure Machine with Swarm
--swarm-image "swarm:latest"
Specify Docker image to use for Swarm [$MACHINE_SWARM_IMAGE]
--swarm-master
Configure Machine to be a Swarm master
--swarm-discovery
Discovery service to use with Swarm
--swarm-strategy "spread"
Define a default scheduling strategy for Swarm
--swarm-opt [--swarm-opt option --swarm-opt option]
Define arbitrary flags for swarm
--swarm-host "tcp://0.0.0.0:3376"
ip/socket to listen on for Swarm master
--swarm-addr
addr to advertise for Swarm (default: detect and use the machine IP)
--swarm-experimental
Enable Swarm experimental features

Additionally, drivers can specify flags that Machine can accept as part of their plugin code. These
allow users to customize the provider-specific parameters of the created machine, such as size (--
amazonec2-instance-type m1.medium), geographical region (--amazonec2-region us-west-1), and so
on.
To see the provider-specific flags, simply pass a value for --driver when invoking the create help
text.
$ docker-machine create --driver virtualbox --help
Usage: docker-machine create [OPTIONS] [arg...]

Create a machine.

Run 'docker-machine create --driver name' to include the create flags for that driver
in the help text.

Options:

--driver, -d "none"
Driver to create machine with.
--engine-env [--engine-env option --engine-env option]
Specify environment variables to set in the engine
--engine-insecure-registry [--engine-insecure-registry option --engine-insecure-
registry option] Specify insecure registries to allow with the created engine
--engine-install-url "https://get.docker.com"
Custom URL to use for engine installation [$MACHINE_DOCKER_INSTALL_URL]
--engine-label [--engine-label option --engine-label option]
Specify labels for the created engine
--engine-opt [--engine-opt option --engine-opt option]
Specify arbitrary flags to include with the created engine in the form flag=value
--engine-registry-mirror [--engine-registry-mirror option --engine-registry-mirror
option] Specify registry mirrors to use [$ENGINE_REGISTRY_MIRROR]
--engine-storage-driver
Specify a storage driver to use with the engine
--swarm
Configure Machine with Swarm
--swarm-addr
addr to advertise for Swarm (default: detect and use the machine IP)
--swarm-discovery
Discovery service to use with Swarm
--swarm-experimental
Enable Swarm experimental features
--swarm-host "tcp://0.0.0.0:3376"
ip/socket to listen on for Swarm master
--swarm-image "swarm:latest"
Specify Docker image to use for Swarm [$MACHINE_SWARM_IMAGE]
--swarm-master
Configure Machine to be a Swarm master
--swarm-opt [--swarm-opt option --swarm-opt option]
Define arbitrary flags for swarm
--swarm-strategy "spread"
Define a default scheduling strategy for Swarm
--virtualbox-boot2docker-url
The URL of the boot2docker image. Defaults to the latest available version
[$VIRTUALBOX_BOOT2DOCKER_URL]
--virtualbox-cpu-count "1"
number of CPUs for the machine (-1 to use the number of CPUs available)
[$VIRTUALBOX_CPU_COUNT]
--virtualbox-disk-size "20000"
Size of disk for host in MB [$VIRTUALBOX_DISK_SIZE]
--virtualbox-host-dns-resolver
Use the host DNS resolver [$VIRTUALBOX_HOST_DNS_RESOLVER]
--virtualbox-dns-proxy
Proxy all DNS requests to the host [$VIRTUALBOX_DNS_PROXY]
--virtualbox-hostonly-cidr "192.168.99.1/24"
Specify the Host Only CIDR [$VIRTUALBOX_HOSTONLY_CIDR]
--virtualbox-hostonly-nicpromisc "deny"
Specify the Host Only Network Adapter Promiscuous Mode
[$VIRTUALBOX_HOSTONLY_NIC_PROMISC]
--virtualbox-hostonly-nictype "82540EM"
Specify the Host Only Network Adapter Type [$VIRTUALBOX_HOSTONLY_NIC_TYPE]
--virtualbox-import-boot2docker-vm
The name of a Boot2Docker VM to import
--virtualbox-memory "1024"
Size of memory for host in MB [$VIRTUALBOX_MEMORY_SIZE]
--virtualbox-no-share
Disable the mount of your home directory

You may notice that some flags specify environment variables that they are associated with as well
(located to the far left hand side of the row). If these environment variables are set when docker-
machine create is invoked, Docker Machine uses them for the default value of the flag.

Specifying configuration options for the created Docker engine
As part of the process of creation, Docker Machine installs Docker and configures it with some
sensible defaults. For instance, it allows connection from the outside world over TCP with TLS-
based encryption and defaults to AUFS as the storage driver when available.
There are several cases where the user might want to set options for the created Docker engine
(also known as the Docker daemon) themselves. For example, they may want to allow connection to
a registry that they are running themselves using the --insecure-registry flag for the daemon.
Docker Machine supports the configuration of such options for the created engines via
the create command flags which begin with --engine.
Docker Machine only sets the configured parameters on the daemon and does not set up any of the
“dependencies” for you. For instance, if you specify that the created daemon should use btrfs as a
storage driver, you still must ensure that the proper dependencies are installed, the BTRFS
filesystem has been created, and so on.

The following is an example usage:

$ docker-machine create -d virtualbox \


--engine-label foo=bar \
--engine-label spam=eggs \
--engine-storage-driver overlay \
--engine-insecure-registry registry.myco.com \
foobarmachine

This creates a virtual machine running locally in VirtualBox which uses the overlay storage backend,
has the key-value pairs foo=bar and spam=eggs as labels on the engine, and allows pushing / pulling
from the insecure registry located at registry.myco.com. You can verify much of this by inspecting
the output of docker info:
$ eval $(docker-machine env foobarmachine)
$ docker info
Containers: 0
Images: 0
Storage Driver: overlay
...
Name: foobarmachine
...
Labels:
foo=bar
spam=eggs
provider=virtualbox
The supported flags are as follows:

 --engine-insecure-registry: Specify insecure registries to allow with the created engine


 --engine-registry-mirror: Specify registry mirrors to use
 --engine-label: Specify labels for the created engine
 --engine-storage-driver: Specify a storage driver to use with the engine

If the engine supports specifying the flag multiple times (such as with --label), then so does Docker
Machine.
In addition to this subset of daemon flags which are directly supported, Docker Machine also
supports an additional flag, --engine-opt, which can be used to specify arbitrary daemon options
with the syntax --engine-opt flagname=value. For example, to specify that the daemon should
use 8.8.8.8 as the DNS server for all containers, and always use the syslog log driver you could run
the following create command:
$ docker-machine create -d virtualbox \
--engine-opt dns=8.8.8.8 \
--engine-opt log-driver=syslog \
gdns

Additionally, Docker Machine supports a flag, --engine-env, which can be used to specify arbitrary
environment variables to be set within the engine with the syntax --engine-env name=value. For
example, to specify that the engine should use example.com as the proxy server, you could run the
following create command:
$ docker-machine create -d virtualbox \
--engine-env HTTP_PROXY=http://example.com:8080 \
--engine-env HTTPS_PROXY=https://example.com:8080 \
--engine-env NO_PROXY=example2.com \
proxbox

Specifying Docker Swarm options for the created machine
In addition to configuring Docker Engine options as listed above, you can use Machine to specify
how the created swarm manager is configured. There is a --swarm-strategy flag, which you can use
to specify the scheduling strategy which Docker Swarm should use (Machine defaults to
the spread strategy). There is also a general purpose --swarm-opt option which works similarly to the
aforementioned --engine-opt option, except that it specifies options for the swarm manage command
(used to boot a master node) instead of the base command. You can use this to configure features
that power users might be interested in, such as configuring the heartbeat interval or Swarm’s
willingness to over-commit resources. There is also the --swarm-experimental flag, that allows you
to access experimental features in Docker Swarm.

If you’re not sure how to configure these options, it is best to not specify configuration at all. Docker
Machine chooses sensible defaults for you and you don’t need to worry about it.

Example create:

$ docker-machine create -d virtualbox \
--swarm \
--swarm-master \
--swarm-discovery token://<token> \
--swarm-strategy binpack \
--swarm-opt heartbeat=5s \
upbeat

This sets the swarm scheduling strategy to “binpack” (pack in containers as tightly as possible per
host instead of spreading them out), and the “heartbeat” interval to 5 seconds.

Pre-create check
Many drivers require a certain set of conditions to be in place before machines can be created. For
instance, VirtualBox needs to be installed before the virtualbox driver can be used. For this reason,
Docker Machine has a “pre-create check” which is specified at the driver level.

If this pre-create check succeeds, Docker Machine proceeds with the creation as normal. If the pre-
create check fails, the Docker Machine process exits with status code 3 to indicate that the source of
the non-zero exit was the pre-create check failing.

docker-machine env
Estimated reading time: 3 minutes

Set environment variables to dictate that docker should run a command against a particular
machine.
$ docker-machine env --help

Usage: docker-machine env [OPTIONS] [arg...]

Display the commands to set up the environment for the Docker client

Description:
Argument is a machine name.

Options:

--swarm       Display the Swarm config instead of the Docker daemon
--shell       Force environment to be configured for a specified shell: [fish, cmd, powershell, tcsh], default is sh/bash
--unset, -u   Unset variables instead of setting them
--no-proxy    Add machine IP to NO_PROXY environment variable

docker-machine env machinename prints out export commands which can be run in a subshell.
Running docker-machine env -u prints unset commands which reverse this effect.
$ env | grep DOCKER
$ eval "$(docker-machine env dev)"
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2376
DOCKER_CERT_PATH=/Users/nathanleclaire/.docker/machines/.client
DOCKER_TLS_VERIFY=1
DOCKER_MACHINE_NAME=dev
$ # If you run a docker command, now it runs against that host.
$ eval "$(docker-machine env -u)"
$ env | grep DOCKER
$ # The environment variables have been unset.

The output described above is intended for the shells bash and zsh (if you’re not sure which shell
you’re using, there’s a very good possibility that it’s bash). However, these are not the only shells
which Docker Machine supports. Docker Machine detects the shells available in your environment
and lists them. Docker supports bash, cmd, powershell, and emacs.
If you are using fish and the SHELL environment variable is correctly set to the path where fish is
located, docker-machine env name prints out the values in the format which fish expects:
set -x DOCKER_TLS_VERIFY 1;
set -x DOCKER_CERT_PATH "/Users/nathanleclaire/.docker/machine/machines/overlay";
set -x DOCKER_HOST tcp://192.168.99.102:2376;
set -x DOCKER_MACHINE_NAME overlay
# Run this command to configure your shell:
# eval "$(docker-machine env overlay)"

If you are on Windows and using either PowerShell or cmd.exe, Docker Machine should detect
your shell automatically when you run docker-machine env. If the automatic detection does not work,
you can still override it using the --shell flag for docker-machine env.

For PowerShell:

$ docker-machine.exe env --shell powershell dev
$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://192.168.99.101:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\captain\.docker\machine\machines\dev"
$Env:DOCKER_MACHINE_NAME = "dev"
# Run this command to configure your shell:
# docker-machine.exe env --shell=powershell dev | Invoke-Expression

For cmd.exe:
$ docker-machine.exe env --shell cmd dev
set DOCKER_TLS_VERIFY=1
set DOCKER_HOST=tcp://192.168.99.101:2376
set DOCKER_CERT_PATH=C:\Users\captain\.docker\machine\machines\dev
set DOCKER_MACHINE_NAME=dev
# Run this command to configure your shell: copy and paste the above values into your
command prompt

Tip: See also how to unset environment variables in the current shell.
Excluding the created machine from proxies
The env command supports a --no-proxy flag which ensures that the created machine’s IP address
is added to the NO_PROXY/no_proxy environment variable.
This is useful when using docker-machine with a local VM provider, such
as virtualbox or vmwarefusion, in network environments where an HTTP proxy is required for
internet access.
$ docker-machine env --no-proxy default
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.104:2376"
export DOCKER_CERT_PATH="/Users/databus23/.docker/machine/certs"
export DOCKER_MACHINE_NAME="default"
export NO_PROXY="192.168.99.104"
# Run this command to configure your shell:
# eval "$(docker-machine env default)"

You may also want to visit the documentation on setting HTTP_PROXY for the created daemon using
the --engine-env flag for docker-machine create.

docker-machine help
Estimated reading time: 1 minute

Usage: docker-machine help [arg...]

Shows a list of commands or help for one command

Usage: docker-machine help subcommand

For example:

$ docker-machine help config


Usage: docker-machine config [OPTIONS] [arg...]

Print the connection config for machine


Description:
Argument is a machine name.

Options:

--swarm Display the Swarm config instead of the Docker daemon

docker-machine inspect

Usage: docker-machine inspect [OPTIONS] [arg...]

Inspect information about a machine

Description:
Argument is a machine name.

Options:
--format, -f Format the output using the given Go template.

By default, this renders information about a machine as JSON. If a format is specified, the given
template is executed for each result.

Go’s text/template package describes all the details of the format.

In addition to the text/template syntax, there are some additional functions, json and prettyjson,
which can be used to format the output as JSON (documented below).

Examples
List all the details of a machine:

This is the default usage of inspect.


$ docker-machine inspect dev

{
"DriverName": "virtualbox",
"Driver": {
"MachineName": "docker-host-
128be8d287b2028316c0ad5714b90bcfc11f998056f2f790f7c1f43f3d1e6eda",
"SSHPort": 55834,
"Memory": 1024,
"DiskSize": 20000,
"Boot2DockerURL": "",
"IPAddress": "192.168.5.99"
},
...
}

Get a machine’s IP address:

For the most part, you can pick out any field from the JSON in a fairly straightforward manner.

$ docker-machine inspect --format='{{.Driver.IPAddress}}' dev


192.168.5.99

Formatting details:

If you want a subset of information formatted as JSON, you can use the json function in the
template.
$ docker-machine inspect --format='{{json .Driver}}' dev-fusion
{"Boot2DockerURL":"","CPUS":8,"CPUs":8,"CaCertPath":"/Users/hairyhenderson/.docker/ma
chine/certs/ca.pem","DiskSize":20000,"IPAddress":"172.16.62.129","ISO":"/Users/hairyh
enderson/.docker/machine/machines/dev-fusion/boot2docker-1.5.0-
GH747.iso","MachineName":"dev-
fusion","Memory":1024,"PrivateKeyPath":"/Users/hairyhenderson/.docker/machine/certs/c
a-
key.pem","SSHPort":22,"SSHUser":"docker","SwarmDiscovery":"","SwarmHost":"tcp://0.0.0
.0:3376","SwarmMaster":false}

While this is usable, it’s not very human-readable. For this reason, there is prettyjson:
$ docker-machine inspect --format='{{prettyjson .Driver}}' dev-fusion
{
"Boot2DockerURL": "",
"CPUS": 8,
"CPUs": 8,
"CaCertPath": "/Users/hairyhenderson/.docker/machine/certs/ca.pem",
"DiskSize": 20000,
"IPAddress": "172.16.62.129",
"ISO": "/Users/hairyhenderson/.docker/machine/machines/dev-fusion/boot2docker-
1.5.0-GH747.iso",
"MachineName": "dev-fusion",
"Memory": 1024,
"PrivateKeyPath": "/Users/hairyhenderson/.docker/machine/certs/ca-key.pem",
"SSHPort": 22,
"SSHUser": "docker",
"SwarmDiscovery": "",
"SwarmHost": "tcp://0.0.0.0:3376",
"SwarmMaster": false
}

docker-machine ip

Get the IP address of one or more machines.

$ docker-machine ip dev
192.168.99.104

$ docker-machine ip dev dev2


192.168.99.104
192.168.99.105
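Because ip prints only the bare address, it composes well with shell command substitution. A small sketch, assuming a machine named dev with a container publishing port 8080 on it:

$ curl http://$(docker-machine ip dev):8080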

docker-machine kill

Usage: docker-machine kill [arg...]


Kill (abruptly force stop) a machine

Description:
Argument(s) are one or more machine names.

For example:

$ docker-machine ls
NAME ACTIVE DRIVER STATE URL
dev * virtualbox Running tcp://192.168.99.104:2376
$ docker-machine kill dev
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL
dev * virtualbox Stopped

docker-machine ls

Usage: docker-machine ls [OPTIONS] [arg...]

List machines

Options:

--quiet, -q Enable quiet mode


--filter [--filter option --filter option]   Filter output based on conditions provided
--timeout, -t "10"                           Timeout in seconds, default to 10s
--format, -f                                 Pretty-print machines using a Go template

Timeout
The ls command tries to reach each host in parallel. If a given host does not answer in less than 10
seconds, the ls command states that this host is in Timeout state. In some circumstances (poor
connection, high load, or while troubleshooting), you may want to increase or decrease this value.
You can use the -t flag for this purpose with a numerical value in seconds.

Example
$ docker-machine ls -t 12
NAME ACTIVE DRIVER STATE URL SWARM DOCKER
ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 v1.9.1

Filtering
The filtering flag (--filter) format is a key=value pair. If there is more than one filter, then pass
multiple flags. For example: --filter "foo=bar" --filter "bif=baz"

The currently supported filters are:

 driver (driver name)


 swarm (swarm master’s name)
 state (Running|Paused|Saved|Stopped|Stopping|Starting|Error)
 name (Machine name returned by driver, supports golang style regular expressions)
 label (Machine created with --engine-label option, can be filtered
with label=<key>[=<value>])

Examples
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER
ERRORS
dev - virtualbox Stopped
foo0 - virtualbox Running tcp://192.168.99.105:2376 v1.9.1
foo1 - virtualbox Running tcp://192.168.99.106:2376 v1.9.1
foo2 * virtualbox Running tcp://192.168.99.107:2376 v1.9.1

$ docker-machine ls --filter name=foo0


NAME ACTIVE DRIVER STATE URL SWARM DOCKER
ERRORS
foo0 - virtualbox Running tcp://192.168.99.105:2376 v1.9.1

$ docker-machine ls --filter driver=virtualbox --filter state=Stopped


NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
dev - virtualbox Stopped v1.9.1

$ docker-machine ls --filter label=com.class.app=foo1 --filter label=com.class.app=foo2
NAME ACTIVE DRIVER STATE URL SWARM DOCKER
ERRORS
foo1 - virtualbox Running tcp://192.168.99.105:2376 v1.9.1
foo2 * virtualbox Running tcp://192.168.99.107:2376 v1.9.1

Formatting
The formatting option (--format) pretty-prints machines using a Go template.

Valid placeholders for the Go template are listed below:

Placeholder Description

.Name Machine name

.Active Is the machine active?

.ActiveHost Is the machine an active non-swarm host?

.ActiveSwarm Is the machine an active swarm master?

.DriverName Driver name

.State Machine state (running, stopped...)

.URL Machine URL

.Swarm Machine swarm name

.Error Machine errors

.DockerVersion Docker Daemon version


Placeholder Description

.ResponseTime Time taken by the host to respond

When using the --format option, the ls command either outputs the data exactly as the template
declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the Name and Driverentries
separated by a colon for all running machines:
$ docker-machine ls --format "{{.Name}}: {{.DriverName}}"
default: virtualbox
ec2: amazonec2

To list all machine names with their driver in a table format you can use:

$ docker-machine ls --format "table {{.Name}} {{.DriverName}}"


NAME DRIVER
default virtualbox
ec2 amazonec2
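Placeholders can be combined freely. For example, a sketch that adds the state column to the table (output values are illustrative):

$ docker-machine ls --format "table {{.Name}} {{.State}} {{.DriverName}}"
NAME      STATE     DRIVER
default   Running   virtualbox
ec2       Running   amazonec2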

docker-machine mount

Mount directories from a machine to your local host, using sshfs.


The notation is machinename:/path/to/dir for the argument; you can also supply an alternative
mount point (default is the same dir path).

Example
Consider the following example:

$ mkdir foo
$ docker-machine ssh dev mkdir foo
$ docker-machine mount dev:/home/docker/foo foo
$ touch foo/bar
$ docker-machine ssh dev ls foo
bar
Now you can use the directory on the machine, for example for mounting into containers. Any changes made in the local directory are reflected in the machine too.

$ eval $(docker-machine env dev)


$ docker run -v /home/docker/foo:/tmp/foo busybox ls /tmp/foo
bar
$ touch foo/baz
$ docker run -v /home/docker/foo:/tmp/foo busybox ls /tmp/foo
bar
baz

The files are actually being transferred using sftp (over an ssh connection), so this program (“sftp”) needs to be present on the machine - but it usually is.
To unmount the directory again, use the same arguments with the -u flag added. You can also call fusermount -u directly.
$ docker-machine mount -u dev:/home/docker/foo foo
$ rmdir foo

Files are actually being stored on the machine, not on the host, so make sure to make a copy of any files you want to keep before removing the machine!

docker-machine provision

Re-run provisioning on a created machine.

Sometimes it may be helpful to re-run Machine’s provisioning process on a created machine.


Reasons for doing so may include a failure during the original provisioning process, or a drift from
the desired system state (including the originally specified Swarm or Engine configuration).

Usage is docker-machine provision [name]. Multiple names may be specified.


$ docker-machine provision foo bar

Copying certs to the local machine directory...


Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

The Machine provisioning process will:

1. Set the hostname on the instance to the name Machine addresses it by, such as default.
2. Install Docker if it is not present already.
3. Generate a set of certificates (usually with the default, self-signed CA) and configure the
daemon to accept connections over TLS.
4. Copy the generated certificates to the server and local config directory.
5. Configure the Docker Engine according to the options specified at create time.
6. Configure and activate Swarm if applicable.

docker-machine regenerate-certs

Usage: docker-machine regenerate-certs [OPTIONS] [arg...]

Regenerate TLS Certificates for a machine

Description:
Argument(s) are one or more machine names.

Options:

--force, -f Force rebuild and do not prompt


--client-certs Also regenerate client certificates and CA.

Regenerate TLS certificates and update the machine with new certs.

For example:

$ docker-machine regenerate-certs dev

Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y


Regenerating TLS certificates

If your certificates have expired, you’ll need to regenerate the client certs as well using the --client-certs option:

$ docker-machine regenerate-certs --client-certs dev


Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Regenerating local certificates
...

docker-machine restart

Usage: docker-machine restart [arg...]

Restart a machine

Description:
Argument(s) are one or more machine names.

Restart a machine. Oftentimes this is equivalent to docker-machine stop; docker-machine start, but some cloud drivers try to implement a clever restart which keeps the same IP address.

$ docker-machine restart dev
Waiting for VM to start...

docker-machine rm

Remove a machine. This removes the local reference and deletes it on the cloud provider or
virtualization management platform.

$ docker-machine rm --help

Usage: docker-machine rm [OPTIONS] [arg...]

Remove a machine

Description:
Argument(s) are one or more machine names.
Options:

--force, -f   Remove local configuration even if machine cannot be removed,
              also implies an automatic yes (`-y`)
-y            Assumes automatic yes to proceed with remove, without prompting
              further user confirmation

Examples
$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
bar - virtualbox Running tcp://192.168.99.101:2376 v1.9.1
baz - virtualbox Running tcp://192.168.99.103:2376 v1.9.1
foo - virtualbox Running tcp://192.168.99.100:2376 v1.9.1
qix - virtualbox Running tcp://192.168.99.102:2376 v1.9.1

$ docker-machine rm baz
About to remove baz
Are you sure? (y/n): y
Successfully removed baz

$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
bar - virtualbox Running tcp://192.168.99.101:2376 v1.9.1
foo - virtualbox Running tcp://192.168.99.100:2376 v1.9.1
qix - virtualbox Running tcp://192.168.99.102:2376 v1.9.1

$ docker-machine rm bar qix


About to remove bar, qix
Are you sure? (y/n): y
Successfully removed bar
Successfully removed qix

$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
foo - virtualbox Running tcp://192.168.99.100:2376 v1.9.1

$ docker-machine rm -y foo
About to remove foo
Successfully removed foo
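Because ls supports a --quiet mode that prints only machine names, rm composes with it into a one-liner that removes every machine at once. Use with care; this sketch assumes you really do want them all gone:

$ docker-machine rm -y $(docker-machine ls -q)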

docker-machine scp

Copy files from your local host to a machine, from machine to machine, or from a machine to your
local host using scp.
The notation is machinename:/path/to/files for the arguments; in the host machine’s case, you
don’t need to specify the name, just the path.

Example
Consider the following example:

$ cat foo.txt
cat: foo.txt: No such file or directory
$ docker-machine ssh dev pwd
/home/docker
$ docker-machine ssh dev 'echo A file created remotely! >foo.txt'
$ docker-machine scp dev:/home/docker/foo.txt .
foo.txt 100% 28
0.0KB/s 00:00
$ cat foo.txt
A file created remotely!

Just like how scp has a -r flag for copying files recursively, docker-machine has a -r flag for this
feature.
In the case of transferring files from machine to machine, they go through the local host’s filesystem
first (using scp’s -3 flag).
When transferring large files or updating directories with lots of files, you can use the -d flag, which uses rsync to transfer deltas instead of transferring all of the files.

When transferring directories and not just files, avoid rsync surprises by using trailing slashes on
both the source and destination. For example:

$ mkdir -p bar
$ touch bar/baz
$ docker-machine scp -r -d bar/ dev:/home/docker/bar/
$ docker-machine ssh dev ls bar
baz

Specifying file paths for remote deployments


When you copy files to a remote server with docker-machine scp for app deployment, make
sure docker-compose and the Docker daemon know how to find them. Avoid using relative paths, but
specify absolute paths in Compose files. It’s best to specify absolute paths both for the location on
the Docker daemon and within the container.
For example, imagine you want to transfer your local directory /Users/<username>/webapp to a
remote machine and bind mount it into a container on the remote host. If the remote user is ubuntu,
use a command like this:
$ docker-machine scp -r /Users/<username>/webapp MACHINE-NAME:/home/ubuntu/webapp

Then write a docker-compose file that bind mounts it in:

version: "3.1"
services:
webapp:
image: alpine
command: cat /app/root.php
volumes:
- "/home/ubuntu/webapp:/app"

And we can try it out like so:

$ eval $(docker-machine env MACHINE-NAME)


$ docker-compose run webapp

docker-machine ssh

Log into or run a command on a machine using SSH.

To log in, run docker-machine ssh machinename:


$ docker-machine ssh dev
## .
## ## ## ==
## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.4.0, build master : 69cf398 - Fri Dec 12 01:39:42 UTC 2014
docker@boot2docker:~$ ls /
Users/ dev/ home/ lib/ mnt/ proc/ run/ sys/ usr/
bin/ etc/ init linuxrc opt/ root/ sbin/ tmp var/

You can also specify commands to run remotely by appending them directly to the docker-machine ssh command, much like the regular ssh program works:
$ docker-machine ssh dev free

total used free shared buffers


Mem: 1023556 183136 840420 0 30920
-/+ buffers: 152216 871340
Swap: 1212036 0 1212036

Commands with flags work as well:

$ docker-machine ssh dev df -h

Filesystem Size Used Available Use% Mounted on


rootfs 899.6M 85.9M 813.7M 10% /
tmpfs 899.6M 85.9M 813.7M 10% /
tmpfs 499.8M 0 499.8M 0% /dev/shm
/dev/sda1 18.2G 58.2M 17.2G 0% /mnt/sda1
cgroup 499.8M 0 499.8M 0% /sys/fs/cgroup
/dev/sda1 18.2G 58.2M 17.2G 0%
/mnt/sda1/var/lib/docker/aufs

If you are using the “external” SSH type as detailed in the next section, you can include additional
arguments to pass through to the ssh binary in the generated command (unless they conflict with
any of the default arguments for the command generated by Docker Machine). For instance, the
following command forwards port 8080 from the default machine to localhost on your host
computer:
$ docker-machine ssh default -L 8080:localhost:8080

Different types of SSH


When Docker Machine is invoked, it checks to see if you have the venerable ssh binary around
locally and attempts to use that for the SSH commands it needs to run, whether they are a part of an
operation such as creation or have been requested by the user directly. If it does not find an
external ssh binary locally, it defaults to using a native Go implementation from crypto/ssh. This is
useful in situations where you may not have access to traditional UNIX tools, such as if you are
using Docker Machine on Windows without having msysgit installed alongside of it.
In most situations, you do not need to worry about this implementation detail and Docker Machine
acts sensibly out of the box. However, if you deliberately want to use the Go native version, you can
do so with a global command line flag / environment variable like so:

$ docker-machine --native-ssh ssh dev

There are some variations in behavior between the two methods, so report any issues or
inconsistencies if you come across them.

docker-machine start

Usage: docker-machine start [arg...]

Start a machine

Description:
Argument(s) are one or more machine names.

For example:

$ docker-machine start dev


Starting VM...

docker-machine status

Usage: docker-machine status [arg...]

Get the status of a machine

Description:
Argument is a machine name.

For example:
$ docker-machine status dev
Running

docker-machine stop

Usage: docker-machine stop [arg...]

Gracefully Stop a machine

Description:
Argument(s) are one or more machine names.

For example:

$ docker-machine ls

NAME ACTIVE DRIVER STATE URL


dev * virtualbox Running tcp://192.168.99.104:2376

$ docker-machine stop dev


$ docker-machine ls

NAME ACTIVE DRIVER STATE URL


dev * virtualbox Stopped

docker-machine upgrade

Upgrade a machine to the latest version of Docker. How this upgrade happens depends on the
underlying distribution used on the created instance.

For example, if the machine uses Ubuntu as the underlying operating system, it runs a command
similar to sudo apt-get upgrade docker-engine, because Machine expects Ubuntu machines it
manages to use this package. As another example, if the machine uses boot2docker for its OS, this
command downloads the latest boot2docker ISO and replaces the machine’s existing ISO with the
latest.
$ docker-machine upgrade default

Stopping machine to do the upgrade...


Upgrading machine default...
Downloading latest boot2docker release to
/home/username/.docker/machine/cache/boot2docker.iso...
Starting machine back up...
Waiting for VM to start...

Note: If you are using a custom boot2docker ISO specified using --virtualbox-boot2docker-url or an equivalent flag, running an upgrade on that machine completely replaces the specified ISO with the latest “vanilla” boot2docker ISO available.

docker-machine url

Get the URL of a host

$ docker-machine url dev


tcp://192.168.99.109:2376

Docker Swarm
Docker Swarm overview
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual
Docker host. Because Docker Swarm serves the standard Docker API, any tool that already
communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
Supported tools include, but are not limited to, the following:

 Dokku
 Docker Compose
 Docker Machine
 Jenkins

And of course, the Docker client itself is also supported.

Like other Docker projects, Docker Swarm follows the “swap, plug, and play” principle. As initial
development settles, an API develops to enable pluggable backends. This means you can swap out
the scheduling backend Docker Swarm uses out-of-the-box with a backend you prefer. Swarm’s
swappable design provides a smooth out-of-box experience for most use cases, and allows large-
scale production deployments to swap for more powerful backends, like Mesos.

Understand Swarm cluster creation


The first step to creating a swarm cluster on your network is to pull the Docker Swarm image. Then,
using Docker, you configure the swarm manager and all the nodes to run Docker Swarm. This
method requires that you:

 open a TCP port on each node for communication with the swarm manager
 install Docker on each node
 create and manage TLS certificates to secure your cluster

As a starting point, the manual method is best suited for experienced administrators or programmers
contributing to Docker Swarm. The alternative is to use docker-machine to install a cluster.

Using Docker Machine, you can quickly install a Docker Swarm on cloud providers or inside your
own data center. If you have VirtualBox installed on your local machine, you can quickly build and
explore Docker Swarm in your local environment. This method automatically generates a certificate
to secure your cluster.

Using Docker Machine is the best method for users getting started with Swarm for the first time. To
try the recommended method of getting started, see Get Started with Docker Swarm.

If you are interested in manually installing or interested in contributing, see Build a swarm cluster for
production.
Discovery services
To dynamically configure and manage the services in your containers, you use a discovery backend
with Docker Swarm. For information on which backends are available, see the Discovery
service documentation.

Advanced scheduling
To learn more about advanced scheduling, see the strategies and filters documents.

Swarm API
The Docker Swarm API is compatible with the Docker remote API, and extends it with some new
endpoints.

Getting help
Docker Swarm is still in its infancy and under active development. If you need help, would like to
contribute, or simply want to talk about the project with like-minded individuals, we have a number of
open channels for communication.

 To report bugs or file feature requests, use the issue tracker on Github.

 To talk about the project with people in real time, join the #docker-swarm channel on IRC.

 To contribute code or documentation changes, submit a pull request on Github.

For more information and resources, visit the Getting Help project page.

Get Docker Swarm


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.
You can create a Docker Swarm cluster using the swarm executable image from a container or using
an executable swarm binary you install on your system. This page introduces the two methods and
discusses their pros and cons.

Create a cluster with an interactive container


You can use the Docker Swarm official image to create a cluster. The image is built by Docker and
updated regularly through an automated build. To use the image, you run it as a container via the
Engine docker run command. The image has multiple options and subcommands you can use to
create and manage a Swarm cluster.
The first time you use any image, Docker Engine checks to see if you already have the image in
your environment. By default Docker runs the swarm:latest version but you can also specify a tag
other than latest. If you have an image locally but a newer one exists on Docker Hub, Engine
downloads it.

Run the Swarm image from a container


1. Open a terminal on a host running Engine.

If you are using Mac or Windows, then you must make sure you have started a Docker
Engine host running and pointed your terminal environment to it with the Docker Machine
commands. If you aren’t sure, you can verify:

$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376            v1.9.1

This shows an environment running an Engine host on the default instance.


2. Use the swarm image to execute a command.

The easiest command is to get the help for the image. This command shows all the options
that are available with the image.

$ docker run swarm --help


Unable to find image 'swarm:latest' locally
latest: Pulling from library/swarm
d681c900c6e3: Pull complete
188de6f24f3f: Pull complete
90b2ffb8d338: Pull complete
237af4efea94: Pull complete
3b3fc6f62107: Pull complete
7e6c9135b308: Pull complete
986340ab62f0: Pull complete
a9975e2cc0a3: Pull complete
Digest: sha256:c21fd414b0488637b1f05f13a59b032a3f9da5d818d31da1a4ca98a84c0c781b
Status: Downloaded newer image for swarm:latest
Usage: swarm [OPTIONS] COMMAND [arg...]

A Docker-native clustering system

Version: 1.0.1 (744e3a3)

Options:
--debug debug mode [$DEBUG]
--log-level, -l "info" Log level (options: debug, info, warn, error,
fatal, panic)
--help, -h show help
--version, -v print the version

Commands:
create, c Create a cluster
list, l List nodes in a cluster
manage, m Manage a docker cluster
join, j join a docker cluster
help, h Shows a list of commands or help for one command

Run 'swarm COMMAND --help' for more information on a command.

In this example, the swarm image did not exist on the Engine host, so the Engine downloaded
it. After it downloaded, the image executed the help subcommand to display the help text.
After displaying the help, the swarm image exits and returns you to your terminal command
line.

3. List the running containers on your Engine host.

   $ docker ps
   CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

Swarm is no longer running. The swarm image exits after you issue it a command.

Why use the image?


Using a Swarm container has three key benefits over other methods:

 You don’t need to install a binary on the system to use the image.
 The single command docker run gets and runs the most recent version of the image every
time.
 The container isolates Swarm from your host environment. You don’t need to perform or
maintain shell paths and environments.

Running the Swarm image is the recommended way to create and manage your Swarm cluster. All
of Docker’s documentation and tutorials use this method.

Run a Swarm binary


Before you run a Swarm binary directly on a host operating system (OS), you compile the binary
from the source code or get a trusted copy from another location. Then you run the Swarm binary.

To compile Swarm from source code, refer to the instructions in CONTRIBUTING.md.

Why use the binary?


Using a Swarm binary this way has one key benefit over other methods: If you are a developer who
contributes to the Swarm project, you can test your code changes without “containerizing” the binary
before you run it.

Running a Swarm binary on the host OS has disadvantages:

 Compilation from source is a burden.
 The binary doesn’t have the benefits that Docker containers provide, such as isolation.
 Most Docker documentation and tutorials don’t show this method of running swarm.

Lastly, because the Swarm nodes don’t use Engine, you can’t use Docker-based software tools, such as the Docker Engine CLI, at the node level.

Install and create a Docker Swarm


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

You use Docker Swarm to host and schedule a cluster of Docker containers. This section introduces
you to Docker Swarm by teaching you how to create a swarm on your local machine using Docker
Machine and VirtualBox.

Prerequisites
Make sure your local system has VirtualBox installed. If you are using macOS or Windows and have
installed Docker, you should have VirtualBox already installed.

Using the instructions appropriate to your system architecture, install Docker Machine.

Create a Docker Swarm


Docker Machine gets hosts ready to run Docker containers. Each node in your Docker Swarm must
have access to Docker to pull images and run them in containers. Docker Machine manages all this
provisioning for your swarm.

Before you create a swarm with docker-machine, you associate each node with a discovery service.
This example uses the token discovery service hosted by Docker Hub (only for testing/dev, not for
production). This discovery service associates a token with instances of the Docker Daemon running
on each node. Other discovery service backends such as etcd, consul, and zookeeper are available.

1. List the machines on your system.

   $ docker-machine ls
   NAME        ACTIVE   DRIVER       STATE     URL                         SWARM
   docker-vm   *        virtualbox   Running   tcp://192.168.99.100:2376

   This example was run on a macOS system with Docker Toolbox installed. So, the docker-vm virtual machine is in the list.

2. Create a VirtualBox machine called local on your system.

   $ docker-machine create -d virtualbox local
   INFO[0000] Creating SSH key...
   INFO[0000] Creating VirtualBox VM...
   INFO[0005] Starting VirtualBox VM...
   INFO[0005] Waiting for VM to start...
   INFO[0050] "local" has been created and is now the active machine.
   INFO[0050] To point your Docker client at it, run this in your shell: eval "$(docker-machine env local)"

3. Load the local machine configuration into your shell.

   $ eval "$(docker-machine env local)"

4. Generate a discovery token using the Docker Swarm image.

The command below runs the swarm create command in a container. If you haven’t got
the swarm:latest image on your local machine, Docker pulls it for you.
$ docker run swarm create
Unable to find image 'swarm:latest' locally
latest: Pulling from swarm
de939d6ed512: Pull complete
79195899a8a4: Pull complete
79ad4f2cc8e0: Pull complete
0db1696be81b: Pull complete
ae3b6728155e: Pull complete
57ec2f5f3e06: Pull complete
73504b2882a3: Already exists
swarm:latest: The image you are pulling has been verified. Important: image
verification is a tech preview feature and should not be relied on to provide
security.
Digest: sha256:aaaf6c18b8be01a75099cc554b4fb372b8ec677ae81764dcdf85470279a61d6f
Status: Downloaded newer image for swarm:latest
fe0cc96a72cf04dba8c1c4aa79536ec3

The swarm create command returned the fe0cc96a72cf04dba8c1c4aa79536ec3 token.

Note: This command relies on Docker Swarm’s hosted discovery service. If this service is having issues, this command may fail. In this case, see information on using other types of discovery backends. Check the status page for service availability.

1. Save the token in a safe place.

You use this token in the next step to create a Docker Swarm.

Launch the Swarm manager


A single system in your network is known as your Docker Swarm manager. The swarm manager
orchestrates and schedules containers on the entire cluster. The swarm manager rules a set of
agents (also called nodes or Docker nodes).

Swarm agents are responsible for hosting containers. They are regular docker daemons and you
can communicate with them using the Docker Engine API.

In this section, you create a swarm manager and two nodes.

1. Create a swarm manager under VirtualBox.

2. docker-machine create \
3. -d virtualbox \
4. --swarm \
5. --swarm-master \
6. --swarm-discovery token://<TOKEN-FROM-ABOVE> \
7. swarm-master

For example:

$ docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://fe0cc96a72cf04dba8c1c4aa79536ec3 swarm-master
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0005] Starting VirtualBox VM...
INFO[0005] Waiting for VM to start...
INFO[0060] "swarm-master" has been created and is now the active machine.
INFO[0060] To point your Docker client at it, run this in your shell: eval
"$(docker-machine env swarm-master)"

8. Open your VirtualBox Manager; it should contain the local machine and the new swarm-master machine.

9. Create a swarm node.

10. docker-machine create \
11. -d virtualbox \
12. --swarm \
13. --swarm-discovery token://<TOKEN-FROM-ABOVE> \
14. swarm-agent-00

For example:

$ docker-machine create -d virtualbox --swarm --swarm-discovery token://fe0cc96a72cf04dba8c1c4aa79536ec3 swarm-agent-00
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0005] Starting VirtualBox VM...
INFO[0006] Waiting for VM to start...
INFO[0066] "swarm-agent-00" has been created and is now the active machine.
INFO[0066] To point your Docker client at it, run this in your shell: eval
"$(docker-machine env swarm-agent-00)"

15. Add another agent called swarm-agent-01.


16. $ docker-machine create -d virtualbox --swarm --swarm-discovery
token://fe0cc96a72cf04dba8c1c4aa79536ec3 swarm-agent-01

You should see the two agents in your VirtualBox Manager.

Direct your swarm


In this step, you connect to the swarm machine, display information related to your swarm, and start
an image on your swarm.

1. Point your Docker environment to the machine running the swarm master.

2. $ eval $(docker-machine env --swarm swarm-master)

3. Get information on your new swarm using the docker command.


4. $ docker info
5. Containers: 4
6. Strategy: spread
7. Filters: affinity, health, constraint, port, dependency
8. Nodes: 3
9. swarm-agent-00: 192.168.99.105:2376
10. └ Containers: 1
11. └ Reserved CPUs: 0 / 8
12. └ Reserved Memory: 0 B / 1.023 GiB
13. swarm-agent-01: 192.168.99.106:2376
14. └ Containers: 1
15. └ Reserved CPUs: 0 / 8
16. └ Reserved Memory: 0 B / 1.023 GiB
17. swarm-master: 192.168.99.104:2376
18. └ Containers: 2
19. └ Reserved CPUs: 0 / 8
You can see that each agent and the master all have port 2376 exposed. When you create a
swarm, you can use any port you like and even different ports on different nodes. Each
swarm node runs the swarm agent container.

The master is running both the swarm manager and a swarm agent container. This isn’t
recommended in a production environment because it can cause problems with agent
failover. However, it is perfectly fine to do this in a learning environment like this one.

20. Check the containers currently running on your swarm.

21. $ docker ps -a
22. CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
23. 78be991b58d1 swarm:latest "/swarm join --addr 3 minutes ago
Up 2 minutes 2375/tcp swarm-agent-
01/swarm-agent
24. da5127e4f0f9 swarm:latest "/swarm join --addr 6 minutes ago
Up 6 minutes 2375/tcp swarm-agent-
00/swarm-agent
25. ef395f316c59 swarm:latest "/swarm join --addr 16 minutes ago
Up 16 minutes 2375/tcp swarm-
master/swarm-agent
26. 45821ca5208e swarm:latest "/swarm manage --tls 16 minutes ago
Up 16 minutes 2375/tcp, 192.168.99.104:3376->3376/tcp swarm-
master/swarm-agent-master

27. Run the Docker hello-world test image on your swarm.


28. $ docker run hello-world
29. Hello from Docker.
30. This message shows that your installation appears to be working correctly.
31.
32. To generate this message, Docker took the following steps:
33. 1. The Docker client contacted the Docker daemon.
34. 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
35. (Assuming it was not already locally available.)
36. 3. The Docker daemon created a new container from that image which runs the
37. executable that produces the output you are currently reading.
38. 4. The Docker daemon streamed that output to the Docker client, which sent it
39. to your terminal.
To try something more ambitious, you can run an Ubuntu container with:

$ docker run -it ubuntu bash

For more examples and ideas, visit the User Guide.

40. Use the docker ps command to find out which node the container ran on.
41. $ docker ps -a
42. CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
43. 54a8690043dd hello-world:latest "/hello" 22 seconds ago
Exited (0) 3 seconds ago swarm-
agent-00/modest_goodall
44. 78be991b58d1 swarm:latest "/swarm join --addr 5 minutes ago
Up 4 minutes 2375/tcp swarm-
agent-01/swarm-agent
45. da5127e4f0f9 swarm:latest "/swarm join --addr 8 minutes ago
Up 8 minutes 2375/tcp swarm-
agent-00/swarm-agent
46. ef395f316c59 swarm:latest "/swarm join --addr 18 minutes ago
Up 18 minutes 2375/tcp swarm-
master/swarm-agent
47. 45821ca5208e swarm:latest "/swarm manage --tls 18 minutes ago
Up 18 minutes 2375/tcp, 192.168.99.104:3376->3376/tcp swarm-
master/swarm-agent-master

Where to go next
At this point, you’ve installed Docker Swarm by pulling the latest image of it from Docker Hub. Then,
you built and ran a swarm on your local machine using VirtualBox. If you want, you can go on to read
an overview of Docker Swarm features. Alternatively, you can develop a more in-depth view of
Swarm by manually installing Swarm on a network.

Plan for Swarm in production


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

This article provides guidance to help you plan, deploy, and manage Docker swarm clusters in
business critical production environments. The following high level topics are covered:

 Security
 High Availability
 Performance
 Cluster ownership

Security
There are many aspects to securing a Docker Swarm cluster. This section covers:

 Authentication using TLS


 Network access control

These topics are not exhaustive. They form part of a wider security architecture that includes:
security patching, strong password policies, role based access control, technologies such as
SELinux and AppArmor, strict auditing, and more.

Configure Swarm for TLS


All nodes in a swarm cluster must bind their Docker Engine daemons to a network port. This brings
with it all of the usual network related security implications such as man-in-the-middle attacks. These
risks are compounded when the network in question is untrusted such as the internet. To mitigate
these risks, Swarm and the Engine support Transport Layer Security (TLS) for authentication.

Engine daemons, including the swarm manager, that are configured to use TLS only accept
commands from Docker Engine clients that sign their communications. Engine and Swarm support
external third-party Certificate Authorities (CAs) as well as internal corporate CAs.

The default Engine and Swarm ports for TLS are:

 Engine daemon: 2376/tcp


 Swarm manager: 3376/tcp

For more information on configuring Swarm for TLS, see the Overview Docker Swarm with TLS page.
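As a rough sketch of what the two TLS-enabled processes look like on those ports (the certificate paths and token placeholder are assumptions for illustration; on Docker releases before 1.12 the daemon is started with `docker daemon` rather than `dockerd`):

```shell
# Sketch only: start the Engine daemon with TLS verification on its default TLS port.
# Certificate paths below are assumptions; substitute your own CA and server certs.
dockerd --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376

# Sketch only: run the swarm manager with the same CA, on its default TLS port.
docker run -d -p 3376:3376 -v /etc/docker:/certs:ro swarm manage \
  --tlsverify \
  --tlscacert=/certs/ca.pem \
  --tlscert=/certs/server-cert.pem \
  --tlskey=/certs/server-key.pem \
  --host 0.0.0.0:3376 \
  token://<TOKEN-FROM-ABOVE>
```

This is a configuration fragment, not a complete hardening guide; generating the CA and certificates is covered by the TLS overview page referenced above.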

Network access control


Production networks are complex, and usually locked down so that only allowed traffic can flow on
the network. The list below shows the network ports and protocols that the different components of a
Swarm cluster listen on. You should use these to configure your firewalls and other network access
control lists.

 Swarm manager.
o Inbound 80/tcp (HTTP). This allows docker pull commands to work. If you plan to
pull images from Docker Hub, you must allow Internet connections through port 80.
o Inbound 2375/tcp. This allows Docker Engine CLI commands direct to the Engine
daemon.
o Inbound 3375/tcp. This allows Engine CLI commands to the swarm manager.
o Inbound 22/tcp. This allows remote management via SSH.
 Service Discovery:
o Inbound 80/tcp (HTTP). This allows docker pull commands to work. If you plan to
pull images from Docker Hub, you must allow Internet connections through port 80.
o Inbound Discovery service port. Set this to the port that the backend discovery
service (consul, etcd, or zookeeper) listens on.
o Inbound 22/tcp. This allows remote management via SSH.
 Swarm nodes:
o Inbound 80/tcp (HTTP). This allows docker pull commands to work. If you plan to
pull images from Docker Hub, you must allow Internet connections through port 80.
o Inbound 2375/tcp. This allows Engine CLI commands direct to the Docker daemon.
o Inbound 22/tcp. This allows remote management via SSH.
 Custom, cross-host container networks:
o Inbound 7946/tcp Allows for discovering other container networks.
o Inbound 7946/udp Allows for discovering other container networks.
o Inbound <store-port>/tcp Network key-value store service port.
o 4789/udp For the container overlay network.
o ESP packets For encrypted overlay networks.

If your firewalls and other network devices are connection state aware, they allow responses to
established TCP connections. If your devices are not state aware, you need to open up ephemeral
ports from 32768-65535. For added security you can configure the ephemeral port rules to only
allow connections from interfaces on known swarm devices.

If your swarm cluster is configured for TLS, replace 2375 with 2376, and 3375 with 3376.

The ports listed above are just for swarm cluster operations such as cluster creation, cluster
management, and scheduling of containers against the cluster. You may need to open additional
network ports for application-related communications.
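The port list above can be turned into firewall rules with whatever tooling your hosts use. As a minimal sketch for a swarm node (the `ufw` syntax and the emit-then-review helper are illustrative assumptions; adapt them to your own firewall):

```shell
#!/bin/sh
# Sketch: emit firewall commands for a standalone swarm node's inbound ports.
# The ufw syntax and this helper are assumptions; adapt to your environment.
swarm_node_rules() {
  # 22 SSH, 80 image pulls, 2375 Engine CLI (2376 with TLS),
  # 7946 tcp/udp network discovery, 4789/udp overlay traffic
  for port in 22/tcp 80/tcp 2375/tcp 7946/tcp 7946/udp 4789/udp; do
    printf 'ufw allow in %s\n' "$port"
  done
}

swarm_node_rules   # review the output, then pipe it to a root shell to apply
```

Emitting the commands first rather than applying them directly makes the rule set easy to review against the list above before locking down a production host.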

It is possible for different components of a swarm cluster to exist on separate networks. For
example, many organizations operate separate management and production networks. Some
Docker Engine clients may exist on a management network, while swarm managers, discovery
service instances, and nodes might exist on one or more production networks. To offset against
network failures, you can deploy swarm managers, discovery services, and nodes across multiple
production networks. In all of these cases you can use the list of ports above to assist the work of
your network infrastructure teams to efficiently and securely configure your network.

High Availability (HA)


All production environments should be highly available, meaning they are continuously operational
over long periods of time. To achieve high availability, an environment must survive failures of its
individual component parts.

The following sections discuss some technologies and best practices that can enable you to build
resilient, highly available swarm clusters. You can then use these clusters to run your most
demanding production applications and workloads.

Swarm manager HA
The swarm manager is responsible for accepting all commands coming in to a swarm cluster, and
scheduling resources against the cluster. If the swarm manager becomes unavailable, some cluster
operations cannot be performed until the swarm manager becomes available again. This is
unacceptable in large-scale business critical scenarios.

Swarm provides HA features to mitigate against possible failures of the swarm manager. You can
use Swarm’s HA feature to configure multiple swarm managers for a single cluster. These swarm
managers operate in an active/passive formation with a single swarm manager being the primary,
and all others being secondaries.

Swarm secondary managers operate as warm standbys, meaning they run alongside the
primary swarm manager. The secondary swarm managers are online and accept commands issued
to the cluster, just like the primary swarm manager. However, any commands received by the
secondaries are forwarded to the primary where they are executed. Should the primary swarm
manager fail, a new primary is elected from the surviving secondaries.

When creating HA swarm managers, you should take care to distribute them over as many failure
domains as possible. A failure domain is a network section that can be negatively affected if a critical
device or service experiences problems. For example, if your cluster is running in the Ireland Region
of Amazon Web Services (eu-west-1) and you configure three swarm managers (1 x primary, 2 x
secondary), you should place one in each availability zone as shown below.
In this configuration, the swarm cluster can survive the loss of any two availability zones. For your
applications to survive such failures, they must be architected across multiple failure domains as
well.

For swarm clusters serving high-demand, line-of-business applications, you should have 3 or more
swarm managers. This configuration allows you to take one manager down for maintenance, suffer
an unexpected failure, and still continue to manage and operate the cluster.

Discovery service HA
The discovery service is a key component of a swarm cluster. If the discovery service becomes
unavailable, this can prevent certain cluster operations. For example, without a working discovery
service, operations such as adding new nodes to the cluster and making queries against the cluster
configuration fail. This is not acceptable in business critical production environments.

Swarm supports four backend discovery services:

 Hosted (not for production use)


 Consul
 etcd
 Zookeeper
Consul, etcd, and Zookeeper are all suitable for production, and should be configured for high
availability. You should use each service’s existing tools and best practices to configure these for
HA.

For swarm clusters serving high-demand, line-of-business applications, it is recommended to have 5
or more discovery service instances. This is due to the replication/HA technologies they use (such as
Paxos/Raft) requiring a strong quorum. Having 5 instances allows you to take one down for
maintenance, suffer an unexpected failure, and still maintain a strong quorum.
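The arithmetic behind that recommendation can be sketched with a small helper (hypothetical, not part of any Swarm tooling) that computes the majority quorum used by Raft/Paxos-style stores:

```shell
# Sketch: majority-quorum math for Raft/Paxos-style stores (Consul, etcd, ZooKeeper).
# quorum(n) is the smallest majority of n instances; tolerated(n) is how many
# instances can fail while a majority survives.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( ($1 - 1) / 2 )); }

for n in 3 5 7; do
  echo "$n instances: quorum $(quorum $n), tolerates $(tolerated $n) failures"
done
```

With 5 instances the quorum is 3, so one instance can be down for maintenance and a second can fail unexpectedly while the store keeps accepting writes; with only 3 instances, a single maintenance outage leaves no headroom for failure.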

When creating a highly available swarm discovery service, you should take care to distribute each
discovery service instance over as many failure domains as possible. For example, if your cluster is
running in the Ireland Region of Amazon Web Services (eu-west-1) and you configure three
discovery service instances, you should place one in each availability zone.

The diagram below shows a swarm cluster configured for HA. It has three swarm managers and
three discovery service instances spread over three failure domains (availability zones). It also has
swarm nodes balanced across all three failure domains. The loss of two availability zones in the
configuration shown below does not cause the swarm cluster to go down.

It is possible to share the same Consul, etcd, or Zookeeper containers between the swarm discovery
and Engine container networks. However, for best performance and availability you should deploy
dedicated instances – a discovery instance for Swarm and another for your container networks.
Multiple clouds
You can architect and build swarm clusters that stretch across multiple cloud providers, and even
across public cloud and on premises infrastructures. The diagram below shows an example swarm
cluster stretched across AWS and Azure.

While such architectures may appear to provide the ultimate in availability, there are several factors
to consider. Network latency can be problematic, as can partitioning. As such, you should seriously
consider technologies that provide reliable, high speed, low latency connections into these cloud
platforms – technologies such as AWS Direct Connect and Azure ExpressRoute.

If you are considering a production deployment across multiple infrastructures like this, make sure
you have good test coverage over your entire system.

Isolated production environments


It is possible to run multiple environments, such as development, staging, and production, on a
single swarm cluster. You accomplish this by tagging swarm nodes and using constraints to filter
containers onto nodes tagged as production or staging etc. However, this is not recommended. The
recommended approach is to air-gap production environments, especially high performance
business critical production environments.
For example, many companies not only deploy dedicated isolated infrastructures for production –
such as networks, storage, compute, and other systems – they also deploy separate management
systems and policies. This results in things like users having separate accounts for logging on to
production systems. In these types of environments, it is mandatory to deploy dedicated
production swarm clusters that operate on the production hardware infrastructure and follow
thorough production management, monitoring, audit, and other policies.
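If you do choose to run multiple environments on one cluster despite the recommendation above, the mechanism is standalone Swarm's constraint filter over daemon labels. A sketch (the `environment` label name is illustrative; older releases start the daemon with `docker daemon` rather than `dockerd`):

```shell
# Sketch: tag an Engine daemon with an environment label when it starts.
# The label name "environment" is an illustrative choice, not a required key.
dockerd --label environment=production

# Then pin a container to production-labeled nodes with a constraint filter:
docker run -d -e constraint:environment==production nginx
```

The constraint expression is evaluated by the swarm manager's constraint filter, so containers launched without it may still land on any node, which is one reason air-gapped clusters remain the safer choice.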

Operating system selection


You should give careful consideration to the operating system that your Swarm infrastructure relies
on. This consideration is vital for production environments.

It is not unusual for a company to use one operating system in development environments, and a
different one in production. A common example of this is to use CentOS in development
environments, but then to use Red Hat Enterprise Linux (RHEL) in production. This decision is often
a balance between cost and support. CentOS Linux can be downloaded and used for free, but
commercial support options are few and far between. Whereas RHEL has an associated support
and license cost, but comes with world class commercial support from Red Hat.

When choosing the production operating system to use with your swarm clusters, choose one that
closely matches what you have used in development and staging environments. Although containers
abstract much of the underlying OS, some features have configuration requirements. For example,
to use Docker container networking with Docker Engine 1.10 or higher, your host must have a Linux
kernel that is version 3.10 or higher. Refer to the change logs to understand the requirements for a
particular version of Docker Engine or Swarm.
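A quick way to check a candidate host against such a requirement is to compare its kernel release with the minimum version. A small helper sketch (`sort -V` requires GNU coreutils; the function name is an illustrative assumption):

```shell
# Sketch: check that the running kernel meets a minimum version (e.g. 3.10).
kernel_at_least() {
  # usage: kernel_at_least <actual> <required>; succeeds if actual >= required
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

required=3.10
actual=$(uname -r | cut -d- -f1)   # strip distro suffix, e.g. "3.10.0-957" -> "3.10.0"
if kernel_at_least "$actual" "$required"; then
  echo "kernel $actual is >= $required"
else
  echo "kernel $actual is too old for Docker container networking"
fi
```

Version-sorting the two values and checking which comes first avoids the pitfalls of plain string comparison (where "3.9" would wrongly sort after "3.10").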

You should also consider procedures and channels for deploying and potentially patching your
production operating systems.

Performance
Performance is critical in environments that support business critical line of business applications.
The following sections discuss some technologies and best practices that can help you build high
performance swarm clusters.

Container networks
Docker Engine container networks are overlay networks and can be created across multiple Engine
hosts. For this reason, a container network requires a key-value (KV) store to maintain network
configuration and state. This KV store can be shared in common with the one used by the swarm
cluster discovery service. However, for best performance and fault isolation, you should deploy
individual KV store instances for container networks and swarm discovery. This is especially so in
demanding business critical production environments.

Beginning with Docker Engine 1.9, Docker container networks require specific Linux kernel versions.
Higher kernel versions are usually preferred, but carry an increased risk of instability because of the
newness of the kernel. Where possible, use a kernel version that is already approved for use in your
production environment. If you cannot use a 3.10 or higher Linux kernel version for production, you
should begin the process of approving a newer kernel as early as possible.

Scheduling strategies
Scheduling strategies are how Swarm decides which nodes in a cluster to start containers on.
Swarm supports the following strategies:

 spread
 binpack
 random (not for production use)

You can also write your own.

Spread is the default strategy. It attempts to balance the number of containers evenly across all
nodes in the cluster. This is a good choice for high performance clusters, as it spreads container
workload across all resources in the cluster. These resources include CPU, RAM, storage, and
network bandwidth.

If your swarm nodes are balanced across multiple failure domains, the spread strategy evenly
balances containers across those failure domains. However, spread on its own is not aware of the
roles of any of those containers, so it has no intelligence to spread multiple instances of the same
service across failure domains. To achieve this you should use tags and constraints.

The binpack strategy runs as many containers as possible on a node, effectively filling it up, before
scheduling containers on the next node.

This means that binpack does not use all cluster resources until the cluster fills up. As a result,
applications running on swarm clusters that operate the binpack strategy might not perform as well
as those that operate the spread strategy. However, binpack is a good choice for minimizing
infrastructure requirements and cost. For example, imagine you have a 10-node cluster where each
node has 16 CPUs and 128GB of RAM. However, your container workload across the entire cluster
is only using the equivalent of 6 CPUs and 64GB RAM. The spread strategy would balance
containers across all nodes in the cluster. However, the binpack strategy would fit all containers on a
single node, potentially allowing you to turn off the additional nodes and save on cost.
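The strategy is selected when the manager starts, via the `--strategy` flag of `swarm manage`. A sketch (the discovery token is a placeholder, as elsewhere in this guide):

```shell
# Sketch: start a standalone swarm manager using the binpack strategy
# instead of the default spread strategy.
docker run -d -p 3375:3375 swarm manage \
  --strategy binpack \
  -H :3375 \
  token://<TOKEN-FROM-ABOVE>
```

Because the strategy belongs to the manager, changing it means restarting the manager; containers already placed are not rescheduled.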

Ownership of Swarm clusters


The question of ownership is vital in production environments. It is therefore vital that you consider
and agree on all of the following when planning, documenting, and deploying your production swarm
clusters.

 Whose budget does the production swarm infrastructure come out of?
 Who owns the accounts that can administer and manage the production swarm cluster?
 Who is responsible for monitoring the production swarm infrastructure?
 Who is responsible for patching and upgrading the production swarm infrastructure?
 On-call responsibilities and escalation procedures?

The above is not a complete list, and the answers to the questions vary depending on how your
organization and teams are structured. Some companies are a long way down the DevOps route,
while others are not. Whatever situation your company is in, it is important that you factor all of the
above into the planning, deployment, and ongoing management of your production swarm clusters.

Build a Swarm cluster for production


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

This page teaches you to deploy a high-availability swarm cluster. Although the example installation
uses the Amazon Web Services (AWS) platform, you can deploy an equivalent swarm on many
other platforms. In this example, you do the following:

 Verify you have the prerequisites


 Establish basic network security
 Create your nodes
 Install Engine on each node
 Configure a discovery backend
 Create a swarm cluster
 Communicate with the swarm
 Test the high-availability swarm managers
 Additional Resources

For a quickstart for Docker Swarm, try the Evaluate Swarm in a sandbox page.

Prerequisites
 An Amazon Web Services (AWS) account
 Familiarity with AWS features and tools, such as:
o Elastic Compute Cloud (EC2) Dashboard
o Virtual Private Cloud (VPC) Dashboard
o VPC Security groups
o Connecting to an EC2 instance using SSH

Step 1. Add network security rules


AWS uses a “security group” to allow specific types of network traffic on your VPC network.
The default security group’s initial set of rules denies all inbound traffic, allows all outbound traffic,
and allows all traffic between instances.

You’re going to add a couple of rules to allow inbound SSH connections and inbound container
images. This set of rules somewhat protects the Engine, Swarm, and Consul ports. For a production
environment, you would apply more restrictive security measures. Do not leave Docker Engine ports
unprotected.

From your AWS home console, do the following:

1. Click VPC - Isolated Cloud Resources.

The VPC Dashboard opens.

2. Navigate to Security Groups.

3. Select the default security group that’s associated with your default VPC.

4. Add the following two rules.

Type Protocol Port Range Source

SSH TCP 22 0.0.0.0/0

HTTP TCP 80 0.0.0.0/0

The SSH rule allows you to connect to the host, while the HTTP rule allows the host to pull container images.
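If you prefer the AWS CLI to the console, the same two rules can be added with `aws ec2 authorize-security-group-ingress`. A sketch (assumes a configured AWS CLI; `<sg-id>` is a placeholder for your default security group's ID):

```shell
# Sketch: add the SSH and HTTP inbound rules from the table above via the AWS CLI.
# <sg-id> is a placeholder for the default security group's ID.
aws ec2 authorize-security-group-ingress --group-id <sg-id> \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```

Scripting the rules makes it easier to later tighten the `--cidr` ranges to your own networks, as the production guidance above recommends.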
Step 2. Create your instances
In this step, you create five Linux hosts that are part of your default security group. When complete,
the example deployment contains three types of nodes:

Node Description Name

Swarm primary and secondary managers manager0, manager1

Swarm node node0, node1

Discovery backend consul0

To create the instances do the following:

1. Open the EC2 Dashboard and launch five EC2 instances, one at a time.

o During Step 1: Choose an Amazon Machine Image (AMI), pick the Amazon Linux
AMI.

o During Step 5: Tag Instance, under Value, give each instance one of these names:
 manager0
 manager1
 consul0
 node0
 node1

o During Step 6: Configure Security Group, choose Select an existing security
group and pick the “default” security group.

2. Review and launch your instances.

Step 3. Install Engine on each node


1. Install Docker on each host, using the appropriate instructions for your operating system and
distribution.

2. Edit /etc/docker/daemon.json. Create it if it does not exist. Assuming the file was empty, its
contents should be:

{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
Start or restart Docker for the changes to take effect.

$ sudo systemctl start docker

6. Give the ec2-user permission to run docker commands by adding it to the docker group:


7. $ sudo usermod -aG docker ec2-user

8. Log out of the host.

TROUBLESHOOTING

 If entering a docker command produces a message asking whether docker is available on
this host, it may be because the user doesn’t have root privileges. If so, use sudo or give the
user root privileges.

 For this example, don’t create an AMI image from one of your instances running Docker
Engine and then re-use it to create the other instances. Doing so produces errors.

 If your host cannot reach Docker Hub, docker run commands that pull images fail. In that
case, check that your VPC is associated with a security group with a rule that allows inbound
traffic. Also check the Docker Hub status page for service availability.

Step 4. Set up a discovery backend


Here, you’re going to create a minimalist discovery backend. The swarm managers and nodes use
this backend to authenticate themselves as members of the cluster. The swarm managers also use
this information to identify which nodes are available to run containers.

To keep things simple, you are going to run a single consul daemon on the same host as one of the
swarm managers.

1. Use SSH to connect to the consul0 instance.


2. $ ifconfig

3. From the output, copy the eth0 IP address from inet addr.
4. To set up a discovery backend, use the following command, replacing <consul0_ip>with the
IP address from the previous command:
5. $ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
-advertise=<consul0_ip>
6. Enter docker ps.
From the output, verify that a consul container is running. Then, disconnect from
the consul0 instance.

Your Consul node is up and running, providing your cluster with a discovery backend. To increase its
reliability, you can create a high-availability cluster from a trio of consul nodes, using the link
mentioned at the end of this page. (Before creating a cluster of consul nodes, update the VPC
security group with rules to allow inbound traffic on the required port numbers.)

Step 5. Create swarm cluster


After creating the discovery backend, you can create the swarm managers. In this step, you are
going to create two swarm managers in a high-availability configuration. The first manager you run
becomes the swarm’s primary manager. Some documentation still refers to a primary manager as a
“master”, but that term has been superseded. The second manager you run serves as a replica. If
the primary manager becomes unavailable, the cluster elects the replica as the primary manager.

1. Use SSH to connect to the manager0 instance and use ifconfig to get its IP address.
2. $ ifconfig

3. To create the primary manager in a high-availability swarm cluster, use the following syntax:

4. $ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise <manager0_ip>:4000 consul://<consul0_ip>:8500

Replace <manager0_ip> with the IP address from the previous command and <consul0_ip> with
the consul0 instance’s IP address, for example:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise
172.30.0.125:4000 consul://172.30.0.161:8500

5. Enter docker ps.


From the output, verify that a swarm cluster container is running. Then, disconnect from
the manager0 instance.
6. Connect to the manager1 node and use ifconfig to get its IP address.
7. $ ifconfig

8. Start the secondary swarm manager using the following command.

Replace <manager1_ip> with the IP address from the previous command, for example:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise
<manager1_ip>:4000 consul://172.30.0.161:8500

9. Enter docker ps to verify that a swarm container is running. Then disconnect from
the manager1 instance.
10. Connect to node0 and node1 in turn and join them to the cluster.
a. Get the node IP addresses with the ifconfig command.

b. Start a swarm container on each using the following syntax:

docker run -d swarm join --advertise=<node_ip>:2375 consul://<consul0_ip>:8500

For example:

$ docker run -d swarm join --advertise=172.30.0.69:2375 consul://172.30.0.161:8500

c. Enter docker ps to verify that the swarm cluster container started from the previous
command is running.

Your small swarm cluster is up and running on multiple hosts, providing you with a high-availability
virtual Docker Engine. To increase its reliability and capacity, you can add more swarm managers,
nodes, and a high-availability discovery backend.

Step 6. Communicate with the swarm


You can communicate with the swarm to get information about the managers and nodes using the
Swarm API, which is nearly the same as the standard Docker API. In this example, you use SSH to
connect to the manager0 host again. Then, you address commands to the swarm manager.

1. Get information about the manager and nodes in the cluster:

2. $ docker -H :4000 info

The output gives the manager’s role as primary (Role: primary) and information about each
of the nodes.

3. Run an application on the swarm:

4. $ docker -H :4000 run hello-world


5. Check which swarm node ran the application:

6. $ docker -H :4000 ps

Step 7. Test Swarm failover


To see the replica instance take over, you’re going to shut down the primary manager. Doing so
kicks off an election, and the replica becomes the primary manager. When you start the manager
you shut down earlier, it becomes the replica.

1. Use SSH to connect to the manager0 instance.


2. Get the container ID or name of the swarm container:
3. $ docker ps

4. Shut down the primary manager, replacing <id_name> with the container’s ID or name (for
example, “8862717fe6d3” or “trusting_lamarr”).
5. docker container rm -f <id_name>

6. Start the swarm manager. For example:

7. $ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 172.30.0.161:4000 consul://172.30.0.161:8500

8. Review the Engine’s daemon logs, replacing <id_name> with the new container’s ID or name:
9. $ sudo docker logs <id_name>

The output shows two entries like these ones:

time="2016-02-02T02:12:32Z" level=info msg="Leader Election: Cluster leadership lost"
time="2016-02-02T02:12:32Z" level=info msg="New leader elected: 172.30.0.160:4000"

10. To get information about the manager and nodes in the cluster, enter:

11. $ docker -H :4000 info

You can connect to the manager1 node and run the info and logs commands. They display
corresponding entries for the change in leadership.
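Extracting the new primary's address from those log entries can be scripted. A minimal sketch, using the sample log lines shown above (the sed expression is illustrative, not part of the tutorial):

```shell
# Log lines copied from the daemon output shown above.
logs='time="2016-02-02T02:12:32Z" level=info msg="Leader Election: Cluster leadership lost"
time="2016-02-02T02:12:32Z" level=info msg="New leader elected: 172.30.0.160:4000"'

# Pull out the address of the newly elected primary manager.
new_leader=$(printf '%s\n' "$logs" | sed -n 's/.*New leader elected: \(.*\)".*/\1/p')
echo "$new_leader"
```

In practice you would pipe `sudo docker logs <id_name>` into the same sed expression.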
Deploy application infrastructure
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — good places to start are Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

In this step, you create several Docker hosts to run your application stack on. Before you continue,
make sure you have taken the time to learn the application architecture.

About these instructions


This example assumes you are running on a Mac or Windows system and enabling Docker
Engine docker commands by provisioning local VirtualBox virtual machines with Docker Machine.
For this evaluation installation, you need seven VirtualBox VMs.

While this example uses Docker Machine, this is only one example of an infrastructure you can use.
You can create the environment design on whatever infrastructure you wish. For example, you could
place the application on another public cloud platform such as Azure or DigitalOcean, on premises in
your data center, or even in a test environment on your laptop.

Finally, these instructions use some common bash command substitution techniques to resolve
some values, for example:
$ eval $(docker-machine env keystore)

In a Windows environment, these substitutions fail. If you are running on Windows, replace the
substitution $(docker-machine env keystore) with the actual value.
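To see what the substitution actually does, here is a sketch with a mocked docker-machine command (the export lines mirror the typical shape of docker-machine env output; the values are illustrative):

```shell
# Mock of `docker-machine env keystore`: the real command prints export
# statements like these (values here are made up for illustration).
docker_machine_env() {
  echo 'export DOCKER_TLS_VERIFY="1"'
  echo 'export DOCKER_HOST="tcp://192.168.99.100:2376"'
  echo 'export DOCKER_MACHINE_NAME="keystore"'
}

# eval runs those exports in the current shell, pointing the docker CLI
# at the keystore VM. On Windows, run the env command, then apply the
# printed lines by hand instead of using $(...).
eval "$(docker_machine_env)"
echo "$DOCKER_HOST"
```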

Task 1. Create the keystore server


To enable a Docker container network and Swarm discovery, you must deploy (or supply) a key-
value store. As a discovery backend, the key-value store maintains an up-to-date list of cluster
members and shares that list with the Swarm manager. The Swarm manager uses this list to assign
tasks to the nodes.
An overlay network requires a key-value store. The key-value store holds information about the
network state which includes discovery, networks, endpoints, IP addresses, and more.

Several different backends are supported. This example uses a Consul container.

1. Create a “machine” named keystore.


2. $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
3. --engine-opt="label=com.function=consul" keystore

You can set options for the Engine daemon with the --engine-opt flag. In this command, you
use it to label this Engine instance.
4. Set your local shell to the keystore Docker host.
5. $ eval $(docker-machine env keystore)

6. Run the consul container.

7. $ docker run --restart=unless-stopped -d -p 8500:8500 -h consul progrium/consul -server -bootstrap

The -p flag publishes port 8500 on the container which is where the Consul server listens.
The server also has several other ports exposed which you can see by running docker ps.
$ docker ps
CONTAINER ID IMAGE ... PORTS
NAMES
372ffcbc96ed progrium/consul ... 53/tcp, 53/udp, 8300-8302/tcp,
8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp dreamy_ptolemy

8. Use a curl command to test the server by listing the nodes.


9. $ curl $(docker-machine ip keystore):8500/v1/catalog/nodes
10. [{"Node":"consul","Address":"172.17.0.2"}]
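A scripted version of that health check might look like this (a sketch; the response string is the one returned by the curl above):

```shell
# Response returned by the curl above.
response='[{"Node":"consul","Address":"172.17.0.2"}]'

# The catalog should list the consul node; report up/down accordingly.
check_keystore() {
  printf '%s' "$1" | grep -q '"Node":"consul"' && echo up || echo down
}

state=$(check_keystore "$response")
echo "keystore is $state"
```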

Task 2. Create the Swarm manager


In this step, you create the Swarm manager and connect it to the keystore instance. The Swarm
manager container is the heart of your Swarm cluster. It is responsible for receiving all Docker
commands sent to the cluster, and for scheduling resources against the cluster. In a real-world
production deployment, you should configure additional replica Swarm managers as secondaries for
high availability (HA).
Use the --engine-opt flag to set the cluster-store and cluster-advertise options to refer to
the keystore server. These options support the container network you create later.
1. Create the manager host.
2. $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
3. --engine-opt="label=com.function=manager" \
4. --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
5. --engine-opt="cluster-advertise=eth1:2376" manager

You also give the daemon a manager label.


6. Set your local shell to the manager Docker host.
7. $ eval $(docker-machine env manager)

8. Start the Swarm manager process.

9. $ docker run --restart=unless-stopped -d -p 3376:2375 \


10. -v /var/lib/boot2docker:/certs:ro \
11. swarm manage --tlsverify \
12. --tlscacert=/certs/ca.pem \
13. --tlscert=/certs/server.pem \
14. --tlskey=/certs/server-key.pem \
15. consul://$(docker-machine ip keystore):8500

This command mounts the TLS certificates that were created for the boot2docker.iso-based
manager host. The manager needs these certificates when it connects to other machines in the cluster.

16. Test your work by displaying the Docker daemon logs from the host.

17. $ docker-machine ssh manager


18. <-- output snipped -->
19. docker@manager:~$ tail /var/lib/boot2docker/docker.log
20. time="2016-04-06T23:11:56.481947896Z" level=debug msg="Calling GET
/v1.15/version"
21. time="2016-04-06T23:11:56.481984742Z" level=debug msg="GET /v1.15/version"
22. time="2016-04-06T23:12:13.070231761Z" level=debug msg="Watch triggered with 1
nodes" discovery=consul
23. time="2016-04-06T23:12:33.069387215Z" level=debug msg="Watch triggered with 1
nodes" discovery=consul
24. time="2016-04-06T23:12:53.069471308Z" level=debug msg="Watch triggered with 1
nodes" discovery=consul
25. time="2016-04-06T23:13:13.069512320Z" level=debug msg="Watch triggered with 1
nodes" discovery=consul
26. time="2016-04-06T23:13:33.070021418Z" level=debug msg="Watch triggered with 1
nodes" discovery=consul
27. time="2016-04-06T23:13:53.069395005Z" level=debug msg="Watch triggered with 1
nodes" discovery=consul
28. time="2016-04-06T23:14:13.071417551Z" level=debug msg="Watch triggered with 1
nodes" discovery=consul
29. time="2016-04-06T23:14:33.069843647Z" level=debug msg="Watch triggered with 1
nodes" discovery=consul

The output indicates that the consul and the manager are communicating correctly.

30. Exit the Docker host.

31. docker@manager:~$ exit

Task 3. Add the load balancer


The application uses Interlock and Nginx as a load balancer. Before you build the load balancer host,
create the configuration for Nginx.

1. On your local host, create a config directory.


2. Change directories to the config directory.
3. $ cd config

4. Get the IP address of the Swarm manager host.

For example:

$ docker-machine ip manager
192.168.99.101

5. Use your favorite editor to create a config.toml file and add this content to the file:
6. ListenAddr = ":8080"
7. DockerURL = "tcp://SWARM_MANAGER_IP:3376"
8. TLSCACert = "/var/lib/boot2docker/ca.pem"
9. TLSCert = "/var/lib/boot2docker/server.pem"
10. TLSKey = "/var/lib/boot2docker/server-key.pem"
11.
12. [[Extensions]]
13. Name = "nginx"
14. ConfigPath = "/etc/nginx/nginx.conf"
15. PidPath = "/var/run/nginx.pid"
16. MaxConn = 1024
17. Port = 80

18. In the configuration, replace the SWARM_MANAGER_IP with the manager IP you got in Step 4.

You use this value because the load balancer listens on the manager’s event stream.

19. Save and close the config.toml file.

20. Create a machine for the load balancer.

21. $ docker-machine create -d virtualbox --virtualbox-memory "2000" \


22. --engine-opt="label=com.function=interlock" loadbalancer

23. Switch the environment to the loadbalancer.


24. $ eval $(docker-machine env loadbalancer)

25. Start an interlock container.


26. $ docker run \
27. -P \
28. -d \
29. -ti \
30. -v nginx:/etc/conf \
31. -v /var/lib/boot2docker:/var/lib/boot2docker:ro \
32. -v /var/run/docker.sock:/var/run/docker.sock \
33. -v $(pwd)/config.toml:/etc/config.toml \
34. --name interlock \
35. ehazlett/interlock:1.0.1 \
36. -D run -c /etc/config.toml
This command relies on the config.toml file being in the current directory. After running the
command, confirm the image is running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
d846b801a978 ehazlett/interlock:1.0.1 "/bin/interlock -D ru" 2 minutes
ago Up 2 minutes 0.0.0.0:32770->8080/tcp interlock

If you don’t see the image running, use docker ps -a to list all images to make sure the
system attempted to start the image. Then, get the logs to see why the container failed to
start.
$ docker logs interlock
INFO[0000] interlock 1.0.1 (000291d)
DEBU[0000] loading config from: /etc/config.toml
FATA[0000] read /etc/config.toml: is a directory

This error usually means you didn't start the docker run command from the same config directory
where the config.toml file is. If you run the command and get a Conflict error such as:
docker: Error response from daemon: Conflict. The name "/interlock" is already in
use by container
d846b801a978c76979d46a839bb05c26d2ab949ff9f4f740b06b5e2564bae958. You have to
remove (or rename) that container to reuse that name.

Remove the interlock container with docker container rm interlock and try again.
37. Start an nginx container on the load balancer.
38. $ docker run -ti -d \
39. -p 80:80 \
40. --label interlock.ext.name=nginx \
41. --link=interlock:interlock \
42. -v nginx:/etc/conf \
43. --name nginx \
44. nginx nginx -g "daemon off;" -c /etc/conf/nginx.conf
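The SWARM_MANAGER_IP replacement from step 18 can also be scripted. A sketch (the IP is hard-coded here for illustration; in the tutorial it comes from docker-machine ip manager):

```shell
# Illustrative manager IP; really: manager_ip=$(docker-machine ip manager)
manager_ip="192.168.99.101"

# A trimmed config.toml containing the placeholder.
cat > config.toml <<'EOF'
ListenAddr = ":8080"
DockerURL = "tcp://SWARM_MANAGER_IP:3376"
EOF

# Substitute the placeholder, writing through a temp file for portability.
sed "s/SWARM_MANAGER_IP/$manager_ip/" config.toml > config.toml.new &&
  mv config.toml.new config.toml

grep DockerURL config.toml
```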

Task 4. Create the other Swarm nodes


A host in a Swarm cluster is called a node. You’ve already created the manager node. Here, the task
is to create each virtual host for each node. There are three commands required:

 create the host with Docker Machine


 point the local environment to the new host
 join the host to the Swarm cluster

If you were building this in a non-Mac/Windows environment, you’d only need to run
the join command to add a node to the Swarm cluster and register it with the Consul discovery
service. When you create a node, you also give it a label, for example:
--engine-opt="label=com.function=frontend01"

These labels are used later when starting application containers. In the commands below, notice the
label you are applying to each node.
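Because the four node setups below differ only in the label and machine name, the create commands can be generated in a loop. This sketch is a dry run that only builds and prints the commands (to run for real you would execute each command and add the matching eval and swarm join steps; the keystore IP is illustrative):

```shell
# Illustrative; really: keystore_ip=$(docker-machine ip keystore)
keystore_ip="192.168.99.100"

# Build one docker-machine create command per node (dry run: print only).
cmds=""
for node in frontend01 frontend02 worker01 dbstore; do
  cmds="${cmds}docker-machine create -d virtualbox --virtualbox-memory 2000 --engine-opt=label=com.function=$node --engine-opt=cluster-store=consul://$keystore_ip:8500 --engine-opt=cluster-advertise=eth1:2376 $node
"
done
printf '%s' "$cmds"
```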

1. Create the frontend01 host and add it to the Swarm cluster.


2. $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
3. --engine-opt="label=com.function=frontend01" \
4. --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
5. --engine-opt="cluster-advertise=eth1:2376" frontend01
6. $ eval $(docker-machine env frontend01)
7. $ docker run -d swarm join --addr=$(docker-machine ip frontend01):2376
consul://$(docker-machine ip keystore):8500

8. Create the frontend02 VM.


9. $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
10. --engine-opt="label=com.function=frontend02" \
11. --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
12. --engine-opt="cluster-advertise=eth1:2376" frontend02
13. $ eval $(docker-machine env frontend02)
14. $ docker run -d swarm join --addr=$(docker-machine ip frontend02):2376
consul://$(docker-machine ip keystore):8500

15. Create the worker01 VM.


16. $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
17. --engine-opt="label=com.function=worker01" \
18. --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
19. --engine-opt="cluster-advertise=eth1:2376" worker01
20. $ eval $(docker-machine env worker01)
21. $ docker run -d swarm join --addr=$(docker-machine ip worker01):2376
consul://$(docker-machine ip keystore):8500

22. Create the dbstore VM.


23. $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
24. --engine-opt="label=com.function=dbstore" \
25. --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
26. --engine-opt="cluster-advertise=eth1:2376" dbstore
27. $ eval $(docker-machine env dbstore)
28. $ docker run -d swarm join --addr=$(docker-machine ip dbstore):2376
consul://$(docker-machine ip keystore):8500

29. Check your work.

At this point, you have deployed the infrastructure you need to run the application. Test
this now by listing the running machines:

$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
DOCKER ERRORS
dbstore - virtualbox Running tcp://192.168.99.111:2376
v1.10.3
frontend01 - virtualbox Running tcp://192.168.99.108:2376
v1.10.3
frontend02 - virtualbox Running tcp://192.168.99.109:2376
v1.10.3
keystore - virtualbox Running tcp://192.168.99.100:2376
v1.10.3
loadbalancer - virtualbox Running tcp://192.168.99.107:2376
v1.10.3
manager - virtualbox Running tcp://192.168.99.101:2376
v1.10.3
worker01 * virtualbox Running tcp://192.168.99.110:2376
v1.10.3

30. Make sure the Swarm manager sees all your nodes.

31. $ docker -H $(docker-machine ip manager):3376 info


32. Containers: 4
33. Running: 4
34. Paused: 0
35. Stopped: 0
36. Images: 3
37. Server Version: swarm/1.1.3
38. Role: primary
39. Strategy: spread
40. Filters: health, port, dependency, affinity, constraint
41. Nodes: 4
42. dbstore: 192.168.99.111:2376
43. └ Status: Healthy
44. └ Containers: 1
45. └ Reserved CPUs: 0 / 1
46. └ Reserved Memory: 0 B / 2.004 GiB
47. └ Labels: com.function=dbstore, executiondriver=native-0.2,
kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL
6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox,
storagedriver=aufs
48. └ Error: (none)
49. └ UpdatedAt: 2016-04-07T18:25:37Z
50. frontend01: 192.168.99.108:2376
51. └ Status: Healthy
52. └ Containers: 1
53. └ Reserved CPUs: 0 / 1
54. └ Reserved Memory: 0 B / 2.004 GiB
55. └ Labels: com.function=frontend01, executiondriver=native-0.2,
kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL
6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox,
storagedriver=aufs
56. └ Error: (none)
57. └ UpdatedAt: 2016-04-07T18:26:10Z
58. frontend02: 192.168.99.109:2376
59. └ Status: Healthy
60. └ Containers: 1
61. └ Reserved CPUs: 0 / 1
62. └ Reserved Memory: 0 B / 2.004 GiB
63. └ Labels: com.function=frontend02, executiondriver=native-0.2,
kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL
6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox,
storagedriver=aufs
64. └ Error: (none)
65. └ UpdatedAt: 2016-04-07T18:25:43Z
66. worker01: 192.168.99.110:2376
67. └ Status: Healthy
68. └ Containers: 1
69. └ Reserved CPUs: 0 / 1
70. └ Reserved Memory: 0 B / 2.004 GiB
71. └ Labels: com.function=worker01, executiondriver=native-0.2,
kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL
6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox,
storagedriver=aufs
72. └ Error: (none)
73. └ UpdatedAt: 2016-04-07T18:25:56Z
74. Plugins:
75. Volume:
76. Network:
77. Kernel Version: 4.1.19-boot2docker
78. Operating System: linux
79. Architecture: amd64
80. CPUs: 4
81. Total Memory: 8.017 GiB
82. Name: bb13b7cf80e8

The command is acting on the Swarm port, so it returns information about the entire cluster:
a manager and four nodes.
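A scripted sanity check over the docker-machine ls output might look like this (a sketch; the machine list is the one shown above, trimmed to name and state):

```shell
# NAME/STATE pairs taken from the docker-machine ls output above.
machines='dbstore Running
frontend01 Running
frontend02 Running
keystore Running
loadbalancer Running
manager Running
worker01 Running'

# All seven VMs should be Running before you deploy the application.
running=$(printf '%s\n' "$machines" | grep -c 'Running')
echo "running machines: $running"
```

In practice, `docker-machine ls | grep -c Running` gives the same count directly.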

Deploy the application



You’ve deployed the load balancer, the discovery backend, and a swarm cluster so now you can
build and deploy the voting application itself. You do this by starting a number of “Dockerized
applications” running in containers.

The diagram below shows the final application configuration including the overlay container
network, voteapp.

In this procedure you connect containers to this network. The voteapp network is available to all
Docker hosts using the Consul discovery backend. Notice that the interlock, nginx, consul,
and swarm manager containers are not part of the voteapp overlay container network.

Task 1. Set up volume and network


This application relies on both an overlay container network and a container volume. The Docker
Engine provides these two features. Create them both on the swarm manager instance.

1. Direct your local environment to the swarm manager host.


2. $ eval $(docker-machine env manager)

You can create the network on any cluster node, and it is visible on all of them.

3. Create the voteapp container network.


4. $ docker network create -d overlay voteapp

5. Switch to the db store.

6. $ eval $(docker-machine env dbstore)

7. Verify you can see the new network from the dbstore node.

8. $ docker network ls
9. NETWORK ID NAME DRIVER
10. e952814f610a voteapp overlay
11. 1f12c5e7bcc4 bridge bridge
12. 3ca38e887cd8 none null
13. 3da57c44586b host host

14. Create a container volume called db-data.


15. $ docker volume create --name db-data

Task 2. Start the containerized microservices


At this point, you are ready to start the component microservices that make up the application. Some
of the application’s containers are launched from existing images pulled directly from Docker Hub.
Other containers are launched from custom images you must build. The list below shows which
containers use custom images and which do not:

 Load balancer container: stock image (ehazlett/interlock)


 Redis containers: stock image (official redis image)
 Postgres (PostgreSQL) containers: stock image (official postgres image)
 Web containers: custom built image
 Worker containers: custom built image
 Results containers: custom built image

You can launch these containers from any host in the cluster using the commands in this section.
Each command includes a -H flag so that it executes against the swarm manager.
The commands also all use the -e flag to pass a Swarm constraint. The constraint tells the manager
to look for a node with a matching function label. You established the labels when you created
the nodes. As you run each command below, look for the value of the constraint.

1. Start a Postgres database container.

2. $ docker -H $(docker-machine ip manager):3376 run -t -d \


3. -v db-data:/var/lib/postgresql/data \
4. -e constraint:com.function==dbstore \
5. --net="voteapp" \
6. --name db postgres:9.4

7. Start the Redis container.

8. $ docker -H $(docker-machine ip manager):3376 run -t -d \


9. -p 6379:6379 \
10. -e constraint:com.function==dbstore \
11. --net="voteapp" \
12. --name redis redis

The redis name is important, so don’t change it.

13. Start the worker application

14. $ docker -H $(docker-machine ip manager):3376 run -t -d \


15. -e constraint:com.function==worker01 \
16. --net="voteapp" \
17. --net-alias=workers \
18. --name worker01 docker/example-voting-app-worker

19. Start the results application.

20. $ docker -H $(docker-machine ip manager):3376 run -t -d \


21. -p 80:80 \
22. --label=interlock.hostname=results \
23. --label=interlock.domain=myenterprise.example.com \
24. -e constraint:com.function==dbstore \
25. --net="voteapp" \
26. --name results-app docker/example-voting-app-result

27. Start the voting application twice; once on each frontend node.

28. $ docker -H $(docker-machine ip manager):3376 run -t -d \


29. -p 80:80 \
30. --label=interlock.hostname=vote \
31. --label=interlock.domain=myenterprise.example.com \
32. -e constraint:com.function==frontend01 \
33. --net="voteapp" \
34. --name voting-app01 docker/example-voting-app-vote

And again on the other frontend node.

$ docker -H $(docker-machine ip manager):3376 run -t -d \


-p 80:80 \
--label=interlock.hostname=vote \
--label=interlock.domain=myenterprise.example.com \
-e constraint:com.function==frontend02 \
--net="voteapp" \
--name voting-app02 docker/example-voting-app-vote

Task 3. Check your work and update /etc/hosts


In this step, you check your work to make sure the Nginx configuration recorded the containers
correctly. Then you update your local system’s /etc/hosts file so you can take advantage of the
load balancer.
1. Change to the loadbalancer node.
2. $ eval $(docker-machine env loadbalancer)

3. Check your work by reviewing the configuration of nginx.

4. $ docker container exec interlock cat /etc/conf/nginx.conf


5. ... output snipped ...
6.
7. upstream results.myenterprise.example.com {
8. zone results.myenterprise.example.com_backend 64k;
9.
10. server 192.168.99.111:80;
11.
12. }
13. server {
14. listen 80;
15.
16. server_name results.myenterprise.example.com;
17.
18. location / {
19. proxy_pass http://results.myenterprise.example.com;
20. }
21. }
22. upstream vote.myenterprise.example.com {
23. zone vote.myenterprise.example.com_backend 64k;
24.
25. server 192.168.99.109:80;
26. server 192.168.99.108:80;
27.
28. }
29. server {
30. listen 80;
31.
32. server_name vote.myenterprise.example.com;
33.
34. location / {
35. proxy_pass http://vote.myenterprise.example.com;
36. }
37. }
38.
39. include /etc/conf/conf.d/*.conf;
40. }
The http://vote.myenterprise.example.com site configuration should point to either
frontend node. Requests to http://results.myenterprise.example.com go just to the
single dbstore node where the example-voting-app-result is running.
41. On your local host, edit the /etc/hosts file to add name resolution for both these sites.
42. Save and close the /etc/hosts file.
43. Restart the nginx container.

Manual restart is required because the current Interlock server is not forcing an Nginx
configuration reload.

$ docker restart nginx
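The /etc/hosts entries from step 41 should map both site names to the load balancer machine’s IP, which is 192.168.99.107 in the docker-machine ls output earlier (substitute whatever docker-machine ip loadbalancer reports on your system):

```
192.168.99.107 vote.myenterprise.example.com
192.168.99.107 results.myenterprise.example.com
```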

Task 4. Test the application


Now, you can test your application.

1. Open a browser and navigate to the http://vote.myenterprise.example.com site.

You should see the voting page with its two voting options.

2. Click on one of the two voting options.


3. Navigate to the http://results.myenterprise.example.com site to see the results.

4. Try changing your vote.

Both sides change as you switch your vote.


Extra Credit: Deployment with Docker Compose
Up to this point, you’ve deployed each application container individually. This can be cumbersome,
especially because there are several different containers and starting them is order dependent. For
example, the database should be running before the worker.

Docker Compose lets you define your microservice containers and their dependencies in a
Compose file. Then, you can use the Compose file to start all the containers at once. This extra-credit
exercise walks you through redeploying the application with Compose.

1. Before you begin, stop all the containers you started.

a. Set the host to the manager.

$ DOCKER_HOST=$(docker-machine ip manager):3376

b. List all the application containers on the swarm.

c. Stop and remove each container.

2. Try to create the Compose file on your own by reviewing the tasks in this tutorial.

The version 2 Compose file format is the best to use. Translate each docker run command
into a service in the docker-compose.yml file. For example, this command:
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-e constraint:com.function==worker01 \
--net="voteapp" \
--net-alias=workers \
--name worker01 docker/example-voting-app-worker

Becomes this in a Compose file.


worker:
  image: docker/example-voting-app-worker
  networks:
    voteapp:
      aliases:
        - workers

In general, Compose starts services in the reverse of the order in which they appear in the file. So,
if you want a service to start before all the others, make it the last service in the file. This application
relies on a volume and a network, so declare those at the bottom of the file.

3. Check your work against this file.

4. When you are satisfied, save the docker-compose.yml file to your system.
5. Set DOCKER_HOST to the swarm manager.
6. $ DOCKER_HOST=$(docker-machine ip manager):3376

7. In the same directory as your docker-compose.yml file, start the services.


8. $ docker-compose up -d
9. Creating network "scale_voteapp" with the default driver
10. Creating volume "scale_db-data" with default driver
11. Pulling db (postgres:9.4)...
12. worker01: Pulling postgres:9.4... : downloaded
13. dbstore: Pulling postgres:9.4... : downloaded
14. frontend01: Pulling postgres:9.4... : downloaded
15. frontend02: Pulling postgres:9.4... : downloaded
16. Creating db
17. Pulling redis (redis:latest)...
18. dbstore: Pulling redis:latest... : downloaded
19. frontend01: Pulling redis:latest... : downloaded
20. frontend02: Pulling redis:latest... : downloaded
21. worker01: Pulling redis:latest... : downloaded
22. Creating redis
23. Pulling worker (docker/example-voting-app-worker:latest)...
24. dbstore: Pulling docker/example-voting-app-worker:latest... : downloaded
25. frontend01: Pulling docker/example-voting-app-worker:latest... : downloaded
26. frontend02: Pulling docker/example-voting-app-worker:latest... : downloaded
27. worker01: Pulling docker/example-voting-app-worker:latest... : downloaded
28. Creating scale_worker_1
29. Pulling voting-app (docker/example-voting-app-vote:latest)...
30. dbstore: Pulling docker/example-voting-app-vote:latest... : downloaded
31. frontend01: Pulling docker/example-voting-app-vote:latest... : downloaded
32. frontend02: Pulling docker/example-voting-app-vote:latest... : downloaded
33. worker01: Pulling docker/example-voting-app-vote:latest... : downloaded
34. Creating scale_voting-app_1
35. Pulling result-app (docker/example-voting-app-result:latest)...
36. dbstore: Pulling docker/example-voting-app-result:latest... : downloaded
37. frontend01: Pulling docker/example-voting-app-result:latest... : downloaded
38. frontend02: Pulling docker/example-voting-app-result:latest... : downloaded
39. worker01: Pulling docker/example-voting-app-result:latest... : downloaded
40. Creating scale_result-app_1

41. Use the docker ps command to see the containers on the swarm cluster.
42. $ docker -H $(docker-machine ip manager):3376 ps
43. CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES
44. b71555033caa docker/example-voting-app-result "node server.js"
6 seconds ago Up 4 seconds 192.168.99.104:32774->80/tcp
frontend01/scale_result-app_1
45. cf29ea21475d docker/example-voting-app-worker "/usr/lib/jvm/java-
7-" 6 seconds ago Up 4 seconds
worker01/scale_worker_1
46. 98414cd40ab9 redis "/entrypoint.sh
redis" 7 seconds ago Up 5 seconds 192.168.99.105:32774-
>6379/tcp frontend02/redis
47. 1f214acb77ae postgres:9.4 "/docker-
entrypoint.s" 7 seconds ago Up 5 seconds 5432/tcp
frontend01/db
48. 1a4b8f7ce4a9 docker/example-voting-app-vote "python app.py"
7 seconds ago Up 5 seconds 192.168.99.107:32772->80/tcp
dbstore/scale_voting-app_1
When you started the services manually, you had voting-app instances running on two
frontend servers. How many do you have now?
49. Scale your application up by adding some voting-app instances.
50. $ docker-compose scale voting-app=3
51. Creating and starting 2 ... done
52. Creating and starting 3 ... done

After you scale up, list the containers on the cluster again.

53. Change to the loadbalancer node.


54. $ eval $(docker-machine env loadbalancer)

55. Restart the Nginx server.

56. $ docker restart nginx

57. Check your work again by visiting http://vote.myenterprise.example.com
and http://results.myenterprise.example.com.

58. You can view the logs on an individual container.

59. $ docker logs scale_voting-app_1


60. * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
61. * Restarting with stat
62. * Debugger is active!
63. * Debugger pin code: 285-809-660
64. 192.168.99.103 - - [11/Apr/2016 17:15:44] "GET / HTTP/1.0" 200 -
65. 192.168.99.103 - - [11/Apr/2016 17:15:44] "GET /static/stylesheets/style.css
HTTP/1.0" 304 -
66. 192.168.99.103 - - [11/Apr/2016 17:15:45] "GET /favicon.ico HTTP/1.0" 404 -
67. 192.168.99.103 - - [11/Apr/2016 17:22:24] "POST / HTTP/1.0" 200 -
68. 192.168.99.103 - - [11/Apr/2016 17:23:37] "POST / HTTP/1.0" 200 -
69. 192.168.99.103 - - [11/Apr/2016 17:23:39] "POST / HTTP/1.0" 200 -
70. 192.168.99.103 - - [11/Apr/2016 17:23:40] "POST / HTTP/1.0" 200 -
71. 192.168.99.103 - - [11/Apr/2016 17:23:41] "POST / HTTP/1.0" 200 -
72. 192.168.99.103 - - [11/Apr/2016 17:23:43] "POST / HTTP/1.0" 200 -
73. 192.168.99.103 - - [11/Apr/2016 17:23:44] "POST / HTTP/1.0" 200 -
74. 192.168.99.103 - - [11/Apr/2016 17:23:46] "POST / HTTP/1.0" 200 -

This log shows the activity on one of the active voting application containers.
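Applying the translation pattern above to every docker run command in Task 2 yields a Compose file along these lines. This is a sketch assembled from the commands in this tutorial, not the official example file; the service names and the environment-variable form of the constraints are assumptions:

```
version: "2"

services:
  voting-app:
    image: docker/example-voting-app-vote
    ports:
      - "80"
    environment:
      - "constraint:com.function==frontend01"
    networks:
      - voteapp
  result-app:
    image: docker/example-voting-app-result
    ports:
      - "80"
    environment:
      - "constraint:com.function==dbstore"
    networks:
      - voteapp
  worker:
    image: docker/example-voting-app-worker
    environment:
      - "constraint:com.function==worker01"
    networks:
      voteapp:
        aliases:
          - workers
  redis:
    image: redis
    ports:
      - "6379:6379"
    environment:
      - "constraint:com.function==dbstore"
    networks:
      - voteapp
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - "constraint:com.function==dbstore"
    networks:
      - voteapp

volumes:
  db-data:

networks:
  voteapp:
```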

Troubleshoot the application



It’s a fact of life that things fail. With this in mind, it’s important to understand what happens when
failures occur and how to mitigate them. The following sections cover different failure scenarios:

 Swarm manager failures


 Consul (discovery backend) failures
 Interlock load balancer failures
 Web (voting-app) failures
 Redis failures
 Worker (vote-worker) failures
 Postgres failures
 Results-app failures
 Infrastructure failures

Swarm manager failures


In its current configuration, the swarm cluster has only a single manager container running on a single
node. If the container exits or the node fails, you cannot administer the cluster until you either fix it
or replace it.

If the failure is the swarm manager container unexpectedly exiting, Docker automatically attempts to
restart it. This is because the container was started with the --restart=unless-stopped switch.

While the swarm manager is unavailable, the application continues to work in its current
configuration. However, you cannot provision more nodes or containers until you have a working
swarm manager.
Docker Swarm supports high availability for swarm managers. This allows a single swarm cluster to
have two or more managers. One manager is elected as the primary manager and all others operate
as secondaries. In the event that the primary manager fails, one of the secondaries is elected as the
new primary, and cluster operations continue gracefully. If you are deploying multiple swarm
managers for high availability, you should consider spreading them across multiple failure domains
within your infrastructure.

Consul (discovery backend) failures


The swarm cluster that you have deployed has a single Consul container on a single node
performing the cluster discovery service. In this setup, if the Consul container exits or the node fails,
the application continues to operate in its current configuration. However, certain cluster
management operations fail. These include registering new containers in the cluster and making
lookups against the cluster configuration.

If the failure is the consul container unexpectedly exiting, Docker automatically attempts to restart it.
This is because the container was started with the --restart=unless-stopped switch.
The Consul, etcd, and ZooKeeper discovery service backends support various options for high
availability, including Paxos/Raft quorums. You should follow existing best practices for
deploying HA configurations of your chosen discovery service backend. If you are deploying multiple
discovery service instances for high availability, you should consider spreading them across multiple
failure domains within your infrastructure.

If you operate your swarm cluster with a single discovery backend service and this service fails and
is unrecoverable, you can start a new empty instance of the discovery backend and the swarm
agents on each node in the cluster repopulate it.

Handling failures
There are many reasons why containers can fail. However, Swarm does not attempt to restart failed
containers.

One way to automatically restart failed containers is to explicitly start them with the
--restart=unless-stopped flag. This tells the local Docker daemon to attempt to restart the container if
it unexpectedly exits. This only works in situations where the node hosting the container and its
Docker daemon are still up. It cannot restart a container if the node hosting it has failed, or if the
Docker daemon itself has failed.
Another way is to have an external tool (external to the cluster) monitor the state of your application,
and make sure that certain service levels are maintained. These service levels can include things
like “have at least 10 web server containers running”. In this scenario, if the number of web
containers drops below 10, the tool attempts to start more.
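The restart policy described above is set per container at run time. A minimal sketch, assuming the container name and image are placeholders chosen for illustration:

```shell
# Start a container that the local daemon restarts on unexpected exit.
# "web01" and the nginx image are illustrative, not from the voting app.
docker run -d \
  --restart=unless-stopped \
  --name web01 \
  nginx
```

With unless-stopped, the daemon keeps restarting the container after failures and daemon restarts, except when you stop it explicitly with docker stop.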

In our simple voting-app example, the front-end is scalable and serviced by a load balancer. In the
event that one of the two web containers fails (or the node that is hosting it fails), the load balancer
stops routing requests to it and sends all requests to the surviving web container. This solution is
highly scalable, meaning you can have up to n web containers behind the load balancer.

Interlock load balancer failures


The environment that you have provisioned has a single Interlock load balancer container running on
a single node. In this setup, if the container exits or the node fails, the application cannot service
incoming requests and the application is unavailable.

If the failure is the interlock container unexpectedly exiting, Docker automatically attempts to restart
it. This is because the container was started with the --restart=unless-stopped switch.

It is possible to build an HA Interlock load balancer configuration. One such way is to have multiple
Interlock containers on multiple nodes. You can then use DNS round robin, or other technologies, to
load balance across each Interlock container. That way, if one Interlock container or node goes
down, the others continue to service requests.

If you deploy multiple interlock load balancers, you should consider spreading them across multiple
failure domains within your infrastructure.

Web (voting-app) failures


The environment that you have configured has two voting-app containers running on two separate
nodes. They operate behind an Interlock load balancer that distributes incoming connections across
both.

In the event that one of the web containers or nodes fails, the load balancer starts directing all
incoming requests to the surviving instance. Once the failed instance is back up, or a replacement is
added, the load balancer adds it to the configuration and starts sending a portion of the incoming
requests to it.
For highest availability you should deploy the two frontend web services
(frontend01 and frontend02) in different failure zones within your infrastructure. You should also
consider deploying more.

Redis failures
If the redis container fails, its partnered voting-app container does not function correctly. The best
solution in this instance might be to configure health monitoring that verifies the ability to write to
each Redis instance. If an unhealthy redis instance is encountered, remove the voting-app and
redis combination and attempt remedial actions.
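One way to implement the health check described above is to probe each redis instance with redis-cli and act on the reply. A sketch; the reply parsing is separated out so it can be demonstrated without a live instance, and the host name redis01 is hypothetical:

```shell
# Interpret the reply to `redis-cli -h <host> ping`: a leading PONG
# means the instance answered; anything else counts as unhealthy.
check_pong() {
  if grep -q '^PONG'; then
    echo healthy
  else
    echo unhealthy
  fi
}

# Against a live instance you would pipe the real reply:
#   redis-cli -h redis01 ping | check_pong
# Standalone demonstration with a canned reply:
echo PONG | check_pong    # prints "healthy"
```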

Worker (vote-worker) failures


If the worker container exits, or the node that is hosting it fails, the redis containers queue votes until
the worker container comes back up. This situation can persist indefinitely, but a worker must
eventually come back and process the queued votes.

If the failure is the worker01 container unexpectedly exiting, Docker automatically attempts to restart
it. This is because the container was started with the --restart=unless-stopped switch.

Postgres failures
This application does not implement any form of HA or replication for Postgres. Therefore, losing the
Postgres container would cause the application to fail and potentially lose or corrupt data. A better
solution would be to implement some form of Postgres HA or replication.

Results-app failures
If the results-app container exits, you cannot browse to the results of the poll until the container is
back up and running. Results continue to be collected and counted in the meantime; you just can’t
view them.

The results-app container was started with the --restart=unless-stopped flag meaning that the
Docker daemon automatically attempts to restart it unless it was administratively stopped.

Infrastructure failures
There are many ways in which the infrastructure underpinning your applications can fail. However,
there are a few best practices that can be followed to help mitigate and offset these failures.

One of these is to deploy infrastructure components over as many failure domains as possible. On a
service such as AWS, this often translates into balancing infrastructure and services across multiple
AWS Availability Zones (AZ) within a Region.

To increase the availability of our swarm cluster you could:

 Configure the swarm manager for HA and deploy HA nodes in different AZs
 Configure the Consul discovery service for HA and deploy HA nodes in different AZs
 Deploy all scalable components of the application across multiple AZs

This configuration is shown in the diagram below.

This allows us to lose an entire AZ and still have our cluster and application operate.

But it doesn’t have to stop there. Some applications can be balanced across AWS Regions. It’s even
becoming possible to deploy services across cloud providers, or to balance services across
public cloud providers and your on-premises data centers!

The diagram below shows parts of the application and infrastructure deployed across AWS and
Microsoft Azure. But you could just as easily replace one of those cloud providers with your own
on-premises data center. In these scenarios, network latency and reliability are key to a smooth and
workable solution.

High availability in Docker Swarm


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

In Docker Swarm, the swarm manager is responsible for the entire cluster and manages the
resources of multiple Docker hosts at scale. If the swarm manager dies, you must create a new one
and deal with an interruption of service.

The High Availability feature allows a swarm to gracefully handle the failover of a manager instance.
Using this feature, you can create a single primary manager instance and
multiple replica instances.
A primary manager is the main point of contact with the swarm cluster. You can also create and talk
to replica instances that act as backups. Requests issued on a replica are automatically proxied to
the primary manager. If the primary manager fails, a replica takes over the lead. In this way, you
always keep a point of contact with the cluster.

Setup primary and replicas


This section explains how to set up Docker Swarm using multiple managers.

Assumptions
You need either a Consul, etcd, or Zookeeper cluster. This procedure is written assuming
a Consul server running on address 192.168.42.10:8500. All hosts have a Docker Engine configured
to listen on port 2375. The Managers operate on port 4000. The sample swarm configuration has
three machines:

 manager-1 on 192.168.42.200
 manager-2 on 192.168.42.201
 manager-3 on 192.168.42.202

Create the primary manager


You use the swarm manage command with the --replication and --advertise flags to create a
primary manager.
user@manager-1 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise
192.168.42.200:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP addr=:4000 proto=tcp
INFO[0000] Cluster leadership acquired
INFO[0000] New leader elected: 192.168.42.200:4000
[...]

The --replication flag tells Swarm that the manager is part of a multi-manager configuration and
that this primary manager competes with other manager instances for the primary role. The primary
manager has the authority to manage the cluster, replicate logs, and replicate events happening
inside the cluster.
The --advertise option specifies the primary manager address. Swarm uses this address to
advertise to the cluster when the node is elected as the primary. As you can see in the command’s
output, the address you provided is now shown as the address of the elected primary manager.
Create two replicas
Now that you have a primary manager, you can create replicas.

user@manager-2 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise
192.168.42.201:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
[...]

This command creates a replica manager on 192.168.42.201:4000 which is looking
at 192.168.42.200:4000 as the primary manager.

Create an additional, third manager instance:

user@manager-3 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise
192.168.42.202:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
[...]

Once you have established your primary manager and the replicas, create swarm agents as you
normally would.
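For this setup, each agent joins through the same Consul backend the managers use; a sketch, where the node address is a placeholder you fill in per host:

```shell
# Join a node to the cluster via the Consul discovery backend used above.
# <node_ip> is the address the swarm manager uses to reach this Engine.
swarm join --advertise=<node_ip>:2375 consul://192.168.42.10:8500/nodes
```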

List machines in the cluster


Typing docker info should give you an output similar to the following:
user@my-machine $ export DOCKER_HOST=192.168.42.200:4000 # Points to manager-1
user@my-machine $ docker info
Containers: 0
Images: 25
Storage Driver:
Role: Primary <--------- manager-1 is the Primary manager
Primary: 192.168.42.200
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 3
swarm-agent-0: 192.168.42.100:2375
└ Containers: 0
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.053 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.13.0-49-generic,
operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
swarm-agent-1: 192.168.42.101:2375
└ Containers: 0
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.053 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.13.0-49-generic,
operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
swarm-agent-2: 192.168.42.102:2375
└ Containers: 0
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.053 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.13.0-49-generic,
operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
Execution Driver:
Kernel Version:
Operating System:
CPUs: 3
Total Memory: 6.158 GiB
Name:
ID:
Http Proxy:
Https Proxy:
No Proxy:

This information shows that manager-1 is the current primary and supplies the address to use to
contact this primary.

Test the failover mechanism


To test the failover mechanism, you shut down the designated primary manager. Issue a
Ctrl-C or kill the current primary manager (manager-1) to shut it down.

Wait for automated failover


After a short time, the other instances detect the failure and a replica takes the lead to become the
primary manager.

For example, look at manager-2’s logs:


user@manager-2 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise
192.168.42.201:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
INFO[0038] New leader elected: 192.168.42.201:4000
INFO[0038] Cluster leadership acquired <--- We have been elected as the
new Primary Manager
[...]

Because the primary manager, manager-1, failed right after it was elected, the replica with the
address 192.168.42.201:4000, manager-2, recognized the failure and took over the lead. Because
manager-2 was fast enough, it was elected as the new primary manager of the cluster.
If we take a look at manager-3 we should see these logs:
user@manager-3 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise
192.168.42.202:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
INFO[0036] New leader elected: 192.168.42.201:4000 <--- manager-3 sees the new
Primary Manager
[...]

At this point, we need to export the new DOCKER_HOST value.

Switch the primary


To switch the DOCKER_HOST to use manager-2 as the primary, you do the following:
user@my-machine $ export DOCKER_HOST=192.168.42.201:4000 # Points to manager-2
user@my-machine $ docker info
Containers: 0
Images: 25
Storage Driver:
Role: Primary <--------- manager-2 is the Primary manager
Primary: 192.168.42.201
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 3

You can use the docker command on any swarm manager or any replica.
If you like, you can use custom mechanisms to always point DOCKER_HOST to the current primary
manager. Then, you never lose contact with your swarm in the event of a failover.
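One such custom mechanism is a small script that asks each manager for its role and points DOCKER_HOST at whichever reports Primary. A sketch; the role parsing is shown standalone, and the loop over managers is left as comments because it needs a live cluster (addresses follow the example above):

```shell
# Extract the value of the "Role:" line from `docker info` output.
parse_role() {
  awk -F': ' '/^Role:/ { print $2 }'
}

# Against a live cluster (requires reachable managers):
#   for m in 192.168.42.200:4000 192.168.42.201:4000 192.168.42.202:4000; do
#     if [ "$(docker -H "$m" info 2>/dev/null | parse_role)" = "Primary" ]; then
#       export DOCKER_HOST=$m
#       break
#     fi
#   done

# Standalone demonstration with canned `docker info` output:
printf 'Containers: 0\nRole: Primary\n' | parse_role    # prints "Primary"
```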

Swarm and container networks


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

Docker Swarm is fully compatible with Docker’s networking features. This includes the multi-host
networking feature which allows creation of custom container networks that span multiple Docker
hosts.

Before using Swarm with a custom network, read through the conceptual information in Docker
container networking. You should also have walked through the Get started with multi-host
networking example.

Create a custom network in a Swarm cluster


Multi-host networks require a key-value store. The key-value store holds information about the
network state, including discovery, networks, endpoints, IP addresses, and more. Through Docker’s
libkv project, Docker supports Consul, Etcd, and ZooKeeper key-value store backends. For
details about the supported backends, refer to the libkv project.

To create a custom network, you must choose a key-value store backend and implement it on your
network. Then, you configure the Docker Engine daemon to use this store. Two required
parameters, --cluster-store and --cluster-advertise, refer to your key-value store server.

Once you’ve configured and restarted the daemon on each Swarm node, you are ready to create a
network.
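A sketch of that daemon configuration, assuming the Consul server used in earlier examples and eth0 as the cluster-facing interface (both are assumptions to adjust for your environment):

```shell
# Daemon flags for multi-host networking (values are examples).
# On older Engines the binary may be invoked as `docker daemon` instead.
dockerd \
  --cluster-store=consul://192.168.42.10:8500 \
  --cluster-advertise=eth0:2376
```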

List networks
This example assumes there are two nodes node-0 and node-1 in the cluster. From a Swarm node,
list the networks:
$ docker network ls
NETWORK ID NAME DRIVER
3dd50db9706d node-0/host host
09138343e80e node-0/bridge bridge
8834dbd552e5 node-0/none null
45782acfe427 node-1/host host
8926accb25fd node-1/bridge bridge
6382abccd23d node-1/none null

As you can see, each network name is prefixed by the node name.
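Since every name returned by docker network ls carries a `<node>/` prefix under Swarm, splitting it back apart is a simple string operation; a pure-shell sketch (not part of Swarm itself):

```shell
# Split a Swarm-prefixed network name into its node and network parts.
split_network_name() {
  name=$1
  # ${name%%/*} keeps everything before the first "/";
  # ${name#*/} keeps everything after it.
  echo "node=${name%%/*} network=${name#*/}"
}

split_network_name "node-0/swarm_network"   # prints "node=node-0 network=swarm_network"
```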

Create a network
By default, Swarm uses the overlay network driver, a global-scope network driver. A global-scope
network driver creates a network across an entire Swarm cluster. When you create
an overlay network under Swarm, you can omit the -d option:
$ docker network create swarm_network
42131321acab3233ba342443Ba4312
$ docker network ls
NETWORK ID NAME DRIVER
3dd50db9706d node-0/host host
09138343e80e node-0/bridge bridge
8834dbd552e5 node-0/none null
42131321acab node-0/swarm_network overlay
45782acfe427 node-1/host host
8926accb25fd node-1/bridge bridge
6382abccd23d node-1/none null
42131321acab node-1/swarm_network overlay

As you can see here, both the node-0/swarm_network and the node-1/swarm_network have the same
ID. This is because when you create a network on the cluster, it is accessible from all the nodes.
To create a local-scope network (for example, with the bridge network driver) you should
use <node>/<name>, otherwise your network is created on a random node.
$ docker network create node-0/bridge2 -d bridge
921817fefea521673217123abab223
$ docker network create node-1/bridge2 -d bridge
5262bbfe5616fef6627771289aacc2
$ docker network ls
NETWORK ID NAME DRIVER
3dd50db9706d node-0/host host
09138343e80e node-0/bridge bridge
8834dbd552e5 node-0/none null
42131321acab node-0/swarm_network overlay
921817fefea5 node-0/bridge2 bridge
45782acfe427 node-1/host host
8926accb25fd node-1/bridge bridge
6382abccd23d node-1/none null
42131321acab node-1/swarm_network overlay
5262bbfe5616 node-1/bridge2 bridge

--opt encrypted is a feature only available in Docker Swarm mode. It’s not supported in Swarm
standalone. Network encryption requires key management, which is outside the scope of Swarm.

Remove a network
To remove a network you can use its ID or its name. If two different networks have the same name,
include the <node> value:
$ docker network rm swarm_network
42131321acab3233ba342443Ba4312
$ docker network rm node-0/bridge2
921817fefea521673217123abab223
$ docker network ls
NETWORK ID NAME DRIVER
3dd50db9706d node-0/host host
09138343e80e node-0/bridge bridge
8834dbd552e5 node-0/none null
45782acfe427 node-1/host host
8926accb25fd node-1/bridge bridge
6382abccd23d node-1/none null
5262bbfe5616 node-1/bridge2 bridge

The swarm_network was removed from every node. The bridge2 was removed only from node-0.

Docker Swarm discovery


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

Docker Swarm comes with multiple discovery backends. You use a hosted discovery service with
Docker Swarm. The service maintains a list of IPs in your cluster. This page describes the different
types of hosted discovery.

Use a distributed key/value store


The recommended way to do node discovery in Swarm is Docker’s libkv project. The libkv project is
an abstraction layer over existing distributed key/value stores. As of this writing, the project supports:
 Consul 0.5.1 or higher
 Etcd 2.0 or higher
 ZooKeeper 3.4.5 or higher

For details about libkv and a detailed technical overview of the supported backends, refer to the libkv
project.

Use a hosted discovery key store


1. On each node, start the Swarm agent.

The node IP address doesn’t need to be public as long as the Swarm manager can access it.
In a large cluster, the nodes joining the swarm may trigger request spikes to discovery. For
example, a large number of nodes may be added by a script, or recovered from a network
partition. This may result in discovery failure. You can use the --delay option to specify a delay
limit. The swarm join command adds a random delay less than this limit to reduce pressure
on discovery.

Etcd:

swarm join --advertise=<node_ip:2375> etcd://<etcd_addr1>,<etcd_addr2>/<optional path prefix>

Consul:

swarm join --advertise=<node_ip:2375> consul://<consul_addr>/<optional path prefix>

ZooKeeper:

swarm join --advertise=<node_ip:2375> zk://<zookeeper_addr1>,<zookeeper_addr2>/<optional path prefix>
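The --delay limit described above is passed on the join command itself; a sketch using the Consul form, where the 30s value and the addresses are placeholders:

```shell
# Stagger joins by a random delay of up to 30 seconds to avoid a
# thundering herd against the discovery backend.
swarm join --advertise=<node_ip:2375> --delay 30s \
  consul://<consul_addr>/<optional path prefix>
```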

2. Start the swarm manager on any machine or your laptop.

Etcd:

swarm manage -H tcp://<swarm_ip:swarm_port> etcd://<etcd_addr1>,<etcd_addr2>/<optional path prefix>

Consul:

swarm manage -H tcp://<swarm_ip:swarm_port> consul://<consul_addr>/<optional path prefix>
ZooKeeper:

swarm manage -H tcp://<swarm_ip:swarm_port> zk://<zookeeper_addr1>,<zookeeper_addr2>/<optional path prefix>

3. Use the regular Docker commands.

   docker -H tcp://<swarm_ip:swarm_port> info
   docker -H tcp://<swarm_ip:swarm_port> run ...
   docker -H tcp://<swarm_ip:swarm_port> ps
   docker -H tcp://<swarm_ip:swarm_port> logs ...
   ...

4. Try listing the nodes in your cluster.

   Etcd:

   swarm list etcd://<etcd_addr1>,<etcd_addr2>/<optional path prefix>
   <node_ip:2375>

   Consul:

   swarm list consul://<consul_addr>/<optional path prefix>
   <node_ip:2375>

   ZooKeeper:

   swarm list zk://<zookeeper_addr1>,<zookeeper_addr2>/<optional path prefix>
   <node_ip:2375>

Use TLS with distributed key/value discovery


You can securely talk to the distributed k/v store using TLS. To connect securely to the store, you
must generate the certificates for a node when you join it to the swarm. You can only use TLS with
Consul and Etcd. The following example illustrates this with Consul:
swarm join \
--advertise=<node_ip:2375> \
--discovery-opt kv.cacertfile=/path/to/mycacert.pem \
--discovery-opt kv.certfile=/path/to/mycert.pem \
--discovery-opt kv.keyfile=/path/to/mykey.pem \
consul://<consul_addr>/<optional path prefix>

This works the same way for the swarm manage and list commands.

A static file or list of nodes


Note: This discovery method is incompatible with replicating swarm managers. If you require
replication, you should use a hosted discovery key store.

You can use a static file or list of nodes for your discovery backend. The file must be stored on a
host that is accessible from the swarm manager. You can also pass a node list as an option when
you start Swarm.

Both the static file and the nodes option support an IP address range. To specify a range, supply a
pattern; for example, 10.0.0.[10:200] refers to nodes from 10.0.0.10 to 10.0.0.200. For
example, with the file discovery method:
$ echo "10.0.0.[11:100]:2375" >> /tmp/my_cluster
$ echo "10.0.1.[15:20]:2375" >> /tmp/my_cluster
$ echo "192.168.1.2:[2:20]375" >> /tmp/my_cluster
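The bracket pattern is purely an address-range shorthand; expanding one range into concrete node addresses looks like this (a pure-shell sketch for illustration, not part of Swarm itself):

```shell
# Expand a pattern like 10.0.0.[10:12]:2375 into individual addresses.
expand_range() {
  prefix=$1 start=$2 end=$3 port=$4
  for i in $(seq "$start" "$end"); do
    echo "${prefix}${i}:${port}"
  done
}

expand_range "10.0.0." 10 12 2375
# prints:
#   10.0.0.10:2375
#   10.0.0.11:2375
#   10.0.0.12:2375
```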

Or with node discovery:

swarm manage -H <swarm_ip:swarm_port> "nodes://10.0.0.[10:200]:2375,10.0.1.[2:250]:2375"

To create a file

1. Edit the file and add a line for each of your nodes.

   echo <node_ip1:2375> >> /tmp/my_cluster
   echo <node_ip2:2375> >> /tmp/my_cluster
   echo <node_ip3:2375> >> /tmp/my_cluster

   This example creates a file named /tmp/my_cluster. You can use any name you like.

2. Start the swarm manager on any machine.

   swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster

3. Use the regular Docker commands.

   docker -H tcp://<swarm_ip:swarm_port> info
   docker -H tcp://<swarm_ip:swarm_port> run ...
   docker -H tcp://<swarm_ip:swarm_port> ps
   docker -H tcp://<swarm_ip:swarm_port> logs ...
   ...

4. List the nodes in your cluster.

   $ swarm list file:///tmp/my_cluster
   <node_ip1:2375>
   <node_ip2:2375>
   <node_ip3:2375>

To use a node list

1. Start the manager on any machine or your laptop.

   swarm manage -H <swarm_ip:swarm_port> nodes://<node_ip1:2375>,<node_ip2:2375>

   or

   swarm manage -H <swarm_ip:swarm_port> <node_ip1:2375>,<node_ip2:2375>

2. Use the regular Docker commands.

   docker -H <swarm_ip:swarm_port> info
   docker -H <swarm_ip:swarm_port> run ...
   docker -H <swarm_ip:swarm_port> ps
   docker -H <swarm_ip:swarm_port> logs ...

3. List the nodes in your cluster.

   $ swarm list file:///tmp/my_cluster
   <node_ip1:2375>
   <node_ip2:2375>
   <node_ip3:2375>
Docker Hub as a hosted discovery service
Deprecation Notice
The Docker Hub Hosted Discovery Service will be removed on June 19th, 2019. Please switch to
one of the other discovery mechanisms. Several brownouts of the service will take place in the
weeks leading up to the removal in order for users to find places where this is still used and give
them time to prepare.
Warning: The Docker Hub Hosted Discovery Service is not recommended for production use. It’s
intended to be used for testing/development. See the discovery backends for production use.

This example uses the hosted discovery service on Docker Hub. Using Docker Hub’s hosted
discovery service requires that each node in the swarm is connected to the public internet. To create
your cluster:

1. Create a cluster.

   $ swarm create
   6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>

2. Create each node and join them to the cluster.

   On each of your nodes, start the swarm agent. The node IP address doesn’t need to be
   public (e.g. 192.168.0.X) but the swarm manager must be able to access it.

   $ swarm join --advertise=<node_ip:2375> token://<cluster_id>

3. Start the swarm manager.

   This can be on any machine or even your laptop.

   $ swarm manage -H tcp://<swarm_ip:swarm_port> token://<cluster_id>

4. Use regular Docker commands to interact with your cluster.

   docker -H tcp://<swarm_ip:swarm_port> info
   docker -H tcp://<swarm_ip:swarm_port> run ...
   docker -H tcp://<swarm_ip:swarm_port> ps
   docker -H tcp://<swarm_ip:swarm_port> logs ...
   ...

5. List the nodes in your cluster.

   swarm list token://<cluster_id>
   <node_ip:2375>

Contribute a new discovery backend


You can contribute a new discovery backend to Swarm. For information on how to do this,
see github.com/moby/moby/pkg/discovery.

Provision a Swarm cluster with Docker Machine

You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

You can use Docker Machine to provision a Docker Swarm cluster. Machine is the Docker
provisioning tool. Machine provisions the hosts, installs Docker Engine on them, and then configures
the Docker CLI client. With Machine’s Swarm options, you can also quickly configure a Swarm
cluster as part of this provisioning.

This page explains the commands you need to provision a basic Swarm cluster on a local Mac or
Windows computer using Machine. Once you understand the process, you can use it to set up a
Swarm cluster on a cloud provider, or inside your company’s data center.

If this is the first time you are creating a Swarm cluster, you should first learn about Swarm and its
requirements by installing a Swarm for evaluation or installing a Swarm for production. If this is the
first time you have used Machine, you should take some time to understand Machine before
continuing.

What you need


If you are using macOS or Windows and have installed with Docker Toolbox, you should already
have Machine installed. If you need to install, see the instructions for macOS or Windows.

Machine supports installing on AWS, DigitalOcean, Google Cloud Platform, IBM Softlayer, Microsoft
Azure and Hyper-V, OpenStack, Rackspace, VirtualBox, VMware Fusion®, vCloud® Air™ and
vSphere®. This example uses VirtualBox to run several VMs based on the boot2docker.iso image.
This image is a small-footprint Linux distribution for running Engine.
The Toolbox installation gives you VirtualBox and the boot2docker.iso image you need. It also gives
you the ability to provision on all the systems Machine supports.

Note: These examples assume you are using macOS or Windows. If you like, you can also install
Docker Machine directly on a Linux system.

Provision a host to generate a Swarm token


Before you can configure a Swarm, you start by provisioning a host with Engine. Open a terminal on
the host where you installed Machine. Then, to provision a host called local, do the following:
docker-machine create -d virtualbox local

This example uses VirtualBox but it could easily be DigitalOcean or a host on your data center.
The local value is the host name. Once you create it, configure your terminal’s shell environment to
interact with the local host.
eval "$(docker-machine env local)"

Each Swarm host has a token installed into its Engine configuration. The token allows the Swarm
discovery backend to recognize a node as belonging to a particular Swarm cluster. Create the token
for your cluster by running the swarm image:
docker run swarm create
Unable to find image 'swarm' locally
1.1.0-rc2: Pulling from library/swarm
892cb307750a: Pull complete
fe3c9860e6d5: Pull complete
cc01ef3f1fbc: Pull complete
b7e14a9c9c72: Pull complete
3ec746117013: Pull complete
703cb7acfce6: Pull complete
d4f6bb678158: Pull complete
2ad500e1bf96: Pull complete
Digest: sha256:f02993cd1afd86b399f35dc7ca0240969e971c92b0232a8839cf17a37d6e7009
Status: Downloaded newer image for swarm
0de84fa62a1d9e9cc2156111f63ac31f

The output of the swarm create command is a cluster token. Copy the token to a safe place. Once
you have the token, you can provision the swarm nodes and join them to the cluster. The rest of
this documentation refers to this token as the SWARM_CLUSTER_TOKEN.
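If you script the provisioning, you can capture the token directly into a shell variable. A sketch; it requires a working local daemon and network access to run the swarm image:

```shell
# Capture the cluster token for reuse in later docker-machine commands.
SWARM_CLUSTER_TOKEN=$(docker run --rm swarm create)
echo "Using discovery token://$SWARM_CLUSTER_TOKEN"
```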

Provision swarm nodes


All swarm nodes in a cluster must have Engine installed. With Machine and
the SWARM_CLUSTER_TOKEN you can provision a host with Engine and configure it as a swarm node with
one Machine command. To create a swarm manager node on a new VM called swarm-manager, you
do the following:
docker-machine create \
-d virtualbox \
--swarm \
--swarm-master \
--swarm-discovery token://SWARM_CLUSTER_TOKEN \
swarm-manager

Then, provision an additional node. You must supply the SWARM_CLUSTER_TOKEN and a unique name
for each host node, HOST_NODE_NAME.
docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery token://SWARM_CLUSTER_TOKEN \
HOST_NODE_NAME

For example, you might use node-01 as the HOST_NODE_NAME in the previous example.
Note: These commands rely on Docker Swarm’s hosted discovery service, Docker Hub. If Docker
Hub or your network is having issues, these commands may fail. Check the Docker Hub status
page for service availability. If the problem is Docker Hub, you can wait for it to recover or configure
other types of discovery backends.

Connect node environments with Machine


If you are connecting to a typical host environment with Machine, you use the env subcommand, like
this:
eval "$(docker-machine env local)"

Docker Machine provides a special --swarm flag with its env command to connect to swarm nodes.
docker-machine env --swarm HOST_NODE_NAME
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:3376"
export DOCKER_CERT_PATH="/Users/mary/.docker/machine/machines/swarm-manager"
export DOCKER_MACHINE_NAME="swarm-manager"
# Run this command to configure your shell:
# eval $(docker-machine env --swarm HOST_NODE_NAME)

To set your shell to connect to a swarm node called swarm-manager, you would do this:
eval "$(docker-machine env --swarm swarm-manager)"

Now, you can use the Docker CLI to query and interact with your cluster.

docker info
Containers: 2
Images: 1
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 1
swarm-manager: 192.168.99.101:2376
└ Status: Healthy
└ Containers: 2
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.13-boot2docker,
operatingsystem=Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59
UTC 2015, provider=virtualbox, storagedriver=aufs
CPUs: 1
Total Memory: 1.021 GiB
Name: swarm-manager

Swarm filters
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

Filters tell the Docker Swarm scheduler which nodes to use when creating and running a container.

Configure the available filters


Filters are divided into two categories, node filters and container configuration filters. Node filters
operate on characteristics of the Docker host or on the configuration of the Docker daemon.
Container configuration filters operate on characteristics of containers, or on the availability of
images on a host.

Each filter has a name that identifies it. The node filters are:

 constraint
 health
 containerslots

The container configuration filters are:

 affinity
 dependency
 port

When you start a Swarm manager with the swarm manage command, all the filters are enabled. If you
want to limit the filters available to your Swarm, specify a subset of filters by passing
the --filter flag and the name:
$ swarm manage --filter=health --filter=dependency

Note: Container configuration filters match all containers, including stopped containers, when
applying the filter. To release a node used by a container, you must remove the container from the
node.

Node filters
When creating a container or building an image, you use a constraint or health filter to select a
subset of nodes to consider for scheduling. If a node in the Swarm cluster has a label with the
key containerslots and a numeric value, Swarm does not launch more containers on that node than
the given number.

Use a constraint filter


Node constraints can refer to Docker’s default tags or to custom labels. Default tags are sourced
from docker info. Often, they relate to properties of the Docker host. Currently, the default tags
include:

 node to refer to the node by ID or name


 storagedriver
 executiondriver
 kernelversion
 operatingsystem

You apply custom node labels when you start the Docker daemon, for example:
$ docker daemon --label com.example.environment="production" --label
com.example.storage="ssd"

Then, when you start a container on the cluster, you can set constraints using these default tags or
custom labels. The Swarm scheduler looks for a matching node on the cluster and starts the container
there. This approach has several practical applications:

 Schedule based on specific host properties, for example, storage=ssd schedules containers
on specific hardware.
 Force containers to run in a given location, for example region=us-east.
 Create logical cluster partitions by splitting a cluster into sub-clusters with different
properties, for example environment=production.
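As a simplified sketch of what a hard constraint does (the data layout here is hypothetical, not Swarm's internal code), the scheduler keeps only the nodes whose default tags or custom labels match the requested value:

```python
def nodes_matching(nodes, key, value):
    """Keep only nodes whose default tag or custom label equals the value."""
    return [n["name"] for n in nodes if n["labels"].get(key) == value]

nodes = [
    {"name": "node-1", "labels": {"storage": "ssd", "region": "us-east"}},
    {"name": "node-2", "labels": {"storage": "disk", "region": "us-east"}},
]
print(nodes_matching(nodes, "storage", "ssd"))     # ['node-1']
print(nodes_matching(nodes, "region", "us-east"))  # ['node-1', 'node-2']
```

The container is then scheduled on one of the surviving nodes according to the active strategy.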

EXAMPLE NODE CONSTRAINTS


To specify a custom label for a node, pass a list of --label options at Docker daemon startup time. For
instance, to start node-1 with the storage=ssd label:
$ docker daemon --label storage=ssd
$ swarm join --advertise=192.168.0.42:2375 token://XXXXXXXXXXXXXXXXXX

You might start a different node-2 with storage=disk:


$ docker daemon --label storage=disk
$ swarm join --advertise=192.168.0.43:2375 token://XXXXXXXXXXXXXXXXXX

Once the nodes are joined to a cluster, the Swarm manager pulls their respective tags. Moving
forward, the manager takes the tags into account when scheduling new containers.

Continuing the previous example, assuming your cluster contains node-1 and node-2, you can run a
MySQL server container on the cluster. When you run the container, you can use a constraint to
ensure the database gets good I/O performance. You do this by filtering for nodes with flash drives:
$ docker -H tcp://<manager_ip:manager_port> run -d -P -e constraint:storage==ssd --name db mysql
f8b693db9cd6

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago
running 192.168.0.42:49178->3306/tcp node-1/db

In this example, the manager selected all nodes that met the storage=ssd constraint and applied
resource management on top of them. Only node-1 was selected because it’s the only host with
flash storage.

Suppose you want to run an Nginx frontend in a cluster. In this case, you wouldn’t want flash drives
because the frontend mostly writes logs to disk.

$ docker -H tcp://<manager_ip:manager_port> run -d -P -e constraint:storage==disk --name frontend nginx
963841b138d8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago
running 192.168.0.43:49177->80/tcp node-2/frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute
running 192.168.0.42:49178->3306/tcp node-1/db

The scheduler selected node-2 since it was started with the storage=disk label.

Finally, build args can be used to apply node constraints to a docker build. This example shows
how to avoid flash drives.
$ mkdir sinatra
$ cd sinatra
$ echo "FROM ubuntu:14.04" > Dockerfile
$ echo "RUN apt-get update && apt-get install -y ruby ruby-dev" >> Dockerfile
$ echo "RUN gem install sinatra" >> Dockerfile
$ docker build --build-arg=constraint:storage==disk -t ouruser/sinatra:v2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ubuntu:14.04
---> a5a467fddcb8
Step 2 : RUN apt-get update && apt-get install -y ruby ruby-dev
---> Running in 26c9fbc55aeb
---> 30681ef95fff
Removing intermediate container 26c9fbc55aeb
Step 3 : RUN gem install sinatra
---> Running in 68671d4a17b0
---> cd70495a1514
Removing intermediate container 68671d4a17b0
Successfully built cd70495a1514

$ docker image ls
REPOSITORY            TAG        IMAGE ID        CREATED          SIZE
dockerswarm/swarm     manager    8c2c56438951    2 days ago       795.7 MB
ouruser/sinatra       v2         cd70495a1514    35 seconds ago   318.7 MB
ubuntu                14.04      a5a467fddcb8    11 days ago      187.9 MB

Use the health filter


The node health filter prevents the scheduler from running containers on unhealthy nodes. A node is
considered unhealthy if the node is down or it can’t communicate with the cluster store.

Use the containerslots filter


You may give your Docker nodes the containerslots label:

$ docker daemon --label containerslots=3

Swarm runs up to 3 containers on this node. If all nodes are “full”, an error is thrown indicating that
no suitable node can be found. If the value cannot be cast to an integer or is not present, there is no
limit on the number of containers.
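A sketch of the admission check this label implies (the field names here are hypothetical, not Swarm's internal structures):

```python
def node_accepts(node):
    """True if the node may take one more container under its containerslots label."""
    raw = node["labels"].get("containerslots")
    try:
        limit = int(raw)
    except (TypeError, ValueError):
        return True          # label missing or not an integer: no limit applies
    return node["containers"] < limit

print(node_accepts({"labels": {"containerslots": "3"}, "containers": 2}))  # True
print(node_accepts({"labels": {"containerslots": "3"}, "containers": 3}))  # False
print(node_accepts({"labels": {}, "containers": 99}))                      # True
```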

Container filters
When creating a container, you can use three types of container filters:

 affinity
 dependency
 port

Use an affinity filter


Use an affinity filter to create “attractions” between containers. For example, you can run a
container and instruct Swarm to schedule it next to another container based on these affinities:

 container name or ID
 an image on the host
 a custom label applied to the container

These affinities ensure that containers run on the same network node — without you having to know
what each node is running.

EXAMPLE NAME AFFINITY

You can schedule a new container to run next to another based on a container name or ID. For
example, you can start a container called frontend running nginx:
$ docker -H tcp://<manager_ip:manager_port> run -d -p 80:80 --name frontend nginx
87c4376856a8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:80->80/tcp node-1/frontend

Then, use -e affinity:container==frontend to schedule a second container to locate and
run next to the container named frontend.
$ docker -H tcp://<manager_ip:manager_port> run -d --name logger -e affinity:container==frontend logger
87c4376856a8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:80->80/tcp node-1/frontend
963841b138d8 logger:latest "logger" Less than a second ago
running node-1/logger

Because of name affinity, the logger container ends up on node-1 along with the frontend container.
Instead of the frontend name you could have supplied its ID as follows:
$ docker -H tcp://<manager_ip:manager_port> run -d --name logger -e affinity:container==87c4376856a8 logger

EXAMPLE IMAGE AFFINITY

You can schedule a container to run only on nodes where a specific image is already pulled. For
example, suppose you pull a redis image to two hosts and a mysql image to a third.
$ docker -H node-1:2375 pull redis
$ docker -H node-2:2375 pull mysql
$ docker -H node-3:2375 pull redis
Only node-1 and node-3 have the redis image. Specify a -e affinity:image==redis filter to
schedule several additional containers to run on these nodes.
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis1 -e affinity:image==redis redis
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis2 -e affinity:image==redis redis
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis3 -e affinity:image==redis redis
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis4 -e affinity:image==redis redis
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis5 -e affinity:image==redis redis
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis6 -e affinity:image==redis redis
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis7 -e affinity:image==redis redis
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis8 -e affinity:image==redis redis

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 redis:latest "redis" Less than a second ago
running node-1/redis1
1212386856a8 redis:latest "redis" Less than a second ago
running node-1/redis2
87c4376639a8 redis:latest "redis" Less than a second ago
running node-3/redis3
1234376856a8 redis:latest "redis" Less than a second ago
running node-1/redis4
86c2136253a8 redis:latest "redis" Less than a second ago
running node-3/redis5
87c3236856a8 redis:latest "redis" Less than a second ago
running node-3/redis6
87c4376856a8 redis:latest "redis" Less than a second ago
running node-3/redis7
963841b138d8 redis:latest "redis" Less than a second ago
running node-1/redis8
As you can see here, the containers were only scheduled on nodes that had the redis image.
Instead of the image name, you could have specified the image ID.
$ docker image ls
REPOSITORY          TAG        IMAGE ID        CREATED       VIRTUAL SIZE
redis               latest     06a1f75304ba    2 days ago    111.1 MB

$ docker -H tcp://<manager_ip:manager_port> run -d --name redis1 -e affinity:image==06a1f75304ba redis

EXAMPLE LABEL AFFINITY

A label affinity allows you to filter based on a custom container label. For example, you can run
an nginx container and apply the com.example.type=frontend custom label.
$ docker -H tcp://<manager_ip:manager_port> run -d -p 80:80 --label com.example.type=frontend nginx
87c4376856a8

$ docker -H tcp://<manager_ip:manager_port> ps --filter "label=com.example.type=frontend"
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:80->80/tcp node-1/trusting_yonath

Then, use -e affinity:com.example.type==frontend to schedule a container next to the container
with the com.example.type==frontend label.
$ docker -H tcp://<manager_ip:manager_port> run -d -e affinity:com.example.type==frontend logger
87c4376856a8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:80->80/tcp node-1/trusting_yonath
963841b138d8 logger:latest "logger" Less than a second ago
running node-1/happy_hawking

The logger container ends up on node-1 because of its affinity with
the com.example.type==frontend label.

Use a dependency filter


A container dependency filter co-schedules dependent containers on the same node. Currently,
dependencies are declared as follows:

 --volumes-from=dependency (shared volumes)
 --link=dependency:alias (links)
 --net=container:dependency (shared network stacks)

Swarm attempts to co-locate the dependent container on the same node. If it cannot do so
(because the dependent container doesn’t exist, or because the node doesn’t have enough
resources), it prevents the container creation.

The combination of multiple dependencies is honored if possible. For instance, if you specify
--volumes-from=A --net=container:B, the scheduler attempts to co-locate the container on the same
node as A and B. If those containers are running on different nodes, Swarm does not schedule the
container.
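A sketch of this co-location rule (the structures below are hypothetical, not Swarm's scheduler code):

```python
def resolve_dependency_node(deps, placements):
    """Return the single node all dependencies share, or None to refuse scheduling."""
    if any(d not in placements for d in deps):
        return None                  # a dependent container doesn't exist
    nodes = {placements[d] for d in deps}
    if len(nodes) != 1:
        return None                  # dependencies live on different nodes
    return nodes.pop()

placements = {"A": "node-1", "B": "node-2"}
print(resolve_dependency_node(["A"], placements))       # node-1
print(resolve_dependency_node(["A", "B"], placements))  # None
```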

Use a port filter


When the port filter is enabled, a container’s port configuration is used as a unique constraint.
Docker Swarm selects a node where a particular port is available and unoccupied by another
container or process. Required ports may be specified by mapping a host port, or using the host
networking and exposing a port using the container configuration.
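A sketch of that selection (hypothetical structures; real Swarm tracks mapped and exposed ports per node):

```python
def pick_node_for_port(nodes, host_port):
    """Pick the first node where the requested host port is unclaimed."""
    for node in nodes:
        if host_port not in node["used_ports"]:
            node["used_ports"].add(host_port)   # the container now owns the port
            return node["name"]
    return None                                 # no resources available

nodes = [{"name": f"node-{i}", "used_ports": set()} for i in (1, 2, 3)]
print([pick_node_for_port(nodes, 80) for _ in range(4)])
# ['node-1', 'node-2', 'node-3', None]
```

The None case corresponds to the "no resources available to schedule container" error shown in the example below.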

EXAMPLE IN BRIDGE MODE

By default, containers run on Docker’s bridge network. To use the port filter with the bridge network,
you run a container as follows.
$ docker -H tcp://<manager_ip:manager_port> run -d -p 80:80 nginx
87c4376856a8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp node-
1/prickly_engelbart

Docker Swarm selects a node where port 80 is available and unoccupied by another container or
process, in this case node-1. Attempting to run another container that uses the host port 80 results in
Swarm selecting a different node, because port 80 is already occupied on node-1:
$ docker -H tcp://<manager_ip:manager_port> run -d -p 80:80 nginx
963841b138d8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND PORTS
NAMES
963841b138d8 nginx:latest "nginx" 192.168.0.43:80->80/tcp
node-2/dreamy_turing
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp
node-1/prickly_engelbart

Again, repeating the same command results in the selection of node-3, since port 80 is neither
available on node-1 nor node-2:
$ docker -H tcp://<manager_ip:manager_port> run -d -p 80:80 nginx
963841b138d8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND PORTS
NAMES
f8b693db9cd6 nginx:latest "nginx" 192.168.0.44:80->80/tcp
node-3/stoic_albattani
963841b138d8 nginx:latest "nginx" 192.168.0.43:80->80/tcp
node-2/dreamy_turing
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp
node-1/prickly_engelbart

Finally, Docker Swarm refuses to run another container that requires port 80, because it is not
available on any node in the cluster:
$ docker -H tcp://<manager_ip:manager_port> run -d -p 80:80 nginx
2014/10/29 00:33:20 Error response from daemon: no resources available to schedule
container
Each container occupies port 80 on its residing node when the container is created and releases the
port when the container is deleted. A container in the exited state still owns the port.
If prickly_engelbart on node-1 is stopped but not deleted, trying to start another container on
node-1 that requires port 80 would fail because port 80 is associated with prickly_engelbart. To
increase running instances of nginx, you can either restart prickly_engelbart, or start another
container after deleting prickly_engelbart.
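A sketch of that ownership rule, in which container state is deliberately ignored (hypothetical structures):

```python
def port_available(node, host_port, containers):
    """A port is taken by any created container on the node, running or exited."""
    return all(
        c["node"] != node or host_port not in c["ports"]
        for c in containers              # note: c["state"] is never consulted
    )

containers = [{"name": "prickly_engelbart", "node": "node-1",
               "ports": {80}, "state": "exited"}]
print(port_available("node-1", 80, containers))  # False: exited still owns port 80
print(port_available("node-2", 80, containers))  # True
```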

NODE PORT FILTER WITH HOST NETWORKING

A container running with --net=host differs from the default bridge mode in that the host mode does
not perform any port binding. Instead, host mode requires that you explicitly expose one or more port
numbers. You expose a port using EXPOSE in the Dockerfile or --expose on the command line.
Swarm makes use of this information in conjunction with the host mode to choose an available node
for a new container.
For example, the following commands start nginx on a 3-node cluster.
$ docker -H tcp://<manager_ip:manager_port> run -d --expose=80 --net=host nginx
640297cb29a7
$ docker -H tcp://<manager_ip:manager_port> run -d --expose=80 --net=host nginx
7ecf562b1b3f
$ docker -H tcp://<manager_ip:manager_port> run -d --expose=80 --net=host nginx
09a92f582bc2

Port binding information is not available through the docker ps command because all the nodes
were started with the host network.
$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
640297cb29a7 nginx:1 "nginx -g 'daemon of Less than a second ago
Up 30 seconds box3/furious_heisenberg
7ecf562b1b3f nginx:1 "nginx -g 'daemon of Less than a second ago
Up 28 seconds box2/ecstatic_meitner
09a92f582bc2 nginx:1 "nginx -g 'daemon of 46 seconds ago
Up 27 seconds box1/mad_goldstine

Swarm refuses the operation when trying to instantiate the 4th container.

$ docker -H tcp://<manager_ip:manager_port> run -d --expose=80 --net=host nginx
FATA[0000] Error response from daemon: unable to find a node with port 80/tcp
available in the Host mode

However, binding to a different host port, for example 81, is still allowed.
$ docker -H tcp://<manager_ip:manager_port> run -d -p 81:80 nginx:latest
832f42819adc
$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
832f42819adc nginx:1 "nginx -g 'daemon of Less than a second ago
Up Less than a second 443/tcp, 192.168.136.136:81->80/tcp box3/thirsty_hawking
640297cb29a7 nginx:1 "nginx -g 'daemon of 8 seconds ago
Up About a minute box3/furious_heisenberg
7ecf562b1b3f nginx:1 "nginx -g 'daemon of 13 seconds ago
Up About a minute box2/ecstatic_meitner
09a92f582bc2 nginx:1 "nginx -g 'daemon of About a minute ago
Up About a minute box1/mad_goldstine

How to write filter expressions


To apply node constraint or container affinity filters, you must set environment variables on the
container using filter expressions, for example:
$ docker -H tcp://<manager_ip:manager_port> run -d --name redis1 -e affinity:image==~redis redis

Each expression must be in the form:

<filter-type>:<key><operator><value>

The <filter-type> is either the affinity or the constraint keyword. It identifies the type of filter you
intend to use.
The <key> is an alpha-numeric string and must start with a letter or underscore. The <key> corresponds
to one of the following:

 the container keyword
 the node keyword
 a default tag (node constraints)
 a custom metadata label (nodes or containers).

The <operator> is either == or !=. By default, expression operators are hard enforced. If an
expression is not met exactly, the manager does not schedule the container. You can use a ~ (tilde)
to create a “soft” expression. The scheduler tries to match a soft expression. If the expression is not
met, the scheduler discards the filter and schedules the container according to the scheduler’s
strategy.
The <value> is a string of alpha-numeric characters, dots, hyphens, and underscores, taking one of
the following forms:

 A globbing pattern, for example, abc*.
 A regular expression in the form of /regexp/. See re2 syntax for the supported regex syntax.

The following examples illustrate some possible expressions:

 constraint:node==node1 matches node node1.
 constraint:node!=node1 matches all nodes, except node1.
 constraint:region!=us* matches all nodes whose region tag does not start with us.
 constraint:node==/node[12]/ matches nodes node1 and node2.
 constraint:node==/node\d/ matches all nodes named node plus one digit.
 constraint:node!=/node-[01]/ matches all nodes, except node-0 and node-1.
 constraint:node!=/foo\[bar\]/ matches all nodes, except foo[bar]. Note the use
of escape characters here.
 constraint:node==/(?i)node1/ matches node node1 case-insensitively,
so NoDe1 or NODE1 also match.
 affinity:image==~redis tries to match nodes running a container with a redis image.
 constraint:region==~us* searches for nodes in the cluster belonging to the us region.
 affinity:container!=~redis* schedules a new redis5 container to a node without a
container whose name satisfies redis*.
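As an illustration of the grammar above, here is a rough Python sketch of parsing and matching filter expressions. It uses Python's re and fnmatch modules rather than Swarm's actual re2-based matcher, so treat it as an approximation:

```python
import fnmatch
import re

def parse_expr(expr):
    """Split '<filter-type>:<key><operator><value>'; '~' after the operator marks a soft expression."""
    ftype, rest = expr.split(":", 1)
    key, op, soft, value = re.match(r"([\w.]+)(==|!=)(~?)(.*)", rest).groups()
    return ftype, key, op, bool(soft), value

def matches(expr, candidate):
    _, _, op, _, value = parse_expr(expr)
    if value.startswith("/") and value.endswith("/"):
        hit = re.fullmatch(value[1:-1], candidate) is not None  # /regexp/ form
    else:
        hit = fnmatch.fnmatch(candidate, value)                 # globbing pattern
    return hit if op == "==" else not hit

print(matches("constraint:node==/node[12]/", "node2"))   # True
print(matches("constraint:region!=us*", "us-east"))      # False
print(matches("constraint:node==/(?i)node1/", "NODE1"))  # True
```

A hard expression that evaluates to False excludes the node; a soft one (the `~` flag above) would merely lower its preference.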

Swarm rescheduling
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

You can set rescheduling policies with Docker Swarm. A rescheduling policy determines what the
Swarm scheduler does for containers when the nodes they are running on fail.

Rescheduling policies
You set the reschedule policy when you start a container. You can do this with
the reschedule environment variable or the com.docker.swarm.reschedule-policies label. If you
don’t specify a policy, the default rescheduling policy is off, which means that Swarm does not
restart a container when a node fails.
To set the on-node-failure policy with a reschedule environment variable:
$ docker run -d -e "reschedule:on-node-failure" redis

To set the same policy with a com.docker.swarm.reschedule-policies label:


$ docker run -d -l 'com.docker.swarm.reschedule-policies=["on-node-failure"]' redis
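A sketch of what the on-node-failure policy implies (hypothetical structures, not Swarm's manager code):

```python
def reschedule_after_failure(containers, failed_node, healthy_nodes):
    """Move eligible containers off a failed node; containers with the default 'off' policy stay down."""
    moved = {}
    for c in containers:
        if c["node"] == failed_node and c.get("reschedule") == "on-node-failure":
            c["node"] = healthy_nodes[0]     # real Swarm re-runs its scheduler here
            moved[c["name"]] = c["node"]
    return moved

containers = [
    {"name": "db", "node": "node-1", "reschedule": "on-node-failure"},
    {"name": "web", "node": "node-1"},       # default policy: off
]
print(reschedule_after_failure(containers, "node-1", ["node-2"]))  # {'db': 'node-2'}
```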

Review reschedule logs


You can use the docker logs command to review the rescheduled container actions. To do this, use
the following command syntax:
docker logs SWARM_MANAGER_CONTAINER_ID

When a container is successfully rescheduled, it generates a message similar to the following:

Rescheduled container 2536adb23 from node-1 to node-2 as 2362901cb213da321
Container 2536adb23 was running, starting container 2362901cb213da321

If, for some reason, the new container fails to start on the new node, the log contains:

Failed to start rescheduled container 2362901cb213da321

Docker Swarm strategies


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

The Docker Swarm scheduler features multiple strategies for ranking nodes. The strategy you
choose determines how Swarm computes ranking. When you run a new container, Swarm chooses
to place it on the node with the highest computed ranking for your chosen strategy.
To choose a ranking strategy, pass the --strategy flag and a strategy value to the swarm
manage command. Swarm currently supports these values:

 spread
 binpack
 random

The spread and binpack strategies compute rank according to a node’s available CPU, its RAM, and
the number of containers it has. The random strategy uses no computation. It selects a node at
random and is primarily intended for debugging.

Your goal in choosing a strategy is to best optimize your cluster according to your company’s needs.

Under the spread strategy, Swarm optimizes for the node with the least number of containers.
The binpack strategy causes Swarm to optimize for the node which is most packed. Note that a
container occupies resources during its entire life cycle, including in the exited state. Users should
be aware of this when scheduling containers. For example, the spread strategy only checks the
number of containers, disregarding their states. A node with no active containers but a high number
of stopped containers may not be selected, defeating the purpose of load sharing. You could either
remove stopped containers, or start stopped containers, to achieve load spreading. The random
strategy, like it sounds, chooses nodes at random regardless of their available CPU or RAM.
Using the spread strategy results in containers spread thinly over many machines. The advantage of
this strategy is that if a node goes down you only lose a few containers.
The binpack strategy avoids fragmentation because it leaves room for bigger containers on unused
machines. The strategic advantage of binpack is that you use fewer machines as Swarm tries to
pack as many containers as it can on a node.
If you do not specify a --strategy, Swarm uses spread by default.
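The weighting Swarm actually uses combines reserved CPU, reserved RAM, and container count; the sketch below captures only the direction of each strategy (spread prefers the emptiest node, binpack the fullest), with hypothetical field names:

```python
import random

def load(node):
    """Fraction of resources reserved, plus the container count."""
    return (node["cpus_used"] / node["cpus"]
            + node["mem_used"] / node["mem"]) / 2 + node["containers"]

def pick(nodes, strategy):
    if strategy == "spread":
        return min(nodes, key=load)["name"]   # least loaded node wins
    if strategy == "binpack":
        return max(nodes, key=load)["name"]   # most packed node wins
    return random.choice(nodes)["name"]       # random: intended for debugging

nodes = [
    {"name": "node-1", "cpus": 2, "cpus_used": 0, "mem": 2, "mem_used": 0, "containers": 0},
    {"name": "node-2", "cpus": 2, "cpus_used": 1, "mem": 2, "mem_used": 1, "containers": 2},
]
print(pick(nodes, "spread"))   # node-1
print(pick(nodes, "binpack"))  # node-2
```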

Spread strategy example


In this example, your cluster is using the spread strategy, which optimizes for nodes that have the
fewest containers. In this cluster, both node-1 and node-2 have 2G of RAM, 2 CPUs, and neither
node is running a container. Under this strategy, node-1 and node-2 have the same ranking.
When you run a new container, the system chooses node-1 at random from the Swarm cluster of two
equally ranked nodes:
$ docker -H tcp://<manager_ip:manager_port> run -d -P -m 1G --name db mysql
f8b693db9cd6
$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago
running 192.168.0.42:49178->3306/tcp node-1/db

Now, we start another container and ask for 1G of RAM again.
$ docker -H tcp://<manager_ip:manager_port> run -d -P -m 1G --name frontend nginx
963841b138d8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:49177->80/tcp node-2/frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute
running 192.168.0.42:49178->3306/tcp node-1/db

The container frontend was started on node-2 because it was the least loaded node. If
two nodes have the same amount of available RAM and CPUs, the spread strategy prefers the node
with the fewest containers.

BinPack strategy example


In this example, let’s say that both node-1 and node-2 have 2G of RAM and neither is running a
container. Again, the nodes are equal. When you run a new container, the system chooses node-1 at
random from the cluster:
$ docker -H tcp://<manager_ip:manager_port> run -d -P -m 1G --name db mysql
f8b693db9cd6

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago
running 192.168.0.42:49178->3306/tcp node-1/db

Now, you start another container, asking for 1G of RAM again.
$ docker -H tcp://<manager_ip:manager_port> run -d -P -m 1G --name frontend nginx
963841b138d8

$ docker -H tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:49177->80/tcp node-1/frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute
running 192.168.0.42:49178->3306/tcp node-1/db

The system starts the new frontend container on node-1 because it was the most packed node
already. This allows you to start a container requiring 2G of RAM on node-2.
If two nodes have the same amount of available RAM and CPUs, the binpack strategy prefers the
node with the most containers.

Use Docker Swarm with TLS


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

All nodes in a Swarm cluster must bind their Docker daemons to a network port. This has obvious
security implications. These implications are compounded when the network in question is untrusted,
such as the internet. To mitigate these risks, Docker Swarm and the Docker Engine daemon support
Transport Layer Security (TLS).

Note: TLS is the successor to SSL (Secure Sockets Layer) and the two terms are often used
interchangeably. Docker uses TLS, and this term is used throughout this article.

Learn the TLS concepts


Before going further, it is important to understand the basic concepts of TLS and public key
infrastructure (PKI).
Public key infrastructure is a combination of security-related technologies, policies, and procedures
that are used to create and manage digital certificates. These certificates and infrastructure secure
digital communication using mechanisms such as authentication and encryption.

The following analogy may be useful. It is common practice to use passports to verify an
individual’s identity. Passports usually contain a photograph and biometric information that identify
the owner. A passport also lists the country that issued it, as well as valid from and valid to dates.
Digital certificates are very similar. The text below is an extract from a digital certificate:

Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9590646456311914051 (0x8518d2237ad49e43)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=CA, L=Sanfrancisco, O=Docker Inc
Validity
Not Before: Jan 18 09:42:16 2016 GMT
Not After : Jan 15 09:42:16 2026 GMT
Subject: CN=swarm

This certificate identifies a computer called swarm. The certificate is valid between January 2016
and January 2026 and was issued by Docker Inc. based in the state of California in the US.

Just as passports authenticate individuals as they board flights and clear customs, digital certificates
authenticate computers on a network.

Public key infrastructure (PKI) is the combination of technologies, policies, and procedures that work
behind the scenes to enable digital certificates. Some of the technologies, policies and procedures
provided by PKI include:

 Services to securely request certificates
 Procedures to authenticate the entity requesting the certificate
 Procedures to determine the entity’s eligibility for the certificate
 Technologies and processes to issue certificates
 Technologies and processes to revoke certificates

How does Docker Engine authenticate using TLS


This section shows how Docker Engine and Swarm use PKI and certificates to increase security.
You can configure both the Docker Engine CLI and the Docker Engine daemon to require TLS for
authentication. Configuring TLS means that all communications between the Docker Engine CLI and
the Docker Engine daemon must be accompanied with, and signed by, a trusted digital certificate.
The Docker Engine CLI must provide its digital certificate before the Docker Engine daemon accepts
incoming commands from it.

The Docker Engine daemon must also trust the certificate that the Docker Engine CLI uses. This
trust is usually established by way of a trusted third party. The Docker Engine CLI and Docker
Engine daemon in the diagram below are configured to require TLS authentication.

The trusted third party in this diagram is the Certificate Authority (CA) server. Like the country in the
passport example, a CA creates, signs, issues, and revokes certificates. Trust is established by installing
the CA’s root certificate on the host running the Docker Engine daemon. The Docker Engine CLI
then requests its own certificate from the CA server, which the CA server signs and issues to the
client.

The Docker Engine CLI sends its certificate to the Docker Engine daemon before issuing
commands. The Docker Engine daemon inspects the certificate, and because the daemon trusts
the CA, it automatically trusts any certificates signed by the CA. Assuming the certificate is in
order (it has not expired or been revoked, etc.), the Docker Engine daemon accepts commands
from this trusted Docker Engine CLI.
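To make the flow concrete, here is a minimal Python sketch of the same mutual-TLS setup the Docker CLI uses (DOCKER_TLS_VERIFY plus the cert/key/CA files under DOCKER_CERT_PATH). The file names in the comments are placeholders:

```python
import ssl

def docker_style_client_context(ca_file=None, cert_file=None, key_file=None):
    """Client context: trust only the CA's root cert, and present a client cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED      # refuse unauthenticated daemons
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)                  # e.g. ca.pem
    if cert_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)  # cert.pem / key.pem
    return ctx

ctx = docker_style_client_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The daemon side mirrors this: it loads its own certificate and the CA root, and requires clients to present certificates signed by that CA.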
The Docker Engine CLI is simply a client that uses the Docker Engine API to communicate with the
Docker Engine daemon. Any client that uses this Docker Engine API can use TLS. For example,
Docker Engine clients such as Docker Universal Control Plane (UCP) have TLS support built-in.
Other third-party products that use the Docker Engine API can also be configured this way.

TLS modes with Docker and Swarm


Now that you know how certificates are used by the Docker Engine daemon for authentication, it’s
important to be aware of the three TLS configurations possible with Docker Engine daemon and its
clients:

 External 3rd party CA
 Internal corporate CA
 Self-signed certificates

These configurations are differentiated by the type of entity acting as the Certificate Authority (CA).

External 3rd party CA


An external CA is a trusted 3rd party company that provides a means of creating, issuing, revoking,
and otherwise managing certificates. They are trusted in the sense that they must fulfill specific
conditions and maintain high levels of security and business practices to win your business. You
also need to install the external CA's root certificates for your computers and services to trust them.

When you use an external 3rd party CA, they create, sign, issue, revoke and otherwise manage your
certificates. They normally charge a fee for these services, but are considered an enterprise-class
scalable solution that provides a high degree of trust.

Internal corporate CA
Many organizations choose to implement their own Certificate Authorities and PKI. Common
examples are built with OpenSSL or Microsoft Active Directory. In this case, your company is its own
Certificate Authority, with all the work that entails. The benefit is that, as your own CA, you have
more control over your PKI.

Running your own CA and PKI requires you to provide all of the services offered by external 3rd
party CAs. These include creating, issuing, revoking, and otherwise managing certificates. Doing all
of this yourself has its own costs and overheads. However, for a large corporation, it still may reduce
costs in comparison to using an external 3rd party service.
Assuming you operate and manage your own internal CAs and PKI properly, an internal, corporate
CA can be a highly scalable and highly secure option.

Self-signed certificates
As the name suggests, self-signed certificates are certificates that are signed with their own private
key rather than by a trusted CA. They are a low-cost and simple-to-use option. If you implement and
manage self-signed certificates correctly, they can be better than using no certificates.

Because self-signed certificates lack a full-blown PKI, they do not scale well and lack many of the
advantages offered by the other options. One of their disadvantages is that you cannot revoke self-
signed certificates. Due to this, and other limitations, self-signed certificates are considered the least
secure of the three options. Self-signed certificates are not recommended for public-facing
production workloads exposed to untrusted networks.
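For illustration, a self-signed certificate can be produced with a single openssl command. This is a demonstration sketch (the subject name is made up); note that the issuer and subject are the same entity, which is precisely why no third party can revoke it:

```shell
set -e
cd "$(mktemp -d)"

# Key and self-signed certificate in one step (demo subject name).
openssl req -x509 -newkey rsa:2048 -nodes -keyout selfsigned-key.pem \
    -subj "/CN=selfsigned-demo" -days 1 -out selfsigned-cert.pem 2>/dev/null

# Issuer and subject are identical: the certificate vouches for itself.
openssl x509 -in selfsigned-cert.pem -noout -issuer -subject
```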

Configure Docker Swarm for TLS


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

In this procedure you create a two-node swarm cluster with a Docker Engine client, a swarm
manager, and a Certificate Authority. All the Docker Engine hosts (client, swarm, node1,
and node2) have a copy of the CA's certificate as well as their own key pair signed by the CA.
This procedure includes the following steps:

 Step 1: Set up the prerequisites
 Step 2: Create a Certificate Authority (CA) server
 Step 3: Create and sign keys
 Step 4: Install the keys
 Step 5: Configure the Engine daemon for TLS
 Step 6: Create a swarm cluster
 Step 7: Start the swarm manager using TLS
 Step 8: Test the swarm manager configuration
 Step 9: Configure the Engine CLI to use TLS

Before you begin


The article includes steps to create your own CA using OpenSSL. This is similar to operating your
own internal corporate CA and PKI. However, this must not be used as a guide to building a
production-worthy internal CA and PKI. These steps are included for demonstration purposes only -
so that readers without access to an existing CA and set of certificates can follow along and
configure Docker Swarm to use TLS.

Step 1: Set up the prerequisites


To complete this procedure you must stand up five Linux servers. These servers can be any mix
of physical and virtual servers; they may be on premises or in the public cloud. The following table
lists each server name and its purpose.

Server name Description

ca Acts as the Certificate Authority (CA) server.

swarm Acts as the swarm manager.

node1 Acts as a swarm node.

node2 Acts as a swarm node.

client Acts as a remote Docker Engine client.

Make sure that you have SSH access to all 5 servers and that they can communicate with each
other using DNS name resolution. In particular:

 Open TCP port 2376 between the swarm manager and swarm nodes
 Open TCP port 3376 between the Docker Engine client and the swarm manager

You can choose different ports if these are already in use; however, this example assumes you use
these ports.
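As an optional sanity check (not part of the official procedure), you can confirm that a TCP port is reachable from a given machine using bash's /dev/tcp feature. The hostnames below are the ones from the table above:

```shell
# Succeeds if a TCP connection to host:port can be opened.
port_open() {
    local host=$1 port=$2
    timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Example checks; run these from the machines that need to connect.
port_open node1 2376 && echo "node1:2376 reachable" || echo "node1:2376 NOT reachable"
port_open swarm 3376 && echo "swarm:3376 reachable" || echo "swarm:3376 NOT reachable"
```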

Each server must run an operating system compatible with Docker Engine. For simplicity, the steps
that follow assume all servers are running Ubuntu 14.04 LTS.

Step 2: Create a Certificate Authority (CA) server


Note: If you already have access to a CA and certificates, and are comfortable working with them,
you should skip this step and go to the next.
In this step, you configure a Linux server as a CA. You use this CA to create and sign keys. This
step is included so that readers without access to an existing CA (external or corporate) and
certificates can follow along and complete the later steps that require installing and using certificates.
It is not intended as a model for how to deploy a production-worthy CA.

1. Log on to the terminal of your CA server and elevate to root.

$ sudo su

2. Create a private key called ca-priv-key.pem for the CA:

# openssl genrsa -out ca-priv-key.pem 2048
Generating RSA private key, 2048 bit long modulus
...........................................................+++
.....+++
e is 65537 (0x10001)

3. Create a public key called ca.pem for the CA.

The public key is based on the private key created in the previous step.

# openssl req -config /usr/lib/ssl/openssl.cnf -new -key ca-priv-key.pem -x509 -days 1825 -out ca.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
<output truncated>

You have now configured a CA server with a public and private keypair. You can inspect the
contents of each key. To inspect the private key:

# openssl rsa -in ca-priv-key.pem -noout -text

To inspect the public key (cert):

# openssl x509 -in ca.pem -noout -text

The following command shows the partial contents of the CA’s public key.

# openssl x509 -in ca.pem -noout -text


Certificate:
Data:
Version: 3 (0x2)
Serial Number: 17432010264024107661 (0xf1eaf0f9f41eca8d)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=CA, L=Sanfrancisco, O=Docker Inc
Validity
Not Before: Jan 16 18:28:12 2016 GMT
Not After : Jan 13 18:28:12 2026 GMT
Subject: C=US, ST=CA, L=San Francisco, O=Docker Inc
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:d1:fe:6e:55:d4:93:fc:c9:8a:04:07:2d:ba:f0:
55:97:c5:2c:f5:d7:1d:6a:9b:f0:f0:55:6c:5d:90:
<output truncated>

Later, you use this certificate to sign keys for other servers in the infrastructure.

Step 3: Create and sign keys


Now that you have a working CA, you need to create key pairs for the swarm manager, swarm
nodes, and remote Docker Engine client. The commands and process to create key pairs are
identical for all servers. You create the following keys:

Key                      Description

ca-priv-key.pem          The CA's private key, which must be kept secure. It is used later
                         to sign new keys for the other nodes in the environment. Together
                         with the ca.pem file, this makes up the CA's key pair.

ca.pem                   The CA's public key (also called certificate). This is installed
                         on all nodes in the environment so that all nodes trust
                         certificates signed by the CA. Together with the ca-priv-key.pem
                         file, this makes up the CA's key pair.

NODE_NAME.csr            A certificate signing request (CSR). A CSR is effectively an
                         application to the CA to create a new key pair for a particular
                         node. The CA takes the information provided in the CSR and
                         generates the public and private key pair for that node.

NODE_NAME-priv-key.pem   A private key signed by the CA. The node uses this key to
                         authenticate itself with remote Docker Engines. Together with the
                         NODE_NAME-cert.pem file, this makes up a node's key pair.

NODE_NAME-cert.pem       A certificate signed by the CA. This is not used in this example.
                         Together with the NODE_NAME-priv-key.pem file, this makes up a
                         node's key pair.

The commands below show how to create keys for all of your nodes. You perform this procedure in
a working directory located on your CA server.

1. Log on to the terminal of your CA server and elevate to root.

$ sudo su

2. Create a private key swarm-priv-key.pem for your swarm manager:

# openssl genrsa -out swarm-priv-key.pem 2048
Generating RSA private key, 2048 bit long modulus
............................................................+++
........+++
e is 65537 (0x10001)

3. Generate a certificate signing request (CSR) swarm.csr using the private key you created in
the previous step.

# openssl req -subj "/CN=swarm" -new -key swarm-priv-key.pem -out swarm.csr

Remember, this is only for demonstration purposes. The process to create a CSR is slightly
different in real-world production environments.

4. Create the certificate swarm-cert.pem based on the CSR created in the previous step.

# openssl x509 -req -days 1825 -in swarm.csr -CA ca.pem -CAkey ca-priv-key.pem -CAcreateserial -out swarm-cert.pem -extensions v3_req -extfile /usr/lib/ssl/openssl.cnf
<snip>
# openssl rsa -in swarm-priv-key.pem -out swarm-priv-key.pem

You now have a key pair for the swarm manager.

5. Repeat the steps above for the remaining nodes in your infrastructure (node1, node2,
and client). Remember to replace the swarm-specific values with the values relevant to the
node you are creating the key pair for.

Server name   Private key           CSR          Certificate

node1         node1-priv-key.pem    node1.csr    node1-cert.pem

node2         node2-priv-key.pem    node2.csr    node2-cert.pem

client        client-priv-key.pem   client.csr   client-cert.pem

6. Verify that your working directory contains the following files:

# ls -l
total 64
-rw-r--r-- 1 root root 1679 Jan 16 18:27 ca-priv-key.pem
-rw-r--r-- 1 root root 1229 Jan 16 18:28 ca.pem
-rw-r--r-- 1 root root   17 Jan 18 09:56 ca.srl
-rw-r--r-- 1 root root 1086 Jan 18 09:56 client-cert.pem
-rw-r--r-- 1 root root  887 Jan 18 09:55 client.csr
-rw-r--r-- 1 root root 1679 Jan 18 09:56 client-priv-key.pem
-rw-r--r-- 1 root root 1082 Jan 18 09:44 node1-cert.pem
-rw-r--r-- 1 root root  887 Jan 18 09:43 node1.csr
-rw-r--r-- 1 root root 1675 Jan 18 09:44 node1-priv-key.pem
-rw-r--r-- 1 root root 1082 Jan 18 09:49 node2-cert.pem
-rw-r--r-- 1 root root  887 Jan 18 09:49 node2.csr
-rw-r--r-- 1 root root 1675 Jan 18 09:49 node2-priv-key.pem
-rw-r--r-- 1 root root 1082 Jan 18 09:42 swarm-cert.pem
-rw-r--r-- 1 root root  887 Jan 18 09:41 swarm.csr
-rw-r--r-- 1 root root 1679 Jan 18 09:42 swarm-priv-key.pem
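The per-node repetition above lends itself to a loop. The sketch below is self-contained for demonstration: it creates a throwaway CA first (in the real procedure you already have ca.pem and ca-priv-key.pem in the working directory), and it omits the -extensions/-extfile flags shown above for portability:

```shell
set -e
cd "$(mktemp -d)"

# Throwaway CA so the sketch runs anywhere (demo only; in the procedure
# you would use the CA created in Step 2).
openssl genrsa -out ca-priv-key.pem 2048 2>/dev/null
openssl req -new -x509 -key ca-priv-key.pem -days 1 -subj "/CN=demo-ca" -out ca.pem

# The same three per-node commands, once per server.
for node in swarm node1 node2 client; do
    openssl genrsa -out "${node}-priv-key.pem" 2048 2>/dev/null
    openssl req -subj "/CN=${node}" -new -key "${node}-priv-key.pem" -out "${node}.csr"
    openssl x509 -req -days 1825 -in "${node}.csr" -CA ca.pem -CAkey ca-priv-key.pem \
        -CAcreateserial -out "${node}-cert.pem" 2>/dev/null
done

ls -1 ./*-cert.pem
```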

You can inspect the contents of each of the keys. To inspect a private key:

openssl rsa -in <key-name> -noout -text

To inspect a public key (cert):


openssl x509 -in <key-name> -noout -text

The following command shows the partial contents of the swarm manager's public key, swarm-cert.pem.

# openssl x509 -in swarm-cert.pem -noout -text


Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9590646456311914051 (0x8518d2237ad49e43)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=CA, L=Sanfrancisco, O=Docker Inc
Validity
Not Before: Jan 18 09:42:16 2016 GMT
Not After : Jan 15 09:42:16 2026 GMT
Subject: CN=swarm

<output truncated>

Step 4: Install the keys


In this step, you install the keys on the relevant servers in the infrastructure. Each server needs
three files:

 A copy of the Certificate Authority's public key (ca.pem)
 Its own private key
 Its own public key (cert)

The procedure below shows you how to copy these files from the CA server to each server
using scp. As part of the copy procedure, rename each file as follows on each node:
Original name Copied name

ca.pem ca.pem

<server>-cert.pem cert.pem

<server>-priv-key.pem key.pem
1. Log on to the terminal of your CA server and elevate to root.

$ sudo su

2. Create a ~/.certs directory on the swarm manager. Here we assume the user account is ubuntu.

$ ssh ubuntu@swarm 'mkdir -p /home/ubuntu/.certs'

3. Copy the keys from the CA to the swarm manager server.

$ scp ./ca.pem ubuntu@swarm:/home/ubuntu/.certs/ca.pem
$ scp ./swarm-cert.pem ubuntu@swarm:/home/ubuntu/.certs/cert.pem
$ scp ./swarm-priv-key.pem ubuntu@swarm:/home/ubuntu/.certs/key.pem

Note: You may need to provide authentication for the scp commands to work. For example,
AWS EC2 instances use certificate-based authentication. To copy the files to an EC2
instance associated with a public key called nigel.pem, modify the scp command as
follows: scp -i /path/to/nigel.pem ./ca.pem ubuntu@swarm:/home/ubuntu/.certs/ca.pem.

4. Repeat steps 2 and 3 for each remaining server in the infrastructure:

o node1
o node2
o client

5. Verify your work.

When the copying is complete, each node in your infrastructure should have the following
files in the /home/ubuntu/.certs/ directory:

# ls -l /home/ubuntu/.certs/
total 16
-rw-r--r-- 1 ubuntu ubuntu 1229 Jan 18 10:03 ca.pem
-rw-r--r-- 1 ubuntu ubuntu 1082 Jan 18 10:06 cert.pem
-rw-r--r-- 1 ubuntu ubuntu 1679 Jan 18 10:06 key.pem
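The copy-and-rename work can also be scripted. This sketch prints the scp commands rather than running them (drop the leading echo to perform the copy); the server list and the ubuntu user are assumptions carried over from this walkthrough:

```shell
servers="swarm node1 node2 client"
user="ubuntu"

for s in $servers; do
    # Each server gets ca.pem plus its own cert and key, renamed on arrival.
    echo scp ./ca.pem              "${user}@${s}:/home/${user}/.certs/ca.pem"
    echo scp "./${s}-cert.pem"     "${user}@${s}:/home/${user}/.certs/cert.pem"
    echo scp "./${s}-priv-key.pem" "${user}@${s}:/home/${user}/.certs/key.pem"
done
```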

Step 5: Configure the Engine daemon for TLS


In the last step, you created and installed the necessary keys on each of your swarm nodes. In this
step, you configure them to listen on the network and only accept connections using TLS. Once you
complete this step, your swarm nodes listen on TCP port 2376, and only accept connections using
TLS.
On node1 and node2 (your swarm nodes), do the following:
1. Open a terminal on node1 and elevate to root.

$ sudo su

2. Add the following configuration keys to /etc/docker/daemon.json. If the file does not yet
exist, create it.

{
  "hosts": ["tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/home/ubuntu/.certs/ca.pem",
  "tlscert": "/home/ubuntu/.certs/cert.pem",
  "tlskey": "/home/ubuntu/.certs/key.pem"
}

3. Restart Docker for the changes to take effect. If the file is not valid JSON, Docker fails to
start and emits an error.

4. Repeat the procedure on node2 as well.
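Because an invalid daemon.json stops Docker from starting, it is worth validating the file before you restart. One quick check, assuming python3 is available on the node (the temporary path here is for demonstration; on a real node you would point at /etc/docker/daemon.json):

```shell
set -e
cfg=$(mktemp)

# Demo copy of the configuration from the step above.
cat > "$cfg" <<'EOF'
{
  "hosts": ["tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/home/ubuntu/.certs/ca.pem",
  "tlscert": "/home/ubuntu/.certs/cert.pem",
  "tlskey": "/home/ubuntu/.certs/key.pem"
}
EOF

# Exits non-zero and prints the parse error if the JSON is malformed.
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json: valid JSON"
```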

Step 6: Create a swarm cluster


Next create a swarm cluster. In this procedure you create a two-node swarm cluster using the
default hosted discovery backend. The default hosted discovery backend uses Docker Hub and is
not recommended for production use.

1. Log on to the terminal of your swarm manager node.

2. Create the cluster and export its unique ID to the TOKEN environment variable.

$ export TOKEN=$(sudo docker run --rm swarm create)
Unable to find image 'swarm:latest' locally
latest: Pulling from library/swarm
d681c900c6e3: Pulling fs layer
<snip>
986340ab62f0: Pull complete
a9975e2cc0a3: Pull complete
Digest: sha256:c21fd414b0488637b1f05f13a59b032a3f9da5d818d31da1a4ca98a84c0c781b
Status: Downloaded newer image for swarm:latest

3. Join node1 to the cluster.

Be sure to specify TCP port 2376 and not 2375.

$ sudo docker run -d swarm join --addr=node1:2376 token://$TOKEN
7bacc98536ed6b4200825ff6f4004940eb2cec891e1df71c6bbf20157c5f9761

4. Join node2 to the cluster.

$ sudo docker run -d swarm join --addr=node2:2376 token://$TOKEN
db3f49d397bad957202e91f0679ff84f526e74d6c5bf1b6734d834f5edcbca6c

Step 7: Start the swarm manager using TLS


1. Launch a new container with TLS enabled.

$ docker run -d -p 3376:3376 -v /home/ubuntu/.certs:/certs:ro swarm manage --tlsverify --tlscacert=/certs/ca.pem --tlscert=/certs/cert.pem --tlskey=/certs/key.pem --host=0.0.0.0:3376 token://$TOKEN

The command above launches a new container based on the swarm image and maps port 3376
on the server to port 3376 inside the container. This mapping ensures that Docker Engine
commands sent to the host on port 3376 are passed on to port 3376 inside the container. The
container runs the swarm manage process with the --tlsverify, --tlscacert, --tlscert,
and --tlskey options specified. These options force TLS verification and specify the location
of the swarm manager's TLS keys.

2. Run a docker ps command to verify that your swarm manager container is up and running.

$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                              NAMES
035dbf57b26e   swarm   "/swarm manage --tlsv"   7 seconds ago   Up 7 seconds   2375/tcp, 0.0.0.0:3376->3376/tcp   compassionate_lovelace

Your swarm cluster is now configured to use TLS.

Step 8: Test the swarm manager configuration


Now that you have a swarm cluster built and configured to use TLS, test that it works with a Docker
Engine CLI.

1. Open a terminal onto your client server.

2. Issue the docker version command.

When issuing the command, you must pass it the location of the client's certificates.

$ sudo docker --tlsverify --tlscacert=/home/ubuntu/.certs/ca.pem --tlscert=/home/ubuntu/.certs/cert.pem --tlskey=/home/ubuntu/.certs/key.pem -H swarm:3376 version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64

Server:
Version: swarm/1.0.1
API version: 1.21
Go version: go1.5.2
Git commit: 744e3a3
Built:
OS/Arch: linux/amd64

The output above shows the Server version as “swarm/1.0.1”. This means that the command
was successfully issued against the swarm manager.

3. Verify that the same command does not work without TLS.

This time, do not pass your certs to the swarm manager.

$ sudo docker -H swarm:3376 version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
Get http://swarm:3376/v1.21/version: malformed HTTP response
"\x15\x03\x01\x00\x02\x02".
* Are you trying to connect to a TLS-enabled daemon without TLS?

The output above shows that the command was rejected by the server. This is because the
server (swarm manager) is configured to only accept connections from authenticated clients
using TLS.

Step 9: Configure the Engine CLI to use TLS


You can configure the Engine so that you don’t need to pass the TLS options when you issue a
command. To do this, configure the Docker Engine host and TLS settings as defaults on your Docker
Engine client.
To do this, you place the client’s keys in your ~/.docker configuration folder. If you have other users
on your system using the Engine command line, configure their account’s ~/.docker as well. The
procedure below shows how to do this for the ubuntu user on your Docker Engine client.
1. Open a terminal onto your client server.

2. If it doesn't exist, create a .docker directory in the ubuntu user's home directory.

$ mkdir /home/ubuntu/.docker

3. Copy the Docker Engine client's keys from /home/ubuntu/.certs to /home/ubuntu/.docker.

$ cp /home/ubuntu/.certs/{ca,cert,key}.pem /home/ubuntu/.docker

4. Edit the account's ~/.bash_profile and set the following variables:

Variable            Description

DOCKER_HOST         Sets the Docker host and TCP port to send all Engine commands to.

DOCKER_TLS_VERIFY   Tells Engine to use TLS.

DOCKER_CERT_PATH    Specifies the location of TLS keys.

For example:

export DOCKER_HOST=tcp://swarm:3376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/home/ubuntu/.docker/

5. Save and close the file.

6. Source the file to pick up the new variables.

$ source ~/.bash_profile

7. Verify that the procedure worked by issuing a docker version command.

$ docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64

Server:
Version: swarm/1.0.1
API version: 1.21
Go version: go1.5.2
Git commit: 744e3a3
Built:
OS/Arch: linux/amd64

The server portion of the output above shows that your Docker client is issuing commands to
the swarm manager and using TLS.

Congratulations! You have configured a Docker swarm cluster to use TLS.

Swarm Command line reference


create — Create a discovery token

The create command uses Docker Hub’s hosted discovery backend to create a unique discovery
token for your cluster. For example:
$ docker run --rm swarm create
86222732d62b6868d441d430aee4f055

Later, when you use manage or join to create Swarm managers and nodes, you use the discovery
token in the <discovery> argument. For instance, token://86222732d62b6868d441d430aee4f055 . The
discovery backend registers each new Swarm manager and node that uses the token as a member
of your cluster.
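In scripts, the token is typically captured once and used to build the <discovery> argument. A minimal sketch (the token value is the example above, not a live cluster ID; in practice you would set TOKEN=$(docker run --rm swarm create)):

```shell
# Example token from above; hypothetical, not a live cluster.
TOKEN=86222732d62b6868d441d430aee4f055
DISCOVERY="token://${TOKEN}"

echo "$DISCOVERY"
```

This prints token://86222732d62b6868d441d430aee4f055, the exact form passed to manage and join.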

Some documentation also refers to the discovery token as a cluster_id.

Warning: Docker Hub’s hosted discovery backend is not recommended for production use. It’s
intended only for testing/development.

help — Display information about a command

The help command displays information about how to use a command.

For example, to see a list of Swarm options and commands, enter:

$ docker run swarm --help

To see a list of arguments and options for a specific Swarm command, enter:

$ docker run swarm <command> --help


For example:

$ docker run swarm list --help


Usage: swarm list [OPTIONS] <discovery>

List nodes in a cluster

Arguments:
<discovery> discovery service to use [$SWARM_DISCOVERY]
* token://<token>
* consul://<ip>/<path>
* etcd://<ip1>,<ip2>/<path>
* file://path/to/file
* zk://<ip1>,<ip2>/<path>
* [nodes://]<ip1>,<ip2>

Options:
   --timeout "10s"                                                    timeout period
   --discovery-opt [--discovery-opt option --discovery-opt option]    discovery options

join — Create a Swarm node



Prerequisite: Before using join, establish a discovery backend as described in this discovery topic.
The join command creates a Swarm node whose purpose is to run containers on behalf of the
cluster. A typical cluster has multiple Swarm nodes.

To create a Swarm node, use the following syntax:

$ docker run swarm join [OPTIONS] <discovery>

For example, to create a Swarm node in a high-availability cluster with other managers, enter:

$ docker run -d swarm join --advertise=172.30.0.69:2375 consul://172.30.0.161:8500


Or, for example, to create a Swarm node that uses Transport Layer Security (TLS) to authenticate
the Docker Swarm nodes, enter:

$ sudo docker run -d swarm join --addr=node1:2376 token://86222732d62b6868d441d430aee4f055

Arguments
The join command has only one argument:
<discovery> — Discovery backend
Before you create a Swarm node, create a discovery token or set up a discovery backend for your
cluster.

When you create the Swarm node, use the <discovery> argument to specify one of the following
discovery backends:

 token://<token>
 consul://<ip1>/<path>
 etcd://<ip1>,<ip2>,<ip3>/<path>
 file://<path/to/file>
 zk://<ip1>,<ip2>/<path>
 [nodes://]<iprange>,<iprange>

Where:

 <token> is a discovery token generated by Docker Hub's hosted discovery service. To
generate this discovery token, use the create command.

Warning: Docker Hub's hosted discovery backend is not recommended for production use.
It's intended only for testing/development.

 ip1, ip2, ip3 are each the IP address and port numbers of a discovery backend node.
 path (optional) is a path to a key-value store on the discovery backend. When you use a
single backend to service multiple clusters, you use paths to maintain separate key-value
stores for each cluster.
 path/to/file is the path to a file that contains a static list of the Swarm managers and nodes
that are members of the cluster.
 iprange is an IP address or a range of IP addresses followed by a port number.

For example:

 A discovery token: token://0ac50ef75c9739f5bfeeaf00503d4e6e
 A Consul node: consul://172.30.0.165:8500
The environment variable for <discovery> is $SWARM_DISCOVERY.

For more information and examples, see the Docker Swarm Discovery topic.

Options
The join command has the following options:
--advertise or --addr — Advertise the Docker Engine’s IP and
port number
Use --advertise <ip>:<port> or --addr <ip>:<port> to advertise the IP address and port number
of the Docker Engine. For example, --advertise 172.30.0.161:4000. Swarm managers MUST be
able to reach this Swarm node at this address.
The environment variable for --advertise is $SWARM_ADVERTISE.
--heartbeat — Period between each heartbeat
Use --heartbeat "<interval>s" to specify the interval, in seconds, between heartbeats the node
sends to the primary manager. These heartbeats indicate that the node is healthy and reachable. By
default, the interval is 60 seconds.
--ttl — Sets the expiration of an ephemeral node
Use --ttl "<interval>s" to specify the time-to-live (TTL) interval, in seconds, of an ephemeral
node. The default interval is 180s.
--delay — Add a random delay in [0s,delay] to avoid
synchronized registration
Use --delay "<interval>s" to specify the maximum interval for a random delay, in seconds, before
the node registers with the discovery backend. If you deploy a large number of nodes
simultaneously, the random delay spreads registrations out over the interval and avoids saturating
the discovery backend.
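Putting the registration options together, a join command might look like the following. The addresses mirror the earlier examples and the interval values are purely illustrative, not recommendations:

```shell
$ docker run -d swarm join \
    --advertise=172.30.0.69:2375 \
    --heartbeat=30s \
    --ttl=120s \
    --delay=5s \
    consul://172.30.0.161:8500
```

As a rule of thumb, a node's --ttl should comfortably exceed its --heartbeat so that a single missed heartbeat does not expire the node.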
--discovery-opt — Discovery options
Use --discovery-opt <value> to discovery options, such as paths to the TLS files; the CA’s public
key certificate, the certificate, and the private key of the distributed K/V store on a Consul or etcd
discovery backend. You can enter multiple discovery options. For example:
--discovery-opt kv.cacertfile=/path/to/mycacert.pem \
--discovery-opt kv.certfile=/path/to/mycert.pem \
--discovery-opt kv.keyfile=/path/to/mykey.pem \

list — List the nodes in a cluster



Use list to display a list of the nodes in a cluster.

To list the nodes in a cluster, use the following syntax:

docker run swarm list [OPTIONS] <discovery>

The following examples show a few different syntaxes for the <discovery> argument:

etcd:

swarm list etcd://<etcd_addr1>,<etcd_addr2>/<optional path prefix> <node_ip:port>

Consul:

swarm list consul://<consul_addr>/<optional path prefix> <node_ip:port>

ZooKeeper:

swarm list zk://<zookeeper_addr1>,<zookeeper_addr2>/<optional path prefix> <node_ip:port>

Arguments
The list command has only one argument:

<discovery> — Discovery backend

When you use the list command, use the <discovery> argument to specify one of the following
discovery backends:

 token://<token>
 consul://<ip1>/<path>
 etcd://<ip1>,<ip2>,<ip3>/<path>
 file://<path/to/file>
 zk://<ip1>,<ip2>/<path>
 [nodes://]<iprange>,<iprange>

Where:

 <token> is a discovery token generated by Docker Hub's hosted discovery service. To
generate this discovery token, use the create command.

Warning: Docker Hub's hosted discovery backend is not recommended for production use.
It's intended only for testing/development.

 ip1, ip2, ip3 are each the IP address and port numbers of a discovery backend node.
 path (optional) is a path to a key-value store on the discovery backend. When you use a
single backend to service multiple clusters, you use paths to maintain separate key-value
stores for each cluster.
 path/to/file is the path to a file that contains a static list of the Swarm managers and nodes
that are members of the cluster.
 iprange is an IP address or a range of IP addresses followed by a port number.

For example:

 A discovery token: token://0ac50ef75c9739f5bfeeaf00503d4e6e
 A Consul node: consul://172.30.0.165:8500

The environment variable for <discovery> is $SWARM_DISCOVERY.

For more information and examples, see the Docker Swarm Discovery topic.

Options
The list command has the following options:

--timeout — Timeout period

Use --timeout "<interval>s" to specify the timeout period, in seconds, to wait for the discovery
backend to return the list. The default interval is 10s.

--discovery-opt — Discovery options

Use --discovery-opt <value> to specify discovery options, such as paths to the TLS files (the CA's
public key certificate, the certificate, and the private key) used with the distributed K/V store on a
Consul or etcd discovery backend. You can enter multiple discovery options. For example:
--discovery-opt kv.cacertfile=/path/to/mycacert.pem \
--discovery-opt kv.certfile=/path/to/mycert.pem \
--discovery-opt kv.keyfile=/path/to/mykey.pem

manage — Create a Swarm manager



Prerequisite: Before using manage to create a Swarm manager, establish a discovery backend as
described in this discovery topic.
The manage command creates a Swarm manager whose purpose is to receive commands on behalf
of the cluster and assign containers to Swarm nodes. You can create multiple Swarm managers as
part of a high-availability cluster.

To create a Swarm manager, use the following syntax:

$ docker run swarm manage [OPTIONS] <discovery>

For example, you can use manage to create a Swarm manager in a high-availability cluster with other
managers:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise
172.30.0.161:4000 consul://172.30.0.165:8500

Or, for example, you can use it to create a Swarm manager that uses Transport Layer Security
(TLS) to authenticate the Docker Client and Swarm nodes:

$ docker run -d -p 3376:3376 -v /home/ubuntu/.certs:/certs:ro swarm manage --tlsverify --tlscacert=/certs/ca.pem --tlscert=/certs/cert.pem --tlskey=/certs/key.pem --host=0.0.0.0:3376 token://$TOKEN

Argument
The manage command has only one argument:
<discovery> — Discovery backend
Before you create a Swarm manager, create a discovery token or set up a discovery backend for
your cluster.

When you create the Swarm manager, use the <discovery> argument to specify one of the following
discovery backends:

 token://<token>
 consul://<ip1>/<path>
 etcd://<ip1>,<ip2>,<ip3>/<path>
 file://<path/to/file>
 zk://<ip1>,<ip2>/<path>
 [nodes://]<iprange>,<iprange>
Where:

 <token> is a discovery token generated by Docker Hub's hosted discovery service. To
generate this discovery token, use the create command.

Warning: Docker Hub's hosted discovery backend is not recommended for production use.
It's intended only for testing/development.

 ip1, ip2, ip3 are each the IP address and port numbers of a discovery backend node.
 path (optional) is a path to a key-value store on the discovery backend. When you use a
single backend to service multiple clusters, you use paths to maintain separate key-value
stores for each cluster.
 path/to/file is the path to a file that contains a static list of the Swarm managers and nodes
that are members of the cluster.
 iprange is an IP address or a range of IP addresses followed by a port number.

Here are a pair of <discovery> argument examples:

 A discovery token: token://0ac50ef75c9739f5bfeeaf00503d4e6e


 A Consul node: consul://172.30.0.165:8500

The environment variable for <discovery> is $SWARM_DISCOVERY.

For more information and examples, see the Docker Swarm Discovery topic.
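As a rough illustration of how these URIs decompose, the scheme selects the backend and the remainder carries the address and optional key-value path. The following shell sketch uses plain parameter expansion (this is illustrative only, not Swarm's actual parser, and mycluster is a hypothetical path):

```shell
# Illustrative only: decomposing a <discovery> URI with parameter
# expansion (not Swarm's actual parser).
disc="consul://172.30.0.165:8500/mycluster"

scheme=${disc%%://*}   # backend type: consul
rest=${disc#*://}      # everything after the scheme
host=${rest%%/*}       # backend address: 172.30.0.165:8500
path=${rest#*/}        # optional key-value path: mycluster

echo "$scheme $host $path"
```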

Options
The manage command has the following options:
--strategy — Scheduler placement strategy
Use --strategy "<value>" to tell the Docker Swarm scheduler which placement strategy to use.
Where <value> is:

 spread — Assign each container to the Swarm node with the most available resources.
 binpack — Assign containers to one Swarm node until it is full before assigning them to
another one.
 random — Assign each container to a random Swarm node.

By default, the scheduler applies the spread strategy.

For more information and examples, see Docker Swarm strategies.
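As a toy model of the difference between spread and binpack, suppose each node advertises a single count of free slots: spread prefers the node with the most free capacity, while binpack prefers the fullest node. This sketch is illustrative only — Swarm's real scheduler ranks nodes on available CPU and memory, not a single number, and the node names and counts here are made up:

```shell
# Toy model of spread vs. binpack (illustrative only; Swarm's real
# scheduler ranks nodes on CPU and memory, not a single number).
# Each node is given as name:free-slots.
pick_node() {
  strategy=$1; shift
  best=""; best_free=0
  for entry in "$@"; do
    node=${entry%%:*}; free=${entry##*:}
    if [ -z "$best" ]; then
      best=$node; best_free=$free
    elif [ "$strategy" = spread ] && [ "$free" -gt "$best_free" ]; then
      best=$node; best_free=$free     # spread: most free capacity wins
    elif [ "$strategy" = binpack ] && [ "$free" -lt "$best_free" ]; then
      best=$node; best_free=$free     # binpack: fullest node wins
    fi
  done
  echo "$best"
}

pick_node spread  node1:2 node2:6 node3:4
pick_node binpack node1:2 node2:6 node3:4
```

With these numbers, spread selects node2 (most free slots) and binpack selects node1 (closest to full).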

--filter, -f — Scheduler filter


Use --filter <value> or -f <value> to tell the Docker Swarm scheduler which nodes to use when
creating and running a container.
Where <value> is:

 health — Use nodes that are running and communicating with the discovery backend.
 port — For containers that have a static port mapping, use nodes whose corresponding port
number is available and not occupied by another container or process.
 dependency — For containers that have a declared dependency, use nodes that already have
a container with the same dependency.
 affinity — For containers that have a declared affinity, use nodes that already have a
container with the same affinity.
 constraint — For containers that have a declared constraint, use nodes that already have a
container with the same constraint.

You can use multiple scheduler filters, like this:

--filter <value> --filter <value>

For more information and examples, see Swarm filters.

--host, -H — Listen to IP/port


Use --host <ip>:<port> or -H <ip>:<port> to specify the IP address and port number to which the
manager listens for incoming messages. If you replace <ip> with zeros or omit it altogether, the
manager uses the default host IP. For example, --host=0.0.0.0:3376 or -H :4000.
The environment variable for --host is $SWARM_HOST.
--replication — Enable Swarm manager replication
Enable Swarm manager replication between the primary and secondary managers in a high-
availability cluster. Replication mirrors cluster information from the primary to the secondary
managers so that, if the primary manager fails, a secondary can become the primary manager.

--replication-ttl — Leader lock release time on failure


Use --replication-ttl "<delay>s" to specify the delay, in seconds, before notifying secondary
managers that the primary manager is down or unreachable. This notification triggers an election in
which one of the secondary managers becomes the primary manager. By default, the delay is 15
seconds.
--advertise, --addr — Advertise Docker Engine’s IP and port
number
Use --advertise <ip>:<port> or --addr <ip>:<port> to advertise the IP address and port number
of the Docker Engine. For example, --advertise 172.30.0.161:4000. Other Swarm managers
MUST be able to reach this Swarm manager at this address.
The environment variable for --advertise is $SWARM_ADVERTISE.
--tls — Enable transport layer security (TLS)
Use --tls to enable transport layer security (TLS). If you use --tlsverify, you do not need to
use --tls.
--tlscacert — Path to a CA’s public key file
Use --tlscacert=<path/file> to specify the path and filename of the public key (certificate) from a
Certificate Authority (CA). For example, --tlscacert=/certs/ca.pem. When specified, the manager
trusts only remotes that provide a certificate signed by the same CA.
--tlscert — Path to the node’s TLS certificate file
Use --tlscert to specify the path and filename of the manager’s certificate (signed by the CA). For
example, --tlscert=/certs/cert.pem.
--tlskey — Path to the node’s TLS key file
Use --tlskey to specify the path and filename of the manager’s private key (signed by the CA). For
example, --tlskey=/certs/key.pem.
--tlsverify — Use TLS and verify the remote
Use --tlsverify to enable transport layer security (TLS) and accept connections from only those
managers, nodes, and clients that have a certificate signed by the same CA. If you use --tlsverify,
you do not need to use --tls.
--engine-refresh-min-interval — Set engine refresh
minimum interval
Use --engine-refresh-min-interval "<interval>s" to specify the minimum interval, in seconds,
between Engine refreshes. By default, the interval is 30 seconds.
When the primary manager performs an Engine refresh, it gets updated information about an
Engine in the cluster. The manager uses this information to, among other things, determine whether
the Engine is healthy. If there is a connection failure, the manager determines that the node
is unhealthy. The manager retries an Engine refresh a specified number of times. If the Engine
responds to one of the retries, the manager determines that the Engine is healthy again. Otherwise,
the manager stops retrying and ignores the Engine.
--engine-refresh-max-interval — Set engine refresh
maximum interval
Use --engine-refresh-max-interval "<interval>s" to specify the maximum interval, in seconds,
between Engine refreshes. By default, the interval is 60 seconds.
--engine-failure-retry — Set engine failure retry count
Use --engine-failure-retry "<number>" to specify the number of retries to attempt if the engine
fails. By default, the number is 3 retries.
--engine-refresh-retry — Deprecated
Deprecated; use --engine-failure-retry instead of --engine-refresh-retry "<number>". The
default number is 3 retries.
--heartbeat — Period between each heartbeat
Use --heartbeat "<interval>s" to specify the interval, in seconds, between heartbeats the
manager sends to the primary manager. These heartbeats indicate that the manager is healthy and
reachable. By default, the interval is 60 seconds.
--api-enable-cors, --cors — Enable CORS headers in the
Engine API
Use --api-enable-cors or --cors to enable cross-origin resource sharing (CORS) headers in the
Engine API.
--cluster-driver, -c — Cluster driver to use
Use --cluster-driver "<driver>", -c "<driver>" to specify a cluster driver to use.
Where <driver> is one of the following:

 swarm is the Docker Swarm driver.


 mesos-experimental is the Mesos cluster driver.

By default, the driver is swarm.

For more information about using the Mesos driver, see Using Docker Swarm and Mesos.

--cluster-opt — Cluster driver options


You can enter multiple cluster driver options, like this:

--cluster-opt <value> --cluster-opt <value>


Where <value> is one of the following:

 swarm.overcommit=0.05 — Set the fractional percentage by which to overcommit resources.
The default value is 0.05, or 5 percent.
 swarm.createretry=0 — Specify the number of retries to attempt when creating a container
fails. The default value is 0 retries.
 mesos.address= — Specify the Mesos address to bind on. The environment variable for this
option is $SWARM_MESOS_ADDRESS.
 mesos.checkpointfailover=false — Enable Mesos checkpointing, which allows a restarted
slave to reconnect with old executors and recover status updates, at the cost of disk I/O. The
environment variable for this option is $SWARM_MESOS_CHECKPOINT_FAILOVER. The default value
is false (disabled).
 mesos.port= — Specify the Mesos port to bind on. The environment variable for this option
is $SWARM_MESOS_PORT.
 mesos.offertimeout=30s — Specify the Mesos timeout for offers, in seconds. The
environment variable for this option is $SWARM_MESOS_OFFER_TIMEOUT. The default value is 30s.
 mesos.offerrefusetimeout=5s — Specify timeout for Mesos to consider unused resources
refused, in seconds. The environment variable for this option
is $SWARM_MESOS_OFFER_REFUSE_TIMEOUT. The default value is 5s.
 mesos.tasktimeout=5s — Specify the timeout for Mesos task creation, in seconds. The
environment variable for this option is $SWARM_MESOS_TASK_TIMEOUT. The default value is 5s.
 mesos.user= — Specify the Mesos framework user name. The environment variable for this
option is $SWARM_MESOS_USER.

--discovery-opt — Discovery options


Use --discovery-opt <value> to set discovery options, such as paths to the TLS files (the CA’s public
key certificate, the certificate, and the private key) for the distributed K/V store on a Consul or etcd
discovery backend. You can enter multiple discovery options. For example:
--discovery-opt kv.cacertfile=/path/to/mycacert.pem \
--discovery-opt kv.certfile=/path/to/mycert.pem \
--discovery-opt kv.keyfile=/path/to/mykey.pem

Swarm: A Docker-native clustering


system
Estimated reading time: 1 minute
The swarm command runs a Swarm container on a Docker Engine host and performs the task
specified by the required subcommand, COMMAND.
Use swarm with the following syntax:
$ docker run swarm [OPTIONS] COMMAND [arg...]

For example, you use swarm with the manage subcommand to create a Swarm manager in a high-
availability cluster with other managers:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise
172.30.0.161:4000 consul://172.30.0.165:8500

Options
The swarm command has the following options:
 --debug — Enable debug mode. Display messages that you can use to debug a Swarm
node. For example:
 time="2016-02-17T17:57:40Z" level=fatal msg="discovery required to join a
cluster. See 'swarm join --help'."

The environment variable for this option is $DEBUG.

 --log-level "<value>" or -l "<value>" — Set the log level.


Where <value> is: debug, info, warn, error, fatal, or panic. The default value is info.
 --experimental — Enable experimental features.
 --help or -h — Display help.
 --version or -v — Display the version. For example:

 $ docker run swarm --version

swarm version 1.1.0 (a0fd82b)

Swarm vs. Engine response codes


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.
Estimated reading time: 17 minutes

Docker Engine provides a REST API for making calls to the Engine daemon. Docker Swarm allows
a caller to make the same calls to a cluster of Engine daemons. While the API calls are the same,
the API response status codes do differ. This document explains the differences.

The comparison covers four HTTP methods: GET, POST, PUT, and DELETE.

The comparison is based on API v1.22, and all Docker status codes in API v1.22 are referenced
from docker-remote-api-v1.22.

GET
 Route: /_ping
 Handler: ping
Swarm Status Code Docker Status Code

200 200

500

 Route: /events
 Handler: getEvents

Swarm Status Code Docker Status Code

200 200

400

500

 Route: /info
 Handler: getInfo

Swarm Status Code Docker Status Code

200 200

500

 Route: /version
 Handler: getVersion

Swarm Status Code Docker Status Code

200 200

500

 Route: /images/json
 Handler: getImagesJSON

Swarm Status Code Docker Status Code

200 200

500 500
 Route: /images/viz
 Handler: notImplementedHandler

Swarm Status Code Docker Status Code

501 no this api

 Route: /images/search
 Handler: proxyRandom

Swarm Status Code Docker Status Code

200 200

500 500

 Route: /images/get
 Handler: getImages

Swarm Status Code Docker Status Code

200 200

404

500 500

 Route: /images/{name:.*}/get
 Handler: proxyImageGet

Swarm Status Code Docker Status Code

200 200

404

500 500

 Route: /images/{name:.*}/history
 Handler: proxyImage

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /images/{name:.*}/json
 Handler: proxyImage

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /containers/ps
 Handler: getContainersJSON

Swarm Status Code Docker Status Code

200 no this api

404 no this api

500 no this api

 Route: /containers/json
 Handler: getContainersJSON

Swarm Status Code Docker Status Code

200 200

400

404

500 500

 Route: /containers/{name:.*}/archive
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

400 400

404 404

500 500

 Route: /containers/{name:.*}/export
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /containers/{name:.*}/changes
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /containers/{name:.*}/json
 Handler: getContainerJSON
Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /containers/{name:.*}/top
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /containers/{name:.*}/logs
 Handler: proxyContainer

Swarm Status Code Docker Status Code

101 101

200 200

404 404

500 500

 Route: /containers/{name:.*}/stats
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

404 404

500 500
 Route: /containers/{name:.*}/attach/ws
 Handler: proxyHijack

Swarm Status Code Docker Status Code

200 200

400 400

404 404

500 500

 Route: /exec/{execid:.*}/json
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /networks
 Handler: getNetworks

Swarm Status Code Docker Status Code

200 200

400

500 500

 Route: /networks/{networkid:.*}
 Handler: getNetwork

Swarm Status Code Docker Status Code

200 200

404 404
 Route: /volumes
 Handler: getVolumes

Swarm Status Code Docker Status Code

200 200

500

 Route: /volumes/{volumename:.*}
 Handler: getVolume

Swarm Status Code Docker Status Code

200 200

404 404

500

POST
 Route: /auth
 Handler: proxyRandom

Swarm Status Code Docker Status Code

200 200

204 204

500 500

 Route: /commit
 Handler: postCommit

Swarm Status Code Docker Status Code

201 201

404 404

500 500

 Route: /build
 Handler: postBuild

Swarm Status Code Docker Status Code

200 200

500 500

 Route: /images/create
 Handler: postImagesCreate

Swarm Status Code Docker Status Code

200 200

500 500

 Route: /images/load
 Handler: postImagesLoad

Swarm Status Code Docker Status Code

200

201

500

 Route: /images/{name:.*}/push
 Handler: proxyImagePush

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /images/{name:.*}/tag
 Handler: postTagImage

Swarm Status Code Docker Status Code

200

201

400

404 404

409

500 500

 Route: /containers/create
 Handler: postContainersCreate

Swarm Status Code Docker Status Code

201 201

400

404

406

409

500 500
 Route: /containers/{name:.*}/kill
 Handler: proxyContainerAndForceRefresh

Swarm Status Code Docker Status Code

204 204

404 404

500 500

 Route: /containers/{name:.*}/pause
 Handler: proxyContainerAndForceRefresh

Swarm Status Code Docker Status Code

204 204

404 404

500 500

 Route: /containers/{name:.*}/unpause
 Handler: proxyContainerAndForceRefresh

Swarm Status Code Docker Status Code

204 204

404 404

500 500

 Route: /containers/{name:.*}/rename
 Handler: postRenameContainer

Swarm Status Code Docker Status Code

200

204

404 404

409 409

500 500

 Route: /containers/{name:.*}/restart
 Handler: proxyContainerAndForceRefresh

Swarm Status Code Docker Status Code

204 204

404 404

500 500

 Route: /containers/{name:.*}/start
 Handler: postContainersStart

Swarm Status Code Docker Status Code

204 204

304

404 404

500 500

 Route: /containers/{name:.*}/stop
 Handler: proxyContainerAndForceRefresh

Swarm Status Code Docker Status Code

204 204

304 304

404 404

500 500
 Route: /containers/{name:.*}/update
 Handler: proxyContainerAndForceRefresh

Swarm Status Code Docker Status Code

200 200

400 400

404 404

500 500

 Route: /containers/{name:.*}/wait
 Handler: proxyContainerAndForceRefresh

Swarm Status Code Docker Status Code

204 204

404 404

500 500

 Route: /containers/{name:.*}/resize
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /containers/{name:.*}/attach
 Handler: proxyHijack

Swarm Status Code Docker Status Code

101 101

200 200

400 400

404 404

500 500

 Route: /containers/{name:.*}/copy
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /containers/{name:.*}/exec
 Handler: postContainersExec

Swarm Status Code Docker Status Code

201 201

404 404

409

500 500

 Route: /exec/{execid:.*}/start
 Handler: postExecStart

Swarm Status Code Docker Status Code

200 200

404 404

409 409

500

 Route: /exec/{execid:.*}/resize
 Handler: proxyContainer

Swarm Status Code Docker Status Code

201 201

404 404

500

 Route: /networks/create
 Handler: postNetworksCreate

Swarm Status Code Docker Status Code

200

201

400

404

500 500

 Route: /networks/{networkid:.*}/connect
 Handler: proxyNetworkConnect

Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /networks/{networkid:.*}/disconnect
 Handler: proxyNetworkDisconnect
Swarm Status Code Docker Status Code

200 200

404 404

500 500

 Route: /volumes/create
 Handler: postVolumesCreate

Swarm Status Code Docker Status Code

200

201

400

500 500

PUT
 Route: /containers/{name:.*}/archive
 Handler: proxyContainer

Swarm Status Code Docker Status Code

200 200

400 400

403 403

404 404

500 500

DELETE
 Route: /containers/{name:.*}
 Handler: deleteContainers
Swarm Status Code Docker Status Code

200

204

400

404 404

500 500

 Route: /images/{name:.*}
 Handler: deleteImages

Swarm Status Code Docker Status Code

200 200

404 404

409

500 500

 Route: /networks/{networkid:.*}
 Handler: deleteNetworks

Swarm Status Code Docker Status Code

200

204

404 404

500 500

 Route: /volumes/{name:.*}
 Handler: deleteVolumes
Swarm Status Code Docker Status Code

204 204

404 404

409

500 500

Docker Swarm API


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.
Estimated reading time: 3 minutes

The Docker Swarm API is mostly compatible with the Docker Remote API. This document is an
overview of the differences between the Swarm API and the Docker Engine API.

Missing endpoints
Some endpoints have not yet been implemented and return a 404 error.

POST "/images/create" : "docker import" flow not implemented

Endpoints which behave differently


Endpoint — Differences

 GET "/containers/{name:.*}/json" — New field Node added:

"Node": {
  "Id": "ODAI:IC6Q:MSBL:TPB5:HIEE:6IKC:VCAM:QRNH:PRGX:ERZT:OK46:PMFX",
  "Ip": "0.0.0.0",
  "Addr": "http://0.0.0.0:4243",
  "Name": "vagrant-ubuntu-saucy-64"
}

 GET "/containers/{name:.*}/json" — HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0.
 GET "/containers/json" — Node's name prepended to the container name.
 GET "/containers/json" — HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0.
 GET "/containers/json" — Containers started from the swarm official image are hidden by default; use all=1 to display them.
 GET "/images/json" — Use --filter node=<Node name> to show images of the specific node.
 POST "/containers/create" — CpuShares in HostConfig sets the number of CPU cores allocated to the container.

Registry authentication
During container create calls, the Swarm API optionally accepts an X-Registry-Auth header. If
provided, this header is passed down to the engine if the image must be pulled to complete the
create operation.

The following two examples demonstrate how to utilize this using the existing Docker CLI.

Authenticate using registry tokens


Note: This example requires Docker Engine 1.10 with auth token support. For older Engine versions,
refer to authenticate using username and password.
This example uses the jq command-line utility. To run this example, install jq using your package
manager (apt-get install jq or yum install jq).
REPO=yourrepo/yourimage
REPO_USER=yourusername
read -s PASSWORD
AUTH_URL=https://auth.docker.io/token

# obtain a JSON token, and extract the "token" value using 'jq'
TOKEN=$(curl -s -u "${REPO_USER}:${PASSWORD}" \
  "${AUTH_URL}?scope=repository:${REPO}:pull&service=registry.docker.io" \
  | jq -r ".token")
HEADER=$(echo "{\"registrytoken\":\"${TOKEN}\"}"|base64 -w 0 )
echo HEADER=$HEADER

Add the header you’ve calculated to your ~/.docker/config.json:


"HttpHeaders": {
"X-Registry-Auth": "<HEADER string from above>"
}

You can now authenticate to the registry, and run private images on Swarm:

$ docker run --rm -it yourprivateimage:latest

Be aware that tokens are short-lived and expire quickly.
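You can sanity-check the shape of the X-Registry-Auth payload offline, without contacting a registry. This sketch uses a placeholder token value, not a real registry token, and assumes GNU base64 (the same base64 -w 0 form used in the example above):

```shell
# Offline check of the X-Registry-Auth payload shape. TOKEN is a
# placeholder here, not a real registry token. Assumes GNU base64.
TOKEN="example-token"
HEADER=$(printf '{"registrytoken":"%s"}' "$TOKEN" | base64 -w 0)

# Decoding the header recovers the JSON payload intact.
printf '%s' "$HEADER" | base64 -d
```

Decoding prints the original JSON object, confirming the header is a plain base64-encoded JSON document.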

Authenticate using username and password


Note: This authentication method stores your credentials unencrypted on the filesystem. Refer
to Authenticate using registry tokens for a more secure approach.

First, calculate the header

REPO_USER=yourusername
read -s PASSWORD
HEADER=$(echo "{\"username\":\"${REPO_USER}\",\"password\":\"${PASSWORD}\"}" | base64 -w 0)
unset PASSWORD
echo HEADER=$HEADER

Add the header you’ve calculated to your ~/.docker/config.json:


"HttpHeaders": {
"X-Registry-Auth": "<HEADER string from above>"
}

You can now authenticate to the registry, and run private images on Swarm:

$ docker run --rm -it yourprivateimage:latest


Docker Compose
Overview of docker-compose CLI
Estimated reading time: 5 minutes

This page provides the usage information for the docker-compose Command.

Command options overview and help


You can also see this information by running docker-compose --help from the command line.
Define and run multi-container applications with Docker.

Usage:
docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
docker-compose -h|--help

Options:
-f, --file FILE Specify an alternate compose file
(default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name
(default: directory name)
--verbose Show more output
--log-level LEVEL Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--no-ansi Do not print ANSI control characters
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to

--tls Use TLS; implied by --tlsverify


--tlscacert CA_PATH Trust certs signed only by this CA
--tlscert CLIENT_CERT_PATH Path to TLS certificate file
--tlskey TLS_KEY_PATH Path to TLS key file
--tlsverify Use TLS and verify the remote
--skip-hostname-check Don't check the daemon's hostname against the
name specified in the client certificate
--project-directory PATH Specify an alternate working directory
(default: the path of the Compose file)
--compatibility If set, Compose will attempt to convert deploy
keys in v3 files to their non-Swarm equivalent

Commands:
build Build or rebuild services
bundle Generate a Docker bundle from the Compose file
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
images List images
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pull service images
push Push service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
You can use the Docker Compose binary, docker-compose [-f <arg>...] [options] [COMMAND]
[ARGS...], to build and manage multiple services in Docker containers.

Use -f to specify the name and path of one or more Compose files
Use the -f flag to specify the location of a Compose configuration file.

Specifying multiple Compose files


You can supply multiple -f configuration files. When you supply multiple files, Compose combines
them into a single configuration. Compose builds the configuration in the order you supply the files.
Subsequent files override and add to their predecessors.

For example, consider this command line:

$ docker-compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db

The docker-compose.yml file might specify a webapp service.


webapp:
image: examples/web
ports:
- "8000:8000"
volumes:
- "/data"

If the docker-compose.admin.yml also specifies this same service, any matching fields override the
previous file. New values add to the webapp service configuration.
webapp:
build: .
environment:
- DEBUG=1
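Under these rules, the two fragments above would combine into roughly the following effective configuration (illustrative; the exact merge output can vary by Compose version and file format version):

```yaml
webapp:
  image: examples/web    # kept from docker-compose.yml
  build: .               # added by docker-compose.admin.yml
  ports:
    - "8000:8000"
  volumes:
    - "/data"
  environment:
    - DEBUG=1            # added by docker-compose.admin.yml
```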

Use -f with - (dash) as the filename to read the configuration from stdin. When stdin is used, all
paths in the configuration are relative to the current working directory.
The -f flag is optional. If you don’t provide this flag on the command line, Compose traverses the
working directory and its parent directories looking for a docker-compose.yml and a docker-
compose.override.yml file. You must supply at least the docker-compose.yml file. If both files are
present at the same directory level, Compose combines the two files into a single configuration.
The configuration in the docker-compose.override.yml file is applied over and in addition to the
values in the docker-compose.yml file.

Specifying a path to a single Compose file


You can use the -f flag to specify a path to a Compose file that is not located in the current directory,
either from the command line or by setting up a COMPOSE_FILE environment variable in your shell
or in an environment file.
For an example of using the -f option at the command line, suppose you are running the Compose
Rails sample, and have a docker-compose.yml file in a directory called sandbox/rails. You can use
a command like docker-compose pull to get the postgres image for the db service from anywhere by
using the -f flag as follows: docker-compose -f ~/sandbox/rails/docker-compose.yml pull db

Here’s the full example:

$ docker-compose -f ~/sandbox/rails/docker-compose.yml pull db


Pulling db (postgres:latest)...
latest: Pulling from library/postgres
ef0380f84d05: Pull complete
50cf91dc1db8: Pull complete
d3add4cd115c: Pull complete
467830d8a616: Pull complete
089b9db7dc57: Pull complete
6fba0a36935c: Pull complete
81ef0e73c953: Pull complete
338a6c4894dc: Pull complete
15853f32f67c: Pull complete
044c83d92898: Pull complete
17301519f133: Pull complete
dcca70822752: Pull complete
cecf11b8ccf3: Pull complete
Digest: sha256:1364924c753d5ff7e2260cd34dc4ba05ebd40ee8193391220be0f9901d4e1651
Status: Downloaded newer image for postgres:latest
Use -p to specify a project name
Each configuration has a project name. If you supply a -p flag, you can specify a project name. If
you don’t specify the flag, Compose uses the current directory name. See also
the COMPOSE_PROJECT_NAME environment variable.

Set up environment variables


You can set environment variables for various docker-compose options, including the -fand -p flags.
For example, the COMPOSE_FILE environment variable relates to the -f flag,
and COMPOSE_PROJECT_NAME environment variable relates to the -p flag.

Also, you can set some of these variables in an environment file.

Compose CLI environment variables


Estimated reading time: 4 minutes

Several environment variables are available for you to configure the Docker Compose command-line
behaviour.

Variables starting with DOCKER_ are the same as those used to configure the Docker command-line
client. If you’re using docker-machine, then the eval "$(docker-machine env my-docker-
vm)" command should set them to their correct values. (In this example, my-docker-vm is the name of
a machine you created.)
Note: Some of these variables can also be provided using an environment file.

COMPOSE_PROJECT_NAME
Sets the project name. This value is prepended, along with the service name, to the container name on
startup. For example, if your project name is myapp and it includes two services, db and web, then
Compose starts containers named myapp_db_1 and myapp_web_1, respectively.
Setting this is optional. If you do not set it, COMPOSE_PROJECT_NAME defaults to the basename of
the project directory. See also the -p command-line option.
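The naming scheme can be sketched as a simple string template, <project>_<service>_<index>. This is illustrative of Compose 1.x container naming, not Compose's actual implementation:

```shell
# Illustrative naming template, <project>_<service>_<index>, as used by
# Compose 1.x (not Compose's actual implementation).
COMPOSE_PROJECT_NAME=myapp
for service in db web; do
  echo "${COMPOSE_PROJECT_NAME}_${service}_1"
done
```

This prints myapp_db_1 and myapp_web_1, matching the example above.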

COMPOSE_FILE
Specify the path to a Compose file. If not provided, Compose looks for a file named docker-
compose.yml in the current directory and then each parent directory in succession until a file by that
name is found.
This variable supports multiple Compose files separated by a path separator (on Linux and macOS
the path separator is :, on Windows it is ;). For example: COMPOSE_FILE=docker-
compose.yml:docker-compose.prod.yml. The path separator can also be customized
using COMPOSE_PATH_SEPARATOR.
See also the -f command-line option.
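The way the list splits on the separator can be sketched in plain shell. This is illustrative only, not Compose's own parsing, and uses ':' as on Linux/macOS:

```shell
# Illustrative only: how the COMPOSE_FILE list splits on the path
# separator (':' here, as on Linux/macOS); not Compose's own code.
COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
COMPOSE_PATH_SEPARATOR=:

old_ifs=$IFS
IFS=$COMPOSE_PATH_SEPARATOR
set -- $COMPOSE_FILE      # split the list into positional parameters
IFS=$old_ifs

first=$1; second=$2; count=$#
echo "file 1: $first"
echo "file 2: $second"
```

Compose would then load docker-compose.yml first and apply docker-compose.prod.yml on top of it, in list order.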

COMPOSE_API_VERSION
The Docker API only supports requests from clients which report a specific version. If you receive
a client and server don't have same version error using docker-compose, you can work around
this error by setting this environment variable. Set the version value to match the server version.

Setting this variable is intended as a workaround for situations where you need to run temporarily
with a mismatch between the client and server version. For example, if you can upgrade the client
but need to wait to upgrade the server.

Running with this variable set and a known mismatch does prevent some Docker features from
working properly. The exact features that fail would depend on the Docker client and server
versions. For this reason, running with this variable set is only intended as a workaround and it is not
officially supported.

If you run into problems running with this set, resolve the mismatch through upgrade and remove
this setting to see if your problems resolve before notifying support.

DOCKER_HOST
Sets the URL of the docker daemon. As with the Docker client, defaults
to unix:///var/run/docker.sock.

DOCKER_TLS_VERIFY
When set to anything other than an empty string, enables TLS communication with
the docker daemon.
DOCKER_CERT_PATH
Configures the path to the ca.pem, cert.pem, and key.pem files used for TLS verification. Defaults
to ~/.docker.

COMPOSE_HTTP_TIMEOUT
Configures the time (in seconds) a request to the Docker daemon is allowed to hang before
Compose considers it failed. Defaults to 60 seconds.

COMPOSE_TLS_VERSION
Configure which TLS version is used for TLS communication with the docker daemon. Defaults
to TLSv1. Supported values are: TLSv1, TLSv1_1, TLSv1_2.

COMPOSE_CONVERT_WINDOWS_PATHS
Enable path conversion from Windows-style to Unix-style in volume definitions. Users of Docker
Machine and Docker Toolbox on Windows should always set this. Defaults to 0. Supported
values: true or 1 to enable, false or 0 to disable.

COMPOSE_PATH_SEPARATOR
If set, the value of the COMPOSE_FILE environment variable is separated using this character as path
separator.

COMPOSE_FORCE_WINDOWS_HOST
If set, volume declarations using the short syntax are parsed assuming the host path is a Windows
path, even if Compose is running on a UNIX-based system. Supported values: true or 1 to
enable, false or 0 to disable.

COMPOSE_IGNORE_ORPHANS
If set, Compose doesn’t try to detect orphaned containers for the project. Supported
values: true or 1 to enable, false or 0 to disable.
COMPOSE_PARALLEL_LIMIT
Sets a limit for the number of operations Compose can execute in parallel. The default value is 64,
and may not be set lower than 2.

COMPOSE_INTERACTIVE_NO_CLI
If set, Compose doesn’t attempt to use the Docker CLI for interactive run and exec operations. This option is not available on Windows, where the CLI is required for these operations. Supported values: true or 1 to enable, false or 0 to disable.

Command-line completion

Compose comes with command completion for the bash and zsh shells.

Install command completion


Bash
Make sure bash completion is installed.

LINUX

1. On a current Linux OS (in a non-minimal installation), bash completion should be available.

2. Place the completion script in /etc/bash_completion.d/:

   sudo curl -L https://raw.githubusercontent.com/docker/compose/1.24.1/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose

Mac
Install via Homebrew

1. Install with brew install bash-completion.

2. After the installation, Brew displays the installation path. Make sure to place the completion script in that path. For example, when running this command on Mac 10.13.2, place the completion script in /usr/local/etc/bash_completion.d/:

   sudo curl -L https://raw.githubusercontent.com/docker/compose/1.24.1/contrib/completion/bash/docker-compose -o /usr/local/etc/bash_completion.d/docker-compose

3. Add the following to your ~/.bash_profile:

   if [ -f $(brew --prefix)/etc/bash_completion ]; then
     . $(brew --prefix)/etc/bash_completion
   fi

4. You can source your ~/.bash_profile or launch a new terminal to utilize completion.

Install via MacPorts

1. Run sudo port install bash-completion to install bash completion.

2. Add the following lines to ~/.bash_profile:

   if [ -f /opt/local/etc/profile.d/bash_completion.sh ]; then
     . /opt/local/etc/profile.d/bash_completion.sh
   fi

3. You can source your ~/.bash_profile or launch a new terminal to utilize completion.

Zsh
To use the oh-my-zsh instructions below, make sure you have installed oh-my-zsh on your computer.

WITH OH-MY-ZSH SHELL

Add docker and docker-compose to the plugins list in ~/.zshrc to run autocompletion within the oh-my-zsh shell. In the following example, ... represent other Zsh plugins you may have installed.

plugins=(... docker docker-compose)

WITHOUT OH-MY-ZSH SHELL

1. Place the completion script in your /path/to/zsh/completion directory (typically ~/.zsh/completion/):

   $ mkdir -p ~/.zsh/completion
   $ curl -L https://raw.githubusercontent.com/docker/compose/1.24.1/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose

2. Include the directory in your $fpath by adding the following to ~/.zshrc:

   fpath=(~/.zsh/completion $fpath)

3. Make sure compinit is loaded, or load it by adding the following to ~/.zshrc:

   autoload -Uz compinit && compinit -i

4. Then reload your shell:

   exec $SHELL -l

Available completions
Depending on what you typed on the command line so far, it completes:

 available docker-compose commands


 options that are available for a particular command
 service names that make sense in a given context, such as services with running or stopped
instances or services based on images vs. services based on Dockerfiles. For docker-
compose scale, completed service names automatically have “=” appended.
 arguments for selected options. For example, docker-compose kill -s completes some
signals like SIGHUP and SIGUSR1.

Enjoy working with Compose faster and with fewer typos!

docker-compose build

Usage: build [options] [--build-arg key=val...] [SERVICE...]

Options:
--compress Compress the build context using gzip.
--force-rm Always remove intermediate containers.
--no-cache Do not use cache when building the image.
--pull Always attempt to pull a newer version of the image.
-m, --memory MEM Sets memory limit for the build container.
--build-arg key=val Set build-time variables for services.
--parallel Build images in parallel.

Services are built once and then tagged, by default as project_service. For
example, composetest_db. If the Compose file specifies an image name, the image is tagged with
that name, substituting any variables beforehand. See variable substitution.
If you change a service’s Dockerfile or the contents of its build directory, run docker-compose
build to rebuild it.
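As an illustrative sketch of the tagging behavior described above, the service and registry names below are hypothetical; the TAG variable is substituted from the environment before the built image is tagged:

```yaml
version: "3.7"
services:
  webapp:
    build: ./dir
    image: myregistry.example.com/webapp:${TAG}   # hypothetical registry and variable
```

Running docker-compose build then tags the result as myregistry.example.com/webapp:&lt;value of TAG&gt; instead of the default project_service name.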

docker-compose bundle

Usage: bundle [options]

Options:
--push-images Automatically push images for any services
which have a `build` option specified.

-o, --output PATH Path to write the bundle file to.


Defaults to "<project name>.dab".

Generate a Distributed Application Bundle (DAB) from the Compose file.

Images must have digests stored, which requires interaction with a Docker registry. If digests aren’t
stored for all images, you can fetch them with docker-compose pull or docker-compose push. To
push images automatically when bundling, pass --push-images. Only services with a build option
specified have their images pushed.

docker-compose config

Usage: config [options]

Options:
--resolve-image-digests Pin image tags to digests.
-q, --quiet Only validate the configuration, don't print anything.
--services Print the service names, one per line.
--volumes Print the volume names, one per line.
--hash="*" Print the service config hash, one per line.
Set "service1,service2" for a list of specified services
or use the wildcard symbol to display all services.

Validate and view the Compose file.

docker-compose create

Creates containers for a service.


This command is deprecated. Use the `up` command with `--no-start` instead.

Usage: create [options] [SERVICE...]

Options:
--force-recreate Recreate containers even if their configuration and
image haven't changed. Incompatible with --no-recreate.
--no-recreate If containers already exist, don't recreate them.
Incompatible with --force-recreate.
--no-build Don't build an image, even if it's missing.
--build Build images before creating containers.

docker-compose down

Usage: down [options]

Options:
--rmi type Remove images. Type must be one of:
'all': Remove all images used by any service.
'local': Remove only images that don't have a
custom tag set by the `image` field.
-v, --volumes Remove named volumes declared in the `volumes`
section of the Compose file and anonymous volumes
attached to containers.
--remove-orphans Remove containers for services not defined in the
Compose file
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)

Stops containers and removes containers, networks, volumes, and images created by up.

By default, the only things removed are:

 Containers for services defined in the Compose file


 Networks defined in the networks section of the Compose file
 The default network, if one is used

Networks and volumes defined as external are never removed.
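For example, in the following sketch (service, image, and network names are hypothetical), docker-compose down removes the web container but leaves the pre-existing shared network untouched:

```yaml
version: "3.7"
services:
  web:
    image: nginx:alpine    # hypothetical image
    networks:
      - shared
networks:
  shared:
    external: true         # pre-existing network; `down` never removes it
```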

docker-compose events

Usage: events [options] [SERVICE...]

Options:
--json Output events as a stream of json objects

Stream container events for every container in the project.

With the --json flag, a JSON object is printed one per line with the format:
{
    "time": "2015-11-20T18:01:03.615550",
    "type": "container",
    "action": "create",
    "id": "213cf7...5fc39a",
    "service": "web",
    "attributes": {
        "name": "application_web_1",
        "image": "alpine:edge"
    }
}

docker-compose exec

Usage: exec [options] [-e KEY=VAL...] SERVICE COMMAND [ARGS...]

Options:
-d, --detach Detached mode: Run command in the background.
--privileged Give extended privileges to the process.
-u, --user USER Run the command as this user.
-T Disable pseudo-tty allocation. By default `docker-compose exec`
allocates a TTY.
--index=index index of the container if there are multiple
instances of a service [default: 1]
-e, --env KEY=VAL Set environment variables (can be used multiple times,
not supported in API < 1.25)
-w, --workdir DIR Path to workdir directory for this command.

This is the equivalent of docker exec. With this subcommand, you can run arbitrary commands in your services. By default, commands allocate a TTY, so you can use a command such as docker-compose exec web sh to get an interactive prompt.

docker-compose help

Usage: help COMMAND

Displays help and usage instructions for a command.


docker-compose kill

Usage: kill [options] [SERVICE...]

Options:
-s SIGNAL SIGNAL to send to the container.
Default signal is SIGKILL.

Forces running containers to stop by sending a SIGKILL signal. Optionally, a different signal can be passed, for example:
docker-compose kill -s SIGINT

docker-compose logs

Usage: logs [options] [SERVICE...]

Options:
--no-color Produce monochrome output.
-f, --follow Follow log output.
-t, --timestamps Show timestamps.
--tail="all" Number of lines to show from the end of the logs
for each container.

Displays log output from services.

docker-compose pause

Usage: pause [SERVICE...]

Pauses running containers of a service. They can be unpaused with docker-compose unpause.

docker-compose port

Usage: port [options] SERVICE PRIVATE_PORT

Options:
--protocol=proto tcp or udp [default: tcp]
--index=index index of the container if there are multiple
instances of a service [default: 1]

Prints the public port for a port binding.

docker-compose ps

Usage: ps [options] [SERVICE...]

Options:
-q, --quiet Only display IDs
--services Display services
--filter KEY=VAL Filter services by a property
-a, --all Show all stopped containers (including those created by the
run command)

Lists containers.

$ docker-compose ps
          Name                        Command                State             Ports
---------------------------------------------------------------------------------------------
mywordpress_db_1          docker-entrypoint.sh mysqld      Up (healthy)   3306/tcp
mywordpress_wordpress_1   /entrypoint.sh apache2-for ...   Restarting     0.0.0.0:8000->80/tcp

docker-compose pull

Usage: pull [options] [SERVICE...]

Options:
--ignore-pull-failures Pull what it can and ignores images with pull failures.
--parallel Deprecated, pull multiple images in parallel (enabled by
default).
--no-parallel Disable parallel pulling.
-q, --quiet Pull without printing progress information
--include-deps Also pull services declared as dependencies

Pulls an image associated with a service defined in a docker-compose.yml or docker-stack.yml file, but does not start containers based on those images.

For example, suppose you have this docker-compose.yml file from the Quickstart: Compose and Rails sample.
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db

If you run docker-compose pull ServiceName in the same directory as the docker-compose.yml file that defines the service, Docker pulls the associated image. For example, to pull the postgres image configured as the db service in our example, you would run docker-compose pull db.
$ docker-compose pull db
Pulling db (postgres:latest)...
latest: Pulling from library/postgres
cd0a524342ef: Pull complete
9c784d04dcb0: Pull complete
d99dddf7e662: Pull complete
e5bff71e3ce6: Pull complete
cb3e0a865488: Pull complete
31295d654cd5: Pull complete
fc930a4e09f5: Pull complete
8650cce8ef01: Pull complete
61949acd8e52: Pull complete
527a203588c0: Pull complete
26dec14ac775: Pull complete
0efc0ed5a9e5: Pull complete
40cd26695b38: Pull complete
Digest: sha256:fd6c0e2a9d053bebb294bb13765b3e01be7817bf77b01d58c2377ff27a4a46dc
Status: Downloaded newer image for postgres:latest

docker-compose push

Usage: push [options] [SERVICE...]

Options:
--ignore-push-failures Push what it can and ignores images with push failures.

Pushes images for services to their respective registry/repository.

The following assumptions are made:

 You are pushing an image you have built locally

 You have access to the build key

Example
version: '3'
services:
  service1:
    build: .
    image: localhost:5000/yourimage  # goes to local registry

  service2:
    build: .
    image: youruser/yourimage        # goes to youruser DockerHub registry

docker-compose restart

Usage: restart [options] [SERVICE...]

Options:
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)

Restarts all stopped and running services.

If you make changes to your docker-compose.yml configuration, these changes are not reflected after running this command.

For example, changes to environment variables (which are added after a container is built, but
before the container’s command is executed) are not updated after restarting.

If you are looking to configure a service’s restart policy, refer to restart in Compose file v3 and restart in Compose v2. Note that if you are deploying a stack in swarm mode, you should use restart_policy instead.
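As a minimal sketch of the swarm-mode alternative (the service and image names are hypothetical), restart_policy lives under the deploy key:

```yaml
version: "3.7"
services:
  web:
    image: nginx:alpine        # hypothetical image
    deploy:
      restart_policy:
        condition: on-failure  # restart only when the container exits non-zero
        delay: 5s
        max_attempts: 3
```

Like the rest of the deploy key, this only takes effect with docker stack deploy, not docker-compose up.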

docker-compose rm

Usage: rm [options] [SERVICE...]


Options:
-f, --force Don't ask to confirm removal
-s, --stop Stop the containers, if required, before removing
-v Remove any anonymous volumes attached to containers

Removes stopped service containers.

By default, anonymous volumes attached to containers are not removed. You can override this
with -v. To list all volumes, use docker volume ls.

Any data which is not in a volume is lost.

Running the command with no options also removes one-off containers created by docker-compose
up or docker-compose run:

$ docker-compose rm
Going to remove djangoquickstart_web_run_1
Are you sure? [yN] y
Removing djangoquickstart_web_run_1 ... done

docker-compose run

Usage:
run [options] [-v VOLUME...] [-p PORT...] [-e KEY=VAL...] [-l KEY=VALUE...]
SERVICE [COMMAND] [ARGS...]

Options:
-d, --detach Detached mode: Run container in the background, print
new container name.
--name NAME Assign a name to the container
--entrypoint CMD Override the entrypoint of the image.
-e KEY=VAL Set an environment variable (can be used multiple times)
-l, --label KEY=VAL Add or override a label (can be used multiple times)
-u, --user="" Run as specified username or uid
--no-deps Don't start linked services.
--rm Remove container after run. Ignored in detached mode.
-p, --publish=[] Publish a container's port(s) to the host
--service-ports Run command with the service's ports enabled and mapped
to the host.
--use-aliases Use the service's network aliases in the network(s) the
container connects to.
-v, --volume=[] Bind mount a volume (default [])
-T Disable pseudo-tty allocation. By default `docker-compose
run`
allocates a TTY.
-w, --workdir="" Working directory inside the container

Runs a one-time command against a service. For example, the following command starts
the web service and runs bash as its command.
docker-compose run web bash

Commands you use with run start in new containers with configuration defined by that of the service,
including volumes, links, and other details. However, there are two important differences.
First, the command passed by run overrides the command defined in the service configuration. For
example, if the web service configuration is started with bash, then docker-compose run web python
app.py overrides it with python app.py.
The second difference is that the docker-compose run command does not create any of the ports
specified in the service configuration. This prevents port collisions with already-open ports. If you do
want the service’s ports to be created and mapped to the host, specify the --service-ports flag:
docker-compose run --service-ports web python manage.py shell

Alternatively, manual port mapping can be specified with the --publish or -p options, just as when
using docker run:
docker-compose run --publish 8080:80 -p 2022:22 -p 127.0.0.1:2021:21 web python
manage.py shell

If you start a service configured with links, the run command first checks to see if the linked service
is running and starts the service if it is stopped. Once all the linked services are running,
the run executes the command you passed it. For example, you could run:
docker-compose run db psql -h db -U docker

This opens an interactive PostgreSQL shell for the linked db container.


If you do not want the run command to start linked containers, use the --no-deps flag:
docker-compose run --no-deps web python manage.py shell

If you want to remove the container after running while overriding the container’s restart policy, use
the --rm flag:
docker-compose run --rm web python manage.py db upgrade

This runs a database upgrade script, and removes the container when finished running, even if a
restart policy is specified in the service configuration.

docker-compose scale

Note: This command is deprecated. Use the up command with the --scale flag instead. Beware that using up with the --scale flag has some subtle differences from the scale command, as it incorporates the behaviour of the up command.
Usage: scale [options] [SERVICE=NUM...]

Options:
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)

Sets the number of containers to run for a service.

Numbers are specified as arguments in the form service=num. For example:


docker-compose scale web=2 worker=3

Tip: Alternatively, in Compose file version 3.x, you can specify replicas under the deploy key as part of a service configuration for Swarm mode. The deploy key and its sub-options (including replicas) only work with the docker stack deploy command, not docker-compose up or docker-compose run.
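As a sketch of that alternative (the service and image names are hypothetical), the replica count is declared in the Compose file itself:

```yaml
version: "3.7"
services:
  worker:
    image: myapp/worker:latest  # hypothetical image
    deploy:
      replicas: 3               # equivalent of `scale worker=3`, but for swarm mode
```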

docker-compose start

Usage: start [SERVICE...]


Starts existing containers for a service.

docker-compose stop

Usage: stop [options] [SERVICE...]

Options:
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)

Stops running containers without removing them. They can be started again with docker-compose start.

docker-compose top

Usage: top [SERVICE...]

Displays the running processes.

$ docker-compose top
compose_service_a_1
PID USER TIME COMMAND
----------------------------
4060 root 0:00 top

compose_service_b_1
PID USER TIME COMMAND
----------------------------
4115 root 0:00 top

docker-compose unpause

Usage: unpause [SERVICE...]

Unpauses paused containers of a service.

docker-compose up

Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...]

Options:
-d, --detach Detached mode: Run containers in the background,
print new container names. Incompatible with
--abort-on-container-exit.
--no-color Produce monochrome output.
--quiet-pull Pull without printing progress information
--no-deps Don't start linked services.
--force-recreate Recreate containers even if their configuration
and image haven't changed.
--always-recreate-deps Recreate dependent containers.
Incompatible with --no-recreate.
--no-recreate If containers already exist, don't recreate
them. Incompatible with --force-recreate and -V.
--no-build Don't build an image, even if it's missing.
--no-start Don't start the services after creating them.
--build Build images before starting containers.
--abort-on-container-exit Stops all containers if any container was
stopped. Incompatible with -d.
-t, --timeout TIMEOUT Use this timeout in seconds for container
shutdown when attached or when containers are
already running. (default: 10)
-V, --renew-anon-volumes Recreate anonymous volumes instead of retrieving
data from the previous containers.
--remove-orphans Remove containers for services not defined
in the Compose file.
--exit-code-from SERVICE Return the exit code of the selected service
container. Implies --abort-on-container-exit.
--scale SERVICE=NUM Scale SERVICE to NUM instances. Overrides the
`scale` setting in the Compose file if present.

Builds, (re)creates, starts, and attaches to containers for a service.

Unless they are already running, this command also starts any linked services.

The docker-compose up command aggregates the output of each container (essentially running docker-compose logs -f). When the command exits, all containers are stopped. Running docker-compose up -d starts the containers in the background and leaves them running.
If there are existing containers for a service, and the service’s configuration or image was changed
after the container’s creation, docker-compose up picks up the changes by stopping and recreating
the containers (preserving mounted volumes). To prevent Compose from picking up changes, use
the --no-recreate flag.
If you want to force Compose to stop and recreate all containers, use the --force-recreate flag.
If the process encounters an error, the exit code for this command is 1.
If the process is interrupted using SIGINT (ctrl + C) or SIGTERM, the containers are stopped, and the
exit code is 0.
If SIGINT or SIGTERM is sent again during this shutdown phase, the running containers are killed, and
the exit code is 2.

Compose file reference


Compose file version 3 reference

Reference and guidelines


These topics describe version 3 of the Compose file format. This is the newest version.

Compose and Docker compatibility matrix


There are several versions of the Compose file format – 1, 2, 2.x, and 3.x. The table below is a quick
look. For full details on what each version includes and how to upgrade, see About versions and
upgrading.

This table shows which Compose file versions support specific Docker releases.

Compose file format Docker Engine release

3.7 18.06.0+

3.6 18.02.0+

3.5 17.12.0+

3.4 17.09.0+

3.3 17.06.0+

3.2 17.04.0+

3.1 1.13.1+

3.0 1.13.0+

2.4 17.12.0+

2.3 17.06.0+

2.2 1.13.0+

2.1 1.12.0+

2.0 1.10.0+

1.0 1.9.1+

In addition to the Compose file format versions shown in the table, Compose itself is on a release schedule, as shown in Compose releases, but file format versions do not necessarily increment with each release. For example, Compose file format 3.0 was first introduced in Compose release 1.10.0, and versioned gradually in subsequent releases.

Compose file structure and examples


Example Compose file version 3

The topics on this reference page are organized alphabetically by top-level key to reflect the
structure of the Compose file itself. Top-level keys that define a section in the configuration file such
as build, deploy, depends_on, networks, and so on, are listed with the options that support them as
sub-topics. This maps to the <key>: <option>: <value> indent structure of the Compose file.

A good place to start is the Getting Started tutorial which uses version 3 Compose stack files to
implement multi-container apps, service definitions, and swarm mode. Here are some Compose files
used in the tutorial.

 Your first docker-compose.yml File

 Add a new service and redeploy

Another good reference is the Compose file for the voting app sample used in the Docker for
Beginners lab topic on Deploying an app to a Swarm. This is also shown on the accordion at the top
of this section.

Service configuration reference


The Compose file is a YAML file defining services, networks and volumes. The default path for a
Compose file is ./docker-compose.yml.
Tip: You can use either a .yml or .yaml extension for this file. They both work.
A service definition contains configuration that is applied to each container started for that service,
much like passing command-line parameters to docker container create. Likewise, network and
volume definitions are analogous to docker network create and docker volume create.
As with docker container create, options specified in the Dockerfile, such
as CMD, EXPOSE, VOLUME, ENV, are respected by default - you don’t need to specify them again
in docker-compose.yml.
You can use environment variables in configuration values with a Bash-like ${VARIABLE} syntax - see variable substitution for full details.
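For instance, in this sketch (the variable name is hypothetical), the image tag is resolved from the shell environment where Compose runs:

```yaml
version: "3.7"
services:
  db:
    image: "postgres:${POSTGRES_VERSION}"  # resolved from the shell environment
```

Running POSTGRES_VERSION=11 docker-compose up would then use the postgres:11 image.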

This section contains a list of all configuration options supported by a service definition in version 3.

build
Configuration options that are applied at build time.

build can be specified either as a string containing a path to the build context:

version: "3.7"
services:
  webapp:
    build: ./dir

Or, as an object with the path specified under context and optionally Dockerfile and args:

version: "3.7"
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1

If you specify image as well as build, then Compose names the built image with the webapp and
optional tag specified in image:
build: ./dir
image: webapp:tag

This results in an image named webapp and tagged tag, built from ./dir.
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
The docker stack command accepts only pre-built images.
CONTEXT

Either a path to a directory containing a Dockerfile, or a URL to a git repository.

When the value supplied is a relative path, it is interpreted as relative to the location of the Compose
file. This directory is also the build context that is sent to the Docker daemon.

Compose builds and tags it with a generated name, and uses that image thereafter.

build:
  context: ./dir

DOCKERFILE

Alternate Dockerfile.

Compose uses an alternate file to build with. A build path must also be specified.
build:
  context: .
  dockerfile: Dockerfile-alternate

ARGS

Add build arguments, which are environment variables accessible only during the build process.

First, specify the arguments in your Dockerfile:

ARG buildno
ARG gitcommithash

RUN echo "Build number: $buildno"
RUN echo "Based on commit: $gitcommithash"

Then specify the arguments under the build key. You can pass a mapping or a list:
build:
  context: .
  args:
    buildno: 1
    gitcommithash: cdc3b19

build:
  context: .
  args:
    - buildno=1
    - gitcommithash=cdc3b19

Note: In your Dockerfile, if you specify ARG before the FROM instruction, ARG is not available in the
build instructions under FROM. If you need an argument to be available in both places, also specify it
under the FROM instruction. See Understand how ARGS and FROM interact for usage details.

You can omit the value when specifying a build argument, in which case its value at build time is the
value in the environment where Compose is running.

args:
  - buildno
  - gitcommithash

Note: YAML boolean values (true, false, yes, no, on, off) must be enclosed in quotes, so that the
parser interprets them as strings.
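For instance, in this sketch (the argument name is hypothetical), quoting keeps the value a string rather than a YAML boolean:

```yaml
build:
  context: .
  args:
    use_cache: "yes"  # quoted; an unquoted `yes` would be parsed as a YAML boolean
```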
CACHE_FROM
Note: This option is new in v3.2

A list of images that the engine uses for cache resolution.

build:
  context: .
  cache_from:
    - alpine:latest
    - corp/web_app:3.14

LABELS
Note: This option is new in v3.3

Add metadata to the resulting image using Docker labels. You can use either an array or a
dictionary.

We recommend that you use reverse-DNS notation to prevent your labels from conflicting with those
used by other software.

build:
  context: .
  labels:
    com.example.description: "Accounting webapp"
    com.example.department: "Finance"
    com.example.label-with-empty-value: ""

build:
  context: .
  labels:
    - "com.example.description=Accounting webapp"
    - "com.example.department=Finance"
    - "com.example.label-with-empty-value"
SHM_SIZE
Added in version 3.5 file format
Set the size of the /dev/shm partition for this build’s containers. Specify as an integer value
representing the number of bytes or as a string expressing a byte value.
build:
  context: .
  shm_size: '2gb'

build:
  context: .
  shm_size: 10000000

TARGET
Added in version 3.4 file format
Build the specified stage as defined inside the Dockerfile. See the multi-stage build docs for details.
build:
  context: .
  target: prod

cap_add, cap_drop
Add or drop container capabilities. See man 7 capabilities for a full list.
cap_add:
- ALL

cap_drop:
- NET_ADMIN
- SYS_ADMIN

Note: These options are ignored when deploying a stack in swarm mode with a (version 3)
Compose file.
cgroup_parent
Specify an optional parent cgroup for the container.

cgroup_parent: m-executor-abcd

Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
command
Override the default command.

command: bundle exec thin -p 3000

The command can also be a list, in a manner similar to dockerfile:

command: ["bundle", "exec", "thin", "-p", "3000"]

configs
Grant access to configs on a per-service basis using the per-service configs configuration. Two
different syntax variants are supported.
Note: The config must already exist or be defined in the top-level configs configuration of this stack file, or stack deployment fails.

For more information on configs, see configs.


SHORT SYNTAX
The short syntax variant only specifies the config name. This grants the container access to the
config and mounts it at /<config_name> within the container. The source name and destination
mountpoint are both set to the config name.
The following example uses the short syntax to grant the redis service access to
the my_config and my_other_config configs. The value of my_config is set to the contents of the
file ./my_config.txt, and my_other_config is defined as an external resource, which means that it
has already been defined in Docker, either by running the docker config create command or by
another stack deployment. If the external config does not exist, the stack deployment fails with
a config not found error.
Note: config definitions are only supported in version 3.3 and higher of the Compose file format.
version: "3.7"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    configs:
      - my_config
      - my_other_config
configs:
  my_config:
    file: ./my_config.txt
  my_other_config:
    external: true

LONG SYNTAX

The long syntax provides more granularity in how the config is created within the service’s task
containers.

 source: The name of the config as it exists in Docker.


 target: The path and name of the file to be mounted in the service’s task containers.
Defaults to /<source> if not specified.
 uid and gid: The numeric UID or GID that owns the mounted config file within the service’s task containers. Both default to 0 on Linux if not specified. Not supported on Windows.
 mode: The permissions for the file that is mounted within the service’s task containers, in octal
notation. For instance, 0444 represents world-readable. The default is 0444. Configs cannot
be writable because they are mounted in a temporary filesystem, so if you set the writable
bit, it is ignored. The executable bit can be set. If you aren’t familiar with UNIX file permission
modes, you may find this permissions calculator useful.

The following example sets the name of my_config to redis_config within the container, sets the
mode to 0440 (group-readable) and sets the user and group to 103. The redisservice does not have
access to the my_other_config config.
version: "3.7"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    configs:
      - source: my_config
        target: /redis_config
        uid: '103'
        gid: '103'
        mode: 0440
configs:
  my_config:
    file: ./my_config.txt
  my_other_config:
    external: true

You can grant a service access to multiple configs and you can mix long and short syntax. Defining
a config does not imply granting a service access to it.

container_name
Specify a custom container name, rather than a generated default name.

container_name: my-web-container

Because Docker container names must be unique, you cannot scale a service beyond 1 container if
you have specified a custom name. Attempting to do so results in an error.

Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
credential_spec
Note: This option was added in v3.3. Using group Managed Service Account (gMSA) configurations
with compose files is supported in Compose version 3.8.
Configure the credential spec for managed service account. This option is only used for services
using Windows containers. The credential_spec must be in the
format file://<filename> or registry://<value-name>.
When using file:, the referenced file must be present in the CredentialSpecs subdirectory in the
Docker data directory, which defaults to C:\ProgramData\Docker\ on Windows. The following
example loads the credential spec from a file named C:\ProgramData\Docker\CredentialSpecs\my-
credential-spec.json:

credential_spec:
  file: my-credential-spec.json

When using registry:, the credential spec is read from the Windows registry on the daemon’s host.
A registry value with the given name must be located in:
HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Virtualization\Containers\CredentialSpecs

The following example loads the credential spec from a value named my-credential-spec in the
registry:
credential_spec:
  registry: my-credential-spec

EXAMPLE GMSA CONFIGURATION


When configuring a gMSA credential spec for a service, you only need to specify a credential spec
with config, as shown in the following example:
version: "3.8"
services:
  myservice:
    image: myimage:latest
    credential_spec:
      config: my_credential_spec

configs:
  my_credential_spec:
    file: ./my-credential-spec.json

depends_on
Express dependency between services. Service dependencies cause the following behaviors:

 docker-compose up starts services in dependency order. In the following
example, db and redis are started before web.
 docker-compose up SERVICE automatically includes SERVICE’s dependencies. In the following
example, docker-compose up web also creates and starts db and redis.
 docker-compose stop stops services in dependency order. In the following example, web is
stopped before db and redis.

Simple example:

version: "3.7"
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

There are several things to be aware of when using depends_on:


 depends_on does not wait for db and redis to be “ready” before starting web - only until they
have been started. If you need to wait for a service to be ready, see Controlling startup
order for more on this problem and strategies for solving it.
 Version 3 no longer supports the condition form of depends_on.
 The depends_on option is ignored when deploying a stack in swarm mode with a version 3
Compose file.
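The startup ordering that depends_on produces can be sketched as a topological sort over the dependency graph; the dictionary below mirrors the example compose file, and the function name is invented for illustration:

```python
def startup_order(deps: dict) -> list:
    """Return services in an order where every dependency starts first."""
    order, seen = [], set()

    def visit(svc: str) -> None:
        if svc in seen:
            return
        seen.add(svc)
        for dep in deps.get(svc, []):  # start dependencies first
            visit(dep)
        order.append(svc)

    for svc in deps:
        visit(svc)
    return order

# Mirrors the example: web depends on db and redis.
print(startup_order({"web": ["db", "redis"], "redis": [], "db": []}))
```

Stopping order is simply the reverse of this list, which is why web is stopped before db and redis.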
deploy
Version 3 only.
Specify configuration related to the deployment and running of services. This only takes effect when
deploying to a swarm with docker stack deploy, and is ignored by docker-compose upand docker-
compose run.

version: "3.7"
services:
  redis:
    image: redis:alpine
    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure

Several sub-options are available:


ENDPOINT_MODE

Specify a service discovery method for external clients connecting to a swarm.

Version 3.3 only.


 endpoint_mode: vip - Docker assigns the service a virtual IP (VIP) that acts as the front end
for clients to reach the service on a network. Docker routes requests between the client and
available worker nodes for the service, without client knowledge of how many nodes are
participating in the service or their IP addresses or ports. (This is the default.)
 endpoint_mode: dnsrr - DNS round-robin (DNSRR) service discovery does not use a single
virtual IP. Docker sets up DNS entries for the service such that a DNS query for the service
name returns a list of IP addresses, and the client connects directly to one of these. DNS
round-robin is useful in cases where you want to use your own load balancer, or for Hybrid
Windows and Linux applications.
version: "3.7"

services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: vip

  mysql:
    image: mysql
    volumes:
      - db-data:/var/lib/mysql/data
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr

volumes:
  db-data:

networks:
  overlay:

The options for endpoint_mode also work as flags on the swarm mode CLI command docker service
create. For a quick list of all swarm related docker commands, see Swarm mode CLI commands.

To learn more about service discovery and networking in swarm mode, see Configure service
discovery in the swarm mode topics.
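With dnsrr, a client resolving the service name gets the full list of task IPs and typically rotates through them itself; a minimal sketch of that client-side behavior (the IP addresses are invented for illustration):

```python
from itertools import cycle

# Hypothetical IPs a DNS query might return for a 3-task dnsrr service.
task_ips = ["10.0.0.3", "10.0.0.4", "10.0.0.5"]
picker = cycle(task_ips)

# Each new connection goes to the next task in round-robin order.
connections = [next(picker) for _ in range(5)]
print(connections)
```

With vip, by contrast, the client sees only the single virtual IP and this distribution happens inside Docker's routing mesh.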
LABELS

Specify labels for the service. These labels are only set on the service, and not on any containers for
the service.

version: "3.7"
services:
  web:
    image: web
    deploy:
      labels:
        com.example.description: "This label will appear on the web service"

To set labels on containers instead, use the labels key outside of deploy:
version: "3.7"
services:
  web:
    image: web
    labels:
      com.example.description: "This label will appear on all containers for the web service"

MODE
Either global (exactly one container per swarm node) or replicated (a specified number of
containers). The default is replicated. (To learn more, see Replicated and global services in
the swarm topics.)
version: "3.7"
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    deploy:
      mode: global

PLACEMENT

Specify placement of constraints and preferences. See the docker service create documentation for
a full description of the syntax and available types of constraints and preferences.

version: "3.7"
services:
  db:
    image: postgres
    deploy:
      placement:
        constraints:
          - node.role == manager
          - engine.labels.operatingsystem == ubuntu 14.04
        preferences:
          - spread: node.labels.zone

REPLICAS
If the service is replicated (which is the default), specify the number of containers that should be
running at any given time.
version: "3.7"
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 6

RESOURCES

Configures resource constraints.

Note: This replaces the older resource constraint options for non swarm mode in Compose files prior
to version 3 (cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit, mem_swappiness), as
described in Upgrading version 2.x to 3.x.

Each of these is a single value, analogous to its docker service create counterpart.

In this general example, the redis service is constrained to use no more than 50M of memory
and 0.50 (50% of a single core) of available processing time (CPU), and has 20M of memory
and 0.25 CPU time reserved (as always available to it).
version: "3.7"
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M

The topics below describe available options to set resource constraints on services or containers in
a swarm.

Looking for options to set resources on non swarm mode containers?

The options described here are specific to the deploy key and swarm mode. If you want to set
resource constraints on non swarm deployments, use Compose file format version 2 CPU, memory,
and other resource options. If you have further questions, refer to the discussion on the GitHub
issue docker/compose/4513.
Out Of Memory Exceptions (OOME)

If your services or containers attempt to use more memory than the system has available, you may
experience an Out Of Memory Exception (OOME) and a container, or the Docker daemon, might be
killed by the kernel OOM killer. To prevent this from happening, ensure that your application runs on
hosts with adequate memory and see Understand the risks of running out of memory.
RESTART_POLICY
Configures if and how to restart containers when they exit. Replaces restart.

 condition: One of none, on-failure or any (default: any).
 delay: How long to wait between restart attempts, specified as a duration (default: 0).
 max_attempts: How many times to attempt to restart a container before giving up (default:
never give up). If the restart does not succeed within the configured window, this attempt
doesn’t count toward the configured max_attempts value. For example, if max_attempts is set
to ‘2’, and the restart fails on the first attempt, more than two restarts may be attempted.
 window: How long to wait before deciding if a restart has succeeded, specified as
a duration (default: decide immediately).

version: "3.7"
services:
  redis:
    image: redis:alpine
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
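Duration values such as 5s and 120s combine a number with a unit suffix, and suffixes can be chained (1m30s). A small parser sketch covering the common units — the function and unit table are illustrative, not a Compose API:

```python
import re

UNITS = {"us": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}

def parse_duration(text: str) -> float:
    """Convert a duration string such as '1m30s' into seconds."""
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(us|ms|s|m|h)", text):
        total += float(value) * UNITS[unit]
    return total

print(parse_duration("5s"))     # 5.0
print(parse_duration("120s"))   # 120.0
print(parse_duration("1m30s"))  # 90.0
```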

ROLLBACK_CONFIG
Version 3.7 file format and up

Configures how the service should be rolled back in case of a failing update.

 parallelism: The number of containers to roll back at a time. If set to 0, all containers
roll back simultaneously.
 delay: The time to wait between each container group’s rollback (default 0s).
 failure_action: What to do if a rollback fails. One of continue or pause (default pause).
 monitor: Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 0s).
 max_failure_ratio: Failure rate to tolerate during a rollback (default 0).
 order: Order of operations during rollbacks. One of stop-first (old task is stopped before
starting new one), or start-first (new task is started first, and the running tasks briefly
overlap) (default stop-first).

UPDATE_CONFIG

Configures how the service should be updated. Useful for configuring rolling updates.

 parallelism: The number of containers to update at a time.
 delay: The time to wait between updating a group of containers.
 failure_action: What to do if an update fails. One of continue, rollback,
or pause (default: pause).
 monitor: Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 0s).
 max_failure_ratio: Failure rate to tolerate during an update.
 order: Order of operations during updates. One of stop-first (old task is stopped before
starting new one), or start-first (new task is started first, and the running tasks briefly
overlap) (default stop-first).

Note: order is only supported for v3.4 and higher of the compose file format.
version: "3.7"
services:
  vote:
    image: dockersamples/examplevotingapp_vote:before
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
        order: stop-first
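With replicas: 2 and parallelism: 2 the update above completes in a single batch; the batch arithmetic can be sketched as follows (the helper name is made up for illustration):

```python
import math

def update_batches(replicas: int, parallelism: int) -> int:
    """Number of batches a rolling update needs, `parallelism` tasks at a time."""
    if parallelism <= 0:  # 0 means update everything at once
        return 1
    return math.ceil(replicas / parallelism)

print(update_batches(2, 2))  # the example above: one batch
print(update_batches(6, 2))  # six replicas, two at a time: three batches
```

Between batches, the scheduler waits for the configured delay (10s above) before moving on.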

NOT SUPPORTED FOR DOCKER STACK DEPLOY


The following sub-options (supported for docker-compose up and docker-compose run) are not
supported for docker stack deploy or the deploy key.

 build
 cgroup_parent
 container_name
 devices
 tmpfs
 external_links
 links
 network_mode
 restart
 security_opt
 sysctls
 userns_mode

Tip: See the section on how to configure volumes for services, swarms, and docker-stack.yml files.
Volumes are supported but to work with swarms and services, they must be configured as named
volumes or associated with services that are constrained to nodes with access to the requisite
volumes.
devices
List of device mappings. Uses the same format as the --device docker client create option.
devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"

Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
dns
Custom DNS servers. Can be a single value or a list.

dns: 8.8.8.8
dns:
  - 8.8.8.8
  - 9.9.9.9

dns_search
Custom DNS search domains. Can be a single value or a list.

dns_search: example.com
dns_search:
  - dc1.example.com
  - dc2.example.com

entrypoint
Override the default entrypoint.

entrypoint: /code/entrypoint.sh

The entrypoint can also be a list, in a manner similar to the Dockerfile:

entrypoint:
  - php
  - -d
  - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
  - -d
  - memory_limit=-1
  - vendor/bin/phpunit

Note: Setting entrypoint both overrides any default entrypoint set on the service’s image with
the ENTRYPOINT Dockerfile instruction, and clears out any default command on the image - meaning
that if there’s a CMD instruction in the Dockerfile, it is ignored.
env_file
Add environment variables from a file. Can be a single value or a list.

If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to
the directory that file is in.

Environment variables declared in the environment section override these values – this holds true
even if those values are empty or undefined.

env_file: .env
env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env

Compose expects each line in an env file to be in VAR=VAL format. Lines beginning with # are treated
as comments and are ignored. Blank lines are also ignored.
# Set Rails/Rack environment
RACK_ENV=development

Note: If your service specifies a build option, variables defined in environment files
are not automatically visible during the build. Use the args sub-option of build to define build-time
environment variables.
The value of VAL is used as is and not modified at all. For example if the value is surrounded by
quotes (as is often the case of shell variables), the quotes are included in the value passed to
Compose.
Keep in mind that the order of files in the list is significant in determining the value assigned to a
variable that shows up more than once. The files in the list are processed from the top down. For the
same variable specified in file a.env and assigned a different value in file b.env, if b.env is listed
below (after), then the value from b.env stands. For example, given the following declaration
in docker-compose.yml:
services:
  some-service:
    env_file:
      - a.env
      - b.env

And the following files:

# a.env
VAR=1

and

# b.env
VAR=hello

$VAR is hello.
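The top-down precedence can be sketched by parsing each file in order and letting later assignments win; both helper names below are made up for illustration, not part of Compose:

```python
def parse_env_lines(lines):
    """Parse VAR=VAL lines, ignoring blank lines and # comments."""
    env = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value  # the value is used as-is, quotes included
    return env

def merge_env_files(files):
    """Later files override earlier ones, as with the env_file list."""
    merged = {}
    for lines in files:
        merged.update(parse_env_lines(lines))
    return merged

a_env = ["# a.env", "VAR=1"]
b_env = ["# b.env", "VAR=hello"]
print(merge_env_files([a_env, b_env]))  # {'VAR': 'hello'}
```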

environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true,
false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the
YAML parser.

Environment variables with only a key are resolved to their values on the machine Compose is
running on, which can be helpful for secret or host-specific values.

environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:
environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
Note: If your service specifies a build option, variables defined in environment are not automatically
visible during the build. Use the args sub-option of build to define build-time environment variables.
expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked
services. Only the internal port can be specified.

expose:
  - "3000"
  - "8000"

external_links
Link to containers started outside this docker-compose.yml or even outside of Compose, especially
for containers that provide shared or common services. external_links follow semantics similar to
the legacy option links when specifying both the container name and the link alias
(CONTAINER:ALIAS).
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql

Notes:

If you’re using the version 2 or above file format, the externally-created containers must be
connected to at least one of the same networks as the service that is linking to them. Links are a
legacy option. We recommend using networks instead.

This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"

An entry with the IP address and hostname is created in /etc/hosts inside containers for this
service, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost
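The mapping from extra_hosts entries to /etc/hosts lines can be sketched directly — each "hostname:ip" entry becomes an "ip hostname" line:

```python
extra_hosts = ["somehost:162.242.195.82", "otherhost:50.31.209.229"]

hosts_lines = []
for entry in extra_hosts:
    hostname, _, ip = entry.partition(":")  # split on the first colon
    hosts_lines.append(f"{ip} {hostname}")

print("\n".join(hosts_lines))
```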

healthcheck
Version 2.1 file format and up.

Configure a check that’s run to determine whether or not containers for this service are “healthy”.
See the docs for the HEALTHCHECK Dockerfile instruction for details on how healthchecks work.

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s

interval, timeout and start_period are specified as durations.


Note: start_period is only supported for v3.4 and higher of the compose file format.
test must be either a string or a list. If it’s a list, the first item must be either NONE, CMD or CMD-SHELL.
If it’s a string, it’s equivalent to specifying CMD-SHELL followed by that string.
# Hit the local web app
test: ["CMD", "curl", "-f", "http://localhost"]

As above, but wrapped in /bin/sh. Both forms below are equivalent.


test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]
test: curl -f https://localhost || exit 1

To disable any default healthcheck set by the image, you can use disable: true. This is equivalent
to specifying test: ["NONE"].
healthcheck:
  disable: true
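The retries setting means a container only turns unhealthy after that many consecutive probe failures; a single passing probe resets the count. A simulation sketch of that rule (not how the engine is implemented):

```python
def health_status(probe_results, retries=3):
    """Return 'unhealthy' once `retries` consecutive probes fail."""
    consecutive = 0
    for ok in probe_results:
        consecutive = 0 if ok else consecutive + 1  # a pass resets the streak
        if consecutive >= retries:
            return "unhealthy"
    return "healthy"

print(health_status([True, False, False, True]))   # recovers before 3 failures
print(health_status([True, False, False, False]))  # three failures in a row
```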

image
Specify the image to start the container from. Can either be a repository/tag or a partial image ID.

image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd

If the image does not exist, Compose attempts to pull it, unless you have also specified build, in
which case it builds it using the specified options and tags it with the specified tag.

init
Added in version 3.7 file format.
Run an init inside the container that forwards signals and reaps processes. Set this option to true to
enable this feature for the service.
version: "3.7"
services:
  web:
    image: alpine:latest
    init: true

The default init binary that is used is Tini, and is installed in /usr/libexec/docker-init on the
daemon host. You can configure the daemon to use a custom init binary through the init-
path configuration option.

isolation
Specify a container’s isolation technology. On Linux, the only supported value is default. On
Windows, acceptable values are default, process and hyperv. Refer to the Docker Engine docs for
details.
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.

labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""
labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"

links
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you
absolutely need to continue using it, we recommend that you use user-defined networks to facilitate
communication between two containers instead of using --link. One feature that user-defined
networks do not support that you can do with --link is sharing environment variables between
containers. However, you can use other mechanisms such as volumes to share environment
variables between containers in a more controlled way.
Link to containers in another service. Either specify both the service name and a link alias
(SERVICE:ALIAS), or just the service name.
web:
  links:
    - db
    - db:database
    - redis

Containers for the linked service are reachable at a hostname identical to the alias, or the service
name if no alias was specified.

Links are not required to enable services to communicate - by default, any service can reach any
other service at that service’s name. (See also, the Links topic in Networking in Compose.)

Links also express dependency between services in the same way as depends_on, so they
determine the order of service startup.

Notes

 If you define both links and networks, services with links between them must share at least
one network in common to communicate.

 This option is ignored when deploying a stack in swarm mode with a (version 3) Compose
file.
logging
Logging configuration for the service.

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"

The driver name specifies a logging driver for the service’s containers, as with the --log-
driver option for docker run (documented here).

The default value is json-file.

driver: "json-file"
driver: "syslog"
driver: "none"

Note: Only the json-file and journald drivers make the logs available directly from docker-compose
up and docker-compose logs. Using any other driver does not print any logs.
Specify logging options for the logging driver with the options key, as with the --log-optoption
for docker run.
Logging options are key-value pairs. An example of syslog options:
driver: "syslog"
options:
  syslog-address: "tcp://192.168.0.42:123"

The default driver json-file has options to limit the amount of logs stored. To do this, use a key-value
pair for maximum storage size and maximum number of files:

options:
  max-size: "200k"
  max-file: "10"

The example shown above would store log files until they reach a max-size of 200kB, and then
rotate them. The amount of individual log files stored is specified by the max-file value. As logs grow
beyond the max limits, older log files are removed to allow storage of new logs.
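A sketch of how a size string such as "200k" maps to a byte count; the suffix table is an assumption modeled on common binary size units, not Docker's own parser:

```python
SIZE_UNITS = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}

def parse_size(text: str) -> int:
    """Convert '200k' or '10m' into a byte count."""
    text = text.strip().lower()
    if text[-1] in SIZE_UNITS:
        return int(text[:-1]) * SIZE_UNITS[text[-1]]
    return int(text)  # a bare number is already bytes

print(parse_size("200k"))  # 204800
print(parse_size("10m"))   # 10485760
```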
Here is an example docker-compose.yml file that limits logging storage:
version: "3.7"
services:
  some-service:
    image: some-service
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"

Logging options available depend on which logging driver you use

The above example for controlling log files and sizes uses options specific to the json-file driver.
These particular options are not available on other logging drivers. For a full list of supported logging
drivers and their options, see logging drivers.
network_mode
Network mode. Use the same values as the docker client --network parameter, plus the special
form service:[service name].
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"

Notes

 This option is ignored when deploying a stack in swarm mode with a (version 3) Compose
file.
 network_mode: "host" cannot be mixed with links.

networks
Networks to join, referencing entries under the top-level networks key.
services:
  some-service:
    networks:
      - some-network
      - other-network
ALIASES

Aliases (alternative hostnames) for this service on the network. Other containers on the same
network can use either the service name or this alias to connect to one of the service’s containers.

Since aliases is network-scoped, the same service can have different aliases on different networks.
Note: A network-wide alias can be shared by multiple containers, and even by multiple services. If it
is, then exactly which container the name resolves to is not guaranteed.

The general format is shown here.

services:
  some-service:
    networks:
      some-network:
        aliases:
          - alias1
          - alias3
      other-network:
        aliases:
          - alias2

In the example below, three services are provided (web, worker, and db), along with two networks
(new and legacy). The db service is reachable at the hostname db or database on the new network,
and at db or mysql on the legacy network.
version: "3.7"

services:
  web:
    image: "nginx:alpine"
    networks:
      - new

  worker:
    image: "my-worker-image:latest"
    networks:
      - legacy

  db:
    image: mysql
    networks:
      new:
        aliases:
          - database
      legacy:
        aliases:
          - mysql

networks:
  new:
  legacy:

IPV4_ADDRESS, IPV6_ADDRESS

Specify a static IP address for containers for this service when joining the network.

The corresponding network configuration in the top-level networks section must have an ipam block
with subnet configurations covering each static address.
If IPv6 addressing is desired, the enable_ipv6 option must be set, and you must use a version 2.x
Compose file. IPv6 options do not currently work in swarm mode.

An example:

version: "3.7"

services:
  app:
    image: nginx:alpine
    networks:
      app_net:
        ipv4_address: 172.16.238.10
        ipv6_address: 2001:3984:3989::10

networks:
  app_net:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
        - subnet: "2001:3984:3989::/64"

pid
pid: "host"

Sets the PID mode to the host PID mode. This turns on sharing of the PID address space between
the container and the host operating system. Containers launched with this flag can access and
manipulate other containers in the bare-metal machine’s namespace and vice versa.

ports
Expose ports.

Note: Port mapping is incompatible with network_mode: host


SHORT SYNTAX
Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral host port is
chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results
when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a
base-60 value. For this reason, we recommend always explicitly specifying your port mappings as
strings.
ports:
  - "3000"
  - "3000-3005"
  - "8000:8000"
  - "9090-9091:8080-8081"
  - "49100:22"
  - "127.0.0.1:8001:8001"
  - "127.0.0.1:5000-5010:5000-5010"
  - "6060:6060/udp"
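Parsing the short-syntax strings can be sketched by splitting from the right, so an optional host IP stays intact; this is an illustrative parser (IPv4 only, no ranges), not Compose's own:

```python
def parse_port(spec: str) -> dict:
    """Split 'ip:host:container', 'host:container', or 'container' specs."""
    spec, _, proto = spec.partition("/")  # peel off an optional /udp suffix
    parts = spec.rsplit(":", 2)
    if len(parts) == 3:
        ip, host, container = parts
    elif len(parts) == 2:
        ip, (host, container) = None, parts
    else:
        ip, host, container = None, None, parts[0]
    return {"ip": ip, "host": host, "container": container,
            "protocol": proto or "tcp"}

print(parse_port("127.0.0.1:8001:8001"))
print(parse_port("6060:6060/udp"))
print(parse_port("3000"))  # ephemeral host port: no host part
```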
LONG SYNTAX

The long form syntax allows the configuration of additional fields that can’t be expressed in the short
form.

 target: the port inside the container
 published: the publicly exposed port
 protocol: the port protocol (tcp or udp)
 mode: host for publishing a host port on each node, or ingress for a swarm mode port to be
load balanced.

ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host

Note: The long syntax is new in v3.2


restart
no is the default restart policy, and it does not restart a container under any circumstance.
When always is specified, the container always restarts. The on-failure policy restarts a container if
the exit code indicates an on-failure error. The unless-stopped policy always restarts a container,
except when the container has been explicitly stopped.
restart: "no"
restart: always
restart: on-failure
restart: unless-stopped

Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Use restart_policy instead.
secrets
Grant access to secrets on a per-service basis using the per-service secrets configuration. Two
different syntax variants are supported.
Note: The secret must already exist or be defined in the top-level secrets configuration of this stack
file, or stack deployment fails.

For more information on secrets, see secrets.


SHORT SYNTAX
The short syntax variant only specifies the secret name. This grants the container access to the
secret and mounts it at /run/secrets/<secret_name> within the container. The source name and
destination mountpoint are both set to the secret name.
The following example uses the short syntax to grant the redis service access to
the my_secret and my_other_secret secrets. The value of my_secret is set to the contents of the
file ./my_secret.txt, and my_other_secret is defined as an external resource, which means that it
has already been defined in Docker, either by running the docker secret create command or by
another stack deployment. If the external secret does not exist, the stack deployment fails with
a secret not found error.
version: "3.7"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    secrets:
      - my_secret
      - my_other_secret
secrets:
  my_secret:
    file: ./my_secret.txt
  my_other_secret:
    external: true

LONG SYNTAX

The long syntax provides more granularity in how the secret is created within the service’s task
containers.

 source: The name of the secret as it exists in Docker.
 target: The name of the file to be mounted in /run/secrets/ in the service’s task containers.
Defaults to source if not specified.
 uid and gid: The numeric UID or GID that owns the file within /run/secrets/ in the service’s
task containers. Both default to 0 if not specified.
 mode: The permissions for the file to be mounted in /run/secrets/ in the service’s task
containers, in octal notation. For instance, 0444 represents world-readable. The default in
Docker 1.13.1 is 0000, but will be 0444 in newer versions. Secrets cannot be writable because
they are mounted in a temporary filesystem, so if you set the writable bit, it is ignored. The
executable bit can be set. If you aren’t familiar with UNIX file permission modes, you may
find this permissions calculator useful.

The following example sets the name of my_secret to redis_secret within the container, sets the
mode to 0440 (group-readable) and sets the user and group to 103. The redis service does not have
access to the my_other_secret secret.
version: "3.7"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    secrets:
      - source: my_secret
        target: redis_secret
        uid: '103'
        gid: '103'
        mode: 0440
secrets:
  my_secret:
    file: ./my_secret.txt
  my_other_secret:
    external: true

You can grant a service access to multiple secrets and you can mix long and short syntax. Defining
a secret does not imply granting a service access to it.

security_opt
Override the default labeling scheme for each container.

security_opt:
  - label:user:USER
  - label:role:ROLE

Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
stop_grace_period
Specify how long to wait when attempting to stop a container if it doesn’t handle SIGTERM (or
whatever stop signal has been specified with stop_signal), before sending SIGKILL. Specified as
a duration.
stop_grace_period: 1s
stop_grace_period: 1m30s

By default, stop waits 10 seconds for the container to exit before sending SIGKILL.
stop_signal
Sets an alternative signal to stop the container. By default stop uses SIGTERM. Setting an
alternative signal using stop_signal causes stop to send that signal instead.
stop_signal: SIGUSR1

sysctls
Kernel parameters to set in the container. You can use either an array or a dictionary.

sysctls:
  net.core.somaxconn: 1024
  net.ipv4.tcp_syncookies: 0
sysctls:
  - net.core.somaxconn=1024
  - net.ipv4.tcp_syncookies=0

Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
tmpfs
Version 2 file format and up.

Mount a temporary file system inside the container. Can be a single value or a list.

tmpfs: /run
tmpfs:
  - /run
  - /tmp

Note: This option is ignored when deploying a stack in swarm mode with a (version 3-3.5) Compose
file.
Version 3.6 file format and up.

Mount a temporary file system inside the container. The size parameter specifies the size of the
tmpfs mount in bytes. Unlimited by default.

- type: tmpfs
  target: /app
  tmpfs:
    size: 1000

ulimits
Override the default ulimits for a container. You can either specify a single limit as an integer or
soft/hard limits as a mapping.

ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000

userns_mode
userns_mode: "host"

Disables the user namespace for this service, if the Docker daemon is configured with user
namespaces. See dockerd for more information.

Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
volumes
Mount host paths or named volumes, specified as sub-options to a service.

You can mount a host path as part of a definition for a single service, and there is no need to define
it in the top level volumes key.
But, if you want to reuse a volume across multiple services, then define a named volume in the top-
level volumes key. Use named volumes with services, swarms, and stack files.
Note: The top-level volumes key defines a named volume and references it from each
service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
See Use volumes and Volume Plugins for general information on volumes.
This example shows a named volume (mydata) being used by the web service, and a bind mount
defined for a single service (first path under db service volumes). The db service also uses a named
volume called dbdata (second path under db service volumes), but defines it using the old string
format for mounting a named volume. Named volumes must be listed under the top-
level volumes key, as shown.
version: "3.7"
services:
web:
image: nginx:alpine
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static

db:
image: postgres:latest
volumes:
- "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
- "dbdata:/var/lib/postgresql/data"

volumes:
mydata:
dbdata:

Note: See Use volumes and Volume Plugins for general information on volumes.
SHORT SYNTAX
Optionally specify a path on the host machine (HOST:CONTAINER), or an access mode
(HOST:CONTAINER:ro).
You can mount a relative path on the host, which is expanded relative to the directory of the
Compose configuration file being used. Relative paths should always begin with . or ..
volumes:
# Just specify a path and let the Engine create a volume
- /var/lib/mysql

# Specify an absolute path mapping
- /opt/data:/var/lib/mysql

# Path on the host, relative to the Compose file
- ./cache:/tmp/cache

# User-relative path
- ~/configs:/etc/configs/:ro

# Named volume
- datavolume:/var/lib/mysql

LONG SYNTAX

The long form syntax allows the configuration of additional fields that can’t be expressed in the short
form.

 type: the mount type volume, bind, tmpfs or npipe
 source: the source of the mount, a path on the host for a bind mount, or the name of a
volume defined in the top-level volumes key. Not applicable for a tmpfs mount.
 target: the path in the container where the volume is mounted
 read_only: flag to set the volume as read-only
 bind: configure additional bind options
o propagation: the propagation mode used for the bind
 volume: configure additional volume options
o nocopy: flag to disable copying of data from a container when a volume is created
 tmpfs: configure additional tmpfs options
o size: the size for the tmpfs mount in bytes
 consistency: the consistency requirements of the mount, one of consistent (host and
container have identical view), cached (read cache, host view is authoritative)
or delegated (read-write cache, container’s view is authoritative)

version: "3.7"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static

networks:
webnet:

volumes:
mydata:

Note: The long syntax is new in v3.2


VOLUMES FOR SERVICES, SWARMS, AND STACK FILES
When working with services, swarms, and docker-stack.yml files, keep in mind that the tasks
(containers) backing a service can be deployed on any node in a swarm, and this may be a different
node each time the service is updated.

If you do not use named volumes with specified sources, Docker creates an anonymous
volume for each task backing a service. Anonymous volumes do not persist after the associated
containers are removed.

If you want your data to persist, use a named volume and a volume driver that is multi-host aware,
so that the data is accessible from any node. Or, set constraints on the service so that its tasks are
deployed on a node that has the volume present.

As an example, the docker-stack.yml file for the votingapp sample in Docker Labs defines a service
called db that runs a postgres database. It is configured as a named volume to persist the data on
the swarm, and is constrained to run only on manager nodes. Here is the relevant snippet from that
file:
version: "3.7"
services:
db:
image: postgres:9.4
volumes:
- db-data:/var/lib/postgresql/data
networks:
- backend
deploy:
placement:
constraints: [node.role == manager]

CACHING OPTIONS FOR VOLUME MOUNTS (DOCKER DESKTOP FOR MAC)


On Docker 17.04 CE Edge and up, including 17.06 CE Edge and Stable, you can configure
container-and-host consistency requirements for bind-mounted directories in Compose files to allow
for better performance on read/write of volume mounts. These options address issues specific
to osxfs file sharing, and therefore are only applicable on Docker Desktop for Mac.

The flags are:

 consistent: Full consistency. The container runtime and the host maintain an identical view
of the mount at all times. This is the default.
 cached: The host’s view of the mount is authoritative. There may be delays before updates
made on the host are visible within a container.
 delegated: The container runtime’s view of the mount is authoritative. There may be delays
before updates made in a container are visible on the host.
Here is an example of configuring a volume as cached:
version: "3.7"
services:
php:
image: php:7.1-fpm
ports:
- "9000"
volumes:
- .:/var/www/project:cached

Full detail on these flags, the problems they solve, and their docker run counterparts is in the
Docker Desktop for Mac topic Performance tuning for volume mounts (shared filesystems).
domainname, hostname, ipc, mac_address, privileged,
read_only, shm_size, stdin_open, tty, user, working_dir
Each of these is a single value, analogous to its docker run counterpart. Note that mac_address is a
legacy option.
user: postgresql
working_dir: /code

domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43

privileged: true

read_only: true
shm_size: 64M
stdin_open: true
tty: true

Specifying durations
Some configuration options, such as the interval and timeout sub-options for healthcheck, accept a
duration as a string in a format that looks like this:
2.5s
10s
1m30s
2h32m
5h34m56s
The supported units are us, ms, s, m and h.
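As a hedged illustration, these duration strings might appear in a service's healthcheck like this (the service name, image, and test command are illustrative, not from this reference):

```yaml
version: "3.7"
services:
  web:                  # illustrative service name
    image: nginx:alpine
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider http://localhost || exit 1"]
      interval: 1m30s   # run the check every 90 seconds
      timeout: 10s      # fail an individual check after 10 seconds
      start_period: 40s # grace period before failures count (format 3.4+)
```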

Specifying byte values


Some configuration options, such as the shm_size sub-option for build, accept a byte value as a
string in a format that looks like this:
2b
1024kb
2048k
300m
1gb

The supported units are b, k, m and g, and their alternative notation kb, mb and gb. Decimal values are
not supported at this time.
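For instance, a byte-value string could size a build's shared memory; a minimal sketch, assuming a Dockerfile in the current directory (the service name is illustrative):

```yaml
version: "3.7"
services:
  app:                 # illustrative service name
    build:
      context: .
      shm_size: '2gb'  # string byte value; an integer such as 10000000 (bytes) also works
```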

Volume configuration reference


While it is possible to declare volumes on the fly as part of the service declaration, this section
allows you to create named volumes (without relying on volumes_from) that can be reused across
multiple services, and are easily retrieved and inspected using the docker command line or API. See
the docker volume subcommand documentation for more information.

See Use volumes and Volume Plugins for general information on volumes.

Here’s an example of a two-service setup where a database’s data directory is shared with another
service as a volume so that it can be periodically backed up:

version: "3.7"

services:
db:
image: db
volumes:
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
- data-volume:/var/lib/backup/data

volumes:
data-volume:

An entry under the top-level volumes key can be empty, in which case it uses the default driver
configured by the Engine (in most cases, this is the local driver). Optionally, you can configure it
with the following keys:
driver
Specify which volume driver should be used for this volume. Defaults to whatever driver the Docker
Engine has been configured to use, which in most cases is local. If the driver is not available, the
Engine returns an error when docker-compose up tries to create the volume.
driver: foobar

driver_opts
Specify a list of options as key-value pairs to pass to the driver for this volume. Those options are
driver-dependent - consult the driver’s documentation for more information. Optional.

volumes:
example:
driver_opts:
type: "nfs"
o: "addr=10.40.0.199,nolock,soft,rw"
device: ":/docker/example"

external
If set to true, specifies that this volume has been created outside of Compose. docker-compose
up does not attempt to create it, and raises an error if it doesn’t exist.
For version 3.3 and below of the format, external cannot be used in conjunction with other volume
configuration keys (driver, driver_opts, labels). This limitation no longer exists for version 3.4 and
above.
In the example below, instead of attempting to create a volume called [projectname]_data,
Compose looks for an existing volume simply called data and mounts it into the db service’s
containers.
version: "3.7"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data

volumes:
data:
external: true

external.name was deprecated in the version 3.4 file format; use name instead.

You can also specify the name of the volume separately from the name used to refer to it within the
Compose file:

volumes:
data:
external:
name: actual-name-of-volume

External volumes are always created with docker stack deploy

External volumes that do not exist are created if you use docker stack deploy to launch the app
in swarm mode (instead of docker compose up). In swarm mode, a volume is automatically created
when it is defined by a service. As service tasks are scheduled on new nodes, swarmkit creates the
volume on the local node. To learn more, see moby/moby#29976.
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.

labels:
com.example.description: "Database volume"
com.example.department: "IT/Ops"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Database volume"
- "com.example.department=IT/Ops"
- "com.example.label-with-empty-value"

name
Added in version 3.4 file format

Set a custom name for this volume. The name field can be used to reference volumes that contain
special characters. The name is used as is and will not be scoped with the stack name.

version: "3.7"
volumes:
data:
name: my-app-data

It can also be used in conjunction with the external property:


version: "3.7"
volumes:
data:
external: true
name: my-app-data

Network configuration reference


The top-level networks key lets you specify networks to be created.

 For a full explanation of Compose’s use of Docker networking features and all network driver
options, see the Networking guide.

 For Docker Labs tutorials on networking, start with Designing Scalable, Portable Docker
Container Networks

driver
Specify which driver should be used for this network.
The default driver depends on how the Docker Engine you’re using is configured, but in most
instances it is bridge on a single host and overlay on a Swarm.

The Docker Engine returns an error if the driver is not available.

driver: overlay

BRIDGE
Docker defaults to using a bridge network on a single host. For examples of how to work with bridge
networks, see the Docker Labs tutorial on Bridge networking.
OVERLAY
The overlay driver creates a named network across multiple nodes in a swarm.
 For a working example of how to build and use an overlay network with a service in swarm
mode, see the Docker Labs tutorial on Overlay networking and service discovery.

 For an in-depth look at how it works under the hood, see the networking concepts lab on
the Overlay Driver Network Architecture.
HOST OR NONE
Use the host’s networking stack, or no networking. Equivalent to docker run --net=host or docker
run --net=none. Only used if you use docker stack commands. If you use the docker-
compose command, use network_mode instead.

If you want to use a particular network during a build, use the network key under build, as shown in
the second YAML example below.

The syntax for using built-in networks such as host and none is a little different. Define an external
network with the name host or none (that Docker has already created automatically) and an alias that
Compose can use (hostnet or nonet in the following examples), then grant the service access to that
network using the alias.
version: "3.7"
services:
web:
networks:
hostnet: {}

networks:
hostnet:
external: true
name: host
services:
web:
...
build:
...
network: host
context: .
...
services:
web:
...
networks:
nonet: {}

networks:
nonet:
external: true
name: none

driver_opts
Specify a list of options as key-value pairs to pass to the driver for this network. Those options are
driver-dependent - consult the driver’s documentation for more information. Optional.

driver_opts:
foo: "bar"
baz: 1

attachable
Note: Only supported for v3.2 and higher.
Only used when the driver is set to overlay. If set to true, then standalone containers can attach to
this network, in addition to services. If a standalone container attaches to an overlay network, it can
communicate with services and standalone containers that are also attached to the overlay network
from other Docker daemons.
networks:
mynet1:
driver: overlay
attachable: true

enable_ipv6
Enable IPv6 networking on this network.

Not supported in Compose File version 3

enable_ipv6 requires you to use a version 2 Compose file, as this directive is not yet supported in
Swarm mode.
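Since this directive needs a version 2 file, a minimal sketch might look like the following (the network name is illustrative, and 2001:db8::/64 is an IPv6 documentation-range subnet chosen for the example):

```yaml
version: "2.4"
networks:
  app_net:                      # illustrative network name
    enable_ipv6: true
    ipam:
      config:
        - subnet: 2001:db8::/64 # illustrative IPv6 subnet
```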
ipam
Specify custom IPAM config. This is an object with several properties, each of which is optional:

 driver: Custom IPAM driver, instead of the default.
 config: A list with zero or more config blocks, each containing any of the following keys:
o subnet: Subnet in CIDR format that represents a network segment

A full example:

ipam:
driver: default
config:
- subnet: 172.28.0.0/16

Note: Additional IPAM configurations, such as gateway, are only honored for version 2 at the
moment.
internal
By default, Docker also connects a bridge network to it to provide external connectivity. If you want
to create an externally isolated overlay network, you can set this option to true.
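A minimal sketch of an isolated network (the network name is illustrative):

```yaml
networks:
  backend:          # illustrative network name
    driver: overlay
    internal: true  # no bridge to the outside world
```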
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
com.example.description: "Financial transaction network"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Financial transaction network"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"

external
If set to true, specifies that this network has been created outside of Compose. docker-compose
up does not attempt to create it, and raises an error if it doesn’t exist.
For version 3.3 and below of the format, external cannot be used in conjunction with other network
configuration keys (driver, driver_opts, ipam, internal). This limitation no longer exists for version
3.4 and above.
In the example below, proxy is the gateway to the outside world. Instead of attempting to create a
network called [projectname]_outside, Compose looks for an existing network simply
called outside and connects the proxy service’s containers to it.
version: "3.7"

services:
proxy:
build: ./proxy
networks:
- outside
- default
app:
build: ./app
networks:
- default

networks:
outside:
external: true

external.name was deprecated in the version 3.5 file format; use name instead.

You can also specify the name of the network separately from the name used to refer to it within the
Compose file:

version: "3.7"
networks:
outside:
external:
name: actual-name-of-network

name
Added in version 3.5 file format

Set a custom name for this network. The name field can be used to reference networks which
contain special characters. The name is used as is and will not be scoped with the stack name.

version: "3.7"
networks:
network1:
name: my-app-net

It can also be used in conjunction with the external property:


version: "3.7"
networks:
network1:
external: true
name: my-app-net

configs configuration reference


The top-level configs declaration defines or references configs that can be granted to the services in
this stack. The source of the config is either file or external.

 file: The config is created with the contents of the file at the specified path.
 external: If set to true, specifies that this config has already been created. Docker does not
attempt to create it, and if it does not exist, a config not found error occurs.
 name: The name of the config object in Docker. This field can be used to reference configs
that contain special characters. The name is used as is and will not be scoped with the stack
name. Introduced in version 3.5 file format.

In this example, my_first_config is created (as <stack_name>_my_first_config) when the stack is
deployed, and my_second_config already exists in Docker.
configs:
my_first_config:
file: ./config_data
my_second_config:
external: true

Another variant for external configs is when the name of the config in Docker is different from the
name that exists within the service. The following example modifies the previous one to use the
external config called redis_config.
configs:
my_first_config:
file: ./config_data
my_second_config:
external:
name: redis_config

You still need to grant access to the config to each service in the stack.

secrets configuration reference


The top-level secrets declaration defines or references secrets that can be granted to the services in
this stack. The source of the secret is either file or external.

 file: The secret is created with the contents of the file at the specified path.
 external: If set to true, specifies that this secret has already been created. Docker does not
attempt to create it, and if it does not exist, a secret not found error occurs.
 name: The name of the secret object in Docker. This field can be used to reference secrets
that contain special characters. The name is used as is and will not be scoped with the stack
name. Introduced in version 3.5 file format.
In this example, my_first_secret is created as <stack_name>_my_first_secret when the stack is
deployed, and my_second_secret already exists in Docker.
secrets:
my_first_secret:
file: ./secret_data
my_second_secret:
external: true

Another variant for external secrets is when the name of the secret in Docker is different from the
name that exists within the service. The following example modifies the previous one to use the
external secret called redis_secret.
Compose File v3.5 and above
secrets:
my_first_secret:
file: ./secret_data
my_second_secret:
external: true
name: redis_secret

Compose File v3.4 and under


my_second_secret:
external:
name: redis_secret

You still need to grant access to the secrets to each service in the stack.

Variable substitution
Your configuration options can contain environment variables. Compose uses the variable values
from the shell environment in which docker-compose is run. For example, suppose the shell
contains POSTGRES_VERSION=9.3 and you supply this configuration:
db:
image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for
the POSTGRES_VERSION environment variable in the shell and substitutes its value in. For this example,
Compose resolves the image to postgres:9.3 before running the configuration.
If an environment variable is not set, Compose substitutes with an empty string. In the example
above, if POSTGRES_VERSION is not set, the value for the image option is postgres:.
You can set default values for environment variables using a .env file, which Compose automatically
looks for. Values set in the shell environment override those set in the .env file.
Important: The .env file feature only works when you use the docker-compose up command and
does not work with docker stack deploy.
Both $VARIABLE and ${VARIABLE} syntax are supported. Additionally when using the 2.1 file format, it
is possible to provide inline default values using typical shell syntax:

 ${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in the environment.
 ${VARIABLE-default} evaluates to default only if VARIABLE is unset in the environment.

Similarly, the following syntax allows you to specify mandatory variables:

 ${VARIABLE:?err} exits with an error message containing err if VARIABLE is unset or empty in
the environment.
 ${VARIABLE?err} exits with an error message containing err if VARIABLE is unset in the
environment.
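As a hedged sketch, the default and mandatory forms could be combined in a service definition like this (the service, image, and variable names are illustrative):

```yaml
version: "2.1"
services:
  web:                              # illustrative service name
    image: "webapp:${TAG-latest}"   # falls back to "latest" if TAG is unset
    environment:
      DB_PASSWORD: "${DB_PASSWORD:?DB_PASSWORD must be set}"  # errors out if unset or empty
```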

Other extended shell-style features, such as ${VARIABLE/foo/bar}, are not supported.


You can use a $$ (double-dollar sign) when your configuration needs a literal dollar sign. This also
prevents Compose from interpolating a value, so a $$ allows you to refer to environment variables
that you don’t want processed by Compose.
web:
build: .
command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE"

If you forget and use a single dollar sign ($), Compose interprets the value as an environment
variable and warns you:

The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty string.

Extension fields
Added in version 3.4 file format.
It is possible to re-use configuration fragments using extension fields. Those special fields can be of
any format as long as they are located at the root of your Compose file and their names start with
the x- character sequence.
Note

Starting with the 3.7 format (for the 3.x series) and 2.4 format (for the 2.x series), extension fields are
also allowed at the root of service, volume, network, config and secret definitions.
version: '3.4'
x-custom:
items:
- a
- b
options:
max-size: '12m'
name: "custom"

The contents of those fields are ignored by Compose, but they can be inserted in your resource
definitions using YAML anchors. For example, if you want several of your services to use the same
logging configuration:

logging:
options:
max-size: '12m'
max-file: '5'
driver: json-file

You may write your Compose file as follows:

version: '3.4'
x-logging:
&default-logging
options:
max-size: '12m'
max-file: '5'
driver: json-file
services:
web:
image: myapp/web:latest
logging: *default-logging
db:
image: mysql:latest
logging: *default-logging

It is also possible to partially override values in extension fields using the YAML merge type. For
example:

version: '3.4'
x-volumes:
&default-volume
driver: foobar-storage

services:
web:
image: myapp/web:latest
volumes: ["vol1", "vol2", "vol3"]
volumes:
vol1: *default-volume
vol2:
<< : *default-volume
name: volume02
vol3:
<< : *default-volume
driver: default
name: volume-local

Compose file version 2 reference


Reference and guidelines
These topics describe version 2 of the Compose file format.

Compose and Docker compatibility matrix


There are several versions of the Compose file format – 1, 2, 2.x, and 3.x. The table below is a quick
look. For full details on what each version includes and how to upgrade, see About versions and
upgrading.

This table shows which Compose file versions support specific Docker releases.

Compose file format    Docker Engine release
3.7                    18.06.0+
3.6                    18.02.0+
3.5                    17.12.0+
3.4                    17.09.0+
3.3                    17.06.0+
3.2                    17.04.0+
3.1                    1.13.1+
3.0                    1.13.0+
2.4                    17.12.0+
2.3                    17.06.0+
2.2                    1.13.0+
2.1                    1.12.0+
2.0                    1.10.0+
1.0                    1.9.1+
In addition to the Compose file format versions shown in the table, Compose itself is on a release
schedule, as shown in Compose releases, but file format versions do not necessarily increment with
each release. For example, Compose file format 3.0 was first introduced in Compose release 1.10.0,
and versioned gradually in subsequent releases.

Service configuration reference


The Compose file is a YAML file defining services, networks and volumes. The default path for a
Compose file is ./docker-compose.yml.
Tip: You can use either a .yml or .yaml extension for this file. They both work.
A container definition contains configuration which is applied to each container started for that
service, much like passing command-line parameters to docker run. Likewise, network and volume
definitions are analogous to docker network create and docker volume create.
As with docker run, options specified in the Dockerfile, such as CMD, EXPOSE, VOLUME, ENV, are
respected by default - you don’t need to specify them again in docker-compose.yml.
You can use environment variables in configuration values with a Bash-like ${VARIABLE} syntax -
see variable substitution for full details.

This section contains a list of all configuration options supported by a service definition in version 2.

blkio_config
A set of configuration options to set block IO limits for this service.

version: "2.4"
services:
foo:
image: busybox
blkio_config:
weight: 300
weight_device:
- path: /dev/sda
weight: 400
device_read_bps:
- path: /dev/sdb
rate: '12mb'
device_read_iops:
- path: /dev/sdb
rate: 120
device_write_bps:
- path: /dev/sdb
rate: '1024k'
device_write_iops:
- path: /dev/sdb
rate: 30

DEVICE_READ_BPS, DEVICE_WRITE_BPS

Set a limit in bytes per second for read / write operations on a given device. Each item in the list
must have two keys:

 path, defining the symbolic path to the affected device
 rate, either as an integer value representing the number of bytes or as a string expressing
a byte value.

DEVICE_READ_IOPS, DEVICE_WRITE_IOPS

Set a limit in operations per second for read / write operations on a given device. Each item in the list
must have two keys:

 path, defining the symbolic path to the affected device
 rate, as an integer value representing the permitted number of operations per second.

WEIGHT

Modify the proportion of bandwidth allocated to this service relative to other services. Takes an
integer value between 10 and 1000, with 500 being the default.
WEIGHT_DEVICE

Fine-tune bandwidth allocation by device. Each item in the list must have two keys:

 path, defining the symbolic path to the affected device
 weight, an integer value between 10 and 1000

build
Configuration options that are applied at build time.

build can be specified either as a string containing a path to the build context, or an object with the
path specified under context and optionally dockerfile and args.
build: ./dir

build:
context: ./dir
dockerfile: Dockerfile-alternate
args:
buildno: 1

If you specify image as well as build, then Compose names the built image with the name (webapp)
and optional tag specified in image:
build: ./dir
image: webapp:tag

This results in an image named webapp and tagged tag, built from ./dir.
CACHE_FROM
Added in version 2.2 file format

A list of images that the engine uses for cache resolution.

build:
context: .
cache_from:
- alpine:latest
- corp/web_app:3.14

CONTEXT
Version 2 file format and up. In version 1, just use build.

Either a path to a directory containing a Dockerfile, or a url to a git repository.

When the value supplied is a relative path, it is interpreted as relative to the location of the Compose
file. This directory is also the build context that is sent to the Docker daemon.

Compose builds and tags it with a generated name, and uses that image thereafter.

build:
context: ./dir
DOCKERFILE

Alternate Dockerfile.

Compose uses an alternate file to build with. A build path must also be specified.

build:
context: .
dockerfile: Dockerfile-alternate

ARGS
Version 2 file format and up.

Add build arguments, which are environment variables accessible only during the build process.

First, specify the arguments in your Dockerfile:

ARG buildno
ARG gitcommithash

RUN echo "Build number: $buildno"
RUN echo "Based on commit: $gitcommithash"

Then specify the arguments under the build key. You can pass a mapping or a list:
build:
context: .
args:
buildno: 1
gitcommithash: cdc3b19

build:
context: .
args:
- buildno=1
- gitcommithash=cdc3b19

Note: In your Dockerfile, if you specify ARG before the FROM instruction, ARG is not available in the
build instructions under FROM. If you need an argument to be available in both places, also specify it
under the FROM instruction. See Understand how ARGS and FROM interact for usage details.
You can omit the value when specifying a build argument, in which case its value at build time is the
value in the environment where Compose is running.

args:
- buildno
- gitcommithash

Note: YAML boolean values (true, false, yes, no, on, off) must be enclosed in quotes, so that the
parser interprets them as strings.
EXTRA_HOSTS
Add hostname mappings at build-time. Use the same values as the docker client --add-host
parameter.

extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"

An entry with the ip address and hostname is created in /etc/hosts inside containers for this build,
for example:
162.242.195.82 somehost
50.31.209.229 otherhost

ISOLATION
Added in version 2.1 file format.
Specify a build’s container isolation technology. On Linux, the only supported value is default. On
Windows, acceptable values are default, process and hyperv. Refer to the Docker Engine docs for
details.
If unspecified, Compose will use the isolation value found in the service’s definition to determine
the value to use for builds.
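A minimal sketch of a Windows build using process isolation (the service name and build context are illustrative):

```yaml
version: "2.4"
services:
  app:                    # illustrative service name
    build:
      context: .
      isolation: process  # Windows-only value; on Linux only "default" is valid
```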
LABELS
Added in version 2.1 file format

Add metadata to the resulting image using Docker labels. You can use either an array or a
dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.

build:
context: .
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""

build:
context: .
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"

NETWORK
Added in version 2.2 file format
Set the network containers connect to for the RUN instructions during build.
build:
context: .
network: host

build:
context: .
network: custom_network_1

SHM_SIZE
Added in version 2.3 file format
Set the size of the /dev/shm partition for this build’s containers. Specify as an integer value
representing the number of bytes or as a string expressing a byte value.
build:
context: .
shm_size: '2gb'
build:
context: .
shm_size: 10000000

TARGET
Added in version 2.3 file format
Build the specified stage as defined inside the Dockerfile. See the multi-stage build docs for details.
build:
context: .
target: prod

cap_add, cap_drop
Add or drop container capabilities. See man 7 capabilities for a full list.
cap_add:
- ALL

cap_drop:
- NET_ADMIN
- SYS_ADMIN

command
Override the default command.

command: bundle exec thin -p 3000

The command can also be a list, in a manner similar to dockerfile:

command: ["bundle", "exec", "thin", "-p", "3000"]

cgroup_parent
Specify an optional parent cgroup for the container.

cgroup_parent: m-executor-abcd

container_name
Specify a custom container name, rather than a generated default name.
container_name: my-web-container

Because Docker container names must be unique, you cannot scale a service beyond 1 container if
you have specified a custom name. Attempting to do so results in an error.

cpu_rt_runtime, cpu_rt_period
Added in version 2.2 file format

Configure CPU allocation parameters using the Docker daemon realtime scheduler.

cpu_rt_runtime: '400ms'
cpu_rt_period: '1400us'

# Integer values will use microseconds as units
cpu_rt_runtime: 95000
cpu_rt_period: 11000

device_cgroup_rules
Added in version 2.3 file format.

Add rules to the cgroup allowed devices list.

device_cgroup_rules:
- 'c 1:3 mr'
- 'a 7:* rmw'

devices
List of device mappings. Uses the same format as the --device docker client create option.
devices:
- "/dev/ttyUSB0:/dev/ttyUSB0"

depends_on
Version 2 file format and up.

Express dependency between services, which has two effects:

 docker-compose up starts services in dependency order. In the following
example, db and redis are started before web.
 docker-compose up SERVICE automatically includes SERVICE’s dependencies. In the following
example, docker-compose up web also creates and starts db and redis.

Simple example:

version: "2.4"
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

Note: depends_on does not wait for db and redis to be “ready” before starting web - only until they
have been started. If you need to wait for a service to be ready, see Controlling startup order for
more on this problem and strategies for solving it.
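The startup-order rule above amounts to a depth-first walk of the depends_on graph. The sketch below illustrates that ordering logic; the start_order helper is hypothetical, not Compose’s actual implementation, and it assumes an acyclic graph (a real implementation would also detect circular dependencies).

```python
def start_order(depends_on):
    """Return service names ordered so every service follows its dependencies."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in depends_on.get(name, []):
            visit(dep)          # start dependencies first
        order.append(name)

    for name in sorted(depends_on):
        visit(name)
    return order

# The dependency graph from the example above:
services = {"web": ["db", "redis"], "db": [], "redis": []}
print(start_order(services))  # ['db', 'redis', 'web']
```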
Added in version 2.1 file format.

A condition of service_healthy indicates that you want a dependency to wait for another container to be “healthy” (as reported by a successful state from its healthcheck) before starting.

Example:

version: "2.4"
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
  redis:
    image: redis
  db:
    image: redis
    healthcheck:
      test: "exit 0"

In the above example, Compose waits for the redis service to be started (legacy behavior) and
the db service to be healthy before starting web.

See the healthcheck section for complementary information.

dns
Custom DNS servers. Can be a single value or a list.

dns: 8.8.8.8

dns:
  - 8.8.8.8
  - 9.9.9.9

dns_opt
List of custom DNS options to be added to the container’s resolv.conf file.
dns_opt:
  - use-vc
  - no-tld-query

dns_search
Custom DNS search domains. Can be a single value or a list.

dns_search: example.com

dns_search:
  - dc1.example.com
  - dc2.example.com

tmpfs
Mount a temporary file system inside the container. Can be a single value or a list.
tmpfs: /run

tmpfs:
  - /run
  - /tmp

entrypoint
Override the default entrypoint.

entrypoint: /code/entrypoint.sh

The entrypoint can also be a list, in a manner similar to the Dockerfile ENTRYPOINT instruction:

entrypoint:
  - php
  - -d
  - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
  - -d
  - memory_limit=-1
  - vendor/bin/phpunit

Note: Setting entrypoint both overrides any default entrypoint set on the service’s image with
the ENTRYPOINT Dockerfile instruction, and clears out any default command on the image - meaning
that if there’s a CMD instruction in the Dockerfile, it is ignored.
env_file
Add environment variables from a file. Can be a single value or a list.

If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to
the directory that file is in.

Environment variables declared in the environment section override these values – this holds true
even if those values are empty or undefined.

env_file: .env

env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env

Compose expects each line in an env file to be in VAR=VAL format. Lines beginning with # are processed as comments and are ignored. Blank lines are also ignored.
# Set Rails/Rack environment
RACK_ENV=development

Note: If your service specifies a build option, variables defined in environment files
are not automatically visible during the build. Use the args sub-option of build to define build-time
environment variables.
The value of VAL is used as-is and not modified at all. For example, if the value is surrounded by quotes (as is often the case with shell variables), the quotes are included in the value passed to Compose.
Keep in mind that the order of files in the list is significant in determining the value assigned to a
variable that shows up more than once. The files in the list are processed from the top down. For the
same variable specified in file a.env and assigned a different value in file b.env, if b.env is listed
below (after), then the value from b.env stands. For example, given the following declaration
in docker-compose.yml:
services:
  some-service:
    env_file:
      - a.env
      - b.env

And the following files:

# a.env
VAR=1

and

# b.env
VAR=hello

$VAR is hello.
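The top-down merge rule can be made concrete with a short sketch. Both helpers below (parse_env, merge_env_files) are illustrative, not Compose’s code; the file contents are inlined as strings.

```python
def parse_env(text):
    """Parse VAR=VAL lines; '#' comments and blank lines are ignored."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, val = line.partition("=")
        env[key] = val  # the value is used as-is, quotes included
    return env

def merge_env_files(*file_texts):
    """Process files top-down; a later file overrides an earlier one."""
    merged = {}
    for text in file_texts:
        merged.update(parse_env(text))
    return merged

a_env = "# a.env\nVAR=1\n"
b_env = "# b.env\nVAR=hello\n"
print(merge_env_files(a_env, b_env))  # {'VAR': 'hello'}
```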
environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true, false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the YAML parser.

Environment variables with only a key are resolved to their values on the machine Compose is
running on, which can be helpful for secret or host-specific values.

environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET

Note: If your service specifies a build option, variables defined in environment are not automatically visible during the build. Use the args sub-option of build to define build-time environment variables.
expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked
services. Only the internal port can be specified.

expose:
  - "3000"
  - "8000"

extends
Extend another service, in the current file or another, optionally overriding configuration.

You can use extends on any service together with other configuration keys. The extends value must be a dictionary defined with a required service and an optional file key.
extends:
  file: common.yml
  service: webapp

The service value is the name of the service being extended, for example web or database. The file value is the location of a Compose configuration file defining that service.
If you omit the file key, Compose looks for the service configuration in the current file. The file value can be an absolute or relative path. If you specify a relative path, Compose treats it as relative to the location of the current file.
You can extend a service that itself extends another, indefinitely. Compose does not support circular references and docker-compose returns an error if it encounters one.
For more on extends, see the extends documentation.
external_links
Link to containers started outside this docker-compose.yml or even outside of Compose, especially
for containers that provide shared or common services. external_links follow semantics similar
to links when specifying both the container name and the link alias (CONTAINER:ALIAS).
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql

Note: For version 2 file format, the externally-created containers must be connected to at least one
of the same networks as the service which is linking to them.
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"

An entry with the IP address and hostname is created in /etc/hosts inside containers for this service, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost

group_add
Specify additional groups (by name or number) which the user inside the container should be a
member of. Groups must exist in both the container and the host system to be added. An example of
where this is useful is when multiple containers (running as different users) need to all read or write
the same file on the host system. That file can be owned by a group shared by all the containers,
and specified in group_add. See the Docker documentation for more details.

A full example:

version: "2.4"
services:
  myservice:
    image: alpine
    group_add:
      - mail

Running id inside the created container shows that the user belongs to the mail group, which would
not have been the case if group_add were not used.
healthcheck
Version 2.1 file format and up.

Configure a check that’s run to determine whether or not containers for this service are “healthy”.
See the docs for the HEALTHCHECK Dockerfile instruction for details on how healthchecks work.

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s

interval, timeout and start_period are specified as durations.


test must be either a string or a list. If it’s a list, the first item must be either NONE, CMD or CMD-SHELL.
If it’s a string, it’s equivalent to specifying CMD-SHELL followed by that string.
# Hit the local web app
test: ["CMD", "curl", "-f", "http://localhost"]

# As above, but wrapped in /bin/sh. Both forms below are equivalent.
test: ["CMD-SHELL", "curl -f http://localhost && echo 'cool, it works'"]
test: curl -f http://localhost && echo 'cool, it works'

To disable any default healthcheck set by the image, you can use disable: true. This is equivalent
to specifying test: ["NONE"].
healthcheck:
  disable: true

Note: The start_period option is a more recent feature and is only available with the 2.3 file format.
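The string/list equivalence described above can be expressed as a small normalization step. The normalize_test helper is hypothetical, a sketch of the rule rather than Compose’s implementation.

```python
def normalize_test(test):
    """Normalize a healthcheck `test` value to its list form."""
    if isinstance(test, str):
        # A plain string is equivalent to CMD-SHELL followed by that string.
        return ["CMD-SHELL", test]
    if test and test[0] in ("NONE", "CMD", "CMD-SHELL"):
        return list(test)
    raise ValueError("list form must start with NONE, CMD or CMD-SHELL")

print(normalize_test("curl -f http://localhost"))
# ['CMD-SHELL', 'curl -f http://localhost']
```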
image
Specify the image to start the container from. Can either be a repository/tag or a partial image ID.

image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd

If the image does not exist, Compose attempts to pull it, unless you have also specified build, in
which case it builds it using the specified options and tags it with the specified tag.

init
Added in version 2.2 file format.
Run an init inside the container that forwards signals and reaps processes. Set this option to true to
enable this feature for the service.
version: "2.4"
services:
  web:
    image: alpine:latest
    init: true

The default init binary that is used is Tini, and is installed in /usr/libexec/docker-init on the daemon host. You can configure the daemon to use a custom init binary through the init-path configuration option.

isolation
Added in version 2.1 file format.
Specify a container’s isolation technology. On Linux, the only supported value is default. On
Windows, acceptable values are default, process and hyperv. Refer to the Docker Engine docs for
details.
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.

labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""

labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"

links
Link to containers in another service. Either specify both the service name and a link alias
("SERVICE:ALIAS"), or just the service name.
Links are a legacy option. We recommend using networks instead.
web:
  links:
    - "db"
    - "db:database"
    - "redis"

Containers for the linked service are reachable at a hostname identical to the alias, or the service
name if no alias was specified.

Links also express dependency between services in the same way as depends_on, so they
determine the order of service startup.
Note: If you define both links and networks, services with links between them must share at least
one network in common in order to communicate. We recommend using networks instead.
logging
Logging configuration for the service.

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"

The driver name specifies a logging driver for the service’s containers, as with the --log-driver option for docker run (documented here).

The default value is json-file.

driver: "json-file"
driver: "syslog"
driver: "none"

Note: Only the json-file and journald drivers make the logs available directly from docker-compose up and docker-compose logs. Using any other driver does not print any logs.
Specify logging options for the logging driver with the options key, as with the --log-opt option for docker run.
Logging options are key-value pairs. An example of syslog options:
driver: "syslog"
options:
  syslog-address: "tcp://192.168.0.42:123"

network_mode
Version 2 file format and up. Replaces the version 1 net option.
Network mode. Use the same values as the docker client --net parameter, plus the special
form service:[service name].
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"

networks
Version 2 file format and up. Replaces the version 1 net option.
Networks to join, referencing entries under the top-level networks key.
services:
  some-service:
    networks:
      - some-network
      - other-network

ALIASES

Aliases (alternative hostnames) for this service on the network. Other containers on the same
network can use either the service name or this alias to connect to one of the service’s containers.

Since aliases is network-scoped, the same service can have different aliases on different networks.
Note: A network-wide alias can be shared by multiple containers, and even by multiple services. If it
is, then exactly which container the name resolves to is not guaranteed.

The general format is shown here.

services:
  some-service:
    networks:
      some-network:
        aliases:
          - alias1
          - alias3
      other-network:
        aliases:
          - alias2

In the example below, three services are provided (web, worker, and db), along with two networks
(new and legacy). The db service is reachable at the hostname db or database on the new network,
and at db or mysql on the legacy network.
version: "2.4"
services:
  web:
    build: ./web
    networks:
      - new

  worker:
    build: ./worker
    networks:
      - legacy

  db:
    image: mysql
    networks:
      new:
        aliases:
          - database
      legacy:
        aliases:
          - mysql

networks:
  new:
  legacy:

IPV4_ADDRESS, IPV6_ADDRESS

Specify a static IP address for containers for this service when joining the network.

The corresponding network configuration in the top-level networks section must have an ipam block
with subnet and gateway configurations covering each static address. If IPv6 addressing is desired,
the enable_ipv6 option must be set.

An example:
version: "2.4"

services:
  app:
    image: busybox
    command: ifconfig
    networks:
      app_net:
        ipv4_address: 172.16.238.10
        ipv6_address: 2001:3984:3989::10

networks:
  app_net:
    driver: bridge
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1
        - subnet: 2001:3984:3989::/64
          gateway: 2001:3984:3989::1

LINK_LOCAL_IPS
Added in version 2.1 file format.

Specify a list of link-local IPs. Link-local IPs are special IPs which belong to a well-known subnet and are purely managed by the operator, usually dependent on the architecture where they are deployed. Therefore they are not managed by Docker’s IPAM driver.

Example usage:

version: "2.4"
services:
  app:
    image: busybox
    command: top
    networks:
      app_net:
        link_local_ips:
          - 57.123.22.11
          - 57.123.22.13
networks:
  app_net:
    driver: bridge

PRIORITY
Specify a priority to indicate in which order Compose should connect the service’s containers to its
networks. If unspecified, the default value is 0.
In the following example, the app service connects to app_net_1 first as it has the highest priority. It
then connects to app_net_3, then app_net_2, which uses the default priority value of 0.
version: "2.4"
services:
  app:
    image: busybox
    command: top
    networks:
      app_net_1:
        priority: 1000
      app_net_2:
      app_net_3:
        priority: 100
networks:
  app_net_1:
  app_net_2:
  app_net_3:

Note: If multiple networks have the same priority, the connection order is undefined.
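The connection order implied by priority can be sketched as a simple sort, highest value first. The connection_order helper below is illustrative, not Compose’s code; missing priorities default to 0, and ties are undefined (here they keep dict order only because Python’s sort is stable).

```python
def connection_order(networks):
    """Return network names in descending priority order (default 0)."""
    return sorted(networks, key=lambda name: -networks[name].get("priority", 0))

# The priorities from the example above:
nets = {"app_net_1": {"priority": 1000}, "app_net_2": {}, "app_net_3": {"priority": 100}}
print(connection_order(nets))  # ['app_net_1', 'app_net_3', 'app_net_2']
```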
pid
pid: "host"
pid: "container:custom_container_1"
pid: "service:foobar"

If set to one of the forms container:<container_name> or service:<service_name>, the service shares the PID address space of the designated container or service.

If set to "host", the service uses the host’s PID mode: the PID address space is shared between the container and the host operating system. Containers launched with this flag can access and manipulate other containers in the bare-metal machine’s namespace and vice versa.

Note: the service: and container: forms require version 2.1 or above
pids_limit
Added in version 2.1 file format.
Tunes a container’s PIDs limit. Set to -1 for unlimited PIDs.
pids_limit: 10

platform
Added in version 2.4 file format.
Target platform for this service’s containers, using the os[/arch[/variant]] syntax, e.g.:
platform: osx
platform: windows/amd64
platform: linux/arm64/v8

This parameter determines which version of the image will be pulled and/or on which platform the
service’s build will be performed.

ports
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral
host port is chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results
when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a
base-60 value. For this reason, we recommend always explicitly specifying your port mappings as
strings.
ports:
  - "3000"
  - "3000-3005"
  - "8000:8000"
  - "9090-9091:8080-8081"
  - "49100:22"
  - "127.0.0.1:8001:8001"
  - "127.0.0.1:5000-5010:5000-5010"
  - "6060:6060/udp"
  - "12400-12500:1240"
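The YAML quirk behind the quoting advice above can be shown in a few lines: YAML 1.1 resolves an unquoted token like 22:22 as a base-60 (sexagesimal) integer when each part after a colon is below 60. The parser below is a sketch of that resolution rule, not a YAML library.

```python
def yaml11_sexagesimal(token):
    """Interpret a colon-separated digit token the way YAML 1.1 does."""
    value = 0
    for part in token.split(":"):
        value = value * 60 + int(part)  # each colon shifts by a base-60 digit
    return value

print(yaml11_sexagesimal("22:22"))  # 1342 - an integer, not a port mapping!
```

Quoting the mapping ("22:22") keeps it a string, which is why the documentation recommends always writing port mappings as strings.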

runtime
Added in version 2.3 file format
Specify which runtime to use for the service’s containers. Default runtime and available runtimes are
listed in the output of docker info.
web:
  image: busybox:latest
  command: true
  runtime: runc

scale
Added in version 2.2 file format
Specify the default number of containers to deploy for this service. Whenever you run docker-compose up, Compose creates or removes containers to match the specified number. This value can be overridden using the --scale flag.
web:
  image: busybox:latest
  command: echo 'scaled'
  scale: 3

security_opt
Override the default labeling scheme for each container.

security_opt:
  - label:user:USER
  - label:role:ROLE

stop_grace_period
Specify how long to wait when attempting to stop a container if it doesn’t handle SIGTERM (or
whatever stop signal has been specified with stop_signal), before sending SIGKILL. Specified as
a duration.
stop_grace_period: 1s
stop_grace_period: 1m30s

By default, stop waits 10 seconds for the container to exit before sending SIGKILL.
stop_signal
Sets an alternative signal to stop the container. By default stop uses SIGTERM. Setting an
alternative signal using stop_signal causes stop to send that signal instead.
stop_signal: SIGUSR1

storage_opt
Added in version 2.1 file format.

Set storage driver options for this service.

storage_opt:
  size: '1G'

sysctls
Added in version 2.1 file format.

Kernel parameters to set in the container. You can use either an array or a dictionary.

sysctls:
  net.core.somaxconn: 1024
  net.ipv4.tcp_syncookies: 0

sysctls:
  - net.core.somaxconn=1024
  - net.ipv4.tcp_syncookies=0
ulimits
Override the default ulimits for a container. You can either specify a single limit as an integer or
soft/hard limits as a mapping.

ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000

userns_mode
Added in version 2.1 file format.
userns_mode: "host"

Disables the user namespace for this service, if the Docker daemon is configured with user namespaces. See dockerd for more information.

volumes
Mount host folders or named volumes. Named volumes need to be specified with the top-level volumes key.
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with a . or .. path segment.
SHORT SYNTAX
The short syntax uses the generic [SOURCE:]TARGET[:MODE] format, where SOURCE can be either a
host path or volume name. TARGET is the container path where the volume is mounted. Standard
modes are ro for read-only and rw for read-write (default).
volumes:
  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql

  # Specify an absolute path mapping
  - /opt/data:/var/lib/mysql

  # Path on the host, relative to the Compose file
  - ./cache:/tmp/cache

  # User-relative path
  - ~/configs:/etc/configs/:ro

  # Named volume
  - datavolume:/var/lib/mysql
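The [SOURCE:]TARGET[:MODE] rule can be illustrated with a rough parser. parse_volume below is a hypothetical helper that only handles the simple POSIX-path cases shown above (no Windows drive letters), and is not Compose’s implementation.

```python
def parse_volume(spec):
    """Split a short-syntax volume spec; MODE defaults to rw."""
    parts = spec.split(":")
    if len(parts) == 1:
        # TARGET only: the Engine creates an anonymous volume.
        return {"source": None, "target": parts[0], "mode": "rw"}
    if len(parts) == 2:
        return {"source": parts[0], "target": parts[1], "mode": "rw"}
    return {"source": parts[0], "target": parts[1], "mode": parts[2]}

print(parse_volume("~/configs:/etc/configs/:ro"))
# {'source': '~/configs', 'target': '/etc/configs/', 'mode': 'ro'}
```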

LONG SYNTAX
Added in version 2.3 file format.

The long form syntax allows the configuration of additional fields that can’t be expressed in the short
form.

• type: the mount type volume, bind, tmpfs or npipe
• source: the source of the mount, a path on the host for a bind mount, or the name of a volume defined in the top-level volumes key. Not applicable for a tmpfs mount.
• target: the path in the container where the volume is mounted
• read_only: flag to set the volume as read-only
• bind: configure additional bind options
  • propagation: the propagation mode used for the bind
• volume: configure additional volume options
  • nocopy: flag to disable copying of data from a container when a volume is created
• tmpfs: configure additional tmpfs options
  • size: the size for the tmpfs mount in bytes

version: "2.4"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static

networks:
  webnet:

volumes:
  mydata:

Note: When creating bind mounts, using the long syntax requires the referenced folder to be created
beforehand. Using the short syntax creates the folder on the fly if it doesn’t exist. See the bind
mounts documentation for more information.
volume_driver
Specify a default volume driver to be used for all declared volumes on this service.

volume_driver: mydriver

Note: In version 2 files, this option only applies to anonymous volumes (those specified in the image,
or specified under volumes without an explicit named volume or host path). To configure the driver
for a named volume, use the driver key under the entry in the top-level volumes option.

See Docker Volumes and Volume Plugins for more information.

volumes_from
Mount all of the volumes from another service or container, optionally specifying read-only access
(ro) or read-write (rw). If no access level is specified, then read-write is used.
volumes_from:
  - service_name
  - service_name:ro
  - container:container_name
  - container:container_name:rw

Notes

• The container:... formats are only supported in the version 2 file format.
• In version 1, you can use container names without marking them as such:
  • service_name
  • service_name:ro
  • container_name
  • container_name:rw

restart
no is the default restart policy, and it doesn’t restart a container under any circumstance. When always is specified, the container always restarts. The on-failure policy restarts a container if the exit code indicates an on-failure error. Note that no should be quoted ("no") so the YAML parser doesn’t read it as the boolean false.
restart: "no"
restart: always
restart: on-failure

cpu_count, cpu_percent, cpu_shares, cpu_period, cpu_quota, cpus, cpuset, domainname, hostname, ipc, mac_address, mem_limit, memswap_limit, mem_swappiness, mem_reservation, oom_kill_disable, oom_score_adj, privileged, read_only, shm_size, stdin_open, tty, user, working_dir
Each of these is a single value, analogous to its docker run counterpart.

Note: The following options were added in version 2.2: cpu_count, cpu_percent, cpus. The following options were added in version 2.1: oom_kill_disable, cpu_period.
cpu_count: 2
cpu_percent: 50
cpus: 0.5
cpu_shares: 73
cpu_quota: 50000
cpu_period: 20ms
cpuset: 0,1

user: postgresql
working_dir: /code

domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43
mem_limit: 1000000000
memswap_limit: 2000000000
mem_reservation: 512m
privileged: true

oom_score_adj: 500
oom_kill_disable: true

read_only: true
shm_size: 64M
stdin_open: true
tty: true

Specifying durations
Some configuration options, such as the interval and timeout sub-options for healthcheck, accept a duration as a string in a format that looks like this:
2.5s
10s
1m30s
2h32m
5h34m56s

The supported units are us, ms, s, m and h.
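As a rough sketch, such duration strings can be parsed by summing each number-unit pair. parse_duration is an illustrative helper, not Compose’s actual parser.

```python
import re

# Seconds per unit; us, ms, s, m and h are the supported units.
UNITS = {"us": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}

def parse_duration(text):
    """Parse a duration string like '1m30s' into seconds."""
    seconds = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(us|ms|s|m|h)", text):
        seconds += float(value) * UNITS[unit]
    return seconds

print(parse_duration("1m30s"))    # 90.0
print(parse_duration("2.5s"))     # 2.5
print(parse_duration("2h32m"))    # 9120.0
```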

Specifying byte values
Some configuration options, such as the device_read_bps sub-option for blkio_config, accept a byte value as a string in a format that looks like this:
2b
1024kb
2048k
300m
1gb

The supported units are b, k, m and g, and their alternative notation kb, mb and gb. Decimal values are
not supported at this time.
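A sketch of the corresponding parsing logic, assuming the conventional 1024-based multipliers: parse_bytes is a hypothetical helper, not Compose’s parser.

```python
import re

# 1024-based multipliers; kb/mb/gb are alternative spellings of k/m/g.
MULTIPLIERS = {"b": 1, "k": 1024, "kb": 1024,
               "m": 1024**2, "mb": 1024**2,
               "g": 1024**3, "gb": 1024**3}

def parse_bytes(text):
    """Parse a byte string like '1024kb' into an integer byte count."""
    match = re.fullmatch(r"(\d+)(b|kb?|mb?|gb?)", text.lower())
    if not match:
        raise ValueError(f"invalid byte value: {text!r}")  # decimals rejected too
    return int(match.group(1)) * MULTIPLIERS[match.group(2)]

print(parse_bytes("2b"))      # 2
print(parse_bytes("1024kb"))  # 1048576
print(parse_bytes("300m"))    # 314572800
```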

Volume configuration reference
While it is possible to declare volumes on the fly as part of the service declaration, this section allows you to create named volumes that can be reused across multiple services (without relying on volumes_from), and are easily retrieved and inspected using the docker command line or API. See the docker volume subcommand documentation for more information.

Here’s an example of a two-service setup where a database’s data directory is shared with another
service as a volume so that it can be periodically backed up:

version: "2.4"

services:
  db:
    image: db
    volumes:
      - data-volume:/var/lib/db
  backup:
    image: backup-service
    volumes:
      - data-volume:/var/lib/backup/data

volumes:
  data-volume:

An entry under the top-level volumes key can be empty, in which case it uses the default driver
configured by the Engine (in most cases, this is the local driver). Optionally, you can configure it
with the following keys:
driver
Specify which volume driver should be used for this volume. Defaults to whatever driver the Docker
Engine has been configured to use, which in most cases is local. If the driver is not available, the
Engine returns an error when docker-compose up tries to create the volume.
driver: foobar

driver_opts
Specify a list of options as key-value pairs to pass to the driver for this volume. Those options are
driver-dependent - consult the driver’s documentation for more information. Optional.

driver_opts:
  foo: "bar"
  baz: 1

external
If set to true, specifies that this volume has been created outside of Compose. docker-compose
up does not attempt to create it, and raises an error if it doesn’t exist.
For version 2.0 of the format, external cannot be used in conjunction with other volume configuration keys (driver, driver_opts, labels). This limitation no longer exists for version 2.1 and above.
In the example below, instead of attempting to create a volume called [projectname]_data, Compose looks for an existing volume simply called data and mounts it into the db service’s containers.
version: "2.4"

services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgresql/data

volumes:
  data:
    external: true
You can also specify the name of the volume separately from the name used to refer to it within the
Compose file:

volumes:
  data:
    external:
      name: actual-name-of-volume

Note: In newer versions of Compose, the external.name property is deprecated in favor of simply
using the name property.
labels
Added in version 2.1 file format.

Add metadata to volumes using Docker labels. You can use either an array or a dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.

labels:
  com.example.description: "Database volume"
  com.example.department: "IT/Ops"
  com.example.label-with-empty-value: ""

labels:
  - "com.example.description=Database volume"
  - "com.example.department=IT/Ops"
  - "com.example.label-with-empty-value"

name
Added in version 2.1 file format

Set a custom name for this volume.

version: "2.4"
volumes:
  data:
    name: my-app-data

It can also be used in conjunction with the external property:

version: "2.4"
volumes:
  data:
    external: true
    name: my-app-data

Network configuration reference
The top-level networks key lets you specify networks to be created. For a full explanation of Compose’s use of Docker networking features, see the Networking guide.
driver
Specify which driver should be used for this network.

The default driver depends on how the Docker Engine you’re using is configured, but in most
instances it is bridge on a single host and overlay on a Swarm.

The Docker Engine returns an error if the driver is not available.

driver: overlay

Starting in Compose file format 2.1, overlay networks are always created as attachable, and this is
not configurable. This means that standalone containers can connect to overlay networks.
driver_opts
Specify a list of options as key-value pairs to pass to the driver for this network. Those options are
driver-dependent - consult the driver’s documentation for more information. Optional.

driver_opts:
  foo: "bar"
  baz: 1

enable_ipv6
Added in version 2.1 file format.

Enable IPv6 networking on this network.


ipam
Specify custom IPAM config. This is an object with several properties, each of which is optional:

• driver: Custom IPAM driver, instead of the default.
• config: A list with zero or more config blocks, each containing any of the following keys:
  • subnet: Subnet in CIDR format that represents a network segment
  • ip_range: Range of IPs from which to allocate container IPs
  • gateway: IPv4 or IPv6 gateway for the master subnet
  • aux_addresses: Auxiliary IPv4 or IPv6 addresses used by the network driver, as a mapping from hostname to IP
• options: Driver-specific options as a key-value mapping.

A full example:

ipam:
  driver: default
  config:
    - subnet: 172.28.0.0/16
      ip_range: 172.28.5.0/24
      gateway: 172.28.5.254
      aux_addresses:
        host1: 172.28.1.5
        host2: 172.28.1.6
        host3: 172.28.1.7
  options:
    foo: bar
    baz: "0"

internal
By default, Docker also connects a bridge network to each network you create, to provide external connectivity. If you want to create an externally isolated overlay network, you can set this option to true.
labels
Added in version 2.1 file format.

Add metadata to networks using Docker labels. You can use either an array or a dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
  com.example.description: "Financial transaction network"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""

labels:
  - "com.example.description=Financial transaction network"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"

external
If set to true, specifies that this network has been created outside of Compose. docker-compose up does not attempt to create it, and raises an error if it doesn’t exist.
For version 2.0 of the format, external cannot be used in conjunction with other network
configuration keys (driver, driver_opts, ipam, internal). This limitation no longer exists for version
2.1 and above.
In the example below, proxy is the gateway to the outside world. Instead of attempting to create a network called [projectname]_outside, Compose looks for an existing network simply called outside and connects the proxy service’s containers to it.
version: "2.4"

services:
  proxy:
    build: ./proxy
    networks:
      - outside
      - default
  app:
    build: ./app
    networks:
      - default

networks:
  outside:
    external: true

You can also specify the name of the network separately from the name used to refer to it within the
Compose file:

networks:
  outside:
    external:
      name: actual-name-of-network

Not supported for version 2 docker-compose files. Use network_mode instead.


name
Added in version 2.1 file format

Set a custom name for this network.

version: "2.4"
networks:
  network1:
    name: my-app-net

It can also be used in conjunction with the external property:


version: "2.4"
networks:
  network1:
    external: true
    name: my-app-net

Variable substitution
Your configuration options can contain environment variables. Compose uses the variable values
from the shell environment in which docker-compose is run. For example, suppose the shell
contains POSTGRES_VERSION=9.3 and you supply this configuration:
db:
  image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for the POSTGRES_VERSION environment variable in the shell and substitutes its value in. For this example, Compose resolves the image to postgres:9.3 before running the configuration.
If an environment variable is not set, Compose substitutes with an empty string. In the example
above, if POSTGRES_VERSION is not set, the value for the image option is postgres:.
You can set default values for environment variables using a .env file, which Compose automatically looks for. Values set in the shell environment override those set in the .env file.
Important: The .env file feature only works when you use the docker-compose up command and does not work with docker stack deploy.
Both $VARIABLE and ${VARIABLE} syntax are supported. Additionally when using the 2.1 file format, it
is possible to provide inline default values using typical shell syntax:

• ${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in the environment.
• ${VARIABLE-default} evaluates to default only if VARIABLE is unset in the environment.

Similarly, the following syntax allows you to specify mandatory variables:

• ${VARIABLE:?err} exits with an error message containing err if VARIABLE is unset or empty in the environment.
• ${VARIABLE?err} exits with an error message containing err if VARIABLE is unset in the environment.

Other extended shell-style features, such as ${VARIABLE/foo/bar}, are not supported.
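These default-value forms follow standard POSIX shell parameter expansion, so you can try them directly in a shell. A minimal sketch (plain shell, not Compose itself):

```shell
# ':-' substitutes when the variable is unset OR empty;
# '-' substitutes only when the variable is unset.
unset VARIABLE
echo "${VARIABLE:-default}"   # prints: default
echo "${VARIABLE-default}"    # prints: default

VARIABLE=""
echo "${VARIABLE:-default}"   # prints: default
echo "${VARIABLE-default}"    # prints an empty line
```

The same distinction applies inside a Compose file: use `:-` when an empty value should also fall back to the default.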


You can use a $$ (double-dollar sign) when your configuration needs a literal dollar sign. This also
prevents Compose from interpolating a value, so a $$ allows you to refer to environment variables
that you don’t want processed by Compose.
web:
  build: .
  command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE"

If you forget and use a single dollar sign ($), Compose interprets the value as an environment
variable and warns you:

The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty string.

Extension fields
Added in version 2.1 file format.
It is possible to re-use configuration fragments using extension fields. These special fields can be of
any format as long as they are located at the root of your Compose file and their names start with
the x- character sequence.
Note

Starting with the 3.7 format (for the 3.x series) and 2.4 format (for the 2.x series), extension fields are
also allowed at the root of service, volume, network, config and secret definitions.
version: '3.4'
x-custom:
  items:
    - a
    - b
  options:
    max-size: '12m'
  name: "custom"

The contents of those fields are ignored by Compose, but they can be inserted in your resource
definitions using YAML anchors. For example, if you want several of your services to use the same
logging configuration:

logging:
  options:
    max-size: '12m'
    max-file: '5'
  driver: json-file

You may write your Compose file as follows:

version: '3.4'
x-logging:
  &default-logging
  options:
    max-size: '12m'
    max-file: '5'
  driver: json-file
services:
  web:
    image: myapp/web:latest
    logging: *default-logging
  db:
    image: mysql:latest
    logging: *default-logging

It is also possible to partially override values in extension fields using the YAML merge type. For
example:

version: '3.4'
x-volumes:
  &default-volume
  driver: foobar-storage

services:
  web:
    image: myapp/web:latest
    volumes: ["vol1", "vol2", "vol3"]
volumes:
  vol1: *default-volume
  vol2:
    << : *default-volume
    name: volume02
  vol3:
    << : *default-volume
    driver: default
    name: volume-local

Compose file version 1 reference


Reference and guidelines
These topics describe version 1 of the Compose file format. This is the oldest version.

Compose and Docker compatibility matrix


There are several versions of the Compose file format – 1, 2, 2.x, and 3.x. The table below is a quick
look. For full details on what each version includes and how to upgrade, see About versions and
upgrading.

This table shows which Compose file versions support specific Docker releases.

Compose file format   Docker Engine release
3.7                   18.06.0+
3.6                   18.02.0+
3.5                   17.12.0+
3.4                   17.09.0+
3.3                   17.06.0+
3.2                   17.04.0+
3.1                   1.13.1+
3.0                   1.13.0+
2.4                   17.12.0+
2.3                   17.06.0+
2.2                   1.13.0+
2.1                   1.12.0+
2.0                   1.10.0+
1.0                   1.9.1+
In addition to the Compose file format versions shown in the table, Compose itself is on a release
schedule, as shown in Compose releases, but file format versions do not necessarily increment with
each release. For example, Compose file format 3.0 was first introduced in Compose release 1.10.0,
and versioned gradually in subsequent releases.

Service configuration reference


The Version 1 Compose file is a YAML file that defines services.

The default path for a Compose file is ./docker-compose.yml.


Tip: You can use either a .yml or .yaml extension for this file. They both work.
A service definition contains configuration which is applied to each container started for that service,
much like passing command-line parameters to docker run.
As with docker run, options specified in the Dockerfile, such as CMD, EXPOSE, VOLUME, ENV, are
respected by default - you don’t need to specify them again in docker-compose.yml.

This section contains a list of all configuration options supported by a service definition in version 1.

build
Configuration options that are applied at build time.

build can be specified as a string containing a path to the build context.

build: ./dir

Note

In the version 1 file format, build is different in two ways:

 Only the string form (build: .) is allowed - not the object form that is allowed in Version 2
and up.
 Using build together with image is not allowed. Attempting to do so results in an error.

DOCKERFILE

Alternate Dockerfile.

Compose uses an alternate file to build with. A build path must also be specified.

build: .
dockerfile: Dockerfile-alternate
Note

In the version 1 file format, dockerfile is different from newer versions in two ways:
 It appears alongside build, not as a sub-option of it.
 Using dockerfile together with image is not allowed. Attempting to do so results in an error.
cap_add, cap_drop
Add or drop container capabilities. See man 7 capabilities for a full list.
cap_add:
  - ALL

cap_drop:
  - NET_ADMIN
  - SYS_ADMIN

Note: These options are ignored when deploying a stack in swarm mode with a (version 3)
Compose file.
command
Override the default command.

command: bundle exec thin -p 3000

The command can also be a list, in a manner similar to dockerfile:

command: ["bundle", "exec", "thin", "-p", "3000"]

cgroup_parent
Specify an optional parent cgroup for the container.

cgroup_parent: m-executor-abcd

container_name
Specify a custom container name, rather than a generated default name.

container_name: my-web-container
Because Docker container names must be unique, you cannot scale a service beyond 1 container if
you have specified a custom name. Attempting to do so results in an error.

devices
List of device mappings. Uses the same format as the --device docker client create option.
devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"

dns
Custom DNS servers. Can be a single value or a list.

dns: 8.8.8.8
dns:
  - 8.8.8.8
  - 9.9.9.9

dns_search
Custom DNS search domains. Can be a single value or a list.

dns_search: example.com
dns_search:
  - dc1.example.com
  - dc2.example.com

entrypoint
Override the default entrypoint.

entrypoint: /code/entrypoint.sh

The entrypoint can also be a list, in a manner similar to dockerfile:

entrypoint:
  - php
  - -d
  - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
  - -d
  - memory_limit=-1
  - vendor/bin/phpunit

Note: Setting entrypoint both overrides any default entrypoint set on the service’s image with
the ENTRYPOINT Dockerfile instruction, and clears out any default command on the image - meaning
that if there’s a CMD instruction in the Dockerfile, it is ignored.
env_file
Add environment variables from a file. Can be a single value or a list.

If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to
the directory that file is in.

Environment variables specified in environment override these values.

env_file: .env

env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env

Compose expects each line in an env file to be in VAR=VAL format. Lines beginning with # are
processed as comments and are ignored. Blank lines are also ignored.
# Set Rails/Rack environment
RACK_ENV=development

Note: If your service specifies a build option, variables defined in environment files
are not automatically visible during the build.
The value of VAL is used as is and not modified at all. For example, if the value is surrounded by
quotes (as is often the case with shell variables), the quotes are included in the value passed to
Compose.
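A quick way to see this is a plain-shell sketch of how an env file line is split on the first `=` (this is an illustration, not Compose's actual parser; demo.env is a throwaway file name):

```shell
# Write an env file whose value is wrapped in single quotes.
printf "GREETING='hello'\n" > demo.env

# Split the line on the first '=' -- the quotes stay in the value.
IFS='=' read -r key value < demo.env
echo "$key"     # prints: GREETING
echo "$value"   # prints: 'hello'   (quotes included)

rm demo.env
```

If you want the bare value hello inside the container, write GREETING=hello with no quotes.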
Keep in mind that the order of files in the list is significant in determining the value assigned to a
variable that shows up more than once. The files in the list are processed from the top down. For the
same variable specified in file a.env and assigned a different value in file b.env, if b.env is listed
below (after), then the value from b.env stands. For example, given the following declaration
in docker-compose.yml:
services:
  some-service:
    env_file:
      - a.env
      - b.env

And the following files:

# a.env
VAR=1

and

# b.env
VAR=hello

$VAR is hello.
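The top-down, last-wins behavior can be sketched in plain shell (an illustration of the precedence rule, not Compose's actual code; a.env and b.env are throwaway files):

```shell
printf 'VAR=1\n'     > a.env
printf 'VAR=hello\n' > b.env

# Process the files in list order; each assignment overwrites the
# previous one, so the last file listed wins for duplicate variables.
for f in a.env b.env; do
  while IFS='=' read -r key val; do
    [ "$key" = "VAR" ] && VAR="$val"
  done < "$f"
done

echo "$VAR"   # prints: hello
rm a.env b.env
```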
environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true,
false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the
YAML parser.

Environment variables with only a key are resolved to their values on the machine Compose is
running on, which can be helpful for secret or host-specific values.

environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
Note: If your service specifies a build option, variables defined in environment are not automatically
visible during the build.
expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked
services. Only the internal port can be specified.

expose:
  - "3000"
  - "8000"

extends
Extend another service, in the current file or another, optionally overriding configuration.

You can use extends on any service together with other configuration keys. The extends value must
be a dictionary defined with a required service and an optional file key.
extends:
  file: common.yml
  service: webapp

The service is the name of the service being extended, for example web or database. The file is the
location of a Compose configuration file defining that service.
If you omit the file, Compose looks for the service configuration in the current file. The file value
can be an absolute or relative path. If you specify a relative path, Compose treats it as relative to the
location of the current file.
You can extend a service that itself extends another. You can extend indefinitely. Compose does not
support circular references and docker-compose returns an error if it encounters one.
For more on extends, see the extends documentation.
external_links
Link to containers started outside this docker-compose.yml or even outside of Compose, especially
for containers that provide shared or common services. external_links follow semantics similar
to links when specifying both the container name and the link alias (CONTAINER:ALIAS).
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql

extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"

An entry with the IP address and hostname is created in /etc/hosts inside containers for this
service, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost

image
Specify the image to start the container from. Can either be a repository/tag or a partial image ID.

image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd

If the image does not exist, Compose attempts to pull it, unless you have also specified build, in
which case it builds it using the specified options and tags it with the specified tag.

Note: In the version 1 file format, using build together with image is not allowed. Attempting to do so
results in an error.
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.

labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""

labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"

links
Link to containers in another service. Either specify both the service name and a link alias
(SERVICE:ALIAS), or just the service name.
Links are a legacy option. We recommend using networks instead.
web:
  links:
    - db
    - db:database
    - redis

Containers for the linked service are reachable at a hostname identical to the alias, or the service
name if no alias was specified.

Links also express dependency between services in the same way as depends_on, so they
determine the order of service startup.

Note: If you define both links and networks, services with links between them must share at least
one network in common in order to communicate.
log_driver
Version 1 file format only. In version 2 and up, use logging.
Specify a log driver. The default is json-file.
log_driver: syslog

log_opt
Version 1 file format only. In version 2 and up, use logging.
Specify logging options as key-value pairs. An example of syslog options:
log_opt:
  syslog-address: "tcp://192.168.0.42:123"
net
Version 1 file format only. In version 2 and up, use network_mode and networks.
Network mode. Use the same values as the docker client --net parameter. The container:... form
can take a service name instead of a container name or id.
net: "bridge"
net: "host"
net: "none"
net: "container:[service name or container name/id]"

pid
pid: "host"

Sets the PID mode to the host PID mode. This turns on sharing of the PID address space between
the container and the host operating system. Containers launched with this flag can access and
manipulate other containers in the bare-metal machine’s namespace and vice versa.

ports
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral
host port is chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results
when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a
base-60 value. For this reason, we recommend always explicitly specifying your port mappings as
strings.
ports:
  - "3000"
  - "3000-3005"
  - "8000:8000"
  - "9090-9091:8080-8081"
  - "49100:22"
  - "127.0.0.1:8001:8001"
  - "127.0.0.1:5000-5010:5000-5010"
  - "6060:6060/udp"
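The base-60 pitfall is easy to reproduce: YAML 1.1 parsers resolve an unquoted xx:yy token as a sexagesimal (base-60) integer when both parts are under 60. A small arithmetic sketch showing the number such a parser would produce for an unquoted mapping like 22:45 (plain shell arithmetic, not an actual YAML parse):

```shell
# Unquoted 22:45 is read as 22*60 + 45 under YAML 1.1 sexagesimal rules,
# i.e. a single integer instead of a host:container port pair.
echo $(( 22 * 60 + 45 ))   # prints: 1365
```

Quoting the mapping as "22:45" forces it to be parsed as a string, which is why the documentation recommends always quoting port mappings.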

security_opt
Override the default labeling scheme for each container.
security_opt:
  - label:user:USER
  - label:role:ROLE

stop_signal
Sets an alternative signal to stop the container. By default stop uses SIGTERM. Setting an
alternative signal using stop_signal causes stop to send that signal instead.
stop_signal: SIGUSR1

ulimits
Override the default ulimits for a container. You can either specify a single limit as an integer or
soft/hard limits as a mapping.

ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000

volumes, volume_driver
Mount paths or named volumes, optionally specifying a path on the host machine (HOST:CONTAINER),
or an access mode (HOST:CONTAINER:ro). For version 2 files, named volumes need to be specified
with the top-level volumes key. When using version 1, the Docker Engine creates the named volume
automatically if it doesn’t exist.
You can mount a relative path on the host, which expands relative to the directory of the Compose
configuration file being used. Relative paths should always begin with . or ..
volumes:
  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql

  # Specify an absolute path mapping
  - /opt/data:/var/lib/mysql

  # Path on the host, relative to the Compose file
  - ./cache:/tmp/cache

  # User-relative path
  - ~/configs:/etc/configs/:ro

  # Named volume
  - datavolume:/var/lib/mysql

If you do not use a host path, you may specify a volume_driver.

volume_driver: mydriver

There are several things to note, depending on which Compose file version you’re using:

 For version 1 files, both named volumes and container volumes use the specified driver.

 No path expansion is done if you have also specified a volume_driver. For example, if you
specify a mapping of ./foo:/data, the ./foo part is passed straight to the volume driver
without being expanded.

See Docker Volumes and Volume Plugins for more information.

volumes_from
Mount all of the volumes from another service or container, optionally specifying read-only access
(ro) or read-write (rw). If no access level is specified, then read-write is used.
volumes_from:
  - service_name
  - service_name:ro

cpu_shares, cpu_quota, cpuset, domainname, hostname, ipc, mac_address, mem_limit,
memswap_limit, mem_swappiness, privileged, read_only, restart, shm_size, stdin_open,
tty, user, working_dir
Each of these is a single value, analogous to its docker run counterpart.

cpu_shares: 73
cpu_quota: 50000
cpuset: 0,1
user: postgresql
working_dir: /code

domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43

mem_limit: 1000000000
memswap_limit: 2000000000
privileged: true

restart: always

read_only: true
shm_size: 64M
stdin_open: true
tty: true

Compose file versions and upgrading



The Compose file is a YAML file defining services, networks, and volumes for a Docker application.

The Compose file formats are now described in these references, specific to each version.

Reference file What changed in this version

Version 3 (most current, and recommended) Version 3 updates

Version 2 Version 2 updates

Version 1 Version 1 updates


The topics below explain the differences among the versions, Docker Engine compatibility, and how
to upgrade.

Compatibility matrix
There are several versions of the Compose file format – 1, 2, 2.x, and 3.x.

This table shows which Compose file versions support specific Docker releases.

Compose file format   Docker Engine release
3.7                   18.06.0+
3.6                   18.02.0+
3.5                   17.12.0+
3.4                   17.09.0+
3.3                   17.06.0+
3.2                   17.04.0+
3.1                   1.13.1+
3.0                   1.13.0+
2.4                   17.12.0+
2.3                   17.06.0+
2.2                   1.13.0+
2.1                   1.12.0+
2.0                   1.10.0+
1.0                   1.9.1+

In addition to the Compose file format versions shown in the table, Compose itself is on a release
schedule, as shown in Compose releases, but file format versions do not necessarily increment with
each release. For example, Compose file format 3.0 was first introduced in Compose release 1.10.0,
and versioned gradually in subsequent releases.
Looking for more detail on Docker and Compose compatibility?

We recommend keeping up-to-date with newer releases as much as possible. However, if you are
using an older version of Docker and want to determine which Compose release is compatible, refer
to the Compose release notes. Each set of release notes gives details on which versions of Docker
Engine are supported, along with compatible Compose file format versions. (See also, the
discussion in issue #3404.)

For details on versions and how to upgrade, see Versioning and Upgrading.

Versioning
There are currently three versions of the Compose file format:

 Version 1, the legacy format. This is specified by omitting a version key at the root of the
YAML.
 Version 2.x. This is specified with a version: '2' or version: '2.1', etc., entry at the root of
the YAML.
 Version 3.x, the latest and recommended version, designed to be cross-compatible between
Compose and the Docker Engine’s swarm mode. This is specified with a version:
'3' or version: '3.1', etc., entry at the root of the YAML.

v2 and v3 Declaration
Note: When specifying the Compose file version to use, make sure to specify both
the major and minor numbers. If no minor version is given, 0 is used by default and not the latest
minor version.

The Compatibility Matrix shows Compose file versions mapped to Docker Engine releases.

To move your project to a later version, see the Upgrading section.

Note: If you’re using multiple Compose files or extending services, each file must be of the same
version - you cannot, for example, mix version 1 and 2 in a single project.

Several things differ depending on which version you use:

 The structure and permitted configuration keys
 The minimum Docker Engine version you must be running
 Compose’s behaviour with regard to networking

These differences are explained below.


Version 1
Compose files that do not declare a version are considered “version 1”. In those files, all
the services are declared at the root of the document.

Version 1 is supported by Compose up to 1.6.x. It will be deprecated in a future Compose release.

Version 1 files cannot declare named volumes, networks or build arguments.

Compose does not take advantage of networking when you use version 1: every container is placed
on the default bridge network and is reachable from every other container at its IP address. You
need to use links to enable discovery between containers.

Example:

web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis

Version 2
Compose files using the version 2 syntax must indicate the version number at the root of the
document. All services must be declared under the services key.

Version 2 files are supported by Compose 1.6.0+ and require a Docker Engine of version 1.10.0+.

Named volumes can be declared under the volumes key, and networks can be declared under
the networks key.

By default, every container joins an application-wide default network, and is discoverable at a
hostname that’s the same as the service name. This means links are largely unnecessary. For more
details, see Networking in Compose.

Note: When specifying the Compose file version to use, make sure to specify both
the major and minor numbers. If no minor version is given, 0 is used by default and not the
latest minor version. As a result, features added in later versions will not be supported. For
example:
version: "2"

is equivalent to:

version: "2.0"

Simple example:

version: "2.4"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: redis

A more extended example, defining volumes and networks:

version: "2.4"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    networks:
      - front-tier
      - back-tier
  redis:
    image: redis
    volumes:
      - redis-data:/var/lib/redis
    networks:
      - back-tier
volumes:
  redis-data:
    driver: local
networks:
  front-tier:
    driver: bridge
  back-tier:
    driver: bridge

Several other options were added to support networking, such as:

 aliases
 The depends_on option can be used in place of links to indicate dependencies between services and startup order.

version: "2.4"
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

 ipv4_address, ipv6_address

Variable substitution also was added in Version 2.


Version 2.1
An upgrade of version 2 that introduces new parameters only available with Docker Engine
version 1.12.0+. Version 2.1 files are supported by Compose 1.9.0+.

Introduces the following additional parameters:

 link_local_ips
 isolation in build configurations and service definitions
 labels for volumes and networks
 name for volumes
 userns_mode
 healthcheck
 sysctls
 pids_limit
 oom_kill_disable
 cpu_period

Version 2.2
An upgrade of version 2.1 that introduces new parameters only available with Docker Engine
version 1.13.0+. Version 2.2 files are supported by Compose 1.13.0+. This version also allows you
to specify default scale numbers inside the service’s configuration.

Introduces the following additional parameters:

 init
 scale
 cpu_rt_runtime and cpu_rt_period

Version 2.3
An upgrade of version 2.2 that introduces new parameters only available with Docker Engine
version 17.06.0+. Version 2.3 files are supported by Compose 1.16.0+.

Introduces the following additional parameters:

 target, extra_hosts and shm_size for build configurations
 start_period for healthchecks
 “Long syntax” for volumes
 runtime for service definitions
 device_cgroup_rules
Version 2.4
An upgrade of version 2.3 that introduces new parameters only available with Docker Engine
version 17.12.0+. Version 2.4 files are supported by Compose 1.21.0+.

Introduces the following additional parameters:

 platform for service definitions
 Support for extension fields at the root of service, network, and volume definitions

Version 3
Designed to be cross-compatible between Compose and the Docker Engine’s swarm mode, version
3 removes several options and adds several more.

 Removed: volume_driver, volumes_from, cpu_shares, cpu_quota, cpuset, mem_limit,
memswap_limit, extends, group_add. See the upgrading guide for how to migrate away from these.
(For more information on extends, see Extending services.)

 Added: deploy

Note: When specifying the Compose file version to use, make sure to specify both
the major and minor numbers. If no minor version is given, 0 is used by default and not the
latest minor version. As a result, features added in later versions will not be supported. For
example:
version: "3"

is equivalent to:

version: "3.0"

Version 3.3
An upgrade of version 3 that introduces new parameters only available with Docker Engine
version 17.06.0 and higher.

Introduces the following additional parameters:

 build labels
 credential_spec
 configs
 deploy endpoint_mode
Version 3.4
An upgrade of version 3 that introduces new parameters. It is only available with Docker Engine
version 17.09.0 and higher.

Introduces the following additional parameters:

 target and network in build configurations
 start_period for healthchecks
 order for update configurations
 name for volumes

Version 3.5
An upgrade of version 3 that introduces new parameters. It is only available with Docker Engine
version 17.12.0 and higher.

Introduces the following additional parameters:

 isolation in service definitions
 name for networks, secrets and configs
 shm_size in build configurations

Version 3.6
An upgrade of version 3 that introduces new parameters. It is only available with Docker Engine
version 18.02.0 and higher.

Introduces the following additional parameters:

 tmpfs size for tmpfs-type mounts

Version 3.7
An upgrade of version 3 that introduces new parameters. It is only available with Docker Engine
version 18.06.0 and higher.

Introduces the following additional parameters:

 init in service definitions
 rollback_config in deploy configurations
 Support for extension fields at the root of service, network, volume, secret and config definitions

Upgrading
Version 2.x to 3.x
Between versions 2.x and 3.x, the structure of the Compose file is the same, but several options
have been removed:

 volume_driver: Instead of setting the volume driver on the service, define a volume using the top-level volumes option and specify the driver there.

version: "3.7"
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
    driver: mydriver

 volumes_from: To share a volume between services, define it using the top-level volumes option and reference it from each service that shares it using the service-level volumes option.
 cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit: These have been replaced by the resources key under deploy. deploy configuration only takes effect when using docker stack deploy, and is ignored by docker-compose.
 extends: This option has been removed for version: "3.x" Compose files. (For more information, see Extending services.)
 group_add: This option has been removed for version: "3.x" Compose files.
 pids_limit: This option has not been introduced in version: "3.x" Compose files.
 link_local_ips in networks: This option has not been introduced in version: "3.x" Compose files.

Version 1 to 2.x
In the majority of cases, moving from version 1 to 2 is a very simple process:

1. Indent the whole file by one level and put a services: key at the top.
2. Add a version: '2' line at the top of the file.

It’s more complicated if you’re using particular configuration features:

 dockerfile: This now lives under the build key:

build:
  context: .
  dockerfile: Dockerfile-alternate

 log_driver, log_opt: These now live under the logging key:

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"

 links with environment variables: As documented in the environment variables reference, environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself, using the link hostname:

web:
  links:
    - db
  environment:
    - DB_PORT=tcp://db:5432

 external_links: Compose uses Docker networks when running version 2 projects, so links
behave slightly differently. In particular, two containers must be connected to at least one
network in common in order to communicate, even if explicitly linked together.

Either connect the external container to your app’s default network, or connect both the
external container and your service’s containers to an external network.

 net: This is now replaced by network_mode:

net: host -> network_mode: host
net: bridge -> network_mode: bridge
net: none -> network_mode: none

If you’re using net: "container:[service name]", you must now use network_mode: "service:[service name]" instead.

net: "container:web" -> network_mode: "service:web"

If you’re using net: "container:[container name/id]", the value does not need to change.

net: "container:cont-name" -> network_mode: "container:cont-name"
net: "container:abc12345" -> network_mode: "container:abc12345"

 volumes with named volumes: these must now be explicitly declared in a top-level volumes section of your Compose file. If a service mounts a named volume called data, you must declare a data volume in your top-level volumes section. The whole file might look like this:

version: "2.4"
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}

By default, Compose creates a volume whose name is prefixed with your project name. If you want it to just be called data, declare it as external:

volumes:
  data:
    external: true

Compatibility mode
docker-compose 1.20.0 introduces a new --compatibility flag designed to help developers
transition to version 3 more easily. When enabled, docker-compose reads the deploy section of each
service’s definition and attempts to translate it into the equivalent version 2 parameter. Currently, the
following deploy keys are translated:

 resources limits and memory reservations
 replicas
 restart_policy condition and max_attempts
All other keys are ignored and produce a warning if present. You can review the configuration that
will be used to deploy by using the --compatibility flag with the config command.
Do not use this in production!

We recommend against using --compatibility mode in production. Because the resulting
configuration is only an approximation built using non-Swarm mode properties, it may produce
unexpected results.

Docker stacks and distributed


application bundles (experimental)
The functionality described on this page is marked as Experimental, and as such, may change
before it becomes generally available.

Note: This is a modified copy of the Docker Stacks and Distributed Application Bundles document in
the docker/docker-ce repo. It’s been updated to accurately reflect newer releases.

Overview
A Dockerfile can be built into an image, and containers can be created from that image. Similarly,
a docker-compose.yml can be built into a distributed application bundle, and stacks can be
created from that bundle. In that sense, the bundle is a multi-service distributable image format.
Docker Stacks and Distributed Application Bundles started as experimental features introduced in
Docker 1.12 and Docker Compose 1.8, alongside the concept of swarm mode, and nodes and
services in the Engine API. Neither Docker Engine nor the Docker Registry support distribution of
bundles, and the concept of a bundle is not the emphasis for new releases going forward.

However, swarm mode, multi-service applications, and stack files now are fully supported. A stack
file is a particular type of version 3 Compose file.

If you are just getting started with Docker and want to learn the best way to deploy multi-service
applications, a good place to start is the Get Started walkthrough. This shows you how to define a
service configuration in a Compose file, deploy the app, and use the relevant tools and commands.

Produce a bundle
The easiest way to produce a bundle is to generate it using docker-compose from an
existing docker-compose.yml. Of course, that’s just one possible way to proceed, in the same way
that docker build isn’t the only way to produce a Docker image.
From docker-compose:
$ docker-compose bundle
WARNING: Unsupported key 'network_mode' in services.nsqd - ignoring
WARNING: Unsupported key 'links' in services.nsqd - ignoring
WARNING: Unsupported key 'volumes' in services.nsqd - ignoring
[...]
Wrote bundle to vossibility-stack.dab

Create a stack from a bundle


Note: Because support for stacks and bundles is in the experimental stage, you need to
install an experimental build of Docker Engine to use it.

If you’re on Mac or Windows, download the “Beta channel” version of Docker Desktop for
Mac or Docker Desktop for Windows to install it. If you’re on Linux, follow the instructions in
the experimental build README.
A stack is created using the docker deploy command:
# docker deploy --help

Usage: docker deploy [OPTIONS] STACK

Create and update a stack

Options:
--file string Path to a Distributed Application Bundle file (Default:
STACK.dab)
--help Print usage
--with-registry-auth Send registry authentication details to Swarm agents

Let’s deploy the stack created before:

# docker deploy vossibility-stack


Loading bundle from vossibility-stack.dab
Creating service vossibility-stack_elasticsearch
Creating service vossibility-stack_kibana
Creating service vossibility-stack_logstash
Creating service vossibility-stack_lookupd
Creating service vossibility-stack_nsqd
Creating service vossibility-stack_vossibility-collector

We can verify that services were correctly created:

# docker service ls
ID NAME REPLICAS IMAGE
COMMAND
29bv0vnlm903 vossibility-stack_lookupd 1
nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
/nsqlookupd
4awt47624qwh vossibility-stack_nsqd 1
nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
/nsqd --data-path=/data --lookupd-tcp-address=lookupd:4160
4tjx9biia6fs vossibility-stack_elasticsearch 1
elasticsearch@sha256:12ac7c6af55d001f71800b83ba91a04f716e58d82e748fa6e5a7359eed2301aa
7563uuzr9eys vossibility-stack_kibana 1
kibana@sha256:6995a2d25709a62694a937b8a529ff36da92ebee74bafd7bf00e6caf6db2eb03
9gc5m4met4he vossibility-stack_logstash 1
logstash@sha256:2dc8bddd1bb4a5a34e8ebaf73749f6413c101b2edef6617f2f7713926d2141fe
logstash -f /etc/logstash/conf.d/logstash.conf
axqh55ipl40h vossibility-stack_vossibility-collector 1 icecrime/vossibility-
collector@sha256:f03f2977203ba6253988c18d04061c5ec7aab46bca9dfd89a9a1fa4500989fba --
config /config/config.toml --debug

Manage stacks
Stacks are managed using the docker stack command:
# docker stack --help

Usage: docker stack COMMAND

Manage Docker stacks


Options:
--help Print usage

Commands:
config Print the stack configuration
deploy Create and update a stack
rm Remove the stack
services List the services in the stack
tasks List the tasks in the stack

Run 'docker stack COMMAND --help' for more information on a command.

Bundle file format


Distributed application bundles are described in a JSON format. When bundles are persisted as
files, the file extension is .dab.
A bundle has two top-level fields: version and services. The version used by Docker 1.12 tools
is 0.1.
services in the bundle are the services that comprise the app. They correspond to the
new Service object introduced in the 1.12 Docker Engine API.

A service has the following fields:

Image (required) string

The image that the service runs. Docker images should be referenced with full content hash
to fully specify the deployment artifact for the service.
Example: postgres@sha256:e0a230a9f5b4e1b8b03bb3e8cf7322b0e42b7838c5c87f4545edb48f5eb8f077

Command []string

Command to run in service containers.

Args []string

Arguments passed to the service containers.

Env []string
Environment variables.

Labels map[string]string

Labels used for setting metadata on services.

Ports []Port

Service ports (composed of Port (int) and Protocol (string)). A service description can only
specify the container port to be exposed. These ports can be mapped on runtime hosts at
the operator's discretion.

WorkingDir string

Working directory inside the service containers.

User string

Username or UID (format: <name|uid>[:<group|gid>]).

Networks []string

Networks that the service containers should be connected to. An entity deploying a bundle
should create networks as needed.

Note: Some configuration options are not yet supported in the DAB format, including volume
mounts.
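Putting the fields above together, a minimal bundle might look like the following sketch. This is a hypothetical db service built in Python and serialized as JSON; the digest is the postgres example from the field descriptions, the Env value is illustrative, and a real .dab produced by docker-compose bundle may differ in detail:

```python
import json

# Hypothetical minimal bundle using the fields described above.
# Field names follow the descriptions in this section; the image digest
# is the postgres example given earlier, and the Env entry is made up.
bundle = {
    "version": "0.1",
    "services": {
        "db": {
            "Image": "postgres@sha256:e0a230a9f5b4e1b8b03bb3e8cf7322b0e42b7838c5c87f4545edb48f5eb8f077",
            "Env": ["PGDATA=/var/lib/postgresql/data"],
            "Ports": [{"Port": 5432, "Protocol": "tcp"}],
            "Networks": ["default"],
        }
    },
}

# Persisting this dict with a .dab extension yields a bundle file.
print(json.dumps(bundle, indent=2))
```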

Use Compose with Swarm


You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — good places to start are Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.

Docker Compose and Docker Swarm aim to have full integration, meaning you can point a Compose
app at a Swarm cluster and have it all just work as if you were using a single Docker host.

The actual extent of integration depends on which version of the Compose file format you are using:
1. If you’re using version 1 along with links, your app works, but Swarm schedules all
containers on one host, because links between containers do not work across hosts with the
old networking system.

2. If you’re using version 2, your app should work with no changes:

o subject to the limitations described below,

o as long as the Swarm cluster is configured to use the overlay driver, or a custom
driver which supports multi-host networking.

Read Get started with multi-host networking to see how to set up a Swarm cluster with Docker
Machine and the overlay driver. Once you’ve got it running, deploying your app to it should be as
simple as:

$ eval "$(docker-machine env --swarm <name of swarm master machine>)"


$ docker-compose up

Limitations
Building images

Swarm can build an image from a Dockerfile just like a single-host Docker instance can, but the
resulting image only lives on a single node and won’t be distributed to other nodes.

If you want to use Compose to scale the service in question to multiple nodes, build the image, push
it to a registry such as Docker Hub, and reference it from docker-compose.yml:
$ docker build -t myusername/web .
$ docker push myusername/web

$ cat docker-compose.yml
web:
image: myusername/web

$ docker-compose up -d
$ docker-compose scale web=3

Multiple dependencies
If a service has multiple dependencies of the type which force co-scheduling (see Automatic
scheduling below), it’s possible that Swarm schedules the dependencies on different nodes, making
the dependent service impossible to schedule. For example, here foo needs to be co-scheduled
with bar and baz:
version: "2"
services:
foo:
image: foo
volumes_from: ["bar"]
network_mode: "service:baz"
bar:
image: bar
baz:
image: baz

The problem is that Swarm might first schedule bar and baz on different nodes (since they’re not
dependent on one another), making it impossible to pick an appropriate node for foo.

To work around this, use manual scheduling to ensure that all three services end up on the same
node:

version: "2"
services:
foo:
image: foo
volumes_from: ["bar"]
network_mode: "service:baz"
environment:
- "constraint:node==node-1"
bar:
image: bar
environment:
- "constraint:node==node-1"
baz:
image: baz
environment:
- "constraint:node==node-1"

Host ports and recreating containers

If a service maps a port from the host, such as 80:8000, then you may get an error like this when
running docker-compose up on it after the first time:
docker: Error response from daemon: unable to find a node that satisfies
container==6ab2dfe36615ae786ef3fc35d641a260e3ea9663d6e69c5b70ce0ca6cb373c02.

The usual cause of this error is that the container has a volume (defined either in its image or in the
Compose file) without an explicit mapping, and so in order to preserve its data, Compose has
directed Swarm to schedule the new container on the same node as the old container. This results in
a port clash.

There are two viable workarounds for this problem:

 Specify a named volume, and use a volume driver which is capable of mounting the volume
into the container regardless of what node it’s scheduled on.

Compose does not give Swarm any specific scheduling instructions if a service uses only
named volumes.

version: "2"

services:
web:
build: .
ports:
- "80:8000"
volumes:
- web-logs:/var/log/web

volumes:
web-logs:
driver: custom-volume-driver
 Remove the old container before creating the new one. You lose any data in the volume.

  $ docker-compose stop web
  $ docker-compose rm -f web
  $ docker-compose up web

Scheduling containers
Automatic scheduling

Some configuration options result in containers being automatically scheduled on the same Swarm
node to ensure that they work correctly. These are:

 network_mode: "service:..." and network_mode: "container:..." (and net:
"container:..." in the version 1 file format).
 volumes_from
 links

Manual scheduling

Swarm offers a rich set of scheduling and affinity hints, enabling you to control where containers are
located. They are specified via container environment variables, so you can use
Compose’s environment option to set them.
# Schedule containers on a specific node
environment:
- "constraint:node==node-1"

# Schedule containers on a node that has the 'storage' label set to 'ssd'
environment:
- "constraint:storage==ssd"

# Schedule containers where the 'redis' image is already pulled


environment:
- "affinity:image==redis"
Declare default environment variables in file

Compose supports declaring default environment variables in an environment file
named .env placed in the folder where the docker-compose command is executed (current working
directory).

Syntax rules
These syntax rules apply to the .env file:

 Compose expects each line in an env file to be in VAR=VAL format.
 Lines beginning with # are processed as comments and ignored.
 Blank lines are ignored.
 There is no special handling of quotation marks. This means that they are part of the VAL.

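These rules can be expressed as a small parser. The parse_env_file helper below is a hypothetical sketch, not Compose's actual implementation:

```python
def parse_env_file(text):
    """Parse .env-style text: VAR=VAL lines; '#' comments and blank lines
    are ignored; quotation marks receive no special handling."""
    env = {}
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blank lines and comments are ignored
        var, _, val = stripped.partition("=")
        env[var] = val  # quotes, if any, remain part of the value
    return env

sample = """
# database settings
DB_HOST=localhost
GREETING="hello"
"""
print(parse_env_file(sample))
# {'DB_HOST': 'localhost', 'GREETING': '"hello"'}
```

Note that GREETING keeps its quotation marks, matching the last rule above.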
Compose file and CLI variables


The environment variables you define here are used for variable substitution in your Compose file,
and can also be used to define the following CLI variables:

 COMPOSE_API_VERSION
 COMPOSE_CONVERT_WINDOWS_PATHS
 COMPOSE_FILE
 COMPOSE_HTTP_TIMEOUT
 COMPOSE_TLS_VERSION
 COMPOSE_PROJECT_NAME
 DOCKER_CERT_PATH
 DOCKER_HOST
 DOCKER_TLS_VERIFY

Notes

 Values present in the environment at runtime always override those defined inside
the .env file. Similarly, values passed via command-line arguments take precedence as well.
 Environment variables defined in the .env file are not automatically visible inside containers.
To set container-applicable environment variables, follow the guidelines in the
topic Environment variables in Compose, which describes how to pass shell environment
variables through to containers, define environment variables in Compose files, and more.
Environment variables in Compose

There are multiple parts of Compose that deal with environment variables in one sense or another.
This page should help you find the information you need.

Substitute environment variables in Compose files


It’s possible to use environment variables in your shell to populate values inside a Compose file:

web:
image: "webapp:${TAG}"

For more information, see the Variable substitution section in the Compose file reference.
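The substitution step can be sketched as follows. This is a simplified illustration, not Compose's implementation — real Compose also supports forms such as ${VAR:-default}, which this sketch omits:

```python
import os
import re

def substitute(value, env=os.environ):
    """Replace ${VAR} and $VAR references with values from env
    (simplified sketch; unset variables become empty strings)."""
    pattern = re.compile(r"\$\{(\w+)\}|\$(\w+)")

    def repl(match):
        name = match.group(1) or match.group(2)
        return env.get(name, "")

    return pattern.sub(repl, value)

print(substitute("webapp:${TAG}", {"TAG": "v1.5"}))  # webapp:v1.5
```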

Set environment variables in containers


You can set environment variables in a service’s containers with the ‘environment’ key, just like
with docker run -e VARIABLE=VALUE ...:
web:
environment:
- DEBUG=1

Pass environment variables to containers


You can pass environment variables from your shell straight through to a service’s containers with
the ‘environment’ key by not giving them a value, just like with docker run -e VARIABLE ...:
web:
environment:
- DEBUG

The value of the DEBUG variable in the container is taken from the value for the same variable in the
shell in which Compose is run.

The “env_file” configuration option


You can pass multiple environment variables from an external file through to a service’s containers
with the ‘env_file’ option, just like with docker run --env-file=FILE ...:
web:
env_file:
- web-variables.env

Set environment variables with ‘docker-compose run’
Just like with docker run -e, you can set environment variables on a one-off container with docker-
compose run -e:

docker-compose run -e DEBUG=1 web python console.py

You can also pass a variable through from the shell by not giving it a value:

docker-compose run -e DEBUG web python console.py

The value of the DEBUG variable in the container is taken from the value for the same variable in the
shell in which Compose is run.

The “.env” file


You can set default values for any environment variables referenced in the Compose file, or used to
configure Compose, in an environment file named .env:
$ cat .env
TAG=v1.5

$ cat docker-compose.yml
version: '3'
services:
web:
image: "webapp:${TAG}"

When you run docker-compose up, the web service defined above uses the image webapp:v1.5. You
can verify this with the config command, which prints your resolved application config to the terminal:
$ docker-compose config

version: '3'
services:
web:
image: 'webapp:v1.5'

Values in the shell take precedence over those specified in the .env file. If you set TAG to a different
value in your shell, the substitution in image uses that instead:
$ export TAG=v2.0
$ docker-compose config

version: '3'
services:
web:
image: 'webapp:v2.0'

When you set the same environment variable in multiple files, here’s the priority used by Compose
to choose which value to use:

1. Compose file
2. Shell environment variables
3. Environment file
4. Dockerfile
5. Variable is not defined
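This priority order can be sketched as a simple lookup chain. The resolve helper is hypothetical, for illustration only:

```python
def resolve(name, compose_file, shell_env, env_file, dockerfile_env):
    """Return the value for a variable following the documented priority:
    Compose file, then shell environment, then environment file, then
    Dockerfile; otherwise the variable is not defined (sketch only)."""
    for source in (compose_file, shell_env, env_file, dockerfile_env):
        if name in source:
            return source[name]
    return None  # 5. variable is not defined

print(resolve("NODE_ENV",
              {"NODE_ENV": "production"},   # Compose file
              {},                           # shell environment
              {"NODE_ENV": "test"},         # environment file
              {}))                          # Dockerfile ENV
# production
```

This mirrors the NODE_ENV example that follows: the Compose file entry wins over the environment file.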

In the example below, we set the same environment variable on an Environment file, and the
Compose file:

$ cat ./Docker/api/api.env
NODE_ENV=test

$ cat docker-compose.yml
version: '3'
services:
api:
image: 'node:6-alpine'
env_file:
- ./Docker/api/api.env
environment:
- NODE_ENV=production

When you run the container, the environment variable defined in the Compose file takes
precedence.

$ docker-compose exec api node

> process.env.NODE_ENV
'production'

An ARG or ENV setting in a Dockerfile takes effect only if there is no corresponding Docker
Compose entry for environment or env_file.
Specifics for NodeJS containers

If you have a package.json entry for script:start like NODE_ENV=test node server.js, then this
overrules any setting in your docker-compose.yml file.

Configure Compose using environment variables


Several environment variables are available for you to configure the Docker Compose command-line
behavior. They begin with COMPOSE_ or DOCKER_, and are documented in CLI Environment Variables.

Environment variables created by links


When using the ‘links’ option in a v1 Compose file, environment variables are created for each link.
They are documented in the Link environment variables reference.

However, these variables are deprecated. Use the link alias as a hostname instead.

Share Compose configurations between files and projects
Compose supports two methods of sharing common configuration:

1. Extending an entire Compose file by using multiple Compose files


2. Extending individual services with the extends field (for Compose file versions up to 2.1)

Multiple Compose files


Using multiple Compose files enables you to customize a Compose application for different
environments or different workflows.

Understanding multiple Compose files

By default, Compose reads two files, a docker-compose.yml and an optional
docker-compose.override.yml file. By convention, the docker-compose.yml contains your base configuration.
The override file, as its name implies, can contain configuration overrides for existing services or
entirely new services.

If a service is defined in both files, Compose merges the configurations using the rules described
in Adding and overriding configuration.

To use multiple override files, or an override file with a different name, you can use the -f option to
specify the list of files. Compose merges files in the order they’re specified on the command line.
See the docker-compose command reference for more information about using -f.
When you use multiple configuration files, you must make sure all paths in the files are relative to the
base Compose file (the first Compose file specified with -f). This is required because override files
need not be valid Compose files. Override files can contain small fragments of configuration.
Tracking which fragment of a service is relative to which path is difficult and confusing, so to keep
paths easier to understand, all paths must be defined relative to the base file.

Example use case

In this section, there are two common use cases for multiple Compose files: changing a Compose
app for different environments, and running administrative tasks against a Compose app.
DIFFERENT ENVIRONMENTS

A common use case for multiple files is changing a development Compose app for a production-like
environment (which may be production, staging or CI). To support these differences, you can split
your Compose configuration into a few different files:
Start with a base file that defines the canonical configuration for the services.

docker-compose.yml

web:
image: example/my_web_app:latest
links:
- db
- cache

db:
image: postgres:latest

cache:
image: redis:latest

In this example the development configuration exposes some ports to the host, mounts our code as
a volume, and builds the web image.

docker-compose.override.yml

web:
build: .
volumes:
- '.:/code'
ports:
- 8883:80
environment:
DEBUG: 'true'

db:
command: '-d'
ports:
- 5432:5432

cache:
ports:
- 6379:6379

When you run docker-compose up it reads the overrides automatically.

Now, it would be nice to use this Compose app in a production environment. So, create another
override file (which might be stored in a different git repo or managed by a different team).

docker-compose.prod.yml

web:
ports:
- 80:80
environment:
PRODUCTION: 'true'

cache:
environment:
TTL: '500'

To deploy with this production Compose file you can run

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

This deploys all three services using the configuration in docker-compose.yml and docker-
compose.prod.yml (but not the dev configuration in docker-compose.override.yml).

See production for more information about Compose in production.


ADMINISTRATIVE TASKS

Another common use case is running adhoc or administrative tasks against one or more services in
a Compose app. This example demonstrates running a database backup.

Start with a docker-compose.yml.

web:
image: example/my_web_app:latest
links:
- db
db:
image: postgres:latest

In a docker-compose.admin.yml add a new service to run the database export or backup.

dbadmin:
build: database_admin/
links:
- db

To start a normal environment run docker-compose up -d. To run a database backup, include
the docker-compose.admin.yml as well.
docker-compose -f docker-compose.yml -f docker-compose.admin.yml \
run dbadmin db-backup

Extending services
Note: The extends keyword is supported in earlier Compose file formats up to Compose file version
2.1 (see extends in v1 and extends in v2), but is not supported in Compose version 3.x. See
the Version 3 summary of keys added and removed, along with information on how to upgrade.
See moby/moby#31101 to follow the discussion thread on the possibility of adding support for
extends in some form in future versions.
Docker Compose’s extends keyword enables sharing of common configurations among different
files, or even different projects entirely. Extending services is useful if you have several services that
reuse a common set of configuration options. Using extends you can define a common set of service
options in one place and refer to it from anywhere.
Keep in mind that links, volumes_from, and depends_on are never shared between services
using extends. These exceptions exist to avoid implicit dependencies; you always
define links and volumes_from locally. This ensures dependencies between services are clearly
visible when reading the current file. Defining these locally also ensures that changes to the
referenced file don’t break anything.

Understand the extends configuration


When defining any service in docker-compose.yml, you can declare that you are extending another
service like this:
web:
extends:
file: common-services.yml
service: webapp

This instructs Compose to re-use the configuration for the webapp service defined in the common-
services.yml file. Suppose that common-services.yml looks like this:

webapp:
build: .
ports:
- "8000:8000"
volumes:
- "/data"

In this case, you get exactly the same result as if you wrote docker-compose.yml with the
same build, ports and volumes configuration values defined directly under web.
You can go further and define (or re-define) configuration locally in docker-compose.yml:
web:
extends:
file: common-services.yml
service: webapp
environment:
- DEBUG=1
cpu_shares: 5

important_web:
extends: web
cpu_shares: 10

You can also write other services and link your web service to them:
web:
extends:
file: common-services.yml
service: webapp
environment:
- DEBUG=1
cpu_shares: 5
links:
- db
db:
image: postgres

Example use case

Extending an individual service is useful when you have multiple services that have a common
configuration. The example below is a Compose app with two services: a web application and a
queue worker. Both services use the same codebase and share many configuration options.

In a common.yml we define the common configuration:

app:
build: .
environment:
CONFIG_FILE_PATH: /code/config
API_KEY: xxxyyy
cpu_shares: 5

In a docker-compose.yml we define the concrete services which use the common configuration:

webapp:
extends:
file: common.yml
service: app
command: /code/run_web_app
ports:
- 8080:8080
links:
- queue
- db

queue_worker:
extends:
file: common.yml
service: app
command: /code/run_worker
links:
- queue

Adding and overriding configuration


Compose copies configurations from the original service over to the local one. If a configuration
option is defined in both the original service and the local service, the local
value replaces or extends the original value.

For single-value options like image, command or mem_limit, the new value replaces the old value.
# original service
command: python app.py

# local service
command: python otherapp.py

# result
command: python otherapp.py

build and image in Compose file version 1


In the case of build and image, when using version 1 of the Compose file format, using one option in
the local service causes Compose to discard the other option if it was defined in the original service.
For example, if the original service defines image: webapp and the local service defines build:
. then the resulting service has a build: . and no image option.
This is because build and image cannot be used together in a version 1 file.
For the multi-value options ports, expose, external_links, dns, dns_search, and tmpfs, Compose
concatenates both sets of values:
# original service
expose:
- "3000"

# local service
expose:
- "4000"
- "5000"

# result
expose:
- "3000"
- "4000"
- "5000"

In the case of environment, labels, volumes, and devices, Compose “merges” entries together with
locally-defined values taking precedence. For environment and labels, the environment variable or
label name determines which value is used:
# original service
environment:
- FOO=original
- BAR=original

# local service
environment:
- BAR=local
- BAZ=local

# result
environment:
- FOO=original
- BAR=local
- BAZ=local

Entries for volumes and devices are merged using the mount path in the container:
# original service
volumes:
- ./original:/foo
- ./original:/bar

# local service
volumes:
- ./local:/bar
- ./local:/baz

# result
volumes:
- ./original:/foo
- ./local:/bar
- ./local:/baz
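The replace, concatenate, and merge behaviors above can be sketched in one function. merge_service is a simplified illustration, not Compose's implementation; merging volumes and devices by mount path is omitted for brevity:

```python
MULTI_VALUE = ("ports", "expose", "external_links", "dns", "dns_search", "tmpfs")
MERGED = ("environment", "labels")

def merge_service(original, local):
    """Sketch of extends merging: single-value options are replaced,
    multi-value options are concatenated, and environment/labels entries
    are merged by name with locally-defined values taking precedence."""
    result = dict(original)
    for key, value in local.items():
        if key in MULTI_VALUE:
            result[key] = original.get(key, []) + value  # concatenate
        elif key in MERGED:
            merged = dict(e.split("=", 1) for e in original.get(key, []))
            merged.update(e.split("=", 1) for e in value)  # local wins
            result[key] = [f"{k}={v}" for k, v in merged.items()]
        else:
            result[key] = value  # single-value option: replace
    return result

merged = merge_service(
    {"command": "python app.py", "expose": ["3000"],
     "environment": ["FOO=original", "BAR=original"]},
    {"command": "python otherapp.py", "expose": ["4000", "5000"],
     "environment": ["BAR=local", "BAZ=local"]},
)
print(merged["command"])           # python otherapp.py
print(merged["expose"])            # ['3000', '4000', '5000']
print(sorted(merged["environment"]))
# ['BAR=local', 'BAZ=local', 'FOO=original']
```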

Networking in Compose

This page applies to Compose file formats version 2 and higher. Networking features are not
supported for Compose file version 1 (legacy).

By default Compose sets up a single network for your app. Each container for a service joins the
default network and is both reachable by other containers on that network, and discoverable by them
at a hostname identical to the container name.

Note: Your app’s network is given a name based on the “project name”, which is based on the name
of the directory it lives in. You can override the project name with either the --project-name flag or
the COMPOSE_PROJECT_NAME environment variable.
For example, suppose your app is in a directory called myapp, and your docker-compose.yml looks like
this:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"

When you run docker-compose up, the following happens:

1. A network called myapp_default is created.


2. A container is created using web’s configuration. It joins the network myapp_defaultunder the
name web.
3. A container is created using db’s configuration. It joins the network myapp_defaultunder the
name db.

In v2.1+, overlay networks are always attachable


Starting in Compose file format 2.1, overlay networks are always created as attachable, and this is
not configurable. This means that standalone containers can connect to overlay networks.
In Compose file format 3.x, you can optionally set the attachable property to false.
Each container can now look up the hostname web or db and get back the appropriate container’s IP
address. For example, web’s application code could connect to the URL postgres://db:5432 and
start using the Postgres database.
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the above example,
for db, the HOST_PORT is 8001 and the container port is 5432 (the postgres default). Networked service-to-
service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is
accessible outside the swarm as well.
Within the web container, your connection string to db would look like postgres://db:5432, and from
the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.
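The two connection strings can be derived from a "HOST:CONTAINER" ports mapping like "8001:5432". connection_strings is a hypothetical helper and the IP address is illustrative:

```python
def connection_strings(mapping, service, docker_ip):
    """Split a 'HOST:CONTAINER' ports mapping and build the two
    connection strings discussed above (illustrative sketch)."""
    host_port, container_port = mapping.split(":")
    return (
        f"postgres://{service}:{container_port}",  # service-to-service
        f"postgres://{docker_ip}:{host_port}",     # from the host machine
    )

internal, external = connection_strings("8001:5432", "db", "192.168.99.100")
print(internal)  # postgres://db:5432
print(external)  # postgres://192.168.99.100:8001
```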

Update containers
If you make a configuration change to a service and run docker-compose up to update it, the old
container is removed and the new one joins the network under a different IP address but the same
name. Running containers can look up that name and connect to the new address, but the old
address stops working.

If any containers have connections open to the old container, they are closed. It is a container’s
responsibility to detect this condition, look up the name again and reconnect.
Links
Links allow you to define extra aliases by which a service is reachable from another service. They
are not required to enable services to communicate - by default, any service can reach any other
service at that service’s name. In the following example, db is reachable from web at the
hostnames db and database:
version: "3"
services:

web:
build: .
links:
- "db:database"
db:
image: postgres

See the links reference for more information.

Multi-host networking
Note: The instructions in this section refer to legacy Docker Swarm operations, and only work when
targeting a legacy Swarm cluster. For instructions on deploying a compose project to the newer
integrated swarm mode, consult the Docker Stacksdocumentation.
When deploying a Compose application to a Swarm cluster, you can make use of the built-
in overlay driver to enable multi-host communication between containers with no changes to your
Compose file or application code.
Consult the Getting started with multi-host networking to see how to set up a Swarm cluster. The
cluster uses the overlay driver by default, but you can specify it explicitly if you prefer - see below for
how to do this.

Specify custom networks


Instead of just using the default app network, you can specify your own networks with the top-
level networks key. This lets you create more complex topologies and specify custom network
drivers and options. You can also use it to connect services to externally-created networks which
aren’t managed by Compose.
Each service can specify what networks to connect to with the service-level networks key, which is a
list of names referencing entries under the top-level networks key.
Here’s an example Compose file defining two custom networks. The proxy service is isolated from
the db service, because they do not share a network in common - only app can talk to both.
version: "3"
services:

proxy:
build: ./proxy
networks:
- frontend
app:
build: ./app
networks:
- frontend
- backend
db:
image: postgres
networks:
- backend

networks:
frontend:
# Use a custom driver
driver: custom-driver-1
backend:
# Use a custom driver which takes special options
driver: custom-driver-2
driver_opts:
foo: "1"
bar: "2"
Networks can be configured with static IP addresses by setting the ipv4_address and/or
ipv6_address for each attached network.

Networks can also be given a custom name (since version 3.5):

version: "3.5"
networks:
frontend:
name: custom_frontend
driver: custom-driver-1

For full details of the network configuration options available, see the following references:

 Top-level networks key


 Service-level networks key

Configure the default network


Instead of (or as well as) specifying your own networks, you can also change the settings of the app-
wide default network by defining an entry under networks named default:
version: "3"
services:

web:
build: .
ports:
- "8000:8000"
db:
image: postgres

networks:
default:
# Use a custom driver
driver: custom-driver-1

Use a pre-existing network


If you want your containers to join a pre-existing network, use the external option:
networks:
default:
external:
name: my-pre-existing-network

Instead of attempting to create a network called [projectname]_default, Compose looks for a
network called my-pre-existing-network and connects your app’s containers to it.

Use Compose in production



When you define your app with Compose in development, you can use this definition to run your
application in different environments such as CI, staging, and production.

The easiest way to deploy an application is to run it on a single server, similar to how you would run
your development environment. If you want to scale up your application, you can run Compose apps
on a Swarm cluster.

Modify your Compose file for production

You probably need to make changes to your app configuration to make it ready for production.
These changes may include:

 Removing any volume bindings for application code, so that code stays inside the container
and can’t be changed from outside
 Binding to different ports on the host
 Setting environment variables differently, such as when you need to decrease the verbosity
of logging, or to enable email sending
 Specifying a restart policy like restart: always to avoid downtime
 Adding extra services such as a log aggregator

For this reason, consider defining an additional Compose file, say production.yml, which specifies
production-appropriate configuration. This configuration file only needs to include the changes you’d
like to make from the original Compose file. The additional Compose file can be applied over the
original docker-compose.yml to create a new configuration.
Once you’ve got a second configuration file, tell Compose to use it with the -f option:
docker-compose -f docker-compose.yml -f production.yml up -d
See Using multiple compose files for a more complete example.
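A production override only needs the settings that differ from the base file. As a sketch, a hypothetical production.yml applying some of the changes listed above might look like this (the web service name matches the earlier examples; the port, restart, and log-level values are illustrative, not prescribed by this guide):

```yaml
# production.yml (hypothetical override, applied on top of docker-compose.yml)
version: "2"
services:
  web:
    ports:
      - "80:8000"        # bind to a different host port in production
    restart: always      # restart policy to avoid downtime
    environment:
      LOG_LEVEL: warning # decrease logging verbosity
```

Settings in the override are merged on top of the base file's service definitions when both files are passed with -f.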

Deploying changes

When you make changes to your app code, remember to rebuild your image and recreate your app’s
containers. To redeploy a service called web, use:
$ docker-compose build web
$ docker-compose up --no-deps -d web

This first rebuilds the image for web and then stops, destroys, and recreates just the web service. The --no-deps flag prevents Compose from also recreating any services which web depends on.

Running Compose on a single server

You can use Compose to deploy an app to a remote Docker host by setting
the DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH environment variables appropriately.
For tasks like this, Docker Machine makes managing local and remote Docker hosts very easy, and
is recommended even if you’re not deploying remotely.
Once you’ve set up your environment variables, all the normal docker-compose commands work with
no further configuration.
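As a sketch, the variables might be set like this before running Compose (the host, port, and certificate path below are placeholders, not values from this guide):

```shell
# Point the Docker and Compose CLIs at a remote daemon over TLS.
# All three values below are placeholders; substitute your own host and certs.
export DOCKER_HOST="tcp://example-docker-host:2376"
export DOCKER_TLS_VERIFY="1"
export DOCKER_CERT_PATH="$HOME/.docker/example-certs"

# From here on, plain docker-compose commands target the remote host, e.g.:
#   docker-compose up -d
echo "Compose will talk to $DOCKER_HOST"
```
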

Running Compose on a Swarm cluster

Docker Swarm, a Docker-native clustering system, exposes the same API as a single Docker host,
which means you can use Compose against a Swarm instance and run your apps across multiple
hosts.

Link environment variables (superseded)
Estimated reading time: 1 minute

Note: Environment variables are no longer the recommended method for connecting to linked
services. Instead, you should use the link name (by default, the name of the linked service) as
the hostname to connect to. See the docker-compose.yml documentation for details.

Environment variables are only populated if you’re using the legacy version 1 Compose file format.
Compose uses Docker links to expose services’ containers to one another. Each linked container
injects a set of environment variables, each of which begins with the uppercase name of the
container.

To see what environment variables are available to a service, run docker-compose run SERVICE env.
name_PORT
Full URL, such as DB_PORT=tcp://172.17.0.5:5432
name_PORT_num_protocol
Full URL, such as DB_PORT_5432_TCP=tcp://172.17.0.5:5432
name_PORT_num_protocol_ADDR
Container’s IP address, such as DB_PORT_5432_TCP_ADDR=172.17.0.5
name_PORT_num_protocol_PORT
Exposed port number, such as DB_PORT_5432_TCP_PORT=5432
name_PORT_num_protocol_PROTO
Protocol (tcp or udp), such as DB_PORT_5432_TCP_PROTO=tcp
name_NAME
Fully qualified container name, such as DB_1_NAME=/myapp_web_1/myapp_db_1
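Since these values are plain strings, the URL forms can be split with POSIX shell parameter expansion. A small sketch using the sample value above:

```shell
# Split a link URL such as DB_PORT=tcp://172.17.0.5:5432 into its parts.
DB_PORT="tcp://172.17.0.5:5432"

proto="${DB_PORT%%://*}"      # everything before "://"    -> tcp
hostport="${DB_PORT#*://}"    # everything after "://"     -> 172.17.0.5:5432
addr="${hostport%%:*}"        # before the first colon     -> 172.17.0.5
port="${hostport##*:}"        # after the last colon       -> 5432

echo "$proto $addr $port"
```
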

Control startup and shutdown order in Compose
Estimated reading time: 2 minutes

You can control the order of service startup and shutdown with the depends_on option. Compose
always starts and stops containers in dependency order, where dependencies are determined
by depends_on, links, volumes_from, and network_mode: "service:...".

However, for startup Compose does not wait until a container is “ready” (whatever that means for
your particular application) - only until it’s running. There’s a good reason for this.

The problem of waiting for a database (for example) to be ready is really just a subset of a much
larger problem of distributed systems. In production, your database could become unavailable or
move hosts at any time. Your application needs to be resilient to these types of failures.

To handle this, design your application to attempt to re-establish a connection to the database after
a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a
connection is lost for any reason. However, if you don’t need this level of resilience, you can work
around the problem with a wrapper script:

 Use a tool such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper
scripts which you can include in your application’s image to poll a given host and port until
it’s accepting TCP connections.

For example, to use wait-for-it.sh or wait-for to wrap your service’s command:


version: "2"
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
  db:
    image: postgres

Tip: There are limitations to this first solution. For example, it doesn’t verify when a specific
service is really ready. If you add more arguments to the command, use the bash
shift command with a loop, as shown in the next example.

 Alternatively, write your own wrapper script to perform a more application-specific health
check. For example, you might want to wait until Postgres is definitely ready to accept
commands:

#!/bin/sh
# wait-for-postgres.sh

set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd

You can use this as a wrapper script as in the previous example, by setting:

command: ["./wait-for-postgres.sh", "db", "python", "app.py"]
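The host="$1"; shift; cmd="$@" lines in these wrappers are what let the real command pass through untouched. A standalone sketch of that argument handling:

```shell
# Simulate being invoked as: ./wait-for-postgres.sh db python app.py
set -- db python app.py

host="$1"   # the first argument is the host to wait for
shift       # drop it; the remaining arguments are the command to exec
echo "waiting on: $host"
echo "will exec: $*"
```

After the shift, "$@" expands to exactly the command and its arguments, which is why exec $cmd at the end of the wrapper runs the service's real entrypoint.
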

Sample apps with Compose


Estimated reading time: 1 minute

The following samples show the various aspects of how to work with Docker Compose. As a
prerequisite, be sure to install Docker Compose if you have not already done so.

Key concepts these samples cover


The samples should help you to:

 define services based on Docker images using docker-compose.yml and docker-stack.yml files
 understand the relationship between docker-compose.yml and Dockerfiles
 learn how to make calls to your application services from Compose files
 learn how to deploy applications and services to a swarm

Samples tailored to demo Compose


These samples focus specifically on Docker Compose:

 Quickstart: Compose and Django - Shows how to use Docker Compose to set up and run a
simple Django/PostgreSQL app.
 Quickstart: Compose and Rails - Shows how to use Docker Compose to set up and run a
Rails/PostgreSQL app.

 Quickstart: Compose and WordPress - Shows how to use Docker Compose to set up and
run WordPress in an isolated environment with Docker containers.

Samples that include Compose in the workflows


These samples include working with Docker Compose as part of broader learning goals:

 Get Started with Docker - This multi-part tutorial covers writing your first app, data storage,
networking, and swarms, and ends with your app running on production servers in the cloud.

 Deploying an app to a Swarm - This tutorial from Docker Labs shows you how to create and
customize a sample voting app, deploy it to a swarm, test it, reconfigure the app, and
redeploy.

DTR CLI
docker/dtr overview
Estimated reading time: 1 minute

This tool has commands to install, configure, and back up Docker Trusted Registry (DTR). It also
allows uninstalling DTR. By default, the tool runs in interactive mode and prompts you for the
values needed.

Additional help is available for each command with the ‘--help’ option.

Usage
docker run -it --rm docker/dtr \
command [command options]

If not specified, docker/dtr uses the latest tag by default. To work with a different version, specify it
in the command. For example, docker run -it --rm docker/dtr:2.6.0.

Commands
Option Description

install Install Docker Trusted Registry

join Add a new replica to an existing DTR cluster

reconfigure Change DTR configurations

remove Remove a DTR replica from a cluster

destroy Destroy a DTR replica’s data

restore Install and restore DTR from an existing backup

backup Create a backup of DTR

upgrade Upgrade DTR 2.4.x cluster to this version

images List all the images necessary to install DTR

emergency-repair Recover DTR from loss of quorum

docker/dtr backup
Estimated reading time: 3 minutes

Create a backup of DTR

Usage
docker run -i --rm docker/dtr \
backup [command options] > backup.tar

Example Commands
BASIC

docker run -i --rm --log-driver none docker/dtr:2.6.5 \
  backup --ucp-ca "$(cat ca.pem)" --existing-replica-id 5eb9459a7832 > backup.tar
ADVANCED (WITH CHAINED COMMANDS)

The following command has been tested on Linux:

DTR_VERSION=$(docker container inspect $(docker container ps -f \
  name=dtr-registry -q) | grep -m1 -Po '(?<=DTR_VERSION=)\d.\d.\d'); \
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut \
  -f 3 -d '-'); \
read -p 'ucp-url (The UCP URL including domain and port): ' UCP_URL; \
read -p 'ucp-username (The UCP administrator username): ' UCP_ADMIN; \
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  docker/dtr:$DTR_VERSION backup \
  --ucp-username $UCP_ADMIN \
  --ucp-url $UCP_URL \
  --ucp-ca "$(curl https://${UCP_URL}/ca)" \
  --existing-replica-id $REPLICA_ID > \
  dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar

For a detailed explanation on the advanced example, see Back up your DTR metadata. To learn
more about the --log-driver option for docker run, see docker run reference.

Description
This command creates a tar file with the contents of the volumes used by DTR, and prints it. You
can then use docker/dtr restore to restore the data from an existing backup.

Note:

 This command only creates backups of configurations and image metadata. It does not back
up users and organizations. Users and organizations can be backed up during a UCP
backup.

It also does not back up Docker images stored in your registry. You should implement a
separate backup policy for the Docker images stored in your registry, taking into
consideration whether your DTR installation is configured to store images on the filesystem
or is using a cloud provider.
 This backup contains sensitive information and should be stored securely.

 Using the --offline-backup flag temporarily shuts down the RethinkDB container. Take the
replica out of your load balancer to avoid downtime.
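The image data itself must be archived separately. A minimal, hypothetical sketch for a filesystem-backed installation (the directory below is a stand-in created for illustration, not DTR's actual storage path):

```shell
# Hypothetical sketch: archive an image storage directory to a dated tar file.
storage_dir="$(mktemp -d)"               # stand-in for the real storage directory
echo "layer-data" > "$storage_dir/blob"  # stand-in content

backup_file="/tmp/dtr-images-$(date +%Y%m%d).tar"
tar -cf "$backup_file" -C "$storage_dir" .

# Verify the archive lists the expected entry before trusting it.
tar -tf "$backup_file" | grep -q blob && echo "archive contains blob"
```

For cloud-provider storage backends, use the provider's own snapshot or replication tooling instead of a filesystem archive.
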

Options

--debug ($DEBUG)
Enable debug mode for additional logs.

--existing-replica-id ($DTR_REPLICA_ID)
The ID of an existing DTR replica. To add, remove, or modify a DTR replica, you must connect to an existing healthy replica's database.

--help-extended ($DTR_EXTENDED_HELP)
Display extended help text for a given command.

--offline-backup ($DTR_OFFLINE_BACKUP)
This flag takes RethinkDB down during backup and takes a more reliable backup. If you back up DTR with this flag, RethinkDB will go down during backup. However, offline backups are guaranteed to be more consistent than online backups.

--ucp-ca ($UCP_CA)
Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls ($UCP_INSECURE_TLS)
Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-password ($UCP_PASSWORD)
The UCP administrator password.

--ucp-url ($UCP_URL)
The UCP URL including domain and port.

--ucp-username ($UCP_USERNAME)
The UCP administrator username.

docker/dtr destroy
Estimated reading time: 1 minute

Destroy a DTR replica’s data

Usage
docker run -it --rm docker/dtr \
destroy [command options]

Description
This command forcefully removes all containers and volumes associated with a DTR replica without
notifying the rest of the cluster. Use this command on all replicas to uninstall DTR.

Use the remove command to gracefully scale down your DTR cluster.

Options

--replica-id ($DTR_DESTROY_REPLICA_ID)
The ID of the replica to destroy.

--ucp-url ($UCP_URL)
The UCP URL including domain and port.

--ucp-username ($UCP_USERNAME)
The UCP administrator username.

--ucp-password ($UCP_PASSWORD)
The UCP administrator password.

--debug ($DEBUG)
Enable debug mode for additional logs.

--help-extended ($DTR_EXTENDED_HELP)
Display extended help text for a given command.

--ucp-insecure-tls ($UCP_INSECURE_TLS)
Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-ca ($UCP_CA)
Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use --ucp-ca "$(cat ca.pem)".

docker/dtr emergency-repair
Estimated reading time: 3 minutes

Recover DTR from loss of quorum

Usage
docker run -it --rm docker/dtr \
emergency-repair [command options]

Description
This command repairs a DTR cluster that has lost quorum by reverting your cluster to a single DTR
replica.

There are three steps you can take to recover an unhealthy DTR cluster:

1. If the majority of replicas are healthy, remove the unhealthy nodes from the cluster, and join
new ones for high availability.
2. If the majority of replicas are unhealthy, use this command to revert your cluster to a single
DTR replica.
3. If you can’t repair your cluster to a single replica, you’ll have to restore from an existing
backup, using the restore command.

When you run this command, a DTR replica of your choice is repaired and turned into the only
replica in the whole DTR cluster. The containers for all the other DTR replicas are stopped and
removed. When using the force option, the volumes for these replicas are also deleted.
After repairing the cluster, you should use the join command to add more DTR replicas for high
availability.

Options

--debug ($DEBUG)
Enable debug mode for additional logs.

--existing-replica-id ($DTR_REPLICA_ID)
The ID of an existing DTR replica. To add, remove, or modify DTR, you must connect to an existing healthy replica's database.

--help-extended ($DTR_EXTENDED_HELP)
Display extended help text for a given command.

--overlay-subnet ($DTR_OVERLAY_SUBNET)
The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24. For high availability, DTR creates an overlay network between UCP nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where DTR replicas are deployed.

--prune ($PRUNE)
Delete the data volumes of all unhealthy replicas. With this option, the volume of the DTR replica you're restoring is preserved but the volumes for all other replicas are deleted. This has the same result as completely uninstalling DTR from those replicas.

--ucp-ca ($UCP_CA)
Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls ($UCP_INSECURE_TLS)
Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-password ($UCP_PASSWORD)
The UCP administrator password.

--ucp-url ($UCP_URL)
The UCP URL including domain and port.

--ucp-username ($UCP_USERNAME)
The UCP administrator username.

--y, yes ($YES)
Answer yes to any prompts.

docker/dtr install
Estimated reading time: 8 minutes

Install Docker Trusted Registry

Usage
docker run -it --rm docker/dtr \
install [command options]

Description
This command installs Docker Trusted Registry (DTR) on a node managed by Docker Universal
Control Plane (UCP).

After installing DTR, you can join additional DTR replicas using docker/dtr join.

Example Usage
$ docker run -it --rm docker/dtr:2.7.0 install \
--ucp-node <UCP_NODE_HOSTNAME> \
--ucp-insecure-tls

Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a production deployment.

Options

--async-nfs ($ASYNC_NFS)
Use async NFS volume options on the replica specified in the --existing-replica-id option. The NFS configuration must be set with --nfs-storage-url explicitly to use this option. Using --async-nfs will bring down any containers on the replica that use the NFS volume, delete the NFS volume, bring it back up with the appropriate configuration, and restart any containers that were brought down.

--client-cert-auth-ca ($CLIENT_CA)
Specify root CA certificates for client authentication with --client-cert-auth-ca "$(cat ca.pem)".

--debug ($DEBUG)
Enable debug mode for additional logs.

--dtr-ca ($DTR_CA)
Use a PEM-encoded TLS CA certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own root CA public certificate with --dtr-ca "$(cat ca.pem)".

--dtr-cert ($DTR_CERT)
Use a PEM-encoded TLS certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own public key certificate with --dtr-cert "$(cat cert.pem)". If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file to establish a chain of trust.

--dtr-external-url ($DTR_EXTERNAL_URL)
URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the URL you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on within your DTR system settings. Format https://host[:port], where port is the value you used with --replica-https-port. Since the HSTS (HTTP Strict-Transport-Security) header is included in all API responses, make sure to specify the FQDN (Fully Qualified Domain Name) of your DTR, or your browser may refuse to load the web interface.

--dtr-key ($DTR_KEY)
Use a PEM-encoded TLS private key for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with --dtr-key "$(cat key.pem)".

--dtr-storage-volume ($DTR_STORAGE_VOLUME)
Customize the volume to store Docker images. By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high availability. Use this flag to specify a full path or volume name for DTR to store images. For high availability, make sure all DTR replicas can read and write data on this volume. If you're using NFS, use --nfs-storage-url instead.

--enable-client-cert-auth ($ENABLE_CLIENT_CERT_AUTH)
Enables TLS client certificate authentication; use --enable-client-cert-auth=false to disable it. If enabled, DTR will additionally authenticate users via TLS client certificates. You must also specify the root certificate authorities (CAs) that issued the certificates with --client-cert-auth-ca.

--enable-pprof ($DTR_PPROF)
Enables pprof profiling of the server. Use --enable-pprof=false to disable it. Once DTR is deployed with this flag, you can access the pprof endpoint for the API server at /debug/pprof, and the registry endpoint at /registry_debug_pprof/debug/pprof.

--help-extended ($DTR_EXTENDED_HELP)
Display extended help text for a given command.

--http-proxy ($DTR_HTTP_PROXY)
The HTTP proxy used for outgoing requests.

--https-proxy ($DTR_HTTPS_PROXY)
The HTTPS proxy used for outgoing requests.

--log-host ($LOG_HOST)
The syslog endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp.

--log-level ($LOG_LEVEL)
Log level for all container logs when logging to syslog. Default: INFO. The supported log levels are debug, info, warn, error, or fatal.

--log-protocol ($LOG_PROTOCOL)
The protocol for sending logs. Default is internal. By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are tcp, udp, or internal. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with --log-host.

--nfs-storage-url ($NFS_STORAGE_URL)
Use NFS to store Docker images following this format: nfs://<ip|hostname>/<mountpoint>. By default, DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high availability. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so there is no need to specify this flag. To reconfigure DTR to stop using NFS, leave this option empty: --nfs-storage-url "". See USE NFS for more details.

--nfs-options ($NFS_OPTIONS)
Pass in NFS volume options verbatim for the replica specified in the --existing-replica-id option. The NFS configuration must be set with --nfs-storage-url explicitly to use this option. Specifying --nfs-options will pass in character-for-character the options specified in the argument when creating or recreating the NFS volume. For instance, to use NFS v4 with async, pass in "rw,nfsvers=4,async" as the argument.

--no-proxy ($DTR_NO_PROXY)
List of domains the proxy should not be used for. When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org].

--overlay-subnet ($DTR_OVERLAY_SUBNET)
The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24. For high availability, DTR creates an overlay network between UCP nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where DTR replicas are deployed.

--replica-http-port ($REPLICA_HTTP_PORT)
The public HTTP port for the DTR replica. Default is 80. This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks.

--replica-https-port ($REPLICA_HTTPS_PORT)
The public HTTPS port for the DTR replica. Default is 443. This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port.

--replica-id ($DTR_INSTALL_REPLICA_ID)
Assign a 12-character hexadecimal ID to the DTR replica. Random by default.

--replica-rethinkdb-cache-mb ($RETHINKDB_CACHE_MB)
The maximum amount of space in MB for the RethinkDB in-memory cache used by the given replica. Default is auto. Auto is (available_memory - 1024) / 2. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one.

--ucp-ca ($UCP_CA)
Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls ($UCP_INSECURE_TLS)
Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-node ($UCP_NODE)
The hostname of the UCP node to deploy DTR. Random by default. You can find the hostnames of the nodes in the cluster in the UCP web interface, or by running docker node ls on a UCP manager node.

--ucp-password ($UCP_PASSWORD)
The UCP administrator password.

--ucp-url ($UCP_URL)
The UCP URL including domain and port.

--ucp-username ($UCP_USERNAME)
The UCP administrator username.

docker/dtr join
Estimated reading time: 3 minutes

Add a new replica to an existing DTR cluster. Use SSH to log into any node that is already part of
UCP.

Usage
docker run -it --rm \
docker/dtr:2.6.0 join \
--ucp-node <ucp-node-name> \
--ucp-insecure-tls

Description
This command creates a replica of an existing DTR on a node managed by Docker Universal
Control Plane (UCP).

To set up DTR for high availability, create 3, 5, or 7 replicas of DTR.

Options

--debug ($DEBUG)
Enable debug mode for additional logs.

--existing-replica-id ($DTR_REPLICA_ID)
The ID of an existing DTR replica. To add, remove, or modify DTR, you must connect to an existing healthy replica's database.

--help-extended ($DTR_EXTENDED_HELP)
Display extended help text for a given command.

--replica-http-port ($REPLICA_HTTP_PORT)
The public HTTP port for the DTR replica. Default is 80. This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks.

--replica-https-port ($REPLICA_HTTPS_PORT)
The public HTTPS port for the DTR replica. Default is 443. This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port.

--replica-id ($DTR_INSTALL_REPLICA_ID)
Assign a 12-character hexadecimal ID to the DTR replica. Random by default.

--replica-rethinkdb-cache-mb ($RETHINKDB_CACHE_MB)
The maximum amount of space in MB for the RethinkDB in-memory cache used by the given replica. Default is auto. Auto is (available_memory - 1024) / 2. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one.

--skip-network-test ($DTR_SKIP_NETWORK_TEST)
Don't test if overlay networks are working correctly between UCP nodes. For high availability, DTR creates an overlay network between UCP nodes and tests that it is working when joining replicas. Don't use this option for production deployments.

--ucp-ca ($UCP_CA)
Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls ($UCP_INSECURE_TLS)
Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-node ($UCP_NODE)
The hostname of the UCP node to deploy DTR. Random by default. You can find the hostnames of the nodes in the cluster in the UCP web interface, or by running docker node ls on a UCP manager node.

--ucp-password ($UCP_PASSWORD)
The UCP administrator password.

--ucp-url ($UCP_URL)
The UCP URL including domain and port.

--ucp-username ($UCP_USERNAME)
The UCP administrator username.

--unsafe-join ($DTR_UNSAFE_JOIN)
Join a new replica even if the cluster is unhealthy. Joining replicas to an unhealthy DTR cluster leads to split-brain scenarios and data loss. Don't use this option for production deployments.

docker/dtr reconfigure
Estimated reading time: 8 minutes

Change DTR configurations.

Usage
docker run -it --rm docker/dtr \
reconfigure [command options]

Description
This command changes DTR configuration settings. If you are using NFS as a storage volume,
see Use NFS for details on changes to the reconfiguration process.

DTR is restarted for the new configurations to take effect. To avoid downtime, configure your
DTR for high availability.

Options

--async-nfs ($ASYNC_NFS)
Use async NFS volume options on the replica specified in the --existing-replica-id option. The NFS configuration must be set with --nfs-storage-url explicitly to use this option. Using --async-nfs will bring down any containers on the replica that use the NFS volume, delete the NFS volume, bring it back up with the appropriate configuration, and restart any containers that were brought down.

--client-cert-auth-ca ($CLIENT_CA)
Specify root CA certificates for client authentication with --client-cert-auth-ca "$(cat ca.pem)".

--debug ($DEBUG)
Enable debug mode for additional logs of this bootstrap container (the log level of downstream DTR containers can be set with --log-level).

--dtr-ca ($DTR_CA)
Use a PEM-encoded TLS CA certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own root CA public certificate with --dtr-ca "$(cat ca.pem)".

--dtr-cert ($DTR_CERT)
Use a PEM-encoded TLS certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own public key certificate with --dtr-cert "$(cat cert.pem)". If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file to establish a chain of trust.

--dtr-external-url ($DTR_EXTERNAL_URL)
URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the URL you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on in the DTR settings. Format https://host[:port], where port is the value you used with --replica-https-port. Since the HSTS (HTTP Strict-Transport-Security) header is included in all API responses, make sure to specify the FQDN (Fully Qualified Domain Name) of your DTR, or your browser may refuse to load the web interface.

--dtr-key ($DTR_KEY)
Use a PEM-encoded TLS private key for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with --dtr-key "$(cat key.pem)".

--dtr-storage-volume ($DTR_STORAGE_VOLUME)
Customize the volume to store Docker images. By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high availability. Use this flag to specify a full path or volume name for DTR to store images. For high availability, make sure all DTR replicas can read and write data on this volume. If you're using NFS, use --nfs-storage-url instead.

--enable-client-cert-auth ($ENABLE_CLIENT_CERT_AUTH)
Enables TLS client certificate authentication; use --enable-client-cert-auth=false to disable it. If enabled, DTR will additionally authenticate users via TLS client certificates. You must also specify the root certificate authorities (CAs) that issued the certificates with --client-cert-auth-ca.

--enable-pprof ($DTR_PPROF)
Enables pprof profiling of the server. Use --enable-pprof=false to disable it. Once DTR is deployed with this flag, you can access the pprof endpoint for the API server at /debug/pprof, and the registry endpoint at /registry_debug_pprof/debug/pprof.

--existing-replica-id ($DTR_REPLICA_ID)
The ID of an existing DTR replica. To add, remove, or modify DTR, you must connect to an existing healthy replica's database.

--help-extended ($DTR_EXTENDED_HELP)
Display extended help text for a given command.

--http-proxy ($DTR_HTTP_PROXY)
The HTTP proxy used for outgoing requests.

--https-proxy ($DTR_HTTPS_PROXY)
The HTTPS proxy used for outgoing requests.

--log-host ($LOG_HOST)
The syslog endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp.

--log-level ($LOG_LEVEL)
Log level for all container logs when logging to syslog. Default: INFO. The supported log levels are debug, info, warn, error, or fatal.

--log-protocol ($LOG_PROTOCOL)
The protocol for sending logs. Default is internal. By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are tcp, udp, and internal. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with --log-host.

--nfs-storage-url ($NFS_STORAGE_URL)
When running DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, there is an issue with reconfiguring and restoring DTR with --nfs-storage-url which leads to erased tags. Make sure to back up your DTR metadata before you proceed. To work around the issue, manually create a storage volume on each DTR node and reconfigure DTR with --dtr-storage-volume and your newly-created volume instead. See Reconfigure Using a Local NFS Volume for more details. To reconfigure DTR to stop using NFS, leave this option empty: --nfs-storage-url "". See USE NFS for more details. Upgrade to 2.6.4 and follow Best practice for data migration in 2.6.4 when switching storage backends.

--nfs-options ($NFS_OPTIONS)
Pass in NFS volume options verbatim for the replica specified in the --existing-replica-id option. The NFS configuration must be set with --nfs-storage-url explicitly to use this option. Specifying --nfs-options will pass in character-for-character the options specified in the argument when creating or recreating the NFS volume. For instance, to use NFS v4 with async, pass in "rw,nfsvers=4,async" as the argument.

--no-proxy ($DTR_NO_PROXY)
List of domains the proxy should not be used for. When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org].

--replica-http-port ($REPLICA_HTTP_PORT)
The public HTTP port for the DTR replica. Default is 80. This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks.

The public HTTPS port for the DTR replica.


--replica- Default is 443. This allows you to customize
$REPLICA_HTTPS_PORT
https-port the HTTPS port where users can reach DTR.
Each replica can use a different port.
Option Environment Variable Description

The maximum amount of space in MB for


RethinkDB in-memory cache used by the
--replica- given replica. Default is auto. Auto
rethinkdb- $RETHINKDB_CACHE_MB is (available_memory - 1024) / 2. This
cache-mb config allows changing the RethinkDB cache
usage per replica. You need to run it once per
replica to change each one.

A flag added in 2.6.4 which lets you indicate


the migration status of your storage data.
Specify this flag if you are migrating to a new
storage backend and have already moved all
--storage- contents from your old backend to your new
$STORAGE_MIGRATED
migrated one. If not specified, DTR will assume the
new backend is empty during a backend
storage switch, and consequently destroy
your existing tags and related image
metadata.

Use a PEM-encoded TLS CA certificate for


UCP. Download the UCP TLS CA certificate
--ucp-ca $UCP_CA
from https://<ucp-url>/ca, and use --ucp-
ca "$(cat ca.pem)".

Disable TLS verification for UCP. The


installation uses TLS but always trusts the
--ucp-
TLS certificate used by UCP, which can lead
insecure- $UCP_INSECURE_TLS
to MITM (man-in-the-middle) attacks. For
tls
production deployments, use --ucp-ca
"$(cat ca.pem)" instead.

--ucp-
$UCP_PASSWORD The UCP administrator password.
password

--ucp-url $UCP_URL The UCP URL including domain and port.

--ucp-
$UCP_USERNAME The UCP administrator username.
username

docker/dtr remove

Remove a DTR replica from a cluster

Usage
docker run -it --rm docker/dtr \
remove [command options]

Description
This command gracefully scales down your DTR cluster by removing exactly one replica. All other
replicas must be healthy and will remain healthy after this operation.

Options
| Option | Environment Variable | Description |
|---|---|---|
| `--debug` | `$DEBUG` | Enable debug mode for additional logs. |
| `--existing-replica-id` | `$DTR_REPLICA_ID` | The ID of an existing DTR replica. To add, remove, or modify DTR, you must connect to an existing healthy replica's database. |
| `--help-extended` | `$DTR_EXTENDED_HELP` | Display extended help text for a given command. |
| `--replica-id` | `$DTR_REMOVE_REPLICA_ID` | DEPRECATED. Alias for `--replica-ids`. |
| `--replica-ids` | `$DTR_REMOVE_REPLICA_IDS` | A comma-separated list of IDs of replicas to remove from the cluster. |
| `--ucp-ca` | `$UCP_CA` | Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from `https://<ucp-url>/ca`, and use `--ucp-ca "$(cat ca.pem)"`. |
| `--ucp-insecure-tls` | `$UCP_INSECURE_TLS` | Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use `--ucp-ca "$(cat ca.pem)"` instead. |
| `--ucp-password` | `$UCP_PASSWORD` | The UCP administrator password. |
| `--ucp-url` | `$UCP_URL` | The UCP URL including domain and port. |
| `--ucp-username` | `$UCP_USERNAME` | The UCP administrator username. |
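
To illustrate, the snippet below composes a `remove` invocation and only prints it (no Docker daemon is contacted); the UCP URL, credentials, and replica IDs are hypothetical placeholders, not values from this reference.

```shell
# Hypothetical values -- substitute your own UCP address and replica IDs.
UCP_URL="https://ucp.example.com:443"
HEALTHY_REPLICA="e8cc74c28f44"      # a healthy replica whose database we connect to
REPLICAS_TO_REMOVE="5eb9459a7832"   # comma-separated list accepted by --replica-ids

cmd="docker run -it --rm docker/dtr remove \
  --ucp-url $UCP_URL \
  --ucp-username admin \
  --existing-replica-id $HEALTHY_REPLICA \
  --replica-ids $REPLICAS_TO_REMOVE"

# Print the composed command; run it manually once the values are correct.
printf '%s\n' "$cmd"
```

Note that `--existing-replica-id` names a replica that stays in the cluster, while `--replica-ids` names the ones to remove.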

docker/dtr restore

Install and restore DTR from an existing backup

Usage
docker run -i --rm docker/dtr \
restore [command options] < backup.tar

Description
This command performs a fresh installation of DTR, and reconfigures it with configuration data from
a tar file generated by docker/dtr backup. If you are restoring DTR after a failure, please make sure
you have destroyed the old DTR fully. See DTR disaster recovery for Docker's recommended
recovery strategies based on your setup.

There are three ways to recover an unhealthy DTR cluster:

1. If the majority of replicas are healthy, remove the unhealthy nodes from the cluster, and join
new nodes for high availability.
2. If the majority of replicas are unhealthy, use this command to revert your cluster to a single
DTR replica.
3. If you can’t repair your cluster to a single replica, you’ll have to restore from an existing
backup, using the restore command.

This command does not restore Docker images. You should implement a separate restore
procedure for the Docker images stored in your registry, taking into consideration whether your DTR
installation is configured to store images on the local filesystem or using a cloud provider.
After restoring the cluster, you should use the join command to add more DTR replicas for high
availability.
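
As a sketch, a restore invocation might be composed like this (the command is printed rather than executed; the UCP URL and backup filename are placeholders, and production deployments should pass `--ucp-ca` instead of `--ucp-insecure-tls`):

```shell
# Hypothetical values -- adjust the UCP URL and backup filename for your setup.
UCP_URL="https://ucp.example.com:443"
BACKUP_FILE="backup.tar"

# The backup tar is fed on stdin, so use -i (not -it) as in the usage above.
cmd="docker run -i --rm docker/dtr restore \
  --ucp-url $UCP_URL \
  --ucp-username admin \
  --ucp-insecure-tls < $BACKUP_FILE"

echo "$cmd"
```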

Options
| Option | Environment Variable | Description |
|---|---|---|
| `--debug` | `$DEBUG` | Enable debug mode for additional logs. |
| `--dtr-ca` | `$DTR_CA` | Use a PEM-encoded TLS CA certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS CA certificate with `--dtr-ca "$(cat ca.pem)"`. |
| `--dtr-cert` | `$DTR_CERT` | Use a PEM-encoded TLS certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS certificate with `--dtr-cert "$(cat ca.pem)"`. |
| `--dtr-external-url` | `$DTR_EXTERNAL_URL` | URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated, they are redirected to the URL you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on within your DTR system settings. Format: `https://host[:port]`, where port is the value you used with `--replica-https-port`. |
| `--dtr-key` | `$DTR_KEY` | Use a PEM-encoded TLS private key for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with `--dtr-key "$(cat ca.pem)"`. |
| `--dtr-storage-volume` | `$DTR_STORAGE_VOLUME` | Mandatory flag to allow DTR to fall back to your configured storage setting at the time of backup. If you have previously configured DTR to use a full path or volume name for storage, specify this flag to use the same setting on restore. See docker/dtr install and docker/dtr reconfigure for usage details. |
| `--dtr-use-default-storage` | `$DTR_DEFAULT_STORAGE` | Mandatory flag to allow DTR to fall back to your configured storage backend at the time of backup. If cloud storage was configured, then the default storage on restore is cloud storage. Otherwise, local storage is used. With DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, this flag must be specified in order to keep your DTR metadata. If you encounter an issue with lost tags, see Restore to Cloud Storage for Docker's recommended recovery strategy. Upgrade to 2.6.4 and follow Best practice for data migration in 2.6.4 when switching storage backends. |
| `--nfs-storage-url` | `$NFS_STORAGE_URL` | Mandatory flag to allow DTR to fall back to your configured storage setting at the time of backup. When running DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, there is an issue with reconfiguring and restoring DTR with `--nfs-storage-url` which leads to erased tags. Make sure to back up your DTR metadata before you proceed. If NFS was previously configured, you have to manually create a storage volume on each DTR node and specify `--dtr-storage-volume` with the newly created volume instead. See Restore to a Local NFS Volume for more details. For additional NFS configuration options to support NFS v4, see docker/dtr install and docker/dtr reconfigure. Upgrade to 2.6.4 and follow Best practice for data migration in 2.6.4 when switching storage backends. |
| `--enable-pprof` | `$DTR_PPROF` | Enables pprof profiling of the server. Use `--enable-pprof=false` to disable it. Once DTR is deployed with this flag, you can access the pprof endpoint for the API server at `/debug/pprof`, and the registry endpoint at `/registry_debug_pprof/debug/pprof`. |
| `--help-extended` | `$DTR_EXTENDED_HELP` | Display extended help text for a given command. |
| `--http-proxy` | `$DTR_HTTP_PROXY` | The HTTP proxy used for outgoing requests. |
| `--https-proxy` | `$DTR_HTTPS_PROXY` | The HTTPS proxy used for outgoing requests. |
| `--log-host` | `$LOG_HOST` | The endpoint of the syslog system to send logs to. Use this flag if you set `--log-protocol` to `tcp` or `udp`. |
| `--log-level` | `$LOG_LEVEL` | Log level for all container logs when logging to syslog. Default: `INFO`. The supported log levels are `debug`, `info`, `warn`, `error`, or `fatal`. |
| `--log-protocol` | `$LOG_PROTOCOL` | The protocol for sending logs. Default is `internal`. By default, DTR internal components log information using the logger specified in the Docker daemon on the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are `tcp`, `udp`, and `internal`. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with `--log-host`. |
| `--no-proxy` | `$DTR_NO_PROXY` | List of domains the proxy should not be used for. When using `--http-proxy` you can use this flag to specify a list of domains that you don't want to route through the proxy. Format: `acme.com[, acme.org]`. |
| `--replica-http-port` | `$REPLICA_HTTP_PORT` | The public HTTP port for the DTR replica. Default is `80`. This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with `--replica-https-port`. This port can also be used for unencrypted health checks. |
| `--replica-https-port` | `$REPLICA_HTTPS_PORT` | The public HTTPS port for the DTR replica. Default is `443`. This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
| `--replica-id` | `$DTR_INSTALL_REPLICA_ID` | Assign a 12-character hexadecimal ID to the DTR replica. Random by default. |
| `--replica-rethinkdb-cache-mb` | `$RETHINKDB_CACHE_MB` | The maximum amount of space in MB for the RethinkDB in-memory cache used by the given replica. Default is `auto`. Auto is `(available_memory - 1024) / 2`. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one. |
| `--ucp-ca` | `$UCP_CA` | Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from `https://<ucp-url>/ca`, and use `--ucp-ca "$(cat ca.pem)"`. |
| `--ucp-insecure-tls` | `$UCP_INSECURE_TLS` | Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use `--ucp-ca "$(cat ca.pem)"` instead. |
| `--ucp-node` | `$UCP_NODE` | The hostname of the UCP node to deploy DTR. Random by default. You can find the hostnames of the nodes in the cluster in the UCP web interface, or by running `docker node ls` on a UCP manager node. |
| `--ucp-password` | `$UCP_PASSWORD` | The UCP administrator password. |
| `--ucp-url` | `$UCP_URL` | The UCP URL including domain and port. |
| `--ucp-username` | `$UCP_USERNAME` | The UCP administrator username. |

docker/dtr upgrade

Upgrade DTR 2.5.x cluster to this version

Usage
docker run -it --rm docker/dtr \
upgrade [command options]

Description
This command upgrades DTR 2.5.x to the current version of this image.
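A sketch of such an upgrade follows (the command is composed and printed, not executed; the replica ID and UCP URL are hypothetical placeholders):

```shell
# Hypothetical values for illustration only.
EXISTING_REPLICA="e8cc74c28f44"   # any healthy replica in the cluster

cmd="docker run -it --rm docker/dtr upgrade \
  --ucp-url https://ucp.example.com:443 \
  --ucp-username admin \
  --existing-replica-id $EXISTING_REPLICA"

echo "$cmd"
```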

Options
| Option | Environment Variable | Description |
|---|---|---|
| `--debug` | `$DEBUG` | Enable debug mode for additional logs. |
| `--existing-replica-id` | `$DTR_REPLICA_ID` | The ID of an existing DTR replica. To add, remove, or modify DTR, you must connect to an existing healthy replica's database. |
| `--help-extended` | `$DTR_EXTENDED_HELP` | Display extended help text for a given command. |
| `--ucp-ca` | `$UCP_CA` | Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from `https://<ucp-url>/ca`, and use `--ucp-ca "$(cat ca.pem)"`. |
| `--ucp-insecure-tls` | `$UCP_INSECURE_TLS` | Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use `--ucp-ca "$(cat ca.pem)"` instead. |
| `--ucp-password` | `$UCP_PASSWORD` | The UCP administrator password. |
| `--ucp-url` | `$UCP_URL` | The UCP URL including domain and port. |
| `--ucp-username` | `$UCP_USERNAME` | The UCP administrator username. |

UCP CLI
docker/ucp overview

This image has commands to install and manage Docker Universal Control Plane (UCP) on a
Docker Engine.

You can configure the commands using flags or environment variables. When using environment
variables, use the docker container run -e VARIABLE_NAME syntax to pass the value from your
shell, or docker container run -e VARIABLE_NAME=value to specify the value explicitly on the
command line.
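For example, the two invocations below are equivalent ways of supplying the administrator password to the install command; the password value is a placeholder, and the commands are only composed and printed here.

```shell
# Hypothetical: pass an already-exported variable from your shell...
export UCP_ADMIN_PASSWORD='s3cret-placeholder'
cmd_from_shell="docker container run -e UCP_ADMIN_PASSWORD --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install"

# ...or set the value explicitly on the command line.
cmd_explicit="docker container run -e UCP_ADMIN_PASSWORD=s3cret-placeholder --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install"

echo "$cmd_from_shell"
echo "$cmd_explicit"
```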
The container running this image needs to be named ucp and bind-mount the Docker daemon
socket. Below you can find an example of how to run this image.
Additional help is available for each command with the --help flag.

Usage
docker container run -it --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
command [command arguments]

Commands
| Command | Description |
|---|---|
| `backup` | Create a backup of a UCP manager node |
| `dump-certs` | Print the public certificates used by this UCP web server |
| `example-config` | Display an example configuration file for UCP |
| `help` | Shows a list of commands or help for one command |
| `id` | Print the ID of UCP running on this node |
| `images` | Verify the UCP images on this node |
| `install` | Install UCP on this node |
| `port-check-server` | Checks the ports on a node before a UCP installation |
| `restart` | Start or restart UCP components running on this node |
| `restore` | Restore a UCP cluster from a backup |
| `stop` | Stop UCP components running on this node |
| `support` | Create a support dump for this UCP node |
| `uninstall-ucp` | Uninstall UCP from this swarm |
| `upgrade` | Upgrade the UCP cluster |

docker/ucp dump-certs

Print the public certificates used by this UCP web server.

Usage
docker container run --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
dump-certs [command options]

Description
This command outputs the public certificates for the UCP web server running on this node. By
default, it prints the contents of the ca.pem and cert.pem files.
When integrating UCP and DTR, use this command with the --cluster and --ca flags to configure DTR.
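
For instance, to capture the cluster root CA for later use with DTR's `--ucp-ca` flag, a command could be composed as below (printed here rather than executed; the output filename is your choice):

```shell
# Printed rather than executed; redirect the real output to a file of your choosing.
cmd="docker container run --rm --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp dump-certs --cluster --ca > ucp-cluster-ca.pem"

echo "$cmd"
```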

Options
| Option | Description |
|---|---|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce JSON-formatted output for easier parsing |
| `--ca` | Only print the contents of the ca.pem file |
| `--cluster` | Print the internal UCP swarm root CA and cert instead of the public server cert |

docker/ucp example-config

Display an example configuration file for UCP.

Usage
docker container run --rm -i \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
example-config

docker/ucp id

Print the ID of UCP running on this node.

Usage
docker container run --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
id
Description
This command prints the ID of the UCP components running on this node. This ID matches what you
see when running the docker info command while using a client bundle.

This ID is used by other commands as confirmation.

Options
| Option | Description |
|---|---|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce JSON-formatted output for easier parsing |

docker/ucp images

Verify the UCP images on this node.

Usage
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
images [command options]

Description
This command checks the UCP images that are available in this node, and pulls the ones that are
missing.

Options
| Option | Description |
|---|---|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce JSON-formatted output for easier parsing |
| `--list` | List all images used by UCP but don't pull them |
| `--pull value` | Pull UCP images: always, when missing, or never |
| `--registry-password value` | Password to use when pulling images |
| `--registry-username value` | Username to use when pulling images |

docker/ucp install

Install UCP on a node

Usage
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
install [command options]

Description
This command initializes a new swarm, turns a node into a manager, and installs Docker Universal
Control Plane (UCP).

When installing UCP you can customize:

 The UCP web server certificates. Create a volume named ucp-controller-server-certs and
copy the ca.pem, cert.pem, and key.pem files to the root directory. Then run the install
command with the --external-server-cert flag.
 The license used by UCP, which you can accomplish by bind-mounting the file
at /config/docker_subscription.lic in the tool. For example, -v
/path/to/my/config/docker_subscription.lic:/config/docker_subscription.lic, or by
specifying the --license "$(cat license.lic)" option.

If you’re joining more nodes to this swarm, open the following ports in your firewall:

 443 or the --controller-port


 2376 or the --swarm-port
 12376, 12379, 12380, 12381, 12382, 12383, 12384, 12385, 12386, 12387
 4789 (udp) and 7946 (tcp/udp) for overlay networking

If you have SELinux policies enabled for your Docker install, you will need to use docker container
run --rm -it --security-opt label=disable ... when running this command.

If you are installing on Azure, see Install UCP on Azure.
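
Putting a few of these options together, a minimal install might be sketched like this (the command is composed and printed, not executed; the host address and SAN are hypothetical placeholders):

```shell
# Hypothetical values -- use your node's address and DNS names.
HOST_ADDRESS="192.0.2.10"
SAN="ucp.example.com"

cmd="docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install \
  --host-address $HOST_ADDRESS \
  --san $SAN \
  --interactive"

echo "$cmd"
```

With `--interactive`, the installer prompts for the administrator username and password instead of requiring them as flags.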

Options
| Option | Description |
|---|---|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce JSON-formatted output for easier parsing |
| `--interactive, -i` | Run in interactive mode and prompt for configuration values |
| `--admin-password value` | The UCP administrator password [$UCP_ADMIN_PASSWORD] |
| `--admin-username value` | The UCP administrator username [$UCP_ADMIN_USER] |
| `--binpack` | Set the Docker Swarm scheduler to binpack mode. Used for backwards compatibility |
| `--cloud-provider value` | The cloud provider for the cluster |
| `--cni-installer-url value` | A URL pointing to a Kubernetes YAML file to be used as an installer for the CNI plugin of the cluster. If specified, the default CNI plugin will not be installed. If the URL uses the HTTPS scheme, no certificate verification will be performed |
| `--controller-port value` | Port for the web UI and API (default: 443) |
| `--data-path-addr value` | Address or interface to use for data path traffic. Format: IP address or network interface name [$UCP_DATA_PATH_ADDR] |
| `--disable-tracking` | Disable anonymous tracking and analytics |
| `--disable-usage` | Disable anonymous usage reporting |
| `--dns-opt value` | Set DNS options for the UCP containers [$DNS_OPT] |
| `--dns-search value` | Set custom DNS search domains for the UCP containers [$DNS_SEARCH] |
| `--dns value` | Set custom DNS servers for the UCP containers [$DNS] |
| `--enable-profiling` | Enable performance profiling |
| `--existing-config` | Use the latest existing UCP config during this installation. The install will fail if a config is not found |
| `--external-server-cert` | Customize the certificates used by the UCP web server |
| `--external-service-lb value` | Set the IP address of the load balancer that published services are expected to be reachable on |
| `--force-insecure-tcp` | Force install to continue even with unauthenticated Docker Engine ports |
| `--force-minimums` | Force the install/upgrade even if the system does not meet the minimum requirements |
| `--host-address value` | The network address to advertise to other nodes. Format: IP address or network interface name [$UCP_HOST_ADDRESS] |
| `--iscsiadm-path value` | Path to the host iscsiadm binary. This option is applicable only when --storage-iscsi is specified |
| `--iscsidb-path value` | Path to the host iscsi DB. This option is applicable only when --storage-iscsi is specified |
| `--kube-apiserver-port value` | Port for the Kubernetes API server (default: 6443) |
| `--kv-snapshot-count value` | Number of changes between key-value store snapshots (default: 20000) [$KV_SNAPSHOT_COUNT] |
| `--kv-timeout value` | Timeout in milliseconds for the key-value store (default: 5000) [$KV_TIMEOUT] |
| `--license value` | Add a license: e.g. --license "$(cat license.lic)" [$UCP_LICENSE] |
| `--nodeport-range value` | Allowed port range for Kubernetes services of type NodePort (default: "32768-35535") |
| `--pod-cidr value` | Kubernetes cluster IP pool for the pods to allocate IPs from (default: "192.168.0.0/16") |
| `--preserve-certs` | Don't generate certificates if they already exist |
| `--pull value` | Pull UCP images: 'always', when 'missing', or 'never' (default: "missing") |
| `--random` | Set the Docker Swarm scheduler to random mode. Used for backwards compatibility |
| `--registry-password value` | Password to use when pulling images [$REGISTRY_PASSWORD] |
| `--registry-username value` | Username to use when pulling images [$REGISTRY_USERNAME] |
| `--san value` | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) [$UCP_HOSTNAMES] |
| `--service-cluster-ip-range value` | Kubernetes cluster IP range for services (default: "10.96.0.0/16") |
| `--skip-cloud-provider-check` | Disables checks which rely on detecting which (if any) cloud provider the cluster is currently running on |
| `--storage-expt-enabled` | Flag to enable experimental features in Kubernetes storage |
| `--storage-iscsi` | Enable iSCSI-based Persistent Volumes in Kubernetes |
| `--swarm-experimental` | Enable Docker Swarm experimental features. Used for backwards compatibility |
| `--swarm-grpc-port value` | Port for communication between nodes (default: 2377) |
| `--swarm-port value` | Port for the Docker Swarm manager. Used for backwards compatibility (default: 2376) |
| `--unlock-key value` | The unlock key for this swarm-mode cluster, if one exists [$UNLOCK_KEY] |
| `--unmanaged-cni` | Flag to indicate that the CNI provider is Calico and managed by UCP (Calico is the default CNI provider) |

docker/ucp port-check-server

Checks the suitability of the node for a UCP installation

Usage
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
port-check-server [command options]

Description
Checks the suitability of the node for a UCP installation.

Options
| Option | Description |
|---|---|
| `--listen-address, -l value` | Listen address (default: ":2376") |

docker/ucp restore

Restore a UCP cluster from a backup.

Usage
docker container run --rm -i \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
restore [command options] < backup.tar

Description
This command installs a new UCP cluster that is populated with the state of a previous UCP
manager node using a tar file generated by the backup command. All UCP settings, users, teams
and permissions will be restored from the backup file. The restore operation does not alter or
recover any containers, networks, volumes, or services of the underlying cluster.
The restore command can be performed on any manager node of an existing cluster. If the current
node does not belong to a cluster, one will be initialized using the value of the --host-address flag.
When restoring on an existing swarm-mode cluster, no previous UCP components must be running
on any node of the cluster. This cleanup can be performed with the uninstall-ucp command.

If the restore is performed on a different cluster than the one where the backup was taken, the
Cluster Root CA of the old UCP installation will not be restored. This invalidates any previously
issued admin client bundles, and all administrators will be required to download new client bundles
after the operation is completed. Any existing client bundles for non-admin users will still be fully
operational.

By default, the backup tar file is read from stdin. You can also bind-mount the backup file
under /config/backup.tar, and run the restore command with the --interactive flag.

Notes:

 Run uninstall-ucp before attempting the restore operation on an existing UCP cluster.
 If your swarm-mode cluster has lost quorum and the original set of managers is not
recoverable, you can attempt to recover a single-manager cluster with docker swarm init --
force-new-cluster.
 You can restore from a backup that was taken on a different manager node or a different
cluster altogether.
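
Combining the notes above, a restore from an encrypted backup file might be sketched as follows (the command is composed and printed, not executed; the passphrase and filename are hypothetical placeholders):

```shell
# Hypothetical values for illustration.
BACKUP_FILE="backup.tar"
PASSPHRASE="secret-placeholder"

# The backup tar is read from stdin, hence -i and the input redirection.
cmd="docker container run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp restore --passphrase $PASSPHRASE < $BACKUP_FILE"

echo "$cmd"
```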

Options
| Option | Description |
|---|---|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce JSON-formatted output for easier parsing |
| `--interactive, -i` | Run in interactive mode and prompt for configuration values |
| `--data-path-addr value` | Address or interface to use for data path traffic |
| `--force-minimums` | Force the install/upgrade even if the system does not meet the minimum requirements |
| `--host-address value` | The network address to advertise to other nodes. Format: IP address or network interface name |
| `--passphrase value` | Decrypt the backup tar file with the provided passphrase |
| `--san value` | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) |
| `--swarm-grpc-port value` | Port for communication between nodes (default: 2377) |
| `--unlock-key value` | The unlock key for this swarm-mode cluster, if one exists |

docker/ucp uninstall-ucp

Uninstall UCP from this swarm.

Usage
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
uninstall-ucp [command options]

Description
This command uninstalls UCP from the swarm, but preserves the swarm so that your applications
can continue running.

After UCP is uninstalled, you can use the docker swarm leave and docker node rm commands to
remove nodes from the swarm.

Once UCP is uninstalled, you won’t be able to join nodes to the swarm unless UCP is installed
again.

Options
| Option | Description |
|---|---|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce JSON-formatted output for easier parsing |
| `--interactive, -i` | Run in interactive mode and prompt for configuration values |
| `--id value` | The ID of the UCP instance to uninstall |
| `--pull value` | Pull UCP images: always, when missing, or never |
| `--purge-config` | Remove UCP configs during uninstallation |
| `--registry-password value` | Password to use when pulling images |
| `--registry-username value` | Username to use when pulling images |

docker/ucp upgrade

Upgrade the UCP cluster.


Usage
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
upgrade [command options]

Description
This command upgrades the UCP running on this cluster.

Before performing an upgrade, you should perform a backup by using the backup command.

After upgrading UCP, go to the UCP web interface and confirm each node is healthy and that all
nodes have been upgraded successfully.
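
A back-up-then-upgrade sequence could be sketched like this (both commands are composed and printed, not executed; the backup filename is a placeholder, and `--interactive` makes the upgrade prompt for credentials):

```shell
# Hypothetical sequence: back up first, then upgrade.
backup_cmd="docker container run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp backup > ucp-backup.tar"

upgrade_cmd="docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp upgrade --interactive"

echo "$backup_cmd"
echo "$upgrade_cmd"
```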

Options
| Option | Description |
|---|---|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce JSON-formatted output for easier parsing |
| `--interactive, -i` | Run in interactive mode and prompt for configuration values |
| `--admin-password value` | The UCP administrator password |
| `--admin-username value` | The UCP administrator username |
| `--force-minimums` | Force the install/upgrade even if the system does not meet the minimum requirements |
| `--host-address value` | Override the previously configured host address with this IP or network interface |
| `--id` | The ID of the UCP instance to upgrade |
| `--manual-worker-upgrade` | Whether to manually upgrade worker nodes. Defaults to false |
| `--pull` | Pull UCP images: always, when missing, or never |
| `--registry-password value` | Password to use when pulling images |
| `--registry-username value` | Username to use when pulling images |
