Docker CLI Reference Documentation
General form
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
The docker run command must specify an IMAGE to derive the container from. An image developer
can define image defaults related to:
detached or foreground running
clean up
network settings
runtime constraints on CPU and memory
With the docker run [OPTIONS] an operator can add to or override the image defaults set by a
developer. And, additionally, operators can override nearly all the defaults set by the Docker runtime
itself. The operator’s ability to override image and Docker runtime defaults is why run has more
options than any other docker command.
To learn how to interpret the types of [OPTIONS], see Option types.
Note: Depending on your Docker system configuration, you may be required to preface the docker
run command with sudo. To avoid having to use sudo with the docker command, your system
administrator can create a Unix group called docker and add users to it. For more information about
this configuration, refer to the Docker installation documentation for your operating system.
Operator exclusive options
Only the operator (the person executing docker run) can set the following options.
Detached vs foreground
  - Detached (-d)
  - Foreground
Container identification
  - Name (--name)
  - PID equivalent
IPC settings (--ipc)
Network settings
Restart policies (--restart)
Clean up (--rm)
Runtime constraints on resources
Runtime privilege and Linux capabilities
Detached vs foreground
When starting a Docker container, you must first decide if you want to run the container in the
background in a “detached” mode or in the default foreground mode:
-d=false: Detached mode: Run container in the background, print new container id
Detached (-d)
To start a container in detached mode, you use the -d=true or just the -d option. By design, containers
started in detached mode exit when the root process used to run the container exits, unless you also
specify the --rm option. If you use -d with --rm, the container is removed when it exits or when the
daemon exits, whichever happens first.
Do not pass a service x start command to a detached container. For example, this command
attempts to start the nginx service.
$ docker run -d -p 80:80 my_image service nginx start
This succeeds in starting the nginx service inside the container. However, it fails the detached
container paradigm in that the root process (service nginx start) returns and the detached
container stops as designed. As a result, the nginx service is started but cannot be used. Instead,
to start a process such as the nginx web server, do the following:
$ docker run -d -p 80:80 my_image nginx -g 'daemon off;'
To do input/output with a detached container use network connections or shared volumes. These
are required because the container is no longer listening to the command line where docker run was
run.
To reattach to a detached container, use the docker attach command.
Foreground
In foreground mode (the default when -d is not specified), docker run can start the process in the
container and attach the console to the process’s standard input, output, and standard error. It can
even pretend to be a TTY (this is what most command line executables expect) and pass along
signals. All of that is configurable:
-a=[] : Attach to `STDIN`, `STDOUT` and/or `STDERR`
-t : Allocate a pseudo-tty
--sig-proxy=true: Proxy all received signals to the process (non-TTY mode only)
-i : Keep STDIN open even if not attached
If you do not specify -a, then Docker will attach to both stdout and stderr. You can specify to which
of the three standard streams (STDIN, STDOUT, STDERR) you’d like to connect instead, as in:
$ docker run -a stdin -a stdout -i -t ubuntu /bin/bash
For interactive processes (like a shell), you must use -i -t together in order to allocate a tty for the
container process. -i -t is often written -it as you’ll see in later examples. Specifying -t is
forbidden when the client is receiving its standard input from a pipe, as in:
$ echo test | docker run -i busybox cat
Note: A process running as PID 1 inside a container is treated specially by Linux: it ignores any
signal with the default action. So, the process will not terminate on SIGINT or SIGTERM unless it is
coded to do so.
Container identification
Name (--name)
The operator can identify a container in three ways:
Identifier type          Example value
UUID long identifier     “f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778”
UUID short identifier    “f78375b1c487”
Name                     “evil_ptolemy”
The UUID identifiers come from the Docker daemon. If you do not assign a container name with
the --name option, then the daemon generates a random string name for you. Defining a name can be
a handy way to add meaning to a container. If you specify a name, you can use it when referencing
the container within a Docker network. This works for both background and foreground Docker
containers.
Note: Containers on the default bridge network must be linked to communicate by name.
PID equivalent
Finally, to help with automation, you can have Docker write the container ID out to a file of your
choosing. This is similar to how some programs might write out their process ID to a file (you’ve
seen them as PID files):
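The flag for this is --cidfile; a sketch of its use follows (the file path is just illustrative):
--cidfile="": Write the container ID to the file
$ docker run --cidfile /tmp/docker_test.cid ubuntu echo "test"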
Image[:tag]
While not strictly a means of identifying a container, you can specify a version of an image you’d like
to run the container with by adding image[:tag] to the command. For example, docker run
ubuntu:14.04.
Image[@digest]
Images using the v2 or later image format have a content-addressable identifier called a digest. As
long as the input used to generate the image is unchanged, the digest value is predictable and
referenceable.
The following example runs a container from the alpine image with
the sha256:9cacb71397b640eca97488cf08582ae4e4068513101088e9f96c9814bfda95e0 digest:
$ docker run
alpine@sha256:9cacb71397b640eca97488cf08582ae4e4068513101088e9f96c9814bfda95e0 date
PID settings (--pid)
The PID namespace provides separation of processes. The PID namespace removes the view of the
system processes, and allows process ids to be reused, including pid 1.
In certain cases you want your container to share the host’s process namespace, basically allowing
processes within the container to see all of the processes on the system. For example, you could
build a container with debugging tools like strace or gdb, but want to use these tools when
debugging processes within the container.
FROM alpine:latest
RUN apk add --update htop && rm -rf /var/cache/apk/*
CMD ["htop"]
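A sketch of building and running this image with the host’s PID namespace shared (the myhtop tag is
just an illustrative name):
$ docker build -t myhtop .
$ docker run -it --rm --pid=host myhtop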
Joining another container’s pid namespace can be used for debugging that container.
Example
Start a container running a redis server:
$ docker run --name my-redis -d redis
Debug the redis container by running another container that has strace in it:
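A sketch of such a debugging run, assuming my_strace_docker_image is an image that has strace
installed:
$ docker run -it --pid=container:my-redis my_strace_docker_image bash
$ strace -p 1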
UTS settings (--uts)
The UTS namespace is for setting the hostname and the domain that is visible to running processes
in that namespace. By default, all containers, including those with --network=host, have their own
UTS namespace. The host setting will result in the container using the same UTS namespace as the
host. Note that --hostname and --domainname are invalid in host UTS mode.
You may wish to share the UTS namespace with the host if you would like the hostname of the
container to change as the hostname of the host changes. A more advanced use case would be
changing the host’s hostname from a container.
IPC settings (--ipc)
Value                        Description
“container:<name-or-ID>”     Join another (“shareable”) container’s IPC namespace.
If not specified, the daemon default is used, which can either be "private" or "shareable", depending
on the daemon version and configuration.
IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments,
semaphores and message queues.
Shared memory segments are used to accelerate inter-process communication at memory speed,
rather than through pipes or through the network stack. Shared memory is commonly used by
databases and custom-built (typically C/OpenMPI, C++ using boost libraries) high performance
applications for scientific computing and financial services industries. If these types of applications
are broken into multiple containers, you might need to share the IPC mechanisms of the containers,
using "shareable" mode for the main (i.e. “donor”) container, and "container:<donor-name-or-
ID>" for other containers.
Network settings
--dns=[] : Set custom dns servers for the container
--network="bridge" : Connect a container to a network
'bridge': create a network stack on the default Docker bridge
'none': no networking
'container:<name|id>': reuse another container's network stack
'host': use the Docker host network stack
'<network-name>|<network-id>': connect to a user-defined
network
--network-alias=[] : Add network-scoped alias for the container
--add-host="" : Add a line to /etc/hosts (host:IP)
--mac-address="" : Sets the container's Ethernet device's MAC address
--ip="" : Sets the container's Ethernet device's IPv4 address
--ip6="" : Sets the container's Ethernet device's IPv6 address
--link-local-ip=[] : Sets one or more container's Ethernet device's link local
IPv4/IPv6 addresses
By default, all containers have networking enabled and they can make any outgoing connections.
The operator can completely disable networking with docker run --network none which disables all
incoming and outgoing networking. In cases like this, you would perform I/O through files
or STDIN and STDOUT only.
Publishing ports and linking to other containers only works with the default (bridge). The linking
feature is a legacy feature. You should always prefer using Docker network drivers over linking.
Your container will use the same DNS servers as the host by default, but you can override this
with --dns.
By default, the MAC address is generated using the IP address allocated to the container. You can
set the container’s MAC address explicitly by providing a MAC address via the --mac-address
parameter (format: 12:34:56:78:9a:bc). Be aware that Docker does not check if manually
specified MAC addresses are unique.
Supported networks:
Network                        Description
none                           No networking in the container.
bridge (default)               Connect the container to the bridge via veth interfaces.
host                           Use the host’s network stack inside the container.
container:<name|id>            Use the network stack of another container, specified via its name or id.
<network-name>|<network-id>    Connect to a user-defined network.
NETWORK: NONE
With the network set to none, a container will not have access to any external routes. The container will
still have a loopback interface enabled in the container but it does not have any routes to external
traffic.
NETWORK: BRIDGE
With the network set to bridge, a container will use Docker’s default networking setup. A bridge is
set up on the host, commonly named docker0, and a pair of veth interfaces will be created for the
container. One side of the veth pair will remain on the host attached to the bridge while the other
side of the pair will be placed inside the container’s namespaces in addition to
the loopback interface. An IP address will be allocated for containers on the bridge’s network and
traffic will be routed through this bridge to the container.
Containers can communicate via their IP addresses by default. To communicate by name, they must
be linked.
NETWORK: HOST
With the network set to host a container will share the host’s network stack and all interfaces from
the host will be available to the container. The container’s hostname will match the hostname on the
host system. Note that --mac-address is invalid in host netmode. Even in host network mode a
container has its own UTS namespace by default. As such --hostname and --domainname are
allowed in host network mode and will only change the hostname and domain name inside the
container. Similar to --hostname, the --add-host, --dns, --dns-search, and --dns-option options
can be used in host network mode. These options update /etc/hosts or /etc/resolv.conf inside the
container. No changes are made to /etc/hosts and /etc/resolv.conf on the host.
Compared to the default bridge mode, the host mode gives significantly better networking
performance since it uses the host’s native networking stack whereas the bridge has to go through
one level of virtualization through the docker daemon. It is recommended to run containers in this
mode when their networking performance is critical, for example, a production Load Balancer or a
High Performance Web Server.
Note: --network="host" gives the container full access to local system services such as D-bus and
is therefore considered insecure.
NETWORK: CONTAINER
With the network set to container a container will share the network stack of another container. The
other container’s name must be provided in the format of --network container:<name|id>. Note
that --add-host, --hostname, --dns, --dns-search, --dns-option and --mac-address are invalid
in container netmode, and --publish, --publish-all and --expose are also invalid
in container netmode.
Example: running a Redis container with Redis binding to localhost, then running the redis-cli
command and connecting to the Redis server over the localhost interface.
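A sketch of this, assuming example/redis and example/redis-cli are images providing the Redis
server and client:
$ docker run -d --name redis example/redis --bind 127.0.0.1
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1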
USER-DEFINED NETWORK
You can create a network using a Docker network driver or an external network driver plugin. You
can connect multiple containers to the same network. Once connected to a user-defined network,
the containers can communicate easily using only another container’s IP address or name.
For overlay networks or custom plugins that support multi-host connectivity, containers connected to
the same multi-host network but launched from different Engines can also communicate in this way.
The following example creates a network using the built-in bridge network driver and runs a
container in the created network:
$ docker network create -d bridge my-net
$ docker run --network=my-net -itd --name=container3 busybox
Managing /etc/hosts
Your container will have lines in /etc/hosts which define the hostname of the container itself as well
as localhost and a few other common things. The --add-host flag can be used to add additional
lines to /etc/hosts.
$ docker run -it --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts
172.17.0.22 09d03f76bf2c
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
86.75.30.9 db-static
If a container is connected to the default bridge network and linked with other containers, then the
container’s /etc/hosts file is updated with the linked container’s name.
Note: Since Docker may live update the container’s /etc/hosts file, there may be situations when
processes inside the container can end up reading an empty or incomplete /etc/hosts file. In most
cases, retrying the read again should fix the problem.
Restart policies (--restart)
Policy                      Result
no                          Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]    Restart only if the container exits with a non-zero exit status. Optionally, limit
                            the number of restart retries the Docker daemon attempts.
always                      Always restart the container regardless of the exit status. When you specify
                            always, the Docker daemon will try to restart the container indefinitely. The
                            container will also always start on daemon startup, regardless of the current
                            state of the container.
An ever-increasing delay (double the previous delay, starting at 100 milliseconds) is added before
each restart to prevent flooding the server. This means the daemon will wait for 100 ms, then 200
ms, then 400, 800, 1600, and so on until either the on-failure limit is hit, or until you docker
stop or docker rm -f the container.
If a container is successfully restarted (the container is started and runs for at least 10 seconds), the
delay is reset to its default value of 100 ms.
You can specify the maximum number of times Docker will try to restart the container when using
the on-failure policy. The default is that Docker will try forever to restart the container. The number
of (attempted) restarts for a container can be obtained via docker inspect. For example, to get the
number of restarts for container “my-container”:
$ docker inspect -f "{{ .RestartCount }}" my-container
# 2
Combining --restart (restart policy) with the --rm (clean up) flag results in an error. On container
restart, attached clients are disconnected. See the examples on using the --rm (clean up) flag later in
this page.
Examples
$ docker run --restart=always redis
This will run the redis container with a restart policy of always so that if the container exits, Docker
will restart it.
$ docker run --restart=on-failure:10 redis
This will run the redis container with a restart policy of on-failure and a maximum restart count of
10. If the redis container exits with a non-zero exit status more than 10 times in a row Docker will
abort trying to restart the container. Providing a maximum restart limit is only valid for the on-
failure policy.
Exit Status
The exit code from docker run gives information about why the container failed to run or why it
exited. When docker run exits with a non-zero code, the exit codes follow the chroot standard:
125 if the error is with the Docker daemon itself
126 if the contained command cannot be invoked
127 if the contained command cannot be found
Otherwise, the exit code of the contained command is returned.
Clean up (--rm)
By default a container’s file system persists even after the container exits. This makes debugging a
lot easier (since you can inspect the final state) and you retain all your data by default. But if you are
running short-term foreground processes, these container file systems can really pile up. If instead
you’d like Docker to automatically clean up the container and remove the file system when the
container exits, you can add the --rm flag:
--rm=false: Automatically remove the container when it exits
Note: When you set the --rm flag, Docker also removes the anonymous volumes associated with the
container when the container is removed. This is similar to running docker rm -v my-container.
Only volumes that are specified without a name are removed. For example, with docker run --rm
-v /foo -v awesome:/bar busybox top, the volume for /foo will be removed, but the volume
for /bar will not. Volumes inherited via --volumes-from will be removed with the same logic -- if the
original volume was specified with a name it will not be removed.
Security configuration
--security-opt="label=user:USER" : Set the label user for the container
--security-opt="label=role:ROLE" : Set the label role for the container
--security-opt="label=type:TYPE" : Set the label type for the container
--security-opt="label=level:LEVEL" : Set the label level for the container
--security-opt="label=disable" : Turn off label confinement for the container
--security-opt="apparmor=PROFILE" : Set the apparmor profile to be applied to the
container
--security-opt="no-new-privileges:true|false" : Disable/enable container processes
from gaining new privileges
--security-opt="seccomp=unconfined" : Turn off seccomp confinement for the container
--security-opt="seccomp=profile.json": White listed syscalls seccomp Json file to be
used as a seccomp filter
You can override the default labeling scheme for each container by specifying the --security-
opt flag. Specifying the level in the following command allows you to share the same content
between containers.
$ docker run --security-opt label=level:s0:c100,c200 -it fedora bash
To disable the security labeling for this container versus running with the --privileged flag, use the
following command:
$ docker run --security-opt label=disable -it fedora bash
If you want a tighter security policy on the processes within a container, you can specify an alternate
type for the container. You could run a container that is only allowed to listen on Apache ports by
executing the following command:
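A sketch of such a command, assuming an SELinux policy that provides an svirt_apache_t type:
$ docker run --security-opt label=type:svirt_apache_t -it fedora bash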
If you want to prevent your container processes from gaining additional privileges, you can execute
the following command:
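A sketch using the no-new-privileges option listed above:
$ docker run --security-opt no-new-privileges -it fedora bash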
Runtime constraints on resources
Option                        Description
-c, --cpu-shares=0            CPU shares (relative weight)
--cpuset-cpus=""              CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems=""              Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only
                              effective on NUMA systems.
--cpu-rt-period=0             Limit the CPU real-time period. In microseconds. Requires parent cgroups
                              be set and cannot be higher than parent. Also check rtprio ulimits.
--cpu-rt-runtime=0            Limit the CPU real-time runtime. In microseconds. Requires parent cgroups
                              be set and cannot be higher than parent. Also check rtprio ulimits.
--blkio-weight=0              Block IO weight (relative weight) accepts a weight value between 10 and
                              1000.
--blkio-weight-device=""      Block IO weight (relative device weight, format: DEVICE_NAME:WEIGHT)
--device-read-iops=""         Limit read rate (IO per second) from a device (format: <device-path>:<number>).
                              Number is a positive integer.
--device-write-iops=""        Limit write rate (IO per second) to a device (format: <device-path>:<number>).
                              Number is a positive integer.
--oom-kill-disable=false      Whether to disable OOM Killer for the container or not.
--oom-score-adj=0             Tune container’s OOM preferences (-1000 to 1000)
--shm-size=""                 Size of /dev/shm. The format is <number><unit>. number must be greater
                              than 0. Unit is optional and can be b (bytes), k (kilobytes), m (megabytes),
                              or g (gigabytes). If you omit the unit, the system uses bytes. If you omit the
                              size entirely, the system uses 64m.
User memory constraints
Option                                     Result
memory=inf, memory-swap=inf (default)      There is no memory limit for the container. The container can use
                                           as much memory as needed.
Examples:
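A sketch with no memory flags at all (using the ubuntu:14.04 image from the other examples in this
section):
$ docker run -it ubuntu:14.04 /bin/bash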
We set nothing about memory. This means the processes in the container can use as much memory
and swap memory as they need.
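A sketch with the memory limit set and the swap limit lifted (-1 means unlimited swap):
$ docker run -it -m 300M --memory-swap -1 ubuntu:14.04 /bin/bash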
We set the memory limit and disabled the swap memory limit. This means the processes in the container
can use 300M memory and as much swap memory as they need (if the host supports swap memory).
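A sketch with only the memory limit set:
$ docker run -it -m 300M ubuntu:14.04 /bin/bash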
We set the memory limit only. This means the processes in the container can use 300M memory and
300M swap memory. By default, the total virtual memory size (--memory-swap) will be set to double
the memory limit; in this case, memory + swap would be 2*300M, so processes can use 300M swap
memory as well.
$ docker run -it -m 300M --memory-swap 1G ubuntu:14.04 /bin/bash
We set both memory and swap memory, so the processes in the container can use 300M memory
and 700M swap memory.
Memory reservation is a kind of memory soft limit that allows for greater sharing of memory. Under
normal circumstances, containers can use as much of the memory as needed and are constrained
only by the hard limits set with the -m/--memory option. When memory reservation is set, Docker
detects memory contention or low memory and forces containers to restrict their consumption to a
reservation limit.
Always set the memory reservation value below the hard limit, otherwise the hard limit takes
precedence. A reservation of 0 is the same as setting no reservation. By default (without reservation
set), memory reservation is the same as the hard memory limit.
Memory reservation is a soft-limit feature and does not guarantee the limit won’t be exceeded.
Instead, the feature attempts to ensure that, when memory is heavily contended for, memory is
allocated based on the reservation hints/setup.
The following example limits the memory (-m) to 500M and sets the memory reservation to 200M.
$ docker run -it -m 500M --memory-reservation 200M ubuntu:14.04 /bin/bash
Under this configuration, when the container consumes memory more than 200M and less than
500M, the next system memory reclaim attempts to shrink container memory below 200M.
The following example sets memory reservation to 1G without a hard memory limit.
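A sketch of such a command:
$ docker run -it --memory-reservation 1G ubuntu:14.04 /bin/bash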
The container can use as much memory as it needs. The memory reservation setting ensures the
container doesn’t consume too much memory for a long time, because every memory reclaim shrinks
the container’s consumption to the reservation.
By default, the kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change
this behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where
you have also set the -m/--memory option. If the -m flag is not set, this can result in the host running
out of memory and require killing the host’s system processes to free memory.
The following example limits the memory to 100M and disables the OOM killer for this container:
$ docker run -it -m 100M --oom-kill-disable ubuntu:14.04 /bin/bash
Without a memory limit set, a container with the OOM killer disabled has unlimited memory, which can
cause the host to run out of memory and require killing system processes to free memory.
The --oom-score-adj parameter can be changed to select the priority of which containers will be killed
when the system is out of memory, with negative scores making them less likely to be killed, and
positive scores more likely.
Kernel memory constraints
Kernel memory is fundamentally different from user memory, because kernel memory can’t be
swapped out. Kernel memory includes:
stack pages
slab pages
sockets memory pressure
tcp memory pressure
You can set up a kernel memory limit to constrain these kinds of memory. For example, every process
consumes some stack pages. By limiting kernel memory, you can prevent new processes from being
created when the kernel memory usage is too high.
Kernel memory is never completely independent of user memory. Instead, you limit kernel memory
in the context of the user memory limit. Assume “U” is the user memory limit and “K” the kernel limit.
There are three possible ways to set limits:
Option                             Result
U != 0, K = unlimited (default)    This is the standard memory limitation mechanism already present
                                   before using kernel memory. Kernel memory is completely ignored.
U != 0, K < U                      Kernel memory is a subset of the user memory. This setup is useful in
                                   deployments where the total amount of memory per-cgroup is overcommitted.
U != 0, K > U                      Since kernel memory charges are also fed to the user counter, reclamation
                                   is triggered for the container for both kinds of memory. This configuration gives
                                   the admin a unified view of memory. It is also useful for people who just want to
                                   track kernel memory usage.
Examples:
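A sketch limiting both user and kernel memory:
$ docker run -it -m 500M --kernel-memory 50M ubuntu:14.04 /bin/bash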
We set memory and kernel memory, so the processes in the container can use 500M memory in
total; within this 500M, at most 50M can be kernel memory.
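And a sketch with kernel memory limited but no -m limit:
$ docker run -it --kernel-memory 50M ubuntu:14.04 /bin/bash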
We set kernel memory without -m, so the processes in the container can use as much memory as
they want, but they can only use 50M kernel memory.
Swappiness constraint
By default, a container’s kernel can swap out a percentage of anonymous pages. To set this
percentage for a container, specify a --memory-swappiness value between 0 and 100. A value of 0
turns off anonymous page swapping. A value of 100 sets all anonymous pages as swappable. By
default, if you are not using --memory-swappiness, the memory swappiness value is inherited from
the parent.
Setting the --memory-swappiness option is helpful when you want to retain the container’s working
set and to avoid swapping performance penalties.
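For example, a sketch turning off anonymous page swapping entirely:
$ docker run -it --memory-swappiness=0 ubuntu:14.04 /bin/bash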
CPU share constraint
By default, all containers get the same proportion of CPU cycles. This proportion can be modified by
changing the container’s CPU share weighting relative to the weighting of all other running
containers.
To modify the proportion from the default of 1024, use the -c or --cpu-shares flag to set the
weighting to 2 or higher. If 0 is set, the system will ignore the value and use the default of 1024.
The proportion will only apply when CPU-intensive processes are running. When tasks in one
container are idle, other containers can use the left-over CPU time. The actual amount of CPU time
will vary depending on the number of containers running on the system.
For example, consider three containers, one has a cpu-share of 1024 and two others have a cpu-
share setting of 512. When processes in all three containers attempt to use 100% of CPU, the first
container would receive 50% of the total CPU time. If you add a fourth container with a cpu-share of
1024, the first container only gets 33% of the CPU. The remaining containers receive 16.5%, 16.5%
and 33% of the CPU.
On a multi-core system, the shares of CPU time are distributed over all CPU cores. Even if a
container is limited to less than 100% of CPU time, it can use 100% of each individual CPU core.
For example, consider a system with more than three cores. If you start one container {C0} with
-c=512 running one process, and another container {C1} with -c=1024 running two processes, this can
result in the following division of CPU shares:
PID container CPU CPU share
100 {C0} 0 100% of CPU0
101 {C1} 1 100% of CPU1
102 {C1} 2 100% of CPU2
Examples:
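For example, a sketch limiting the container to 50% of one CPU via period and quota:
$ docker run -it --cpu-period=50000 --cpu-quota=25000 ubuntu:14.04 /bin/bash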
In addition to using --cpu-period and --cpu-quota for setting CPU period constraints, it is possible to
specify --cpus with a float number to achieve the same purpose. For example, if there is 1 CPU,
then --cpus=0.5 will achieve the same result as setting --cpu-period=50000 and --cpu-
quota=25000 (50% CPU).
The default value for --cpus is 0.000, which means there is no limit.
Cpuset constraint
We can set cpus in which to allow execution for containers.
Examples:
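For example, a sketch restricting execution to the first three CPUs:
$ docker run -it --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash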
This means processes in the container can be executed on cpu 0, cpu 1 and cpu 2.
We can set mems in which to allow execution for containers. Only effective on NUMA systems.
Examples:
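For example, a sketch pinning the container to memory nodes 1 and 3:
$ docker run -it --cpuset-mems="1,3" ubuntu:14.04 /bin/bash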
This example restricts the processes in the container to only use memory from memory nodes 1 and
3.
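And a sketch pinning the container to memory nodes 0, 1 and 2:
$ docker run -it --cpuset-mems="0-2" ubuntu:14.04 /bin/bash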
This example restricts the processes in the container to only use memory from memory nodes 0, 1
and 2.
The --blkio-weight flag can set the weighting to a value between 10 and 1000. For example, the
commands below create two containers with different blkio weight:
$ docker run -it --name c1 --blkio-weight 300 ubuntu:14.04 /bin/bash
$ docker run -it --name c2 --blkio-weight 600 ubuntu:14.04 /bin/bash
If you do block IO in the two containers at the same time, for example:
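A sketch of such a test, run inside each container (the input file path is just illustrative; oflag=direct
bypasses the page cache so the blkio weights apply):
$ time dd if=/mnt/zerofile of=test.out bs=1M count=1024 oflag=direct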
You’ll find that the proportion of time is the same as the proportion of blkio weights of the two
containers.
If you specify both the --blkio-weight and --blkio-weight-device, Docker uses the --blkio-
weight as the default weight and uses --blkio-weight-device to override this default with a new
value on a specific device. The following example uses a default weight of 300 and overrides this
default on /dev/sda setting that weight to 200:
$ docker run -it \
--blkio-weight 300 \
--blkio-weight-device "/dev/sda:200" \
ubuntu
The --device-read-bps flag limits the read rate (bytes per second) from a device. For example, this
command creates a container and limits the read rate to 1mb per second from /dev/sda:
$ docker run -it --device-read-bps /dev/sda:1mb ubuntu
The --device-write-bps flag limits the write rate (bytes per second) to a device. For example, this
command creates a container and limits the write rate to 1mb per second for /dev/sda:
$ docker run -it --device-write-bps /dev/sda:1mb ubuntu
Both flags take limits in the <device-path>:<limit>[unit] format. Both read and write rates must be
a positive integer. You can specify the rate in kb (kilobytes), mb (megabytes), or gb (gigabytes).
The --device-read-iops flag limits read rate (IO per second) from a device. For example, this
command creates a container and limits the read rate to 1000 IO per second from /dev/sda:
$ docker run -ti --device-read-iops /dev/sda:1000 ubuntu
The --device-write-iops flag limits write rate (IO per second) to a device. For example, this
command creates a container and limits the write rate to 1000 IO per second to /dev/sda:
$ docker run -ti --device-write-iops /dev/sda:1000 ubuntu
Both flags take limits in the <device-path>:<limit> format. Both read and write rates must be a
positive integer.
Additional groups
--group-add: Add additional groups to run as
By default, the docker container process runs with the supplementary groups looked up for the
specified user. If one wants to add more to that list of groups, then one can use this flag:
$ docker run --rm --group-add audio --group-add nogroup --group-add 777 busybox id
uid=0(root) gid=0(root) groups=10(wheel),29(audio),99(nogroup),777
Runtime privilege and Linux capabilities
By default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon
inside a Docker container. This is because by default a container is not allowed to access any
devices, but a “privileged” container is given access to all devices (see the documentation
on cgroups devices).
When the operator executes docker run --privileged, Docker will enable access to all devices on
the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all
the same access to the host as processes running outside containers on the host. Additional
information about running with --privileged is available on the Docker Blog.
If you want to limit access to a specific device or devices you can use the --device flag. It allows you
to specify one or more devices that will be accessible within the container.
$ docker run --device=/dev/snd:/dev/snd ...
By default, the container will be able to read, write, and mknod these devices. This can be overridden
using a third :rwm set of options to each --device flag:
$ docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc
Docker grants a default set of capabilities, which can be dropped with --cap-drop; for example:
CHOWN                 Make arbitrary changes to file UIDs and GIDs (see chown(2)).
The next table shows the capabilities which are not granted by default and may be added.
Capability            Description
SYS_NICE              Raise process nice value (nice(2), setpriority(2)) and change the nice
                      value for arbitrary processes.
DAC_READ_SEARCH       Bypass file read permission checks and directory read and execute
                      permission checks.
SYS_BOOT              Use reboot(2) and kexec_load(2), reboot and load a new kernel for
                      later execution.
Both the --cap-add and --cap-drop flags support the value ALL, so if the operator wants to have all
capabilities but MKNOD, they could use:
$ docker run --cap-add=ALL --cap-drop=MKNOD ...
For interacting with the network stack, instead of using --privileged they should use --cap-
add=NET_ADMIN to modify the network interfaces.
$ docker run -it --rm ubuntu:14.04 ip link add dummy0 type dummy
RTNETLINK answers: Operation not permitted
$ docker run -it --rm --cap-add=NET_ADMIN ubuntu:14.04 ip link add dummy0 type dummy
To mount a FUSE based filesystem, you need to combine both --cap-add and --device:
$ docker run --rm -it --cap-add SYS_ADMIN sshfs sshfs sven@10.10.10.20:/home/sven
/mnt
fuse: failed to open /dev/fuse: Operation not permitted
$ docker run --rm -it --device /dev/fuse sshfs sshfs sven@10.10.10.20:/home/sven /mnt
fusermount: mount failed: Operation not permitted
$ docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs
# sshfs sven@10.10.10.20:/home/sven /mnt
The authenticity of host '10.10.10.20 (10.10.10.20)' can't be established.
ECDSA key fingerprint is 25:34:85:75:25:b0:17:46:05:19:04:93:b5:dd:5f:c6.
Are you sure you want to continue connecting (yes/no)? yes
sven@10.10.10.20's password:
root@30aa0cfaf1b5:/# ls -la /mnt/src/docker
total 1516
drwxrwxr-x 1 1000 1000 4096 Dec 4 06:08 .
drwxrwxr-x 1 1000 1000 4096 Dec 4 11:46 ..
-rw-rw-r-- 1 1000 1000 16 Oct 8 00:09 .dockerignore
-rwxrwxr-x 1 1000 1000 464 Oct 8 00:09 .drone.yml
drwxrwxr-x 1 1000 1000 4096 Dec 4 06:11 .git
-rw-rw-r-- 1 1000 1000 461 Dec 4 06:08 .gitignore
....
The default seccomp profile will adjust to the selected capabilities, in order to allow use of facilities
allowed by the capabilities, so you should not have to adjust this, since Docker 1.12. In Docker 1.10
and 1.11 this did not happen and it may be necessary to use a custom seccomp profile or use --
security-opt seccomp=unconfined when adding capabilities.
Logging drivers (--log-driver)
Driver       Description
none         Disables any logging for the container. docker logs won’t be available with this
             driver.
json-file    Default logging driver for Docker. Writes JSON messages to file. No logging
             options are supported for this driver.
syslog       Syslog logging driver for Docker. Writes log messages to syslog.
journald     Journald logging driver for Docker. Writes log messages to journald.
gelf         Graylog Extended Log Format (GELF) logging driver for Docker. Writes log
             messages to a GELF endpoint like Graylog or Logstash.
fluentd      Fluentd logging driver for Docker. Writes log messages to fluentd (forward input).
awslogs      Amazon CloudWatch Logs logging driver for Docker. Writes log messages to
             Amazon CloudWatch Logs.
splunk       Splunk logging driver for Docker. Writes log messages to splunk using Event Http
             Collector.
The docker logs command is available only for the json-file and journald logging drivers. For
detailed information on working with logging drivers, see Configure logging drivers.
Overriding Dockerfile image defaults
When a developer builds an image from a Dockerfile or when she commits it, the developer can set
a number of default parameters that take effect when the image starts up as a container.
Four of the Dockerfile commands cannot be overridden at runtime: FROM, MAINTAINER, RUN, and ADD.
Everything else has a corresponding override in docker run. We’ll go through what the developer
might have set in each Dockerfile instruction and how the operator can override that setting.
This command is optional because the person who created the IMAGE may have already provided a
default COMMAND using the Dockerfile CMD instruction. As the operator (the person running a container
from the image), you can override that CMD instruction just by specifying a new COMMAND.
If the image also specifies an ENTRYPOINT then the CMD or COMMAND get appended as arguments to
the ENTRYPOINT.
The ENTRYPOINT of an image is similar to a COMMAND because it specifies what executable to run when
the container starts, but it is (purposely) more difficult to override. The ENTRYPOINT gives a container
its default nature or behavior, so that when you set an ENTRYPOINT you can run the container as if it
were that binary, complete with default options, and you can pass in more options via the COMMAND.
But, sometimes an operator may want to run something else inside the container, so you can
override the default ENTRYPOINT at runtime by using a string to specify the new ENTRYPOINT. Here is
an example of how to run a shell in a container that has been set up to automatically run something
else (like /usr/bin/redis-server):
$ docker run -it --entrypoint /bin/bash example/redis
You can reset a container’s entrypoint by passing an empty string, for example:
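A sketch of such a command (mysql here is just an illustrative image whose entrypoint is bypassed):
$ docker run -it --entrypoint="" mysql bash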
Note: Passing --entrypoint will clear out any default command set on the image (i.e.
any CMD instruction in the Dockerfile used to build it).
With the exception of the EXPOSE directive, an image developer hasn’t got much control over
networking. The EXPOSE instruction defines the initial incoming ports that provide services. These
ports are available to processes inside the container. An operator can use the --expose option to
add to the exposed ports.
To expose a container’s internal port, an operator can start the container with the -P or -p flag. The
exposed port is accessible on the host and the ports are available to any client that can reach the
host.
The -P option publishes all the ports to the host interfaces. Docker binds each exposed port to a
random port on the host. The range of ports are within an ephemeral port range defined
by /proc/sys/net/ipv4/ip_local_port_range. Use the -p flag to explicitly map a single port or range
of ports.
The port number inside the container (where the service listens) does not need to match the port
number exposed on the outside of the container (where clients connect). For example, inside the
container an HTTP service is listening on port 80 (and so the image developer specifies EXPOSE 80 in
the Dockerfile). At runtime, the port might be bound to 42800 on the host. To find the mapping
between the host ports and the exposed ports, use docker port.
If the operator uses --link when starting a new client container in the default bridge network, then
the client container can access the exposed port via a private networking interface. If --link is used
when starting a container in a user-defined network as described in Networking overview, it will
provide a named alias for the container being linked to.
Docker automatically sets some environment variables when creating a Linux container. Docker does
not set any environment variables when creating a Windows container.
Variable     Value
HOME         Set based on the value of USER
HOSTNAME     The hostname associated with the container
PATH         Includes popular directories, such as /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM         xterm if the container is allocated a pseudo-TTY
Additionally, the operator can set any environment variable in the container by using one or
more -e flags, even overriding those mentioned above, or already defined by the developer with a
Dockerfile ENV. If the operator names an environment variable without specifying a value, then the
current value of the named variable is propagated into the container’s environment:
$ export today=Wednesday
$ docker run -e "deep=purple" -e today --rm alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d2219b854598
deep=purple
today=Wednesday
HOME=/root
PS C:\> docker run --rm -e "foo=bar" microsoft/nanoserver cmd /s /c set
ALLUSERSPROFILE=C:\ProgramData
APPDATA=C:\Users\ContainerAdministrator\AppData\Roaming
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramW6432=C:\Program Files\Common Files
COMPUTERNAME=C2FAEFCC8253
ComSpec=C:\Windows\system32\cmd.exe
foo=bar
LOCALAPPDATA=C:\Users\ContainerAdministrator\AppData\Local
NUMBER_OF_PROCESSORS=8
OS=Windows_NT
Path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\Wind
owsPowerShell\v1.0\;C:\Users\ContainerAdministrator\AppData\Local\Microsoft\WindowsAp
ps
PATHEXT=.COM;.EXE;.BAT;.CMD
PROCESSOR_ARCHITECTURE=AMD64
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 62 Stepping 4, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=3e04
ProgramData=C:\ProgramData
ProgramFiles=C:\Program Files
ProgramFiles(x86)=C:\Program Files (x86)
ProgramW6432=C:\Program Files
PROMPT=$P$G
PUBLIC=C:\Users\Public
SystemDrive=C:
SystemRoot=C:\Windows
TEMP=C:\Users\ContainerAdministrator\AppData\Local\Temp
TMP=C:\Users\ContainerAdministrator\AppData\Local\Temp
USERDOMAIN=User Manager
USERNAME=ContainerAdministrator
USERPROFILE=C:\Users\ContainerAdministrator
windir=C:\Windows
Similarly the operator can set the HOSTNAME (Linux) or COMPUTERNAME (Windows) with -h.
HEALTHCHECK
--health-cmd Command to run to check health
--health-interval Time between running the check
--health-retries Consecutive failures needed to report unhealthy
--health-timeout Maximum time to allow one check to run
--health-start-period Start period for the container to initialize before
starting health-retries countdown
--no-healthcheck Disable any container-specified HEALTHCHECK
Example:
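A sketch of a run with a simple health check (the check command and intervals are just illustrative):
$ docker run --name=test -d \
    --health-cmd='stat /etc/passwd || exit 1' \
    --health-interval=2s \
    busybox sleep 1d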
The example below mounts an empty tmpfs into the container with the rw, noexec, nosuid,
and size=65536k options.
$ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image
The `nocopy` mode is used to disable automatically copying the requested volume
path in the container to the volume storage location.
For named volumes, `copy` is the default mode. Copy modes are not supported
for bind-mounted volumes.
--volumes-from="": Mount all volumes from the given container(s)
Note: When using systemd to manage the Docker daemon’s start and stop, in the systemd unit file
there is an option to control mount propagation for the Docker daemon itself, called MountFlags. The
value of this setting may cause Docker to not see mount propagation changes made on the mount
point. For example, if this value is slave, you may not be able to use
the shared or rshared propagation on a volume.
The volumes commands are complex enough to have their own documentation in section Use
volumes. A developer can define one or more VOLUME’s associated with an image, but only the
operator can give access from one container to another (or from a container to a volume mounted on
the host).
The container-dest must always be an absolute path such as /src/docs. The host-srccan either
be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-
mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
A name value must start with an alphanumeric character, followed by a-z0-
9, _(underscore), . (period) or - (hyphen). An absolute path starts with a / (forward slash).
For example, you can specify either /foo or foo for a host-src value. If you supply the /foo value,
Docker creates a bind mount. If you supply the foo specification, Docker creates a named volume.
USER
root (id = 0) is the default user within a container. The image developer can create additional users.
Those users are accessible by name. When passing a numeric ID, the user does not have to exist in
the container.
The developer can set a default user to run the first process with the Dockerfile USER instruction.
When starting a container, the operator can override the USER instruction by passing the -u option.
-u="", --user="": Sets the username or UID used and optionally the groupname or GID
for the specified command.
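For example, a sketch running a container as UID/GID 1000:
$ docker run --rm -u 1000:1000 busybox id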
WORKDIR
The default working directory for running binaries within a container is the root directory (/), but the
developer can set a different default with the Dockerfile WORKDIR command. The operator can
override this with:
-w="": Working directory inside the container
docker
To list available commands, either run docker with no parameters or execute docker help:
$ docker
Usage: docker [OPTIONS] COMMAND [ARG...]
docker [ --help | -v | --version ]
Options:
--config string Location of client config files (default "/root/.docker")
-c, --context string Name of the context to use to connect to the daemon
(overrides DOCKER_HOST env var and default context set with "docker context use")
-D, --debug Enable debug mode
--help Print usage
-H, --host value Daemon socket(s) to connect to (default [])
-l, --log-level string Set the logging level
("debug"|"info"|"warn"|"error"|"fatal") (default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default
"/root/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default
"/root/.docker/cert.pem")
--tlskey string Path to TLS key file (default "/root/.docker/key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
Commands:
attach Attach to a running container
# […]
Description
Depending on your Docker system configuration, you may be required to preface
each docker command with sudo. To avoid having to use sudo with the docker command, your
system administrator can create a Unix group called docker and add users to it.
For more information about installing Docker or sudo configuration, refer to the installation instructions
for your operating system.
Environment variables
For easy reference, the following list of environment variables is supported by the docker command
line:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
These Go environment variables are case-insensitive. See the Go specification for details on these
variables.
Configuration files
By default, the Docker command line stores its configuration files in a directory called .docker within
your $HOME directory. However, you can specify a different location via
the DOCKER_CONFIG environment variable or the --config command line option. If both are specified,
then the --config option overrides the DOCKER_CONFIG environment variable. For example:
docker --config ~/testconfigs/ ps
Instructs Docker to use the configuration files in your ~/testconfigs/ directory when running
the ps command.
Docker manages most of the files in the configuration directory and you should not modify them.
However, you can modify the config.json file to control certain aspects of how the docker command
behaves.
Currently, you can modify the docker command behavior using environment variables or command-
line options. You can also use options within config.json to modify some of the same behavior.
When using these mechanisms, you must keep in mind the order of precedence among them.
Command line options override environment variables and environment variables override properties
you specify in a config.json file.
The config.json file stores a JSON encoding of several properties:
The property HttpHeaders specifies a set of headers to include in all messages sent from the Docker
client to the daemon. Docker does not try to interpret or understand these headers; it simply puts
them into the messages. Docker does not allow these headers to change any headers it sets for
itself.
The property psFormat specifies the default format for docker ps output. When the --format flag is
not provided with the docker ps command, Docker’s client uses this property. If this property is not
set, the client falls back to the default table format. For a list of supported formatting directives, see
the Formatting section in the docker ps documentation
The property imagesFormat specifies the default format for docker images output. When the --
format flag is not provided with the docker images command, Docker’s client uses this property. If
this property is not set, the client falls back to the default table format. For a list of supported
formatting directives, see the Formatting section in the docker images documentation.
The property pluginsFormat specifies the default format for docker plugin ls output. When the --
format flag is not provided with the docker plugin ls command, Docker’s client uses this property.
If this property is not set, the client falls back to the default table format. For a list of supported
formatting directives, see the Formatting section in the docker plugin ls documentation
The property servicesFormat specifies the default format for docker service ls output. When the --
format flag is not provided with the docker service ls command, Docker’s client uses this property.
If this property is not set, the client falls back to the default json format. For a list of supported
formatting directives, see the Formatting section in the docker service ls documentation
The property serviceInspectFormat specifies the default format for docker service inspect output.
When the --format flag is not provided with the docker service inspect command, Docker’s client
uses this property. If this property is not set, the client falls back to the default json format. For a list
of supported formatting directives, see the Formatting section in the docker service
inspect documentation
The property statsFormat specifies the default format for docker stats output. When the --
format flag is not provided with the docker stats command, Docker’s client uses this property. If this
property is not set, the client falls back to the default table format. For a list of supported formatting
directives, see Formatting section in the docker stats documentation
The property secretFormat specifies the default format for docker secret ls output. When the --
format flag is not provided with the docker secret ls command, Docker’s client uses this property.
If this property is not set, the client falls back to the default table format. For a list of supported
formatting directives, see the Formatting section in the docker secret ls documentation.
The property nodesFormat specifies the default format for docker node ls output. When the --
format flag is not provided with the docker node ls command, Docker’s client uses the value
of nodesFormat. If the value of nodesFormat is not set, the client uses the default table format. For a
list of supported formatting directives, see the Formatting section in the docker node
ls documentation
The property configFormat specifies the default format for docker config ls output. When the --
format flag is not provided with the docker config ls command, Docker’s client uses this property.
If this property is not set, the client falls back to the default table format. For a list of supported
formatting directives, see the Formatting section in the docker config ls documentation.
The property credsStore specifies an external binary to serve as the default credential store. When
this property is set, docker login will attempt to store credentials in the binary specified by docker-
credential-<value> which is visible on $PATH. If this property is not set, credentials will be stored in
the auths property of the config. For more information, see the Credentials store section in
the docker login documentation.
The property credHelpers specifies a set of credential helpers to use preferentially
over credsStore or auths when storing and retrieving credentials for specific registries. If this
property is set, the binary docker-credential-<value> will be used when storing or retrieving
credentials for a specific registry. For more information, see the Credential helpers section in
the docker login documentation
The property stackOrchestrator specifies the default orchestrator to use when running docker
stack management commands. Valid values are "swarm", "kubernetes", and "all". This property
can be overridden with the DOCKER_STACK_ORCHESTRATOR environment variable, or the --
orchestrator flag.
Once attached to a container, users detach from it and leave it running using the CTRL-p CTRL-q
key sequence. This detach key sequence is customizable using the detachKeys property. Specify
a <sequence> value for the property. The format of the <sequence> is a comma-separated list of either
a letter [a-Z], or the ctrl- combined with any of the following:
a-z (a single lowercase alpha character)
@ (at sign)
[ (left bracket)
\\ (two backward slashes)
_ (underscore)
^ (caret)
Your customization applies to all containers started with your Docker client. Users can override
your custom or the default key sequence on a per-container basis. To do this, the user specifies
the --detach-keys flag with the docker attach, docker exec, docker run or docker start command.
The property plugins contains settings specific to CLI plugins. The key is the plugin name, while the
value is a further map of options, which are specific to that plugin.
Following is a sample config.json file:
{
"HttpHeaders": {
"MyHeader": "MyValue"
},
"psFormat": "table {{.ID}}\\t{{.Image}}\\t{{.Command}}\\t{{.Labels}}",
"imagesFormat": "table {{.ID}}\\t{{.Repository}}\\t{{.Tag}}\\t{{.CreatedAt}}",
"pluginsFormat": "table {{.ID}}\t{{.Name}}\t{{.Enabled}}",
"statsFormat": "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}",
"servicesFormat": "table {{.ID}}\t{{.Name}}\t{{.Mode}}",
"secretFormat": "table {{.ID}}\t{{.Name}}\t{{.CreatedAt}}\t{{.UpdatedAt}}",
"configFormat": "table {{.ID}}\t{{.Name}}\t{{.CreatedAt}}\t{{.UpdatedAt}}",
"serviceInspectFormat": "pretty",
"nodesFormat": "table {{.ID}}\t{{.Hostname}}\t{{.Availability}}",
"detachKeys": "ctrl-e,e",
"credsStore": "secretservice",
"credHelpers": {
"awesomereg.example.org": "hip-star",
"unicorn.example.com": "vcbait"
},
"stackOrchestrator": "kubernetes",
"plugins": {
"plugin1": {
"option": "value"
},
"plugin2": {
"anotheroption": "anothervalue",
"athirdoption": "athirdvalue"
}
}
}
Notary
If using your own notary server and a self-signed certificate or an internal Certificate Authority, you
need to place the certificate at tls/<registry_url>/ca.crt in your docker config directory.
Alternatively you can trust the certificate globally by adding it to your system’s list of root Certificate
Authorities.
Examples
Display help text
To list the help on any command just execute the command, followed by the --help option.
$ docker run --help
Options:
--add-host value Add a custom host-to-IP mapping (host:ip) (default
[])
-a, --attach value Attach to STDIN, STDOUT or STDERR (default [])
...
Option types
Single character command line options can be combined, so rather than typing docker run -i -t --
name test busybox sh, you can write docker run -it --name test busybox sh.
BOOLEAN
Boolean options take the form -d=false. The value you see in the help text is the default value which
is set if you do not specify that flag. If you specify a Boolean flag without a value, this will set the flag
to true, irrespective of the default value.
For example, running docker run -d will set the value to true, so your container will run in
“detached” mode, in the background.
Options which default to true (e.g., docker build --rm=true) can only be set to the non-default
value by explicitly setting them to false:
$ docker build --rm=false .
MULTI
You can specify options like -a=[] multiple times in a single command line, for example in these
commands:
$ docker run -a stdin -a stdout -i -t ubuntu /bin/bash
Sometimes, multiple options can call for a more complex value string as for -v:
$ docker run -v /host:/container example/mysql
Note: Do not use the -t and -a stderr options together due to limitations in the pty implementation.
All stderr in pty mode simply goes to stdout.
Options like --name="" expect a string, and they can only be specified once. Options like -c=0 expect
an integer, and they can only be specified once.
Description
The base command for the Docker CLI.
Child commands
Command Description
docker checkpoint    Manage checkpoints
docker import Import the contents from a tarball to create a filesystem image
docker port List port mappings or a specific mapping for the container
docker wait Block until one or more containers stop, then print their exit codes
Docker app
Working with Docker App
(experimental)
This is an experimental feature.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
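A sketch of the relevant config.json fragment:
{
  "experimental": "enabled"
}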
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Overview
Docker App is a CLI plug-in that introduces a top-level docker app command to bring the container
experience to applications. The following table compares Docker containers with Docker
applications.
Object Config file Build with Execute with Share with
Container Dockerfile docker image build docker container run docker image push
App App Package docker app bundle docker app install docker app push
With Docker App, entire applications can now be managed as easily as images and containers. For
example, Docker App lets you build, validate and deploy applications with the docker app command.
You can even leverage secure supply-chain features such as signed push and pull operations.
NOTE: docker app works with Engine - Community 19.03 or higher and Engine - Enterprise
19.03 or higher.
The first scenario describes basic components of a Docker App with tools and workflow.
1. Prerequisites
2. Initialize an empty new project
3. Populate the project
4. Validate the app
5. Deploy the app
6. Push the app to Docker Hub or Docker Trusted Registry
7. Install the app directly from Docker Hub
Prerequisites
You need at least one Docker node operating in Swarm mode. You also need the latest build of the
Docker CLI with the App CLI plugin included.
Depending on your Linux distribution and your security context, you might need to prepend
commands with sudo.
Use the following command to initialize a new empty project called “hello-world”.
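A hedged sketch of that command, based on the usage shown later in this reference (the --single-file flag is discussed below):
$ docker app init hello-world --single-file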
The command produces a single file in your current directory called hello-world.dockerapp. The file name is the project name with `.dockerapp` appended.
$ ls
hello-world.dockerapp
If you run docker app init without the --single-file flag, you get a new directory containing three
YAML files. The name of the directory is the name of the project with .dockerapp appended, and the
three YAML files are:
docker-compose.yml
metadata.yml
parameters.yml
However, the --single-file option merges the three YAML files into a single YAML file with three
sections. Each of these sections relates to one of the three YAML files mentioned
previously: docker-compose.yml, metadata.yml, and parameters.yml. Using the --single-file option
enables you to share your application using a single configuration file.
Inspect the YAML with the following command.
$ cat hello-world.dockerapp
# Application metadata - equivalent to metadata.yml.
version: 0.1.0
name: hello-world
description:
---
# Application services - equivalent to docker-compose.yml.
version: "3.6"
services: {}
---
# Default application parameters - equivalent to parameters.yml.
Notice that each of the three sections is separated by a set of three dashes (“---“). Let’s quickly
describe each section.
The first section of the file specifies identification metadata such as name, version, description, and maintainers. It accepts key-value pairs. This part of the file can be a separate file called metadata.yml.
The second section of the file describes the application. It can be a separate file called docker-compose.yml.
The final section specifies default values for application parameters. It can be a separate file called parameters.yml.
Use your preferred editor to edit the hello-world.dockerapp YAML file and update the application
section with the following information:
version: "3.6"
services:
  hello:
    image: hashicorp/http-echo
    command: ["-text", "${hello.text}"]
    ports:
      - ${hello.port}:5678
The sections of the YAML file are currently order-based. This means it’s important they remain in the
order we’ve explained, with the metadata section being first, the app section being second, and
the parameters section being last. This may change to name-based sections in future releases.
docker app validate operations fail if the Parameters section doesn’t specify a default value for
every parameter expressed in the app section.
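For example, a parameters section matching the app section above could default both variables as follows (these are also the values reported by docker app status later in this walkthrough):
hello:
  text: Hello, World!
  port: 8080
You can then check the project with:
$ docker app validate hello-world.dockerapp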
As the validate operation has returned no problems, the app is ready to be deployed.
All three deployment options are discussed below, starting with deploying as a native Docker App.
By default, docker app uses the current context to run the installation container and as the target context to deploy the application. You can override the second context using the --target-context flag or the DOCKER_TARGET_CONTEXT environment variable. This flag is also available for the status, upgrade, and uninstall commands.
$ docker app install hello-world.dockerapp --name my-app --target-context=my-big-production-cluster
Creating network my-app_default
Creating service my-app_hello
Application "my-app" installed on context "my-big-production-cluster"
Note: Two applications deployed on the same target context cannot share the same name, but this
is valid if they are deployed on different target contexts.
You can check the status of the app with the docker app status <app-name> command.
$ docker app status my-app
INSTALLATION
------------
Name: my-app
Created: 35 seconds
Modified: 31 seconds
Revision: 01DCMY7MWW67AY03B029QATXFF
Last Action: install
Result: SUCCESS
Orchestrator: swarm
APPLICATION
-----------
Name: hello-world
Version: 0.1.0
Reference:
PARAMETERS
----------
hello.port: 8080
hello.text: Hello, World!
STATUS
------
ID NAME MODE REPLICAS IMAGE PORTS
miqdk1v7j3zk my-app_hello replicated 1/1 hashicorp/http-echo:latest *:8080->5678/tcp
The app is deployed using the stack orchestrator. This means you can also inspect it using the
regular docker stack commands.
$ docker stack ls
NAME SERVICES ORCHESTRATOR
my-app 1 Swarm
Now that the app is running, you can point a web browser at the DNS name or public IP of the Docker node on port 8080 and see the app. You must ensure traffic to port 8080 is allowed on the connection from your browser to your Docker host.
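For example, assuming you are on the node itself:
$ curl http://localhost:8080
Hello, World!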
Now change the port of the application using the docker app upgrade <app-name> command.
$ docker app upgrade my-app --hello.port=8181
Upgrading service my-app_hello
Application "my-app" upgraded on context "default"
You can uninstall the app with docker app uninstall my-app.
The process for deploying as a Compose app comprises two major steps: rendering the application as a single docker-compose.yml file, and then deploying that file with Docker Compose.
Rendering is the process of reading the entire application configuration and outputting it as a single docker-compose.yml file. This creates a Compose file with hard-coded values wherever a parameter was specified as a variable.
Use the following command to render the app to a Compose file called docker-compose.yml in the current directory.
$ docker app render --output docker-compose.yml hello-world.dockerapp
Notice that the file contains hard-coded values that were expanded based on the contents of
the Parameters section of the project’s YAML file. For example, ${hello.text} has been expanded
to “Hello world!”.
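A sketch of what the rendered file might contain, assuming the parameter defaults used in this walkthrough (the exact layout produced by render may differ):
version: "3.6"
services:
  hello:
    image: hashicorp/http-echo
    command: ["-text", "Hello, World!"]
    ports:
      - 8080:5678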
Note: Almost all of the docker app commands support the --set key=value flag to override a default parameter.
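With the Compose file rendered, deployment is the standard Compose workflow; a minimal sketch, assuming Docker Compose is installed on the host:
$ docker-compose up -d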
The application is now running as a Docker Compose app and should be reachable on port 8080 on your Docker host. You must ensure traffic to port 8080 is allowed on the connection from your browser to your Docker host.
You can use docker-compose down to stop and remove the application.
Deploying the app as a Docker stack is a two-step process very similar to deploying it as a Docker
Compose app.
Complete the steps in the previous section to render the Docker app project as a Compose file and
make sure you’re ready to deploy it as a Docker Stack. Your Docker host must be in Swarm mode.
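A minimal sketch of the deployment step, assuming the docker-compose.yml rendered in the previous section and the stack name used below:
$ docker stack deploy -c docker-compose.yml hello-world-app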
The app is now deployed as a Docker stack and can be reached on port 8080 on your Docker host.
Use the docker stack rm hello-world-app command to stop and remove the stack. You must ensure traffic to port 8080 is allowed on the connection from your browser to your Docker host.
Push the application to Docker Hub. To complete this step, you need a valid Docker ID and you
must be logged in to the registry to which you are pushing the app.
By default, all platform architectures are pushed to the registry. If you are pushing an official Docker image as part of your app, you may find your app bundle becomes large with all image architectures embedded. To push only the required architecture, you can add the --platform flag.
$ docker login
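A sketch of the push, assuming a hypothetical Docker Hub repository myhubuser/hello-world:
$ docker app push hello-world --platform linux/amd64 --tag myhubuser/hello-world:0.1.0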
By default, all platform architectures are pushed to DTR. If you are pushing an official Docker image as part of your app, you may find your app bundle becomes large with all image architectures embedded. To push only the required architecture, you can add the --platform flag.
$ docker login dtr.example.com
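The equivalent push to DTR, with the hypothetical repository prefixed by the registry FQDN:
$ docker app push hello-world --platform linux/amd64 --tag dtr.example.com/admin/hello-world:0.1.0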
This action was performed directly against the app in the registry. Note that for DTR, the application
will be prefixed with the Fully Qualified Domain Name (FQDN) of your trusted registry.
Now install it as a native Docker App by referencing the app in the registry, with a different port. The app used in these examples is a simple web server that displays the text "Hello world!"; here it is exposed on port 8181, but your app might be different.
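A sketch of that installation, assuming the hypothetical Docker Hub repository pushed above:
$ docker app install myhubuser/hello-world:0.1.0 --name hello-remote --set hello.port=8181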
$ curl http://localhost:8181
Hello world!
You can see the name of your Docker App with the docker stack ls command.
CLI reference
Estimated reading time: 2 minutes
Description
Docker Application
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Child commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Parent command
Command Description
Extended description
A tool to build and manage Docker Applications.
Description
Create a CNAB invocation image and bundle.json for the application
This command is experimental.
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app bundle [APP_NAME] [--output OUTPUT_FILE]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app bundle myapp.dockerapp
Description
Generates completion scripts for the specified shell (bash or zsh)
This command is experimental.
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app completion SHELL
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Extended description
# Load the "docker app" completion code for bash into the current shell
. <(docker app completion bash)
# Set the "docker app" completion code for bash to autoload on startup in your ~/.bashrc, ~/.profile or ~/.bash_profile
. <(docker app completion bash)
# Load the "docker app" completion code for zsh into the current shell
source <(docker app completion zsh)
Examples
$ . <(docker app completion bash)
Description
Initialize Docker Application definition
This command is experimental.
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app init APP_NAME [--compose-file COMPOSE_FILE] [--description DESCRIPTION] [--maintainer NAME:EMAIL ...] [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app init myapp --description "a useful description"
Description
Shows metadata, parameters and a summary of the Compose file for a given application
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app inspect [APP_NAME] [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app inspect myapp.dockerapp
Description
Install an application
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app install [APP_NAME] [--name INSTALLATION_NAME] [--target-context TARGET_CONTEXT] [OPTIONS]
Options
Name, shorthand Default Description
--kubernetes-namespace Kubernetes namespace to install into
--with-registry-auth Sends registry auth
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Extended description
Install an application. By default, the application definition in the current directory will be installed.
The APP_NAME can also be:
Examples
$ docker app install myapp.dockerapp --name myinstallation --target-context=mycontext
$ docker app install myrepo/myapp:mytag --name myinstallation --target-context=mycontext
$ docker app install bundle.json --name myinstallation --credential-set=mycredentials.yml
Description
List the installations and their last known installation result
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app list [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Description
Merge a directory format Docker Application definition into a single file
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app merge [APP_NAME] [--output OUTPUT_FILE]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app merge myapp.dockerapp --output myapp-single.dockerapp
Description
Push an application package to a registry
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app push [APP_NAME] --tag TARGET_REFERENCE [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app push myapp --tag myrepo/myapp:mytag
docker app render
Estimated reading time: 3 minutes
Description
Render the Compose file for an Application Package
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app render [APP_NAME] [--set KEY=VALUE ...] [--parameters-file PARAMETERS-FILE ...] [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app render myapp.dockerapp --set key=value
Description
Split a single-file Docker Application definition into the directory format
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app split [APP_NAME] [--output OUTPUT_DIRECTORY]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app split myapp.dockerapp --output myapp-directory.dockerapp
Description
Get the installation status of an application
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app status INSTALLATION_NAME [--target-context TARGET_CONTEXT] [OPTIONS]
Options
Name, shorthand Default Description
--with-registry-auth Sends registry auth
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Extended description
Get the installation status of an application. If the installation is a Docker Application, the status
shows the stack services.
Examples
$ docker app status myinstallation --target-context=mycontext
Description
Uninstall an application
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app uninstall INSTALLATION_NAME [--target-context TARGET_CONTEXT] [OPTIONS]
Options
Name, shorthand Default Description
--with-registry-auth Sends registry auth
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app uninstall myinstallation --target-context=mycontext
Description
Upgrade an installed application
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app upgrade INSTALLATION_NAME [--target-context TARGET_CONTEXT] [OPTIONS]
Options
Name, shorthand Default Description
--with-registry-auth Sends registry auth
Parent command
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Examples
$ docker app upgrade myinstallation --target-context=mycontext --set key=value
docker app validate
Estimated reading time: 2 minutes
Description
Checks the rendered application is syntactically correct
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app validate [APP_NAME] [--set KEY=VALUE ...] [--parameters-file PARAMETERS_FILE]
Options
Name, shorthand Default Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Description
Print version information
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker app version
Parent command
Command Description
Related commands
Command Description
docker app bundle Create a CNAB invocation image and bundle.json for the application
docker app completion Generates completion scripts for the specified shell (bash or zsh)
docker app list List the installations and their last known installation result
docker app merge Merge a directory format Docker Application definition into a single file
docker app render Render the Compose file for an Application Package
docker app split Split a single-file Docker Application definition into the directory format
docker app uninstall Uninstall an application
docker app upgrade Upgrade an installed application
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Overview
Docker Assemble (docker assemble) is a plugin which provides a language- and framework-aware tool that enables users to build an application into an optimized Docker container. With Docker Assemble, users can quickly build Docker images without providing configuration information (such as a Dockerfile) by auto-detecting the required information from existing framework configuration.
System requirements
Docker Assemble requires Linux, Windows, or macOS Mojave, with the Docker Engine installed.
Install
Docker Assemble requires its own buildkit instance to be running in a Docker container on the local
system. You can start and manage the backend using the backend subcommand of docker
assemble.
To start the backend, run:
$ docker assemble backend start
When the backend is running, it can be used for multiple builds and does not need to be restarted.
Note: For instructions on running a remote backend, accessing logs, saving the build cache in a
named volume, accessing a host port, and for information about the buildkit instance, see --help .
Ensure you are running the backend before you build any projects using Docker Assemble. For
instructions on running the backend, see Install Docker Assemble.
Clone the git repository you would like to use. The following example uses the docker-springframework repository.
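For example (the repository URL here is illustrative):
~$ git clone https://github.com/<your-account>/docker-springframework
Cloning into 'docker-springframework'...
«…»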
When you build a Spring Boot project, Docker Assemble automatically detects the information it
requires from the pom.xml project file.
Build the project using the docker assemble build command by passing it the path to the source
repository:
~$ docker assemble build docker-springframework
«…»
Successfully built: docker.io/library/hello-boot:1
The resulting image is exported to the local Docker image store using a name and a tag which are
automatically determined by the project metadata.
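To check the result, you can run a container from the image and watch its startup logs; a sketch, assuming the application listens on Spring Boot's default port 8080:
~$ docker run --rm -p 8080:8080 hello-boot:1
The startup logs include the Spring Boot banner: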
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.5.2.RELEASE)
Ensure you are running the backend before you build any projects using Docker Assemble. For
instructions on running the backend, see Install Docker Assemble.
Clone the git repository you would like to use. The following example uses the dotnetdemo repository.
~$ git clone https://github.com/mbentley/dotnetdemo
Cloning into 'dotnetdemo'...
«…»
Build the project using the docker assemble build command by passing it the path to the source
repository (or a subdirectory in the following example):
~$ docker assemble build dotnetdemo/dotnetdemo
«…»
Successfully built: docker.io/library/dotnetdemo:latest
The resulting image is exported to the local Docker image store using a name and a tag which are
automatically determined by the project metadata.
~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
dotnetdemo latest a055e61e3a9e 24 seconds ago 349MB
The only mandatory field in docker-assemble.yaml is version. All other parameters are
optional.
At most one of dotnet or springboot can be present in the yaml file.
Fields of type duration are integers with nanosecond granularity. However, the following units of time are supported: ns, us (or µs), ms, s, m, h. For example, 25s. A minimal configuration sketch follows.
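A minimal sketch of a docker-assemble.yaml using only keys mentioned in this section; the version value and the nesting of the image and springboot keys follow the -o naming described below and are assumptions:
version: "0.1"
image:
  repository-namespace: myorg
  repository-name: hello-boot
  tag: "1"
springboot:
  enabled: true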
Each setting in the configuration file has a command line equivalent which can be used with the -o/-
-option argument, which takes a KEY=VALUE string where KEY is constructed by joining each element
of the YAML hierarchy with a period (.).
For example, the image → repository-namespace key in the YAML becomes -o image.repository-namespace=NAME on the command line, and springboot → enabled becomes -o springboot.enabled=BOOLEAN.
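For example, a hedged command-line equivalent of the YAML sketch above (the values are illustrative):
$ docker assemble build -o image.repository-namespace=myorg -o springboot.enabled=true docker-springframework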
The following convenience aliases take precedence over the -o/--option equivalents:
--namespace is an alias for image.repository-namespace;
--name corresponds to image.repository-name;
--tag corresponds to image.tag;
--label corresponds to image.labels (can be used multiple times);
--port corresponds to image.ports (can be used multiple times)
Multi-platform images
By default, Docker Assemble builds images for the linux/amd64 platform and exports them to the
local Docker image store. This is also true when running Docker Assemble on Windows or macOS.
For some application frameworks, Docker Assemble can build multi-platform images to support
running on several host platforms. For example, linux/amd64 and windows/amd64.
To support multi-platform images, images must be pushed to a registry instead of the local image
store. This is because the local image store can only import uni-platform images which match its
platform.
To enable the multi-platform mode, use the --push option. For example:
$ docker assemble build --push /path/to/my/project
Linux-based images must be Debian, Red Hat, or Alpine-based and have a standard environment
with:
find
xargs
grep
true
a standard POSIX shell (located at /bin/sh)
These tools are required for internal inspection that Docker Assemble performs on the images.
Depending on the type of your project and your configuration, the base images must meet other
requirements as described in the following sections.
Spring Boot
Install the Java JDK and Maven on the base build image and ensure they are available in $PATH. Install a Maven settings file as /usr/share/maven/ref/settings-docker.xml (irrespective of the install location of Maven).
Ensure the base runtime image has a Java JRE installed and that it is available in $PATH. The build and runtime images must have the same version of Java installed.
Supported platforms: linux/amd64 (build image); linux/amd64, windows/amd64 (runtime image).
ASP.NET Core
Install .NET Core SDK on the base build image and ensure it includes the .NET Core command-line
interface tools.
Install .NET Core command-line interface tools on the base runtime image.
Supported platforms: linux/amd64 (build image); linux/amd64, windows/amd64 (runtime image).
Bill of lading
Docker Assemble generates a bill of lading when building an image. This contains information about
the tools, base images, libraries, and packages used by Assemble to build the image and that are
included in the runtime image. The bill of lading has two parts – one for build and one for runtime.
You can find the bill of lading by inspecting the resulting image. It is stored using the
label com.docker.assemble.bill-of-lading:
$ docker image inspect --format '{{ index .Config.Labels "com.docker.assemble.bill-of-lading" }}' <image>
Note: The bill of lading is only supported on the linux/amd64 platform and only for images which are
based on Alpine (apk), Red Hat (rpm) or Debian (dpkg-query).
Health checks
Docker Assemble only supports health checks on linux/amd64-based runtime images, and these require certain additional commands to be present depending on the value of image.healthcheck.kind:
On Alpine (apk) and Debian (dpkg) based images, these dependencies are installed automatically.
For other base images, you must ensure they are present in the images you specify.
If your base runtime image lacks the necessary commands, you may need to
set image.healthcheck.kind to none in your docker-assemble.yaml file.
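A sketch of that setting, assuming the YAML hierarchy implied by the option name image.healthcheck.kind (the version value is an assumption):
version: "0.1"
image:
  healthcheck:
    kind: none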
Notes:
You can repeat the --allow-host-port option or give it a comma separated list of ports.
Passing --allow-host-port 0 disables the default and no ports are exposed. For example:
$ docker assemble backend start --allow-host-port 0
On Docker Desktop, this functionality allows the backend to access ports on the Docker Desktop VM host, rather than the Windows or macOS host. To access a port on the Windows or macOS host, you can use host.docker.internal as usual.
Backend sub-commands
Info
The info sub-command describes the backend:
$ docker assemble backend info
Sidecar containers:
- 0f339c0cc8d7 docker-assemble-backend-username-proxy-port-5000 (running)
Found 1 worker(s):
- 70it95b8x171u5g9jbixkscz9
Platforms:
- linux/amd64
Labels:
- com.docker.assemble.commit: «…»
- org.mobyproject.buildkit.worker.executor: oci
- org.mobyproject.buildkit.worker.hostname: 2f03e7d288e6
- org.mobyproject.buildkit.worker.snapshotter: overlayfs
Build cache contains 54 entries, total size 3.65GB (0B currently in use)
Stop
The stop sub-command destroys the backend container.
Logs
The logs sub-command displays the backend logs.
Cache
The build cache is lost when the backend is stopped. To avoid this, you can create a volume named docker-assemble-backend-cache-«username», and it will automatically be used as the build cache.
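For example, assuming your login name matches the «username» placeholder:
$ docker volume create docker-assemble-backend-cache-$(whoami)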
Alternatively you can specify a named docker volume to use for the cache. For example:
For information regarding the current cache contents, run the command docker assemble backend cache.
To clean the cache, run docker assemble backend cache purge.
docker assemble
Estimated reading time: 2 minutes
Description
assemble is a high-level build tool
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Options
Name, shorthand Default Description
--addr docker-container://docker-assemble-backend-root backend address
Child commands
Command Description
Parent command
Command Description
Description
Manage build backend service
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Child commands
Command Description
docker assemble backend info Print information about build backend service
docker assemble backend logs Show logs for build backend service
Parent command
Command Description
Related commands
Command Description
Description
Manage build cache
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Child commands
Command Description
Related commands
Command Description
docker assemble backend info Print information about build backend service
docker assemble backend logs Show logs for build backend service
Description
Purge build cache
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble backend cache purge
Parent command
Command Description
Related commands
Command Description
Description
Show build cache contents
This command is experimental.
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble backend cache usage
Parent command
Command Description
Related commands
Command Description
Description
Print image to be used as backend
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble backend image
Parent command
Command Description
Related commands
Command Description
docker assemble backend info Print information about build backend service
docker assemble backend logs Show logs for build backend service
Extended description
Print image to be used as backend.
Description
Print information about build backend service
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble backend info
Parent command
Command Description
Related commands
Command Description
docker assemble backend info Print information about build backend service
docker assemble backend logs Show logs for build backend service
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble backend logs
Options
Name, shorthand Default Description
Parent command
Command Description
docker assemble backend info Print information about build backend service
docker assemble backend logs Show logs for build backend service
Description
Start build backend service
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit theconfig.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble backend start
Options
Name, shorthand Default Description
--addr docker-container://docker-assemble-backend-root backend address
Parent command
Command Description
Related commands
Command Description
docker assemble backend info Print information about build backend service
docker assemble backend logs Show logs for build backend service
Description
Stop build backend service
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble backend stop
Options
Name, shorthand Default Description
--addr docker-container://docker-assemble-backend-root backend address
Parent command
Command Description
Related commands
Command Description
docker assemble backend info Print information about build backend service
docker assemble backend logs Show logs for build backend service
Description
Build a project into a container
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble build [PATH]
Options
Name, shorthand Default Description
--debug-dump-config
--debug-dump-image
--debug-dump-llb
--debug-skip-build
--frontend
--frontend-devel
--addr docker-container://docker-assemble-backend-root backend address
Parent command
Command Description
Related commands
Command Description
Description
Print the version number of docker assemble
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker assemble version
Parent command
Command Description
Related commands
Command Description
docker attach
Estimated reading time: 5 minutes
Description
Attach local standard input, output, and error streams to a running container
Usage
docker attach [OPTIONS] CONTAINER
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Use docker attach to attach your terminal’s standard input, output, and error (or any combination of
the three) to a running container using the container’s ID or name. This allows you to view its
ongoing output or to control it interactively, as though the commands were running directly in your
terminal.
Note: The attach command will display the output of the ENTRYPOINT/CMD process. This can appear
as if the attach command is hung when in fact the process may simply not be interacting with the
terminal at that time.
You can attach to the same contained process multiple times simultaneously, from different sessions
on the Docker host.
To stop a container, use CTRL-c. This key sequence sends SIGKILL to the container. If --sig-proxy is true (the default), CTRL-c sends a SIGINT to the container. If the container was run with -i and -t, you can detach from a container and leave it running using the CTRL-p CTRL-q key sequence.
Note: A process running as PID 1 inside a container is treated specially by Linux: it ignores any
signal with the default action. So, the process will not terminate on SIGINT or SIGTERM unless it is
coded to do so.
It is forbidden to redirect the standard input of a docker attach command while attaching to a tty-
enabled container (i.e.: launched with -t).
While a client is connected to container’s stdio using docker attach, Docker uses a ~1MB memory
buffer to maximize the throughput of the application. If this buffer is filled, the speed of the API
connection will start to have an effect on the process output writing speed. This is similar to other
applications like SSH. Because of this, it is not recommended to run performance critical
applications that generate a lot of output in the foreground over a slow client connection. Instead,
users should use the docker logs command to get access to the logs.
To override the sequence for an individual container, use the --detach-keys="<sequence>" flag with the docker attach command. The format of <sequence> is either a single letter [a-Z], or ctrl- combined with another key. For example, a, ctrl-a, X, and ctrl-\\ are all valid key sequences. To configure a different default key sequence for all containers, see the Configuration file section.
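For example, to use ctrl-x as the detach sequence for a single session (the container name my_container is illustrative):
$ docker attach --detach-keys="ctrl-x" my_container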
Examples
Attach to and detach from a running container
$ docker run -d --name topdemo ubuntu /usr/bin/top -b

$ docker attach topdemo
(the output of top streams to your terminal; press CTRL-c to stop the container)

$ echo $?
0

$ docker ps -a | grep topdemo

The exit code of the process you attach to is also returned by docker attach to its caller:

$ docker run --name test -d -it debian
275c44472aebd77c926d4527885bb09f2f6db21d878c75f0a1c212c03d3bcfab

$ docker attach test
root@f38c87f2a42d:/# exit 13
exit

$ echo $?
13
docker build
Estimated reading time: 22 minutes
Description
Build an image from a Dockerfile
Usage
docker build [OPTIONS] PATH | URL | -
Options
Name, shorthand Default Description
--cpu-shares, -c  CPU shares (relative weight)
--disable-content-trust true Skip image verification
--network  Set the networking mode for the RUN instructions during build (API 1.25+)
--output, -o  Output destination (format: type=local,dest=path) (API 1.40+)
--progress auto Set type of progress output (auto, plain, tty). Use plain to show container output
--secret  Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret (API 1.39+)
--ssh  SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|<id>[=<socket>|<key>[,<key>]]) (API 1.39+)
Parent command
Command Description
Extended description
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s
context is the set of files located in the specified PATH or URL. The build process can refer to any of
the files in the context. For example, your build can use a COPY instruction to reference a file in the
context.
The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball
contexts and plain text files.
Git repositories
When the URL parameter points to the location of a Git repository, the repository acts as the build
context. The system recursively fetches the repository and its submodules. The commit history is not
preserved. A repository is first pulled into a temporary directory on your local host. After that
succeeds, the directory is sent to the Docker daemon as the context. Local copy gives you the ability
to access private repositories using local user credentials, VPN’s, and so forth.
Note: If the URL parameter contains a fragment the system will recursively clone the repository and
its submodules using a git clone --recursive command.
Git URLs accept context configuration in their fragment section, separated by a colon :. The first part
represents the reference that Git will check out, and can be either a branch, a tag, or a remote
reference. The second part represents a subdirectory inside the repository that will be used as a
build context.
For example, run this command to use a directory called docker in the branch container:
$ docker build https://github.com/docker/rootfs.git#container:docker
The following table represents all the valid suffixes with their build contexts:
Build Syntax Suffix Commit Used Build Context Used
myrepo.git refs/heads/master /
myrepo.git#mytag refs/tags/mytag /
myrepo.git#mybranch refs/heads/mybranch /
myrepo.git#pull/42/head refs/pull/42/head /
Tarball contexts
If you pass a URL to a remote tarball, the URL itself is sent to the daemon:
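For example (the archive URL below is illustrative):
$ docker build http://server/context.tar.gz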
The download operation will be performed on the host the Docker daemon is running on, which is
not necessarily the same host from which the build command is being issued. The Docker daemon
will fetch context.tar.gz and use it as the build context. Tarball contexts must be tar archives
conforming to the standard tar UNIX format and can be compressed with any one of the ‘xz’, ‘bzip2’,
‘gzip’ or ‘identity’ (no compression) formats.
Text files
Instead of specifying a context, you can pass a single Dockerfile in the URL or pipe the file in
via STDIN. To pipe a Dockerfile from STDIN:
$ docker build - < Dockerfile
If you use STDIN or specify a URL pointing to a plain text file, the system places the contents into a file
called Dockerfile, and any -f, --file option is ignored. In this scenario, there is no context.
By default the docker build command will look for a Dockerfile at the root of the build context.
The -f, --file, option lets you specify the path to an alternative file to use instead. This is useful in
cases where the same set of files are used for multiple builds. The path must be to a file within the
build context. If a relative path is specified then it is interpreted as relative to the root of the context.
In most cases, it’s best to put each Dockerfile in an empty directory. Then, add to that directory only
the files needed for building the Dockerfile. To increase the build’s performance, you can exclude
files and directories by adding a .dockerignore file to that directory as well. For information on
creating one, see the .dockerignore file.
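A minimal .dockerignore might look like the following (the entries are illustrative):
$ cat .dockerignore
.git
*.log
tmp/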
If the Docker client loses connection to the daemon, the build is canceled. This happens if you
interrupt the Docker client with CTRL-c or if the Docker client is killed for any reason. If the build
initiated a pull which is still running at the time the build is cancelled, the pull is cancelled as well.
Examples
Build with PATH
$ docker build .
This example specifies that the PATH is ., and so all the files in the local directory get tarred and sent
to the Docker daemon. The PATH specifies where to find the files for the “context” of the build on the
Docker daemon. Remember that the daemon could be running on a remote machine and that no
parsing of the Dockerfile happens at the client side (where you’re running docker build). That
means that all the files at PATH get sent, not just the ones listed to ADD in the Dockerfile.
The transfer of context from the local machine to the Docker daemon is what the docker client means
when you see the “Sending build context” message.
If you wish to keep the intermediate containers after the build is complete, you must use --rm=false.
This does not affect the build cache.
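Build with URL
For example, you can build directly from a Git repository (the repository below is illustrative):
$ docker build github.com/creack/docker-firefox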
This will clone the GitHub repository and use the cloned repository as context. The Dockerfile at the
root of the repository is used as Dockerfile. You can specify an arbitrary Git repository by using
the git:// or git@ scheme.
$ docker build -f ctx/Dockerfile http://server/ctx.tar.gz
Build with -
$ docker build - < Dockerfile
This will read a Dockerfile from STDIN without context. Due to the lack of a context, no contents of
any local directory will be sent to the Docker daemon. Since there is no context, a
Dockerfile ADD only works if it refers to a remote URL.
$ docker build - < context.tar.gz
This will build an image for a compressed context read from STDIN. Supported formats are: bzip2,
gzip and xz.
This example shows the use of the .dockerignore file to exclude the .git directory from the context.
Its effect can be seen in the changed size of the uploaded context. The builder reference contains
detailed information on creating a .dockerignore file
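Tag an image (-t)
For example:
$ docker build -t vieux/apache:2.0 .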
This will build like the previous example, but it will then tag the resulting image. The repository name
will be vieux/apache and the tag will be 2.0. Read more about valid tags.
You can apply multiple tags to an image. For example, you can apply the latest tag to a newly built
image and add another tag that references a specific version. For example, to tag an image both
as whenry/fedora-jboss:latest and whenry/fedora-jboss:v2.1, use the following:
$ docker build -t whenry/fedora-jboss:latest -t whenry/fedora-jboss:v2.1 .
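Specify a Dockerfile (-f)
For example:
$ docker build -f Dockerfile.debug .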
This will use a file called Dockerfile.debug for the build instructions instead of Dockerfile.
$ curl example.com/remote/Dockerfile | docker build -f - .
The above command will use the current directory as the build context and read a Dockerfile from
stdin.
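For example (the file and image names are illustrative):
$ docker build -f dockerfiles/Dockerfile.debug -t myapp_debug .
$ docker build -f dockerfiles/Dockerfile.prod -t myapp_prod .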
The above commands will build the current build context (as specified by the .) twice, once using a
debug version of a Dockerfile and once using a production version.
$ cd /home/me/myapp/some/dir/really/deep
$ docker build -f /home/me/myapp/dockerfiles/debug /home/me/myapp
$ docker build -f ../../../../dockerfiles/debug /home/me/myapp
These two docker build commands do the exact same thing. They both use the contents of
the debug file instead of looking for a Dockerfile and will use /home/me/myapp as the root of the build
context. Note that debug is in the directory structure of the build context, regardless of how you refer
to it on the command line.
Note: docker build will return a no such file or directory error if the file or directory does not
exist in the uploaded context. This may happen if there is no context, or if you specify a file that is
elsewhere on the Host system. The context is limited to the current directory (and its children) for
security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason
why ADD ../file will not work.
This flag allows you to pass the build-time variables that are accessed like regular environment
variables in the RUN instruction of the Dockerfile. Also, these values don’t persist in the intermediate
or final images like ENV values do. You must add --build-arg for each build argument.
Using this flag will not alter the output you see when the ARG lines from the Dockerfile are echoed
during the build process.
For detailed information on using ARG and ENV instructions, see the Dockerfile reference.
You may also use the --build-arg flag without a value, in which case the value from the local
environment will be propagated into the Docker container being built:
$ export HTTP_PROXY=http://10.20.30.2:1234
$ docker build --build-arg HTTP_PROXY .
This is similar to how docker run -e works. Refer to the docker run documentation for more
information.
The --isolation option specifies the container isolation technology used during the build. It accepts the following value, among others:
default Use the value specified by the Docker daemon's --exec-opt. If the daemon does not specify an isolation technology, Microsoft Windows uses process as its default value.
Specifying the --isolation flag without a value is the same as setting --isolation="default".
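For example, on Windows you might force process isolation for a build (the image name is illustrative):
$ docker build --isolation=process -t myimage .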
Once the image is built, squash the new layers into a new image with a single new layer. Squashing
does not destroy any existing image, rather it creates a new image with the content of the squashed
layers. This effectively makes it look like all Dockerfile commands were created with a single layer.
The build cache is preserved with this method.
The --squash option is an experimental feature, and should not be considered stable.
Squashing layers can be beneficial if your Dockerfile produces multiple layers modifying the same
files, for example, files that are created in one step, and removed in another step. For other use-
cases, squashing images may actually have a negative impact on performance; when pulling an
image consisting of multiple layers, layers can be pulled in parallel, and identical layers can be shared between images (saving space).
For most use cases, multi-stage builds are a better alternative, as they give more fine-grained
control over your build, and can take advantage of future optimizations in the builder. Refer to
the use multi-stage builds section in the userguide for more information.
KNOWN LIMITATIONS
When squashing layers, the resulting image cannot take advantage of layer sharing with
other images, and may use significantly more space. Sharing the base image is still
supported.
When using this option you may see significantly more space used due to storing two copies
of the image, one for the build cache with all the cache layers intact, and one for the
squashed version.
While squashing layers may produce smaller images, it may have a negative impact on
performance, as a single layer takes longer to extract, and downloading a single layer cannot
be parallelized.
When attempting to squash an image that does not make changes to the filesystem (for
example, the Dockerfile only contains ENV instructions), the squash step will fail (see issue
#33823).
PREREQUISITES
Experimental mode can be enabled by using the --experimental flag when starting the Docker
daemon or setting experimental: true in the daemon.json configuration file.
By default, experimental mode is disabled. To see the current configuration, use the docker
version command.
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 092cba3
Built: Wed Feb 8 06:35:24 2017
OS/Arch: linux/amd64
Experimental: false
[...]
To enable experimental mode, users need to restart the docker daemon with the experimental flag
enabled.
Experimental features are included in the standard Docker binaries as of version 1.13.0. To enable experimental features, start the Docker daemon with the --experimental flag. You can also enable it via /etc/docker/daemon.json, for example:
{
"experimental": true
}
[...]
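With experimental mode enabled, you can then build an image with the --squash option, for example (the image name test is illustrative):
$ docker build --squash -t test .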
If you then inspect the built image with docker history, the names of all the original layers show as <missing>, and there is a new layer whose comment is merge.
Test the image: check that /remove_me is gone, that /hello contains hello\nworld, and that the HELLO environment variable's value is world.
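One way to run those checks, assuming the image was tagged test as in the sketch above (the exact commands are illustrative):
$ docker run --rm test ls /
$ docker run --rm test cat /hello
$ docker run --rm test sh -c 'echo $HELLO'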
docker builder
Estimated reading time: 1 minute
Description
Manage builds
API 1.31+ The client and daemon API must both be at least 1.31 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker builder COMMAND
Child commands
Command Description
Parent command
Command Description
Description
Build an image from a Dockerfile
API 1.31+ The client and daemon API must both be at least 1.31 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker builder build [OPTIONS] PATH | URL | -
Options
Name, shorthand Default Description
--cpu-shares, -c  CPU shares (relative weight)
--disable-content-trust true Skip image verification
--network  Set the networking mode for the RUN instructions during build (API 1.25+)
--output, -o  Output destination (format: type=local,dest=path) (API 1.40+)
--progress auto Set type of progress output (auto, plain, tty). Use plain to show container output
--secret  Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret (API 1.39+)
--ssh  SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|<id>[=<socket>|<key>[,<key>]]) (API 1.39+)
Parent command
Command Description
Related commands
Command Description
Description
Remove build cache
API 1.39+ The client and daemon API must both be at least 1.39 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker builder prune
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker buildx
Estimated reading time: 2 minutes
Description
Build with BuildKit
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Child commands
Command Description
Parent command
Command Description
Description
Build from a file
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx bake [OPTIONS] [TARGET...]
Options
Name, shorthand Default Description
--progress auto Set type of progress output (auto, plain, tty). Use plain to show container output
Parent command
Command Description
Related commands
Command Description
Description
Start a build
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx build [OPTIONS] PATH | URL | -
Options
Name, shorthand Default Description
--cgroup-parent  Optional parent cgroup for the container
--cpu-shares, -c  CPU shares (relative weight)
--network  Set the networking mode for the RUN instructions during build
--progress auto Set type of progress output (auto, plain, tty). Use plain to show container output
Parent command
Command Description
Related commands
Command Description
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Child commands
Command Description
docker buildx imagetools create Create a new image based on source images
Parent command
Command Description
Related commands
Command Description
Description
Create a new image based on source images
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx imagetools create [OPTIONS] [SOURCE] [SOURCE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker buildx imagetools create Create a new image based on source images
Description
Show details of image in the registry
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx imagetools inspect [OPTIONS] NAME
Options
Name, shorthand Default Description
Parent command
Command Description
docker buildx imagetools create Create a new image based on source images
Description
Inspect current builder instance
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx inspect [NAME]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker buildx ls
Estimated reading time: 2 minutes
Description
List builder instances
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx ls
Parent command
Command Description
Related commands
Command Description
docker buildx rm
Estimated reading time: 2 minutes
Description
Remove a builder instance
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx rm [NAME]
Parent command
Command Description
Related commands
Command Description
Description
Stop builder instance
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx stop [NAME]
Parent command
Command Description
Related commands
Command Description
Description
Set the current builder instance
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx use [OPTIONS] NAME
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Show buildx version information
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker buildx version
Parent command
Command Description
Related commands
Command Description
docker checkpoint
Estimated reading time: 1 minute
Description
Manage checkpoints
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.
This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker checkpoint COMMAND
Child commands
Command Description
Parent command
Command Description
Description
Create a checkpoint from a running container
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.
This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker checkpoint create [OPTIONS] CONTAINER CHECKPOINT
Options
Name, shorthand Default Description
Related commands
Command Description
docker checkpoint ls
Estimated reading time: 2 minutes
Description
List checkpoints for a container
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.
This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker checkpoint ls [OPTIONS] CONTAINER
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker checkpoint rm
Estimated reading time: 2 minutes
Description
Remove a checkpoint
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.
This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker checkpoint rm [OPTIONS] CONTAINER CHECKPOINT
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker cluster
Estimated reading time: 1 minute
Description
Docker Cluster
Options
Name, shorthand Default Description
Child commands
Command Description
Parent command
Command Description
Extended description
A tool to build and manage Docker Clusters.
Description
Backup a running cluster
Usage
docker cluster backup [OPTIONS] cluster
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Create a new Docker Cluster
Usage
docker cluster create [OPTIONS]
Options
Name, shorthand Default Description
--switch-context, -s  Switch context after cluster create.
Parent command
Command Description
Description
Display detailed information about a cluster
Usage
docker cluster inspect [OPTIONS] cluster
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker cluster ls
Estimated reading time: 1 minute
Description
List all available clusters
Usage
docker cluster ls [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Restore a cluster from a backup
Usage
docker cluster restore [OPTIONS] cluster
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker cluster rm
Estimated reading time: 1 minute
Description
Remove a cluster
Usage
docker cluster rm [OPTIONS] cluster
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Update a running cluster’s desired state
Usage
docker cluster update [OPTIONS] cluster
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker commit
Estimated reading time: 3 minutes
Description
Create a new image from a container’s changes
Usage
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
It can be useful to commit a container’s file changes or settings into a new image. This allows you to
debug a container by running an interactive shell, or to export a working dataset to another server.
Generally, it is better to use Dockerfiles to manage your images in a documented and maintainable
way. Read more about valid image names and tags.
The commit operation will not include any data contained in volumes mounted inside the container.
By default, the container being committed and its processes will be paused while the image is
committed. This reduces the likelihood of encountering data corruption during the process of
creating the commit. If this behavior is undesired, set the --pause option to false.
The --change option will apply Dockerfile instructions to the image that is created.
Supported Dockerfile instructions: CMD|ENTRYPOINT|ENV|EXPOSE|LABEL|ONBUILD|USER|VOLUME|WORKDIR
Examples
Commit a container
$ docker ps
f5283438590d
$ docker images
[HOME=/ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin]
f5283438590d
89373736e2e7f00bc149bd783073ac43d0507da250e999f3f1036e0db60817c0
$ docker ps
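As a further illustration, a commit that applies a Dockerfile instruction via --change might look like this (the container and image names are hypothetical):
$ docker commit --change "ENV DEBUG=true" mycontainer myrepo/myimage:v2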
docker config
Estimated reading time: 1 minute
Description
Manage Docker configs
API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker config COMMAND
Child commands
Command Description
Parent command
Command Description
More info
Store configuration data using Docker Configs
Description
Create a config from a file or STDIN
API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker config create [OPTIONS] CONFIG file|-
Options
Name, shorthand Default Description
--template-driver  Template driver (API 1.37+)
Parent command
Command Description
Related commands
Command Description
Description
Display detailed information on one or more configs
API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker config inspect [OPTIONS] CONFIG [CONFIG...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker config ls
Estimated reading time: 1 minute
Description
List configs
API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Swarm This command works with the Swarm orchestrator.
Usage
docker config ls [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker config rm
Estimated reading time: 1 minute
Description
Remove one or more configs
API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker config rm CONFIG [CONFIG...]
Parent command
Command Description
Related commands
Command Description
docker container
Estimated reading time: 2 minutes
Description
Manage containers
Usage
docker container COMMAND
Child commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Parent command
Command Description
Extended description
Manage containers.
Description
Attach local standard input, output, and error streams to a running container
Usage
docker container attach [OPTIONS] CONTAINER
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Description
Create a new image from a container’s changes
Usage
docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
docker container cp
Estimated reading time: 2 minutes
Description
Copy files/folders between a container and the local filesystem
Usage
docker container cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Options
Name, shorthand Default Description
Parent command
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Extended description
Copy files/folders between a container and the local filesystem
Use '-' as the source to read a tar archive from stdin and extract it to a directory destination in a container. Use '-' as the destination to stream a tar archive of a container source to stdout.
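For example, to stream a directory out of a container as a tar archive, or to extract a local tar archive into a container (the names and paths are illustrative):
$ docker container cp mycontainer:/var/log - > logs.tar
$ docker container cp - mycontainer:/tmp < archive.tar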
Description
Create a new container
Usage
docker container create [OPTIONS] IMAGE [COMMAND] [ARG...]
Options
Name, shorthand Default Description
--blkio-weight-device  Block IO weight (relative device weight)
--cpu-rt-period  Limit CPU real-time period in microseconds (API 1.25+)
--cpu-rt-runtime  Limit CPU real-time runtime in microseconds (API 1.25+)
--cpus  Number of CPUs (API 1.25+)
--device-cgroup-rule  Add a rule to the cgroup allowed devices list
--device-read-iops  Limit read rate (IO per second) from a device
--device-write-bps  Limit write rate (bytes per second) to a device
--device-write-iops  Limit write rate (IO per second) to a device
--disable-content-trust true Skip image verification
--gpus  GPU devices to add to the container (‘all’ to pass all GPUs) (API 1.40+)
--health-start-period  Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s) (API 1.29+)
--init  Run an init inside the container that forwards signals and reaps processes (API 1.25+)
--interactive, -i  Keep STDIN open even if not attached
--io-maxiops  Maximum IOps limit for the system drive (Windows only)
--memory-reservation  Memory soft limit
--memory-swappiness -1 Tune container memory swappiness (0 to 100)
--oom-kill-disable  Disable OOM Killer
--publish-all, -P  Publish all exposed ports to random ports
--stop-timeout  Timeout (in seconds) to stop a container (API 1.25+)
Parent command
Command Description
Related commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Description
Inspect changes to files or directories on a container’s filesystem
Usage
docker container diff CONTAINER
Parent command
Command Description
Related commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Description
Run a command in a running container
Usage
docker container exec [OPTIONS] CONTAINER COMMAND [ARG...]
Options
Name, shorthand Default Description
--env, -e  Set environment variables (API 1.25+)
--workdir, -w  Working directory inside the container (API 1.35+)
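For example, to run a command with an environment variable and working directory set (the container name is illustrative):
$ docker container exec -e MYVAR=1 -w /tmp mycontainer pwd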
Parent command
Command Description
Related commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Description
Export a container’s filesystem as a tar archive
Usage
docker container export [OPTIONS] CONTAINER
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Description
Display detailed information on one or more containers
Usage
docker container inspect [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Description
Kill one or more running containers
Usage
docker container kill [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Related commands
Command Description
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container inspect Display detailed information on one or more containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container rename Rename a container
docker container stats Display a live stream of container(s) resource usage statistics
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
Description
Fetch the logs of a container
Usage
docker container logs [OPTIONS] CONTAINER
Options
Name, shorthand    Default   Description
--tail             all       Number of lines to show from the end of the logs
--timestamps, -t             Show timestamps
--until                      Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes) (API 1.35+)
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
docker container ls
Description
List containers
Usage
docker container ls [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Pause all processes within one or more containers
Usage
docker container pause CONTAINER [CONTAINER...]
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
List port mappings or a specific mapping for the container
Usage
docker container port CONTAINER [PRIVATE_PORT[/PROTO]]
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Remove all stopped containers
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker container prune [OPTIONS]
Options
Name, shorthand Default Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Extended description
Removes all stopped containers.
Examples
Prune containers
$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
4a7f7eebae0f63178aff7eb0aa39cd3f0627a203ab2df258c1a00b456cf20063
f98f9c2aa1eaf727e4ec9c0283bc7d4aa4762fbdba7f26191f26c97f64090360
Filtering
The filtering flag (--filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
The until filter can be Unix timestamps, date formatted timestamps, or Go duration strings
(e.g. 10m, 1h30m) computed relative to the daemon machine’s time. Supported formats for date
formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the daemon will
be used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp.
When providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of
seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds
(aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more
than nine digits long.
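For example, a possible invocation that removes stopped containers older than ten minutes (the duration here is illustrative):
$ docker container prune --filter "until=10m"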
The label filter accepts two formats. One is the label=... (label=<key> or label=<key>=<value>),
which removes containers with the specified labels. The other format is
the label!=... (label!=<key> or label!=<key>=<value>), which removes containers without the
specified labels.
Deleted Containers:
53a9bc23a5168b6caa2bfbefddf1b30f93c7ad57f3dec271fd32707497cb9369
Total reclaimed space: 25 B
Deleted Containers:
4a75091a6d618526fcd8b33ccd6e5928ca2a64415466f768a6180004b0c72c6c
Description
Rename a container
Usage
docker container rename CONTAINER NEW_NAME
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Restart one or more containers
Usage
docker container restart [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
docker container rm
Description
Remove one or more containers
Usage
docker container rm [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Run a command in a new container
Usage
docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]
Options
Name, shorthand           Default   Description
--blkio-weight-device               Block IO weight (relative device weight)
--cpu-rt-period                     Limit CPU real-time period in microseconds (API 1.25+)
--cpu-rt-runtime                    Limit CPU real-time runtime in microseconds (API 1.25+)
--cpus                              Number of CPUs (API 1.25+)
--device-cgroup-rule                Add a rule to the cgroup allowed devices list
--device-read-iops                  Limit read rate (IO per second) from a device
--device-write-bps                  Limit write rate (bytes per second) to a device
--device-write-iops                 Limit write rate (IO per second) to a device
--disable-content-trust   true      Skip image verification
--gpus                              GPU devices to add to the container (‘all’ to pass all GPUs) (API 1.40+)
--health-start-period               Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s) (API 1.29+)
--init                              Run an init inside the container that forwards signals and reaps processes (API 1.25+)
--interactive, -i                   Keep STDIN open even if not attached
--io-maxiops                        Maximum IOps limit for the system drive (Windows only)
--memory-reservation                Memory soft limit
--memory-swappiness       -1        Tune container memory swappiness (0 to 100)
--oom-kill-disable                  Disable OOM Killer
--publish-all, -P                   Publish all exposed ports to random ports
--stop-timeout                      Timeout (in seconds) to stop a container (API 1.25+)
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Start one or more stopped containers
Usage
docker container start [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand    Default   Description
--checkpoint                 Restore from this checkpoint (experimental, daemon)
--checkpoint-dir             Use a custom checkpoint storage directory (experimental, daemon)
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Display a live stream of container(s) resource usage statistics
Usage
docker container stats [OPTIONS] [CONTAINER...]
Options
Name, shorthand Default Description
--no-stream Disable streaming stats and only pull the first result
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
docker container stop
Description
Stop one or more running containers
Usage
docker container stop [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Display the running processes of a container
Usage
docker container top CONTAINER [ps OPTIONS]
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Unpause all processes within one or more containers
Usage
docker container unpause CONTAINER [CONTAINER...]
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
Description
Update configuration of one or more containers
Usage
docker container update [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand        Default   Description
--cpu-rt-period                  Limit the CPU real-time period in microseconds (API 1.25+)
--cpu-rt-runtime                 Limit the CPU real-time runtime in microseconds (API 1.25+)
--cpus                           Number of CPUs (API 1.29+)
--memory-reservation             Memory soft limit
--pids-limit                     Tune container pids limit (set -1 for unlimited) (API 1.40+)
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
docker container wait
Description
Block until one or more containers stop, then print their exit codes
Usage
docker container wait CONTAINER [CONTAINER...]
Parent command
Command Description
Related commands
Command Description
docker container commit    Create a new image from a container’s changes
docker container cp        Copy files/folders between a container and the local filesystem
docker container inspect   Display detailed information on one or more containers
docker container pause     Pause all processes within one or more containers
docker container port      List port mappings or a specific mapping for the container
docker container rename    Rename a container
docker container stats     Display a live stream of container(s) resource usage statistics
docker container unpause   Unpause all processes within one or more containers
docker container update    Update configuration of one or more containers
docker container wait      Block until one or more containers stop, then print their exit codes
docker context
Description
Manage contexts
Usage
docker context COMMAND
Child commands
Command Description
Parent command
Command Description
Usage
docker context create [OPTIONS] CONTEXT
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Creates a new context. This allows you to quickly switch the cli configuration to connect to different
clusters or single nodes.
To create a context from scratch provide the docker and, if required, kubernetes options. The
example below creates the context my-context with a docker endpoint of /var/run/docker.sock and
a kubernetes configuration sourced from the file /home/me/my-kube-config:
$ docker context create my-context \
--docker host=/var/run/docker.sock \
--kubernetes config-file=/home/me/my-kube-config
Use the --from=<context-name> option to create a new context from an existing context. The
example below creates a new context named my-context from the existing context existing-
context:
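That command could look like:
$ docker context create --from existing-context my-context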
If the --from option is not set, the context is created from the current context:
$ docker context create my-context
This can be used to create a context out of an existing DOCKER_HOST based script:
$ source my-setup-script.sh
$ docker context create my-context
To source only the docker endpoint configuration from an existing context use the --docker
from=<context-name> option. The example below creates a new context named my-context using
the docker endpoint configuration from the existing context existing-context and a kubernetes
configuration sourced from the file /home/me/my-kube-config:
$ docker context create my-context \
--docker from=existing-context \
--kubernetes config-file=/home/me/my-kube-config
To source only the kubernetes configuration from an existing context, use the --kubernetes
from=<context-name> option. The example below creates a new context named my-context using
the kubernetes configuration from the existing context existing-context and a docker endpoint
of /var/run/docker.sock:
$ docker context create my-context \
--docker host=/var/run/docker.sock \
--kubernetes from=existing-context
The Docker and Kubernetes endpoint configurations, as well as the default stack orchestrator and
description, can be modified with docker context update.
Description
Export a context to a tar or kubeconfig file
Usage
docker context export [OPTIONS] CONTEXT [FILE|-]
Options
Name, shorthand Default Description
Related commands
Command Description
Extended description
Exports a context in a file that can then be used with docker context import (or with kubectl if
--kubeconfig is set). Default output filename is <CONTEXT>.dockercontext,
or <CONTEXT>.kubeconfig if --kubeconfig is set. To export to STDOUT, you can run docker context
export my-context -.
Description
Import a context from a tar or zip file
Usage
docker context import CONTEXT FILE|-
Parent command
Command Description
Related commands
Command Description
Extended description
Imports a context previously exported with docker context export. To import from stdin, use a
hyphen (-) as filename.
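For example, to import a context from a file exported with the default filename (the context name here is illustrative):
$ docker context import my-context my-context.dockercontext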
Description
Display detailed information on one or more contexts
Usage
docker context inspect [OPTIONS] [CONTEXT] [CONTEXT...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Inspects one or more contexts.
Examples
Inspect a context by name
$ docker context inspect "local+aks"
[
{
"Name": "local+aks",
"Metadata": {
"Description": "Local Docker Engine + Azure AKS endpoint",
"StackOrchestrator": "kubernetes"
},
"Endpoints": {
"docker": {
"Host": "npipe:////./pipe/docker_engine",
"SkipTLSVerify": false
},
"kubernetes": {
"Host": "https://simon-aks-***.hcp.uksouth.azmk8s.io:443",
"SkipTLSVerify": false,
"DefaultNamespace": "default"
}
},
"TLSMaterial": {
"kubernetes": [
"ca.pem",
"cert.pem",
"key.pem"
]
},
"Storage": {
"MetadataPath":
"C:\\Users\\simon\\.docker\\contexts\\meta\\cb6d08c0a1bfa5fe6f012e61a442788c00bed93f5
09141daff05f620fc54ddee",
"TLSPath":
"C:\\Users\\simon\\.docker\\contexts\\tls\\cb6d08c0a1bfa5fe6f012e61a442788c00bed93f50
9141daff05f620fc54ddee"
}
}
]
docker context ls
Description
List contexts
Usage
docker context ls [OPTIONS]
Options
Name, shorthand Default Description
Related commands
Command Description
docker context rm
Description
Remove one or more contexts
Usage
docker context rm CONTEXT [CONTEXT...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Update a context
Usage
docker context update [OPTIONS] CONTEXT
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Updates an existing context. See context create
Description
Set the current docker context
Usage
docker context use CONTEXT
Parent command
Command Description
Related commands
Command Description
Extended description
Set the default context to use when the DOCKER_HOST and DOCKER_CONTEXT environment variables and the
--host and --context global options are not set. To disable usage of contexts, you can use the
special default context.
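For example, switching to a context named my-context (an illustrative name):
$ docker context use my-context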
docker cp
Description
Copy files/folders between a container and the local filesystem
Usage
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Options
Name, shorthand Default Description
Extended description
The docker cp utility copies the contents of SRC_PATH to the DEST_PATH. You can copy from the
container’s file system to the local machine or the reverse, from the local filesystem to the container.
If - is specified for either the SRC_PATH or DEST_PATH, you can also stream a tar archive from STDIN or
to STDOUT. The CONTAINER can be a running or stopped container. The SRC_PATH or DEST_PATH can be
a file or directory.
The docker cp command assumes container paths are relative to the container’s / (root) directory.
This means supplying the initial forward slash is optional; the command
sees compassionate_darwin:/tmp/foo/myfile.txt and compassionate_darwin:tmp/foo/myfile.txt as
identical. Local machine paths can be an absolute or relative value. The command interprets a
local machine’s relative paths as relative to the current working directory where docker cp is run.
The cp command behaves like the Unix cp -a command in that directories are copied recursively
with permissions preserved if possible. Ownership is set to the user and primary group at the
destination. For example, files copied to a container are created with UID:GID of the root user. Files
copied to the local machine are created with the UID:GID of the user which invoked the docker
cp command. However, if you specify the -a option, docker cp sets the ownership to the user and
primary group at the source. If you specify the -L option, docker cp follows any symbolic link in
the SRC_PATH. docker cp does not create parent directories for DEST_PATH if they do not exist.
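For instance, copying a log file out of a container and a config file into it might look like this (the container and path names here are illustrative):
$ docker cp my_container:/var/log/nginx/access.log ./access.log
$ docker cp ./config.conf my_container:/etc/nginx/conf.d/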
Assuming a path separator of /, a first argument of SRC_PATH and second argument of DEST_PATH, the
behavior is as follows:
The command requires SRC_PATH and DEST_PATH to exist according to the above rules. If SRC_PATH is
local and is a symbolic link, the symbolic link, not the target, is copied by default. To copy the link
target and not the link, specify the -L option.
A colon (:) is used as a delimiter between CONTAINER and its path. You can also use : when
specifying paths to a SRC_PATH or DEST_PATH on a local machine, for example file:name.txt. If you
use a : in a local machine path, you must be explicit with a relative or absolute path, for example:
`/path/to/file:name.txt` or `./file:name.txt`
It is not possible to copy certain system files such as resources under /proc, /sys, /dev, tmpfs, and
mounts created by the user in the container. However, you can still copy such files by manually
running tar in docker exec. Both of the following examples do the same thing in different ways
(consider SRC_PATH and DEST_PATH are directories):
$ docker exec CONTAINER tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | tar Cxf DEST_PATH -
$ tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | docker exec -i CONTAINER tar Cxf DEST_PATH -
Using - as the SRC_PATH streams the contents of STDIN as a tar archive. The command extracts the
content of the tar to the DEST_PATH in container’s filesystem. In this case, DEST_PATH must specify a
directory. Using - as the DEST_PATH streams the contents of the resource as a tar archive to STDOUT.
docker create
Description
Create a new container
Usage
docker create [OPTIONS] IMAGE [COMMAND] [ARG...]
Options
Name, shorthand           Default   Description
--blkio-weight-device               Block IO weight (relative device weight)
--cpu-rt-period                     Limit CPU real-time period in microseconds (API 1.25+)
--cpu-rt-runtime                    Limit CPU real-time runtime in microseconds (API 1.25+)
--cpus                              Number of CPUs (API 1.25+)
--device-cgroup-rule                Add a rule to the cgroup allowed devices list
--device-read-iops                  Limit read rate (IO per second) from a device
--device-write-bps                  Limit write rate (bytes per second) to a device
--device-write-iops                 Limit write rate (IO per second) to a device
--disable-content-trust   true      Skip image verification
--gpus                              GPU devices to add to the container (‘all’ to pass all GPUs) (API 1.40+)
--health-start-period               Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s) (API 1.29+)
--init                              Run an init inside the container that forwards signals and reaps processes (API 1.25+)
--interactive, -i                   Keep STDIN open even if not attached
--io-maxiops                        Maximum IOps limit for the system drive (Windows only)
--memory-reservation                Memory soft limit
--memory-swappiness       -1        Tune container memory swappiness (0 to 100)
--oom-kill-disable                  Disable OOM Killer
--publish-all, -P                   Publish all exposed ports to random ports
--stop-timeout                      Timeout (in seconds) to stop a container (API 1.25+)
Parent command
Command Description
Extended description
The docker create command creates a writeable container layer over the specified image and
prepares it for running the specified command. The container ID is then printed to STDOUT. This is
similar to docker run -d except the container is never started. You can then use the docker start
<container_id> command to start the container at any point.
This is useful when you want to set up a container configuration ahead of time so that it is ready to
start when you need it. The initial status of the new container is created.
Please see the run command section and the Docker run reference for more details.
Examples
Create and start a container
$ docker create -t -i fedora bash
6d8af538ec541dd581ebc2a24153a28329acb5268abe5ef868c1f1a261221752
$ docker start -a -i 6d8af538ec5
bash-4.2#
Initialize volumes
As of v1.4.0 container volumes are initialized during the docker create phase (i.e., docker run too).
For example, this allows you to create the data volume container, and then use it from another
container:
$ docker create -v /data --name data ubuntu
240633dfbb98128fa77473d3d9018f6123b99c454b3251427ae190a7d951ad57
total 8
drwxr-xr-x 2 root root 4096 Dec 5 04:10 .
drwxr-xr-x 48 root root 4096 Dec 5 04:11 ..
Similarly, create a host directory bind mounted volume container, which can then be used from the
subsequent container:
$ docker create -v /home/docker:/docker --name docker ubuntu
9aa88c08f319cd1e4515c3c46b0de7cc9aa75e878357b1e96f91e2c773029f03
total 20
drwxr-sr-x 5 1000 staff 180 Dec 5 04:00 .
drwxr-xr-x 48 root root 4096 Dec 5 04:13 ..
-rw-rw-r-- 1 1000 staff 3833 Dec 5 04:01 .ash_history
-rw-r--r-- 1 1000 staff 446 Nov 28 11:51 .ashrc
-rw-r--r-- 1 1000 staff 25 Dec 5 04:00 .gitconfig
drwxr-sr-x 3 1000 staff 60 Dec 1 03:28 .local
-rw-r--r-- 1 1000 staff 920 Nov 28 11:51 .profile
drwx--S--- 2 1000 staff 460 Dec 5 00:51 .ssh
drwxr-xr-x 32 1000 staff 1140 Dec 5 04:01 docker
This (size) allows you to set the container rootfs size to 120G at creation time. This option is only
available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers. For
the devicemapper, btrfs, windowsfilter and zfs graph drivers, the user cannot pass a size less than the
Default BaseFS Size. For the overlay2 storage driver, the size option is only available if the backing
fs is xfs and mounted with the pquota mount option. Under these conditions, the user can pass any size
less than the backing fs size.
On Windows, the default isolation technology is process if the daemon is running on
Windows server, or hyperv if running on Windows client.
Specifying the --isolation flag without a value is the same as setting --isolation="default".
One solution is to add a more permissive rule to a container, allowing it access to a wider
range of devices. For example, supposing our container needs access to a character device with
major number 42 and any minor number (added as new devices appear), the following rule would
be added:
$ docker create --device-cgroup-rule='c 42:* rmw' --name my-container my-image
Then, a user could ask udev to execute a script that runs docker exec my-container mknod
newDevX c 42 <minor> to create the required device when it is added.
NOTE: initially present devices still need to be explicitly added to the create/run command
docker deploy
Description
Deploy a new stack or update an existing stack
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
This command is experimental.
This command is experimental on the Docker daemon. It should not be used in production
environments. To enable experimental features on the Docker daemon, edit the daemon.json and
set experimental to true.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings(Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker deploy [OPTIONS] STACK
Options
Name, shorthand        Default   Description
--bundle-file                    Path to a Distributed Application Bundle file (experimental daemon, Swarm)
--namespace                      Kubernetes namespace to use (Kubernetes)
--prune                          Prune services that are no longer referenced (API 1.27+, Swarm)
--resolve-image        always    Query the registry to resolve image digest and supported platforms (“always”|”changed”|”never”) (API 1.30+, Swarm)
--with-registry-auth             Send registry authentication details to Swarm agents (Swarm)
Parent command
Command Description
Extended description
Create and update a stack from a compose or a dab file on the swarm. This command has to be run
targeting a manager node.
Examples
Compose file
The deploy command supports compose file version 3.0 and above.
$ docker stack deploy --compose-file docker-compose.yml vossibility
$ docker service ls
Description
Inspect changes to files or directories on a container’s filesystem
Usage
docker diff CONTAINER
Parent command
Command Description
Extended description
List the changed files and directories in a container’s filesystem since the container was created.
Three different types of change are tracked:
Symbol   Description
A        A file or directory was added
D        A file or directory was deleted
C        A file or directory was changed
You can use the full or shortened container ID or the container name set using the docker run --name option.
Examples
Inspect the changes to an nginx container:
$ docker diff 1fdfd1f54c1b
C /dev
C /dev/console
C /dev/core
C /dev/stdout
C /dev/fd
C /dev/ptmx
C /dev/stderr
C /dev/stdin
C /run
A /run/nginx.pid
C /var/lib/nginx/tmp
A /var/lib/nginx/tmp/client_body
A /var/lib/nginx/tmp/fastcgi
A /var/lib/nginx/tmp/proxy
A /var/lib/nginx/tmp/scgi
A /var/lib/nginx/tmp/uwsgi
C /var/log/nginx
A /var/log/nginx/access.log
A /var/log/nginx/error.log
docker events
Description
Get real time events from the server
Usage
docker events [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Use docker events to get real-time events from the server. These events differ per Docker object
type.
Object types
CONTAINERS
attach
commit
copy
create
destroy
detach
die
exec_create
exec_detach
exec_die
exec_start
export
health_status
kill
oom
pause
rename
resize
restart
start
stop
top
unpause
update
IMAGES
delete
import
load
pull
push
save
tag
untag
PLUGINS
enable
disable
install
remove
VOLUMES
create
destroy
mount
unmount
NETWORKS
create
connect
destroy
disconnect
remove
DAEMONS
reload
SERVICES
create
remove
update
NODES
create
remove
update
SECRETS
create
remove
update
CONFIGS
create
remove
update
FILTERING
The filtering flag (-f or --filter) format is of “key=value”. If you would like to use multiple filters,
pass multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
Using the same filter multiple times is handled as an OR; for example, --filter
container=588a23dac085 --filter container=a8f7720b8c22 displays events for container
588a23dac085 OR container a8f7720b8c22.
Using multiple filters is handled as an AND; for example, --filter container=588a23dac085 --filter
event=start displays events for container 588a23dac085 AND the event type is start.
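The second case could be run as, for example:
$ docker events --filter "container=588a23dac085" --filter "event=start"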
FORMAT
If a format (--format) is specified, the given template will be executed instead of the default format.
Go’s text/template package describes all the details of the format.
If a format is set to {{json .}}, the events are streamed as valid JSON Lines. For information about
JSON Lines, please refer to http://jsonlines.org/ .
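For example, a run that streams each event as a JSON line:
$ docker events --format '{{json .}}'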
Examples
Basic example
You’ll need two shells for this example.
$ docker events
Type=container Status=create
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=attach
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=start
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=resize
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=die
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=destroy
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
FORMAT AS JSON
{"status":"create","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
{"status":"attach","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
{"Type":"network","Action":"connect","Actor":{"ID":"1b50a5bf755f6021dfa78e..
{"status":"start","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f42..
{"status":"resize","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
docker exec
Description
Run a command in a running container
Usage
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Options
Name, shorthand    Default   Description
--env, -e                    Set environment variables (API 1.25+)
--workdir, -w                Working directory inside the container (API 1.35+)
Parent command
Command Description
Extended description
The docker exec command runs a new command in a running container.
The command started using docker exec only runs while the container’s primary process (PID 1) is
running, and it is not restarted if the container is restarted.
COMMAND will run in the default directory of the container. If the underlying image has a custom
directory specified with the WORKDIR directive in its Dockerfile, this will be used instead.
Examples
Run docker exec on a running container
First, start a container.
This will create a container named ubuntu_bash and start a Bash session.
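One way to do that, assuming the ubuntu image, is:
$ docker run --name ubuntu_bash --rm -i -t ubuntu bash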
This will create a new file /tmp/execWorks inside the running container ubuntu_bash, in the
background.
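A command of that shape, using -d to run detached, could be:
$ docker exec -d ubuntu_bash touch /tmp/execWorks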
Next, execute an interactive bash shell on the container.
$ docker exec -it ubuntu_bash bash
This will create a new Bash session in the container ubuntu_bash with environment variable $VAR set
to “1”. Note that this environment variable will only be valid on the current Bash session.
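Such a session could be started by passing -e, for example:
$ docker exec -it -e VAR=1 ubuntu_bash bash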
By default, the docker exec command runs in the same working directory set when the container was
created.
$ docker exec -it ubuntu_bash pwd
/
You can select the working directory for the command to execute in with the --workdir (or -w) option.
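For example (the directory chosen here is illustrative):
$ docker exec -it -w /root ubuntu_bash pwd
/root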
If the container is paused, the docker exec command fails with an error:
FATA[0000] Error response from daemon: Container test is paused, unpause the
container before exec
$ echo $?
1
docker export
Description
Export a container’s filesystem as a tar archive
Usage
docker export [OPTIONS] CONTAINER
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
The docker export command does not export the contents of volumes associated with the
container. If a volume is mounted on top of an existing directory in the container, docker export will
export the contents of the underlying directory, not the contents of the volume.
Refer to Backup, restore, or migrate data volumes in the user guide for examples on exporting data
in a volume.
Examples
Each of these commands has the same result.
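For example, both of the following export a container named red_panda (an illustrative name) to latest.tar:
$ docker export red_panda > latest.tar
$ docker export --output="latest.tar" red_panda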
docker history
Description
Show the history of an image
Usage
docker history [OPTIONS] IMAGE
Options
Name, shorthand Default Description
Parent command
Command Description
Examples
To see how the docker:latest image was built:
$ docker history docker
To see how the docker:apache image was added to a container’s base image:
$ docker history docker:scm
IMAGE          CREATED         CREATED BY                                      SIZE       COMMENT
2ac9d1098bf1   3 months ago    /bin/bash                                       241.4 MB   Added Apache to Fedora base image
88b42ffd1f7c   5 months ago    /bin/sh -c #(nop) ADD file:1fd8d7f9f6557cafc7   373.7 MB
c69cab00d6ef   5 months ago    /bin/sh -c #(nop) MAINTAINER Lokesh Mandvekar   0 B
511136ea3c5a   19 months ago                                                   0 B        Imported from -
Placeholder Description
.ID Image ID
When using the --format option, the history command will either output the data exactly as the
template declares or, when using the table directive, will include column headers as well.
The following example uses a template without headers and outputs the ID and CreatedSince entries
separated by a colon for the busybox image:
$ docker history --format "{{.ID}}: {{.CreatedSince}}" busybox
Description
Manage the docker engine
Usage
docker engine COMMAND
Child commands
Command Description
Parent command
Command Description
Description
Activate Enterprise Edition
Usage
docker engine activate [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Activate Enterprise Edition.
With this command you may apply an existing Docker enterprise license, or interactively download
one from Docker. In the interactive exchange, you can sign up for a new trial, or download an
existing license. If you are currently running a Community Edition engine, the daemon will be
updated to the Enterprise Edition Docker engine with additional capabilities and long term support.
For more information about different Docker Enterprise license types visit
https://www.docker.com/licenses
For non-interactive scriptable deployments, download your license from https://hub.docker.com/ then
specify the file with the ‘--license’ flag.
Description
Check for available engine updates
Usage
docker engine check [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Usage
docker engine update [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Manage images
Usage
docker image COMMAND
Child commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Parent command
Command Description
Extended description
Manage images.
Description
Build an image from a Dockerfile
Usage
docker image build [OPTIONS] PATH | URL | -
Options
Name, shorthand           Default   Description
--cpu-shares, -c                    CPU shares (relative weight)
--disable-content-trust   true      Skip image verification
--network                           Set the networking mode for the RUN instructions during build (API 1.25+)
--output, -o                        Output destination (format: type=local,dest=path) (API 1.40+)
--progress                auto      Set type of progress output (auto, plain, tty). Use plain to show container output
--secret                            Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret (API 1.39+)
--ssh                               SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|<id>[=<socket>|<key>[,<key>]]) (API 1.39+)
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Description
Show the history of an image
Usage
docker image history [OPTIONS] IMAGE
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Description
Import the contents from a tarball to create a filesystem image
Usage
docker image import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
Options
Name, shorthand Default Description
Parent command
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Description
Display detailed information on one or more images
Usage
docker image inspect [OPTIONS] IMAGE [IMAGE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Description
Load an image from a tar archive or STDIN
Usage
docker image load [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
docker image ls
Description
List images
Usage
docker image ls [OPTIONS] [REPOSITORY[:TAG]]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Description
Remove unused images
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker image prune [OPTIONS]
Options
Name, shorthand Default Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Extended description
Remove all dangling images. If -a is specified, will also remove all images not referenced by any
container.
Examples
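Pruning all unused images, not only dangling ones, might be run as:
$ docker image prune -a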
Example output:
WARNING! This will remove all images without at least one container associated to
them.
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: alpine:latest
untagged:
alpine@sha256:3dcdb92d7432d56604d4545cbd324b14e647b313626d99b889d0626de158f73a
deleted: sha256:4e38e38c8ce0b8d9041a9c4fefe786631d1416225e13b0bfe8cfa2321aec4bba
deleted: sha256:4fe15f8d0ae69e169824f25f1d4da3015a48feeeeebb265cd2e328e15c6a869f
untagged: alpine:3.3
untagged:
alpine@sha256:4fa633f4feff6a8f02acfc7424efd5cb3e76686ed3218abf4ca0fa4a2a358423
untagged: my-jq:latest
deleted: sha256:ae67841be6d008a374eff7c2a974cde3934ffe9536a7dc7ce589585eddd83aff
deleted: sha256:34f6f1261650bc341eb122313372adc4512b4fceddc2a7ecbb84f0958ce5ad65
deleted: sha256:cf4194e8d8db1cb2d117df33f2c75c0369c3a26d96725efb978cc69e046b87e7
untagged: my-curl:latest
deleted: sha256:b2789dd875bf427de7f9f6ae001940073b3201409b14aba7e5db71f408b8569e
deleted: sha256:96daac0cb203226438989926fc34dd024f365a9a8616b93e168d303cfe4cb5e9
deleted: sha256:5cbd97a14241c9cd83250d6b6fc0649833c4a3e84099b968dd4ba403e609945e
deleted: sha256:a0971c4015c1e898c60bf95781c6730a05b5d8a2ae6827f53837e6c9d38efdec
deleted: sha256:d8359ca3b681cc5396a4e790088441673ed3ce90ebc04de388bfcd31a0716b06
deleted: sha256:83fc9ba8fb70e1da31dfcc3c88d093831dbd4be38b34af998df37e8ac538260c
deleted: sha256:ae7041a4cc625a9c8e6955452f7afe602b401f662671cea3613f08f3d9343b35
deleted: sha256:35e0f43a37755b832f0bbea91a2360b025ee351d7309dae0d9737bc96b6d0809
deleted: sha256:0af941dd29f00e4510195dd00b19671bc591e29d1495630e7e0f7c44c1e6a8c0
deleted: sha256:9fc896fc2013da84f84e45b3096053eb084417b42e6b35ea0cce5a3529705eac
deleted: sha256:47cf20d8c26c46fff71be614d9f54997edacfe8d46d51769706e5aba94b16f2b
deleted: sha256:2c675ee9ed53425e31a13e3390bf3f539bf8637000e4bcfbb85ee03ef4d910a1
Total reclaimed space: 16.43 MB
Filtering
The filtering flag (--filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
The until filter can be Unix timestamps, date formatted timestamps, or Go duration strings
(e.g. 10m, 1h30m) computed relative to the daemon machine’s time. Supported formats for date
formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the daemon will
be used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp.
When providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of
seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds
(aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more
than nine digits long.
The label filter accepts two formats. One is the label=... (label=<key> or label=<key>=<value>),
which removes images with the specified labels. The other format is
the label!=... (label!=<key> or label!=<key>=<value>), which removes images without the
specified labels.
Predicting what will be removed
If you are using positive filtering (testing for the existence of a label or that a label has a specific
value), you can use docker image ls with the same filtering syntax to see which images match your
filter.
However, if you are using negative filtering (testing for the absence of a label or that a label
does not have a specific value), this type of filter does not work with docker image ls so you cannot
easily predict which images will be removed. In addition, the confirmation prompt for docker image
prune always warns that all dangling images will be removed, even if you are using --filter.
The following removes images created before 2017-01-04T00:00:00:
$ docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.ID}}\t{{.CreatedAt}}\t{{.Size}}'
REPOSITORY   TAG      IMAGE ID       CREATED AT                      SIZE
foo          latest   2f287ac753da   2017-01-04 13:42:23 -0800 PST   3.98 MB
alpine       latest   88e169ea8f46   2016-12-27 10:17:25 -0800 PST   3.98 MB
busybox      latest   e02e811dd08f   2016-10-07 14:03:58 -0700 PDT   1.09 MB
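A prune matching that description could be:
$ docker image prune -a --filter "until=2017-01-04T00:00:00"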
Deleted Images:
untagged: alpine:latest
untagged:
alpine@sha256:dfbd4a3a8ebca874ebd2474f044a0b33600d4523d03b0df76e5c5986cb02d7e8
untagged: busybox:latest
untagged:
busybox@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
deleted: sha256:e02e811dd08fd49e7f6032625495118e63f597eb150403d02e3238af1df240ba
deleted: sha256:e88b3f82283bc59d5e0df427c824e9f95557e661fcb0ea15fb0fb6f97760f9d9
The following removes images created more than 10 days (240h) ago:
$ docker images
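The prune itself might be run as:
$ docker image prune -a --filter "until=240h"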
Deleted Images:
untagged: golang:1.7.0
untagged:
golang@sha256:6765038c2b8f407fd6e3ecea043b44580c229ccfa2a13f6d85866cf2b4a9628e
deleted: sha256:138c2e6554219de65614d88c15521bfb2da674cbb0bf840de161f89ff4264b96
deleted: sha256:ec353c2e1a673f456c4b78906d0d77f9d9456cfb5229b78c6a960bfb7496b76a
deleted: sha256:fe22765feaf3907526b4921c73ea6643ff9e334497c9b7e177972cf22f68ee93
deleted: sha256:ff845959c80148421a5c3ae11cc0e6c115f950c89bc949646be55ed18d6a2912
deleted: sha256:a4320831346648c03db64149eafc83092e2b34ab50ca6e8c13112388f25899a7
deleted: sha256:4c76020202ee1d9709e703b7c6de367b325139e74eebd6b55b30a63c196abaf3
deleted: sha256:d7afd92fb07236c8a2045715a86b7d5f0066cef025018cd3ca9a45498c51d1d6
deleted: sha256:9e63c5bce4585dd7038d830a1f1f4e44cb1a1515b00e620ac718e934b484c938
untagged: debian:jessie
untagged:
debian@sha256:c1af755d300d0c65bb1194d24bce561d70c98a54fb5ce5b1693beb4f7988272f
deleted: sha256:7b0a06c805e8f23807fb8856621c60851727e85c7bcb751012c813f122734c8d
deleted: sha256:f96222d75c5563900bc4dd852179b720a0885de8f7a0619ba0ac76e92542bbc8
$ docker images
The following example removes images with the label maintainer set to john:
$ docker image prune --filter="label=maintainer=john"
This example removes images which have a maintainer label not set to john:
$ docker image prune --filter="label!=maintainer=john"
Note: You are prompted for confirmation before the prune removes anything, but you are not shown
a list of what will potentially be removed. In addition, docker image ls does not support negative
filtering, so it is difficult to predict what images will actually be removed.
Description
Pull an image or a repository from a registry
Usage
docker image pull [OPTIONS] NAME[:TAG|@DIGEST]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Description
Push an image or a repository to a registry
Usage
docker image push [OPTIONS] NAME[:TAG]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
docker image rm
Description
Remove one or more images
Usage
docker image rm [OPTIONS] IMAGE [IMAGE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Description
Save one or more images to a tar archive (streamed to STDOUT by default)
Usage
docker image save [OPTIONS] IMAGE [IMAGE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
Description
Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
Usage
docker image tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Parent command
Command Description
Related commands
Command Description
docker image history   Show the history of an image
docker image import    Import the contents from a tarball to create a filesystem image
docker image inspect   Display detailed information on one or more images
docker images
Description
List images
Usage
docker images [OPTIONS] [REPOSITORY[:TAG]]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
The default docker images will show all top level images, their repository and tags, and their size.
Docker images have intermediate layers that increase reusability, decrease disk usage, and speed
up docker build by allowing each step to be cached. These intermediate layers are not shown by
default.
The SIZE is the cumulative space taken up by the image and all its parent images. This is also the
disk space used by the contents of the Tar file created when you docker save an image.
An image will be listed more than once if it has multiple repository names or tags. This single image
(identifiable by its matching IMAGE ID) uses up the SIZE listed only once.
Examples
List the most recently created images
$ docker images
For example, to list all images in the “java” repository, run this command:
The [REPOSITORY[:TAG]] value must be an “exact match”. This means that, for example, docker
images jav does not match the image java.
If both REPOSITORY and TAG are provided, only images matching that repository and tag are listed. To
find all local images in the “java” repository with tag “8” you can use:
$ docker images java:8
When pushing or pulling to a 2.0 registry, the push or pull command output includes the image
digest. You can pull using a digest value. You can also reference by digest in create, run,
and rmi commands, as well as the FROM image reference in a Dockerfile.
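For example, a minimal sketch of pulling by digest (the image name and digest below are illustrative only):
$ docker pull ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2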
Filtering
The filtering flag (-f or --filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
The dangling filter displays untagged images that are the leaves of the image tree (not intermediary layers).
These images occur when a new build of an image takes the repo:tag away from the image ID,
leaving it as <none>:<none> or untagged. A warning will be issued if you try to remove an image when
a container is presently using it. This flag allows for batch cleanup.
You can use this in conjunction with docker rmi ...:
$ docker rmi $(docker images -f "dangling=true" -q)
8abc22fbb042
48e5f45168b9
bf747efa0e2f
980fe10e5736
dea752e4e117
511136ea3c5a
Note: Docker warns you if any containers exist that are using these untagged images.
The label filter matches images based on the presence of a label alone or a label and a value.
The following filter matches images with the com.example.version label regardless of its value.
$ docker images --filter "label=com.example.version"
The following filter matches images with the com.example.version label with the 1.0 value.
$ docker images --filter "label=com.example.version=1.0"
In this example, with the 0.1 value, it returns an empty set because no matches were found.
$ docker images --filter "label=com.example.version=0.1"
REPOSITORY TAG IMAGE ID CREATED SIZE
The before filter shows only images created before the image with given id or reference. For
example, having these images:
$ docker images
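As a hedged sketch (image1 is a hypothetical image name from such a listing), the before filter can be used like this:
$ docker images --filter "before=image1"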
The reference filter shows only images whose reference matches the specified pattern.
$ docker images
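A minimal sketch of the reference filter (the pattern shown here is hypothetical):
$ docker images --filter=reference='busy*:*libc'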
Placeholder Description
.ID Image ID
When using the --format option, the image command will either output the data exactly as the
template declares or, when using the table directive, will include column headers as well.
The following example uses a template without headers and outputs the ID and Repository entries
separated by a colon for all images:
$ docker images --format "{{.ID}}: {{.Repository}}"
77af4d6b9913: <none>
b6fa739cedf5: committ
78a85c484f71: <none>
30557a29d5ab: docker
5ed6274db6ce: <none>
746b819f315e: postgres
746b819f315e: postgres
746b819f315e: postgres
746b819f315e: postgres
To list all images with their repository and tag in a table format you can use:
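A minimal sketch of such an invocation, assuming the table directive and the .ID, .Repository, and .Tag placeholders described above:
$ docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"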
docker import
Description
Import the contents from a tarball to create a filesystem image
Usage
docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
You can specify a URL or - (dash) to take data directly from STDIN. The URL can point to an archive
(.tar, .tar.gz, .tgz, .bzip, .tar.xz, or .txz) containing a filesystem or to an individual file on the Docker
host. If you specify an archive, Docker untars it in the container relative to the / (root). If you specify
an individual file, you must specify the full path within the host. To import from a remote location,
specify a URI that begins with the http:// or https:// protocol.
The --change option will apply Dockerfile instructions to the image that is created.
Supported Dockerfile instructions:CMD|ENTRYPOINT|ENV|EXPOSE|ONBUILD|USER|VOLUME|WORKDIR
Examples
Import from a remote location
This will create a new untagged image.
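A hedged sketch of such an import (the URL is hypothetical); the optional --change flag applies Dockerfile instructions to the created image as described above:
$ docker import https://example.com/exampleimage.tgz
$ docker import --change "ENV DEBUG=true" https://example.com/exampleimage.tgz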
Note: you must preserve the ownership of the files (especially root ownership) during the archiving
with tar. If you are not root (or do not use sudo) when you create the tar archive, the ownerships
might not get preserved.
docker info
Description
Display system-wide information
Usage
docker info [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
This command displays system wide information regarding the Docker installation. Information
displayed includes the kernel version, number of containers and images. The number of images
shown is the number of unique images. The same image tagged under different names is counted
only once.
If a format is specified, the given template will be executed instead of the default format.
Go’s text/template package describes all the details of the format.
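For example, a minimal sketch of emitting the entire output as JSON with a Go template:
$ docker info --format '{{json .}}'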
Depending on the storage driver in use, additional information can be shown, such as pool name,
data file, metadata file, data space used, total data space, metadata space used, and total metadata
space.
The data file is where the images are stored and the metadata file is where the meta data regarding
those images are stored. When run for the first time Docker allocates a certain amount of data space
and meta data space from the space available on the volume where /var/lib/docker is mounted.
Examples
Show output
The example below shows the output for a daemon running on Red Hat Enterprise Linux, using
the devicemapper storage driver. As can be seen in the output, additional information about
the devicemapper storage driver is shown:
$ docker info
Client:
Debug Mode: false
Server:
Containers: 14
Running: 3
Paused: 1
Stopped: 10
Images: 52
Server Version: 1.10.3
Storage Driver: devicemapper
Pool Name: docker-202:2-25583803-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.68 GB
Data Space Total: 107.4 GB
Data Space Available: 7.548 GB
Metadata Space Used: 2.322 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 991.7 MiB
Name: ip-172-30-0-91.ec2.internal
ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: gordontheturtle
Registry: https://index.docker.io/v1/
Insecure registries:
myinsecurehost:5000
127.0.0.0/8
$ docker -D info
Client:
Debug Mode: true
Server:
Containers: 14
Running: 3
Paused: 1
Stopped: 10
Images: 52
Server Version: 1.13.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: rdjq45w1op418waxlairloqbm
Is Manager: true
ClusterID: te8kdyw33n36fqiz74bfjeixd
Managers: 1
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Root Rotation In Progress: false
Node Address: 172.16.66.128 172.16.66.129
Manager Addresses:
172.16.66.128:2477
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8517738ba4b82aff5662c97ca4627e7e4d03b531
runc version: ac031b5bf1cc92239461125f4c1ffb760522bbf2
init version: N/A (expected: v0.13.0)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-31-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.937 GiB
Name: ubuntu
ID: H52R:7ZR6:EIIA:76JG:ORIY:BVKF:GSFU:HNPG:B5MK:APSC:SZ3Q:N326
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 30
Goroutines: 123
System Time: 2016-11-12T17:24:37.955404361-08:00
EventsListeners: 0
Http Proxy: http://test:[email protected]:8080
Https Proxy: https://test:[email protected]:8080
No Proxy: localhost,127.0.0.1,docker-registry.somecorporation.com
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
storage=ssd
staging=true
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
http://192.168.1.2/
http://registry-mirror.example.com:5000/
Live Restore Enabled: false
The global -D option causes all docker commands to output debug information.
{"ID":"I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S","Containers":14,
...}
E:\docker>docker info
Client:
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 17
Server Version: 1.13.0
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: nat null overlay
Swarm: inactive
Default Isolation: process
Kernel Version: 10.0 14393 (14393.206.amd64fre.rs1_release.160912-1937)
Operating System: Windows Server 2016 Datacenter
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 3.999 GiB
Name: WIN-V0V70C0LU5P
ID: NYMS:B5VK:UMSL:FVDZ:EWB5:FKVK:LPFL:FJMQ:H6FT:BZJ6:L2TD:XH62
Docker Root Dir: C:\control
Debug Mode: false
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
http://192.168.1.2/
http://registry-mirror.example.com:5000/
Live Restore Enabled: false
You can ignore these warnings unless you actually need the ability to limit these resources, in which
case you should consult your operating system’s documentation for enabling them.
docker inspect
Description
Return low-level information on Docker objects
Usage
docker inspect [OPTIONS] NAME|ID [NAME|ID...]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Docker inspect provides detailed information on constructs controlled by Docker.
Examples
Get an instance’s IP address
For the most part, you can pick out any field from the JSON in a fairly straightforward manner.
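A hedged sketch of extracting a container’s IP address with a Go template (the $INSTANCE_ID placeholder stands for a container name or ID):
$ docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $INSTANCE_ID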
docker kill
Description
Kill one or more running containers
Usage
docker kill [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
The docker kill subcommand kills one or more containers. The main process inside the container
is sent the SIGKILL signal (default), or the signal that is specified with the --signal option. You can kill a
container using the container’s ID, ID-prefix, or name.
Note: ENTRYPOINT and CMD in the shell form run as a subcommand of /bin/sh -c, which does not
pass signals. This means that the executable is not the container’s PID 1 and does not receive Unix
signals.
Examples
Send a KILL signal to a container
The following example sends the default KILL signal to the container named my_container:
$ docker kill my_container
Send a custom signal to a container
The following example sends a SIGHUP signal to the container named my_container:
$ docker kill --signal=SIGHUP my_container
You can specify a custom signal either by name, or number. The SIG prefix is optional, so the
following examples are equivalent:
$ docker kill --signal=SIGHUP my_container
$ docker kill --signal=HUP my_container
$ docker kill --signal=1 my_container
docker load
Description
Load an image from a tar archive or STDIN
Usage
docker load [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Load an image or repository from a tar archive (even if compressed with gzip, bzip2, or xz) from a
file or STDIN. It restores both images and tags.
Examples
$ docker image ls
$ docker images
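A minimal sketch of loading an image, either from a file or from STDIN (the archive names are hypothetical):
$ docker load --input fedora.tar
$ docker load < busybox.tar.gz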
docker login
Description
Log in to a Docker registry
Usage
docker login [OPTIONS] [SERVER]
Options
Name, shorthand   Default   Description
--password, -p              Password
--username, -u              Username
Parent command
Command Description
You can log into any public or private repository for which you have credentials. When you log in, the
command stores credentials in $HOME/.docker/config.json on Linux
or %USERPROFILE%/.docker/config.json on Windows, via the procedure described below.
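For example, a minimal sketch of logging in to a self-hosted registry (the address is hypothetical):
$ docker login localhost:8080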
Credentials store
The Docker Engine can keep user credentials in an external credentials store, such as the native
keychain of the operating system. Using an external store is more secure than storing credentials in
the Docker configuration file.
To use a credentials store, you need an external helper program to interact with a specific keychain
or external store. Docker requires the helper program to be in the client’s host $PATH.
This is the list of currently available credentials helpers and where you can download them from:
You need to specify the credentials store in $HOME/.docker/config.json to tell the docker engine to
use it. The value of the config property should be the suffix of the program to use (i.e. everything
after docker-credential-). For example, to use docker-credential-osxkeychain:
{
"credsStore": "osxkeychain"
}
If you are currently logged in, run docker logout to remove the credentials from the file and
run docker login again.
DEFAULT BEHAVIOR
By default, Docker looks for the native binary on each of the platforms, i.e. “osxkeychain” on macOS,
“wincred” on windows, and “pass” on Linux. A special case is that on Linux, Docker will fall back to
the “secretservice” binary if it cannot find the “pass” binary. If none of these binaries are present, it
stores the credentials (i.e. password) in base64 encoding in the config files described above.
Credential helpers can be any program or script that follows a very simple protocol. This protocol is
heavily inspired by Git, but it differs in the information shared.
The helpers always use the first argument in the command to identify the action. There are only
three possible values for that argument: store, get, and erase.
The store command takes a JSON payload from the standard input. That payload carries the server
address, to identify the credential, the user name, and either a password or an identity token.
{
"ServerURL": "https://index.docker.io/v1",
"Username": "david",
"Secret": "passw0rd1"
}
If the secret being stored is an identity token, the Username should be set to <token>.
The store command can write error messages to STDOUT that the docker engine will show if there
was an issue.
The get command takes a string payload from the standard input. That payload carries the server
address that the docker engine needs credentials for. This is an example of that
payload: https://index.docker.io/v1.
The get command writes a JSON payload to STDOUT. Docker reads the user name and password
from this payload:
{
"Username": "david",
"Secret": "passw0rd1"
}
The erase command takes a string payload from STDIN. That payload carries the server address that
the docker engine wants to remove credentials for. This is an example of that
payload: https://index.docker.io/v1.
The erase command can write error messages to STDOUT that the docker engine will show if there
was an issue.
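As a hedged sketch of this protocol (assuming the docker-credential-osxkeychain helper mentioned above is installed and on the client’s $PATH), invoking the get action by hand would look like the following and would print a JSON payload of the shape shown above:
$ echo "https://index.docker.io/v1" | docker-credential-osxkeychain get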
Credential helpers
Credential helpers are similar to the credential store above, but act as the designated programs to
handle credentials for specific registries. The default credential store (credsStore or the config file
itself) will not be used for operations concerning credentials of the specified registries.
If you are currently logged in, run docker logout to remove the credentials from the default store.
Credential helpers are specified in a similar way to credsStore, but allow for multiple helpers to be
configured at a time. Keys specify the registry domain, and values specify the suffix of the program
to use (i.e. everything after docker-credential-). For example:
{
"credHelpers": {
"registry.example.com": "registryhelper",
"awesomereg.example.org": "hip-star",
"unicorn.example.io": "vcbait"
}
}
docker logout
Description
Log out from a Docker registry
Usage
docker logout [SERVER]
Parent command
Command Description
Examples
$ docker logout localhost:8080
docker logs
Description
Fetch the logs of a container
Usage
docker logs [OPTIONS] CONTAINER
Options
Name, shorthand     Default   Description
--tail              all       Number of lines to show from the end of the logs
--timestamps, -t              Show timestamps
--until                       API 1.35+ Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)
Parent command
Command Description
Extended description
The docker logs command batch-retrieves logs present at the time of execution.
Note: this command is only functional for containers that are started with the json-file or journald
logging driver.
For more information about selecting and configuring logging drivers, refer to Configure logging
drivers.
The docker logs --follow command will continue streaming the new output from the
container’s STDOUT and STDERR.
Passing a negative number or a non-integer to --tail is invalid and the value is set to all in that
case.
The docker logs --timestamps command will add an RFC3339Nano timestamp, for example
2014-09-16T06:17:46.000000000Z, to each log entry. To ensure that the timestamps are aligned, the
nanosecond part of the timestamp will be padded with zero when necessary.
The docker logs --details command will add on extra attributes, such as environment variables
and labels, provided to --log-opt when creating the container.
The --since option shows only the container logs generated after a given date. You can specify the
date as an RFC 3339 date, a UNIX timestamp, or a Go duration string (e.g. 1m30s, 3h). Besides
RFC3339 date format you may also use RFC3339Nano, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the client will be
used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp. When
providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of seconds
that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds (aka Unix
epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more than nine
digits long. You can combine the --since option with either or both of the --follow or --tail options.
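For example, a minimal sketch combining these options (the container name is hypothetical):
$ docker logs --since 30m --tail 20 --follow my_container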
Examples
Retrieve logs until a specific point in time
In order to retrieve logs before a specific point in time, run:
$ docker run --name test -d busybox sh -c "while true; do $(echo date); sleep 1; done"
$ date
Tue 14 Nov 2017 16:40:00 CET
$ docker logs -f --until=2s test
Tue 14 Nov 2017 16:40:00 CET
Tue 14 Nov 2017 16:40:01 CET
Tue 14 Nov 2017 16:40:02 CET
docker manifest
Description
Manage Docker image manifests and manifest lists
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker manifest COMMAND
Child commands
Command Description
docker manifest create Create a local manifest list for annotating and pushing to a registry
Parent command
Command Description
Extended description
The docker manifest command by itself performs no action. In order to operate on a manifest or
manifest list, one of the subcommands must be used.
A single manifest is information about an image, such as layers, size, and digest. The docker
manifest command also gives users additional information such as the os and architecture an image
was built for.
A manifest list is a list of image layers that is created by specifying one or more (ideally more than
one) image names. It can then be used in the same way as an image name in docker
pull and docker run commands, for example.
Ideally a manifest list is created from images that are identical in function for different os/arch
combinations. For this reason, manifest lists are often referred to as “multi-arch images”. However, a
user could create a manifest list that points to two images -- one for windows on amd64, and one for
darwin on amd64.
manifest inspect
manifest inspect --help
Options:
--help Print usage
--insecure Allow communication with an insecure registry
-v, --verbose Output additional info including layers and platform
manifest create
Usage: docker manifest create MANIFEST_LIST MANIFEST [MANIFEST...]
Create a local manifest list for annotating and pushing to a registry
Options:
-a, --amend Amend an existing manifest list
--insecure Allow communication with an insecure registry
--help Print usage
manifest annotate
Usage: docker manifest annotate [OPTIONS] MANIFEST_LIST MANIFEST
Options:
--arch string Set architecture
--help Print usage
--os string Set operating system
--os-features stringSlice Set operating system feature
--variant string Set architecture variant
manifest push
Usage: docker manifest push [OPTIONS] MANIFEST_LIST
Options:
--help Print usage
--insecure Allow push to an insecure registry
-p, --purge Remove the local manifest list after push
Examples
Inspect an image’s manifest object
$ docker manifest inspect hello-world
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1520,
"digest":
"sha256:1815c82652c03bfd8644afda26fb184f2ed891d921b20a0703b46768f9755c57"
},
"layers": [
{
"mediaType":
"application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 972,
"digest":
"sha256:b04784fba78d739b526e27edc02a5a8cd07b1052e9283f5fc155828f4b614c28"
}
]
}
Just as with other docker commands that take image names, you can refer to an image with or
without a tag, or by digest (e.g. hello-world@sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f).
Note that the --insecure flag is not required to annotate a manifest list, since annotations are to a
locally-stored copy of a manifest list. You may also skip the --insecure flag if you are performing
a docker manifest inspect on a locally-stored manifest list. Be sure to keep in mind that locally-
stored manifest lists are never used by the engine on a docker pull.
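As a hedged sketch of a typical workflow (all image names are hypothetical), a manifest list can be created from per-architecture images, annotated, and pushed:
$ docker manifest create myrepo/myapp:latest myrepo/myapp:amd64 myrepo/myapp:arm64
$ docker manifest annotate --os linux --arch arm64 myrepo/myapp:latest myrepo/myapp:arm64
$ docker manifest push myrepo/myapp:latest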
docker manifest annotate
Description
Add additional information to a local image manifest
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker manifest annotate [OPTIONS] MANIFEST_LIST MANIFEST
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker manifest create Create a local manifest list for annotating and pushing to a registry
Description
Create a local manifest list for annotating and pushing to a registry
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker manifest create MANIFEST_LIST MANIFEST [MANIFEST...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker manifest create Create a local manifest list for annotating and pushing to a registry
Description
Display an image manifest, or manifest list
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker manifest inspect [OPTIONS] [MANIFEST_LIST] MANIFEST
Options
Name, shorthand Default Description
Related commands
Command Description
docker manifest create Create a local manifest list for annotating and pushing to a registry
Description
Push a manifest list to a repository
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker manifest push [OPTIONS] MANIFEST_LIST
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker manifest create Create a local manifest list for annotating and pushing to a registry
docker network
Description
Manage networks
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker network COMMAND
Child commands
Command Description
Parent command
Command Description
Extended description
Manage networks. You can use subcommands to create, inspect, list, remove, prune, connect, and
disconnect networks.
Description
Connect a container to a network
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker network connect [OPTIONS] NETWORK CONTAINER
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Connects a container to a network. You can connect a container by name or by ID. Once connected,
the container can communicate with other containers in the same network.
Examples
Connect a running container to a network
$ docker network connect multi-host-network container1
If an IP address is specified with the --ip flag, the container’s IP address(es) is reapplied when a stopped container is restarted. If the
IP address is no longer available, the container fails to start. One way to guarantee that the IP
address is available is to specify an --ip-range when creating the network, and choose the static IP
address(es) from outside that range. This ensures that the IP address is not given to another
container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
To verify the container is connected, use the docker network inspect command. Use docker
network disconnect to remove a container from the network.
Once connected in network, containers can communicate using only another container’s IP address
or name. For overlay networks or custom plugins that support multi-host connectivity, containers
connected to the same multi-host network but launched from different Engines can also
communicate in this way.
You can connect a container to one or more networks. The networks need not be the same type. For
example, you can connect a single container to both a bridge network and an overlay network.
Description
Create a network
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker network create [OPTIONS] NETWORK
Options
Name, shorthand   Default   Description
--attachable                API 1.25+ Enable manual container attachment
--config-from               API 1.30+ The network from which to copy the configuration
--config-only               API 1.30+ Create a configuration-only network
--ingress                   API 1.29+ Create swarm routing-mesh network
--scope                     API 1.30+ Control the network’s scope
Parent command
Command Description
Related commands
Command Description
Extended description
Creates a new network. The DRIVER accepts bridge or overlay which are the built-in network drivers.
If you have installed a third party or your own custom network driver you can specify
that DRIVER here also. If you don’t specify the --driver option, the command automatically creates
a bridge network for you. When you install Docker Engine it creates a bridge network automatically.
This network corresponds to the docker0 bridge that Engine has traditionally relied on. When you
launch a new container with docker run it automatically connects to this bridge network. You cannot
remove this default bridge network, but you can create new ones using the network
create command.
Bridge networks are isolated networks on a single Engine installation. If you want to create a
network that spans multiple Docker hosts each running an Engine, you must create
an overlay network. Unlike bridge networks, overlay networks require some pre-existing conditions
before you can create one. These conditions are:
Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (Distributed
store) key-value stores.
A cluster of hosts with connectivity to the key-value store.
A properly configured Engine daemon on each host in the cluster.
--cluster-store
--cluster-store-opt
--cluster-advertise
To read more about these options and how to configure them, see “Get started with multi-host
network”.
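As a hedged sketch only (the addresses are hypothetical, and these flags apply to older Engine releases that use an external key-value store), the daemon might be started with:
$ dockerd --cluster-store=consul://192.168.1.10:8500 --cluster-advertise=eth0:2376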
While not required, it is a good idea to install Docker Swarm to manage the cluster that makes up
your network. Swarm provides sophisticated discovery and server management tools that can assist
your implementation.
Once you have prepared the overlay network prerequisites you simply choose a Docker host in the
cluster and issue the following to create the network:
$ docker network create -d overlay my-multihost-network
Network names must be unique. The Docker daemon attempts to identify naming conflicts but this is
not guaranteed. It is the user’s responsibility to avoid name conflicts.
Examples
Connect containers
When you start a container, use the --network flag to connect it to a network. This example adds
the busybox container to the mynet network:
$ docker run -itd --network=mynet busybox
If you want to add a container to a network after the container is already running, use the docker
network connect subcommand.
You can connect multiple containers to the same network. Once connected, the containers can
communicate using only another container’s IP address or name. For overlay networks or custom
plugins that support multi-host connectivity, containers connected to the same multi-host network but
launched from different Engines can also communicate in this way.
You can disconnect a container from a network using the docker network disconnectcommand.
Additionally, you can also specify the --gateway, --ip-range, and --aux-address options.
$ docker network create \
--driver=bridge \
--subnet=172.28.0.0/16 \
--ip-range=172.28.5.0/24 \
--gateway=172.28.5.254 \
br0
If you omit the --gateway flag the Engine selects one for you from inside a preferred pool.
For overlay networks and for network driver plugins that support it you can create multiple
subnetworks. This example uses two /25 subnet masks to adhere to the current guidance of not
having more than 256 IPs in a single overlay network. Each of the subnetworks has 126 usable
addresses.
$ docker network create -d overlay \
--subnet=192.168.1.0/25 \
--subnet=192.170.2.0/25 \
--gateway=192.168.1.100 \
--gateway=192.170.2.100 \
--aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \
--aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \
my-multihost-network
Be sure that your subnetworks do not overlap. If they do, the network create fails and Engine returns
an error.
The following arguments can be passed to docker network create for any network driver, again with
their approximate equivalents to docker daemon.
Argument Equivalent Description
For example, let’s use -o or --opt options to specify an IP address binding when publishing ports:
$ docker network create \
-o "com.docker.network.bridge.host_binding_ipv4"="172.19.0.1" \
simple-network
Description
Disconnect a container from a network
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker network disconnect [OPTIONS] NETWORK CONTAINER
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Disconnects a container from a network. The container must be running to disconnect it from the
network.
Examples
$ docker network disconnect multi-host-network container1
Description
Display detailed information on one or more networks
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker network inspect [OPTIONS] NETWORK [NETWORK...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker network ls
Description
List networks
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker network ls [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Lists all the networks the Engine daemon knows about. This includes the networks that span across
multiple hosts in a cluster.
Examples
List all networks
$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
7fca4eb8c647 bridge bridge local
9f904ee27bf5 none null local
cf03ee007fb4 host host local
78b03ee04fc4 multi-host overlay swarm
Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz"). Multiple filter flags are combined
as an OR filter. For example, -f type=custom -f type=builtin returns
both custom and builtin networks.
driver
id (network’s id)
label (label=<key> or label=<key>=<value>)
name (network’s name)
scope (swarm|global|local)
type (custom|builtin)
DRIVER
ID
LABEL
The label filter matches networks based on the presence of a label alone or a label and a value.
The following filter matches networks with the usage label regardless of its value.
$ docker network ls -f "label=usage"
NETWORK ID NAME DRIVER SCOPE
db9db329f835 test1 bridge local
f6e212da9dfd test2 bridge local
The following filter matches networks with the usage label with the prod value.
$ docker network ls -f "label=usage=prod"
NETWORK ID NAME DRIVER SCOPE
f6e212da9dfd test2 bridge local
NAME
SCOPE
TYPE
The type filter supports two values; builtin displays predefined networks (bridge, none, host),
whereas custom displays user defined networks.
This flag allows for batch cleanup. For example, use this filter to delete all user defined
networks:
$ docker network rm `docker network ls --filter type=custom -q`
A warning will be issued when trying to remove a network that has containers attached.
Formatting
The formatting options (--format) pretty-prints networks output using a Go template.
Placeholder Description
.ID Network ID
When using the --format option, the network ls command will either output the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the ID and Driver entries
separated by a colon for all networks:
$ docker network ls --format "{{.ID}}: {{.Driver}}"
afaaab448eb2: bridge
d1584f8dc718: host
391df270dc66: null
Description
Remove all unused networks
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker network prune [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Remove all unused networks. Unused networks are those which are not referenced by any
containers.
Examples
$ docker network prune
WARNING! This will remove all networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
n1
n2
Filtering
The filtering flag (--filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
The until filter can be Unix timestamps, date formatted timestamps, or Go duration strings
(e.g. 10m, 1h30m) computed relative to the daemon machine’s time. Supported formats for date
formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the daemon will
be used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp.
When providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of
seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds
(aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more
than nine digits long.
The label filter accepts two formats. One is the label=... (label=<key> or label=<key>=<value>),
which removes networks with the specified labels. The other format is
the label!=... (label!=<key> or label!=<key>=<value>), which removes networks without the
specified labels.
The following removes networks created more than 5 minutes ago. Note that system networks such
as bridge, host, and none will never be pruned:
$ docker network ls
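A hedged sketch of the prune invocation described above, using the until filter (the confirmation prompt is omitted here):
$ docker network prune --filter "until=5m"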
Deleted Networks:
foo-1-day-ago
$ docker network ls
Description
Remove one or more networks
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker network rm NETWORK [NETWORK...]
Parent command
Command Description
Related commands
Command Description
Examples
Remove a network
To remove the network named ‘my-network’:
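For example:
$ docker network rm my-network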
When you specify multiple networks, the command attempts to delete each in turn. If the deletion of
one network fails, the command continues to the next on the list and tries to delete that. The
command reports success or failure for each deletion.
docker node
Description
Manage Swarm nodes
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker node COMMAND
Child commands
Command Description
docker node demote Demote one or more nodes from manager in the swarm
docker node promote Promote one or more nodes to manager in the swarm
docker node ps List tasks running on one or more nodes, defaults to current node
Parent command
Command Description
Extended description
Manage nodes.
Description
Demote one or more nodes from manager in the swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker node demote NODE [NODE...]
Parent command
Command Description
Related commands
Command Description
docker node demote Demote one or more nodes from manager in the swarm
docker node promote Promote one or more nodes to manager in the swarm
docker node ps List tasks running on one or more nodes, defaults to current node
Extended description
Demotes an existing manager so that it is no longer a manager. This command targets a docker
engine that is a manager in the swarm.
Examples
$ docker node demote <node name>
Description
Display detailed information on one or more nodes
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker node inspect [OPTIONS] self|NODE [NODE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker node demote Demote one or more nodes from manager in the swarm
docker node promote Promote one or more nodes to manager in the swarm
docker node ps List tasks running on one or more nodes, defaults to current node
Extended description
Returns information about a node. By default, this command renders all results in a JSON array. You
can specify an alternate format to execute a given template for each result.
Go’s text/template package describes all the details of the format.
Examples
Inspect a node
$ docker node inspect swarm-manager
[
{
"ID": "e216jshn25ckzbvmwlnh5jr3g",
"Version": {
"Index": 10
},
"CreatedAt": "2017-05-16T22:52:44.9910662Z",
"UpdatedAt": "2017-05-16T22:52:45.230878043Z",
"Spec": {
"Role": "manager",
"Availability": "active"
},
"Description": {
"Hostname": "swarm-manager",
"Platform": {
"Architecture": "x86_64",
"OS": "linux"
},
"Resources": {
"NanoCPUs": 1000000000,
"MemoryBytes": 1039843328
},
"Engine": {
"EngineVersion": "17.06.0-ce",
"Plugins": [
{
"Type": "Volume",
"Name": "local"
},
{
"Type": "Network",
"Name": "overlay"
},
{
"Type": "Network",
"Name": "null"
},
{
"Type": "Network",
"Name": "host"
},
{
"Type": "Network",
"Name": "bridge"
},
{
"Type": "Network",
"Name": "overlay"
}
]
},
"TLSInfo": {
"TrustRoot": "-----BEGIN CERTIFICATE-----
\nMIIBazCCARCgAwIBAgIUOzgqU4tA2q5Yv1HnkzhSIwGyIBswCgYIKoZIzj0EAwIw\nEzERMA8GA1UEAxMIc
3dhcm0tY2EwHhcNMTcwNTAyMDAyNDAwWhcNMzcwNDI3MDAy\nNDAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZ
MBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABMbiAmET+HZyve35ujrnL2kOLBEQhFDZ5MhxAuYs96n796sFlfx
TxC1lM/2g\nAh8DI34pm3JmHgZxeBPKUURJHKWjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB\nAf8EBTAD
AQH/MB0GA1UdDgQWBBS3sjTJOcXdkls6WSY2rTx1KIJueTAKBggqhkjO\nPQQDAgNJADBGAiEAoeVWkaXgSUA
ucQmZ3Yhmx22N/cq1EPBgYHOBZmHt0NkCIQC3\nzONcJ/+WA21OXtb+vcijpUOXtNjyHfcox0N8wsLDqQ==\n
-----END CERTIFICATE-----\n",
"CertIssuerSubject": "MBMxETAPBgNVBAMTCHN3YXJtLWNh",
"CertIssuerPublicKey":
"MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExuICYRP4dnK97fm6OucvaQ4sERCEUNnkyHEC5iz3qfv3qwWV
/FPELWUz/aACHwMjfimbcmYeBnF4E8pRREkcpQ=="
}
},
"Status": {
"State": "ready",
"Addr": "168.0.32.137"
},
"ManagerStatus": {
"Leader": true,
"Reachability": "reachable",
"Addr": "168.0.32.137:2377"
}
}
]
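A hedged sketch of specifying a format template (the field path follows the JSON structure shown above; the output depends on the node being inspected):
$ docker node inspect --format '{{ .ManagerStatus.Leader }}' self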
docker node ls
Description
List nodes in the swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker node ls [OPTIONS]
Options
Name, shorthand Default Description
Related commands
Command Description
docker node demote Demote one or more nodes from manager in the swarm
docker node promote Promote one or more nodes to manager in the swarm
docker node ps List tasks running on one or more nodes, defaults to current node
Extended description
Lists all the nodes that the Docker Swarm manager knows about. You can filter using the -f or
--filter flag. Refer to the filtering section for more information about available filter options.
Examples
$ docker node ls
Filtering
The filtering flag (-f or --filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
id
label
membership
name
role
ID
LABEL
The label filter matches nodes based on engine labels and on the presence of a label alone or
a label and a value. Node labels are currently not used for filtering.
The following filter matches nodes with the foo label regardless of its value.
$ docker node ls -f "label=foo"
MEMBERSHIP
The membership filter matches nodes based on the presence of a membership and a value
of accepted or pending.
The following filter matches nodes with the membership of accepted.
$ docker node ls -f "membership=accepted"
NAME
ROLE
The role filter matches nodes based on the presence of a role and a value worker or manager.
The following filter matches nodes with the manager role.
$ docker node ls -f "role=manager"
Formatting
The formatting options (--format) pretty-prints nodes output using a Go template.
Placeholder   Description
.ID           Node ID
.Self         Node of the daemon (true/false, true indicates that the node is the same as the current docker daemon)
.TLSStatus    TLS status of the node (“Ready”, or “Needs Rotation” if the TLS certificate is signed by an old CA)
When using the --format option, the node ls command will either output the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the ID, Hostname, and TLS
Status entries separated by a colon for all nodes:
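A minimal sketch of such an invocation, assuming the .ID, .Hostname, and .TLSStatus placeholders:
$ docker node ls --format "{{.ID}}: {{.Hostname}}: {{.TLSStatus}}"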
Description
Promote one or more nodes to manager in the swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker node promote NODE [NODE...]
Parent command
Command Description
Related commands
Command Description
docker node demote Demote one or more nodes from manager in the swarm
docker node promote Promote one or more nodes to manager in the swarm
docker node ps List tasks running on one or more nodes, defaults to current node
Extended description
Promotes a node to manager. This command can only be executed on a manager node.
Examples
$ docker node promote <node name>
docker node ps
Description
List tasks running on one or more nodes, defaults to current node
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker node ps [OPTIONS] [NODE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker node demote Demote one or more nodes from manager in the swarm
docker node promote Promote one or more nodes to manager in the swarm
docker node ps List tasks running on one or more nodes, defaults to current node
Extended description
Lists all the tasks on a Node that Docker knows about. You can filter using the -f or --filter flag.
Refer to the filtering section for more information about available filter options.
Examples
$ docker node ps swarm-manager1
NAME                                 IMAGE         NODE             DESIRED STATE   CURRENT STATE
redis.1.7q92v0nr1hcgts2amcjyqg3pq    redis:3.0.6   swarm-manager1   Running         Running 5 hours
redis.6.b465edgho06e318egmgjbqo4o    redis:3.0.6   swarm-manager1   Running         Running 29 seconds
redis.7.bg8c07zzg87di2mufeq51a2qp    redis:3.0.6   swarm-manager1   Running         Running 5 seconds
redis.9.dkkual96p4bb3s6b10r7coxxt    redis:3.0.6   swarm-manager1   Running         Running 5 seconds
redis.10.0tgctg8h8cech4w0k0gwrmr23   redis:3.0.6   swarm-manager1   Running         Running 5 seconds
Filtering
The filtering flag (-f or --filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
name
id
label
desired-state
NAME
ID
LABEL
The label filter matches tasks based on the presence of a label alone or a label and a value.
The following filter matches tasks with the usage label regardless of its value.
$ docker node ps -f "label=usage"
DESIRED-STATE
The desired-state filter can take the values running, shutdown, or accepted.
Formatting
The formatting options (--format) pretty-prints tasks output using a Go template.
Placeholder Description
.ID Task ID
.Node Node ID
.Error Error
When using the --format option, the node ps command will either output the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the Name and Image entries
separated by a colon for all tasks:
$ docker node ps --format "{{.Name}}: {{.Image}}"
top.1: busybox
top.2: busybox
top.3: busybox
docker node rm
Description
Remove one or more nodes from the swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker node rm [OPTIONS] NODE [NODE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker node demote Demote one or more nodes from manager in the swarm
docker node promote Promote one or more nodes to manager in the swarm
docker node ps List tasks running on one or more nodes, defaults to current node
Extended description
When run from a manager node, removes the specified nodes from a swarm.
Examples
Remove a stopped node from the swarm
$ docker node rm swarm-node-02
Error response from daemon: rpc error: code = 9 desc = node swarm-node-03 is not down and can't be removed
A manager node must be demoted to a worker node (using docker node demote) before you can
remove it from the swarm.
Description
Update a node
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker node update [OPTIONS] NODE
Options
Name, shorthand Default Description
Related commands
Command Description
docker node demote Demote one or more nodes from manager in the swarm
docker node promote Promote one or more nodes to manager in the swarm
docker node ps List tasks running on one or more nodes, defaults to current node
Extended description
Update metadata about a node, such as its availability, labels, or roles.
Examples
Add label metadata to a node
Add metadata to a swarm node using node labels. You can specify a node label as a key with an
empty value:
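For example, a minimal sketch (the node and label names are illustrative):
$ docker node update --label-add foo worker1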
To add multiple labels to a node, pass the --label-add flag for each label:
$ docker node update --label-add foo --label-add bar worker1
When you create a service, you can use node labels as a constraint. A constraint limits the nodes
where the scheduler deploys tasks for a service.
For example, to add a type label to identify nodes where the scheduler should deploy message
queue service tasks:
$ docker node update --label-add type=queue worker1
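Once the label exists, a service can reference it as a placement constraint. A hedged sketch (the service and image names are hypothetical):
$ docker service create --name myqueue --constraint 'node.labels.type == queue' myqueueimage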
The labels you set for nodes using docker node update apply only to the node entity within the
swarm. Do not confuse them with the docker daemon labels for dockerd.
docker pause
Description
Pause all processes within one or more containers
Usage
docker pause CONTAINER [CONTAINER...]
Parent command
Command Description
Extended description
The docker pause command suspends all processes in the specified containers. On Linux, this uses
the cgroups freezer. Traditionally, when suspending a process the SIGSTOP signal is used, which is
observable by the process being suspended. With the cgroups freezer the process is unaware, and
unable to capture, that it is being suspended, and subsequently resumed. On Windows, only Hyper-V
containers can be paused.
See the cgroups freezer documentation for further details.
Examples
$ docker pause my_container
docker plugin
Description
Manage plugins
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin COMMAND
Child commands
Command Description
docker plugin create    Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable   Disable a plugin
docker plugin enable    Enable a plugin
docker plugin inspect   Display detailed information on one or more plugins
docker plugin install   Install a plugin
docker plugin push      Push a plugin to a registry
docker plugin upgrade   Upgrade an existing plugin
Parent command
Command Description
Extended description
Manage plugins.
Description
Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and
rootfs directory.
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin create [OPTIONS] PLUGIN PLUGIN-DATA-DIR
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker plugin create    Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable   Disable a plugin
docker plugin enable    Enable a plugin
docker plugin inspect   Display detailed information on one or more plugins
docker plugin install   Install a plugin
docker plugin push      Push a plugin to a registry
docker plugin upgrade   Upgrade an existing plugin
Extended description
Creates a plugin. Before creating the plugin, prepare the plugin’s root filesystem as well as the
config.json
Examples
The following example shows how to create a sample plugin.
$ ls -ls /home/pluginDir
total 4
4 -rw-r--r-- 1 root root 431 Nov 7 01:40 config.json
0 drwxr-xr-x 19 root root 420 Nov 7 01:40 rootfs
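The plugin can then be created from that directory; a sketch (the plugin name plugin matches the output below):
$ docker plugin create plugin /home/pluginDir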
plugin
$ docker plugin ls
The plugin can subsequently be enabled for local use or pushed to the public registry.
Description
Disable a plugin
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin disable [OPTIONS] PLUGIN
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin push       Push a plugin to a registry
docker plugin upgrade    Upgrade an existing plugin
Extended description
Disables a plugin. The plugin must be installed before it can be disabled, see docker plugin
install. Without the -f option, a plugin that has references (e.g., volumes, networks) cannot be
disabled.
Examples
The following example shows that the sample-volume-plugin plugin is installed and enabled:
$ docker plugin ls
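The plugin can then be disabled by name; for example:
$ docker plugin disable tiborvass/sample-volume-plugin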
$ docker plugin ls
Description
Enable a plugin
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin enable [OPTIONS] PLUGIN
Options
Name, shorthand Default Description
Parent command
Command Description
docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin push       Push a plugin to a registry
docker plugin upgrade    Upgrade an existing plugin
Extended description
Enables a plugin. The plugin must be installed before it can be enabled, see docker plugin
install.
Examples
The following example shows that the sample-volume-plugin plugin is installed, but disabled:
$ docker plugin ls
ID                  NAME                                    TAG                 DESCRIPTION                  ENABLED
69553ca1d123        tiborvass/sample-volume-plugin          latest              A test plugin for Docker     false
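The plugin can then be enabled by name, which prints the plugin reference shown below; a sketch:
$ docker plugin enable tiborvass/sample-volume-plugin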
tiborvass/sample-volume-plugin
$ docker plugin ls
Description
Display detailed information on one or more plugins
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin inspect [OPTIONS] PLUGIN [PLUGIN...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin push       Push a plugin to a registry
docker plugin upgrade    Upgrade an existing plugin
Extended description
Returns information about a plugin. By default, this command renders all results in a JSON array.
Examples
$ docker plugin inspect tiborvass/sample-volume-plugin:latest
{
"Id": "8c74c978c434745c3ade82f1bc0acf38d04990eaf494fa507c16d9f1daa99c21",
"Name": "tiborvass/sample-volume-plugin:latest",
"PluginReference": "tiborvas/sample-volume-plugin:latest",
"Enabled": true,
"Config": {
"Mounts": [
{
"Name": "",
"Description": "",
"Settable": null,
"Source": "/data",
"Destination": "/data",
"Type": "bind",
"Options": [
"shared",
"rbind"
]
},
{
"Name": "",
"Description": "",
"Settable": null,
"Source": null,
"Destination": "/foobar",
"Type": "tmpfs",
"Options": null
}
],
"Env": [
"DEBUG=1"
],
"Args": null,
"Devices": null
},
"Manifest": {
"ManifestVersion": "v0",
"Description": "A test plugin for Docker",
"Documentation": "https://docs.docker.com/engine/extend/plugins/",
"Interface": {
"Types": [
"docker.volumedriver/1.0"
],
"Socket": "plugins.sock"
},
"Entrypoint": [
"plugin-sample-volume-plugin",
"/data"
],
"Workdir": "",
"User": {
},
"Network": {
"Type": "host"
},
"Capabilities": null,
"Mounts": [
{
"Name": "",
"Description": "",
"Settable": null,
"Source": "/data",
"Destination": "/data",
"Type": "bind",
"Options": [
"shared",
"rbind"
]
},
{
"Name": "",
"Description": "",
"Settable": null,
"Source": null,
"Destination": "/foobar",
"Type": "tmpfs",
"Options": null
}
],
"Devices": [
{
"Name": "device",
"Description": "a host device to mount",
"Settable": null,
"Path": "/dev/cpu_dma_latency"
}
],
"Env": [
{
"Name": "DEBUG",
"Description": "If set, prints debug messages",
"Settable": null,
"Value": "1"
}
],
"Args": {
"Name": "args",
"Description": "command line arguments",
"Settable": null,
"Value": [
]
}
}
}
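A Go template can be passed with -f/--format to print a single field, for example the Id shown in the JSON above (a sketch):
$ docker plugin inspect -f '{{.Id}}' tiborvass/sample-volume-plugin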
8c74c978c434745c3ade82f1bc0acf38d04990eaf494fa507c16d9f1daa99c21
Description
Install a plugin
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin install [OPTIONS] PLUGIN [KEY=VALUE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin push       Push a plugin to a registry
docker plugin upgrade    Upgrade an existing plugin
Extended description
Installs and enables a plugin. Docker looks first for the plugin on your Docker host. If the plugin does
not exist locally, then the plugin is pulled from the registry. Note that the minimum required registry
version to distribute plugins is 2.3.0
Examples
The following example installs the vieux/sshfs plugin and sets its DEBUG environment variable to 1. The install pulls the plugin from Docker Hub, prompts the user to accept the list of privileges that the plugin needs, sets the plugin's parameters, and enables the plugin.
$ docker plugin install vieux/sshfs DEBUG=1
$ docker plugin ls
docker plugin ls
Description
List plugins
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin ls [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin push       Push a plugin to a registry
docker plugin upgrade    Upgrade an existing plugin
Extended description
Lists all the plugins that are currently installed. You can install plugins using the docker plugin
install command. You can also filter using the -f or --filter flag. Refer to the filtering section for
more information about available filter options.
Examples
$ docker plugin ls
Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").
ENABLED
CAPABILITY
The capability filter matches on plugin capabilities. One plugin might have multiple capabilities.
Currently volumedriver, networkdriver, ipamdriver, logdriver, metricscollector, and authz are
supported capabilities.
$ docker plugin install --disable vieux/sshfs
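For example, sketches of the enabled and capability filters:
$ docker plugin ls --filter enabled=true
$ docker plugin ls --filter capability=volumedriver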
Formatting
The formatting options (--format) pretty-prints plugins output using a Go template.
Placeholder Description
.ID Plugin ID
Placeholder Description
When using the --format option, the plugin ls command will either output the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the ID and Name entries
separated by a colon for all plugins:
$ docker plugin ls --format "{{.ID}}: {{.Name}}"
4be01827a72e: vieux/sshfs:latest
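With the table directive, column headers are included as well; a sketch:
$ docker plugin ls --format "table {{.ID}}\t{{.Name}}"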
docker plugin rm
Description
Remove one or more plugins
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin rm [OPTIONS] PLUGIN [PLUGIN...]
Options
Name, shorthand Default Description
Related commands
Command Description
docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin push       Push a plugin to a registry
docker plugin upgrade    Upgrade an existing plugin
Extended description
Removes a plugin. You cannot remove a plugin if it is enabled, you must disable a plugin using
the docker plugin disable before removing it (or use --force, use of force is not recommended,
since it can affect functioning of running containers using the plugin).
Examples
The following example disables and removes the sample-volume-plugin:latest plugin:
$ docker plugin disable tiborvass/sample-volume-plugin
tiborvass/sample-volume-plugin
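The disabled plugin can then be removed by name; for example:
$ docker plugin rm tiborvass/sample-volume-plugin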
tiborvass/sample-volume-plugin
Description
Change settings for a plugin
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin set PLUGIN KEY=VALUE [KEY=VALUE...]
Parent command
Command Description
docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin push       Push a plugin to a registry
docker plugin upgrade    Upgrade an existing plugin
Extended description
Change settings for a plugin. The plugin must be disabled.
env variables
source of mounts
path of devices
args
Examples
Change an environment variable
The following example changes the env variable DEBUG on the sample-volume-plugin plugin.
$ docker plugin inspect -f {{.Settings.Env}} tiborvass/sample-volume-plugin
[DEBUG=0]
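A sketch of changing the variable while the plugin is disabled (the value 1 is illustrative); repeating the inspect command should then report [DEBUG=1]:
$ docker plugin set tiborvass/sample-volume-plugin DEBUG=1
$ docker plugin inspect -f {{.Settings.Env}} tiborvass/sample-volume-plugin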
Note: Since only source is settable in mymount, docker plugin set mymount=/bar myplugin would
work too.
In the same way, a settable device path can be changed (for example from /dev/foo to /dev/bar), and settable arguments can be updated (for example to ["foo", "bar"]).
Description
Upgrade an existing plugin
API 1.26+ The client and daemon API must both be at least 1.26 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker plugin upgrade [OPTIONS] PLUGIN [REMOTE]
Options
Name, shorthand Default Description
--disable-content-trust    true    Skip image verification
--grant-all-permissions            Grant all permissions necessary to run the plugin
Parent command
Command Description
Related commands
Command Description
docker plugin create     Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
docker plugin disable    Disable a plugin
docker plugin enable     Enable a plugin
docker plugin inspect    Display detailed information on one or more plugins
docker plugin install    Install a plugin
docker plugin push       Push a plugin to a registry
docker plugin upgrade    Upgrade an existing plugin
Extended description
Upgrades an existing plugin to the specified remote plugin image. If no remote is specified, Docker
will re-pull the current image and use the updated version. All existing references to the plugin will
continue to work. The plugin must be disabled before running the upgrade.
Examples
The following example installs the vieux/sshfs plugin, uses it to create and use a volume, then
upgrades the plugin.
$ docker plugin install vieux/sshfs DEBUG=1
sshvolume
# Here docker volume ls doesn't show 'sshvolume', since the plugin is disabled
$ docker volume ls
vieux/sshfs:next
$ docker volume ls
hello
docker port
Description
List port mappings or a specific mapping for the container
Usage
docker port CONTAINER [PRIVATE_PORT[/PROTO]]
Parent command
Command Description
Examples
Show all mapped ports
You can find out all the ports mapped by not specifying a PRIVATE_PORT, or just a specific mapping:
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                            NAMES
b650456536c7        busybox:latest      top                 54 minutes ago      Up 54 minutes       0.0.0.0:1234->9876/tcp, 0.0.0.0:4321->7890/tcp   test
$ docker port test
7890/tcp -> 0.0.0.0:4321
9876/tcp -> 0.0.0.0:1234
$ docker port test 7890/tcp
0.0.0.0:4321
$ docker port test 7890/udp
2014/06/24 11:53:36 Error: No public port '7890/udp' published for test
$ docker port test 7890
0.0.0.0:4321
docker ps
Description
List containers
Usage
docker ps [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
$ docker ps -a
docker ps groups exposed ports into a single range if possible. E.g., a container that exposes TCP
ports 100, 101, 102 displays 100-102/tcp in the PORTS column.
Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz")
Filter              Description
id                  Container’s ID
exited              An integer representing the container’s exit code. Only useful with --all.
before or since     Filters containers created before or after a given container ID or name
LABEL
The label filter matches containers based on the presence of a label alone or a label and a value.
The following filter matches containers with the color label regardless of its value.
$ docker ps --filter "label=color"
The following filter matches containers with the color label with the blue value.
$ docker ps --filter "label=color=blue"
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
d85756f57265        busybox             "top"               About a minute ago   Up About a minute                       high_albattani
NAME
EXITED
The exited filter matches containers by exit status code. For example, to filter for containers that
have exited successfully:
$ docker ps -a --filter 'exited=0'
You can use a filter to locate containers that exited with status of 137 meaning a SIGKILL(9) killed
them.
$ docker ps -a --filter 'exited=137'
STATUS
ANCESTOR
The ancestor filter matches containers based on their image or a descendant of it. The filter supports
the following image representation:
image
image:tag
image:tag@digest
short-id
full-id
If you don’t specify a tag, the latest tag is used. For example, to filter for containers that use the
latest ubuntu image:
$ docker ps --filter ancestor=ubuntu
Match containers based on the ubuntu-c1 image which, in this case, is a child of ubuntu:
$ docker ps --filter ancestor=ubuntu-c1
The following matches containers based on the layer d0e008c6cf02 or an image that has this layer
in its layer stack.
$ docker ps --filter ancestor=d0e008c6cf02
CREATE TIME
before
The before filter shows only containers created before the container with given id or name. For
example, having these containers created:
$ docker ps
since
The since filter shows only containers created since the container with given id or name. For
example, with the same containers as in before filter:
$ docker ps -f since=6e63f6ff38b0
VOLUME
The volume filter shows only containers that mount a specific volume or have a volume mounted in a
specific path:
$ docker ps --filter volume=remote-volume --format "table {{.ID}}\t{{.Mounts}}"
CONTAINER ID MOUNTS
9c3527ed70ce remote-volume
NETWORK
The network filter shows only containers that are connected to a network with a given name or id.
The following filter matches all containers that are connected to a network with a name
containing net1.
$ docker run -d --net=net1 --name=test1 ubuntu top
$ docker run -d --net=net2 --name=test2 ubuntu top
$ docker ps --filter network=net1
The network filter matches on both the network’s name and id. The following example shows all
containers that are attached to the net1 network, using the network id as a filter;
$ docker network inspect --format "{{.ID}}" net1
8c0b4110ae930dbe26b258de9bc34a03f98056ed6f27f991d32919bfe401d7c5
$ docker ps --filter
network=8c0b4110ae930dbe26b258de9bc34a03f98056ed6f27f991d32919bfe401d7c5
The publish and expose filters show only containers that have published or exposed port with a
given port number, port range, and/or protocol. The default protocol is tcp when not specified.
The following filter matches all containers that have published port 80:
$ docker ps --filter publish=80
The following filter matches all containers that have exposed TCP port in the range of 8000-8080:
$ docker ps --filter expose=8000-8080/tcp
The following filter matches all containers that have published UDP port 80:
$ docker ps --filter publish=80/udp
Formatting
The formatting option (--format) pretty-prints container output using a Go template.
Placeholder Description
.ID Container ID
.Image Image ID
When using the --format option, the ps command will either output the data exactly as the template
declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the ID and Command entries
separated by a colon for all running containers:
$ docker ps --format "{{.ID}}: {{.Command}}"
To list all running containers with their labels in a table format you can use:
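A sketch, assuming the .Labels placeholder:
$ docker ps --format "table {{.ID}}\t{{.Labels}}"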
CONTAINER ID        LABELS
a87ecb4f327c        com.docker.swarm.node=ubuntu,com.docker.swarm.storage=ssd
01946d9d34d8
c1d3b0166030        com.docker.swarm.node=debian,com.docker.swarm.cpu=6
41d50ecd2f57        com.docker.swarm.node=fedora,com.docker.swarm.cpu=3,com.docker.swarm.storage=ssd
docker pull
Description
Pull an image or a repository from a registry
Usage
docker pull [OPTIONS] NAME[:TAG|@DIGEST]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Most of your images will be created on top of a base image from the Docker Hub registry.
Docker Hub contains many pre-built images that you can pull and try without needing to define and
configure your own.
To download a particular image, or set of images (i.e., a repository), use docker pull.
Proxy configuration
If you are behind an HTTP proxy server, for example in corporate settings, you may need to
configure the Docker daemon’s proxy settings before it can open a connection to a registry, using
the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables. To set these environment
variables on a host using systemd, refer to control and configure Docker with systemd for how to
configure these variables.
Concurrent downloads
By default the Docker daemon will pull three layers of an image at a time. If you are on a low
bandwidth connection this may cause timeout issues and you may want to lower this via the --max-
concurrent-downloads daemon option. See the daemon documentation for more details.
Examples
Pull an image from Docker Hub
To download a particular image, or set of images (i.e., a repository), use docker pull. If no tag is
provided, Docker Engine uses the :latest tag as a default. This command pulls
the debian:latest image:
$ docker pull debian
Docker images can consist of multiple layers. In the example above, the image consists of two
layers; fdd5d7827f33 and a3ed95caeb02.
Layers can be reused by images. For example, the debian:jessie image shares both layers
with debian:latest. Pulling the debian:jessie image therefore only pulls its metadata, but not its
layers, because all layers are already present locally:
$ docker pull debian:jessie
jessie: Pulling from library/debian
fdd5d7827f33: Already exists
a3ed95caeb02: Already exists
Digest: sha256:a9c958be96d7d40df920e7041608f2f017af81800ca5ad23e327bc402626b58e
Status: Downloaded newer image for debian:jessie
To see which images are present locally, use the docker images command:
$ docker images
Docker uses a content-addressable image store, and the image ID is a SHA256 digest covering the
image’s configuration and layers. In the example above, debian:jessie and debian:latest have the
same image ID because they are actually the same image tagged with different names. Because
they are the same image, their layers are stored only once and do not consume extra disk space.
For more information about images, layers, and the content-addressable store, refer to understand
images, containers, and storage drivers.
In some cases you don’t want images to be updated to newer versions, but prefer to use a fixed
version of an image. Docker enables you to pull an image by its digest. When pulling an image by
digest, you specify exactly which version of an image to pull. Doing so, allows you to “pin” an image
to that version, and guarantee that the image you’re using is always the same.
To know the digest of an image, pull the image first. Let’s pull the latest ubuntu:14.04 image from
Docker Hub:
$ docker pull ubuntu:14.04
14.04: Pulling from library/ubuntu
5a132a7e7af1: Pull complete
fd2731e4c50c: Pull complete
28a2f68d1120: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
Status: Downloaded newer image for ubuntu:14.04
Docker prints the digest of the image after the pull has finished. In the example above, the digest of
the image is:
sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
Docker also prints the digest of an image when pushing to a registry. This may be useful if you want
to pin to a version of the image you just pushed.
A digest takes the place of the tag when pulling an image, for example, to pull the above image by
digest, run the following command:
$ docker pull
ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
Docker uses the https:// protocol to communicate with a registry, unless the registry is allowed to
be accessed over an insecure connection. Refer to the insecure registries section for more
information.
Cancel a pull
Killing the docker pull process, for example by pressing CTRL-c while it is running in a terminal, will
terminate the pull operation.
$ docker pull fedora
Note: Technically, the Engine terminates a pull operation when the connection between the Docker
Engine daemon and the Docker Engine client initiating the pull is lost. If the connection with the
Engine daemon is lost for other reasons than a manual interaction, the pull is also aborted.
docker push
Description
Push an image or a repository to a registry
Usage
docker push [OPTIONS] NAME[:TAG]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Use docker push to share your images to the Docker Hub registry or to a self-hosted one.
Refer to the docker tag reference for more information about valid image and tag names.
Killing the docker push process, for example by pressing CTRL-c while it is running in a terminal,
terminates the push operation.
Progress bars are shown during docker push, which show the uncompressed size. The actual
amount of data that’s pushed will be compressed before sending, so the uploaded size will not be
reflected by the progress bar.
Concurrent uploads
By default the Docker daemon will push five layers of an image at a time. If you are on a low
bandwidth connection this may cause timeout issues and you may want to lower this via the --max-
concurrent-uploads daemon option. See the daemon documentation for more details.
Examples
Push a new image to a registry
First save the new image by finding the container ID (using docker ps) and then committing it to a
new image name. Note that only a-z0-9-_. are allowed when naming images:
$ docker commit c16378f943fe rhel-httpd
Now, push the image to the registry using the image ID. In this example the registry is on host
named registry-host and listening on port 5000. To do this, tag the image with the host name or IP
address, and the port of the registry:
$ docker tag rhel-httpd registry-host:5000/myadmin/rhel-httpd
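The image can then be pushed to that registry; for example:
$ docker push registry-host:5000/myadmin/rhel-httpd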
$ docker images
docker registry
Description
Manage Docker registries
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker registry COMMAND
Child commands
Command Description
Parent command
Command Description
Description
List registry events (DTR Only)
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker registry events HOST | REPOSITORY [OPTIONS]
Options
Name, shorthand     Default     Description
Related commands
Command Description
Extended description
List registry events (Only supported by Docker Trusted Registry)
Description
Inspect registry image history (DTR Only)
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker registry history IMAGE [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Display information about a registry (DTR Only)
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker registry info HOST [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Inspect registry image
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker registry inspect IMAGE [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
List registry job logs (DTR Only)
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker registry joblogs HOST JOB_ID [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker registry ls
Description
List registry images
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker registry ls REPOSITORY[:TAG] [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Remove a registry image (DTR Only)
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker registry rmi REPOSITORY:TAG [OPTIONS]
Parent command
Command Description
Related commands
Command Description
docker rename
Description
Rename a container
Usage
docker rename CONTAINER NEW_NAME
Parent command
Command Description
Extended description
The docker rename command renames a container.
Examples
$ docker rename my_container my_new_container
docker restart
Description
Restart one or more containers
Usage
docker restart [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Parent command
Command Description
Examples
$ docker restart my_container
docker rm
Description
Remove one or more containers
Usage
docker rm [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Parent command
Command Description
Examples
Remove a container
This will remove the container referenced under the link /redis.
$ docker rm /redis
/redis
/webapp/redis
redis
The main process inside the container referenced under the link redis will receive SIGKILL, then the
container will be removed.
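For example, a sketch:
$ docker rm $(docker ps -a -q)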
This command will delete all stopped containers. The command docker ps -a -q will return all
existing container IDs and pass them to the rm command which will delete them. Any running
containers will not be deleted.
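A sketch using a named volume for /foo and an anonymous volume for /bar (the names are illustrative):
$ docker create -v awesome:/foo -v /bar --name hello redis
$ docker rm -v hello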
This command will remove the container and any volumes associated with it. Note that if a volume
was specified with a name, it will not be removed.
In this example, the volume for /foo will remain intact, but the volume for /bar will be removed. The
same behavior holds for volumes inherited with --volumes-from.
docker rmi
Description
Remove one or more images
Usage
docker rmi [OPTIONS] IMAGE [IMAGE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Removes (and un-tags) one or more images from the host node. If an image has multiple tags, using
this command with the tag as a parameter only removes the tag. If the tag is the only one for the
image, both the image and the tag are removed.
This does not remove images from a registry. You cannot remove an image of a running container
unless you use the -f option. To see all images on a host use the docker image ls command.
Examples
You can remove an image using its short or long ID, its tag, or its digest. If an image has one or
more tags referencing it, you must remove all of them before the image is removed. Digest
references are removed automatically when an image is removed by tag.
$ docker images
Untagged: test1:latest
Untagged: test2:latest
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
test                latest              fd484f19954f        23 seconds ago      7 B (virtual 4.964 MB)
Untagged: test:latest
Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
If you use the -f flag and specify the image’s short or long ID, then this command untags and
removes all images that match the specified ID.
$ docker images
Untagged: test1:latest
Untagged: test:latest
Untagged: test2:latest
Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
$ docker rmi
localhost:5000/test/busybox@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a9
8caa0382cfbdbf
Untagged:
localhost:5000/test/busybox@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a9
8caa0382cfbdbf
Deleted: 4986bf8c15363d1c5d15512d5266f8777bfba4974ac56e3270e7760f6f0a8125
Deleted: ea13149945cb6b1e746bf28032f02e9b5a793523481a0a18645fc77ad53c4ea2
Deleted: df7546f9f060a2268024c8a230d8639878585defcc1bc6f79d2728a13957871b
docker run
Description
Run a command in a new container
Usage
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Options
Name, shorthand             Default     Description
--blkio-weight-device                   Block IO weight (relative device weight)
--cpu-rt-period                         Limit CPU real-time period in microseconds (API 1.25+)
--cpu-rt-runtime                        Limit CPU real-time runtime in microseconds (API 1.25+)
--cpus                                  Number of CPUs (API 1.25+)
--device-cgroup-rule                    Add a rule to the cgroup allowed devices list
--device-read-iops                      Limit read rate (IO per second) from a device
--device-write-bps                      Limit write rate (bytes per second) to a device
--device-write-iops                     Limit write rate (IO per second) to a device
--disable-content-trust     true        Skip image verification
--gpus                                  GPU devices to add to the container (‘all’ to pass all GPUs) (API 1.40+)
--health-start-period                   Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s) (API 1.29+)
--init                                  Run an init inside the container that forwards signals and reaps processes (API 1.25+)
--interactive , -i                      Keep STDIN open even if not attached
--io-maxiops                            Maximum IOps limit for the system drive (Windows only)
--memory-reservation                    Memory soft limit
--memory-swappiness         -1          Tune container memory swappiness (0 to 100)
--oom-kill-disable                      Disable OOM Killer
--publish-all , -P                      Publish all exposed ports to random ports
--stop-timeout                          Timeout (in seconds) to stop a container (API 1.25+)
Parent command
Command Description
Extended description
The docker run command first creates a writeable container layer over the specified image, and
then starts it using the specified command. That is, docker run is equivalent to the
API /containers/create then /containers/(id)/start. A stopped container can be restarted with
all its previous changes intact using docker start. See docker ps -a to view a list of all containers.
The docker run command can be used in combination with docker commit to change the command
that a container runs. There is additional detailed information about docker run in the Docker run
reference.
For information on connecting a container to a network, see the “Docker network overview”.
Examples
Assign name and allocate pseudo-TTY (--name, -it)
$ docker run --name test -it debian
root@d6c0fe130dba:/# exit 13
$ echo $?
13
$ docker ps -a | grep test
d6c0fe130dba        debian:7            "/bin/bash"         26 seconds ago      Exited (13) 17 seconds ago                       test
This example runs a container named test using the debian:latest image. The -it instructs Docker
to allocate a pseudo-TTY connected to the container’s stdin; creating an interactive bash shell in the
container. In the example, the bash shell is quit by entering exit 13. This exit code is passed on to
the caller of docker run, and is recorded in the test container’s metadata.
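For example, a sketch of the --cidfile flag (the file path is illustrative):
$ docker run --cidfile /tmp/docker_test.cid ubuntu echo "test"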
This will create a container and print test to the console. The cidfile flag makes Docker attempt to
create a new file and write the container ID to it. If the file exists already, Docker will return an error.
Docker will close this file when docker run exits.
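For example, attempting to mount a filesystem inside an unprivileged container (a sketch; the prompt and error text are illustrative):
$ docker run -t -i --rm ubuntu bash
root@<container-id>:/# mount -t tmpfs none /mnt
mount: permission denied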
This will not work, because by default, most potentially dangerous kernel capabilities are dropped;
including cap_sys_admin (which is required to mount filesystems). However, the --privileged flag
will allow it to run:
$ docker run -t -i --privileged ubuntu bash
root@50e3f57e16e6:/# mount -t tmpfs none /mnt
root@50e3f57e16e6:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 1.9G 0 1.9G 0% /mnt
The --privileged flag gives all capabilities to the container, and it also lifts all the limitations
enforced by the device cgroup controller. In other words, the container can then do almost
everything that the host can do. This flag exists to allow special use-cases, like running Docker
within Docker.
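For example, a sketch of the -w option:
$ docker run -w /path/to/dir/ -i -t ubuntu pwd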
The -w lets the command being executed inside directory given, here /path/to/dir/. If the path
does not exist it is created inside the container.
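For example, a sketch of --storage-opt (assuming a storage driver that supports it):
$ docker run --storage-opt size=120G fedora /bin/bash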
This (size) allows you to set the container rootfs size to 120G at creation time. This option is only
available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers. For
the devicemapper, btrfs, windowsfilter and zfs graph drivers, user cannot pass a size less than the
Default BaseFS Size. For the overlay2 storage driver, the size option is only available if the backing
fs is xfs and mounted with the pquota mount option. Under these conditions, user can pass any size
less than the backing fs size.
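For example, a sketch of --tmpfs with the default options listed below:
$ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image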
The --tmpfs flag mounts an empty tmpfs into the container with
the rw, noexec, nosuid, size=65536k options.
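For example, a sketch combining -v and -w with the current working directory:
$ docker run -v "$PWD:$PWD" -w "$PWD" -i -t ubuntu pwd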
The -v flag mounts the current working directory into the container. The -w lets the command being
executed inside the current working directory, by changing into the directory to the value returned
by pwd. So this combination executes the command using the container, but inside the current
working directory.
$ docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash
When the host directory of a bind-mounted volume doesn’t exist, Docker will automatically create
this directory on the host for you. In the example above, Docker will create the /doesnt/exist folder
before starting your container.
$ docker run --read-only -v /icanwrite busybox touch /icanwrite/here
Volumes can be used in combination with --read-only to control where a container writes files.
The --read-only flag mounts the container’s root filesystem as read only prohibiting writes to
locations other than the specified volumes for the container.
$ docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v /path/to/static-
docker-binary:/usr/bin/docker busybox sh
By bind-mounting the docker unix socket and statically linked docker binary (refer to get the linux
binary), you give the container the full access to create and manipulate the host’s Docker daemon.
The following examples will fail when using Windows-based containers, as the destination of a
volume or bind mount inside the container must be one of: a non-existing or empty directory; or a
drive other than C:. Further, the source of a bind mount must be a local directory, not a file.
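For example, a sketch of publishing a container port to a specific host address and port:
$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash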
This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also
specify udp and sctp ports. The Docker User Guide explains in detail how to manipulate ports in
Docker.
$ docker run --expose 80 ubuntu bash
This exposes port 80 of the container without publishing the port to the host system’s interfaces.
Use the -e, --env, and --env-file flags to set simple (non-array) environment variables in the
container you’re running, or overwrite variables that are defined in the Dockerfile of the image you’re
running.
You can define the variable and its value when running the container:
$ docker run --env VAR1=value1 --env VAR2=value2 ubuntu env | grep VAR
VAR1=value1
VAR2=value2
You can also use variables that you’ve exported to your local environment:
export VAR1=value1
export VAR2=value2
$ docker run --env VAR1 --env VAR2 ubuntu env | grep VAR
VAR1=value1
VAR2=value2
When running the command, the Docker CLI client checks the value the variable has in your local
environment and passes it to the container. If no = is provided and that variable is not exported in
your local environment, the variable won’t be set in the container.
You can also load the environment variables from a file. This file should use the
syntax <variable>=value (which sets the variable to the given value) or <variable> (which takes the
value from the local environment), and # for comments.
$ cat env.list
# This is a comment
VAR1=value1
VAR2=value2
USER
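For example, a sketch using the my-label key referenced below together with a key=value label:
$ docker run -l my-label --label com.example.foo=bar ubuntu bash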
The my-label key doesn’t specify a value so the label defaults to an empty string (""). To add
multiple labels, repeat the label flag (-l or --label).
The key=value must be unique to avoid overwriting the label value. If you specify labels with identical
keys but different values, each subsequent value overwrites the previous. Docker uses the
last key=value you supply.
Use the --label-file flag to load multiple labels from a file. Delimit each label in the file with an
EOL mark. The example below loads labels from a labels file in the current directory:
$ docker run --label-file ./labels ubuntu bash
The label-file format is similar to the format for loading environment variables. (Unlike environment
variables, labels are not visible to processes running inside a container.) The following example
illustrates a label-file format:
com.example.label1="a label"
# this is a comment
com.example.label2=another\ label
com.example.label3
For additional information on working with labels, see Labels - custom metadata in Docker in the
Docker User Guide.
You can also choose the IP addresses for the container with --ip and --ip6 flags when you start the
container on a user-defined network.
$ docker run -itd --network=my-net --ip=10.10.9.75 busybox
If you want to add a running container to a network use the docker network connect subcommand.
You can connect multiple containers to the same network. Once connected, the containers can
communicate easily using only another container’s IP address or name. For overlay networks or
custom plugins that support multi-host connectivity, containers connected to the same multi-host
network but launched from different Engines can also communicate in this way.
Note: Service discovery is unavailable on the default bridge network. Containers can communicate
via their IP addresses by default. To communicate by name, they must be linked.
You can disconnect a container from a network using the docker network disconnect command.
Labeling systems like SELinux require that proper labels are placed on volume content mounted into
a container. Without a label, the security system might prevent the processes running inside the
container from using the content. By default, Docker does not change the labels set by the OS.
To change the label in the container context, you can add either of two suffixes :z or :Z to the
volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option
tells Docker that two containers share the volume content. As a result, Docker labels the content
with a shared content label. Shared volume labels allow all containers to read/write content.
The Z option tells Docker to label the content with a private unshared label. Only the current
container can use a private volume.
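For example, a sketch of piping data in while attaching only to STDIN:
$ echo test | docker run -i -a stdin ubuntu cat -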
This pipes data into a container and prints the container’s ID by attaching only to the
container’s STDIN.
$ docker run -a stderr ubuntu echo test
This isn’t going to print anything unless there’s an error because we’ve only attached to the STDERR of
the container. The container’s logs still store what’s been written to STDERR and STDOUT.
$ cat somefile | docker run -i -a stdin mybuilder dobuild
This is how piping a file into a container could be done for a build. The container’s ID will be printed
after the build is done and the build logs could be retrieved using docker logs. This is useful if you
need to pipe a file or something else into a container and retrieve the container’s ID once the
container has finished running.
It is often necessary to directly expose devices to a container. The --device option enables that. For
example, a specific block storage device or loop device or audio device can be added to an
otherwise unprivileged container (without the --privileged flag) and have the application directly
access it.
By default, the container will be able to read, write and mknod these devices. This can be overridden
using a third :rwm set of options to each --device flag:
$ docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc
Note: --device cannot be safely used with ephemeral devices. Block devices that may be removed
should not be added to untrusted containers with --device.
For Windows, the format of the string passed to the --device option is in the form of --
device=<IdType>/<Id>. Beginning with Windows Server 2019 and Windows 10 October 2018
Update, Windows only supports an IdType of class and the Id as a device interface class GUID.
Refer to the table defined in the Windows container docs for a list of container-supported device
interface class GUIDs.
If this option is specified for a process-isolated Windows container, all devices that implement the
requested device interface class GUID are made available in the container. For example, the
command below makes all COM ports on the host visible in the container.
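A sketch of such a command; the GUID shown is assumed to be the standard COM-port device interface class, and the image name is illustrative:
PS C:\> docker run --device=class/86e0d1e0-8089-11d0-9ce4-08003e301f73 mcr.microsoft.com/windows/servercore:ltsc2019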
Note: the --device option is only supported on process-isolated Windows containers. This option
fails if the container isolation is hyperv or when running Linux Containers on Windows (LCOW).
Use the device option to specify GPUs. The example below exposes a specific GPU.
$ docker run -it --rm --gpus device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a ubuntu
nvidia-smi
Policy                      Result
no                          Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]    Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always                      Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
This will run the redis container with a restart policy of always so that if the container exits, Docker
will restart it.
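For example, a sketch:
$ docker run --restart=always redis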
More detailed information on restart policies can be found in the Restart Policies (--restart)section of
the Docker run reference page.
Sometimes you need to connect to the Docker host from within your container. To enable this, pass
the Docker host’s IP address to the container using the --add-host flag. To find the host’s address,
use the ip addr show command.
The flags you pass to ip addr show depend on whether you are using IPv4 or IPv6 networking in
your containers. Use the following flags for IPv4 address retrieval for a network device named eth0:
$ HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut
-d / -f 1`
$ docker run --add-host=docker:${HOSTIP} --rm -it debian
For IPv6 use the -6 flag instead of the -4 flag. For other network devices, replace eth0with the
correct device name (for example docker0 for the bridge device).
Note: If you do not provide a hard limit, the soft limit will be used for both values. If
no ulimits are set, they will be inherited from the default ulimits set on the daemon. The as option
is disabled now. In other words, the following script is not supported:
$ docker run -it --ulimit as=1024 fedora /bin/bash
The values are sent to the appropriate syscall as they are set. Docker doesn’t perform any byte
conversion. Take this into account when setting the values.
FOR NPROC USAGE
Be careful setting nproc with the ulimit flag as nproc is designed by Linux to set the maximum
number of processes available to a user, not to a container. For example, start four containers
with daemon user:
$ docker run -d -u daemon --ulimit nproc=3 busybox top
Value       Description
default     Use the value specified by the Docker daemon’s --exec-opt or system default (see below).
If you have set the --exec-opt isolation=hyperv option on the Docker daemon, or are running
against a Windows client-based daemon, these commands are equivalent and result
in hyperv isolation:
PS C:\> docker run -d microsoft/nanoserver powershell echo hyperv
PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo hyperv
PS C:\> docker run -d --isolation hyperv microsoft/nanoserver powershell echo hyperv
On Windows, this will affect containers differently depending on what type of isolation is used.
With process isolation, Windows will report the full memory of the host system, not the limit
to applications running inside the container
PS C:\> docker run -it -m 2GB --isolation=process microsoft/nanoserver
powershell Get-ComputerInfo *memory*
CsTotalPhysicalMemory : 17064509440
CsPhyicallyInstalledMemory : 16777216
OsTotalVisibleMemorySize : 16664560
OsFreePhysicalMemory : 14646720
OsTotalVirtualMemorySize : 19154928
OsFreeVirtualMemory : 17197440
OsInUseVirtualMemory : 1957488
OsMaxProcessMemorySize : 137438953344
With hyperv isolation, Windows will create a utility VM that is big enough to hold the memory
limit, plus the minimal OS needed to host the container. That size is reported as “Total
Physical Memory.”
PS C:\> docker run -it -m 2GB --isolation=hyperv microsoft/nanoserver
powershell Get-ComputerInfo *memory*
CsTotalPhysicalMemory : 2683355136
CsPhyicallyInstalledMemory :
OsTotalVisibleMemorySize : 2620464
OsFreePhysicalMemory : 2306552
OsTotalVirtualMemorySize : 2620464
OsFreeVirtualMemory : 2356692
OsInUseVirtualMemory : 263772
OsMaxProcessMemorySize : 137438953344
Note: Not all sysctls are namespaced. Docker does not support changing sysctls inside of a
container that also modify the host system. As the kernel evolves we expect to see more sysctls
become namespaced.
IPC Namespace:
If you use the --network=host option, using these sysctls is not allowed.
docker save
Description
Save one or more images to a tar archive (streamed to STDOUT by default)
Usage
docker save [OPTIONS] IMAGE [IMAGE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Produces a tarred repository to the standard output stream. Contains all parent layers, and all tags +
versions, or specified repo:tag, for each argument provided.
Examples
Create a backup that can then be used with docker load.
$ docker save busybox > busybox.tar
$ ls -sh busybox.tar
2.7M busybox.tar
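The --output (-o) flag writes the archive to a file instead of STDOUT; for example:
$ docker save --output busybox.tar busybox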
$ ls -sh busybox.tar
2.7M busybox.tar
docker search
Description
Search the Docker Hub for images
Usage
docker search [OPTIONS] TERM
Options
Name, shorthand     Default     Description
--automated                     Only show automated builds (deprecated)
--stars , -s                    Only displays with at least x stars (deprecated)
Parent command
Command Description
Extended description
Search Docker Hub for images
See Find Public Images on Docker Hub for more details on finding shared images from the
command line.
Examples
Search images by name
This example displays images with a name containing ‘busybox’:
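Presumably produced by a command along these lines:
$ docker search busybox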
NAME                             DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
busybox                          Busybox base image.                             316     [OK]
progrium/busybox                                                                 50                 [OK]
radial/busyboxplus               Full-chain, Internet enabled, busybox made...   8                  [OK]
odise/busybox-python                                                             2                  [OK]
azukiapp/busybox                 This image is meant to be used as the base...   2                  [OK]
ofayau/busybox-jvm               Prepare busybox to install a 32 bits JVM.       1                  [OK]
shingonoide/archlinux-busybox    Arch Linux, a lightweight and flexible Lin...   1                  [OK]
odise/busybox-curl                                                               1                  [OK]
ofayau/busybox-libc32            Busybox with 32 bits (and 64 bits) libs         1                  [OK]
peelsky/zulu-openjdk-busybox                                                     1                  [OK]
skomma/busybox-data              Docker image suitable for data volume cont...   1                  [OK]
elektritter/busybox-teamspeak    Lightweight teamspeak3 container based on...    1                  [OK]
socketplane/busybox                                                              1                  [OK]
oveits/docker-nginx-busybox      This is a tiny NginX docker image based on...   0                  [OK]
ggtools/busybox-ubuntu           Busybox ubuntu version with extra goodies       0                  [OK]
nikfoundas/busybox-confd         Minimal busybox based distribution of confd     0                  [OK]
openshift/busybox-http-app                                                       0                  [OK]
jllopis/busybox                                                                  0                  [OK]
swyckoff/busybox                                                                 0                  [OK]
powellquiring/busybox                                                            0                  [OK]
williamyeh/busybox-sh            Docker image for BusyBox's sh                   0                  [OK]
simplexsys/busybox-cli-powered   Docker busybox images, with a few often us...   0                  [OK]
fhisamoto/busybox-java           Busybox java                                    0                  [OK]
scottabernethy/busybox                                                           0                  [OK]
marclop/busybox-solr
Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz")
The currently supported filters are:
STARS
This example displays images with a name containing ‘busybox’ and at least 3 stars:
IS-AUTOMATED
This example displays images with a name containing ‘busybox’ and are automated builds:
IS-OFFICIAL
This example displays images with a name containing ‘busybox’, at least 3 stars and are official
builds:
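Sketches of the commands for these three examples, using the stars, is-automated, and is-official filters:
$ docker search --filter stars=3 busybox
$ docker search --filter is-automated=true busybox
$ docker search --filter is-official=true --filter stars=3 busybox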
Placeholder Description
When you use the --format option, the search command will output the data exactly as the template
declares. If you use the table directive, column headers are included as well.
The following example uses a template without headers and outputs the Name and StarCount entries
separated by a colon for all images:
$ docker search --format "{{.Name}}: {{.StarCount}}" nginx
nginx: 5441
jwilder/nginx-proxy: 953
richarvey/nginx-php-fpm: 353
million12/nginx-php: 75
webdevops/php-nginx: 70
h3nrik/nginx-ldap: 35
bitnami/nginx: 23
evild/alpine-nginx: 14
million12/nginx: 9
maxexcloo/nginx: 7
$ docker search --format "table {{.Name}}\t{{.IsAutomated}}\t{{.IsOfficial}}" nginx
docker secret
Estimated reading time: 1 minute
Description
Manage Docker secrets
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker secret COMMAND
Child commands
Command Description
Parent command
Command Description
Extended description
Manage secrets.
Description
Create a secret from a file or STDIN as content
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker secret create [OPTIONS] SECRET [file|-]
Options
Name, shorthand Default Description
API 1.31+
--driver , -d
Secret driver
API 1.37+
--template-driver
Template driver
Parent command
Command Description
Related commands
Command Description
Extended description
Creates a secret using standard input or from a file for the secret content. You must run this
command on a manager node.
For detailed information about using secrets, refer to manage sensitive data with Docker secrets.
Examples
Create a secret
$ printf <secret> | docker secret create my_secret -
onakdyv307se2tl7nl20anokv
$ docker secret ls
dg426haahpi5ezmkkj5kyl3sn
$ docker secret ls
eo7jnzguqgtpdah3cm5srfb97
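The labels visible in the inspect output below can be attached when the secret is created; a sketch using the --label flag (label values taken from that output):
$ printf <secret> | docker secret create --label env=dev --label rev=20170324 my_secret -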
$ docker secret inspect my_secret
[
{
"ID": "eo7jnzguqgtpdah3cm5srfb97",
"Version": {
"Index": 17
},
"CreatedAt": "2017-03-24T08:15:09.735271783Z",
"UpdatedAt": "2017-03-24T08:15:09.735271783Z",
"Spec": {
"Name": "my_secret",
"Labels": {
"env": "dev",
"rev": "20170324"
}
}
}
]
Description
Display detailed information on one or more secrets
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker secret inspect [OPTIONS] SECRET [SECRET...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Inspects the specified secret. This command has to be run targeting a manager node.
By default, this renders all results in a JSON array. If a format is specified, the given template will be
executed for each result.
For detailed information about using secrets, refer to manage sensitive data with Docker secrets.
Examples
Inspect a secret by name or ID
You can inspect a secret either by its name or ID. For example, given the my_secret secret created
above, the following command shows its details:
$ docker secret inspect my_secret
[
{
"ID": "eo7jnzguqgtpdah3cm5srfb97",
"Version": {
"Index": 17
},
"CreatedAt": "2017-03-24T08:15:09.735271783Z",
"UpdatedAt": "2017-03-24T08:15:09.735271783Z",
"Spec": {
"Name": "my_secret",
"Labels": {
"env": "dev",
"rev": "20170324"
}
}
}
]
Formatting
You can use the --format option to obtain specific information about a secret. The following example
command outputs the creation time of the secret.
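A sketch of that command (my_secret is the secret from the earlier example; the .CreatedAt placeholder corresponds to the field shown in the JSON output above):
$ docker secret inspect --format='{{.CreatedAt}}' my_secret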
Description
List secrets
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker secret ls [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Run this command on a manager node to list the secrets in the swarm.
For detailed information about using secrets, refer to manage sensitive data with Docker secrets.
Examples
$ docker secret ls
Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
id (secret’s ID)
label (label=<key> or label=<key>=<value>)
name (secret’s name)
ID
The id filter matches all or a prefix of a secret’s ID.
$ docker secret ls -f "id=6697bflskwj1998km1gnnjr38"
LABEL
The label filter matches secrets based on the presence of a label alone or a label and a value.
The following filter matches all secrets with a project label regardless of its value:
$ docker secret ls --filter label=project
The following filter matches only secrets with the project label set to the project-a value.
$ docker secret ls --filter label=project=project-a
NAME
Placeholder Description
.ID Secret ID
When using the --format option, the secret ls command will either output the data exactly as the
template declares or, when using the table directive, will include column headers as well.
The following example uses a template without headers and outputs the ID and Name entries
separated by a colon for all secrets:
$ docker secret ls --format "{{.ID}}: {{.Name}}"
77af4d6b9913: secret-1
b6fa739cedf5: secret-2
78a85c484f71: secret-3
To list all secrets with their name and created date in a table format you can use:
ID NAME CREATED
77af4d6b9913 secret-1 5 minutes ago
b6fa739cedf5 secret-2 3 hours ago
78a85c484f71 secret-3 10 days ago
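A command along these lines would produce the table above (a sketch; the .CreatedAt placeholder is assumed to be available for this template):
$ docker secret ls --format "table {{.ID}}\t{{.Name}}\t{{.CreatedAt}}"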
docker secret rm
Estimated reading time: 1 minute
Description
Remove one or more secrets
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker secret rm SECRET [SECRET...]
Parent command
Command Description
Related commands
Command Description
Extended description
Removes the specified secrets from the swarm. This command has to be run targeting a manager
node.
For detailed information about using secrets, refer to manage sensitive data with Docker secrets.
Examples
This example removes a secret:
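A minimal sketch, reusing the my_secret name from the examples above:
$ docker secret rm my_secret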
Warning: Unlike docker rm, this command does not ask for confirmation before removing a secret.
docker service
Estimated reading time: 1 minute
Description
Manage services
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service COMMAND
Child commands
Command Description
Parent command
Command Description
Extended description
Manage services.
Description
Create a new service
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]
Options
Name, shorthand Default Description
API 1.30+
--config
Specify configurations to expose to the service
API 1.29+
--credential-spec Credential spec for managed service account (Windows
only)
API 1.29+
--detach , -d Exit immediately instead of waiting for the service to
converge
API 1.25+
--dns
Set custom DNS servers
API 1.25+
--dns-option
Set DNS options
API 1.25+
--dns-search
Set custom DNS search domains
API 1.25+
--group Set one or more supplementary user groups for the
container
API 1.25+
--health-cmd
Command to run to check health
API 1.25+
--health-interval
Time between running the check (ms|s|m|h)
Name, shorthand Default Description
API 1.25+
--health-retries
Consecutive failures needed to report unhealthy
API 1.29+
--health-start-period
Start period for the container to initialize before counting retries towards unstable (ms|s|m|h)
API 1.25+
--health-timeout
Maximum time to allow one check to run (ms|s|m|h)
API 1.25+
--host
Set one or more custom host-to-IP mappings (host:ip)
API 1.25+
--hostname
Container hostname
API 1.37+
--init Use an init inside each service container to forward
signals and reap processes
API 1.35+
--isolation
Service container isolation mode
API 1.25+
--no-healthcheck
Disable any container-specified HEALTHCHECK
Name, shorthand Default Description
API 1.30+
--no-resolve-image Do not query the registry to resolve image digest and
supported platforms
API 1.28+
--placement-pref
Add a placement preference
API 1.28+
--read-only
Mount the container’s root filesystem as read only
API 1.40+
--replicas-max-per-node
Maximum number of tasks per node (default 0 = unlimited)
--restart-max-attempts
Maximum number of restarts before giving up
API 1.28+
--rollback-delay
Delay between task rollbacks (ns|us|ms|s|m|h) (default 0s)
API 1.28+
--rollback-failure-action
Action on rollback failure (“pause”|”continue”) (default “pause”)
Name, shorthand Default Description
API 1.28+
--rollback-monitor
Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h) (default 5s)
API 1.29+
--rollback-order
Rollback order (“start-first”|”stop-first”) (default “stop-first”)
API 1.28+
--rollback-parallelism 1
Maximum number of tasks rolled back simultaneously (0 to roll back all at once)
API 1.25+
--secret
Specify secrets to expose to the service
API 1.28+
--stop-signal
Signal to stop the container
API 1.40+
--sysctl
Sysctl options
API 1.25+
--tty , -t
Allocate a pseudo-TTY
API 1.25+
--update-monitor
Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 5s)
API 1.29+
--update-order
Update order (“start-first”|”stop-first”) (default “stop-first”)
Name, shorthand Default Description
--with-registry-auth
Send registry authentication details to swarm agents
Parent command
Command Description
Related commands
Command Description
Examples
Create a service
$ docker service create --name redis redis:3.0.6
dmu1ept4cxcfe8k8lhtux3ro3
a8q9dasaafudfs8q8w32udass
$ docker service ls
If your image is available on a private registry which requires login, use the --with-registry-auth
flag with docker service create, after logging in. If your image is stored
on registry.example.com, which is a private registry, use a command like the following:
$ docker login registry.example.com
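Sketches of the two create commands this walkthrough assumes: first a service created with --with-registry-auth for the private image, then a redis service created with a desired task count of 5 (the service names and replica count are assumptions consistent with the text that follows):
$ docker service create --with-registry-auth --name my_service registry.example.com/acme/my_image:latest

$ docker service create --name redis --replicas=5 redis:3.0.6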
4cdgfyky7ozwh3htjfw0d12qv
The above command sets the desired number of tasks for the service. Even though the command
returns immediately, actual scaling of the service may take some time. The REPLICAS column shows
both the actual and desired number of replica tasks for the service.
In the following example the desired state is 5 replicas, but the current number of RUNNING tasks is 3:
$ docker service ls
Once all the tasks are created and RUNNING, the actual number of tasks is equal to the desired
number:
$ docker service ls
Create a service specifying the secret, target, user/group ID, and mode:
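A sketch of such a command (the ssh-key and app-key secret names are assumptions for illustration):
$ docker service create \
  --name redis \
  --secret source=ssh-key,target=ssh \
  --secret source=app-key,target=app,uid=1000,gid=1001,mode=0400 \
  redis:3.0.6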
4cdgfyky7ozwh3htjfw0d12qv
When you run a service update, the scheduler updates a maximum of 2 tasks at a time,
with 10s between updates. For more information, refer to the rolling updates tutorial.
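A service with that update policy could be created with a command along these lines (a sketch; the service name and replica count are assumptions):
$ docker service create \
  --replicas 10 \
  --name redis \
  --update-delay 10s \
  --update-parallelism 2 \
  redis:3.0.6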
To specify multiple environment variables, specify multiple --env flags, each with a separate key-
value pair.
$ docker service create \
--name redis_2 \
--replicas 5 \
--env MYVAR=foo \
--env MYVAR2=bar \
redis:3.0.6
A bind mount makes a file or directory on the host available to the container it is mounted within. A
bind mount may be either read-only or read-write. For example, a container might share its host’s
DNS information by means of a bind mount of the host’s /etc/resolv.conf or a container might write
logs to its host’s /var/log/myContainerLogs directory. If you use bind mounts and your host and
containers have different notions of permissions, access controls, or other such details, you will run
into portability issues.
A named volume is a mechanism for decoupling persistent data needed by your container from the
image used to create the container and from the host machine. Named volumes are created and
managed by Docker, and a named volume persists even when no container is currently using it.
Data in named volumes can be shared between a container and the host machine, as well as
between multiple containers. Docker uses a volume driver to create, manage, and mount volumes.
You can back up or restore volumes using Docker commands.
A npipe mounts a named pipe from the host into the container.
Consider a situation where your image starts a lightweight web server. You could use that image as
a base image, copy in your website’s HTML files, and package that into another image. Each time
your website changed, you’d need to update the new image and redeploy all of the containers
serving your website. A better solution is to store the website in a named volume which is attached
to each of your web server containers when they start. To update the website, you just update the
named volume.
The following table describes options which apply to both bind mounts and named volumes in a
service:
src or source (for type=bind and type=npipe):
type=volume: src is an optional way to specify the name of the volume (for example, src=my-volume).
If the named volume does not exist, it is automatically created. If no src is specified, the volume is
assigned a random name which is guaranteed to be unique on the host, but may not be unique
cluster-wide. A randomly-named volume has the same lifecycle as its container and is destroyed when
the container is destroyed (which is upon service update, or when scaling or re-balancing the
service).
type=bind: src is required, and specifies an absolute path to the file or directory to bind-mount (for
example, src=/path/on/host/). An error is produced if the file or directory does not exist.
type=tmpfs: src is not supported.
The following options can only be used for bind mounts (type=bind):
Option Description
bind-
See the bind propagation section.
propagation
Bind propagation
Bind propagation refers to whether or not mounts created within a given bind mount or named
volume can be propagated to replicas of that mount. Consider a mount point /mnt, which is also
mounted on /tmp. The propagation settings control whether a mount on /tmp/a would also be available
on /mnt/a. Each propagation setting has a recursive counterpoint. In the case of recursion, consider
that /tmp/a is also mounted as /foo. The propagation settings control
whether /mnt/a and/or /tmp/a would exist.
The bind-propagation option defaults to rprivate for both bind mounts and volume mounts, and is
only configurable for bind mounts. In other words, named volumes do not support bind propagation.
shared: Sub-mounts of the original mount are exposed to replica mounts, and sub-mounts of
replica mounts are also propagated to the original mount.
slave: similar to a shared mount, but only in one direction. If the original mount exposes a
sub-mount, the replica mount can see it. However, if the replica mount exposes a sub-mount,
the original mount cannot see it.
private: The mount is private. Sub-mounts within it are not exposed to replica mounts, and
sub-mounts of replica mounts are not exposed to the original mount.
rshared: The same as shared, but the propagation also extends to and from mount points
nested within any of the original or replica mount points.
rslave: The same as slave, but the propagation also extends to and from mount points
nested within any of the original or replica mount points.
rprivate: The default. The same as private, meaning that no mount points anywhere within
the original or replica mount points propagate in either direction.
For more information about bind propagation, see the Linux kernel documentation for shared
subtree.
The following options can only be used for named volumes (type=volume):
Option Description
volume-driver
Name of the volume-driver plugin to use for the volume. Defaults to "local", to use the local volume
driver to create the volume if the volume does not exist.
volume-label
One or more custom metadata ("labels") to apply to the volume upon creation. For example,
volume-label=mylabel=hello-world,my-other-label=hello-mars. For more information about labels,
refer to apply custom metadata.
volume-nocopy
By default, if you attach an empty volume to a container, and files or directories already existed at
the mount-path in the container (dst), the Engine copies those files and directories into the volume,
allowing the host to access them. Set volume-nocopy to disable copying files from the container's
filesystem to the volume and mount the empty volume. A value is optional.
volume-opt
Options specific to a given volume driver, which will be passed to the driver when creating the
volume. Options are provided as a comma-separated list of key/value pairs, for example,
volume-opt=some-option=some-value,volume-opt=some-other-option=some-other-value. For available
options for a given driver, refer to that driver's documentation.
The following options can only be used for tmpfs mounts (type=tmpfs):
Option Description
tmpfs-mode File mode of the tmpfs in octal (e.g. "700" or "0700"). Defaults to "1777" in Linux.
The --mount flag supports most options that are supported by the -v or --volume flag for docker run,
with some important exceptions:
The --mount flag allows you to specify a volume driver and volume driver options per
volume, without creating the volumes in advance. In contrast, docker run allows you to
specify a single volume driver which is shared by all volumes, using the --volume-
driver flag.
The --mount flag allows you to specify custom metadata (“labels”) for a volume, before the
volume is created.
When you use --mount with type=bind, the host-path must refer to an existing path on the
host. The path will not be created for you and the service will fail with an error if the path
does not exist.
The --mount flag does not allow you to relabel a volume with Z or z flags, which are used
for selinux labeling.
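The paragraphs below describe a service that mounts a labeled named volume; a command along these lines (a sketch reconstructed from that description) would create it:
$ docker service create \
  --name my-service \
  --replicas 3 \
  --mount type=volume,source=my-volume,destination=/path/in/container,volume-label="color=red",volume-label="shape=round" \
  nginx:alpine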
For each replica of the service, the engine requests a volume named “my-volume” from the default
(“local”) volume driver where the task is deployed. If the volume does not exist, the engine creates a
new volume and applies the “color” and “shape” labels.
When the task is started, the volume is mounted on /path/in/container/ inside the container.
Be aware that the default (“local”) volume is a locally scoped volume driver. This means that
depending on where a task is deployed, either that task gets a new volume named “my-volume”, or
shares the same “my-volume” with other tasks of the same service. Multiple containers writing to a
single shared volume can cause data corruption if the software running inside the container is not
designed to handle concurrent processes writing to the same location. Also take into account that
containers can be re-scheduled by the Swarm orchestrator and be deployed on a different node.
The following command creates a service with three replicas with an anonymous volume
on /path/in/container:
$ docker service create \
--name my-service \
--replicas 3 \
--mount type=volume,destination=/path/in/container \
nginx:alpine
In this example, no name (source) is specified for the volume, so a new volume is created for each
task. This guarantees that each task gets its own volume, and volumes are not shared between
tasks. Anonymous volumes are removed after the task using them is complete.
The following example bind-mounts a host directory at /path/in/container in the containers backing
the service:
$ docker service create \
--name my-service \
--mount type=bind,source=/path/on/host,destination=/path/in/container \
nginx:alpine
engine.labels apply to Docker Engine labels like operating system, drivers, etc. Swarm
administrators add node.labels for operational purposes by using the docker node
update command.
For example, the following limits tasks for the redis service to nodes where the node type label
equals queue:
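A sketch of such a command, using the --constraint flag with a node label:
$ docker service create \
  --name redis \
  --constraint 'node.labels.type == queue' \
  redis:3.0.6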
This uses --placement-pref with a spread strategy (currently the only supported strategy) to spread
tasks evenly over the values of the datacenter node label. In this example, we assume that every
node has a datacenter node label attached to it. If there are three different values of this label
among nodes in the swarm, one third of the tasks will be placed on the nodes associated with each
value. This is true even if there are more nodes with one value than another. For example, consider
the following set of nodes:
Since we are spreading over the values of the datacenter label and the service has 9 replicas, 3
replicas will end up in each datacenter. There are three nodes associated with the value east, so
each one will get one of the three replicas reserved for this value. There are two nodes with the
value south, and the three replicas for this value will be divided between them, with one receiving
two replicas and another receiving just one. Finally, west has a single node that will get all three
replicas reserved for west.
If the nodes in one category (for example, those with node.labels.datacenter=south) can’t handle
their fair share of tasks due to constraints or resource limitations, the extra tasks will be assigned to
other nodes instead, if possible.
Both engine labels and node labels are supported by placement preferences. The example above
uses a node label, because the label is referenced with node.labels.datacenter. To spread over the
values of an engine label, use --placement-pref spread=engine.labels.<labelname>.
It is possible to add multiple placement preferences to a service. This establishes a hierarchy of
preferences, so that tasks are first divided over one category, and then further divided over
additional categories. One example of where this may be useful is dividing tasks fairly between
datacenters, and then splitting the tasks within each datacenter over a choice of racks. To add
multiple placement preferences, specify the --placement-pref flag multiple times. The order is
significant, and the placement preferences will be applied in the order given when making
scheduling decisions.
The following example sets up a service with multiple placement preferences. Tasks are spread first
over the various datacenters, and then over racks (as indicated by the respective labels):
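A sketch of such a command (the replica count and service name are assumptions; the datacenter and rack label names come from the description above):
$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref 'spread=node.labels.datacenter' \
  --placement-pref 'spread=node.labels.rack' \
  redis:3.0.6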
When updating a service with docker service update, --placement-pref-add appends a new
placement preference after all existing placement preferences. --placement-pref-rm removes an
existing placement preference that matches the argument.
First, create an overlay network on a manager node using the docker network create command:
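A sketch of that command (my-network matches the network name used below):
$ docker network create --driver overlay my-network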
etjpu59cykrptrgw0z0hk5snf
After you create an overlay network in swarm mode, all manager nodes have access to the network.
When you create a service and pass the --network flag to attach the service to the overlay network:
$ docker service create \
--replicas 3 \
--network my-network \
--name my-web \
nginx
716thylsndqma81j6kkkb5aus
The long form syntax of --network allows you to specify a list of aliases and driver options:
--network name=my-network,alias=web1,driver-opt=field1=value1
There is also a long format, which is easier to read and allows you to specify more options. The long
format is preferred. You cannot specify the service’s mode when using the short format. Here is an
example of using the long format for the same service as above:
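A sketch of the long publish format (the service name and image are assumptions):
$ docker service create \
  --name my_web \
  --replicas 3 \
  --publish published=8080,target=80 \
  nginx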
Option Short syntax Long syntax Description
mode
The port binding mode, either ingress or host. Defaults to ingress to use the routing mesh.
protocol --publish 8080:80/tcp --publish published=8080,target=80,protocol=tcp
The protocol to use, tcp, udp, or sctp. Defaults to tcp. To bind a port for both protocols, specify
the -p or --publish flag twice.
When you publish a service port using ingress mode, the swarm routing mesh makes the service
accessible at the published port on every node regardless if there is a task for the service running on
the node. If you use host mode, the port is only bound on nodes where the service is running, and a
given port on a node can only be bound once. You can only set the publication mode using the long
syntax. For more information refer to Use swarm mode routing mesh.
--hostname
--mount
--env
Placeholder Description
.Service.ID Service ID
.Node.ID Node ID
.Task.ID Task ID
TEMPLATE EXAMPLE
In this example, we are going to set the template of the created containers based on the service’s
name, the node’s ID and hostname where it sits.
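A command along these lines would set such a template (a sketch; the exact template string is an assumption, while the hosttempl service name matches the output below):
$ docker service create \
  --name hosttempl \
  --hostname="{{.Node.Hostname}}-{{.Node.ID}}-{{.Service.Name}}" \
  busybox top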
va8ew30grofhjoychbr6iot8c
ID NAME IMAGE
NODE DESIRED STATE CURRENT STATE ERROR PORTS
wo41w8hg8qan hosttempl.1
busybox:latest@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce6991
2 2e7a8a9c4da2 Running Running about a minute ago
x3ti0erg11rjpg64m75kej2mz-hosttempl
default: use default settings specified on the node running the task
process: use process isolation (Windows server only)
hyperv: use Hyper-V isolation
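For example, a sketch of creating a Windows service that requests process isolation (the image and long-running command are assumptions for illustration):
$ docker service create --name nano --isolation=process microsoft/nanoserver ping -t localhost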
Description
Display detailed information on one or more services
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service inspect [OPTIONS] SERVICE [SERVICE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Inspects the specified service. This command has to be run targeting a manager node.
By default, this renders all results in a JSON array. If a format is specified, the given template will be
executed for each result.
Examples
Inspect a service by name or ID
You can inspect a service, either by its name, or ID
$ docker service ls
ID NAME MODE REPLICAS IMAGE
dmu1ept4cxcf redis replicated 3/3 redis:3.0.6
Both docker service inspect redis, and docker service inspect dmu1ept4cxcf produce the same
result:
$ docker service inspect redis
[
{
"ID": "dmu1ept4cxcfe8k8lhtux3ro3",
"Version": {
"Index": 12
},
"CreatedAt": "2016-06-17T18:44:02.558012087Z",
"UpdatedAt": "2016-06-17T18:44:02.558012087Z",
"Spec": {
"Name": "redis",
"TaskTemplate": {
"ContainerSpec": {
"Image": "redis:3.0.6"
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {},
"EndpointSpec": {
"Mode": "vip"
}
},
"Endpoint": {
"Spec": {}
}
}
]
$ docker service inspect dmu1ept4cxcf
[
{
"ID": "dmu1ept4cxcfe8k8lhtux3ro3",
"Version": {
"Index": 12
},
...
}
]
Formatting
You can print the inspect output in a human-readable format instead of the default JSON output, by
using the --pretty option:
$ docker service inspect --pretty frontend
ID: c8wgl7q4ndfd52ni6qftkvnnp
Name: frontend
Labels:
- org.example.projectname=demo-app
Service Mode: REPLICATED
Replicas: 5
Placement:
UpdateConfig:
Parallelism: 0
On failure: pause
Max failure ratio: 0
ContainerSpec:
Image: nginx:alpine
Resources:
Networks: net1
Endpoint Mode: vip
Ports:
PublishedPort = 4443
Protocol = tcp
TargetPort = 443
PublishMode = ingress
You can also use --format pretty for the same effect.
The --format option can be used to obtain specific information about a service. For example, the
following command outputs the number of replicas of the “redis” service.
$ docker service inspect --format='{{.Spec.Mode.Replicated.Replicas}}' redis
10
Description
Fetch the logs of a service or task
API 1.29+ The client and daemon API must both be at least 1.29 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service logs [OPTIONS] SERVICE|TASK
Options
Name, shorthand Default Description
API 1.30+
--details
Show extra details provided to logs
API 1.30+
--raw
Do not neatly format logs
--tail all Number of lines to show from the end of the logs
--timestamps ,
Show timestamps
-t
Parent command
Command Description
Related commands
Command Description
Extended description
The docker service logs command batch-retrieves logs present at the time of execution.
The docker service logs command can be used with either the name or ID of a service, or with the
ID of a task. If a service is passed, it will display logs for all of the containers in that service. If a task
is passed, it will only display logs from that particular task.
Note: This command is only functional for services that are started with the json-file
or journald logging driver.
For more information about selecting and configuring logging drivers, refer to Configure logging
drivers.
The docker service logs --follow command will continue streaming the new output from the
service’s STDOUT and STDERR.
Passing a negative number or a non-integer to --tail is invalid and the value is set to all in that
case.
The docker service logs --timestamps command will add an RFC3339Nano timestamp, for
example 2014-09-16T06:17:46.000000000Z, to each log entry. To ensure that the timestamps are
aligned the nano-second part of the timestamp will be padded with zero when necessary.
The docker service logs --details command will add on extra attributes, such as environment
variables and labels, provided to --log-opt when creating the service.
The --since option shows only the service logs generated after a given date. You can specify the
date as an RFC 3339 date, a UNIX timestamp, or a Go duration string (e.g. 1m30s, 3h). Besides
RFC3339 date format you may also use RFC3339Nano, 2006-01-02T15:04:05, 2006-01-02T15:04:05.999999999,
2006-01-02Z07:00, and 2006-01-02. The local timezone on the client will be
used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp. When
providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of seconds
that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds (aka Unix
epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more than nine
digits long. You can combine the --since option with either or both of the --follow or --
tail options.
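For example, a sketch that shows only log lines from the last 30 minutes and keeps following new output (the service name is an assumption):
$ docker service logs --since 30m --follow my-service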
docker service ls
Estimated reading time: 4 minutes
Description
List services
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service ls [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
When run targeting a manager, this command lists the services that are running in the swarm.
Examples
On a manager node:
$ docker service ls
The REPLICAS column shows both the actual and desired number of tasks for the service.
Filtering
The filtering flag (-f or --filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
id
label
mode
name
ID
LABEL
The label filter matches services based on the presence of a label alone or a label and a value.
The following filter matches all services with a project label regardless of its value:
$ docker service ls --filter label=project
ID NAME MODE REPLICAS IMAGE
01sl1rp6nj5u frontend2 replicated 1/1 nginx:alpine
36xvvwwauej0 frontend replicated 5/5 nginx:alpine
74nzcxxjv6fq backend replicated 3/3 redis:3.0.6
The following filter matches only services with the project label with the project-a value.
$ docker service ls --filter label=project=project-a
ID NAME MODE REPLICAS IMAGE
36xvvwwauej0 frontend replicated 5/5 nginx:alpine
74nzcxxjv6fq backend replicated 3/3 redis:3.0.6
MODE
The mode filter matches on the mode (either replicated or global) of a service.
The following filter matches only global services.
$ docker service ls --filter mode=global
ID NAME MODE REPLICAS IMAGE
w7y0v2yrn620 top global 1/1
busybox
NAME
Formatting
The formatting options (--format) pretty-prints services output using a Go template.
Placeholder Description
.ID Service ID
When using the --format option, the service ls command will either output the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the ID, Mode,
and Replicas entries separated by a colon for all services:
$ docker service ls --format "{{.ID}}: {{.Mode}} {{.Replicas}}"
docker service ps
Estimated reading time: 6 minutes
Description
List the tasks of one or more services
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service ps [OPTIONS] SERVICE [SERVICE...]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
Lists the tasks that are running as part of the specified services. This command has to be run
targeting a manager node.
Examples
List the tasks that are part of a service
The following command shows all the tasks that are part of the redis service:
$ docker service ps redis
In addition to running tasks, the output also shows the task history. For example, after updating the
service to use the redis:3.0.6 image, the output may look like this:
$ docker service ps redis
The number of items in the task history is determined by the --task-history-limit option that was
set when initializing the swarm. You can change the task history retention limit using the docker
swarm update command.
When deploying a service, docker resolves the digest for the service’s image, and pins the service to
that digest. The digest is not shown by default, but is printed if --no-trunc is used. The --no-
trunc option also shows the non-truncated task ID, and error-messages, as can be seen below:
ID NAME IMAGE
NODE DESIRED STATE CURRENT STATE ERROR
PORTS
50qe8lfnxaxksi9w2a704wkp7 redis.1
redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842
manager1 Running Running 5 minutes ago
ky2re9oz86r9556i2szb8a8af \_ redis.1
redis:3.0.5@sha256:f8829e00d95672c48c60f468329d6693c4bdd28d1f057e755f8ba8b40008682e
worker2 Shutdown Shutdown 5 minutes ago
bk658fpbex0d57cqcwoe3jthu redis.2
redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842
worker2 Running Running 5 seconds
nvjljf7rmor4htv7l8rwcx7i7 \_ redis.2
redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842
worker2 Shutdown Rejected 5 minutes ago "No such image:
redis@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842"
Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz"). Multiple filter flags are combined
as an OR filter. For example, -f name=redis.1 -f name=redis.7 returns
both redis.1 and redis.7 tasks.
id
name
node
desired-state
ID
NAME
NODE
DESIRED-STATE
The desired-state filter can take the values running, shutdown, or accepted.
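For example, a sketch limiting the output to running tasks of the redis service:
$ docker service ps --filter desired-state=running redis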
Formatting
The formatting options (--format) pretty-prints tasks output using a Go template.
Placeholder Description
.ID Task ID
.Node Node ID
.Error Error
When using the --format option, the service ps command will either output the data exactly as the
template declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the Name and Image entries
separated by a colon for all tasks:
$ docker service ps --format "{{.Name}}: {{.Image}}" top
top.1: busybox
top.2: busybox
top.3: busybox
Description
Revert changes to a service’s configuration
API 1.31+ The client and daemon API must both be at least 1.31 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service rollback [OPTIONS] SERVICE
Options
Name, shorthand Default Description
API 1.29+
--detach , -d
Exit immediately instead of waiting for the service to converge
Parent command
Command Description
Related commands
Command Description
Extended description
Roll back a specified service to its previous version from the swarm. This command must be run
targeting a manager node.
Examples
Roll back to the previous version of a service
Use the docker service rollback command to roll back to the previous version of a service. After
executing this command, the service is reverted to the configuration that was in place before the
most recent docker service update command.
The following example creates a service with a single replica, updates the service to use three
replicas, and then rolls back the service to the previous version, having one replica.
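Sketches of the create and update steps described above (the image name is an assumption); running docker service ls after each step shows the replica count changing:
$ docker service create --name my-service --replicas 1 nginx:alpine

$ docker service update --replicas 3 my-service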
$ docker service ls
$ docker service ls
Now roll back the service to its previous version, and confirm it is running a single replica again:
$ docker service rollback my-service
$ docker service ls
docker service rm
Estimated reading time: 1 minute
Description
Remove one or more services
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service rm SERVICE [SERVICE...]
Parent command
Command Description
Related commands
Command Description
Extended description
Removes the specified services from the swarm. This command has to be run targeting a manager
node.
Examples
Remove the redis service:
$ docker service rm redis
redis
$ docker service ls
Warning: Unlike docker rm, this command does not ask for confirmation before removing a running
service.
docker service scale
Estimated reading time: 3 minutes
Description
Scale one or multiple replicated services
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service scale SERVICE=REPLICAS [SERVICE=REPLICAS...]
Options
Name, shorthand Default Description
API 1.29+
--detach , -d
Exit immediately instead of waiting for the service to converge
Parent command
Command Description
Related commands
Command Description
Extended description
The scale command enables you to scale one or more replicated services either up or down to the
desired number of replicas. This command cannot be applied to services which are in global mode.
The command will return immediately, but the actual scaling of the service may take some time. To
stop all replicas of a service while keeping the service active in the swarm you can set the scale to 0.
Examples
Scale a single service
The following command scales the “frontend” service to 50 tasks.
$ docker service scale frontend=50
frontend scaled to 50
The following command tries to scale a global service to 10 tasks and returns an error.
b4g08uwuairexjub6ome6usqh
$ docker service scale backend=10
Directly afterwards, run docker service ls, to see the actual number of replicas.
$ docker service ls --filter name=frontend
You can also scale a service using the docker service update command. The following commands
are equivalent:
$ docker service scale frontend=50
$ docker service update --replicas=50 frontend
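Multiple services can be scaled in one command, as the output below shows; a sketch:
$ docker service scale backend=3 frontend=5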
backend scaled to 3
frontend scaled to 5
$ docker service ls
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker service update [OPTIONS] SERVICE
Options
Name, shorthand Default Description
API 1.30+
--config-add
Add or update a config file on a service
API 1.30+
--config-rm
Remove a configuration file
--container-label-add
Add or update a container label
--container-label-rm
Remove a container label by its key
API 1.29+
--credential-spec Credential spec for managed service account (Windows
only)
API 1.29+
--detach , -d Exit immediately instead of waiting for the service to
converge
Name, shorthand Default Description
API 1.25+
--dns-add
Add or update a custom DNS server
API 1.25+
--dns-option-add
Add or update a DNS option
API 1.25+
--dns-option-rm
Remove a DNS option
API 1.25+
--dns-rm
Remove a custom DNS server
API 1.25+
--dns-search-add
Add or update a custom DNS search domain
API 1.25+
--dns-search-rm
Remove a DNS search domain
API 1.25+
--force
Force update even if no changes require it
--generic-resource-add
Add a Generic resource
--generic-resource-rm
Remove a Generic resource
API 1.25+
--group-add Add an additional supplementary user group to the
container
API 1.25+
--group-rm Remove a previously added supplementary user group from
the container
API 1.25+
--health-cmd
Command to run to check health
Name, shorthand Default Description
API 1.25+
--health-interval
Time between running the check (ms|s|m|h)
API 1.25+
--health-retries
Consecutive failures needed to report unhealthy
API 1.29+
--health-start-period
Start period for the container to initialize before counting retries towards unstable (ms|s|m|h)
API 1.25+
--health-timeout
Maximum time to allow one check to run (ms|s|m|h)
API 1.32+
--host-add
Add a custom host-to-IP mapping (host:ip)
API 1.25+
--host-rm
Remove a custom host-to-IP mapping (host:ip)
API 1.25+
--hostname
Container hostname
API 1.37+
--init Use an init inside each service container to forward signals
and reap processes
API 1.35+
--isolation
Service container isolation mode
API 1.29+
--network-add
Add a network
API 1.29+
--network-rm
Remove a network
API 1.25+
--no-healthcheck
Disable any container-specified HEALTHCHECK
API 1.30+
--no-resolve-image Do not query the registry to resolve image digest and
supported platforms
API 1.28+
--placement-pref-rm
Remove a placement preference
API 1.28+
--read-only
Mount the container’s root filesystem as read only
--restart-max-attempts
Maximum number of restarts before giving up
API 1.25+
--rollback
Rollback to previous specification
API 1.28+
--rollback-delay
Delay between task rollbacks (ns|us|ms|s|m|h)
API 1.28+
--rollback-monitor Duration after each task rollback to monitor for failure
(ns|us|ms|s|m|h)
API 1.29+
--rollback-order
Rollback order (“start-first”|”stop-first”)
API 1.28+
--rollback-parallelism
Maximum number of tasks rolled back simultaneously (0 to roll back all at once)
API 1.25+
--secret-add
Add or update a secret on a service
API 1.25+
--secret-rm
Remove a secret
API 1.28+
--stop-signal
Signal to stop the container
API 1.40+
--sysctl-add
Add or update a Sysctl option
API 1.40+
--sysctl-rm
Remove a Sysctl option
Name, shorthand Default Description
API 1.25+
--tty , -t
Allocate a pseudo-TTY
--update-failure-action
Action on update failure (“pause”|”continue”|”rollback”)
API 1.25+
--update-monitor Duration after each task update to monitor for failure
(ns|us|ms|s|m|h)
API 1.29+
--update-order
Update order (“start-first”|”stop-first”)
--with-registry-auth
Send registry authentication details to swarm agents
Parent command
Command Description
Related commands
Command Description
Extended description
Updates a service as described by the specified parameters. This command has to be run targeting
a manager node. The parameters are the same as docker service create. Please look at the
description there for further information.
Normally, updating a service will only cause the service’s tasks to be replaced with new ones if a
change to the service requires recreating the tasks for it to take effect. For example, only changing
the --update-parallelism setting will not recreate the tasks, because the individual tasks are not
affected by this setting. However, the --force flag will cause the tasks to be recreated anyway. This
can be used to perform a rolling restart without any changes to the service parameters.
Examples
Update a service
$ docker service update --limit-cpu 2 redis
myservice
myservice
The following example adds a new alias name to an existing service already connected to network
my-network:
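A sketch of such an update (myservice and the web1 alias are assumptions for illustration):
$ docker service update \
  --network-rm my-network \
  --network-add name=my-network,alias=web1 \
  myservice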
The following example updates the number of replicas for the service from 4 to 5, and then rolls back
to the previous configuration.
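Sketches of the two update commands (the web service name comes from the output below); docker service ls is run after each step:
$ docker service update --replicas=5 web

$ docker service update --rollback web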
web
$ docker service ls
web
$ docker service ls
Other options can be combined with --rollback as well, for example, --update-delay 0s to execute
the rollback without a delay between tasks:
$ docker service update \
  --rollback \
  --update-delay 0s \
  web
web
Services can also be set up to roll back to the previous version automatically when an update fails.
To set up a service for automatic rollback, use --update-failure-action=rollback. A rollback will
be triggered if the fraction of the tasks which failed to update successfully exceeds the value given
with --update-max-failure-ratio.
The rate, parallelism, and other parameters of a rollback operation are determined by the values
passed with the following flags:
--rollback-delay
--rollback-failure-action
--rollback-max-failure-ratio
--rollback-monitor
--rollback-parallelism
docker stack
Estimated reading time: 1 minute
Description
Manage Docker stacks
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker stack [OPTIONS] COMMAND
Options
Name, shorthand Default Description
Kubernetes
--kubeconfig
Kubernetes config file
Child commands
Command Description
Parent command
Command Description
Extended description
Manage stacks.
docker stack deploy
Estimated reading time: 4 minutes
Description
Deploy a new stack or update an existing stack
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker stack deploy [OPTIONS] STACK
Options
Name, shorthand Default Description
experimental (daemon)Swarm
--bundle-file
Path to a Distributed Application Bundle file
Kubernetes
--namespace
Kubernetes namespace to use
API 1.27+Swarm
--prune
Prune services that are no longer referenced
API 1.30+Swarm
--resolve-image always Query the registry to resolve image digest and supported
platforms (“always”|”changed”|”never”)
--with-registry- Swarm
auth Send registry authentication details to Swarm agents
Kubernetes
--kubeconfig
Kubernetes config file
Related commands
Command Description
Extended description
Create and update a stack from a compose or a dab file on the swarm. This command has to be run
targeting a manager node.
Examples
Compose file
The deploy command supports compose file version 3.0 and above.
$ docker stack deploy --compose-file docker-compose.yml vossibility
The Compose file can also be provided as standard input with --compose-file -:
$ cat docker-compose.yml | docker stack deploy --compose-file - vossibility
If your configuration is split between multiple Compose files, e.g. a base configuration and
environment-specific overrides, you can provide multiple --compose-file flags.
$ docker stack deploy --compose-file docker-compose.yml -c docker-compose.prod.yml
vossibility
$ docker service ls
DAB file
$ docker stack deploy --bundle-file vossibility-stack.dab vossibility
$ docker service ls
docker stack ps
Estimated reading time: 7 minutes
Description
List the tasks in the stack
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker stack ps [OPTIONS] STACK
Options
Name, shorthand Default Description
Kubernetes
--namespace
Kubernetes namespace to use
Name, shorthand Default Description
Kubernetes
--kubeconfig
Kubernetes config file
Parent command
Command Description
Related commands
Command Description
Extended description
Lists the tasks that are running as part of the specified stack. This command has to be run targeting
a manager node.
Examples
List the tasks that are part of a stack
The following command shows all the tasks that are part of the voting stack:
$ docker stack ps voting
ID NAME IMAGE
NODE DESIRED STATE CURRENT STATE ERROR PORTS
xim5bcqtgk1b voting_worker.1
dockersamples/examplevotingapp_worker:latest node2 Running Running 2
minutes ago
q7yik0ks1in6 voting_result.1
dockersamples/examplevotingapp_result:before node1 Running Running 2
minutes ago
rx5yo0866nfx voting_vote.1 dockersamples/examplevotingapp_vote:before
node3 Running Running 2 minutes ago
tz6j82jnwrx7 voting_db.1 postgres:9.4
node1 Running Running 2 minutes ago
w48spazhbmxc voting_redis.1 redis:alpine
node2 Running Running 3 minutes ago
6jj1m02freg1 voting_visualizer.1 dockersamples/visualizer:stable
node1 Running Running 2 minutes ago
kqgdmededccb voting_vote.2 dockersamples/examplevotingapp_vote:before
node2 Running Running 2 minutes ago
t72q3z038jeh voting_redis.2 redis:alpine
node3 Running Running 3 minutes ago
Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz"). Multiple filter flags are combined
as an OR filter. For example, -f name=redis.1 -f name=redis.7 returns
both redis.1 and redis.7 tasks.
id
name
node
desired-state
ID
NAME
NODE
DESIRED-STATE
The desired-state filter can take the values running, shutdown, or accepted.
$ docker stack ps -f "desired-state=running" voting
ID NAME IMAGE
NODE DESIRED STATE CURRENT STATE ERROR PORTS
xim5bcqtgk1b voting_worker.1
dockersamples/examplevotingapp_worker:latest node2 Running Running 21
minutes ago
q7yik0ks1in6 voting_result.1
dockersamples/examplevotingapp_result:before node1 Running Running 21
minutes ago
rx5yo0866nfx voting_vote.1 dockersamples/examplevotingapp_vote:before
node3 Running Running 21 minutes ago
tz6j82jnwrx7 voting_db.1 postgres:9.4
node1 Running Running 21 minutes ago
w48spazhbmxc voting_redis.1 redis:alpine
node2 Running Running 21 minutes ago
6jj1m02freg1 voting_visualizer.1 dockersamples/visualizer:stable
node1 Running Running 21 minutes ago
kqgdmededccb voting_vote.2 dockersamples/examplevotingapp_vote:before
node2 Running Running 21 minutes ago
t72q3z038jeh voting_redis.2 redis:alpine
node3 Running Running 21 minutes ago
Formatting
The formatting options (--format) pretty-prints tasks output using a Go template.
Placeholder Description
.ID Task ID
.Node Node ID
.Error Error
This option can be used to perform batch operations. For example, you can use the task IDs as input
for other commands, such as docker inspect. The following example inspects all tasks of the
“voting” stack;
$ docker inspect $(docker stack ps -q voting)
[
{
"ID": "xim5bcqtgk1b1gk0krq1",
"Version": {
(...)
docker stack rm
Estimated reading time: 2 minutes
Description
Remove one or more stacks
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker stack rm [OPTIONS] STACK [STACK...]
Options
Name, shorthand Default Description
Kubernetes
--namespace
Kubernetes namespace to use
Kubernetes
--kubeconfig
Kubernetes config file
Parent command
Command Description
Related commands
Command Description
Extended description
Remove the stack from the swarm. This command has to be run targeting a manager node.
Examples
Remove a stack
This will remove the stack with the name myapp. Services, networks, and secrets associated with the
stack will be removed.
$ docker stack rm myapp
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker stack services [OPTIONS] STACK
Options
Name, shorthand Default Description
Kubernetes
--namespace
Kubernetes namespace to use
Kubernetes
--kubeconfig
Kubernetes config file
Parent command
Command Description
Related commands
Command Description
Extended description
Lists the services that are running as part of the specified stack. This command has to be run
targeting a manager node.
Examples
The following command shows all services in the myapp stack:
$ docker stack services myapp
Filtering
The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then
pass multiple flags (e.g. --filter "foo=bar" --filter "bif=baz"). Multiple filter flags are combined
as an OR filter.
The following command shows both the web and db services:
$ docker stack services --filter name=myapp_web --filter name=myapp_db myapp
ID NAME REPLICAS IMAGE
COMMAND
7be5ei6sqeye myapp_web 1/1
nginx@sha256:23f809e7fd5952e7d5be065b4d3643fbbceccd349d537b62a123ef2201bc886f
dn7m7nhhfb9y myapp_db 1/1
mysql@sha256:a9a5b559f8821fe73d58c3606c812d1c044868d42c63817fa5125fd9d8b7b539
Formatting
The formatting options (--format) pretty-prints services output using a Go template.
Placeholder Description
.ID Service ID
docker start
Estimated reading time: 1 minute
Description
Start one or more stopped containers
Usage
docker start [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
experimental (daemon)
--checkpoint
Restore from this checkpoint
experimental (daemon)
--checkpoint-dir
Use a custom checkpoint storage directory
Examples
$ docker start my_container
docker stats
Estimated reading time: 6 minutes
Description
Display a live stream of container(s) resource usage statistics
Usage
docker stats [OPTIONS] [CONTAINER...]
Options
Name, shorthand Default Description
--no-stream Disable streaming stats and only pull the first result
Parent command
Command Description
Extended description
The docker stats command returns a live data stream for running containers. To limit data to one or
more specific containers, specify a list of container names or ids separated by a space. You can
specify a stopped container but stopped containers do not return any data.
If you want more detailed information about a container’s resource usage, use
the /containers/(id)/stats API endpoint.
Note: On Linux, the Docker CLI reports memory usage by subtracting page cache usage from the
total memory usage. The API does not perform such a calculation but rather provides the total
memory usage and the amount from the page cache so that clients can use the data as needed.
Note: The PIDS column contains the number of processes and kernel threads created by that
container. Threads is the term used by Linux kernel. Other equivalent terms are “lightweight
process” or “kernel task”, etc. A large number in the PIDS column combined with a small number of
processes (as reported by ps or top) may indicate that something in the container is creating many
threads.
Examples
Running docker stats on all running containers against a Linux daemon.
$ docker stats
If you don’t specify a format string using --format, the following columns are shown.
Column name Description
CONTAINER ID and Name    the ID and name of the container
CPU % and MEM %          the percentage of the host’s CPU and memory the container is using
MEM USAGE / LIMIT        the total memory the container is using, and the total amount of memory it is allowed to use
NET I/O                  the amount of data the container has sent and received over its network interface
BLOCK I/O                the amount of data the container has read from and written to block devices on the host
Running docker stats on multiple containers by name and id against a Linux daemon.
$ docker stats awesome_brattain 67b2525d8ad1
Running docker stats with customized format on all (Running and Stopped) containers.
$ docker stats --all --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" fervent_panini 5acfcb1b4fd1 drunk_visvesvaraya big_heisenberg
Running docker stats on multiple containers by name and id against a Windows daemon.
PS E:\> docker ps -a
CONTAINER ID NAME IMAGE COMMAND
CREATED STATUS PORTS NAMES
3f214c61ad1d awesome_brattain nanoserver "cmd" 2
minutes ago Up 2 minutes big_minsky
9db7aa4d986d mad_wilson windowsservercore "cmd" 2
minutes ago Up 2 minutes mad_wilson
09d3bb5b1604 fervent_panini windowsservercore "cmd" 2
minutes ago Up 2 minutes affectionate_easley
Formatting
The formatting option (--format) pretty prints container output using a Go template.
Placeholder Description
.ID Container ID
Placeholder Description
.NetIO Network IO
.BlockIO Block IO
When using the --format option, the stats command either outputs the data exactly as the template
declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs
the Container and CPUPerc entries separated by a colon for all images:
$ docker stats --format "{{.Container}}: {{.CPUPerc}}"
09d3bb5b1604: 6.61%
9db7aa4d986d: 9.19%
3f214c61ad1d: 0.00%
To list all containers statistics with their name, CPU percentage and memory usage in a table format
you can use:
On Linux:
$ docker stats --format "table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}\t{{.NetIO}}\t{{.BlockIO}}\t{{.PIDs}}"
On Windows:
$ docker stats --format "table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}"
Note: On Docker 17.09 and older, the {{.Container}} column was used, instead
of {{.ID}}\t{{.Name}}.
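For example, assuming the placeholders listed above, a customized table can be requested like this (output omitted):
$ docker stats --format "table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"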
docker stop
Estimated reading time: 1 minute
Description
Stop one or more running containers
Usage
docker stop [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
Parent command
Command Description
Extended description
The main process inside the container will receive SIGTERM, and after a grace period, SIGKILL.
Examples
$ docker stop my_container
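The grace period before SIGKILL can be adjusted with the --time (-t) option; for example, to wait up to 30 seconds:
$ docker stop --time 30 my_container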
docker swarm
Estimated reading time: 1 minute
Description
Manage Swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker swarm COMMAND
Child commands
Command Description
Parent command
Command Description
Extended description
Manage the swarm.
docker swarm ca
Estimated reading time: 4 minutes
Description
Display and rotate the root CA
API 1.30+ The client and daemon API must both be at least 1.30 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker swarm ca [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
View or rotate the current swarm CA certificate. This command must target a manager node.
Examples
Run the docker swarm ca command without any options to view the current root CA certificate in
PEM format.
$ docker swarm ca
-----BEGIN CERTIFICATE-----
MIIBazCCARCgAwIBAgIUJPzo67QC7g8Ebg2ansjkZ8CbmaswCgYIKoZIzj0EAwIw
EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNTAzMTcxMDAwWhcNMzcwNDI4MTcx
MDAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABKL6/C0sihYEb935wVPRA8MqzPLn3jzou0OJRXHsCLcVExigrMdgmLCC+Va4
+sJ+SLVO1eQbvLHH8uuDdF/QOU6jQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
Af8EBTADAQH/MB0GA1UdDgQWBBSfUy5bjUnBAx/B0GkOBKp91XvxzjAKBggqhkjO
PQQDAgNJADBGAiEAnbvh0puOS5R/qvy1PMHY1iksYKh2acsGLtL/jAIvO4ACIQCi
lIwQqLkJ48SQqCjG1DBTSBsHmMSRT+6mE2My+Z3GKA==
-----END CERTIFICATE-----
Pass the --rotate flag (and optionally a --ca-cert, along with a --ca-key or --external-ca
parameter flag), in order to rotate the current swarm root CA.
Once the rotation is finished (all the progress bars have completed), the now-current CA certificate
will be printed:
--rotate
Root CA Rotation is recommended if one or more of the swarm managers have been compromised,
so that those managers can no longer connect to or be trusted by any other node in the cluster.
Alternately, root CA rotation can be used to give control of the swarm CA to an external CA, or to
take control back from an external CA.
The --rotate flag does not require any parameters to do a rotation, but you can optionally specify a
certificate and key, or a certificate and external CA URL, and those will be used instead of an
automatically-generated certificate/key pair.
Because the root CA key should be kept secret, if provided it will not be visible when viewing any
swarm information via the CLI or API.
The root CA rotation will not be completed until all registered nodes have rotated their TLS
certificates. If the rotation is not completing within a reasonable amount of time, try running docker
node ls --format '{{.ID}} {{.Hostname}} {{.Status}} {{.TLSStatus}}' to see if any nodes are
down or otherwise unable to rotate TLS certificates.
--detach
Initiate the root CA rotation, but do not wait for the completion of or display the progress of the
rotation.
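For example, to rotate to an automatically generated certificate and key while watching the progress bars, run the command with just the --rotate flag; add --detach to return immediately:
$ docker swarm ca --rotate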
Description
Initialize a swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker swarm init [OPTIONS]
Options
Name, shorthand Default Description
--data-path-addr                  Address or interface to use for data path traffic (format: <ip|interface>) (API 1.31+)
--data-path-port                  Port number to use for data path traffic (1024 - 49151). If no value is set or is set to 0, the default port (4789) is used. (API 1.40+)
--dispatcher-heartbeat   5s       Dispatcher heartbeat period (ns|us|ms|s|m|h)
--force-new-cluster               Force create a new cluster from current state
--max-snapshots                   Number of additional Raft snapshots to retain (API 1.25+)
--task-history-limit     5        Task history retention limit
Parent command
Command Description
Related commands
Command Description
Extended description
Initialize a swarm. The docker engine targeted by this command becomes a manager in the newly
created single-node swarm.
Examples
$ docker swarm init --advertise-addr 192.168.99.121
Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the
instructions.
docker swarm init generates two random tokens, a worker token and a manager token. When you
join a new node to the swarm, the node joins as a worker or manager node based upon the token
you pass to swarm join.
After you create the swarm, you can display or rotate the token using swarm join-token.
--autolock
This flag enables automatic locking of managers with an encryption key. The private keys and data
stored by all managers will be protected by the encryption key printed in the output, and will not be
accessible without it. Thus, it is very important to store this key in order to activate a manager after it
restarts. The key can be passed to docker swarm unlock to reactivate the manager. Autolock can be
disabled by running docker swarm update --autolock=false. After disabling it, the encryption key is
no longer required to start the manager, and it will start up on its own without user intervention.
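A minimal sketch enabling autolock when the swarm is created (the unlock key printed in the real output is not shown here):
$ docker swarm init --autolock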
--cert-expiry
This flag sets the validity period for node certificates.
--dispatcher-heartbeat
This flag sets how frequently nodes report their health to the dispatcher.
--external-ca
This flag sets up the swarm to use an external CA to issue node certificates. The value takes the
form protocol=X,url=Y. The value for protocol specifies what protocol should be used to send
signing requests to the external CA. Currently, the only supported value is cfssl. The URL specifies
the endpoint where signing requests should be submitted.
--force-new-cluster
This flag forces an existing node that was part of a quorum that was lost to restart as a single node
Manager without losing its data.
--listen-addr
The node listens for inbound swarm manager traffic on this address. The default is to listen on
0.0.0.0:2377. It is also possible to specify a network interface to listen on that interface’s address; for
example --listen-addr eth0:2377.
Specifying a port is optional. If the value is a bare IP address or interface name, the default port
2377 will be used.
--advertise-addr
This flag specifies the address that will be advertised to other members of the swarm for API access
and overlay networking. If unspecified, Docker will check if the system has a single IP address, and
use that IP address with the listening port (see --listen-addr). If the system has multiple IP
addresses, --advertise-addr must be specified so that the correct address is chosen for inter-
manager communication and overlay networking.
It is also possible to specify a network interface to advertise that interface’s address; for example --
advertise-addr eth0:2377.
Specifying a port is optional. If the value is a bare IP address or interface name, the default port
2377 will be used.
--data-path-addr
This flag specifies the address that global scope network drivers will publish towards other nodes in
order to reach the containers running on this node. Using this parameter it is then possible to
separate the container’s data traffic from the management traffic of the cluster. If unspecified,
Docker will use the same IP address or interface that is used for the advertise address.
--data-path-port
This flag allows you to configure the UDP port number to use for data path traffic. The provided port
number must be within the 1024 - 49151 range. If this flag is not set or is set to 0, the default port
number 4789 is used. The data path port can only be configured when initializing the swarm, and
applies to all nodes that join the swarm. The following example initializes a new swarm and
configures the data path port to UDP port 7777.
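A sketch of that command, based on the flag syntax described above:
$ docker swarm init --data-path-port=7777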
After the swarm is initialized, use the docker info command to verify that the port is configured:
docker info
...
ClusterID: 9vs5ygs0gguyyec4iqf2314c0
Managers: 1
Nodes: 1
Data Path Port: 7777
...
--default-addr-pool
This flag specifies default subnet pools for global scope networks. Format example is --default-
addr-pool 30.30.0.0/16 --default-addr-pool 40.40.0.0/16
--default-addr-pool-mask-length
This flag specifies default subnet pools mask length for default-addr-pool. Format example is --
default-addr-pool-mask-length 24
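Combining the two flags, an initialization sketch using the example values above:
$ docker swarm init --default-addr-pool 30.30.0.0/16 --default-addr-pool 40.40.0.0/16 --default-addr-pool-mask-length 24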
--task-history-limit
This flag sets the task history retention limit.
--max-snapshots
This flag sets the number of old Raft snapshots to retain in addition to the current Raft snapshots. By
default, no old snapshots are retained. This option may be used for debugging, or to store old
snapshots of the swarm state for disaster recovery purposes.
--snapshot-interval
This flag specifies how many log entries to allow in between Raft snapshots. Setting this to a higher
number will trigger snapshots less frequently. Snapshots compact the Raft log and allow for more
efficient transfer of the state to new managers. However, there is a performance cost to taking
snapshots frequently.
--availability
This flag specifies the availability of the node at the time the node joins a master. Possible
availability values are active, pause, or drain.
This flag is useful in certain situations. For example, a cluster may want to have dedicated manager
nodes that do not serve as worker nodes. This can be achieved by passing
--availability=drain to docker swarm init.
Description
Manage join tokens
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Swarm This command works with the Swarm orchestrator.
Usage
docker swarm join-token [OPTIONS] (worker|manager)
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Join a swarm as a node and/or manager
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker swarm join [OPTIONS] HOST:PORT
Options
Name, shorthand      Default   Description
--advertise-addr               Advertised address (format: <ip|interface>[:port])
--data-path-addr               Address or interface to use for data path traffic (format: <ip|interface>) (API 1.31+)
Parent command
Command Description
Extended description
Join a node to a swarm. The node joins as a manager node or worker node based upon the token
you pass with the --token flag. If you pass a manager token, the node joins as a manager. If you
pass a worker token, the node joins as a worker.
Examples
Join a node to swarm as a manager
The example below demonstrates joining a manager node using a manager token.
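A sketch of the command (the token is a placeholder; use the value printed by docker swarm join-token manager, and the manager address from your own swarm):
$ docker swarm join --token <manager-token> 192.168.99.121:2377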
A cluster should only have 3-7 managers at most, because a majority of managers must be available
for the cluster to function. Nodes that aren’t meant to participate in this management quorum should
join as workers instead. Managers should be stable hosts that have static IP addresses.
--listen-addr value
If the node is a manager, it will listen for inbound swarm manager traffic on this address. The default
is to listen on 0.0.0.0:2377. It is also possible to specify a network interface to listen on that
interface’s address; for example --listen-addr eth0:2377.
Specifying a port is optional. If the value is a bare IP address, or interface name, the default port
2377 will be used.
--advertise-addr value
This flag specifies the address that will be advertised to other members of the swarm for API access.
If unspecified, Docker will check if the system has a single IP address, and use that IP address with
the listening port (see --listen-addr). If the system has multiple IP addresses, --advertise-
addr must be specified so that the correct address is chosen for inter-manager communication and
overlay networking.
It is also possible to specify a network interface to advertise that interface’s address; for example --
advertise-addr eth0:2377.
Specifying a port is optional. If the value is a bare IP address, or interface name, the default port
2377 will be used.
This flag is generally not necessary when joining an existing swarm. If you’re joining new nodes
through a load balancer, you should use this flag to ensure the node advertises its IP address and
not the IP address of the load balancer.
--data-path-addr
This flag specifies the address that global scope network drivers will publish towards other nodes in
order to reach the containers running on this node. Using this parameter it is then possible to
separate the container’s data traffic from the management traffic of the cluster. If unspecified,
Docker will use the same IP address or interface that is used for the advertise address.
--token string
Secret value required for nodes to join the swarm
--availability
This flag specifies the availability of the node at the time the node joins a master. Possible
availability values are active, pause, or drain.
This flag is useful in certain situations. For example, a cluster may want to have dedicated manager
nodes that do not serve as worker nodes. This can be achieved by passing
--availability=drain to docker swarm join.
Description
Leave the swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker swarm leave [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
When you run this command on a worker, that worker leaves the swarm.
You can use the --force option on a manager to remove it from the swarm. However, this does not
reconfigure the swarm to ensure that there are enough managers to maintain a quorum in the
swarm. The safe way to remove a manager from a swarm is to demote it to a worker and then direct
it to leave the quorum without using --force. Only use --force in situations where the swarm will no
longer be used after the manager leaves, such as in a single-node swarm.
Examples
Consider the following swarm, as seen from the manager:
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
7ln70fl22uw2dvjn2ft53m3q5 worker2 Ready Active
dkp8vy1dq1kxleu9g4u78tlag worker1 Ready Active
dvfxp4zseq4s0rih1selh0d20 * manager1 Ready Active Leader
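To make worker2 leave, you would run the command from worker2 itself (no arguments are needed):
$ docker swarm leave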
The node will still appear in the node list, and is marked as down. It no longer affects swarm operation,
but a long list of down nodes can clutter the node list. To remove an inactive node from the list, use
the node rm command.
Description
Manage the unlock key
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Description
Unlock swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker swarm unlock
Parent command
Command Description
Related commands
Command Description
Extended description
Unlocks a locked manager using a user-supplied unlock key. This command must be used to
reactivate a manager after its Docker daemon restarts if the autolock setting is turned on. The unlock
key is printed at the time when autolock is enabled, and is also available from the docker swarm
unlock-key command.
Examples
$ docker swarm unlock
Please enter unlock key:
Description
Update the swarm
API 1.24+ The client and daemon API must both be at least 1.24 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker swarm update [OPTIONS]
Options
Name, shorthand Default Description
--dispatcher-heartbeat   5s      Dispatcher heartbeat period (ns|us|ms|s|m|h)
--max-snapshots                  Number of additional Raft snapshots to retain (API 1.25+)
--snapshot-interval      10000   Number of log entries between Raft snapshots (API 1.25+)
Parent command
Command Description
Related commands
Command Description
Extended description
Updates a swarm with new parameter values. This command must target a manager node.
Examples
$ docker swarm update --cert-expiry 720h
docker system
Estimated reading time: 1 minute
Description
Manage Docker
Usage
docker system COMMAND
Child commands
Command Description
docker system events Get real time events from the server
Extended description
Manage Docker.
docker system df
Estimated reading time: 3 minutes
Description
Show docker disk usage
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker system df [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker system events Get real time events from the server
Extended description
The docker system df command displays information regarding the amount of disk space used by
the docker daemon.
Examples
By default the command will just show a summary of the data used:
$ docker system df
A more detailed view can be requested using the -v, --verbose flag:
$ docker system df -v
Images space usage:
...

Local Volumes space usage:
VOLUME NAME                                                        LINKS    SIZE
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e   2        36 B
my-named-vol                                                       0        0 B
SHARED SIZE is the amount of space that an image shares with another one (i.e. their common data).
UNIQUE SIZE is the amount of space that is only used by a given image.
SIZE is the virtual size of the image; it is the sum of SHARED SIZE and UNIQUE SIZE.
Note: Network information is not shown, because it does not consume disk space.
docker system events
Estimated reading time: 11 minutes
Description
Get real time events from the server
Usage
docker system events [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker system events Get real time events from the server
Command Description
Extended description
Use docker system events to get real-time events from the server. These events differ per Docker
object type.
Object types
CONTAINERS
attach
commit
copy
create
destroy
detach
die
exec_create
exec_detach
exec_start
export
health_status
kill
oom
pause
rename
resize
restart
start
stop
top
unpause
update
IMAGES
delete
import
load
pull
push
save
tag
untag
PLUGINS
install
enable
disable
remove
VOLUMES
create
mount
unmount
destroy
NETWORKS
create
connect
disconnect
destroy
DAEMONS
reload
The --since and --until parameters can be Unix timestamps, date formatted timestamps, or Go
duration strings (e.g. 10m, 1h30m) computed relative to the client machine’s time. If you do not provide
the --since option, the command returns only new and/or live events. Supported formats for date
formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05, 2006-01-
02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the client will be
used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp. When
providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of seconds
that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds (aka Unix
epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more than nine
digits long.
FILTERING
The filtering flag (-f or --filter) format is of the form “key=value”. If you would like to use multiple filters,
pass multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").
Using the same filter multiple times is handled as an OR; for example, --filter
container=588a23dac085 --filter container=a8f7720b8c22 displays events for container
588a23dac085 OR container a8f7720b8c22.
Using multiple filters is handled as an AND; for example, --filter container=588a23dac085 --
filter event=start displays events for container 588a23dac085 where the event type
is start.
FORMAT
If a format (--format) is specified, the given template will be executed instead of the default format.
Go’s text/template package describes all the details of the format.
If a format is set to {{json .}}, the events are streamed as valid JSON Lines. For information about
JSON Lines, please refer to http://jsonlines.org/ .
Examples
Basic example
You’ll need two shells for this example.
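In the first shell, listen for events; the output below matches a format string along these lines (the exact template is an assumption). In the second shell, create and remove a short-lived container:
$ docker system events --format 'Type={{.Type}} Status={{.Status}} ID={{.ID}}'
$ docker run --rm -it alpine true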
Type=container Status=create
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=attach
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=start
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=resize
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=die
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
Type=container Status=destroy
ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
FORMAT AS JSON
$ docker system events --format '{{json .}}'
{"status":"create","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
{"status":"attach","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
{"Type":"network","Action":"connect","Actor":{"ID":"1b50a5bf755f6021dfa78e..
{"status":"start","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f42..
{"status":"resize","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
Description
Display system-wide information
Usage
docker system info [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker system events Get real time events from the server
Description
Remove unused data
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker system prune [OPTIONS]
Options
Name, shorthand Default Description
--filter              Provide filter values (e.g. 'label=<key>=<value>') (API 1.28+)
Related commands
Command Description
docker system events Get real time events from the server
Extended description
Remove all unused containers, networks, images (both dangling and unreferenced), and optionally,
volumes.
Examples
$ docker system prune
Deleted Containers:
f44f9b81948b3919590d5f79a680d8378f1139b41952e219830a33027c80c867
792776e68ac9d75bce4092bc1b5cc17b779bc926ab04f4185aec9bf1c0d4641f
Deleted Networks:
network1
network2
Deleted Images:
untagged: hello-
world@sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f
deleted: sha256:1815c82652c03bfd8644afda26fb184f2ed891d921b20a0703b46768f9755c57
deleted: sha256:45761469c965421a92a69cc50e92c01e0cfa94fe026cdd1233445ea00e96289a
By default, volumes are not removed to prevent important data from being deleted if there is
currently no container using the volume. Use the --volumes flag when running the command to
prune volumes as well:
$ docker system prune -a --volumes
Deleted Containers:
0998aa37185a1a7036b0e12cf1ac1b6442dcfa30a5c9650a42ed5010046f195b
73958bfb884fa81fa4cc6baf61055667e940ea2357b4036acbbe25a60f442a4d
Deleted Networks:
my-network-a
my-network-b
Deleted Volumes:
named-vol
Deleted Images:
untagged: my-curl:latest
deleted: sha256:7d88582121f2a29031d92017754d62a0d1a215c97e8f0106c586546e7404447d
deleted: sha256:dd14a93d83593d4024152f85d7c63f76aaa4e73e228377ba1d130ef5149f4d8b
untagged: alpine:3.3
deleted: sha256:695f3d04125db3266d4ab7bbb3c6b23aa4293923e762aa2562c54f49a28f009f
untagged: alpine:latest
deleted: sha256:ee4603260daafe1a8c2f3b78fd760922918ab2441cbb2853ed5c439e59c52f96
deleted: sha256:9007f5987db353ec398a223bc5a135c5a9601798ba20a1abba537ea2f8ac765f
deleted: sha256:71fa90c8f04769c9721459d5aa0936db640b92c8c91c9b589b54abd412d120ab
deleted: sha256:bb1c3357b3c30ece26e6604aea7d2ec0ace4166ff34c3616701279c22444c0f3
untagged: my-jq:latest
deleted: sha256:6e66d724542af9bc4c4abf4a909791d7260b6d0110d8e220708b09e4ee1322e1
deleted: sha256:07b3fa89d4b17009eb3988dfc592c7d30ab3ba52d2007832dffcf6d40e3eda7f
deleted: sha256:3a88a5c81eb5c283e72db2dbc6d65cbfd8e80b6c89bb6e714cfaaa0eed99c548
Note: The --volumes option was added in Docker 17.06.1. Older versions of Docker prune volumes
by default, along with other Docker objects. On older versions, run docker container prune, docker
network prune, and docker image prune separately to remove unused containers, networks, and
images, without removing volumes.
Filtering
The filtering flag (--filter) format is of the form “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz").
until (<timestamp>) - only remove containers, images, and networks created before given
timestamp
label (label=<key>, label=<key>=<value>, label!=<key>, or label!=<key>=<value>) - only
remove containers, images, networks, and volumes with (or without, in case label!=... is
used) the specified labels.
The until filter can be Unix timestamps, date formatted timestamps, or Go duration strings
(e.g. 10m, 1h30m) computed relative to the daemon machine’s time. Supported formats for date
formatted time stamps include RFC3339Nano, RFC3339, 2006-01-02T15:04:05,
2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02. The local timezone on the daemon will
be used if you do not provide either a Z or a +-00:00 timezone offset at the end of the timestamp.
When providing Unix timestamps enter seconds[.nanoseconds], where seconds is the number of
seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds
(aka Unix epoch or Unix time), and the optional .nanoseconds field is a fraction of a second no more
than nine digits long.
The label filter accepts two formats. One is the label=... (label=<key> or label=<key>=<value>),
which removes containers, images, networks, and volumes with the specified labels. The other
format is the label!=... (label!=<key> or label!=<key>=<value>), which removes containers,
images, networks, and volumes without the specified labels.
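For example, a hedged invocation that removes objects older than 24 hours while keeping anything carrying a keep label (the label name is illustrative):
$ docker system prune --filter "until=24h" --filter "label!=keep"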
docker tag
Estimated reading time: 2 minutes
Description
Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
Usage
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Parent command
Command Description
A tag name must be valid ASCII and may contain lowercase and uppercase letters, digits,
underscores, periods and dashes. A tag name may not start with a period or a dash and may contain
a maximum of 128 characters.
You can group your images together using names and tags, and then upload them to Share Images
via Repositories.
Examples
Tag an image referenced by ID
To tag a local image with ID “0e5574283393” into the “fedora” repository with “version1.0”:
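A command along these lines does it (image ID and repository names as given in the surrounding text):
$ docker tag 0e5574283393 fedora/httpd:version1.0
To tag by name instead of ID, omitting the source tag (the case the note below refers to):
$ docker tag httpd fedora/httpd:version1.0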
Note that since the tag name is not specified, the alias is created for an existing local
version httpd:latest.
docker template
Estimated reading time: 1 minute
Description
Use templates to quickly create new services
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Child commands
Command Description
docker template config      Modify docker template configuration
docker template inspect     Inspect service templates or application templates
docker template version     Print version information
Parent command
Command Description
Description
Modify docker template configuration
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Child commands
Command Description
docker template config set set default values for docker template
docker template config view view default values for docker template
Parent command
Command Description
Related commands
Command Description
docker template config      Modify docker template configuration
docker template inspect     Inspect service templates or application templates
docker template version     Print version information
docker template config set
Estimated reading time: 2 minutes
Description
set default values for docker template
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker template config set
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker template config set set default values for docker template
docker template config view view default values for docker template
Description
view default values for docker template
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker template config view
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker template config set set default values for docker template
docker template config view view default values for docker template
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker template inspect <service or application>
Options
Name, shorthand Default Description
Parent command
Command Description
docker template config      Modify docker template configuration
docker template inspect     Inspect service templates or application templates
docker template version     Print version information
Description
List available templates with their information
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker template list
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker template config      Modify docker template configuration
docker template inspect     Inspect service templates or application templates
docker template version     Print version information
docker template scaffold
Estimated reading time: 2 minutes
Description
Choose an application template or service template(s) and scaffold a new project
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker template scaffold application [<alias=service>...] OR scaffold [alias=]service
[<[alias=]service>...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker template config      Modify docker template configuration
docker template inspect     Inspect service templates or application templates
docker template version     Print version information
Examples
docker template scaffold react-java-mysql -s back.java=10 -s front.externalPort=80
docker template scaffold react-java-mysql java=back reactjs=front -s reactjs.externalPort=80
docker template scaffold back=spring front=react -s back.externalPort=9000
docker template scaffold react-java-mysql --server=myregistry:5000 --org=myorg
Description
Print version information
This command is experimental on the Docker client. It should not be used in production
environments. To enable experimental features in the Docker CLI, edit the config.json and
set experimental to enabled.
Experimental features provide early access to future product functionality. These features are
intended for testing and feedback only as they may change between releases without warning or can
be removed entirely from a future release. Experimental features must not be used in production
environments. Docker does not offer support for experimental features. For more information,
see Experimental features.
To enable experimental features in the Docker CLI, edit the config.json file and set experimental to
enabled.
To enable experimental features from the Docker Desktop menu, click Settings (Preferences on
macOS) > Daemon and then select the Experimental features check box.
Usage
docker template version
Parent command
Command Description
Related commands
Command Description
docker template config      Modify docker template configuration
docker template inspect     Inspect service templates or application templates
docker template version     Print version information
docker top
Estimated reading time: 1 minute
Description
Display the running processes of a container
Usage
docker top CONTAINER [ps OPTIONS]
Parent command
Command Description
docker trust
Estimated reading time: 1 minute
Description
Manage trust on Docker images
Usage
docker trust COMMAND
Child commands
Command Description
docker trust inspect Return low-level information about keys and signatures
docker trust signer Manage entities who can sign Docker images
Parent command
Command Description
Description
Return low-level information about keys and signatures
Usage
docker trust inspect IMAGE[:TAG] [IMAGE[:TAG]...]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker trust inspect Return low-level information about keys and signatures
docker trust signer Manage entities who can sign Docker images
Extended description
docker trust inspect provides low-level JSON information on signed repositories. This includes all
image tags that are signed, who signed them, and who can sign new tags.
Examples
Get low-level details about signatures for a single image tag
Use the docker trust inspect command to get trust information about an image. The following example prints
trust information for the alpine:latest image:
$ docker trust inspect alpine:latest
[
{
"Name": "alpine:latest",
"SignedTags": [
{
"SignedTag": "latest",
"Digest": "d6bfc3baf615dc9618209a8d607ba2a8103d9c8a405b3bd8741d88b4bef36478",
"Signers": [
"Repo Admin"
]
}
],
"Signers": [],
"AdministrativeKeys": [
{
"Name": "Repository",
"Keys": [
{
"ID":
"5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd"
}
]
},
{
"Name": "Root",
"Keys": [
{
"ID":
"a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce"
}
]
}
]
}
]
The SignedTags key will list the SignedTag name, its Digest, and the Signers responsible for the
signature.
AdministrativeKeys will list the Repository and Root keys.
If signers are set up for the repository via other docker trust commands, docker trust
inspect includes a Signers key:
If the image tag is unsigned or unavailable, docker trust inspect does not display any signed tags.
$ docker trust inspect unsigned-img
No signatures or cannot access unsigned-img
However, if other tags are signed in the same image repository, docker trust inspect reports
relevant key information:
$ docker trust inspect alpine:unsigned
[
{
"Name": "alpine:unsigned",
"Signers": [],
"AdministrativeKeys": [
{
"Name": "Repository",
"Keys": [
{
"ID":
"5a46c9aaa82ff150bb7305a2d17d0c521c2d784246807b2dc611f436a69041fd"
}
]
},
{
"Name": "Root",
"Keys": [
{
"ID":
"a2489bcac7a79aa67b19b96c4a3bf0c675ffdf00c6d2fabe1a5df1115e80adce"
}
]
}
]
}
]
Formatting
You can print the inspect output in a human-readable format instead of the default JSON output, by
using the --pretty option:
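For example, reusing the alpine:latest image inspected earlier (output omitted here; it has the shape shown below):
$ docker trust inspect --pretty alpine:latest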
The SIGNED TAG is the signed image tag with a unique content-addressable DIGEST. SIGNERS lists all
entities who have signed.
The administrative keys listed specify the root key of trust, as well as the administrative repository
key. These keys are responsible for modifying signers, and rotating keys for the signed repository.
If signers are set up for the repository via other docker trust commands, docker trust inspect --
pretty displays them appropriately as a SIGNER and specifies their KEYS:
SIGNER KEYS
alice 47caae5b3e61, a85aab9d20a4
bob 034370bcbd77, 82a66673242c
carol b6f9f8e1aab0
Here’s an example with signers that are set up by docker trust commands:
$ docker trust inspect --pretty my-image
SIGNED TAG   DIGEST                                                             SIGNERS
red          852cc04935f930a857b630edc4ed6131e91b22073bcc216698842e44f64d2943   alice
blue         f1c38dbaeeb473c36716f6494d803fbfbe9d8a76916f7c0093f227821e378197   alice, bob
green        cae8fedc840f90c8057e1c24637d11865743ab1e61a972c1c9da06ec2de9a139   alice, bob
yellow       9cc65fc3126790e683d1b92f307a71f48f75fa7dd47a7b03145a123eaf0b45ba   carol
purple       941d3dba358621ce3c41ef67b47cf80f701ff80cdf46b5cc86587eaebfe45557   alice, bob, carol
orange       d6c271baa6d271bcc24ef1cbd65abf39123c17d2e83455bdab545a1a9093fc1c   alice
SIGNER KEYS
alice 47caae5b3e61, a85aab9d20a4
bob 034370bcbd77, 82a66673242c
carol b6f9f8e1aab0
Description
Manage keys for signing Docker images
Usage
docker trust key COMMAND
Child commands
Command Description
docker trust key load Load a private key file for signing
Parent command
Command Description
Related commands
Command Description
docker trust inspect Return low-level information about keys and signatures
docker trust signer Manage entities who can sign Docker images
Usage
docker trust key generate NAME
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker trust key load Load a private key file for signing
Description
Load a private key file for signing
Usage
docker trust key load [OPTIONS] KEYFILE
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker trust key load Load a private key file for signing
Description
Remove trust for an image
Usage
docker trust revoke [OPTIONS] IMAGE[:TAG]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker trust inspect Return low-level information about keys and signatures
docker trust signer Manage entities who can sign Docker images
Extended description
docker trust revoke removes signatures from tags in signed repositories.
Examples
Revoke signatures from a signed tag
Here’s an example of a repo with two signed tags:
SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2
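A revoke invocation might look like this (the v1 tag is a placeholder):
$ docker trust revoke example/trust-demo:v1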
After revocation, the tag is removed from the list of released tags:
SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2
Administrative keys for example/trust-demo:
Repository Key: ecc457614c9fc399da523a5f4e24fe306a0a6ee1cc79a10e4555b3c6ab02f71e
Root Key: 3cb2228f6561e58f46dbc4cda4fcaff9d5ef22e865a94636f82450d1d2234949
SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2
All tags that have alice’s signature on them are removed from the list of released tags:
$ docker trust view example/trust-demo
SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2
Description
Sign an image
Usage
docker trust sign IMAGE:TAG
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
docker trust inspect Return low-level information about keys and signatures
docker trust signer Manage entities who can sign Docker images
Extended description
docker trust sign adds signatures to tags to create signed repositories.
Examples
Sign a tag as a repo admin
Given an image:
SIGNER KEYS
alice 05e87edcaecb
bob 5600f5ab76a2
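A signing invocation might look like this (repository and tag are placeholders):
$ docker trust sign example/trust-demo:v1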
Description
Manage entities who can sign Docker images
Usage
docker trust signer COMMAND
Child commands
Command Description
Parent command
Command Description
docker trust inspect Return low-level information about keys and signatures
docker trust signer Manage entities who can sign Docker images
Description
Add a signer
Usage
docker trust signer add OPTIONS NAME REPOSITORY [REPOSITORY...]
Options
Name, shorthand Default Description
Parent command
Command Description
docker trust signer Manage entities who can sign Docker images
Related commands
Command Description
Description
Remove a signer
Usage
docker trust signer remove [OPTIONS] NAME REPOSITORY [REPOSITORY...]
Options
Name, shorthand Default Description
Parent command
Command Description
docker trust signer Manage entities who can sign Docker images
Related commands
Command Description
docker unpause
Estimated reading time: 1 minute
Description
Unpause all processes within one or more containers
Usage
docker unpause CONTAINER [CONTAINER...]
Parent command
Command Description
Extended description
The docker unpause command un-suspends all processes in the specified containers. On Linux, it
does this using the cgroups freezer.
Examples
$ docker unpause my_container
my_container
docker update
Estimated reading time: 4 minutes
Description
Update configuration of one or more containers
Usage
docker update [OPTIONS] CONTAINER [CONTAINER...]
Options
Name, shorthand Default Description
--cpu-rt-period              Limit the CPU real-time period in microseconds (API 1.25+)
--cpu-rt-runtime             Limit the CPU real-time runtime in microseconds (API 1.25+)
--cpus                       Number of CPUs (API 1.29+)
--memory-reservation         Memory soft limit
--pids-limit                 Tune container pids limit (set -1 for unlimited) (API 1.40+)
Parent command
Command Description
Extended description
The docker update command dynamically updates container configuration. You can use this
command to prevent containers from consuming too many resources from their Docker host. With a
single command, you can place limits on a single container or on many. To specify more than one
container, provide a space-separated list of container names or IDs.
With the exception of the --kernel-memory option, you can specify these options on a running or a
stopped container. On kernel versions older than 4.6, you can only update --kernel-memory on a
stopped container or on a running container with kernel memory initialized.
Warning: The docker update and docker container update commands are not supported for
Windows containers.
Examples
The following sections illustrate ways to use this command.
Updating the kernel memory of the running container test2 will fail. You need to stop the container before
updating the --kernel-memory setting. The next time you start it, the container uses the new value.
Kernel versions 4.6 and newer do not have this limitation, and you can use --kernel-memory the same
way as other options.
Note that if the container is started with “--rm” flag, you cannot update the restart policy for it.
The AutoRemove and RestartPolicy are mutually exclusive for the container.
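As a sketch, updating resource limits and the restart policy of a running container (my_container is a placeholder name):
$ docker update --cpu-shares 512 --memory-reservation 256M my_container
$ docker update --restart=on-failure:3 my_container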
docker version
Estimated reading time: 1 minute
Description
Show the Docker version information
Usage
docker version [OPTIONS]
Options
Name, shorthand Default Description
--kubeconfig            Kubernetes config file (Kubernetes)
Parent command
Command Description
Extended description
By default, this will render all version information in an easy to read layout. If a format is specified,
the given template will be executed instead.
Examples
Default output
$ docker version
Client:
Version: 1.8.0
API version: 1.20
Go version: go1.4.2
Git commit: f5bae0a
Built: Tue Jun 23 17:56:00 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.8.0
API version: 1.20
Go version: go1.4.2
Git commit: f5bae0a
Built: Tue Jun 23 17:56:00 UTC 2015
OS/Arch: linux/amd64
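The bare version string below presumably comes from a format template that extracts only the server version, for example:
$ docker version --format '{{.Server.Version}}'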
1.8.0
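And the JSON dump that follows corresponds to the json directive:
$ docker version --format '{{json .}}'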
{"Client":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f5bae0a","GoVersion":"g
o1.4.2","Os":"linux","Arch":"amd64","BuildTime":"Tue Jun 23 17:56:00 UTC
2015"},"ServerOK":true,"Server":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f
5bae0a","GoVersion":"go1.4.2","Os":"linux","Arch":"amd64","KernelVersion":"3.13.2-
gentoo","BuildTime":"Tue Jun 23 17:56:00 UTC 2015"}}
Description
Create a volume
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker volume create [OPTIONS] [VOLUME]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Creates a new volume that containers can consume and store data in. If a name is not specified,
Docker generates a random name.
Examples
Create a volume and then configure the container to use it:
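The first step, creating the volume (its name matches the output line below):
$ docker volume create hello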
hello
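Then mount it into a container (busybox is an illustrative image):
$ docker run -d -v hello:/world busybox ls /world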
The mount is created inside the container’s /world directory. Docker does not support relative paths
for mount points inside the container.
Multiple containers can use the same volume at the same time. This is useful if two
containers need access to shared data. For example, if one container writes and the other reads the
data.
Volume names must be unique among drivers. This means you cannot use the same volume name
with two different drivers. If you attempt this, docker returns an error:
A volume named "hello" already exists with the "some-other" driver. Choose a
different volume name.
If you specify a volume name already in use on the current driver, Docker assumes you want to re-
use the existing volume and does not return an error.
Driver-specific options
Some volume drivers may take options to customize the volume creation. Use the -o or --opt flags
to pass driver options:
$ docker volume create --driver fake \
--opt tardis=blue \
--opt timey=wimey \
foo
These options are passed directly to the volume driver. Options for different volume drivers may do
different things (or nothing at all).
The built-in local driver on Windows does not support any options.
The built-in local driver on Linux accepts options similar to the Linux mount command. You can
provide multiple options by passing the --opt flag multiple times. Some mount options (such as
the o option) can take a comma-separated list of options. The complete list of available mount options
can be found here.
For example, the following creates a tmpfs volume called foo with a size of 100 megabytes and a uid of
1000.
$ docker volume create --driver local \
--opt type=tmpfs \
--opt device=tmpfs \
--opt o=size=100m,uid=1000 \
foo
Another example that uses btrfs:
$ docker volume create --driver local \
--opt type=btrfs \
--opt device=/dev/sda2 \
foo
Another example that uses nfs to mount the /path/to/dir in rw mode from192.168.1.1:
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
Description
Display detailed information on one or more volumes
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker volume inspect [OPTIONS] VOLUME [VOLUME...]
Options
Related commands
Command Description
Extended description
Returns information about a volume. By default, this command renders all results in a JSON array.
You can specify an alternate format to execute a given template for each result.
Go's text/template package describes all the details of the format.
Examples
$ docker volume create
85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d
$ docker volume inspect 85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d
[
{
"Name": "85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777be24d",
"Driver": "local",
"Mountpoint":
"/var/lib/docker/volumes/85bffb0677236974f93955d8ecc4df55ef5070117b0e53333cc1b443777b
e24d/_data",
"Status": null
}
]
docker volume ls
Estimated reading time: 5 minutes
Description
List volumes
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker volume ls [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
List all the volumes known to Docker. You can filter using the -f or --filter flag. Refer to
the filtering section for more information about available filter options.
Examples
Create a volume
$ docker volume create rosemary
rosemary
$ docker volume create tyler
tyler
$ docker volume ls
Filtering
The filtering flag (-f or --filter) format is of “key=value”. If there is more than one filter, then pass
multiple flags (e.g., --filter "foo=bar" --filter "bif=baz")
DANGLING
The dangling filter matches on all volumes not referenced by any containers
$ docker run -d -v tyler:/tmpwork busybox
f86a7dd02898067079c99ceacd810149060a70528eff3754d0b0f1a93bd0af18
$ docker volume ls -f dangling=true
DRIVER VOLUME NAME
local rosemary
LABEL
The label filter matches volumes based on the presence of a label alone or a label and a value.
$ docker volume create the-doctor --label is-timelord=yes
the-doctor
$ docker volume create daleks --label is-timelord=no
daleks
The following example filter matches volumes with the is-timelord label regardless of its value.
$ docker volume ls --filter label=is-timelord
As the above example demonstrates, both volumes with is-timelord=yes and is-timelord=no are
returned.
Filtering on both key and value of the label, produces the expected result:
$ docker volume ls --filter label=is-timelord=yes
DRIVER VOLUME NAME
local the-doctor
Specifying multiple label filters produces an “and” search; all conditions should be met.
Formatting
The formatting options (--format) pretty-prints volumes output using a Go template.
Placeholder Description
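The output below presumably comes from a template that joins the Name and Driver placeholders with a colon, for example:
$ docker volume ls --format "{{.Name}}: {{.Driver}}"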
vol1: local
vol2: local
vol3: local
Description
Remove all unused local volumes
API 1.25+ The client and daemon API must both be at least 1.25 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker volume prune [OPTIONS]
Options
Name, shorthand Default Description
Parent command
Command Description
Related commands
Command Description
Extended description
Remove all unused local volumes. Unused local volumes are those which are not referenced by any
containers.
Examples
$ docker volume prune
WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e
my-named-vol
docker volume rm
Estimated reading time: 1 minute
Description
Remove one or more volumes
API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use
the docker version command on the client to check your client and daemon API versions.
Usage
docker volume rm [OPTIONS] VOLUME [VOLUME...]
Options
Name, shorthand Default Description
--force , -f            Force the removal of one or more volumes (API 1.25+)
Parent command
Command Description
Related commands
Command Description
Examples
$ docker volume rm hello
hello
docker wait
Estimated reading time: 1 minute
Description
Block until one or more containers stop, then print their exit codes
Usage
docker wait CONTAINER [CONTAINER...]
Parent command
Command Description
Examples
Start a container in the background.
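For example (the container name matches the commands below; the image and command are illustrative):
$ docker run -dit --name=my_container ubuntu bash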
Run docker wait, which should block until the container exits.
$ docker wait my_container
In another terminal, stop the first container. The docker wait command above returns the exit code.
$ docker stop my_container
This is the same docker wait command from above, but it now exits, returning 0.
$ docker wait my_container
Options:
--add-runtime runtime Register an additional OCI compatible
runtime (default [])
--allow-nondistributable-artifacts list Push nondistributable artifacts to
specified registries (default [])
--api-cors-header string Set CORS headers in the Engine API
--authorization-plugin list Authorization plugins to load (default
[])
--bip string Specify network bridge IP
-b, --bridge string Attach containers to a network bridge
--cgroup-parent string Set parent cgroup for all containers
--cluster-advertise string Address or interface name to advertise
--cluster-store string URL of the distributed storage backend
--cluster-store-opt map Set cluster store options (default
map[])
--config-file string Daemon configuration file (default
"/etc/docker/daemon.json")
--containerd string Path to containerd socket
--cpu-rt-period int Limit the CPU real-time period in
microseconds
--cpu-rt-runtime int Limit the CPU real-time runtime in
microseconds
--data-root string Root directory of persistent Docker
state (default "/var/lib/docker")
-D, --debug Enable debug mode
--default-gateway ip Container default gateway IPv4 address
--default-gateway-v6 ip Container default gateway IPv6 address
--default-address-pool Set the default address pool for local
node networks
--default-runtime string Default OCI runtime for containers
(default "runc")
--default-ulimit ulimit Default ulimits for containers (default
[])
--dns list DNS server to use (default [])
--dns-opt list DNS options to use (default [])
--dns-search list DNS search domains to use (default [])
--exec-opt list Runtime execution options (default [])
--exec-root string Root directory for execution state
files (default "/var/run/docker")
--experimental Enable experimental features
--fixed-cidr string IPv4 subnet for fixed IPs
--fixed-cidr-v6 string IPv6 subnet for fixed IPs
-G, --group string Group for the unix socket (default
"docker")
--help Print usage
-H, --host list Daemon socket(s) to connect to (default
[])
--icc Enable inter-container communication
(default true)
--init Run an init in the container to forward
signals and reap processes
--init-path string Path to the docker-init binary
--insecure-registry list Enable insecure registry communication
(default [])
--ip ip Default IP when binding container ports
(default 0.0.0.0)
--ip-forward Enable net.ipv4.ip_forward (default
true)
--ip-masq Enable IP masquerading (default true)
--iptables Enable addition of iptables rules
(default true)
--ipv6 Enable IPv6 networking
--label list Set key=value labels to the daemon
(default [])
--live-restore Enable live restore of docker when
containers are still running
--log-driver string Default driver for container logs
(default "json-file")
-l, --log-level string Set the logging level ("debug", "info",
"warn", "error", "fatal") (default "info")
--log-opt map Default log driver options for
containers (default map[])
--max-concurrent-downloads int Set the max concurrent downloads for
each pull (default 3)
--max-concurrent-uploads int Set the max concurrent uploads for each
push (default 5)
--metrics-addr string Set default address and port to serve
the metrics api on
--mtu int Set the containers network MTU
--node-generic-resources list Advertise user-defined resource
--no-new-privileges Set no-new-privileges by default for
new containers
--oom-score-adjust int Set the oom_score_adj for the daemon
(default -500)
-p, --pidfile string Path to use for daemon PID file
(default "/var/run/docker.pid")
--raw-logs Full timestamps without ANSI coloring
--registry-mirror list Preferred Docker registry mirror
(default [])
--seccomp-profile string Path to seccomp profile
--selinux-enabled Enable selinux support
--shutdown-timeout int Set the default shutdown timeout
(default 15)
-s, --storage-driver string Storage driver to use
--storage-opt list Storage driver options (default [])
--swarm-default-advertise-addr string Set default address or interface for
swarm advertised address
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA
(default "~/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default
"~/.docker/cert.pem")
--tlskey string Path to TLS key file (default
"~/.docker/key.pem")
--tlsverify Use TLS and verify the remote
--userland-proxy Use userland proxy for loopback traffic
(default true)
--userland-proxy-path string Path to the userland proxy binary
--userns-remap string User/Group setting for user namespaces
-v, --version Print version information and quit
Description
dockerd is the persistent process that manages containers. Docker uses different binaries for the
daemon and client. To run the daemon you type dockerd.
To run the daemon with debug output, use dockerd -D or add "debug": true to the daemon.json file.
Note: In Docker 1.13 and higher, enable experimental features by starting dockerd with the --
experimental flag or adding "experimental": true to the daemon.json file. In earlier Docker versions,
a different build was required to enable experimental features.
Examples
Daemon socket option
The Docker daemon can listen for Docker Engine API requests via three different types of
socket: unix, tcp, and fd.
By default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock, requiring
either root permission, or docker group membership.
If you need to access the Docker daemon remotely, you need to enable the tcp socket. Beware that
the default setup provides un-encrypted and un-authenticated direct access to the Docker daemon -
and should be secured either using the built-in HTTPS encrypted socket, or by putting a secure web
proxy in front of it. You can listen on port 2375 on all network interfaces with -H tcp://0.0.0.0:2375,
or on a particular network interface using its IP address: -H tcp://192.168.59.103:2375. It is
conventional to use port 2375 for un-encrypted, and port 2376 for encrypted communication with the
daemon.
Note: If you’re using an HTTPS encrypted socket, keep in mind that only TLS1.0 and greater are
supported. Protocols SSLv3 and under are not supported anymore for security reasons.
On Systemd based systems, you can communicate with the daemon via Systemd socket activation
by using dockerd -H fd://. Using fd:// will work perfectly for most setups but you can also specify
individual sockets: dockerd -H fd://3. If the specified socket activated files aren’t found, then
Docker will exit. You can find examples of using Systemd socket activation with Docker and
Systemd in the Docker source tree.
You can configure the Docker daemon to listen to multiple sockets at the same time using multiple -
H options:
# listen using the default unix socket, and on 2 specific IP addresses on this host.
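# (example invocation; the two IP addresses below are illustrative)
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2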
The Docker client will honor the DOCKER_HOST environment variable to set the -H flag for the client.
Use one of the following commands:
$ docker -H tcp://0.0.0.0:2375 ps
$ export DOCKER_HOST="tcp://0.0.0.0:2375"
$ docker ps
Setting the DOCKER_TLS_VERIFY environment variable to any value other than the empty string is
equivalent to setting the --tlsverify flag. The following are equivalent:
$ docker --tlsverify ps
# or
$ export DOCKER_TLS_VERIFY=1
$ docker ps
The Docker client will honor the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables (or
the lowercase versions thereof). HTTPS_PROXY takes precedence over HTTP_PROXY.
Starting with Docker 18.09, the Docker client supports connecting to a remote daemon via SSH:
$ docker -H ssh://[email protected]:22 ps
$ docker -H ssh://[email protected] ps
$ docker -H ssh://example.com ps
To use SSH connection, you need to set up ssh so that it can reach the remote host with public key
authentication. Password authentication is not supported. If your key is protected with a passphrase,
you need to set up ssh-agent.
Also, you need to have the docker binary 18.09 or later on the daemon host.
Warning: Changing the default docker daemon binding to a TCP port or Unix docker user group will
increase your security risks by allowing non-root users to gain root access on the host. Make sure
you control access to docker. If you are binding to a TCP port, anyone with access to that port has
full Docker access; so it is not advisable on an open network.
With -H it is possible to make the Docker daemon listen on a specific IP and port. By default, it will
listen on unix:///var/run/docker.sock to allow only local connections by the root user.
You could set it to 0.0.0.0:2375 or a specific host IP to give access to everybody, but that is not
recommended because then it is trivial for someone to gain root access to the host where the
daemon is running.
Similarly, the Docker client can use -H to connect to a custom port. The Docker client will default to
connecting to unix:///var/run/docker.sock on Linux, and tcp://127.0.0.1:2376 on Windows.
-H accepts host and port assignment in the following format:
tcp://[host]:[port][path] or unix://path
For example:
tcp:// -> TCP connection to 127.0.0.1 on either port 2376 when TLS encryption is on, or
port 2375 when communication is in plain text.
tcp://host:2375 -> TCP connection on host:2375
tcp://host:2375/path -> TCP connection on host:2375 and prepend path to all requests
unix://path/to/socket -> Unix socket located at path/to/socket
-H, when empty, will default to the same value as when no -H was passed in.
-H also accepts short form for TCP bindings: host: or host:port or :port
You can use multiple -H, for example, if you want to listen on both TCP and a Unix socket
# Run docker in daemon mode
$ sudo <path to>/dockerd -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock &
# Download an ubuntu image, use default Unix socket
$ docker pull ubuntu
# OR use the TCP port
$ docker -H tcp://127.0.0.1:2375 pull ubuntu
Daemon storage-driver
On Linux, the Docker daemon has support for several different image layer storage
drivers: aufs, devicemapper, btrfs, zfs, overlay and overlay2.
The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged
into the main kernel. It is also known to cause some serious kernel crashes.
However aufs allows containers to share executable and shared library memory, so is a useful
choice when running thousands of containers with the same program or libraries.
The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each
devicemapper graph location – typically /var/lib/docker/devicemapper – a thin pool is created
based on two block devices, one for data and one for metadata. By default, these block devices are
created automatically by using loopback mounts of automatically created sparse files. Refer
to Devicemapper options below for a way to customize this setup. The jpetazzo/Resizing Docker
containers with the Device Mapper plugin article explains how to tune your existing setup without the
use of options.
The btrfs driver is very fast for docker build - but like devicemapper does not share executable
memory between devices. Use dockerd -s btrfs -g /mnt/btrfs_partition.
The zfs driver is probably not as fast as btrfs but has a longer track record on stability. Thanks
to Single Copy ARC shared blocks between clones will be cached only once. Use dockerd -s zfs.
To select a different zfs filesystem set zfs.fsname option as described in ZFS options.
The overlay driver is a very fast union filesystem. It is merged into the main Linux kernel as
of 3.18.0. overlay also supports page cache sharing: multiple containers accessing the
same file can share a single page cache entry (or entries), which makes overlay as efficient with
memory as the aufs driver. Call dockerd -s overlay to use it.
Note: As promising as overlay is, the feature is still quite young and should not be used in
production. Most notably, using overlay can cause excessive inode consumption (especially as the
number of images grows), as well as being incompatible with the use of RPMs.
The overlay2 driver uses the same fast union filesystem but takes advantage of additional features added
in Linux kernel 4.0 to avoid excessive inode consumption. Call dockerd -s overlay2 to use it.
Note: Both overlay and overlay2 are currently unsupported on btrfs or any Copy on Write
filesystem and should only be used over ext4 partitions.
On Windows, the Docker daemon supports a single image layer storage driver depending on the
image platform: windowsfilter for Windows images, and lcow for Linux containers on Windows.
DEVICEMAPPER OPTIONS
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.thinpooldev=/dev/mapper/thin-pool",
"dm.use_deferred_deletion=true",
"dm.use_deferred_removal=true"
]
}
dm.thinpooldev
Specifies a custom block storage device to use for the thin pool.
If using a block device for device mapper storage, it is best to use lvm to create and manage the
thin-pool volume. This volume is then handed to Docker to exclusively create snapshot volumes
needed for images and containers.
Managing the thin-pool outside of Engine makes for the most feature-rich method of having Docker
utilize device mapper thin provisioning as the backing storage for Docker containers. The highlights
of the lvm-based thin-pool management feature include: automatic or interactive thin-pool resize
support, dynamically changing thin-pool features, automatic thinp metadata checking when lvm
activates the thin-pool, etc.
As a fallback if no thin pool is provided, loopback files are created. Loopback is very slow, but can be
used without any pre-configuration of storage. It is strongly recommended that you do not use
loopback in production. Ensure your Engine daemon has a --storage-opt dm.thinpooldev argument
provided.
Example:
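A minimal sketch, using the thin-pool device name shown in the daemon.json snippet above:
$ sudo dockerd --storage-opt dm.thinpooldev=/dev/mapper/thin-pool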
dm.directlvm_device
As an alternative to providing a thin pool as above, Docker can set up a block device for you.
Example:
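A sketch, assuming /dev/xvdf is a spare block device on the host:
$ sudo dockerd --storage-opt dm.directlvm_device=/dev/xvdf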
dm.thinp_percent
Sets the percentage of the passed in block device to use for storage.
Example:
dm.thinp_metapercent
Sets the percentage of the passed in block device to use for metadata storage.
Example:
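An illustrative value (not a recommendation):
$ sudo dockerd --storage-opt dm.thinp_metapercent=1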
dm.thinp_autoextend_threshold
Sets the value of the percentage of space used before lvm attempts to autoextend the available
space [100 = disabled]
Example:
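An illustrative value:
$ sudo dockerd --storage-opt dm.thinp_autoextend_threshold=80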
dm.thinp_autoextend_percent
Sets the percentage value to increase the thin pool by when lvm attempts to autoextend the
available space [100 = disabled].
Example:
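An illustrative value:
$ sudo dockerd --storage-opt dm.thinp_autoextend_percent=20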
dm.basesize
Specifies the size to use when creating the base device, which limits the size of images and
containers. The default value is 10G. Note, thin devices are inherently “sparse”, so a 10G device
which is mostly empty doesn’t use 10 GB of space on the pool. However, the filesystem will use
more space for the empty case the larger the device is.
The base device size can be increased at daemon restart which will allow all future images and
containers (based on those new images) to be of the new base device size.
Examples
This will increase the base device size to 50G. The Docker daemon will throw an error if the existing
base device size is larger than 50G. A user can use this option to expand the base device size;
however, shrinking is not permitted.
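For example, to increase the base device size to 50G:
$ sudo dockerd --storage-opt dm.basesize=50G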
This value affects the system-wide “base” empty filesystem that may already be initialized and
inherited by pulled images. Typically, a change to this value requires additional steps to take effect:
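A commonly cited sequence on a test host is sketched below; it is destructive (it removes all existing
images and containers), so verify it against your environment before using it:
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start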
dm.loopdatasize
Note: This option configures devicemapper loopback, which should not be used in production.
Specifies the size to use when creating the loopback file for the “data” device which is used for the
thin pool. The default size is 100G. The file is sparse, so it will not initially take up this much space.
Example
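An illustrative value, doubling the default:
$ sudo dockerd --storage-opt dm.loopdatasize=200G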
dm.loopmetadatasize
Note: This option configures devicemapper loopback, which should not be used in production.
Specifies the size to use when creating the loopback file for the “metadata” device which is used for
the thin pool. The default size is 2G. The file is sparse, so it will not initially take up this much space.
Example
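An illustrative value, doubling the default:
$ sudo dockerd --storage-opt dm.loopmetadatasize=4G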
dm.fs
Specifies the filesystem type to use for the base device. The supported options are “ext4” and “xfs”.
The default is “xfs”.
Example
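For example, to use ext4 instead of the default xfs:
$ sudo dockerd --storage-opt dm.fs=ext4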
dm.mkfsarg
Specifies extra mkfs arguments to be used when creating the base device.
Example
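A sketch; the mkfs argument shown is illustrative:
$ sudo dockerd --storage-opt "dm.mkfsarg=-O ^has_journal"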
dm.mountopt
Specifies extra mount options used when mounting the thin devices.
Example
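A sketch; nodiscard is an illustrative mount option:
$ sudo dockerd --storage-opt dm.mountopt=nodiscard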
dm.datadev
(Deprecated, use dm.thinpooldev)
Specifies a custom blockdevice to use for data for the thin pool.
If using a block device for device mapper storage, ideally both datadev and metadatadevshould be
specified to completely avoid using the loopback device.
Example
$ sudo dockerd \
--storage-opt dm.datadev=/dev/sdb1 \
--storage-opt dm.metadatadev=/dev/sdc1
dm.metadatadev
(Deprecated, use dm.thinpooldev)
Specifies a custom blockdevice to use for metadata for the thin pool.
For best performance the metadata should be on a different spindle than the data, or even better on
an SSD.
If setting up a new metadata pool it is required to be valid. This can be achieved by zeroing the first
4k to indicate empty metadata, like this:
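A sketch, using the /dev/sdc1 metadata device from the example below:
$ dd if=/dev/zero of=/dev/sdc1 bs=4096 count=1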
Example
$ sudo dockerd \
--storage-opt dm.datadev=/dev/sdb1 \
--storage-opt dm.metadatadev=/dev/sdc1
dm.blocksize
Specifies a custom blocksize to use for the thin pool. The default blocksize is 64K.
Example
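An illustrative value:
$ sudo dockerd --storage-opt dm.blocksize=512K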
dm.blkdiscard
Enables or disables the use of blkdiscard when removing devicemapper devices. This is enabled by
default (only) if using loopback devices and is required to resparsify the loopback file on
image/container removal.
Disabling this on loopback can lead to much faster container removal times, but will make the space
used in /var/lib/docker directory not be returned to the system for other use when containers are
removed.
Examples
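For example, to disable blkdiscard:
$ sudo dockerd --storage-opt dm.blkdiscard=false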
dm.override_udev_sync_check
Overrides the udev synchronization checks between devicemapper and udev. udev is the device
manager for the Linux kernel.
To view the udev sync support of a Docker daemon that is using the devicemapper driver, run:
$ docker info
[...]
Udev Sync Supported: true
[...]
When udev sync support is true, then devicemapper and udev can coordinate the activation and
deactivation of devices for containers.
When udev sync support is false, a race condition occurs between the devicemapper and udev during
create and cleanup. The race condition results in errors and failures. (For information on these
failures, see docker#4036.)
To allow the docker daemon to start, regardless of udev sync not being supported,
set dm.override_udev_sync_check to true:
$ sudo dockerd --storage-opt dm.override_udev_sync_check=true
When this value is true, the devicemapper driver continues and simply warns you that errors are happening.
Note: The ideal is to pursue a docker daemon and environment that does support synchronizing
with udev. For further discussion on this topic, see docker#4036. Otherwise, set this flag for migrating
existing Docker daemons to a daemon with a supported environment.
dm.use_deferred_removal
Enables use of deferred device removal if libdm and the kernel driver support the mechanism.
Deferred device removal means that if a device is busy when devices are being removed/deactivated,
then a deferred removal is scheduled for that device. The device automatically goes away when its
last user exits.
For example, when a container exits, its associated thin device is removed. If that device has leaked
into some other mount namespace and can’t be removed, the container exit still succeeds and this
option causes the system to schedule the device for deferred removal. It does not wait in a loop
trying to remove a busy device.
Example
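For example:
$ sudo dockerd --storage-opt dm.use_deferred_removal=true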
dm.use_deferred_deletion
Enables use of deferred device deletion for thin pool devices. By default, thin pool device deletion is
synchronous. Before a container is deleted, the Docker daemon removes any associated devices. If
the storage driver can not remove a device, the container deletion fails and the daemon returns an error:
Error deleting container: Error response from daemon: Cannot destroy container
To avoid this failure, enable both deferred device deletion and deferred device removal on the
daemon.
$ sudo dockerd \
--storage-opt dm.use_deferred_deletion=true \
--storage-opt dm.use_deferred_removal=true
With these two options enabled, if a device is busy when the driver is deleting a container, the driver
marks the device as deleted. Later, when the device isn’t in use, the driver deletes it.
In general it should be safe to enable this option by default. It will help when unintentional leaking of
mount points happens across multiple mount namespaces.
dm.min_free_space
Specifies the minimum free space percent in a thin pool required for new device creation to succeed. This
check applies to both free data space as well as free metadata space. Valid values are from 0% -
99%. A value of 0% disables the free space checking logic. If the user does not specify a value for this
option, the Engine uses a default value of 10%.
Whenever a new thin pool device is created (during docker pull or during container creation), the
Engine checks if the minimum free space is available. If sufficient space is unavailable, then device
creation fails and any relevant docker operation fails.
To recover from this error, you must create more free space in the thin pool. You
can create free space by deleting some images and containers from the thin pool. You
can also add more storage to the thin pool.
To add more space to an LVM (logical volume management) thin pool, just add more storage to the
volume group containing the thin pool; this should automatically resolve any errors. If your configuration
uses loop devices, then stop the Engine daemon, grow the size of loop files and restart the daemon
to resolve the issue.
Example
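For example, to make the default 10% threshold explicit:
$ sudo dockerd --storage-opt dm.min_free_space=10%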
dm.xfs_nospace_max_retries
Specifies the maximum number of retries XFS should attempt to complete IO when ENOSPC (no
space) error is returned by underlying storage device.
By default, XFS retries infinitely for IO to finish, which can result in an unkillable process. To change
this behavior, one can set xfs_nospace_max_retries to, say, 0 and XFS will not retry IO after getting
ENOSPC and will shut down the filesystem.
Example
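For example, to stop XFS from retrying after ENOSPC:
$ sudo dockerd --storage-opt dm.xfs_nospace_max_retries=0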
dm.libdm_log_level
Specifies the maximum libdm log level that will be forwarded to the dockerd log (as specified by --
log-level). This option is primarily intended for debugging problems involving libdm. Using values
other than the defaults may cause false-positive warnings to be logged.
Values specified must fall within the range of valid libdm log levels. At the time of writing, the
following is the list of libdm log levels as well as their corresponding levels when output by dockerd.
libdm level    Value    dockerd log level
_LOG_FATAL     2        error
_LOG_ERR       3        error
_LOG_WARN      4        warn
_LOG_NOTICE    5        info
_LOG_INFO      6        info
_LOG_DEBUG     7        debug
Example
$ sudo dockerd \
--log-level debug \
--storage-opt dm.libdm_log_level=7
ZFS OPTIONS
zfs.fsname
Set zfs filesystem under which docker will create its own datasets. By default docker will pick up the
zfs filesystem where docker graph (/var/lib/docker) is located.
Example
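A sketch, assuming zroot/docker is an existing zfs dataset:
$ sudo dockerd -s zfs --storage-opt zfs.fsname=zroot/docker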
BTRFS OPTIONS
btrfs.min_space
Specifies the minimum size to use when creating the subvolume which is used for containers. If the user
uses disk quota for btrfs when creating or running a container with the --storage-opt size option, docker
should ensure the size cannot be smaller than btrfs.min_space.
Example
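A sketch with an illustrative minimum size:
$ sudo dockerd -s btrfs --storage-opt btrfs.min_space=10G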
OVERLAY2 OPTIONS
overlay2.override_kernel_check
Overrides the Linux kernel version check allowing overlay2. Support for specifying multiple lower
directories needed by overlay2 was added to the Linux kernel in 4.0.0. However, some older kernel
versions may be patched to add multiple lower directory support for OverlayFS. This option should
only be used after verifying this support exists in the kernel. Applying this option on a kernel without
this support will cause failures on mount.
overlay2.size
Sets the default max size of the container. It is supported only when the backing fs is xfs and
mounted with the pquota mount option. Under these conditions the user can pass any size less than the
backing fs size.
Example
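A sketch with an illustrative size limit:
$ sudo dockerd -s overlay2 --storage-opt overlay2.size=20G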
WINDOWSFILTER OPTIONS
size
Specifies the size to use when creating the sandbox which is used for containers. Defaults to 20G.
Example
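A sketch with an illustrative sandbox size (Windows daemon):
> dockerd --storage-opt size=40G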
lcow.globalmode
Specifies whether the daemon instantiates utility VM instances as required (recommended and the
default if omitted), or uses a single global utility VM (better performance, but has security implications
and is not recommended for production deployments).
Example
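A sketch (Windows daemon):
> dockerd --storage-opt lcow.globalmode=false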
lcow.kirdpath
Specifies the folder path to the location of a pair of kernel and initrd files used for booting a utility
VM. Defaults to %ProgramFiles%\Linux Containers.
Example
lcow.kernel
Specifies the filename of a kernel file located in the lcow.kirdpath path. Defaults to bootx64.efi.
Example
lcow.initrd
Specifies the filename of an initrd file located in the lcow.kirdpath path. Defaults to initrd.img.
Example
lcow.bootparameters
Specifies additional boot parameters for booting utility VMs when in kernel/initrd mode. Ignored if
the utility VM is booting from VHD. These settings are kernel specific.
Example
lcow.vhdx
Specifies a custom VHDX to boot a utility VM, as an alternate to kernel and initrd booting. Defaults
to uvm.vhdx under lcow.kirdpath.
Example
lcow.timeout
Example
lcow.sandboxsize
Specifies the size in GB to use when creating the sandbox which is used for containers. Defaults to
20. Cannot be less than 20.
Example
Runtimes can be registered with the daemon either via the configuration file or using the --add-
runtime command line argument.
{
"default-runtime": "runc",
"runtimes": {
"runc": {
"path": "runc"
},
"custom": {
"path": "/usr/local/bin/my-runc-replacement",
"runtimeArgs": [
"--debug"
]
}
}
}
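The same runtimes can also be registered on the command line (runtime arguments cannot be passed
this way, as the note below explains):
$ sudo dockerd --add-runtime runc=runc --add-runtime custom=/usr/local/bin/my-runc-replacement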
Note: Defining runtime arguments via the command line is not supported.
You can configure the runtime using options specified with the --exec-opt flag. All the flag’s options
have the native prefix. A single native.cgroupdriver option is available.
The native.cgroupdriver option specifies the management of the container’s cgroups. You can only
specify cgroupfs or systemd. If you specify systemd and it is not available, the system errors out. If
you omit the native.cgroupdriver option, cgroupfs is used.
This example sets the cgroupdriver to systemd:
$ sudo dockerd --exec-opt native.cgroupdriver=systemd
On Windows, the daemon also makes use of --exec-opt for a special purpose: specifying the
default container isolation technology. For example:
> dockerd --exec-opt isolation=hyperv
This makes hyperv the default isolation technology on Windows. If no isolation value is specified on
daemon start, the default on Windows client is hyperv, and on Windows server the default
is process.
To set the DNS search domain for all Docker containers, use:
$ sudo dockerd --dns-search example.com
Nondistributable artifacts
The --allow-nondistributable-artifacts option is useful when pushing images containing
nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the
images without connecting to another server.
Warning: Nondistributable artifacts typically have restrictions on how and where they can be
distributed and shared. Only use this feature to push artifacts to private registries and ensure that
you are in compliance with any terms that cover redistributing nondistributable artifacts.
Insecure registries
Docker considers a private registry either secure or insecure. In the rest of this section, registry is
used for private registry, and myregistry:5000 is a placeholder example for a private registry.
A secure registry uses TLS and a copy of its CA certificate is placed on the Docker host
at /etc/docker/certs.d/myregistry:5000/ca.crt. An insecure registry is either not using TLS (i.e.,
listening on plain text HTTP), or is using TLS with a CA certificate not known by the Docker daemon.
The latter can happen when the certificate was not found
under/etc/docker/certs.d/myregistry:5000/, or if the certificate verification failed (i.e., wrong CA).
By default, Docker assumes that all registries are secure, except for local registries (see below).
Communicating with an insecure registry is not possible if Docker assumes that registry is secure. In
order to communicate with an insecure registry, the Docker daemon requires --insecure-
registry in one of the following two forms:
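A sketch of the two forms, using the myregistry:5000 placeholder from above and an illustrative subnet:
--insecure-registry myregistry:5000 tells the Docker daemon that myregistry:5000 should be considered insecure.
--insecure-registry 10.1.0.0/16 tells the Docker daemon that all registries whose domain resolves to an IP
address in that subnet (CIDR syntax) should be considered insecure.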
The flag can be used multiple times to allow multiple registries to be marked as insecure.
If an insecure registry is not marked as insecure, docker pull, docker push, and docker search will
result in an error message prompting the user to either secure the registry or pass the --insecure-registry flag
to the Docker daemon as described above.
Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as
insecure as of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future.
LEGACY REGISTRIES
Starting with Docker 17.12, operations against registries supporting only the legacy v1 protocol are
no longer supported. Specifically, the daemon will not attempt push, pull and login to v1 registries.
The exception to this is search which can still be performed on v1 registries.
The disable-legacy-registry configuration option has been removed and, when used, will produce
an error on daemon startup.
This will only add the proxy and authentication to the Docker daemon’s requests - your docker
builds and running containers will need extra configuration to use the proxy.
Default ulimit settings
--default-ulimit allows you to set the default ulimit options to use for all containers. It takes the
same options as --ulimit for docker run. If these defaults are not set, ulimit settings will be
inherited, if not set on docker run, from the Docker daemon. Any --ulimit options passed to docker
run will overwrite these defaults.
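For example, to mirror the nofile defaults shown in the daemon.json sample later in this document:
$ sudo dockerd --default-ulimit nofile=64000:64000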
Be careful setting nproc with the ulimit flag as nproc is designed by Linux to set the maximum
number of processes available to a user, not to a container. For details please check
the run reference.
Node discovery
The --cluster-advertise option specifies the host:port or interface:port combination that this
particular daemon instance should use when advertising itself to the cluster. The daemon is reached
by remote hosts through this value. If you specify an interface, make sure it includes the IP address
of the actual Docker host. For Engine installation created through docker-machine, the interface is
typically eth1.
The daemon uses libkv to advertise the node within the cluster. Some key-value backends support
mutual TLS. The client TLS settings used by the daemon can be configured using the --
cluster-store-opt flag, specifying the paths to PEM encoded files. For example:
$ sudo dockerd \
--cluster-advertise 192.168.1.2:2376 \
--cluster-store etcd://192.168.1.2:2379 \
--cluster-store-opt kv.cacertfile=/path/to/ca.pem \
--cluster-store-opt kv.certfile=/path/to/cert.pem \
--cluster-store-opt kv.keyfile=/path/to/key.pem
Option         Description
kv.certfile    Specifies the path to a local file with a PEM encoded certificate. This
               certificate is used as the client cert for communication with the
               Key/Value store.
kv.keyfile     Specifies the path to a local file with a PEM encoded private key. This
               private key is used as the client key for communication with the
               Key/Value store.
kv.path        Specifies the path in the Key/Value store. If not configured, the default
               value is ‘docker/nodes’.
Access authorization
Docker’s access authorization can be extended by authorization plugins that your organization can
purchase or build themselves. You can install one or more authorization plugins when you start the
Docker daemon using the --authorization-plugin=PLUGIN_ID option.
$ sudo dockerd --authorization-plugin=plugin1 --authorization-plugin=plugin2,...
The PLUGIN_ID value is either the plugin’s name or a path to its specification file. The plugin’s
implementation determines whether you can specify a name or path. Consult with your Docker
administrator to get information about the plugins available to you.
Once a plugin is installed, requests made to the daemon through the command line or Docker’s
Engine API are allowed or denied by the plugin. If you have multiple plugins installed, each plugin, in
order, must allow the request for it to complete.
For information about how to create an authorization plugin, see authorization plugin section in the
Docker extend section of this documentation.
For details about how to use this feature, as well as limitations, see Isolate containers with a user
namespace.
Miscellaneous options
IP masquerading uses address translation to allow containers without a public IP to talk to other
machines on the Internet. This may interfere with some network topologies and can be disabled
with --ip-masq=false.
Docker supports softlinks for the Docker data directory (/var/lib/docker) and
for /var/lib/docker/tmp. The DOCKER_TMPDIR and the data directory can be set like this:
DOCKER_TMPDIR=/mnt/disk2/tmp /usr/local/bin/dockerd -D -g /var/lib/docker -H unix://
> /var/lib/docker-machine/docker.log 2>&1
# or
export DOCKER_TMPDIR=/mnt/disk2/tmp
/usr/local/bin/dockerd -D -g /var/lib/docker -H unix:// > /var/lib/docker-
machine/docker.log 2>&1
The --cgroup-parent option allows you to set the default cgroup parent to use for containers. If this
option is not set, it defaults to /docker for fs cgroup driver and system.slice for systemd cgroup
driver.
If the cgroup has a leading forward slash (/), the cgroup is created under the root cgroup, otherwise
the cgroup is created under the daemon cgroup.
Assuming the daemon is running in cgroup daemoncgroup, --cgroup-parent=/foobar creates a
cgroup in /sys/fs/cgroup/memory/foobar, whereas using --cgroup-parent=foobar creates the
cgroup in /sys/fs/cgroup/memory/daemoncgroup/foobar.
The systemd cgroup driver has different rules for --cgroup-parent. Systemd represents hierarchy by
slice and the name of the slice encodes the location in the tree. So --cgroup-parent for systemd
cgroups should be a slice name. A name can consist of a dash-separated series of names, which
describes the path to the slice from the root slice. For example, --cgroup-parent=user-a-
b.slice means the memory cgroup for the container is created
in /sys/fs/cgroup/memory/user.slice/user-a.slice/user-a-b.slice/docker-<id>.scope.
This setting can also be set per container, using the --cgroup-parent option on docker
create and docker run, and takes precedence over the --cgroup-parent option on the daemon.
DAEMON METRICS
The --metrics-addr option takes a tcp address to serve the metrics API. This feature is still
experimental, therefore, the daemon must be running in experimental mode for this feature to work.
To serve the metrics API on localhost:9323 you would specify --metrics-addr 127.0.0.1:9323,
allowing you to make requests on the API at 127.0.0.1:9323/metrics to receive metrics in
the prometheus format.
Port 9323 is the default port associated with Docker metrics to avoid collisions with other prometheus
exporters and services.
If you are running a prometheus server you can add this address to your scrape configs to have
prometheus collect metrics on Docker. For more information, see the prometheus website.
scrape_configs:
- job_name: 'docker'
static_configs:
- targets: ['127.0.0.1:9323']
Please note that this feature is still marked as experimental, as metrics and metric names could
change while the feature is experimental. Please provide feedback on what you would like to
see collected in the API.
The --node-generic-resources option takes a list of key-value pair (key=value) that allows you to
advertise user defined resources in a swarm cluster.
The current expected use case is to advertise NVIDIA GPUs so that services requesting NVIDIA-
GPU=[0-16] can land on a node that has enough GPUs for the task to run.
Example of usage:
{
"node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"]
}
On Linux
The default location of the configuration file on Linux is /etc/docker/daemon.json. The --config-
file flag can be used to specify a non-default location.
{
"authorization-plugins": [],
"data-root": "",
"dns": [],
"dns-opts": [],
"dns-search": [],
"exec-opts": [],
"exec-root": "",
"experimental": false,
"features": {},
"storage-driver": "",
"storage-opts": [],
"labels": [],
"live-restore": true,
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file":"5",
"labels": "somelabel",
"env": "os,customer"
},
"mtu": 0,
"pidfile": "",
"cluster-store": "",
"cluster-store-opts": {},
"cluster-advertise": "",
"max-concurrent-downloads": 3,
"max-concurrent-uploads": 5,
"default-shm-size": "64M",
"shutdown-timeout": 15,
"debug": true,
"hosts": [],
"log-level": "",
"tls": true,
"tlsverify": true,
"tlscacert": "",
"tlscert": "",
"tlskey": "",
"swarm-default-advertise-addr": "",
"api-cors-header": "",
"selinux-enabled": false,
"userns-remap": "",
"group": "",
"cgroup-parent": "",
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 64000,
"Soft": 64000
}
},
"init": false,
"init-path": "/usr/libexec/docker-init",
"ipv6": false,
"iptables": false,
"ip-forward": false,
"ip-masq": false,
"userland-proxy": false,
"userland-proxy-path": "/usr/libexec/docker-proxy",
"ip": "0.0.0.0",
"bridge": "",
"bip": "",
"fixed-cidr": "",
"fixed-cidr-v6": "",
"default-gateway": "",
"default-gateway-v6": "",
"icc": false,
"raw-logs": false,
"allow-nondistributable-artifacts": [],
"registry-mirrors": [],
"seccomp-profile": "",
"insecure-registries": [],
"no-new-privileges": false,
"default-runtime": "runc",
"oom-score-adjust": -500,
"node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"],
"runtimes": {
"cc-runtime": {
"path": "/usr/bin/cc-runtime"
},
"custom": {
"path": "/usr/local/bin/my-runc-replacement",
"runtimeArgs": [
"--debug"
]
}
},
"default-address-pools":[
{"base":"172.80.0.0/16","size":24},
{"base":"172.90.0.0/16","size":24}
]
}
Note: You cannot set options in daemon.json that have already been set on daemon startup as a
flag. On systems that use systemd to start the Docker daemon, -H is already set, so you cannot use
the hosts key in daemon.json to add listening addresses. See
https://docs.docker.com/engine/admin/systemd/#custom-docker-daemon-options for how to
accomplish this task with a systemd drop-in file.
On Windows
{
"authorization-plugins": [],
"data-root": "",
"dns": [],
"dns-opts": [],
"dns-search": [],
"exec-opts": [],
"experimental": false,
"features":{},
"storage-driver": "",
"storage-opts": [],
"labels": [],
"log-driver": "",
"mtu": 0,
"pidfile": "",
"cluster-store": "",
"cluster-advertise": "",
"max-concurrent-downloads": 3,
"max-concurrent-uploads": 5,
"shutdown-timeout": 15,
"debug": true,
"hosts": [],
"log-level": "",
"tlsverify": true,
"tlscacert": "",
"tlscert": "",
"tlskey": "",
"swarm-default-advertise-addr": "",
"group": "",
"default-ulimits": {},
"bridge": "",
"fixed-cidr": "",
"raw-logs": false,
"allow-nondistributable-artifacts": [],
"registry-mirrors": [],
"insecure-registries": []
}
FEATURE OPTIONS
The optional field features in daemon.json allows users to enable or disable specific daemon
features. For example, {"features":{"buildkit": true}} enables buildkit as the default docker
image builder.
Some options can be reconfigured when the daemon is running without requiring to restart the
process. We use the SIGHUP signal in Linux to reload, and a global event in Windows with the
key Global\docker-daemon-config-$PID. The options can be modified in the configuration file, but the
daemon still checks for conflicts with the provided flags. The daemon fails to reconfigure itself if there are
conflicts, but it won’t stop execution.
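For example, on Linux you can trigger a reload with SIGHUP (a sketch; the pidof utility is assumed to be
available on the host):
$ sudo kill -SIGHUP $(pidof dockerd)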
This section describes how to run multiple Docker daemons on a single host. To run multiple
daemons, you must configure each daemon so that it does not conflict with other daemons on the
same host. You can set these options either by providing them as flags, or by using a daemon
configuration file.
When your daemons use different values for these flags, you can run them on the same host without
any problems. It is very important to properly understand the meaning of those options and to use
them correctly.
The -b, --bridge= flag is set to docker0 as default bridge network. It is created automatically
when you install Docker. If you are not using the default, you must create and configure the
bridge manually or just set it to ‘none’: --bridge=none
--exec-root is the path where the container state is stored. The default value
is /var/run/docker. Specify the path for your running daemon here.
--data-root is the path where persisted data such as images, volumes, and cluster state are
stored. The default value is /var/lib/docker. To avoid any conflict with other daemons, set
this parameter separately for each daemon.
-p, --pidfile=/var/run/docker.pid is the path where the process ID of the daemon is
stored. Specify the path for your pid file here.
--host=[] specifies where the Docker daemon will listen for client connections. If
unspecified, it defaults to /var/run/docker.sock.
--iptables=false prevents the Docker daemon from adding iptables rules. If multiple
daemons manage iptables rules, they may overwrite rules set by another daemon. Be aware
that disabling this option requires you to manually add iptables rules to expose container
ports. If you prevent Docker from adding iptables rules, Docker will also not add IP
masquerading rules, even if you set --ip-masq to true. Without IP masquerading rules,
Docker containers will not be able to connect to external hosts or the internet when using a
network other than the default bridge.
--config-file=/etc/docker/daemon.json is the path where configuration file is stored. You
can use it instead of daemon flags. Specify the path for each daemon.
--tls* Docker daemon supports --tlsverify mode that enforces encrypted and
authenticated remote connections. The --tls* options enable use of specific certificates for
individual daemons.
Example script for a separate “bootstrap” instance of the Docker daemon without network:
$ sudo dockerd \
-H unix:///var/run/docker-bootstrap.sock \
-p /var/run/docker-bootstrap.pid \
--iptables=false \
--ip-masq=false \
--bridge=none \
--data-root=/var/lib/docker-bootstrap \
--exec-root=/var/run/docker-bootstrap
active
config
create
env
help
inspect
ip
kill
ls
mount
provision
regenerate-certs
restart
rm
scp
ssh
start
status
stop
upgrade
url
Docker Machine comes with command completion for the bash and zsh shell.
On a Mac:
sudo curl -L https://raw.githubusercontent.com/docker/machine/v0.16.0/contrib/completion/bash/docker-machine.bash -o `brew --prefix`/etc/bash_completion.d/docker-machine
On Linux:
sudo curl -L https://raw.githubusercontent.com/docker/machine/v0.16.0/contrib/completion/bash/docker-machine.bash -o /etc/bash_completion.d/docker-machine
Zsh
Place the completion script in a completion file within the ZSH configuration directory, such
as ~/.zsh/completion/.
mkdir -p ~/.zsh/completion
curl -L https://raw.githubusercontent.com/docker/machine/v0.16.0/contrib/completion/zsh/_docker-machine > ~/.zsh/completion/_docker-machine
Include the directory in your $fpath by adding a line like the following to the ~/.zshrc configuration file.
fpath=(~/.zsh/completion $fpath)
exec $SHELL -l
Available completions
Depending on what you typed on the command line so far, it completes:
docker-machine active
Estimated reading time: 1 minute
$ echo $DOCKER_HOST
tcp://203.0.113.81:2376
$ docker-machine active
staging
docker-machine config
Estimated reading time: 1 minute
Description:
Argument is a machine name.
Options:
For example:
docker-machine create
Estimated reading time: 9 minutes
Create a machine. Requires the --driver flag to indicate which provider (VirtualBox, DigitalOcean,
AWS, etc.) the machine should be created on, and an argument to indicate the name of the created
machine.
Looking for the full list of available drivers?
For a full list of drivers that work with docker-machine create and information on how to use them,
see Machine drivers.
Example
Here is an example of using the --virtualbox driver to create a machine called dev.
$ docker-machine create --driver virtualbox dev
Creating CA: /home/username/.docker/machine/certs/ca.pem
Creating client certificate: /home/username/.docker/machine/certs/cert.pem
Image cache does not exist, creating it at /home/username/.docker/machine/cache...
No default boot2docker iso found locally, downloading the latest release...
Downloading
https://github.com/boot2docker/boot2docker/releases/download/v1.6.2/boot2docker.iso
to /home/username/.docker/machine/cache/boot2docker.iso...
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env dev
Create a machine.
Run 'docker-machine create --driver name' to include the create flags for that driver
in the help text.
Options:
--driver, -d "none"
Driver to create machine with.
--engine-install-url "https://get.docker.com"
Custom URL to use for engine installation [$MACHINE_DOCKER_INSTALL_URL]
--engine-opt [--engine-opt option --engine-opt option]
Specify arbitrary flags to include with the created engine in the form flag=value
--engine-insecure-registry [--engine-insecure-registry option --engine-insecure-
registry option] Specify insecure registries to allow with the created engine
--engine-registry-mirror [--engine-registry-mirror option --engine-registry-mirror
option] Specify registry mirrors to use [$ENGINE_REGISTRY_MIRROR]
--engine-label [--engine-label option --engine-label option]
Specify labels for the created engine
--engine-storage-driver
Specify a storage driver to use with the engine
--engine-env [--engine-env option --engine-env option]
Specify environment variables to set in the engine
--swarm
Configure Machine with Swarm
--swarm-image "swarm:latest"
Specify Docker image to use for Swarm [$MACHINE_SWARM_IMAGE]
--swarm-master
Configure Machine to be a Swarm master
--swarm-discovery
Discovery service to use with Swarm
--swarm-strategy "spread"
Define a default scheduling strategy for Swarm
--swarm-opt [--swarm-opt option --swarm-opt option]
Define arbitrary flags for swarm
--swarm-host "tcp://0.0.0.0:3376"
ip/socket to listen on for Swarm master
--swarm-addr
addr to advertise for Swarm (default: detect and use the machine IP)
--swarm-experimental
Enable Swarm experimental features
Additionally, drivers can specify flags that Machine can accept as part of their plugin code. These
allow users to customize the provider-specific parameters of the created machine, such as size (--
amazonec2-instance-type m1.medium), geographical region (--amazonec2-region us-west-1), and so
on.
To see the provider-specific flags, simply pass a value for --driver when invoking the create help
text.
$ docker-machine create --driver virtualbox --help
Usage: docker-machine create [OPTIONS] [arg...]
Create a machine.
Run 'docker-machine create --driver name' to include the create flags for that driver
in the help text.
Options:
--driver, -d "none"
Driver to create machine with.
--engine-env [--engine-env option --engine-env option]
Specify environment variables to set in the engine
--engine-insecure-registry [--engine-insecure-registry option --engine-insecure-
registry option] Specify insecure registries to allow with the created engine
--engine-install-url "https://get.docker.com"
Custom URL to use for engine installation [$MACHINE_DOCKER_INSTALL_URL]
--engine-label [--engine-label option --engine-label option]
Specify labels for the created engine
--engine-opt [--engine-opt option --engine-opt option]
Specify arbitrary flags to include with the created engine in the form flag=value
--engine-registry-mirror [--engine-registry-mirror option --engine-registry-mirror
option] Specify registry mirrors to use [$ENGINE_REGISTRY_MIRROR]
--engine-storage-driver
Specify a storage driver to use with the engine
--swarm
Configure Machine with Swarm
--swarm-addr
addr to advertise for Swarm (default: detect and use the machine IP)
--swarm-discovery
Discovery service to use with Swarm
--swarm-experimental
Enable Swarm experimental features
--swarm-host "tcp://0.0.0.0:3376"
ip/socket to listen on for Swarm master
--swarm-image "swarm:latest"
Specify Docker image to use for Swarm [$MACHINE_SWARM_IMAGE]
--swarm-master
Configure Machine to be a Swarm master
--swarm-opt [--swarm-opt option --swarm-opt option]
Define arbitrary flags for swarm
--swarm-strategy "spread"
Define a default scheduling strategy for Swarm
--virtualbox-boot2docker-url
The URL of the boot2docker image. Defaults to the latest available version
[$VIRTUALBOX_BOOT2DOCKER_URL]
--virtualbox-cpu-count "1"
number of CPUs for the machine (-1 to use the number of CPUs available)
[$VIRTUALBOX_CPU_COUNT]
--virtualbox-disk-size "20000"
Size of disk for host in MB [$VIRTUALBOX_DISK_SIZE]
--virtualbox-host-dns-resolver
Use the host DNS resolver [$VIRTUALBOX_HOST_DNS_RESOLVER]
--virtualbox-dns-proxy
Proxy all DNS requests to the host [$VIRTUALBOX_DNS_PROXY]
--virtualbox-hostonly-cidr "192.168.99.1/24"
Specify the Host Only CIDR [$VIRTUALBOX_HOSTONLY_CIDR]
--virtualbox-hostonly-nicpromisc "deny"
Specify the Host Only Network Adapter Promiscuous Mode
[$VIRTUALBOX_HOSTONLY_NIC_PROMISC]
--virtualbox-hostonly-nictype "82540EM"
Specify the Host Only Network Adapter Type [$VIRTUALBOX_HOSTONLY_NIC_TYPE]
--virtualbox-import-boot2docker-vm
The name of a Boot2Docker VM to import
--virtualbox-memory "1024"
Size of memory for host in MB [$VIRTUALBOX_MEMORY_SIZE]
--virtualbox-no-share
Disable the mount of your home directory
You may notice that some flags specify environment variables that they are associated with as well
(located to the far left hand side of the row). If these environment variables are set when docker-
machine create is invoked, Docker Machine uses them for the default value of the flag.
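A sketch of a create command matching the description below (the machine name foobarmachine and
the registry registry.myco.com come from the surrounding text; flag values are illustrative):
$ docker-machine create -d virtualbox \
    --engine-label foo=bar \
    --engine-label spam=eggs \
    --engine-storage-driver overlay \
    --engine-insecure-registry registry.myco.com \
    foobarmachine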
This creates a virtual machine running locally in VirtualBox which uses the overlay storage backend,
has the key-value pairs foo=bar and spam=eggs as labels on the engine, and allows pushing / pulling
from the insecure registry located at registry.myco.com. You can verify much of this by inspecting
the output of docker info:
$ eval $(docker-machine env foobarmachine)
$ docker info
Containers: 0
Images: 0
Storage Driver: overlay
...
Name: foobarmachine
...
Labels:
foo=bar
spam=eggs
provider=virtualbox
The supported flags are as follows:
If the engine supports specifying the flag multiple times (such as with --label), then so does Docker
Machine.
In addition to this subset of daemon flags which are directly supported, Docker Machine also
supports an additional flag, --engine-opt, which can be used to specify arbitrary daemon options
with the syntax --engine-opt flagname=value. For example, to specify that the daemon should
use 8.8.8.8 as the DNS server for all containers, and always use the syslog log driver you could run
the following create command:
$ docker-machine create -d virtualbox \
--engine-opt dns=8.8.8.8 \
--engine-opt log-driver=syslog \
gdns
Additionally, Docker Machine supports a flag, --engine-env, which can be used to specify arbitrary
environment variables to be set within the engine with the syntax --engine-env name=value. For
example, to specify that the engine should use example.comas the proxy server, you could run the
following create command:
$ docker-machine create -d virtualbox \
--engine-env HTTP_PROXY=http://example.com:8080 \
--engine-env HTTPS_PROXY=https://example.com:8080 \
--engine-env NO_PROXY=example2.com \
proxbox
If you’re not sure how to configure these options, it is best to not specify configuration at all. Docker
Machine chooses sensible defaults for you and you don’t need to worry about it.
Example create:
This sets the swarm scheduling strategy to “binpack” (pack in containers as tightly as possible per
host instead of spreading them out), and the “heartbeat” interval to 5 seconds.
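A sketch of such a create command (the discovery token and machine name are placeholders):
$ docker-machine create -d virtualbox \
    --swarm \
    --swarm-master \
    --swarm-discovery token://<token> \
    --swarm-strategy binpack \
    --swarm-opt heartbeat=5s \
    swarm-master-node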
Pre-create check
Many drivers require a certain set of conditions to be in place before machines can be created. For
instance, VirtualBox needs to be installed before the virtualbox driver can be used. For this reason,
Docker Machine has a “pre-create check” which is specified at the driver level.
If this pre-create check succeeds, Docker Machine proceeds with the creation as normal. If the pre-
create check fails, the Docker Machine process exits with status code 3 to indicate that the source of
the non-zero exit was the pre-create check failing.
docker-machine env
Estimated reading time: 3 minutes
Set environment variables to dictate that docker should run a command against a particular
machine.
$ docker-machine env --help
Display the commands to set up the environment for the Docker client
Description:
Argument is a machine name.
Options:
docker-machine env machinename prints out export commands which can be run in a subshell.
Running docker-machine env -u prints unset commands which reverse this effect.
$ env | grep DOCKER
$ eval "$(docker-machine env dev)"
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2376
DOCKER_CERT_PATH=/Users/nathanleclaire/.docker/machines/.client
DOCKER_TLS_VERIFY=1
DOCKER_MACHINE_NAME=dev
$ # If you run a docker command, now it runs against that host.
$ eval "$(docker-machine env -u)"
$ env | grep DOCKER
$ # The environment variables have been unset.
The output described above is intended for the shells bash and zsh (if you’re not sure which shell
you’re using, there’s a very good possibility that it’s bash). However, these are not the only shells
which Docker Machine supports. Docker Machine detects the shells available in your environment
and lists them. Docker supports bash, cmd, powershell, and emacs.
If you are using fish and the SHELL environment variable is correctly set to the path where fish is
located, docker-machine env name prints out the values in the format which fishexpects:
set -x DOCKER_TLS_VERIFY 1;
set -x DOCKER_CERT_PATH "/Users/nathanleclaire/.docker/machine/machines/overlay";
set -x DOCKER_HOST tcp://192.168.99.102:2376;
set -x DOCKER_MACHINE_NAME overlay
# Run this command to configure your shell:
# eval "$(docker-machine env overlay)"
If you are on Windows and using either PowerShell or cmd.exe, docker-machine env
should detect your shell automatically. If the automatic detection does not work, you can still
override it using the --shell flag for docker-machine env.
For PowerShell:
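A sketch mirroring the cmd.exe example below (the host address and paths are illustrative):
$ docker-machine.exe env --shell powershell dev
$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://192.168.99.101:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\captain\.docker\machine\machines\dev"
$Env:DOCKER_MACHINE_NAME = "dev"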
For cmd.exe:
$ docker-machine.exe env --shell cmd dev
set DOCKER_TLS_VERIFY=1
set DOCKER_HOST=tcp://192.168.99.101:2376
set DOCKER_CERT_PATH=C:\Users\captain\.docker\machine\machines\dev
set DOCKER_MACHINE_NAME=dev
# Run this command to configure your shell: copy and paste the above values into your
command prompt
Tip: See also, how to unset environment variables in the current shell.
Excluding the created machine from proxies
The env command supports a --no-proxy flag which ensures that the created machine’s IP address
is added to the NO_PROXY/no_proxy environment variable.
This is useful when using docker-machine with a local VM provider, such
as virtualbox or vmwarefusion, in network environments where an HTTP proxy is required for
internet access.
$ docker-machine env --no-proxy default
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.104:2376"
export DOCKER_CERT_PATH="/Users/databus23/.docker/machine/certs"
export DOCKER_MACHINE_NAME="default"
export NO_PROXY="192.168.99.104"
# Run this command to configure your shell:
# eval "$(docker-machine env default)"
You may also want to visit the documentation on setting HTTP_PROXY for the created daemon using
the --engine-env flag for docker-machine create.
docker-machine help
Estimated reading time: 1 minute
For example:
Options:
docker-machine inspect
Estimated reading time: 1 minute
Description:
Argument is a machine name.
Options:
--format, -f Format the output using the given go template.
By default, this renders information about a machine as JSON. If a format is specified, the given
template is executed for each result.
In addition to the text/template syntax, there are some additional functions, json and prettyjson,
which can be used to format the output as JSON (documented below).
Examples
List all the details of a machine:
{
"DriverName": "virtualbox",
"Driver": {
"MachineName": "docker-host-
128be8d287b2028316c0ad5714b90bcfc11f998056f2f790f7c1f43f3d1e6eda",
"SSHPort": 55834,
"Memory": 1024,
"DiskSize": 20000,
"Boot2DockerURL": "",
"IPAddress": "192.168.5.99"
},
...
}
For the most part, you can pick out any field from the JSON in a fairly straightforward manner.
Formatting details:
If you want a subset of information formatted as JSON, you can use the json function in the
template.
$ docker-machine inspect --format='{{json .Driver}}' dev-fusion
{"Boot2DockerURL":"","CPUS":8,"CPUs":8,"CaCertPath":"/Users/hairyhenderson/.docker/ma
chine/certs/ca.pem","DiskSize":20000,"IPAddress":"172.16.62.129","ISO":"/Users/hairyh
enderson/.docker/machine/machines/dev-fusion/boot2docker-1.5.0-
GH747.iso","MachineName":"dev-
fusion","Memory":1024,"PrivateKeyPath":"/Users/hairyhenderson/.docker/machine/certs/c
a-
key.pem","SSHPort":22,"SSHUser":"docker","SwarmDiscovery":"","SwarmHost":"tcp://0.0.0
.0:3376","SwarmMaster":false}
While this is usable, it’s not very human-readable. For this reason, there is prettyjson:
$ docker-machine inspect --format='{{prettyjson .Driver}}' dev-fusion
{
"Boot2DockerURL": "",
"CPUS": 8,
"CPUs": 8,
"CaCertPath": "/Users/hairyhenderson/.docker/machine/certs/ca.pem",
"DiskSize": 20000,
"IPAddress": "172.16.62.129",
"ISO": "/Users/hairyhenderson/.docker/machine/machines/dev-fusion/boot2docker-
1.5.0-GH747.iso",
"MachineName": "dev-fusion",
"Memory": 1024,
"PrivateKeyPath": "/Users/hairyhenderson/.docker/machine/certs/ca-key.pem",
"SSHPort": 22,
"SSHUser": "docker",
"SwarmDiscovery": "",
"SwarmHost": "tcp://0.0.0.0:3376",
"SwarmMaster": false
}
docker-machine ip
$ docker-machine ip dev
192.168.99.104
docker-machine kill
Description:
Argument(s) are one or more machine names.
For example:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL
dev * virtualbox Running tcp://192.168.99.104:2376
$ docker-machine kill dev
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL
dev * virtualbox Stopped
docker-machine ls
List machines
Options:
Timeout
The ls command tries to reach each host in parallel. If a given host does not answer in less than 10
seconds, the ls command states that this host is in Timeout state. In some circumstances (poor
connection, high load, or while troubleshooting), you may want to increase or decrease this value.
You can use the -t flag for this purpose with a numerical value in seconds.
Example
$ docker-machine ls -t 12
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
default   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.9.1
Filtering
The filtering flag (--filter) format is a key=value pair. If there is more than one filter, then pass
multiple flags. For example: --filter "foo=bar" --filter "bif=baz"
Examples
$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
dev    -        virtualbox   Stopped
foo0   -        virtualbox   Running   tcp://192.168.99.105:2376           v1.9.1
foo1   -        virtualbox   Running   tcp://192.168.99.106:2376           v1.9.1
foo2   *        virtualbox   Running   tcp://192.168.99.107:2376           v1.9.1
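For instance, to show only the running machines from the listing above (state is one of the supported filter keys):
$ docker-machine ls --filter state=Running
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
foo0   -        virtualbox   Running   tcp://192.168.99.105:2376           v1.9.1
foo1   -        virtualbox   Running   tcp://192.168.99.106:2376           v1.9.1
foo2   *        virtualbox   Running   tcp://192.168.99.107:2376           v1.9.1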
Formatting
The formatting option (--format) pretty-prints machines using a Go template.
When using the --format option, the ls command either outputs the data exactly as the template
declares or, when using the table directive, includes column headers as well.
The following example uses a template without headers and outputs the Name and Driver entries separated by a colon for all running machines:
$ docker-machine ls --format "{{.Name}}: {{.DriverName}}"
default: virtualbox
ec2: amazonec2
To list all machine names with their driver in a table format you can use:
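A minimal illustration using the table directive mentioned above (machine names taken from the previous example) might be:
$ docker-machine ls --format "table {{.Name}} {{.DriverName}}"
NAME      DRIVER
default   virtualbox
ec2       amazonec2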
docker-machine mount
Example
Consider the following example:
$ mkdir foo
$ docker-machine ssh dev mkdir foo
$ docker-machine mount dev:/home/docker/foo foo
$ touch foo/bar
$ docker-machine ssh dev ls foo
bar
Now you can use the directory on the machine, for example to mount it into containers. Any changes made in the local directory are reflected in the machine too.
The files are actually transferred using sftp (over an SSH connection), so the sftp program needs to be present on the machine, but it usually is.
To unmount the directory again, use the same options but add the -u flag. You can also call the fusermount (or fusermount -u) command directly.
$ docker-machine mount -u dev:/home/docker/foo foo
$ rmdir foo
Files are actually stored on the machine, not on the host. So make sure to copy any files you want to keep before removing it!
docker-machine provision
1. Set the hostname on the instance to the name Machine addresses it by, such as default.
2. Install Docker if it is not present already.
3. Generate a set of certificates (usually with the default, self-signed CA) and configure the
daemon to accept connections over TLS.
4. Copy the generated certificates to the server and local config directory.
5. Configure the Docker Engine according to the options specified at create time.
6. Configure and activate Swarm if applicable.
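A minimal invocation, using the dev machine from the earlier examples on this page, might look like this:
$ docker-machine provision dev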
docker-machine regenerate-certs
Description:
Argument(s) are one or more machine names.
Options:
Regenerate TLS certificates and update the machine with new certs.
For example:
If your certificates have expired, you’ll need to regenerate the client certs as well using the --client-certs option:
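A plausible pair of invocations, again using the dev machine from the earlier examples:
$ docker-machine regenerate-certs dev
$ docker-machine regenerate-certs --client-certs dev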
docker-machine restart
Restart a machine
Description:
Argument(s) are one or more machine names.
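For example, restarting the dev machine used in earlier examples (the exact output depends on the driver):
$ docker-machine restart dev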
docker-machine rm
Remove a machine. This removes the local reference and deletes it on the cloud provider or
virtualization management platform.
$ docker-machine rm --help
Remove a machine
Description:
Argument(s) are one or more machine names.
Options:
Examples
$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
bar - virtualbox Running tcp://192.168.99.101:2376 v1.9.1
baz - virtualbox Running tcp://192.168.99.103:2376 v1.9.1
foo - virtualbox Running tcp://192.168.99.100:2376 v1.9.1
qix - virtualbox Running tcp://192.168.99.102:2376 v1.9.1
$ docker-machine rm baz
About to remove baz
Are you sure? (y/n): y
Successfully removed baz
$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
bar - virtualbox Running tcp://192.168.99.101:2376 v1.9.1
foo - virtualbox Running tcp://192.168.99.100:2376 v1.9.1
qix - virtualbox Running tcp://192.168.99.102:2376 v1.9.1
$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
foo - virtualbox Running tcp://192.168.99.100:2376 v1.9.1
$ docker-machine rm -y foo
About to remove foo
Successfully removed foo
docker-machine scp
Copy files from your local host to a machine, from machine to machine, or from a machine to your
local host using scp.
The notation is machinename:/path/to/files for the arguments; in the host machine’s case, you
don’t need to specify the name, just the path.
Example
Consider the following example:
$ cat foo.txt
cat: foo.txt: No such file or directory
$ docker-machine ssh dev pwd
/home/docker
$ docker-machine ssh dev 'echo A file created remotely! >foo.txt'
$ docker-machine scp dev:/home/docker/foo.txt .
foo.txt                                       100%   28     0.0KB/s   00:00
$ cat foo.txt
A file created remotely!
Just like scp, docker-machine scp has a -r flag for copying files recursively.
In the case of transferring files from machine to machine, they go through the local host’s filesystem
first (using scp’s -3 flag).
When transferring large files or updating directories with lots of files, you can use the -d flag, which uses rsync to transfer deltas instead of transferring all of the files.
When transferring directories and not just files, avoid rsync surprises by using trailing slashes on
both the source and destination. For example:
$ mkdir -p bar
$ touch bar/baz
$ docker-machine scp -r -d bar/ dev:/home/docker/bar/
$ docker-machine ssh dev ls bar
baz
version: "3.1"
services:
  webapp:
    image: alpine
    command: cat /app/root.php
    volumes:
      - "/home/ubuntu/webapp:/app"
docker-machine ssh
You can also specify commands to run remotely by appending them directly to the docker-machine ssh command, much like the regular ssh program works:
$ docker-machine ssh dev free
If you are using the “external” SSH type as detailed in the next section, you can include additional
arguments to pass through to the ssh binary in the generated command (unless they conflict with
any of the default arguments for the command generated by Docker Machine). For instance, the following command forwards port 8080 from the default machine to localhost on your host computer:
$ docker-machine ssh default -L 8080:localhost:8080
There are some variations in behavior between the two methods, so report any issues or
inconsistencies if you come across them.
docker-machine start
Start a machine
Description:
Argument(s) are one or more machine names.
For example:
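A minimal example, using the dev machine from earlier sections:
$ docker-machine start dev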
docker-machine status
Description:
Argument is a machine name.
For example:
$ docker-machine status dev
Running
docker-machine stop
Description:
Argument(s) are one or more machine names.
For example:
$ docker-machine ls
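A plausible completion of this example, mirroring the kill example earlier on this page:
NAME   ACTIVE   DRIVER       STATE     URL
dev    *        virtualbox   Running   tcp://192.168.99.104:2376
$ docker-machine stop dev
$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL
dev    *        virtualbox   Stopped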
docker-machine upgrade
Upgrade a machine to the latest version of Docker. How this upgrade happens depends on the
underlying distribution used on the created instance.
For example, if the machine uses Ubuntu as the underlying operating system, it runs a command similar to sudo apt-get upgrade docker-engine, because Machine expects Ubuntu machines it manages to use this package. As another example, if the machine uses boot2docker for its OS, this command downloads the latest boot2docker ISO and replaces the machine’s existing ISO with the latest.
$ docker-machine upgrade default
docker-machine url
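For example, printing the URL of the dev machine shown in earlier listings:
$ docker-machine url dev
tcp://192.168.99.104:2376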
Docker Swarm
Docker Swarm overview
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users should use integrated Swarm mode; a good place to start is Getting started with swarm mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone Docker Swarm is not integrated into the Docker Engine API and CLI commands.
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual
Docker host. Because Docker Swarm serves the standard Docker API, any tool that already
communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
Supported tools include, but are not limited to, the following:
Dokku
Docker Compose
Docker Machine
Jenkins
Like other Docker projects, Docker Swarm follows the “swap, plug, and play” principle. As initial
development settles, an API develops to enable pluggable backends. This means you can swap out
the scheduling backend Docker Swarm uses out-of-the-box with a backend you prefer. Swarm’s
swappable design provides a smooth out-of-box experience for most use cases, and allows large-
scale production deployments to swap for more powerful backends, like Mesos.
To deploy a swarm manually, you must:
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your cluster
As a starting point, the manual method is best suited for experienced administrators or programmers
contributing to Docker Swarm. The alternative is to use docker-machine to install a cluster.
Using Docker Machine, you can quickly install a Docker Swarm on cloud providers or inside your
own data center. If you have VirtualBox installed on your local machine, you can quickly build and
explore Docker Swarm in your local environment. This method automatically generates a certificate
to secure your cluster.
Using Docker Machine is the best method for users getting started with Swarm for the first time. To
try the recommended method of getting started, see Get Started with Docker Swarm.
If you are interested in manually installing or interested in contributing, see Build a swarm cluster for
production.
Discovery services
To dynamically configure and manage the services in your containers, you use a discovery backend
with Docker Swarm. For information on which backends are available, see the Discovery
service documentation.
Advanced scheduling
To learn more about advanced scheduling, see the strategies and filters documents.
Swarm API
The Docker Swarm API is compatible with the Docker remote API, and extends it with some new
endpoints.
Getting help
Docker Swarm is still in its infancy and under active development. If you need help, would like to
contribute, or simply want to talk about the project with like-minded individuals, we have a number of
open channels for communication.
To report bugs or file feature requests, use the issue tracker on Github.
To talk about the project with people in real time, join the #docker-swarm channel on IRC.
For more information and resources, visit the Getting Help project page.
If you are using Mac or Windows, make sure you have a Docker Engine host running and that you have pointed your terminal environment to it with the Docker Machine commands. If you aren’t sure, you can verify:
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.9.1
The easiest command to start with is asking the image for help. This command shows all the options that are available with the image.
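The options and commands below are the help output of the swarm image; assuming the swarm image from Docker Hub, the command that produces them would look like this:
$ docker run swarm --help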
Options:
--debug debug mode [$DEBUG]
--log-level, -l "info" Log level (options: debug, info, warn, error,
fatal, panic)
--help, -h show help
--version, -v print the version
Commands:
create, c Create a cluster
list, l List nodes in a cluster
manage, m Manage a docker cluster
join, j join a docker cluster
help, h Shows a list of commands or help for one command
In this example, the swarm image did not exist on the Engine host, so the Engine downloaded
it. After it downloaded, the image executed the help subcommand to display the help text.
After displaying the help, the swarm image exits and returns you to your terminal command
line.
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
Swarm is no longer running. The swarm image exits after you issue it a command.
You don’t need to install a binary on the system to use the image.
The single command docker run gets and runs the most recent version of the image every
time.
The container isolates Swarm from your host environment. You don’t need to perform or
maintain shell paths and environments.
Running the Swarm image is the recommended way to create and manage your Swarm cluster. All
of Docker’s documentation and tutorials use this method.
You use Docker Swarm to host and schedule a cluster of Docker containers. This section introduces
you to Docker Swarm by teaching you how to create a swarm on your local machine using Docker
Machine and VirtualBox.
Prerequisites
Make sure your local system has VirtualBox installed. If you are using macOS or Windows and have
installed Docker, you should have VirtualBox already installed.
Using the instructions appropriate to your system architecture, install Docker Machine.
Before you create a swarm with docker-machine, you associate each node with a discovery service.
This example uses the token discovery service hosted by Docker Hub (only for testing/dev, not for
production). This discovery service associates a token with instances of the Docker Daemon running
on each node. Other discovery service backends such as etcd, consul, and zookeeper are available.
$ docker-machine ls
NAME        ACTIVE   DRIVER       STATE     URL                         SWARM
docker-vm   *        virtualbox   Running   tcp://192.168.99.100:2376
This example was run on a macOS system with Docker Toolbox installed. So, the docker-vm virtual machine is in the list.
5. Create a VirtualBox machine called local on your system:
$ docker-machine create -d virtualbox local
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0005] Starting VirtualBox VM...
INFO[0005] Waiting for VM to start...
INFO[0050] "local" has been created and is now the active machine.
INFO[0050] To point your Docker client at it, run this in your shell: eval "$(docker-machine env local)"
The command below runs the swarm create command in a container. If you haven’t got
the swarm:latest image on your local machine, Docker pulls it for you.
$ docker run swarm create
Unable to find image 'swarm:latest' locally
latest: Pulling from swarm
de939d6ed512: Pull complete
79195899a8a4: Pull complete
79ad4f2cc8e0: Pull complete
0db1696be81b: Pull complete
ae3b6728155e: Pull complete
57ec2f5f3e06: Pull complete
73504b2882a3: Already exists
swarm:latest: The image you are pulling has been verified. Important: image
verification is a tech preview feature and should not be relied on to provide
security.
Digest: sha256:aaaf6c18b8be01a75099cc554b4fb372b8ec677ae81764dcdf85470279a61d6f
Status: Downloaded newer image for swarm:latest
fe0cc96a72cf04dba8c1c4aa79536ec3
The swarm create command returned the fe0cc96a72cf04dba8c1c4aa79536ec3 token. Note: This
command relies on Docker Swarm’s hosted discovery service. If this service is having issues, this
command may fail. In this case, see information on using other types of discovery backends. Check
the status page for service availability.
You use this token in the next step to create a Docker Swarm.
Swarm agents are responsible for hosting containers. They are regular docker daemons and you
can communicate with them using the Docker Engine API.
docker-machine create \
    -d virtualbox \
    --swarm \
    --swarm-master \
    --swarm-discovery token://<TOKEN-FROM-ABOVE> \
    swarm-master
For example:
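Using the token generated earlier in this tutorial, the filled-in command would look something like:
docker-machine create \
    -d virtualbox \
    --swarm \
    --swarm-master \
    --swarm-discovery token://fe0cc96a72cf04dba8c1c4aa79536ec3 \
    swarm-master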
8. Open your VirtualBox Manager; it should contain the local machine and the new swarm-master machine.
For example:
1. Point your Docker environment to the machine running the swarm master.
The master is running both the swarm manager and a swarm agent container. This isn’t
recommended in a production environment because it can cause problems with agent
failover. However, it is perfectly fine to do this in a learning environment like this one.
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                CREATED          STATUS          PORTS                                     NAMES
78be991b58d1   swarm:latest   "/swarm join --addr    3 minutes ago    Up 2 minutes    2375/tcp                                  swarm-agent-01/swarm-agent
da5127e4f0f9   swarm:latest   "/swarm join --addr    6 minutes ago    Up 6 minutes    2375/tcp                                  swarm-agent-00/swarm-agent
ef395f316c59   swarm:latest   "/swarm join --addr    16 minutes ago   Up 16 minutes   2375/tcp                                  swarm-master/swarm-agent
45821ca5208e   swarm:latest   "/swarm manage --tls   16 minutes ago   Up 16 minutes   2375/tcp, 192.168.99.104:3376->3376/tcp   swarm-master/swarm-agent-master
40. Use the docker ps command to find out which node the container ran on.
$ docker ps -a
CONTAINER ID   IMAGE                COMMAND                CREATED          STATUS                     PORTS                                     NAMES
54a8690043dd   hello-world:latest   "/hello"               22 seconds ago   Exited (0) 3 seconds ago                                             swarm-agent-00/modest_goodall
78be991b58d1   swarm:latest         "/swarm join --addr    5 minutes ago    Up 4 minutes               2375/tcp                                  swarm-agent-01/swarm-agent
da5127e4f0f9   swarm:latest         "/swarm join --addr    8 minutes ago    Up 8 minutes               2375/tcp                                  swarm-agent-00/swarm-agent
ef395f316c59   swarm:latest         "/swarm join --addr    18 minutes ago   Up 18 minutes              2375/tcp                                  swarm-master/swarm-agent
45821ca5208e   swarm:latest         "/swarm manage --tls   18 minutes ago   Up 18 minutes              2375/tcp, 192.168.99.104:3376->3376/tcp   swarm-master/swarm-agent-master
Where to go next
At this point, you’ve installed Docker Swarm by pulling the latest image of it from Docker Hub. Then, you built and ran a swarm on your local machine using VirtualBox. If you want, you can go on to read an overview of Docker Swarm features. Alternatively, you can develop a more in-depth view of Swarm by manually installing Swarm on a network.
This article provides guidance to help you plan, deploy, and manage Docker swarm clusters in
business critical production environments. The following high level topics are covered:
Security
High Availability
Performance
Cluster ownership
Security
There are many aspects to securing a Docker Swarm cluster. This section covers:
These topics are not exhaustive. They form part of a wider security architecture that includes:
security patching, strong password policies, role based access control, technologies such as
SELinux and AppArmor, strict auditing, and more.
The Engine daemons, including the swarm manager, that are configured to use TLS only accept commands from Docker Engine clients that sign their communications. Engine and Swarm support external third-party Certificate Authorities (CAs) as well as internal corporate CAs.
For more information on configuring Swarm for TLS, see the Overview Docker Swarm with TLS page.
Swarm manager:
o Inbound 80/tcp (HTTP). This allows docker pull commands to work. If you plan to
pull images from Docker Hub, you must allow Internet connections through port 80.
o Inbound 2375/tcp. This allows Docker Engine CLI commands direct to the Engine
daemon.
o Inbound 3375/tcp. This allows Engine CLI commands to the swarm manager.
o Inbound 22/tcp. This allows remote management via SSH.
Service Discovery:
o Inbound 80/tcp (HTTP). This allows docker pull commands to work. If you plan to
pull images from Docker Hub, you must allow Internet connections through port 80.
o Inbound Discovery service port. This needs setting to the port that the backend
discovery service listens on (consul, etcd, or zookeeper).
o Inbound 22/tcp. This allows remote management via SSH.
Swarm nodes:
o Inbound 80/tcp (HTTP). This allows docker pull commands to work. If you plan to
pull images from Docker Hub, you must allow Internet connections through port 80.
o Inbound 2375/tcp. This allows Engine CLI commands direct to the Docker daemon.
o Inbound 22/tcp. This allows remote management via SSH.
Custom, cross-host container networks:
o Inbound 7946/tcp Allows for discovering other container networks.
o Inbound 7946/udp Allows for discovering other container networks.
o Inbound <store-port>/tcp Network key-value store service port.
o 4789/udp For the container overlay network.
o ESP packets For encrypted overlay networks.
If your firewalls and other network devices are connection state aware, they allow responses to
established TCP connections. If your devices are not state aware, you need to open up ephemeral
ports from 32768-65535. For added security you can configure the ephemeral port rules to only
allow connections from interfaces on known swarm devices.
If your swarm cluster is configured for TLS, replace 2375 with 2376, and 3375 with 3376.
The ports listed above are just for swarm cluster operations such as cluster creation, cluster
management, and scheduling of containers against the cluster. You may need to open additional
network ports for application-related communications.
It is possible for different components of a swarm cluster to exist on separate networks. For
example, many organizations operate separate management and production networks. Some
Docker Engine clients may exist on a management network, while swarm managers, discovery
service instances, and nodes might exist on one or more production networks. To offset against
network failures, you can deploy swarm managers, discovery services, and nodes across multiple
production networks. In all of these cases you can use the list of ports above to assist the work of
your network infrastructure teams to efficiently and securely configure your network.
The following sections discuss some technologies and best practices that can enable you to build resilient, highly available swarm clusters. You can then use these clusters to run your most demanding production applications and workloads.
Swarm manager HA
The swarm manager is responsible for accepting all commands coming in to a swarm cluster, and
scheduling resources against the cluster. If the swarm manager becomes unavailable, some cluster
operations cannot be performed until the swarm manager becomes available again. This is
unacceptable in large-scale business critical scenarios.
Swarm provides HA features to mitigate against possible failures of the swarm manager. You can
use Swarm’s HA feature to configure multiple swarm managers for a single cluster. These swarm
managers operate in an active/passive formation with a single swarm manager being the primary,
and all others being secondaries.
Swarm secondary managers operate as warm standbys, meaning they run in the background of the primary swarm manager. The secondary swarm managers are online and accept commands issued
to the cluster, just as the primary swarm manager. However, any commands received by the
secondaries are forwarded to the primary where they are executed. Should the primary swarm
manager fail, a new primary is elected from the surviving secondaries.
When creating HA swarm managers, you should take care to distribute them over as many failure
domains as possible. A failure domain is a network section that can be negatively affected if a critical
device or service experiences problems. For example, if your cluster is running in the Ireland Region
of Amazon Web Services (eu-west-1) and you configure three swarm managers (1 x primary, 2 x
secondary), you should place one in each availability zone as shown below.
In this configuration, the swarm cluster can survive the loss of any two availability zones. For your applications to survive such failures, they must also be architected across multiple failure domains.
For swarm clusters serving high-demand, line-of-business applications, you should have 3 or more
swarm managers. This configuration allows you to take one manager down for maintenance, suffer
an unexpected failure, and still continue to manage and operate the cluster.
Discovery service HA
The discovery service is a key component of a swarm cluster. If the discovery service becomes
unavailable, this can prevent certain cluster operations. For example, without a working discovery
service, operations such as adding new nodes to the cluster and making queries against the cluster
configuration fail. This is not acceptable in business critical production environments.
When creating a highly available swarm discovery service, you should take care to distribute each
discovery service instance over as many failure domains as possible. For example, if your cluster is
running in the Ireland Region of Amazon Web Services (eu-west-1) and you configure three
discovery service instances, you should place one in each availability zone.
The diagram below shows a swarm cluster configured for HA. It has three swarm managers and
three discovery service instances spread over three failure domains (availability zones). It also has
swarm nodes balanced across all three failure domains. The loss of two availability zones in the
configuration shown below does not cause the swarm cluster to go down.
It is possible to share the same Consul, etcd, or Zookeeper containers between the swarm discovery
and Engine container networks. However, for best performance and availability you should deploy
dedicated instances – a discovery instance for Swarm and another for your container networks.
Multiple clouds
You can architect and build swarm clusters that stretch across multiple cloud providers, and even
across public cloud and on premises infrastructures. The diagram below shows an example swarm
cluster stretched across AWS and Azure.
While such architectures may appear to provide the ultimate in availability, there are several factors
to consider. Network latency can be problematic, as can partitioning. As such, you should seriously
consider technologies that provide reliable, high speed, low latency connections into these cloud
platforms – technologies such as AWS Direct Connect and Azure ExpressRoute.
If you are considering a production deployment across multiple infrastructures like this, make sure
you have good test coverage over your entire system.
It is not unusual for a company to use one operating system in development environments, and a
different one in production. A common example of this is to use CentOS in development
environments, but then to use Red Hat Enterprise Linux (RHEL) in production. This decision is often
a balance between cost and support. CentOS Linux can be downloaded and used for free, but
commercial support options are few and far between. Whereas RHEL has an associated support
and license cost, but comes with world class commercial support from Red Hat.
When choosing the production operating system to use with your swarm clusters, choose one that
closely matches what you have used in development and staging environments. Although containers
abstract much of the underlying OS, some features have configuration requirements. For example,
to use Docker container networking with Docker Engine 1.10 or higher, your host must have a Linux
kernel that is version 3.10 or higher. Refer to the change logs to understand the requirements for a
particular version of Docker Engine or Swarm.
You should also consider procedures and channels for deploying and potentially patching your
production operating systems.
Performance
Performance is critical in environments that support business critical line of business applications.
The following sections discuss some technologies and best practices that can help you build high
performance swarm clusters.
Container networks
Docker Engine container networks are overlay networks and can be created across multiple Engine
hosts. For this reason, a container network requires a key-value (KV) store to maintain network
configuration and state. This KV store can be shared in common with the one used by the swarm
cluster discovery service. However, for best performance and fault isolation, you should deploy
individual KV store instances for container networks and swarm discovery. This is especially so in
demanding business critical production environments.
Beginning with Docker Engine 1.9, Docker container networks require specific Linux kernel versions.
Higher kernel versions are usually preferred, but carry an increased risk of instability because of the
newness of the kernel. Where possible, use a kernel version that is already approved for use in your
production environment. If you can not use a 3.10 or higher Linux kernel version for production, you
should begin the process of approving a newer kernel as early as possible.
Scheduling strategies
Scheduling strategies are how Swarm decides which nodes in a cluster to start containers on.
Swarm supports the following strategies:
spread
binpack
random (not for production use)
Spread is the default strategy. It attempts to balance the number of containers evenly across all
nodes in the cluster. This is a good choice for high performance clusters, as it spreads container
workload across all resources in the cluster. These resources include CPU, RAM, storage, and
network bandwidth.
If your swarm nodes are balanced across multiple failure domains, the spread strategy balances containers evenly across those failure domains. However, spread on its own is not aware of the roles of any of those containers, so it has no intelligence to spread multiple instances of the same service across failure domains. To achieve this, you should use tags and constraints.
The binpack strategy runs as many containers as possible on a node, effectively filling it up, before
scheduling containers on the next node.
This means that binpack does not use all cluster resources until the cluster fills up. As a result,
applications running on swarm clusters that operate the binpack strategy might not perform as well
as those that operate the spread strategy. However, binpack is a good choice for minimizing
infrastructure requirements and cost. For example, imagine you have a 10-node cluster where each
node has 16 CPUs and 128GB of RAM. However, your container workload across the entire cluster
is only using the equivalent of 6 CPUs and 64GB RAM. The spread strategy would balance
containers across all nodes in the cluster. However, the binpack strategy would fit all containers on a single node, potentially allowing you to turn off the additional nodes and save on cost.
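A rough illustration of selecting a strategy when starting the manager, assuming the standalone swarm image and a Consul discovery backend like the ones used elsewhere in this guide (the discovery address is a placeholder):
$ docker run -d -p 4000:4000 swarm manage --strategy binpack -H :4000 consul://<consul_ip>:8500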
Cluster ownership
Whose budget does the production swarm infrastructure come out of?
Who owns the accounts that can administer and manage the production swarm cluster?
Who is responsible for monitoring the production swarm infrastructure?
Who is responsible for patching and upgrading the production swarm infrastructure?
What are the on-call responsibilities and escalation procedures?
The above is not a complete list, and the answers to the questions vary depending on how your organization and teams are structured. Some companies are a long way down the DevOps route, while others are not. Whatever situation your company is in, it is important that you factor all of the above into the planning, deployment, and ongoing management of your production swarm clusters.
This page teaches you to deploy a high-availability swarm cluster. Although the example installation
uses the Amazon Web Services (AWS) platform, you can deploy an equivalent swarm on many
other platforms. In this example, you do the following:
For a quickstart for Docker Swarm, try the Evaluate Swarm in a sandbox page.
Prerequisites
An Amazon Web Services (AWS) account
Familiarity with AWS features and tools, such as:
o Elastic Compute Cloud (EC2) Dashboard
o Virtual Private Cloud (VPC) Dashboard
o VPC Security groups
o Connecting to an EC2 instance using SSH
You’re going to add a couple of rules to allow inbound SSH connections and inbound container
images. This set of rules somewhat protects the Engine, Swarm, and Consul ports. For a production
environment, you would apply more restrictive security measures. Do not leave Docker Engine ports
unprotected.
3. Select the default security group that’s associated with your default VPC.
The SSH connection allows you to connect to the host while the HTTP is for container images.
Step 2. Create your instances
In this step, you create five Linux hosts that are part of your default security group. When complete,
the example deployment contains three types of nodes:
1. Open the EC2 Dashboard and launch five EC2 instances, one at a time.
o During Step 1: Choose an Amazon Machine Image (AMI), pick the Amazon Linux
AMI.
o During Step 5: Tag Instance, under Value, give each instance one of these names:
manager0
manager1
consul0
node0
node1
2. Edit /etc/docker/daemon.json. Create it if it does not exist. Assuming the file was empty, its
contents should be:
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
Start or restart Docker for the changes to take effect.
TROUBLESHOOTING
For this example, don’t create an AMI image from one of your instances running Docker
Engine and then re-use it to create the other instances. Doing so produces errors.
If your host cannot reach Docker Hub, docker run commands that pull images fail. In that
case, check that your VPC is associated with a security group with a rule that allows inbound
traffic. Also check the Docker Hub status page for service availability.
To keep things simple, you are going to run a single consul daemon on the same host as one of the
swarm managers.
3. From the output, copy the eth0 IP address from inet addr.
4. To set up a discovery backend, use the following command, replacing <consul0_ip> with the IP address from the previous command:
$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap -advertise=<consul0_ip>
6. Enter docker ps.
From the output, verify that a consul container is running. Then, disconnect from
the consul0 instance.
Your Consul node is up and running, providing your cluster with a discovery backend. To increase its
reliability, you can create a high-availability cluster using a trio of consul nodes using the link
mentioned at the end of this page. (Before creating a cluster of consul nodes, update the VPC
security group with rules to allow inbound traffic on the required port numbers.)
1. Use SSH to connect to the manager0 instance and use ifconfig to get its IP address.
2. $ ifconfig
3. To create the primary manager in a high-availability swarm cluster, use the following syntax:
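Based on the concrete command just below, the generic form (using the placeholders from this walkthrough) would be:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise <manager0_ip>:4000 consul://<consul0_ip>:8500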
Replacing <manager0_ip> and <consul0_ip> with the IP addresses from the previous commands, for example:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 172.30.0.125:4000 consul://172.30.0.161:8500
Replacing <manager1_ip> with the IP address from the previous command, for example:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise <manager1_ip>:4000 consul://172.30.0.161:8500
9. Enter docker ps to verify that a swarm container is running. Then disconnect from
the manager1 instance.
10. Connect to node0 and node1 in turn and join them to the cluster.
a. Get the node IP addresses with the ifconfig command.
For example:
c. Enter docker ps to verify that the swarm cluster container started from the previous
command is running.
Your small swarm cluster is up and running on multiple hosts, providing you with a high-availability
virtual Docker Engine. To increase its reliability and capacity, you can add more swarm managers,
nodes, and a high-availability discovery backend.
The output gives the manager’s role as primary (Role: primary) and information about each
of the nodes.
6. $ docker -H :4000 ps
4. Shut down the primary manager, replacing <id_name> with the container’s ID or name (for
example, “8862717fe6d3” or “trusting_lamarr”).
5. docker container rm -f <id_name>
8. Review the Engine’s daemon logs, replacing <id_name> with the new container’s ID or name:
9. $ sudo docker logs <id_name>
10. To get information about the manager and nodes in the cluster, enter:
You can connect to the manager1 node and run the info and logs commands. They display
corresponding entries for the change in leadership.
Deploy application infrastructure
In this step, you create several Docker hosts to run your application stack on. Before you continue,
make sure you have taken the time to learn the application architecture.
While this example uses Docker Machine, this is only one example of an infrastructure you can use.
You can create the environment design on whatever infrastructure you wish. For example, you could
place the application on another public cloud platform such as Azure or DigitalOcean, on premises in
your data center, or even in a test environment on your laptop.
Finally, these instructions use some common bash command substitution techniques to resolve
some values, for example:
$ eval $(docker-machine env keystore)
In a Windows environment, these substitutions fail. If you are running on Windows, replace the substitution $(docker-machine env keystore) with the actual value.
Several different backends are supported. This example uses a Consul container.
You can set options for the Engine daemon with the --engine-opt flag. In this command, you
use it to label this Engine instance.
4. Set your local shell to the keystore Docker host.
5. $ eval $(docker-machine env keystore)
The -p flag publishes port 8500 on the container which is where the Consul server listens.
The server also has several other ports exposed which you can see by running docker ps.
$ docker ps
CONTAINER ID   IMAGE             ...   PORTS                                                                             NAMES
372ffcbc96ed   progrium/consul   ...   53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp    dreamy_ptolemy
This command uses the TLS certificates created for the boot2docker.iso or the manager.
This is key for the manager when it connects to other machines in the cluster.
16. Test your work by displaying the Docker daemon logs from the host.
The output indicates that the consul and the manager are communicating correctly.
For example:
$ docker-machine ip manager
192.168.99.101
5. Use your favorite editor to create a config.toml file and add this content to the file:
ListenAddr = ":8080"
DockerURL = "tcp://SWARM_MANAGER_IP:3376"
TLSCACert = "/var/lib/boot2docker/ca.pem"
TLSCert = "/var/lib/boot2docker/server.pem"
TLSKey = "/var/lib/boot2docker/server-key.pem"

[[Extensions]]
Name = "nginx"
ConfigPath = "/etc/nginx/nginx.conf"
PidPath = "/var/run/nginx.pid"
MaxConn = 1024
Port = 80
18. In the configuration, replace the SWARM_MANAGER_IP with the manager IP you got in Step 4. You use this value because the load balancer listens on the manager’s event stream.
If you don’t see the image running, use docker ps -a to list all images to make sure the
system attempted to start the image. Then, get the logs to see why the container failed to
start.
$ docker logs interlock
INFO[0000] interlock 1.0.1 (000291d)
DEBU[0000] loading config from: /etc/config.toml
FATA[0000] read /etc/config.toml: is a directory
This error usually means you didn’t start docker run from the same config directory where the config.toml file is. If you run the command and get a Conflict error such as:
docker: Error response from daemon: Conflict. The name "/interlock" is already in
use by container
d846b801a978c76979d46a839bb05c26d2ab949ff9f4f740b06b5e2564bae958. You have to
remove (or rename) that container to reuse that name.
Remove the interlock container with docker container rm interlock and try again.
37. Start an nginx container on the load balancer.
$ docker run -ti -d \
    -p 80:80 \
    --label interlock.ext.name=nginx \
    --link=interlock:interlock \
    -v nginx:/etc/conf \
    --name nginx \
    nginx nginx -g "daemon off;" -c /etc/conf/nginx.conf
If you were building this in a non-Mac/Windows environment, you’d only need to run
the join command to add a node to the Swarm cluster and register it with the Consul discovery
service. When you create a node, you also give it a label, for example:
--engine-opt="label=com.function=frontend01"
These labels are used later when starting application containers. In the commands below, notice the
label you are applying to each node.
At this point, you have deployed the infrastructure you need to run the application. Test this now by listing the running machines:
$ docker-machine ls
NAME           ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
dbstore        -        virtualbox   Running   tcp://192.168.99.111:2376           v1.10.3
frontend01     -        virtualbox   Running   tcp://192.168.99.108:2376           v1.10.3
frontend02     -        virtualbox   Running   tcp://192.168.99.109:2376           v1.10.3
keystore       -        virtualbox   Running   tcp://192.168.99.100:2376           v1.10.3
loadbalancer   -        virtualbox   Running   tcp://192.168.99.107:2376           v1.10.3
manager        -        virtualbox   Running   tcp://192.168.99.101:2376           v1.10.3
worker01       *        virtualbox   Running   tcp://192.168.99.110:2376           v1.10.3
30. Make sure the Swarm manager sees all your nodes.
The command is acting on the Swarm port, so it returns information about the entire cluster.
You have a manager and no nodes.
You’ve deployed the load balancer, the discovery backend, and a swarm cluster so now you can
build and deploy the voting application itself. You do this by starting a number of “Dockerized
applications” running in containers.
The diagram below shows the final application configuration including the overlay container
network, voteapp.
In this procedure you connect containers to this network. The voteapp network is available to all Docker hosts using the Consul discovery backend. Notice that the interlock, nginx, consul, and swarm manager containers are not part of the voteapp overlay container network.
You can create the network on a cluster node and the network is visible on them all.
7. Verify you can see the new network from the dbstore node.
$ docker network ls
NETWORK ID     NAME      DRIVER
e952814f610a   voteapp   overlay
1f12c5e7bcc4   bridge    bridge
3ca38e887cd8   none      null
3da57c44586b   host      host
You can launch these containers from any host in the cluster using the commands in this section.
Each command includes a -H flag so that they execute against the swarm manager.
The commands also all use the -e flag, which is a Swarm constraint. The constraint tells the manager to look for a node with a matching function label. You established the labels when you created the nodes. As you run each command below, look for the constraint value.
27. Start the voting application twice; once on each frontend node.
Manual restart is required because the current Interlock server is not forcing an Nginx
configuration reload.
Docker Compose lets you define your microservice containers and their dependencies in a Compose file. Then, you can use the Compose file to start all the containers at once. This extra credit exercise begins by pointing DOCKER_HOST at the swarm manager:
$ DOCKER_HOST=$(docker-machine ip manager):3376
2. Try to create a Compose file on your own by reviewing the tasks in this tutorial.
The version 2 Compose file format is the best to use. Translate each docker run command into a service in the docker-compose.yml file. For example, this command:
$ docker -H $(docker-machine ip manager):3376 run -t -d \
-e constraint:com.function==worker01 \
--net="voteapp" \
--net-alias=workers \
--name worker01 docker/example-voting-app-worker
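One plausible translation into a version 2 service, assuming the voteapp network created earlier is declared as an external network (the service name and layout here are illustrative, not the tutorial's official file):
version: "2"
services:
  worker01:
    image: docker/example-voting-app-worker
    networks:
      voteapp:
        aliases:
          - workers
    environment:
      - "constraint:com.function==worker01"
networks:
  voteapp:
    external: true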
In general, Compose starts services in the reverse of the order they appear in the file. So, if you want a service to start before all the others, make it the last service in the file. This application relies on a volume and a network; declare those at the bottom of the file.
4. When you are satisfied, save the docker-compose.yml file to your system.
5. Set DOCKER_HOST to the swarm manager.
6. $ DOCKER_HOST=$(docker-machine ip manager):3376
41. Use the docker ps command to see the containers on the swarm cluster.
$ docker -H $(docker-machine ip manager):3376 ps
CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS                            NAMES
b71555033caa   docker/example-voting-app-result   "node server.js"         6 seconds ago   Up 4 seconds   192.168.99.104:32774->80/tcp     frontend01/scale_result-app_1
cf29ea21475d   docker/example-voting-app-worker   "/usr/lib/jvm/java-7-"   6 seconds ago   Up 4 seconds                                    worker01/scale_worker_1
98414cd40ab9   redis                              "/entrypoint.sh redis"   7 seconds ago   Up 5 seconds   192.168.99.105:32774->6379/tcp   frontend02/redis
1f214acb77ae   postgres:9.4                       "/docker-entrypoint.s"   7 seconds ago   Up 5 seconds   5432/tcp                         frontend01/db
1a4b8f7ce4a9   docker/example-voting-app-vote     "python app.py"          7 seconds ago   Up 5 seconds   192.168.99.107:32772->80/tcp     dbstore/scale_voting-app_1
When you started the services manually, you had a voting-app instance running on each of two frontend servers. How many do you have now?
49. Scale your application up by adding some voting-app instances.
$ docker-compose scale voting-app=3
Creating and starting 2 ... done
Creating and starting 3 ... done
After you scale up, list the containers on the cluster again.
This log shows the activity on one of the active voting application containers.
It’s a fact of life that things fail. With this in mind, it’s important to understand what happens when
failures occur and how to mitigate them. The following sections cover different failure scenarios:
If the failure is the swarm manager container unexpectedly exiting, Docker automatically attempts to
restart it. This is because the container was started with the --restart=unless-stopped switch.
While the swarm manager is unavailable, the application continues to work in its current
configuration. However, you cannot provision more nodes or containers until you have a working
swarm manager.
Docker Swarm supports high availability for swarm managers. This allows a single swarm cluster to
have two or more managers. One manager is elected as the primary manager and all others operate
as secondaries. In the event that the primary manager fails, one of the secondaries is elected as the
new primary, and cluster operations continue gracefully. If you are deploying multiple swarm
managers for high availability, you should consider spreading them across multiple failure domains
within your infrastructure.
If the failure is the consul container unexpectedly exiting, Docker automatically attempts to restart it. This is because the container was started with the --restart=unless-stopped switch.
The Consul, etcd, and Zookeeper discovery service backends support various options for high
availability. These include Paxos/Raft quorums. You should follow existing best practices for deploying HA configurations of your chosen discovery service backend. If you are deploying multiple
discovery service instances for high availability, you should consider spreading them across multiple
failure domains within your infrastructure.
If you operate your swarm cluster with a single discovery backend service and this service fails and
is unrecoverable, you can start a new empty instance of the discovery backend and the swarm
agents on each node in the cluster repopulate it.
Handling failures
There are many reasons why containers can fail. However, Swarm does not attempt to restart failed
containers.
One way to automatically restart failed containers is to explicitly start them with the --restart=unless-stopped flag. This tells the local Docker daemon to attempt to restart the container if
it unexpectedly exits. This only works in situations where the node hosting the container and its
Docker daemon are still up. This cannot restart a container if the node hosting it has failed, or if the
Docker daemon itself has failed.
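For instance (a generic illustration, not a command taken from the voting-app deployment itself):
$ docker run -d --restart=unless-stopped --name web nginx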
Another way is to have an external tool (external to the cluster) monitor the state of your application,
and make sure that certain service levels are maintained. These service levels can include things
like “have at least 10 web server containers running”. In this scenario, if the number of web
containers drops below 10, the tool attempts to start more.
In our simple voting-app example, the front-end is scalable and serviced by a load balancer. In the
event that one of the two web containers fails (or the node that is hosting it fails), the load balancer
stops routing requests to it and sends all requests to the surviving web container. This solution is
highly scalable, meaning you can have up to n web containers behind the load balancer.
If the failure is the interlock container unexpectedly exiting, Docker automatically attempts to restart it. This is because the container was started with the --restart=unless-stopped switch.
It is possible to build an HA Interlock load balancer configuration. One such way is to have multiple
Interlock containers on multiple nodes. You can then use DNS round robin, or other technologies, to
load balance across each Interlock container. That way, if one Interlock container or node goes
down, the others continue to service requests.
If you deploy multiple interlock load balancers, you should consider spreading them across multiple
failure domains within your infrastructure.
In the event that one of the web containers or nodes fails, the load balancer starts directing all incoming requests to the surviving instance. Once the failed instance is back up, or a replacement is
added, the load balancer adds it to the configuration and starts sending a portion of the incoming
requests to it.
For highest availability you should deploy the two frontend web services
(frontend01 and frontend02) in different failure zones within your infrastructure. You should also
consider deploying more.
Redis failures
If the redis container fails, its partnered voting-app container does not function correctly. The best
solution in this instance might be to configure health monitoring that verifies the ability to write to
each Redis instance. If an unhealthy redis instance is encountered, remove the voting-
app and redis combination and attempt remedial actions.
If the failure is the worker01 container unexpectedly exiting, Docker automatically attempts to restart it. This is because the container was started with the --restart=unless-stopped switch.
Postgres failures
This application does not implement any form of HA or replication for Postgres. Therefore, losing the Postgres container would cause the application to fail and potentially lose or corrupt data. A better solution would be to implement some form of Postgres HA or replication.
Results-app failures
If the results-app container exits, you cannot browse to the results of the poll until the container is
back up and running. Results continue to be collected and counted, but you can’t view results until
the container is back up and running.
The results-app container was started with the --restart=unless-stopped flag meaning that the
Docker daemon automatically attempts to restart it unless it was administratively stopped.
Infrastructure failures
There are many ways in which the infrastructure underpinning your applications can fail. However,
there are a few best practices that can be followed to help mitigate and offset these failures.
One of these is to deploy infrastructure components over as many failure domains as possible. On a
service such as AWS, this often translates into balancing infrastructure and services across multiple
AWS Availability Zones (AZ) within a Region.
Configure the swarm manager for HA and deploy HA nodes in different AZs
Configure the Consul discovery service for HA and deploy HA nodes in different AZs
Deploy all scalable components of the application across multiple AZs
This allows us to lose an entire AZ and still have our cluster and application operate.
But it doesn’t have to stop there. Some applications can be balanced across AWS Regions. It’s even becoming possible to deploy services across cloud providers, or to balance services across public cloud providers and your on-premises data centers!
The diagram below shows parts of the application and infrastructure deployed across AWS and
Microsoft Azure. But you could just as easily replace one of those cloud providers with your own on
premises data center. In these scenarios, network latency and reliability are key to a smooth and workable solution.
In Docker Swarm, the swarm manager is responsible for the entire cluster and manages the
resources of multiple Docker hosts at scale. If the swarm manager dies, you must create a new one
and deal with an interruption of service.
The High Availability feature allows a swarm to gracefully handle the failover of a manager instance.
Using this feature, you can create a single primary manager instance and
multiple replica instances.
A primary manager is the main point of contact with the swarm cluster. You can also create and talk
to replica instances that act as backups. Requests issued on a replica are automatically proxied to the primary manager. If the primary manager fails, a replica takes over the lead. In this way, you always keep a point of contact with the cluster.
Assumptions
You need either a Consul, etcd, or Zookeeper cluster. This procedure is written assuming
a Consul server running on address 192.168.42.10:8500. All hosts have a Docker Engine configured
to listen on port 2375. The Managers operate on port 4000. The sample swarm configuration has
three machines:
manager-1 on 192.168.42.200
manager-2 on 192.168.42.201
manager-3 on 192.168.42.202
The --replication flag tells Swarm that the manager is part of a multi-manager configuration and that this primary manager competes with other manager instances for the primary role. The primary manager has the authority to manage the cluster, replicate logs, and replicate events happening inside the cluster.
The --advertise option specifies the primary manager address. Swarm uses this address to advertise to the cluster when the node is elected as the primary. As you see in the command’s output, the address you provided now appears as the address of the elected primary manager.
Create two replicas
Now that you have a primary manager, you can create replicas.
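For example, mirroring the manager-3 command shown further down, a replica on manager-2 could be started like this (the <tls-config-flags> placeholder stands for whatever TLS options you use):
user@manager-2 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise 192.168.42.201:4000 consul://192.168.42.10:8500/nodes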
Once you have established your primary manager and the replicas, create swarm agents as you
normally would.
This information shows that manager-1 is the current primary and supplies the address to use to
contact this primary.
Because the primary manager, manager-1, failed right after it was elected, the replica with the address 192.168.42.201:4000, manager-2, recognized the failure and attempted to take over the lead. Because manager-2 was fast enough, it was elected as the primary manager. As a result, manager-2 became the primary manager of the cluster.
If we take a look at manager-3, we should see these logs:
user@manager-3 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise 192.168.42.202:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
INFO[0036] New leader elected: 192.168.42.201:4000 <--- manager-2 is the new Primary Manager
[...]
You can use the docker command on any swarm manager or any replica.
If you like, you can use custom mechanisms to always point DOCKER_HOST to the current primary
manager. Then, you never lose contact with your swarm in the event of a failover.
Docker Swarm is fully compatible with Docker’s networking features. This includes the multi-host
networking feature which allows creation of custom container networks that span multiple Docker
hosts.
Before using Swarm with a custom network, read through the conceptual information in Docker
container networking. You should also have walked through the Get started with multi-host
networking example.
To create a custom network, you must choose a key-value store backend and implement it on your
network. Then, you configure the Docker Engine daemon to use this store. Two required
parameters, --cluster-store and --cluster-advertise, refer to your key-value store server.
Once you’ve configured and restarted the daemon on each Swarm node, you are ready to create a
network.
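As a sketch, assuming a Consul key-value store such as the one used in the High Availability example
above, each Engine daemon could be started with flags similar to these (the eth0 interface name is an
assumption):
$ docker daemon -H tcp://0.0.0.0:2375 \
    --cluster-store=consul://192.168.42.10:8500 \
    --cluster-advertise=eth0:2375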
List networks
This example assumes there are two nodes node-0 and node-1 in the cluster. From a Swarm node,
list the networks:
$ docker network ls
NETWORK ID NAME DRIVER
3dd50db9706d node-0/host host
09138343e80e node-0/bridge bridge
8834dbd552e5 node-0/none null
45782acfe427 node-1/host host
8926accb25fd node-1/bridge bridge
6382abccd23d node-1/none null
As you can see, each network name is prefixed by the node name.
Create a network
By default, Swarm uses the overlay network driver, a global-scope network driver. A global-scope
network driver creates a network across an entire Swarm cluster. When you create
an overlay network under Swarm, you can omit the -d option:
$ docker network create swarm_network
42131321acab3233ba342443Ba4312
$ docker network ls
NETWORK ID NAME DRIVER
3dd50db9706d node-0/host host
09138343e80e node-0/bridge bridge
8834dbd552e5 node-0/none null
42131321acab node-0/swarm_network overlay
45782acfe427 node-1/host host
8926accb25fd node-1/bridge bridge
6382abccd23d node-1/none null
42131321acab node-1/swarm_network overlay
As you can see here, both the node-0/swarm_network and the node-1/swarm_network have the same
ID. This is because when you create a network on the cluster, it is accessible from all the nodes.
To create a local scope network (for example with the bridge network driver) you should
use <node>/<name>, otherwise your network is created on a random node.
$ docker network create node-0/bridge2 -d bridge
921817fefea521673217123abab223
$ docker network create node-1/bridge2 -d bridge
5262bbfe5616fef6627771289aacc2
$ docker network ls
NETWORK ID NAME DRIVER
3dd50db9706d node-0/host host
09138343e80e node-0/bridge bridge
8834dbd552e5 node-0/none null
42131321acab node-0/swarm_network overlay
921817fefea5 node-0/bridge2 bridge
45782acfe427 node-1/host host
8926accb25fd node-1/bridge bridge
6382abccd23d node-1/none null
42131321acab node-1/swarm_network overlay
5262bbfe5616 node-1/bridge2 bridge
--opt encrypted is a feature only available in Docker Swarm mode. It’s not supported in Swarm
standalone. Network encryption requires key management, which is outside the scope of Swarm.
Remove a network
To remove a network you can use its ID or its name. If two different networks have the same name,
include the <node> value:
$ docker network rm swarm_network
42131321acab3233ba342443Ba4312
$ docker network rm node-0/bridge2
921817fefea521673217123abab223
$ docker network ls
NETWORK ID NAME DRIVER
3dd50db9706d node-0/host host
09138343e80e node-0/bridge bridge
8834dbd552e5 node-0/none null
45782acfe427 node-1/host host
8926accb25fd node-1/bridge bridge
6382abccd23d node-1/none null
5262bbfe5616 node-1/bridge2 bridge
The swarm_network was removed from every node. The bridge2 was removed only from node-0.
Docker Swarm comes with multiple discovery backends. You use a hosted discovery service with
Docker Swarm. The service maintains a list of IPs in your cluster. This page describes the different
types of hosted discovery: a distributed key-value store (Consul, etcd, or ZooKeeper), a static file or
list of nodes, and Docker Hub's token-based hosted discovery service.
For details about libkv and a detailed technical overview of the supported backends, refer to the libkv
project.
The node IP address doesn’t need to be public as long as the Swarm manager can access it.
In a large cluster, the nodes joining swarm may trigger request spikes to discovery. For
example, a large number of nodes are added by a script, or recovered from a network
partition. This may result in discovery failure. You can use the --delay option to specify a delay
limit. The swarm join command adds a random delay smaller than this limit to reduce the pressure
on the discovery backend.
This works the same way for the swarm manage and list commands.
You can use a static file or list of nodes for your discovery backend. The file must be stored on a
host that is accessible from the swarm manager. You can also pass a node list as an option when
you start Swarm.
Both the static file and the nodes option support an IP address range. To specify a range supply a
pattern, for example, 10.0.0.[10:200] refers to nodes starting from 10.0.0.10 to 10.0.0.200. For
example, with the file discovery method:
$ echo "10.0.0.[11:100]:2375" >> /tmp/my_cluster
$ echo "10.0.1.[15:20]:2375" >> /tmp/my_cluster
$ echo "192.168.1.2:[2:20]375" >> /tmp/my_cluster
To create a file:
1. Edit the file and add a line for each of your nodes.
This example creates a file named /tmp/my_cluster. You can use any name you like.
or
This example uses the hosted discovery service on Docker Hub. Using Docker Hub’s hosted
discovery service requires that each node in the swarm is connected to the public internet. To create
your cluster:
1. Create a cluster.
2. $ swarm create
3. 6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>
On each of your nodes, start the swarm agent. The node IP address doesn’t need to be
public (e.g., 192.168.0.X) but the swarm manager must be able to access it.
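A sketch of that join command, using the token created in the previous step (the node address is a
placeholder):
$ docker run -d swarm join --advertise=<node_ip>:2375 token://<cluster_id>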
You can use Docker Machine to provision a Docker Swarm cluster. Machine is the Docker
provisioning tool. Machine provisions the hosts, installs Docker Engine on them, and then configures
the Docker CLI client. With Machine’s Swarm options, you can also quickly configure a Swarm
cluster as part of this provisioning.
This page explains the commands you need to provision a basic Swarm cluster on a local Mac or
Windows computer using Machine. Once you understand the process, you can use it to setup a
Swarm cluster on a cloud provider, or inside your company’s data center.
If this is the first time you are creating a Swarm cluster, you should first learn about Swarm and its
requirements by installing a Swarm for evaluation or installing a Swarm for production. If this is the
first time you have used Machine, you should take some time to understand Machine before
continuing.
Machine supports installing on AWS, DigitalOcean, Google Cloud Platform, IBM Softlayer, Microsoft
Azure and Hyper-V, OpenStack, Rackspace, VirtualBox, VMware Fusion®, vCloud® AirTM and
vSphere®. This example uses VirtualBox to run several VMs based on the boot2docker.iso image.
This image is a small-footprint Linux distribution for running Engine.
The Toolbox installation gives you VirtualBox and the boot2docker.iso image you need. It also gives
you the ability to provision on all of the systems Machine supports.
Note: These examples assume you are using macOS or Windows. If you like, you can also install
Docker Machine directly on a Linux system.
This example uses VirtualBox but it could easily be DigitalOcean or a host on your data center.
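The create command itself is not shown in this extract; a minimal sketch for the VirtualBox driver:
$ docker-machine create -d virtualbox local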
The local value is the host name. Once you create it, configure your terminal’s shell environment to
interact with the local host.
eval "$(docker-machine env local)"
Each Swarm host has a token installed into its Engine configuration. The token allows the Swarm
discovery backend to recognize a node as belonging to a particular Swarm cluster. Create the token
for your cluster by running the swarm image:
docker run swarm create
Unable to find image 'swarm' locally
1.1.0-rc2: Pulling from library/swarm
892cb307750a: Pull complete
fe3c9860e6d5: Pull complete
cc01ef3f1fbc: Pull complete
b7e14a9c9c72: Pull complete
3ec746117013: Pull complete
703cb7acfce6: Pull complete
d4f6bb678158: Pull complete
2ad500e1bf96: Pull complete
Digest: sha256:f02993cd1afd86b399f35dc7ca0240969e971c92b0232a8839cf17a37d6e7009
Status: Downloaded newer image for swarm
0de84fa62a1d9e9cc2156111f63ac31f
The output of the swarm create command is a cluster token. Copy the token to a safe place. Once
you have the token, you can provision the swarm nodes and join them to the cluster_id. The rest of
this documentation refers to this token as the SWARM_CLUSTER_TOKEN.
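The step that creates the Swarm manager host is not shown in this extract; a sketch using Machine's
--swarm-master flag (the host name swarm-manager is an assumption based on the output shown
later):
docker-machine create \
    -d virtualbox \
    --swarm \
    --swarm-master \
    --swarm-discovery token://SWARM_CLUSTER_TOKEN \
    swarm-manager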
Then, provision an additional node. You must supply the SWARM_CLUSTER_TOKEN and a unique name
for each host node, HOST_NODE_NAME.
docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery token://SWARM_CLUSTER_TOKEN \
HOST_NODE_NAME
For example, you might use node-01 as the HOST_NODE_NAME in the previous example.
Note: These commands rely on Docker Swarm’s hosted discovery service, Docker Hub. If Docker
Hub or your network is having issues, these commands may fail. Check the Docker Hub status
page for service availability. If the problem is Docker Hub, you can wait for it to recover or configure
other types of discovery backends.
Docker Machine provides a special --swarm flag with its env command to connect to swarm nodes.
docker-machine env --swarm HOST_NODE_NAME
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:3376"
export DOCKER_CERT_PATH="/Users/mary/.docker/machine/machines/swarm-manager"
export DOCKER_MACHINE_NAME="swarm-manager"
# Run this command to configure your shell:
# eval $(docker-machine env --swarm HOST_NODE_NAME)
To set your shell to connect to a swarm node called swarm-manager, you would do this:
eval "$(docker-machine env --swarm swarm-manager)"
Now, you can use the Docker CLI to query and interact with your cluster.
docker info
Containers: 2
Images: 1
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 1
swarm-manager: 192.168.99.101:2376
└ Status: Healthy
└ Containers: 2
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.13-boot2docker,
operatingsystem=Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59
UTC 2015, provider=virtualbox, storagedriver=aufs
CPUs: 1
Total Memory: 1.021 GiB
Name: swarm-manager
Swarm filters
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker
Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users
should use integrated Swarm mode — a good place to start is Getting started with swarm
mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone
Docker Swarm is not integrated into the Docker Engine API and CLI commands.
Filters tell the Docker Swarm scheduler which nodes to use when creating and running a container.
Each filter has a name that identifies it. The node filters are:
constraint
health
containerslots
affinity
dependency
port
When you start a Swarm manager with the swarm manage command, all the filters are enabled. If you
want to limit the filters available to your Swarm, specify a subset of filters by passing the --
filter flag and the name:
$ swarm manage --filter=health --filter=dependency
Note: Container configuration filters match all containers, including stopped containers, when
applying the filter. To release a node used by a container, you must remove the container from the
node.
Node filters
When creating a container or building an image, you use a constraint or health filter to select a
subset of nodes to consider for scheduling. If a node in the Swarm cluster has a label with the
key containerslots and a numeric value, Swarm does not launch more containers on that node than
the given number.
You apply custom node labels when you start the Docker daemon, for example:
$ docker daemon --label com.example.environment="production" --label
com.example.storage="ssd"
Then, when you start a container on the cluster, you can set constraints using these default tags or
custom labels. The Swarm scheduler looks for a matching node on the cluster and starts the container
there. This approach has several practical applications:
Schedule based on specific host properties, for example, storage=ssd schedules containers
on specific hardware.
Force containers to run in a given location, for example region=us-east.
Create logical cluster partitions by splitting a cluster into sub-clusters with different
properties, for example environment=production.
Once the nodes are joined to a cluster, the Swarm manager pulls their respective tags. Moving
forward, the manager takes the tags into account when scheduling new containers.
Continuing the previous example, assuming a cluster with node-1 and node-2, you can run a
MySQL server container on the cluster. When you run the container, you can use a constraint to
ensure the database gets good I/O performance. You do this by filtering for nodes with flash drives:
$ docker tcp://<manager_ip:manager_port> run -d -P -e constraint:storage==ssd --name
db mysql
f8b693db9cd6
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago
running 192.168.0.42:49178->3306/tcp node-1/db
In this example, the manager selected all nodes that met the storage=ssd constraint and applied
resource management on top of them. Only node-1 was selected because it’s the only host running
flash.
Suppose you want to run an Nginx frontend in a cluster. In this case, you wouldn’t want flash drives
because the frontend mostly writes logs to disk.
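The run command for this step is not included in this extract; a sketch consistent with the constraint
described and the names in the output below:
$ docker tcp://<manager_ip:manager_port> run -d -P -e constraint:storage==disk --name
frontend nginx
963841b138d8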
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago
running 192.168.0.43:49177->80/tcp node-2/frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute
running 192.168.0.42:49178->3306/tcp node-1/db
The scheduler selected node-2 since it was started with the storage=disk label.
Finally, build args can be used to apply node constraints to a docker build. This example shows
how to avoid flash drives.
$ mkdir sinatra
$ cd sinatra
$ echo "FROM ubuntu:14.04" > Dockerfile
$ echo "RUN apt-get update && apt-get install -y ruby ruby-dev" >> Dockerfile
$ echo "RUN gem install sinatra" >> Dockerfile
$ docker build --build-arg=constraint:storage==disk -t ouruser/sinatra:v2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ubuntu:14.04
---> a5a467fddcb8
Step 2 : RUN apt-get update && apt-get install -y ruby ruby-dev
---> Running in 26c9fbc55aeb
---> 30681ef95fff
Removing intermediate container 26c9fbc55aeb
Step 3 : RUN gem install sinatra
---> Running in 68671d4a17b0
---> cd70495a1514
Removing intermediate container 68671d4a17b0
Successfully built cd70495a1514
$ docker image ls
REPOSITORY          TAG       IMAGE ID       CREATED          SIZE
dockerswarm/swarm   manager   8c2c56438951   2 days ago       795.7 MB
ouruser/sinatra     v2        cd70495a1514   35 seconds ago   318.7 MB
ubuntu              14.04     a5a467fddcb8   11 days ago      187.9 MB
With a containerslots label of 3, Swarm runs up to 3 containers on that node; if all nodes are "full",
an error is thrown indicating that no suitable node can be found. If the label value cannot be cast to
an integer or is not present, there is no limit on the number of containers.
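A sketch of how such a containerslots label could be applied when starting the daemon on a node
(the value 3 matches the limit described above):
$ docker daemon --label containerslots=3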
Container filters
When creating a container, you can use three types of container filters:
affinity
dependency
port
Use an affinity filter to create "attractions" between containers, so that a new container is scheduled
next to an existing one based on:
container name or ID
an image on the host
a custom label applied to the container
These affinities ensure that containers run on the same network node — without you having to know
what each node is running.
You can schedule a new container to run next to another based on a container name or ID. For
example, you can start a container called frontend running nginx:
$ docker tcp://<manager_ip:manager_port> run -d -p 80:80 --name frontend nginx
87c4376856a8
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:80->80/tcp node-1/frontend
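The command that schedules the logger container next to frontend is not shown in this extract; a
sketch using a name affinity (the ID form of the same command appears below):
$ docker tcp://<manager_ip:manager_port> run -d --name logger -e
affinity:container==frontend logger
963841b138d8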
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:80->80/tcp node-1/frontend
963841b138d8 logger:latest "logger" Less than a second ago
running node-1/logger
Because of name affinity, the logger container ends up on node-1 along with the frontend container.
Instead of the frontend name you could have supplied its ID as follows:
$ docker tcp://<manager_ip:manager_port> run -d --name logger -e
affinity:container==87c4376856a8 logger
You can schedule a container to run only on nodes where a specific image is already pulled. For
example, suppose you pull a redis image to two hosts and a mysql image to a third.
$ docker -H node-1:2375 pull redis
$ docker -H node-2:2375 pull mysql
$ docker -H node-3:2375 pull redis
Only node-1 and node-3 have the redis image. Specify a -e affinity:image==redis filter to
schedule several additional containers to run on these nodes.
$ docker tcp://<manager_ip:manager_port> run -d --name redis1 -e
affinity:image==redis redis
$ docker tcp://<manager_ip:manager_port> run -d --name redis2 -e
affinity:image==redis redis
$ docker tcp://<manager_ip:manager_port> run -d --name redis3 -e
affinity:image==redis redis
$ docker tcp://<manager_ip:manager_port> run -d --name redis4 -e
affinity:image==redis redis
$ docker tcp://<manager_ip:manager_port> run -d --name redis5 -e
affinity:image==redis redis
$ docker tcp://<manager_ip:manager_port> run -d --name redis6 -e
affinity:image==redis redis
$ docker tcp://<manager_ip:manager_port> run -d --name redis7 -e
affinity:image==redis redis
$ docker tcp://<manager_ip:manager_port> run -d --name redis8 -e
affinity:image==redis redis
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 redis:latest "redis" Less than a second ago
running node-1/redis1
1212386856a8 redis:latest "redis" Less than a second ago
running node-1/redis2
87c4376639a8 redis:latest "redis" Less than a second ago
running node-3/redis3
1234376856a8 redis:latest "redis" Less than a second ago
running node-1/redis4
86c2136253a8 redis:latest "redis" Less than a second ago
running node-3/redis5
87c3236856a8 redis:latest "redis" Less than a second ago
running node-3/redis6
87c4376856a8 redis:latest "redis" Less than a second ago
running node-3/redis7
963841b138d8 redis:latest "redis" Less than a second ago
running node-1/redis8
As you can see here, the containers were only scheduled on nodes that had the redis image.
Instead of the image name, you could have specified the image ID.
$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED      VIRTUAL SIZE
redis        latest   06a1f75304ba   2 days ago   111.1 MB
A label affinity allows you to filter based on a custom container label. For example, you can run
a nginx container and apply the com.example.type=frontend custom label.
$ docker tcp://<manager_ip:manager_port> run -d -p 80:80 --label
com.example.type=frontend nginx
87c4376856a8
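The command that starts the second container is not shown in this extract; a sketch using a label
affinity against the com.example.type label applied above:
$ docker tcp://<manager_ip:manager_port> run -d -e affinity:com.example.type==frontend
logger
963841b138d8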
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:80->80/tcp node-1/trusting_yonath
963841b138d8 logger:latest "logger" Less than a second ago
running node-1/happy_hawking
A dependency filter co-locates a dependent container, such as one using a shared volume
(--volumes-from), a link (--link), or a shared network stack (--net=container:<name>), on the same
node as its dependency. If this cannot be done (because the dependency doesn't exist, or because
the node doesn't have enough resources), Swarm prevents the container creation.
The combination of multiple dependencies is honored if possible. For instance, if you specify --
volumes-from=A --net=container:B, the scheduler attempts to co-locate the container on the same
node as A and B. If those containers are running on different nodes, Swarm does not schedule the
container.
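As an illustrative sketch (the image and container names are placeholders), a container that shares
volumes with the db container from the earlier example would be co-located with it on node-1:
$ docker tcp://<manager_ip:manager_port> run -d --volumes-from=db --name db-backup ubuntu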
By default, containers run on Docker’s bridge network. To use the port filter with the bridge network,
you run a container as follows.
$ docker tcp://<manager_ip:manager_port> run -d -p 80:80 nginx
87c4376856a8
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp node-
1/prickly_engelbart
Docker Swarm selects a node where port 80 is available and unoccupied by another container or
process, in this case node-1. Attempting to run another container that uses the host port 80 results in
Swarm selecting a different node, because port 80 is already occupied on node-1:
$ docker tcp://<manager_ip:manager_port> run -d -p 80:80 nginx
963841b138d8
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND PORTS
NAMES
963841b138d8 nginx:latest "nginx" 192.168.0.43:80->80/tcp
node-2/dreamy_turing
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp
node-1/prickly_engelbart
Again, repeating the same command results in the selection of node-3, since port 80 is neither
available on node-1 nor node-2:
$ docker tcp://<manager_ip:manager_port> run -d -p 80:80 nginx
963841b138d8
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND PORTS
NAMES
f8b693db9cd6 nginx:latest "nginx" 192.168.0.44:80->80/tcp
node-3/stoic_albattani
963841b138d8 nginx:latest "nginx" 192.168.0.43:80->80/tcp
node-2/dreamy_turing
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp
node-1/prickly_engelbart
Finally, Docker Swarm refuses to run another container that requires port 80, because it is not
available on any node in the cluster:
$ docker tcp://<manager_ip:manager_port> run -d -p 80:80 nginx
2014/10/29 00:33:20 Error response from daemon: no resources available to schedule
container
Each container occupies port 80 on its residing node when the container is created and releases the
port when the container is deleted. A container in exited state still owns the port.
If prickly_engelbart on node-1 is stopped but not deleted, trying to start another container on node-
1 that requires port 80 would fail because port 80 is associated with prickly_engelbart. To increase
running instances of nginx, you can either restart prickly_engelbart, or start another container after
deleting prickly_engelbart.
A container running with --net=host differs from the default bridge mode as host mode does not
perform any port binding. Instead, host mode requires that you explicitly expose one or more port
numbers. You expose a port using EXPOSE in the Dockerfile or --expose on the command line.
Swarm makes use of this information in conjunction with the host mode to choose an available node
for a new container.
For example, the following commands start nginx on 3-node cluster.
$ docker tcp://<manager_ip:manager_port> run -d --expose=80 --net=host nginx
640297cb29a7
$ docker tcp://<manager_ip:manager_port> run -d --expose=80 --net=host nginx
7ecf562b1b3f
$ docker tcp://<manager_ip:manager_port> run -d --expose=80 --net=host nginx
09a92f582bc2
Port binding information is not available through the docker ps command because all the containers
were started with the host network.
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
640297cb29a7 nginx:1 "nginx -g 'daemon of Less than a second ago
Up 30 seconds box3/furious_heisenberg
7ecf562b1b3f nginx:1 "nginx -g 'daemon of Less than a second ago
Up 28 seconds box2/ecstatic_meitner
09a92f582bc2 nginx:1 "nginx -g 'daemon of 46 seconds ago
Up 27 seconds box1/mad_goldstine
Swarm refuses the operation when trying to instantiate the 4th container.
However, port binding to a different port, for example 81, is still allowed.
$ docker tcp://<manager_ip:manager_port> run -d -p 81:80 nginx:latest
832f42819adc
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
832f42819adc nginx:1 "nginx -g 'daemon of Less than a second ago
Up Less than a second 443/tcp, 192.168.136.136:81->80/tcp box3/thirsty_hawking
640297cb29a7 nginx:1 "nginx -g 'daemon of 8 seconds ago
Up About a minute box3/furious_heisenberg
7ecf562b1b3f nginx:1 "nginx -g 'daemon of 13 seconds ago
Up About a minute box2/ecstatic_meitner
09a92f582bc2 nginx:1 "nginx -g 'daemon of About a minute ago
Up About a minute box1/mad_goldstine
Filter expressions take the form:
<filter-type>:<key><operator><value>
The <filter-type> is either the affinity or the constraint keyword. It identifies the type of filter you
intend to use.
The <key> is alphanumeric and must start with a letter or underscore. It corresponds to the container
keyword, the node keyword, a default node tag, or a custom metadata label (on a node or container).
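For example, the expressions used earlier in this topic follow this form:
constraint:storage==ssd
affinity:image==redis
affinity:container==87c4376856a8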
Swarm rescheduling
You can set rescheduling policies with Docker Swarm. A rescheduling policy determines what the
Swarm scheduler does for containers when the nodes they are running on fail.
Rescheduling policies
You set the reschedule policy when you start a container. You can do this with
the reschedule environment variable or the com.docker.swarm.reschedule-policies label. If you
don’t specify a policy, the default rescheduling policy is off which means that Swarm does not
restart a container when a node fails.
To set the on-node-failure policy with a reschedule environment variable:
$ docker run -d -e "reschedule:on-node-failure" redis
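A sketch of the equivalent label form (check the exact syntax against your Swarm version):
$ docker run -d -l 'com.docker.swarm.reschedule-policies=["on-node-failure"]' redis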
If, for some reason, the new container fails to start on the new node, the log contains an error
message describing the failure.
The Docker Swarm scheduler features multiple strategies for ranking nodes. The strategy you
choose determines how Swarm computes ranking. When you run a new container, Swarm chooses
to place it on the node with the highest computed ranking for your chosen strategy.
To choose a ranking strategy, pass the --strategy flag and a strategy value to the swarm
manage command. Swarm currently supports these values:
spread
binpack
random
The spread and binpack strategies compute rank according to a node’s available CPU, its RAM, and
the number of containers it has. The random strategy uses no computation. It selects a node at
random and is primarily intended for debugging.
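For example, a sketch of starting a manager with the binpack strategy, using the <discovery>
placeholder convention from the command reference later in this documentation:
$ swarm manage --strategy binpack <discovery>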
Your goal in choosing a strategy is to best optimize your cluster according to your company’s needs.
Under the spread strategy, Swarm optimizes for the node with the least number of containers.
The binpack strategy causes Swarm to optimize for the node which is most packed. A container
occupies resources during its whole life cycle, including the exited state. Keep this in mind when
scheduling containers. For example, the spread strategy only counts containers, regardless of their
state. A node with no active containers but a high number of stopped containers may therefore not
be selected, defeating the purpose of load sharing. To achieve load spreading, you can either remove
stopped containers or start them again. The random strategy, as its name suggests, chooses nodes
at random regardless of their available CPU or RAM.
Using the spread strategy results in containers spread thinly over many machines. The advantage of
this strategy is that if a node goes down you only lose a few containers.
The binpack strategy avoids fragmentation because it leaves room for bigger containers on unused
machines. The strategic advantage of binpack is that you use fewer machines as Swarm tries to
pack as many containers as it can on a node.
If you do not specify a --strategy, Swarm uses spread by default.
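The run commands for the following example are not included in this extract; a sketch consistent
with the names in the output, assuming the db container was started first and landed on node-1:
$ docker tcp://<manager_ip:manager_port> run -d -P --name frontend nginx
963841b138d8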
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:49177->80/tcp node-2/frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute
running 192.168.0.42:49178->3306/tcp node-1/db
The container frontend was started on node-2 because it was the least loaded node. If
two nodes have the same amount of available RAM and CPUs, the spread strategy prefers the node
with the fewest containers.
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago
running 192.168.0.42:49178->3306/tcp node-1/db
$ docker tcp://<manager_ip:manager_port> ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago
running 192.168.0.42:49177->80/tcp node-1/frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute
running 192.168.0.42:49178->3306/tcp node-1/db
The system starts the new frontend container on node-1 because it was the most packed node
already. This allows us to start a container requiring 2G of RAM on node-2.
If two nodes have the same amount of available RAM and CPUs, the binpack strategy prefers the
node with the most containers.
All nodes in a Swarm cluster must bind their Docker daemons to a network port. This has obvious
security implications. These implications are compounded when the network in question is untrusted
such as the internet. To mitigate these risks, Docker Swarm and the Docker Engine daemon support
Transport Layer Security (TLS).
Note: TLS is the successor to SSL (Secure Sockets Layer) and the two terms are often used
interchangeably. Docker uses TLS, and this term is used throughout this article.
The following analogy may be useful. It is common practice that passports are used to verify an
individual’s identity. Passports usually contain a photograph and biometric information that identify
the owner. A passport also lists the country that issued it, as well as valid fromand valid to dates.
Digital certificates are very similar. The text below is an extract from a digital certificate:
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9590646456311914051 (0x8518d2237ad49e43)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=CA, L=Sanfrancisco, O=Docker Inc
Validity
Not Before: Jan 18 09:42:16 2016 GMT
Not After : Jan 15 09:42:16 2026 GMT
Subject: CN=swarm
This certificate identifies a computer called swarm. The certificate is valid between January 2016
and January 2026 and was issued by Docker Inc. based in the state of California in the US.
Just as passports authenticate individuals as they board flights and clear customs, digital certificates
authenticate computers on a network.
Public key infrastructure (PKI) is the combination of technologies, policies, and procedures that work
behind the scenes to enable digital certificates. Some of the technologies, policies and procedures
provided by PKI include:
The Docker Engine daemon must also trust the certificate that the Docker Engine CLI uses. This
trust is usually established by way of a trusted third party. The Docker Engine CLI and Docker
Engine daemon in the diagram below are configured to require TLS authentication.
The trusted third party in this diagram is the Certificate Authority (CA) server. Like the country in the
passport example, a CA creates, signs, issues, and revokes certificates. Trust is established by installing
the CA’s root certificate on the host running the Docker Engine daemon. The Docker Engine CLI
then requests its own certificate from the CA server, which the CA server signs and issues to the
client.
The Docker Engine CLI sends its certificate to the Docker Engine daemon before issuing
commands. The Docker Engine daemon inspects the certificate, and because the Docker Engine
daemon trusts the CA, the Docker Engine daemon automatically trusts any certificates signed by the
CA. Assuming the certificate is in order (the certificate has not expired or been revoked etc.) the
Docker Engine daemon accepts commands from this trusted Docker Engine CLI.
The Docker Engine CLI is simply a client that uses the Docker Engine API to communicate with the
Docker Engine daemon. Any client that uses this Docker Engine API can use TLS. For example,
Docker Engine clients such as ‘Docker Universal Control Plane’ (UCP) have TLS support built-in.
Other, third party products, that use Docker Engine API, can also be configured this way.
These configurations are differentiated by the type of entity acting as the Certificate Authority (CA).
When you use an external 3rd party CA, they create, sign, issue, revoke and otherwise manage your
certificates. They normally charge a fee for these services, but are considered an enterprise-class
scalable solution that provides a high degree of trust.
Internal corporate CA
Many organizations choose to implement their own Certificate Authorities and PKI. Common
examples are using OpenSSL and Microsoft Active Directory. In this case, your company is its own
Certificate Authority with all the work it entails. The benefit is, as your own CA, you have more
control over your PKI.
Running your own CA and PKI requires you to provide all of the services offered by external 3rd
party CAs. These include creating, issuing, revoking, and otherwise managing certificates. Doing all
of this yourself has its own costs and overheads. However, for a large corporation, it still may reduce
costs in comparison to using an external 3rd party service.
Assuming you operate and manage your own internal CAs and PKI properly, an internal, corporate
CA can be a highly scalable and highly secure option.
Self-signed certificates
As the name suggests, self-signed certificates are certificates that are signed with their own private
key rather than a trusted CA. This is a low cost and simple to use option. If you implement and
manage self-signed certificates correctly, they can be better than using no certificates.
Because self-signed certificates lack a full-blown PKI, they do not scale well and lack many of the
advantages offered by the other options. One of their disadvantages is that you cannot revoke self-
signed certificates. Due to this, and other limitations, self-signed certificates are considered the least
secure of the three options. Self-signed certificates are not recommended for public facing
production workloads exposed to untrusted networks.
In this procedure you create a two-node swarm cluster, a Docker Engine CLI, a swarm manager,
and a Certificate Authority as shown below. All the Docker Engine hosts (client, swarm, node1,
and node2) have a copy of the CA’s certificate as well as their own key-pair signed by the CA.
This procedure includes the following steps:
Make sure that you have SSH access to all 5 servers and that they can communicate with each
other using DNS name resolution. In particular:
Open TCP port 2376 between the swarm manager and swarm nodes
Open TCP port 3376 between the Docker Engine client and the swarm manager
You can choose different ports if these are already in use. This example assumes you use these
ports though.
Each server must run an operating system compatible with Docker Engine. For simplicity, the steps
that follow assume all servers are running Ubuntu 14.04 LTS.
1. Log on to the terminal of your CA server and elevate to root.
2. $ sudo su
3. Create a private key called ca-priv-key.pem for the CA:
4. # openssl genrsa -out ca-priv-key.pem 2048
5. Generating RSA private key, 2048 bit long modulus
6. ...........................................................+++
7. .....+++
8. e is 65537 (0x10001)
The public key is based on the private key created in the previous step.
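The command that generates the CA's public certificate (ca.pem) is not shown here; a typical
OpenSSL invocation, with the validity period and config path as assumptions, is:
# openssl req -config /usr/lib/ssl/openssl.cnf -new -key ca-priv-key.pem -x509 -days 1825 -out ca.pem
OpenSSL prompts for the certificate's subject details when you run this command.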
You have now configured a CA server with a public and private keypair. You can inspect the
contents of each key. To inspect the private key:
The following command shows the partial contents of the CA’s public key.
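The inspection commands themselves are not included in this extract; standard OpenSSL
equivalents would be the following, where the first inspects the private key and the second the
public certificate:
# openssl rsa -in ca-priv-key.pem -noout -text
# openssl x509 -in ca.pem -noout -text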
Later, you use this certificate to sign keys for other servers in the infrastructure.
Key Description
ca-priv-key.pem: The CA's private key and must be kept secure. It is used later to sign new keys
for the other nodes in the environment. Together with the ca.pem file, this makes up the CA's key
pair.
ca.pem: The CA's public key (also called certificate). This is installed on all nodes in the
environment so that all nodes trust certificates signed by the CA. Together with the
ca-priv-key.pem file, this makes up the CA's key pair.
NODE_NAME-priv-key.pem: A private key signed by the CA. The node uses this key to authenticate
itself with remote Docker Engines. Together with the NODE_NAME-cert.pem file, this makes up a
node's key pair.
NODE_NAME-cert.pem: A certificate signed by the CA. This is not used in this example. Together
with the NODE_NAME-priv-key.pem file, this makes up a node's key pair.
The commands below show how to create keys for all of your nodes. You perform this procedure in
a working directory located on your CA server.
2. $ sudo su
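The intermediate steps, which generate the swarm manager's private key (swarm-priv-key.pem), are
not included in this extract; a sketch following the same pattern as the CA key above:
# openssl genrsa -out swarm-priv-key.pem 2048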
9. Generate a certificate signing request (CSR) swarm.csr using the private key you created in
the previous step.
10. # openssl req -subj "/CN=swarm" -new -key swarm-priv-key.pem -out swarm.csr
Remember, this is only for demonstration purposes. The process to create a CSR is slightly
different in real-world production environments.
11. Create the certificate swarm-cert.pem based on the CSR created in the previous step.
12. # openssl x509 -req -days 1825 -in swarm.csr -CA ca.pem -CAkey ca-priv-key.pem
-CAcreateserial -out swarm-cert.pem -extensions v3_req -extfile
/usr/lib/ssl/openssl.cnf
13. <snip>
14. # openssl rsa -in swarm-priv-key.pem -out swarm-priv-key.pem
16. Verify that your working directory contains the following files:
17. # ls -l
18. total 64
19. -rw-r--r-- 1 root root 1679 Jan 16 18:27 ca-priv-key.pem
20. -rw-r--r-- 1 root root 1229 Jan 16 18:28 ca.pem
21. -rw-r--r-- 1 root root 17 Jan 18 09:56 ca.srl
22. -rw-r--r-- 1 root root 1086 Jan 18 09:56 client-cert.pem
23. -rw-r--r-- 1 root root 887 Jan 18 09:55 client.csr
24. -rw-r--r-- 1 root root 1679 Jan 18 09:56 client-priv-key.pem
25. -rw-r--r-- 1 root root 1082 Jan 18 09:44 node1-cert.pem
26. -rw-r--r-- 1 root root 887 Jan 18 09:43 node1.csr
27. -rw-r--r-- 1 root root 1675 Jan 18 09:44 node1-priv-key.pem
28. -rw-r--r-- 1 root root 1082 Jan 18 09:49 node2-cert.pem
29. -rw-r--r-- 1 root root 887 Jan 18 09:49 node2.csr
30. -rw-r--r-- 1 root root 1675 Jan 18 09:49 node2-priv-key.pem
31. -rw-r--r-- 1 root root 1082 Jan 18 09:42 swarm-cert.pem
32. -rw-r--r-- 1 root root 887 Jan 18 09:41 swarm.csr
33. -rw-r--r-- 1 root root 1679 Jan 18 09:42 swarm-priv-key.pem
You can inspect the contents of each of the keys. To inspect a private key:
The following command shows the partial contents of the swarm manager's public swarm-
cert.pem key.
<output truncated>
The procedure below shows you how to copy these files from the CA server to each server
using scp. As part of the copy procedure, rename each file as follows on each node:
Original name Copied name
ca.pem ca.pem
<server>-cert.pem cert.pem
<server>-priv-key.pem key.pem
1. Logon to the terminal of your CA server and elevate to root.
2. $ sudo su
3. Create a ~/.certs directory on the swarm manager. Here we assume the user account is
ubuntu.
4. $ ssh ubuntu@swarm 'mkdir -p /home/ubuntu/.certs'
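The copy commands themselves are not shown in this extract; a sketch for the swarm manager,
following the rename table above (repeat with the appropriate files for each node):
$ scp ./ca.pem ubuntu@swarm:/home/ubuntu/.certs/ca.pem
$ scp ./swarm-cert.pem ubuntu@swarm:/home/ubuntu/.certs/cert.pem
$ scp ./swarm-priv-key.pem ubuntu@swarm:/home/ubuntu/.certs/key.pem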
Note: You may need to provide authentication for the scp commands to work. For example,
AWS EC2 instances use certificate-based authentication. To copy the files to an EC2
instance associated with a public key called nigel.pem, modify the scp command as
follows: scp -i /path/to/nigel.pem ./ca.pem ubuntu@swarm:/home/ubuntu/.certs/ca.pem .
When the copying is complete, each machine should have the following keys.
Each node in your infrastructure should have the following files in
the/home/ubuntu/.certs/ directory:
# ls -l /home/ubuntu/.certs/
total 16
-rw-r--r-- 1 ubuntu ubuntu 1229 Jan 18 10:03 ca.pem
-rw-r--r-- 1 ubuntu ubuntu 1082 Jan 18 10:06 cert.pem
-rw-r--r-- 1 ubuntu ubuntu 1679 Jan 18 10:06 key.pem
3. Add the following configuration keys to the /etc/docker/daemon.json file. If the file does not yet
exist, create it.
4. {
5. "hosts": ["tcp://0.0.0.0:2376"],
6. "tlsverify": "true",
7. "tlscacert": "/home/ubuntu/.certs/ca.pem",
8. "tlscert": "/home/ubuntu/.certs/cert.pem",
9. "tlskey": "/home/ubuntu/.certs/key.pem"
10. }
Restart Docker for the changes to take effect. If the file is not valid JSON, Docker fails to
start and emits an error.
2. Create the cluster and export its unique ID to the TOKEN environment variable.
3. $ export TOKEN=$(docker run --rm swarm create)
4. Unable to find image 'swarm:latest' locally
5. latest: Pulling from library/swarm
6. d681c900c6e3: Pulling fs layer
7. <snip>
8. 986340ab62f0: Pull complete
9. a9975e2cc0a3: Pull complete
10. Digest:
sha256:c21fd414b0488637b1f05f13a59b032a3f9da5d818d31da1a4ca98a84c0c781b
11. Status: Downloaded newer image for swarm:latest
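The command itself is not reproduced in this extract; a sketch consistent with the description that
follows (the certificate volume mount path is an assumption):
$ docker run -d -p 3376:3376 -v /home/ubuntu/.certs:/certs:ro swarm manage \
    --tlsverify \
    --tlscacert=/certs/ca.pem \
    --tlscert=/certs/cert.pem \
    --tlskey=/certs/key.pem \
    --host=0.0.0.0:3376 \
    token://$TOKEN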
The command above launches a new container based on the swarm image and it maps
port 3376 on the server to port 3376 inside the container. This mapping ensures that Docker
Engine commands sent to the host on port 3376 are passed on to port 3376 inside the
container. The container runs the swarm manage process with the --tlsverify, --
tlscacert, --tlscert and --tlskey options specified. These options force TLS verification
and specify the location of the swarm manager’s TLS keys.
3. Run a docker ps command to verify that your swarm manager container is up and running.
4. $ docker ps
5. CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
6. 035dbf57b26e swarm "/swarm manage --tlsv" 7 seconds ago
Up 7 seconds 2375/tcp, 0.0.0.0:3376->3376/tcp compassionate_lovelace
When issuing the command, you must pass it the location of the client's certificates.
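A sketch of such a command, assuming the client certificates were copied to /home/ubuntu/.certs as
described above:
$ docker --tlsverify \
    --tlscacert=/home/ubuntu/.certs/ca.pem \
    --tlscert=/home/ubuntu/.certs/cert.pem \
    --tlskey=/home/ubuntu/.certs/key.pem \
    -H swarm:3376 version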
Server:
Version: swarm/1.0.1
API version: 1.21
Go version: go1.5.2
Git commit: 744e3a3
Built:
OS/Arch: linux/amd64
The output above shows the Server version as “swarm/1.0.1”. This means that the command
was successfully issued against the swarm manager.
3. Verify that the same command does not work without TLS.
The output above shows that the command was rejected by the server. This is because the
server (swarm manager) is configured to only accept connections from authenticated clients
using TLS.
Variable Description
DOCKER_HOST Sets the Docker host and TCP port to send all Engine commands to.
DOCKER_TLS_VERIFY Tells the Docker client to use TLS verification for the connection.
DOCKER_CERT_PATH Specifies the location of the TLS certificates to use.
9. export DOCKER_HOST=tcp://swarm:3376
10. export DOCKER_TLS_VERIFY=1
11. export DOCKER_CERT_PATH=/home/ubuntu/.docker/
15. Verify that the procedure worked by issuing a docker version command
16. $ docker version
17. Client:
18. Version: 1.9.1
19. API version: 1.21
20. Go version: go1.4.2
21. Git commit: a34a1d5
22. Built: Fri Nov 20 13:12:04 UTC 2015
23. OS/Arch: linux/amd64
24.
25. Server:
26. Version: swarm/1.0.1
27. API version: 1.21
28. Go version: go1.5.2
29. Git commit: 744e3a3
30. Built:
31. OS/Arch: linux/amd64
The server portion of the output above shows that your Docker client is issuing
commands to the swarm manager and using TLS.
The create command uses Docker Hub’s hosted discovery backend to create a unique discovery
token for your cluster. For example:
$ docker run --rm swarm create
86222732d62b6868d441d430aee4f055
Later, when you use manage or join to create Swarm managers and nodes, you use the discovery
token in the <discovery> argument. For instance, token://86222732d62b6868d441d430aee4f055 . The
discovery backend registers each new Swarm manager and node that uses the token as a member
of your cluster.
Warning: Docker Hub’s hosted discovery backend is not recommended for production use. It’s
intended only for testing/development.
To see a list of arguments and options for a specific Swarm command, enter:
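For example, the argument and option listing below corresponds to the list command; a sketch of
the command that produces it:
$ docker run --rm swarm list --help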
Arguments:
<discovery> discovery service to use [$SWARM_DISCOVERY]
* token://<token>
* consul://<ip>/<path>
* etcd://<ip1>,<ip2>/<path>
* file://path/to/file
* zk://<ip1>,<ip2>/<path>
* [nodes://]<ip1>,<ip2>
Options:
--timeout "10s" timeout
period
--discovery-opt [--discovery-opt option --discovery-opt option] discovery options
Prerequisite: Before using join, establish a discovery backend as described in this discovery topic.
The join command creates a Swarm node whose purpose is to run containers on behalf of the
cluster. A typical cluster has multiple Swarm nodes.
For example, to create a Swarm node in a high-availability cluster with other managers, enter:
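A sketch of such a command, assuming the Consul backend used in the manage example later in this
reference (the node's own address is a placeholder):
$ swarm join --advertise=<node_ip>:2375 consul://172.30.0.165:8500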
Arguments
The join command has only one argument:
<discovery> — Discovery backend
Before you create a Swarm node, create a discovery token or set up a discovery backend for your
cluster.
When you create the Swarm node, use the <discovery> argument to specify one of the following
discovery backends:
token://<token>
consul://<ip1>/<path>
etcd://<ip1>,<ip2>,<ip3>/<path>
file://<path/to/file>
zk://<ip1>,<ip2>/<path>
[nodes://]<iprange>,<iprange>
Where:
ip1, ip2, ip3 are each the IP address and port numbers of a discovery backend node.
path (optional) is a path to a key-value store on the discovery backend. When you use a
single backend to service multiple clusters, you use paths to maintain separate key-value
stores for each cluster.
path/to/file is the path to a file that contains a static list of the Swarm managers and nodes
that are members of the cluster.
iprange is an IP address or a range of IP addresses followed by a port number.
For example:
For more information and examples, see the Docker Swarm Discovery topic.
Options
The join command has the following options:
--advertise or --addr — Advertise the Docker Engine’s IP and
port number
Use --advertise <ip>:<port> or --addr <ip>:<port> to advertise the IP address and port number
of the Docker Engine. For example, --advertise 172.30.0.161:4000. Swarm managers MUST be
able to reach this Swarm node at this address.
The environment variable for --advertise is $SWARM_ADVERTISE.
--heartbeat — Period between each heartbeat
Use --heartbeat "<interval>s" to specify the interval, in seconds, between heartbeats the node
sends to the primary manager. These heartbeats indicate that the node is healthy and reachable. By
default, the interval is 60 seconds.
--ttl — Sets the expiration of an ephemeral node
Use --ttl "<interval>s" to specify the time-to-live (TTL) interval, in seconds, of an ephemeral
node. The default interval is 180s.
--delay — Add a random delay in [0s,delay] to avoid
synchronized registration
Use --delay "<interval>s" to specify the maximum interval for a random delay, in seconds, before
the node registers with the discovery backend. If you deploy a large number of nodes
simultaneously, the random delay spreads registrations out over the interval and avoids saturating
the discovery backend.
--discovery-opt — Discovery options
Use --discovery-opt <value> to set discovery options, such as paths to the TLS files (the CA's public
key certificate, the certificate, and the private key) for the distributed K/V store on a Consul or etcd
discovery backend. You can enter multiple discovery options. For example:
--discovery-opt kv.cacertfile=/path/to/mycacert.pem \
--discovery-opt kv.certfile=/path/to/mycert.pem \
--discovery-opt kv.keyfile=/path/to/mykey.pem \
Arguments
The list command has only one argument:
When you use the list command, use the <discovery> argument to specify one of the following
discovery backends:
token://<token>
consul://<ip1>/<path>
etcd://<ip1>,<ip2>,<ip3>/<path>
file://<path/to/file>
zk://<ip1>,<ip2>/<path>
[nodes://]<iprange>,<iprange>
Where:
ip1, ip2, ip3 are each the IP address and port numbers of a discovery backend node.
path (optional) is a path to a key-value store on the discovery backend. When you use a
single backend to service multiple clusters, you use paths to maintain separate key-value
stores for each cluster.
path/to/file is the path to a file that contains a static list of the Swarm managers and nodes
that are members of the cluster.
iprange is an IP address or a range of IP addresses followed by a port number.
For example:
For more information and examples, see the Docker Swarm Discovery topic.
Options
The list command has the following options:
Use --timeout "<interval>s" to specify the timeout period, in seconds, to wait for the discovery
backend to return the list. The default interval is 10s.
Use --discovery-opt <value> to set discovery options, such as paths to the TLS files (the CA's public
key certificate, the certificate, and the private key) for the distributed K/V store on a Consul or etcd
discovery backend. You can enter multiple discovery options. For example:
--discovery-opt kv.cacertfile=/path/to/mycacert.pem \
--discovery-opt kv.certfile=/path/to/mycert.pem \
--discovery-opt kv.keyfile=/path/to/mykey.pem \
Prerequisite: Before using manage to create a Swarm manager, establish a discovery backend as
described in this discovery topic.
The manage command creates a Swarm manager whose purpose is to receive commands on behalf
of the cluster and assign containers to Swarm nodes. You can create multiple Swarm managers as
part of a high-availability cluster.
For example, you can use manage to create a Swarm manager in a high-availability cluster with other
managers:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise
172.30.0.161:4000 consul://172.30.0.165:8500
Or, for example, you can use it to create a Swarm manager that uses Transport Layer Security
(TLS) to authenticate the Docker Client and Swarm nodes:
Argument
The manage command has only one argument:
<discovery> — Discovery backend
Before you create a Swarm manager, create a discovery token or set up a discovery backend for
your cluster.
When you create the swarm node, use the <discovery> argument to specify one of the following
discovery backends:
token://<token>
consul://<ip1>/<path>
etcd://<ip1>,<ip2>,<ip3>/<path>
file://<path/to/file>
zk://<ip1>,<ip2>/<path>
[nodes://]<iprange>,<iprange>
Where:
Warning: Docker Hub’s hosted discovery backend is not recommended for production use.
It’s intended only for testing/development.
ip1, ip2, ip3 are each the IP address and port numbers of a discovery backend node.
path (optional) is a path to a key-value store on the discovery backend. When you use a
single backend to service multiple clusters, you use paths to maintain separate key-value
stores for each cluster.
path/to/file is the path to a file that contains a static list of the Swarm managers and nodes
that are members of the cluster.
iprange is an IP address or a range of IP addresses followed by a port number.
For more information and examples, see the Docker Swarm Discovery topic.
Options
The manage command has the following options:
--strategy — Scheduler placement strategy
Use --strategy "<value>" to tell the Docker Swarm scheduler which placement strategy to use.
Where <value> is:
spread — Assign each container to the Swarm node with the most available resources.
binpack - Assign containers to one Swarm node until it is full before assigning them to
another one.
random - Assign each container to a random Swarm node.
--filter, -f: Scheduler filter
Use --filter "<value>" to tell the scheduler which node filter to apply. Where <value> is one of:
health — Use nodes that are running and communicating with the discovery backend.
port — For containers that have a static port mapping, use nodes whose corresponding port
number is available and not occupied by another container or process.
dependency — For containers that have a declared dependency, use nodes that already have
a container with the same dependency.
affinity — For containers that have a declared affinity, use nodes that already have a
container with the same affinity.
constraint — For containers that have a declared constraint, use nodes that already have a
container with the same constraint.
For more information about using the Mesos driver, see Using Docker Swarm and Mesos.
For example, you use swarm with the manage subcommand to create a Swarm manager in a high-
availability cluster with other managers:
$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise
172.30.0.161:4000 consul://172.30.0.165:8500
Options
The swarm command has the following options:
--debug — Enable debug mode. Display messages that you can use to debug a Swarm
node. For example:
time="2016-02-17T17:57:40Z" level=fatal msg="discovery required to join a
cluster. See 'swarm join --help'."
Docker Engine provides a REST API for making calls to the Engine daemon. Docker Swarm allows
a caller to make the same calls to a cluster of Engine daemons. While the API calls are the same,
the API response status codes do differ. This document explains the differences.
Four HTTP methods are covered: GET, POST, PUT, and DELETE.
The comparison is based on api v1.22, and all Docker Status Codes in api v1.22 are referenced
from docker-remote-api-v1.22.
GET
Route: /_ping
Handler: ping
Swarm Status Code Docker Status Code
200 200
500
Route: /events
Handler: getEvents
200 200
400
500
Route: /info
Handler: getInfo
200 200
500
Route: /version
Handler: getVersion
200 200
500
Route: /images/json
Handler: getImagesJSON
200 200
500 500
Route: /images/viz
Handler: notImplementedHandler
Route: /images/search
Handler: proxyRandom
200 200
500 500
Route: /images/get
Handler: getImages
200 200
404
500 500
Route: /images/{name:.*}/get
Handler: proxyImageGet
200 200
404
500 500
Route: /images/{name:.*}/history
Handler: proxyImage
200 200
404 404
500 500
Route: /images/{name:.*}/json
Handler: proxyImage
200 200
404 404
500 500
Route: /containers/ps
Handler: getContainersJSON
Route: /containers/json
Handler: getContainersJSON
200 200
400
404
500 500
Route: /containers/{name:.*}/archive
Handler: proxyContainer
200 200
400 400
404 404
500 500
Route: /containers/{name:.*}/export
Handler: proxyContainer
200 200
404 404
500 500
Route: /containers/{name:.*}/changes
Handler: proxyContainer
200 200
404 404
500 500
Route: /containers/{name:.*}/json
Handler: getContainerJSON
200 200
404 404
500 500
Route: /containers/{name:.*}/top
Handler: proxyContainer
200 200
404 404
500 500
Route: /containers/{name:.*}/logs
Handler: proxyContainer
101 101
200 200
404 404
500 500
Route: /containers/{name:.*}/stats
Handler: proxyContainer
200 200
404 404
500 500
Route: /containers/{name:.*}/attach/ws
Handler: proxyHijack
200 200
400 400
404 404
500 500
Route: /exec/{execid:.*}/json
Handler: proxyContainer
200 200
404 404
500 500
Route: /networks
Handler: getNetworks
200 200
400
500 500
Route: /networks/{networkid:.*}
Handler: getNetwork
200 200
404 404
Route: /volumes
Handler: getVolumes
200 200
500
Route: /volumes/{volumename:.*}
Handler: getVolume
200 200
404 404
500
POST
Route: /auth
Handler: proxyRandom
200 200
204 204
500 500
Route: /commit
Handler: postCommit
201 201
404 404
500 500
Route: /build
Handler: postBuild
200 200
500 500
Route: /images/create
Handler: postImagesCreate
200 200
500 500
Route: /images/load
Handler: postImagesLoad
200
201
500
Route: /images/{name:.*}/push
Handler: proxyImagePush
200 200
404 404
500 500
Route: /images/{name:.*}/tag
Handler: postTagImage
200
201
400
404 404
409
500 500
Route: /containers/create
Handler: postContainersCreate
201 201
400
404
406
409
500 500
Route: /containers/{name:.*}/kill
Handler: proxyContainerAndForceRefresh
204 204
404 404
500 500
Route: /containers/{name:.*}/pause
Handler: proxyContainerAndForceRefresh
204 204
404 404
500 500
Route: /containers/{name:.*}/unpause
Handler: proxyContainerAndForceRefresh
204 204
404 404
500 500
Route: /containers/{name:.*}/rename
Handler: postRenameContainer
200
204
404 404
409 409
500 500
Route: /containers/{name:.*}/restart
Handler: proxyContainerAndForceRefresh
204 204
404 404
500 500
Route: /containers/{name:.*}/start
Handler: postContainersStart
204 204
304
404 404
500 500
Route: /containers/{name:.*}/stop
Handler: proxyContainerAndForceRefresh
204 204
304 304
404 404
500 500
Route: /containers/{name:.*}/update
Handler: proxyContainerAndForceRefresh
200 200
400 400
404 404
500 500
Route: /containers/{name:.*}/wait
Handler: proxyContainerAndForceRefresh
204 204
404 404
500 500
Route: /containers/{name:.*}/resize
Handler: proxyContainer
200 200
404 404
500 500
Route: /containers/{name:.*}/attach
Handler: proxyHijack
101 101
200 200
400 400
404 404
500 500
Route: /containers/{name:.*}/copy
Handler: proxyContainer
200 200
404 404
500 500
Route: /containers/{name:.*}/exec
Handler: postContainersExec
201 201
404 404
409
500 500
Route: /exec/{execid:.*}/start
Handler: postExecStart
200 200
404 404
409 409
500
Route: /exec/{execid:.*}/resize
Handler: proxyContainer
201 201
404 404
500
Route: /networks/create
Handler: postNetworksCreate
200
201
400
404
500 500
Route: /networks/{networkid:.*}/connect
Handler: proxyNetworkConnect
200 200
404 404
500 500
Route: /networks/{networkid:.*}/disconnect
Handler: proxyNetworkDisconnect
200 200
404 404
500 500
Route: /volumes/create
Handler: postVolumesCreate
200
201
400
500 500
PUT
Route: /containers/{name:.*}/archive"
Handler: proxyContainer
200 200
400 400
403 403
404 404
500 500
DELETE
Route: /containers/{name:.*}
Handler: deleteContainers
200
204
400
404 404
500 500
Route: /images/{name:.*}
Handler: deleteImages
200 200
404 404
409
500 500
Route: /networks/{networkid:.*}
Handler: deleteNetworks
200
204
404 404
500 500
Route: /volumes/{name:.*}"
Handler: deleteVolumes
204 204
404 404
409
500 500
The Docker Swarm API is mostly compatible with the Docker Remote API. This document is an
overview of the differences between the Swarm API and the Docker Engine API.
Missing endpoints
Some endpoints have not yet been implemented and return a 404 error.
"Node": {
GET "/containers/{name:.*}/json" "Id":
"ODAI:IC6Q:MSBL:TPB5:HIEE:6IKC:VCAM:QRNH:PRGX:ERZT
:OK46:PMFX",
"Ip": "0.0.0.0",
"Addr": "http://0.0.0.0:4243",
"Name": "vagrant-ubuntu-saucy-64"
}
Registry authentication
During container create calls, the Swarm API optionally accepts an X-Registry-Auth header. If
provided, this header is passed down to the engine if the image must be pulled to complete the
create operation.
The following two examples demonstrate how to utilize this using the existing Docker CLI.
# obtain a JSON token, and extract the "token" value using 'jq'
TOKEN=$(curl -s -u "${REPO_USER}:${PASSWORD}"
"${AUTH_URL}?scope=repository:${REPO}:pull&service=registry.docker.io" | jq -r
".token")
HEADER=$(echo "{\"registrytoken\":\"${TOKEN}\"}"|base64 -w 0 )
echo HEADER=$HEADER
You can now authenticate to the registry, and run private images on Swarm:
REPO_USER=yourusername
read -s PASSWORD
HEADER=$(echo "{\"username\":\"${REPO_USER}\",\"password\":\"${PASSWORD}\"}" | base64
-w 0 )
unset PASSWORD
echo HEADER=$HEADER
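As a sketch of how the computed HEADER can be used, the request below passes it on a create call against a Swarm manager; the manager address, port, and image name are illustrative assumptions, not part of the original examples:
curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "X-Registry-Auth: ${HEADER}" \
  -d '{"Image": "youruser/yourprivateimage:latest"}' \
  http://swarm-manager.example.com:4000/containers/create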
This page provides the usage information for the docker-compose command.
Usage:
docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
-f, --file FILE Specify an alternate compose file
(default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name
(default: directory name)
--verbose Show more output
--log-level LEVEL Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--no-ansi Do not print ANSI control characters
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to
Commands:
build Build or rebuild services
bundle Generate a Docker bundle from the Compose file
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
images List images
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pull service images
push Push service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
You can use the Docker Compose binary, docker-compose [-f <arg>...] [options] [COMMAND]
[ARGS...], to build and manage multiple services in Docker containers.
If the docker-compose.admin.yml also specifies this same service, any matching fields override those
in the previous file, and new values are added to the webapp service configuration.
webapp:
build: .
environment:
- DEBUG=1
Use a -f with - (dash) as the filename to read the configuration from stdin. When stdin is used, all
paths in the configuration are relative to the current working directory.
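For example, a sketch combining a base file with an override, and reading a file from stdin (the admin override file name follows the example above):
docker-compose -f docker-compose.yml -f docker-compose.admin.yml up -d
# read the configuration from stdin; paths resolve relative to the current directory
docker-compose -f - up -d < docker-compose.yml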
The -f flag is optional. If you don’t provide this flag on the command line, Compose traverses the
working directory and its parent directories looking for a docker-compose.yml and a
docker-compose.override.yml file. You must supply at least the docker-compose.yml file. If both files are
present on the same directory level, Compose combines the two files into a single configuration.
The configuration in the docker-compose.override.yml file is applied over and in addition to the
values in the docker-compose.yml file.
Several environment variables are available for you to configure the Docker Compose command-line
behaviour.
Variables starting with DOCKER_ are the same as those used to configure the Docker command-line
client. If you’re using docker-machine, then the eval "$(docker-machine env my-docker-
vm)" command should set them to their correct values. (In this example, my-docker-vm is the name of
a machine you created.)
Note: Some of these variables can also be provided using an environment file.
COMPOSE_PROJECT_NAME
Sets the project name. This value is prepended along with the service name to the container on start
up. For example, if your project name is myapp and it includes two services db and web, then Compose
starts containers named myapp_db_1 and myapp_web_1 respectively.
Setting this is optional. If you do not set this, the COMPOSE_PROJECT_NAME defaults to the basename of
the project directory. See also the -p command-line option.
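A minimal sketch, assuming the myapp project with db and web services described above:
export COMPOSE_PROJECT_NAME=myapp
docker-compose up -d    # creates containers named myapp_db_1 and myapp_web_1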
COMPOSE_FILE
Specify the path to a Compose file. If not provided, Compose looks for a file named docker-compose.yml
in the current directory and then each parent directory in succession until a file by that
name is found.
This variable supports multiple Compose files separated by a path separator (on Linux and macOS
the path separator is :, on Windows it is ;). For example:
COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml. The path separator can also be
customized using COMPOSE_PATH_SEPARATOR.
See also the -f command-line option.
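For instance, on Linux or macOS (the file names are illustrative):
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
docker-compose config --services    # resolves the combined configuration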
COMPOSE_API_VERSION
The Docker API only supports requests from clients which report a specific version. If you receive
a client and server don't have same version error using docker-compose, you can workaround
this error by setting this environment variable. Set the version value to match the server version.
Setting this variable is intended as a workaround for situations where you need to run temporarily
with a mismatch between the client and server version. For example, if you can upgrade the client
but need to wait to upgrade the server.
Running with this variable set and a known mismatch does prevent some Docker features from
working properly. The exact features that fail would depend on the Docker client and server
versions. For this reason, running with this variable set is only intended as a workaround and it is not
officially supported.
If you run into problems running with this set, resolve the mismatch through upgrade and remove
this setting to see if your problems resolve before notifying support.
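A sketch of the workaround; the version number is only an example and should match your daemon:
export COMPOSE_API_VERSION=1.24
docker-compose up -d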
DOCKER_HOST
Sets the URL of the docker daemon. As with the Docker client, defaults
to unix:///var/run/docker.sock.
DOCKER_TLS_VERIFY
When set to anything other than an empty string, enables TLS communication with
the docker daemon.
DOCKER_CERT_PATH
Configures the path to the ca.pem, cert.pem, and key.pem files used for TLS verification. Defaults
to ~/.docker.
COMPOSE_HTTP_TIMEOUT
Configures the time (in seconds) a request to the Docker daemon is allowed to hang before
Compose considers it failed. Defaults to 60 seconds.
COMPOSE_TLS_VERSION
Configure which TLS version is used for TLS communication with the docker daemon. Defaults
to TLSv1. Supported values are: TLSv1, TLSv1_1, TLSv1_2.
COMPOSE_CONVERT_WINDOWS_PATHS
Enable path conversion from Windows-style to Unix-style in volume definitions. Users of Docker
Machine and Docker Toolbox on Windows should always set this. Defaults to 0. Supported
values: true or 1 to enable, false or 0 to disable.
COMPOSE_PATH_SEPARATOR
If set, the value of the COMPOSE_FILE environment variable is separated using this character as path
separator.
COMPOSE_FORCE_WINDOWS_HOST
If set, volume declarations using the short syntax are parsed assuming the host path is a Windows
path, even if Compose is running on a UNIX-based system. Supported values: true or 1 to
enable, false or 0 to disable.
COMPOSE_IGNORE_ORPHANS
If set, Compose doesn’t try to detect orphaned containers for the project. Supported
values: true or 1 to enable, false or 0 to disable.
COMPOSE_PARALLEL_LIMIT
Sets a limit for the number of operations Compose can execute in parallel. The default value is 64,
and may not be set lower than 2.
COMPOSE_INTERACTIVE_NO_CLI
If set, Compose doesn’t attempt to use the Docker CLI for interactive run and exec operations. This
option is not available on Windows, where the CLI is required for the aforementioned operations.
Supported: true or 1 to enable, false or 0 to disable.
Command-line completion
Compose comes with command completion for the bash and zsh shell.
Mac
Install via Homebrew
After the installation, Brew displays the installation path. Make sure to place the completion
script in the path. For example, when running this command on macOS 10.13.2, place the completion
script in /usr/local/etc/bash_completion.d/:
sudo curl -L https://raw.githubusercontent.com/docker/compose/1.24.1/contrib/completion/bash/docker-compose -o /usr/local/etc/bash_completion.d/docker-compose
You can source your ~/.bash_profile or launch a new terminal to utilize completion.
Zsh
Make sure you have installed oh-my-zsh on your computer.
Add docker and docker-compose to the plugins list in ~/.zshrc to run autocompletion within the oh-
my-zsh shell. In the following example, ... represent other Zsh plugins you may have installed.
plugins=(... docker docker-compose)
Then reload your shell:
exec $SHELL -l
Available completions
Depending on what you typed on the command line so far, it completes:
docker-compose build
Options:
--compress Compress the build context using gzip.
--force-rm Always remove intermediate containers.
--no-cache Do not use cache when building the image.
--pull Always attempt to pull a newer version of the image.
-m, --memory MEM Sets memory limit for the build container.
--build-arg key=val Set build-time variables for services.
--parallel Build images in parallel.
Services are built once and then tagged, by default as project_service. For
example, composetest_db. If the Compose file specifies an image name, the image is tagged with
that name, substituting any variables beforehand. See variable substitution.
If you change a service’s Dockerfile or the contents of its build directory, run docker-compose
build to rebuild it.
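For example, a sketch that rebuilds a single hypothetical web service without the cache, passing a build argument in the key=val form shown above:
docker-compose build --no-cache --build-arg buildno=2 web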
docker-compose bundle
Options:
--push-images Automatically push images for any services
which have a `build` option specified.
Images must have digests stored, which requires interaction with a Docker registry. If digests aren’t
stored for all images, you can fetch them with docker-compose pull or docker-compose push. To
push images automatically when bundling, pass --push-images. Only services with a build option
specified have their images pushed.
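A minimal sketch of a typical sequence, assuming digests are not yet stored locally:
docker-compose pull              # fetch images so their digests are available
docker-compose bundle --push-images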
docker-compose config
Options:
--resolve-image-digests Pin image tags to digests.
-q, --quiet Only validate the configuration, don't print anything.
--services Print the service names, one per line.
--volumes Print the volume names, one per line.
--hash="*" Print the service config hash, one per line.
Set "service1,service2" for a list of specified services
or use the wildcard symbol to display all services.
docker-compose create
Options:
--force-recreate Recreate containers even if their configuration and
image haven't changed. Incompatible with --no-recreate.
--no-recreate If containers already exist, don't recreate them.
Incompatible with --force-recreate.
--no-build Don't build an image, even if it's missing.
--build Build images before creating containers.
docker-compose down
Options:
--rmi type Remove images. Type must be one of:
'all': Remove all images used by any service.
'local': Remove only images that don't have a
custom tag set by the `image` field.
-v, --volumes Remove named volumes declared in the `volumes`
section of the Compose file and anonymous volumes
attached to containers.
--remove-orphans Remove containers for services not defined in the
Compose file
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
Stops containers and removes containers, networks, volumes, and images created by up.
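For instance, a sketch that also removes locally built images, named and anonymous volumes, and orphaned containers:
docker-compose down --rmi local --volumes --remove-orphans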
docker-compose events
Options:
--json Output events as a stream of json objects
With the --json flag, a json object is printed one per line with the format:
{
"time": "2015-11-20T18:01:03.615550",
"type": "container",
"action": "create",
"id": "213cf7...5fc39a",
"service": "web",
"attributes": {
"name": "application_web_1",
"image": "alpine:edge"
}
}
docker-compose exec
Options:
-d, --detach Detached mode: Run command in the background.
--privileged Give extended privileges to the process.
-u, --user USER Run the command as this user.
-T Disable pseudo-tty allocation. By default `docker-compose exec`
allocates a TTY.
--index=index index of the container if there are multiple
instances of a service [default: 1]
-e, --env KEY=VAL Set environment variables (can be used multiple times,
not supported in API < 1.25)
-w, --workdir DIR Path to workdir directory for this command.
This is the equivalent of docker exec. With this subcommand, you can run arbitrary commands in
your services. By default, commands allocate a TTY, so you can use a command such
as docker-compose exec web sh to get an interactive prompt.
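A sketch using the options listed above; the web service name and the working directory are placeholders:
docker-compose exec -T -e DEBUG=1 -w /app web env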
docker-compose help
Get help on a command.
docker-compose kill
Options:
-s SIGNAL SIGNAL to send to the container.
Default signal is SIGKILL.
Forces running containers to stop by sending a SIGKILL signal. Optionally the signal can be passed,
for example:
docker-compose kill -s SIGINT
docker-compose logs
Options:
--no-color Produce monochrome output.
-f, --follow Follow log output.
-t, --timestamps Show timestamps.
--tail="all" Number of lines to show from the end of the logs
for each container.
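For instance, a sketch that follows the most recent output of a hypothetical web service with timestamps:
docker-compose logs -f -t --tail=100 web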
docker-compose pause
Pauses running containers of a service. They can be unpaused with docker-compose unpause.
docker-compose port
Options:
--protocol=proto tcp or udp [default: tcp]
--index=index index of the container if there are multiple
instances of a service [default: 1]
docker-compose ps
Options:
-q, --quiet Only display IDs
--services Display services
--filter KEY=VAL Filter services by a property
-a, --all Show all stopped containers (including those created by the
run command)
Lists containers.
$ docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------------
--------
mywordpress_db_1 docker-entrypoint.sh mysqld Up (healthy) 3306/tcp
mywordpress_wordpress_1 /entrypoint.sh apache2-for ... Restarting
0.0.0.0:8000->80/tcp
docker-compose pull
Options:
--ignore-pull-failures Pull what it can and ignores images with pull failures.
--parallel Deprecated, pull multiple images in parallel (enabled by
default).
--no-parallel Disable parallel pulling.
-q, --quiet Pull without printing progress information
--include-deps Also pull services declared as dependencies
If you run docker-compose pull ServiceName in the same directory as the docker-compose.yml file
that defines the service, Docker pulls the associated image. For example, to call the postgres image
configured as the db service in our example, you would run docker-compose pull db.
$ docker-compose pull db
Pulling db (postgres:latest)...
latest: Pulling from library/postgres
cd0a524342ef: Pull complete
9c784d04dcb0: Pull complete
d99dddf7e662: Pull complete
e5bff71e3ce6: Pull complete
cb3e0a865488: Pull complete
31295d654cd5: Pull complete
fc930a4e09f5: Pull complete
8650cce8ef01: Pull complete
61949acd8e52: Pull complete
527a203588c0: Pull complete
26dec14ac775: Pull complete
0efc0ed5a9e5: Pull complete
40cd26695b38: Pull complete
Digest: sha256:fd6c0e2a9d053bebb294bb13765b3e01be7817bf77b01d58c2377ff27a4a46dc
Status: Downloaded newer image for postgres:latest
docker-compose push
Options:
--ignore-push-failures Push what it can and ignores images with push failures.
Example
version: '3'
services:
service1:
build: .
image: localhost:5000/yourimage # goes to local registry
service2:
build: .
image: youruser/yourimage # goes to youruser DockerHub registry
docker-compose restart
Options:
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
If you make changes to your docker-compose.yml configuration, these changes are not reflected after
running this command.
For example, changes to environment variables (which are added after a container is built, but
before the container’s command is executed) are not updated after restarting.
If you are looking to configure a service’s restart policy, please refer to restart in Compose file v3
and restart in Compose v2. Note that if you are deploying a stack in swarm mode, you should
use restart_policy instead.
docker-compose rm
By default, anonymous volumes attached to containers are not removed. You can override this
with -v. To list all volumes, use docker volume ls.
Running the command with no options also removes one-off containers created by docker-compose
up or docker-compose run:
$ docker-compose rm
Going to remove djangoquickstart_web_run_1
Are you sure? [yN] y
Removing djangoquickstart_web_run_1 ... done
docker-compose run
Usage:
run [options] [-v VOLUME...] [-p PORT...] [-e KEY=VAL...] [-l KEY=VALUE...]
SERVICE [COMMAND] [ARGS...]
Options:
-d, --detach Detached mode: Run container in the background, print
new container name.
--name NAME Assign a name to the container
--entrypoint CMD Override the entrypoint of the image.
-e KEY=VAL Set an environment variable (can be used multiple times)
-l, --label KEY=VAL Add or override a label (can be used multiple times)
-u, --user="" Run as specified username or uid
--no-deps Don't start linked services.
--rm Remove container after run. Ignored in detached mode.
-p, --publish=[] Publish a container's port(s) to the host
--service-ports Run command with the service's ports enabled and mapped
to the host.
--use-aliases Use the service's network aliases in the network(s) the
container connects to.
-v, --volume=[] Bind mount a volume (default [])
-T Disable pseudo-tty allocation. By default `docker-compose
run`
allocates a TTY.
-w, --workdir="" Working directory inside the container
Runs a one-time command against a service. For example, the following command starts
the web service and runs bash as its command.
docker-compose run web bash
Commands you use with run start in new containers with configuration defined by that of the service,
including volumes, links, and other details. However, there are two important differences.
First, the command passed by run overrides the command defined in the service configuration. For
example, if the web service configuration is started with bash, then docker-compose run web python
app.py overrides it with python app.py.
The second difference is that the docker-compose run command does not create any of the ports
specified in the service configuration. This prevents port collisions with already-open ports. If you do
want the service’s ports to be created and mapped to the host, specify the --service-ports flag:
docker-compose run --service-ports web python manage.py shell
Alternatively, manual port mapping can be specified with the --publish or -p options, just as when
using docker run:
docker-compose run --publish 8080:80 -p 2022:22 -p 127.0.0.1:2021:21 web python
manage.py shell
If you start a service configured with links, the run command first checks to see if the linked service
is running and starts the service if it is stopped. Once all the linked services are running,
the run executes the command you passed it. For example, you could run:
docker-compose run db psql -h db -U docker
If you want to remove the container after running while overriding the container’s restart policy, use
the --rm flag:
docker-compose run --rm web python manage.py db upgrade
This runs a database upgrade script, and removes the container when finished running, even if a
restart policy is specified in the service configuration.
docker-compose scale
Note: This command is deprecated. Use the up command with the --scale flag instead. Beware that
using up with the --scale flag has some subtle differences from the scale command, as it incorporates
the behaviour of the up command.
Usage: scale [options] [SERVICE=NUM...]
Options:
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
Tip: Alternatively, in Compose file version 3.x, you can specify replicas under the deploy key as part
of a service configuration for Swarm mode. The deploy key and its sub-options (including replicas)
only work with the docker stack deploy command, not docker-compose up or docker-compose run.
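For example, a sketch of the recommended replacement using up; the worker service name is a placeholder:
docker-compose up -d --scale worker=3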
docker-compose start
docker-compose stop
Options:
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
Stops running containers without removing them. They can be started again with docker-compose
start.
docker-compose top
$ docker-compose top
compose_service_a_1
PID USER TIME COMMAND
----------------------------
4060 root 0:00 top
compose_service_b_1
PID USER TIME COMMAND
----------------------------
4115 root 0:00 top
docker-compose unpause
docker-compose up
Options:
-d, --detach Detached mode: Run containers in the background,
print new container names. Incompatible with
--abort-on-container-exit.
--no-color Produce monochrome output.
--quiet-pull Pull without printing progress information
--no-deps Don't start linked services.
--force-recreate Recreate containers even if their configuration
and image haven't changed.
--always-recreate-deps Recreate dependent containers.
Incompatible with --no-recreate.
--no-recreate If containers already exist, don't recreate
them. Incompatible with --force-recreate and -V.
--no-build Don't build an image, even if it's missing.
--no-start Don't start the services after creating them.
--build Build images before starting containers.
--abort-on-container-exit Stops all containers if any container was
stopped. Incompatible with -d.
-t, --timeout TIMEOUT Use this timeout in seconds for container
shutdown when attached or when containers are
already running. (default: 10)
-V, --renew-anon-volumes Recreate anonymous volumes instead of retrieving
data from the previous containers.
--remove-orphans Remove containers for services not defined
in the Compose file.
--exit-code-from SERVICE Return the exit code of the selected service
container. Implies --abort-on-container-exit.
--scale SERVICE=NUM Scale SERVICE to NUM instances. Overrides the
`scale` setting in the Compose file if present.
Unless they are already running, this command also starts any linked services.
This table shows which Compose file versions support specific Docker releases.
Compose file format Docker Engine release
3.7 18.06.0+
3.6 18.02.0+
3.5 17.12.0+
3.4 17.09.0+
3.3 17.06.0+
3.2 17.04.0+
3.1 1.13.1+
3.0 1.13.0+
2.4 17.12.0+
2.3 17.06.0+
2.2 1.13.0+
2.1 1.12.0+
2.0 1.10.0+
1.0 1.9.1+
In addition to the Compose file format versions shown in the table, Compose itself is on a release
schedule, as shown in Compose releases, but file format versions do not necessarily increment with
each release. For example, Compose file format 3.0 was first introduced in Compose release 1.10.0,
and versioned gradually in subsequent releases.
The topics on this reference page are organized alphabetically by top-level key to reflect the
structure of the Compose file itself. Top-level keys that define a section in the configuration file such
as build, deploy, depends_on, networks, and so on, are listed with the options that support them as
sub-topics. This maps to the <key>: <option>: <value> indent structure of the Compose file.
A good place to start is the Getting Started tutorial which uses version 3 Compose stack files to
implement multi-container apps, service definitions, and swarm mode. Here are some Compose files
used in the tutorial.
Another good reference is the Compose file for the voting app sample used in the Docker for
Beginners lab topic on Deploying an app to a Swarm. This is also shown on the accordion at the top
of this section.
This section contains a list of all configuration options supported by a service definition in version 3.
build
Configuration options that are applied at build time.
build can be specified either as a string containing a path to the build context:
version: "3.7"
services:
webapp:
build: ./dir
Or, as an object with the path specified under context and optionally Dockerfile and args:
version: "3.7"
services:
webapp:
build:
context: ./dir
dockerfile: Dockerfile-alternate
args:
buildno: 1
If you specify image as well as build, then Compose names the built image with the webapp and
optional tag specified in image:
build: ./dir
image: webapp:tag
This results in an image named webapp and tagged tag, built from ./dir.
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
The docker stack command accepts only pre-built images.
CONTEXT
When the value supplied is a relative path, it is interpreted as relative to the location of the Compose
file. This directory is also the build context that is sent to the Docker daemon.
Compose builds and tags it with a generated name, and uses that image thereafter.
build:
context: ./dir
DOCKERFILE
Alternate Dockerfile.
Compose uses an alternate file to build with. A build path must also be specified.
build:
context: .
dockerfile: Dockerfile-alternate
ARGS
Add build arguments, which are environment variables accessible only during the build process.
First, specify the arguments in your Dockerfile:
ARG buildno
ARG gitcommithash
Then specify the arguments under the build key. You can pass a mapping or a list:
build:
context: .
args:
buildno: 1
gitcommithash: cdc3b19
build:
context: .
args:
- buildno=1
- gitcommithash=cdc3b19
Note: In your Dockerfile, if you specify ARG before the FROM instruction, ARG is not available in the
build instructions under FROM. If you need an argument to be available in both places, also specify it
under the FROM instruction. See Understand how ARGS and FROM interact for usage details.
You can omit the value when specifying a build argument, in which case its value at build time is the
value in the environment where Compose is running.
args:
- buildno
- gitcommithash
Note: YAML boolean values (true, false, yes, no, on, off) must be enclosed in quotes, so that the
parser interprets them as strings.
CACHE_FROM
Note: This option is new in v3.2
A list of images that the engine uses for cache resolution.
build:
context: .
cache_from:
- alpine:latest
- corp/web_app:3.14
LABELS
Note: This option is new in v3.3
Add metadata to the resulting image using Docker labels. You can use either an array or a
dictionary.
We recommend that you use reverse-DNS notation to prevent your labels from conflicting with those
used by other software.
build:
context: .
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
build:
context: .
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
SHM_SIZE
Added in version 3.5 file format
Set the size of the /dev/shm partition for this build’s containers. Specify as an integer value
representing the number of bytes or as a string expressing a byte value.
build:
context: .
shm_size: '2gb'
build:
context: .
shm_size: 10000000
TARGET
Added in version 3.4 file format
Build the specified stage as defined inside the Dockerfile. See the multi-stage build docs for details.
build:
context: .
target: prod
cap_add, cap_drop
Add or drop container capabilities. See man 7 capabilities for a full list.
cap_add:
- ALL
cap_drop:
- NET_ADMIN
- SYS_ADMIN
Note: These options are ignored when deploying a stack in swarm mode with a (version 3)
Compose file.
cgroup_parent
Specify an optional parent cgroup for the container.
cgroup_parent: m-executor-abcd
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
command
Override the default command.
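For illustration, the command can be given either as a string or, exec-style, as a list; the command shown here is a placeholder:
command: bundle exec thin -p 3000
command: ["bundle", "exec", "thin", "-p", "3000"]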
configs
Grant access to configs on a per-service basis using the per-service configs configuration. Two
different syntax variants are supported.
Note: The config must already exist or be defined in the top-level configs configuration of this stack
file, or stack deployment fails.
LONG SYNTAX
The long syntax provides more granularity in how the config is created within the service’s task
containers.
The following example sets the name of my_config to redis_config within the container, sets the
mode to 0440 (group-readable) and sets the user and group to 103. The redis service does not have
access to the my_other_config config.
version: "3.7"
services:
redis:
image: redis:latest
deploy:
replicas: 1
configs:
- source: my_config
target: /redis_config
uid: '103'
gid: '103'
mode: 0440
configs:
my_config:
file: ./my_config.txt
my_other_config:
external: true
You can grant a service access to multiple configs and you can mix long and short syntax. Defining
a config does not imply granting a service access to it.
container_name
Specify a custom container name, rather than a generated default name.
container_name: my-web-container
Because Docker container names must be unique, you cannot scale a service beyond 1 container if
you have specified a custom name. Attempting to do so results in an error.
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
credential_spec
Note: This option was added in v3.3. Using group Managed Service Account (gMSA) configurations
with compose files is supported in Compose version 3.8.
Configure the credential spec for managed service account. This option is only used for services
using Windows containers. The credential_spec must be in the
format file://<filename> or registry://<value-name>.
When using file:, the referenced file must be present in the CredentialSpecs subdirectory in the
Docker data directory, which defaults to C:\ProgramData\Docker\ on Windows. The following
example loads the credential spec from a file named
C:\ProgramData\Docker\CredentialSpecs\my-credential-spec.json:
credential_spec:
file: my-credential-spec.json
When using registry:, the credential spec is read from the Windows registry on the daemon’s host.
A registry value with the given name must be located in:
HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Virtualization\Containers\CredentialSpecs
The following example loads the credential spec from a value named my-credential-spec in the
registry:
credential_spec:
registry: my-credential-spec
When using a gMSA configuration (Compose file version 3.8), the credential spec can also be provided as a config:
configs:
  my_credentials_spec:
    file: ./my-credential-spec.json
depends_on
Express dependency between services. Service dependencies cause the following behaviors:
Simple example:
version: "3.7"
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
version: "3.7"
services:
redis:
image: redis:alpine
deploy:
replicas: 6
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
ENDPOINT_MODE
Specify a service discovery method for external clients connecting to a swarm: vip (the default) or dnsrr.
services:
wordpress:
image: wordpress
ports:
- "8080:80"
networks:
- overlay
deploy:
mode: replicated
replicas: 2
endpoint_mode: vip
mysql:
image: mysql
volumes:
- db-data:/var/lib/mysql/data
networks:
- overlay
deploy:
mode: replicated
replicas: 2
endpoint_mode: dnsrr
volumes:
db-data:
networks:
overlay:
The options for endpoint_mode also work as flags on the swarm mode CLI command docker service
create. For a quick list of all swarm related docker commands, see Swarm mode CLI commands.
To learn more about service discovery and networking in swarm mode, see Configure service
discovery in the swarm mode topics.
LABELS
Specify labels for the service. These labels are only set on the service, and not on any containers for
the service.
version: "3.7"
services:
web:
image: web
deploy:
labels:
com.example.description: "This label will appear on the web service"
To set labels on containers instead, use the labels key outside of deploy:
version: "3.7"
services:
web:
image: web
labels:
com.example.description: "This label will appear on all containers for the web
service"
MODE
Either global (exactly one container per swarm node) or replicated (a specified number of
containers). The default is replicated. (To learn more, see Replicated and global services in
the swarm topics.)
version: "3.7"
services:
worker:
image: dockersamples/examplevotingapp_worker
deploy:
mode: global
PLACEMENT
Specify placement of constraints and preferences. See the docker service create documentation for
a full description of the syntax and available types of constraints and preferences.
version: "3.7"
services:
db:
image: postgres
deploy:
placement:
constraints:
- node.role == manager
- engine.labels.operatingsystem == ubuntu 14.04
preferences:
- spread: node.labels.zone
REPLICAS
If the service is replicated (which is the default), specify the number of containers that should be
running at any given time.
version: "3.7"
services:
worker:
image: dockersamples/examplevotingapp_worker
networks:
- frontend
- backend
deploy:
mode: replicated
replicas: 6
RESOURCES
Note: This replaces the older resource constraint options for non swarm mode in Compose files prior
to version 3 (cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit, mem_swappiness), as
described in Upgrading version 2.x to 3.x.
Each of these is a single value, analogous to its docker service create counterpart.
In this general example, the redis service is constrained to use no more than 50M of memory
and 0.50 (50% of a single core) of available processing time (CPU), and has 20M of memory
and 0.25 CPU time reserved (as always available to it).
version: "3.7"
services:
redis:
image: redis:alpine
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
reservations:
cpus: '0.25'
memory: 20M
The topics below describe available options to set resource constraints on services or containers in
a swarm.
The options described here are specific to the deploy key and swarm mode. If you want to set
resource constraints on non swarm deployments, use Compose file format version 2 CPU, memory,
and other resource options. If you have further questions, refer to the discussion on the GitHub
issue docker/compose/4513.
Out Of Memory Exceptions (OOME)
If your services or containers attempt to use more memory than the system has available, you may
experience an Out Of Memory Exception (OOME) and a container, or the Docker daemon, might be
killed by the kernel OOM killer. To prevent this from happening, ensure that your application runs on
hosts with adequate memory and see Understand the risks of running out of memory.
RESTART_POLICY
Configures if and how to restart containers when they exit. Replaces restart.
version: "3.7"
services:
redis:
image: redis:alpine
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
ROLLBACK_CONFIG
Version 3.7 file format and up
UPDATE_CONFIG
Configures how the service should be updated. Useful for configuring rolling updates.
Note: order is only supported for v3.4 and higher of the compose file format.
version: "3.7"
services:
vote:
image: dockersamples/examplevotingapp_vote:before
depends_on:
- redis
deploy:
replicas: 2
update_config:
parallelism: 2
delay: 10s
order: stop-first
Note: The following sub-options (supported for docker-compose up and docker-compose run) are not
supported for docker stack deploy or the deploy key:
build
cgroup_parent
container_name
devices
tmpfs
external_links
links
network_mode
restart
security_opt
sysctls
userns_mode
Tip: See the section on how to configure volumes for services, swarms, and docker-stack.yml files.
Volumes are supported but to work with swarms and services, they must be configured as named
volumes or associated with services that are constrained to nodes with access to the requisite
volumes.
devices
List of device mappings. Uses the same format as the --device docker client create option.
devices:
- "/dev/ttyUSB0:/dev/ttyUSB0"
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
dns
Custom DNS servers. Can be a single value or a list.
dns: 8.8.8.8
dns:
- 8.8.8.8
- 9.9.9.9
dns_search
Custom DNS search domains. Can be a single value or a list.
dns_search: example.com
dns_search:
- dc1.example.com
- dc2.example.com
entrypoint
Override the default entrypoint.
entrypoint: /code/entrypoint.sh
entrypoint:
- php
- -d
- zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-
20100525/xdebug.so
- -d
- memory_limit=-1
- vendor/bin/phpunit
Note: Setting entrypoint both overrides any default entrypoint set on the service’s image with
the ENTRYPOINT Dockerfile instruction, and clears out any default command on the image - meaning
that if there’s a CMD instruction in the Dockerfile, it is ignored.
env_file
Add environment variables from a file. Can be a single value or a list.
If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to
the directory that file is in.
Environment variables declared in the environment section override these values – this holds true
even if those values are empty or undefined.
env_file: .env
env_file:
- ./common.env
- ./apps/web.env
- /opt/secrets.env
Compose expects each line in an env file to be in VAR=VAL format. Lines beginning with # are treated
as comments and are ignored. Blank lines are also ignored.
# Set Rails/Rack environment
RACK_ENV=development
Note: If your service specifies a build option, variables defined in environment files
are not automatically visible during the build. Use the args sub-option of build to define build-time
environment variables.
The value of VAL is used as is and not modified at all. For example if the value is surrounded by
quotes (as is often the case of shell variables), the quotes are included in the value passed to
Compose.
Keep in mind that the order of files in the list is significant in determining the value assigned to a
variable that shows up more than once. The files in the list are processed from the top down. For the
same variable specified in file a.env and assigned a different value in file b.env, if b.env is listed
below (after), then the value from b.env stands. For example, given the following declaration
in docker-compose.yml:
services:
some-service:
env_file:
- a.env
- b.env
# a.env
VAR=1
and
# b.env
VAR=hello
$VAR is hello.
environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true,
false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the
YML parser.
Environment variables with only a key are resolved to their values on the machine Compose is
running on, which can be helpful for secret or host-specific values.
environment:
RACK_ENV: development
SHOW: 'true'
SESSION_SECRET:
environment:
- RACK_ENV=development
- SHOW=true
- SESSION_SECRET
Note: If your service specifies a build option, variables defined in environment are not automatically
visible during the build. Use the args sub-option of build to define build-time environment variables.
expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked
services. Only the internal port can be specified.
expose:
- "3000"
- "8000"
external_links
Link to containers started outside this docker-compose.yml or even outside of Compose, especially
for containers that provide shared or common services. external_links follow semantics similar to
the legacy option links when specifying both the container name and the link alias
(CONTAINER:ALIAS).
external_links:
- redis_1
- project_db_1:mysql
- project_db_1:postgresql
Notes:
If you’re using the version 2 or above file format, the externally-created containers must be
connected to at least one of the same networks as the service that is linking to them. Links are a
legacy option. We recommend using networks instead.
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the ip address and hostname is created in /etc/hosts inside containers for this
service, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost
healthcheck
Version 2.1 file format and up.
Configure a check that’s run to determine whether or not containers for this service are “healthy”.
See the docs for the HEALTHCHECK Dockerfile instruction for details on how healthchecks work.
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
To disable any default healthcheck set by the image, you can use disable: true. This is equivalent
to specifying test: ["NONE"].
healthcheck:
disable: true
image
Specify the image to start the container from. Can either be a repository/tag or a partial image ID.
image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in
which case it builds it using the specified options and tags it with the specified tag.
init
Added in version 3.7 file format.
Run an init inside the container that forwards signals and reaps processes. Set this option to true to
enable this feature for the service.
version: "3.7"
services:
web:
image: alpine:latest
init: true
The default init binary that is used is Tini, and is installed in /usr/libexec/docker-initon the
daemon host. You can configure the daemon to use a custom init binary through the init-
path configuration option.
isolation
Specify a container’s isolation technology. On Linux, the only supported value is default. On
Windows, acceptable values are default, process and hyperv. Refer to the Docker Engine docs for
details.
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
links
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you
absolutely need to continue using it, we recommend that you use user-defined networks to facilitate
communication between two containers instead of using --link. One feature that user-defined
networks do not support that you can do with --link is sharing environmental variables between
containers. However, you can use other mechanisms such as volumes to share environment
variables between containers in a more controlled way.
Link to containers in another service. Either specify both the service name and a link alias
(SERVICE:ALIAS), or just the service name.
web:
links:
- db
- db:database
- redis
Containers for the linked service are reachable at a hostname identical to the alias, or the service
name if no alias was specified.
Links are not required to enable services to communicate - by default, any service can reach any
other service at that service’s name. (See also, the Links topic in Networking in Compose.)
Links also express dependency between services in the same way as depends_on, so they
determine the order of service startup.
Notes
If you define both links and networks, services with links between them must share at least
one network in common to communicate.
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose
file.
logging
Logging configuration for the service.
logging:
driver: syslog
options:
syslog-address: "tcp://192.168.0.42:123"
The driver name specifies a logging driver for the service’s containers, as with the --log-
driver option for docker run (documented here).
driver: "json-file"
driver: "syslog"
driver: "none"
Note: Only the json-file and journald drivers make the logs available directly from docker-compose
up and docker-compose logs. Using any other driver does not print any logs.
Specify logging options for the logging driver with the options key, as with the --log-optoption
for docker run.
Logging options are key-value pairs. An example of syslog options:
driver: "syslog"
options:
syslog-address: "tcp://192.168.0.42:123"
The default driver, json-file, has options to limit the amount of logs stored. To do this, use a key-value
pair for maximum storage size and maximum number of files:
options:
  max-size: "200k"
  max-file: "10"
The example shown above would store log files until they reach a max-size of 200kB, and then
rotate them. The amount of individual log files stored is specified by the max-file value. As logs grow
beyond the max limits, older log files are removed to allow storage of new logs.
Here is an example docker-compose.yml file that limits logging storage:
version: "3.7"
services:
some-service:
image: some-service
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
The above example for controlling log files and sizes uses options specific to the json-file driver.
These particular options are not available on other logging drivers. For a full list of supported logging
drivers and their options, see logging drivers.
network_mode
Network mode. Use the same values as the docker client --network parameter, plus the special
form service:[service name].
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
Notes
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose
file.
network_mode: "host" cannot be mixed with links.
networks
Networks to join, referencing entries under the top-level networks key.
services:
some-service:
networks:
- some-network
- other-network
ALIASES
Aliases (alternative hostnames) for this service on the network. Other containers on the same
network can use either the service name or this alias to connect to one of the service’s containers.
Since aliases is network-scoped, the same service can have different aliases on different networks.
Note: A network-wide alias can be shared by multiple containers, and even by multiple services. If it
is, then exactly which container the name resolves to is not guaranteed.
services:
some-service:
networks:
some-network:
aliases:
- alias1
- alias3
other-network:
aliases:
- alias2
In the example below, three services are provided (web, worker, and db), along with two networks
(new and legacy). The db service is reachable at the hostname db or database on the new network,
and at db or mysql on the legacy network.
version: "3.7"
services:
web:
image: "nginx:alpine"
networks:
- new
worker:
image: "my-worker-image:latest"
networks:
- legacy
db:
image: mysql
networks:
new:
aliases:
- database
legacy:
aliases:
- mysql
networks:
new:
legacy:
IPV4_ADDRESS, IPV6_ADDRESS
Specify a static IP address for containers for this service when joining the network.
The corresponding network configuration in the top-level networks section must have an ipam block
with subnet configurations covering each static address.
If IPv6 addressing is desired, the enable_ipv6 option must be set, and you must use a version 2.x
Compose file. IPv6 options do not currently work in swarm mode.
An example:
version: "3.7"
services:
app:
image: nginx:alpine
networks:
app_net:
ipv4_address: 172.16.238.10
ipv6_address: 2001:3984:3989::10
networks:
app_net:
ipam:
driver: default
config:
- subnet: "172.16.238.0/24"
- subnet: "2001:3984:3989::/64"
pid
pid: "host"
Sets the PID mode to the host PID mode. This turns on sharing of the PID address space between the
container and the host operating system. Containers launched with this flag can access and
manipulate other containers in the bare-metal machine’s namespace and vice versa.
ports
Expose ports.
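In the short form, ports are written as strings, either a container port alone or HOST:CONTAINER; a minimal sketch with placeholder port numbers:
ports:
  - "3000"
  - "8000:8000"
  - "127.0.0.1:8001:8001"
  - "6060:6060/udp"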
The long form syntax allows the configuration of additional fields that can’t be expressed in the short
form.
ports:
- target: 80
published: 8080
protocol: tcp
mode: host
restart
Restart policy for the container: "no" (the default), always, on-failure, or unless-stopped.
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Use restart_policy instead.
secrets
Grant access to secrets on a per-service basis using the per-service secrets configuration. Two
different syntax variants are supported.
Note: The secret must already exist or be defined in the top-level secretsconfiguration of this stack
file, or stack deployment fails.
LONG SYNTAX
The long syntax provides more granularity in how the secret is created within the service’s task
containers.
The following example sets the name of my_secret to redis_secret within the container, sets the
mode to 0440 (group-readable) and sets the user and group to 103. The redis service does not have
access to the my_other_secret secret.
version: "3.7"
services:
redis:
image: redis:latest
deploy:
replicas: 1
secrets:
- source: my_secret
target: redis_secret
uid: '103'
gid: '103'
mode: 0440
secrets:
my_secret:
file: ./my_secret.txt
my_other_secret:
external: true
You can grant a service access to multiple secrets and you can mix long and short syntax. Defining
a secret does not imply granting a service access to it.
security_opt
Override the default labeling scheme for each container.
security_opt:
- label:user:USER
- label:role:ROLE
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
stop_grace_period
Specify how long to wait when attempting to stop a container if it doesn’t handle SIGTERM (or
whatever stop signal has been specified with stop_signal), before sending SIGKILL. Specified as
a duration.
stop_grace_period: 1s
stop_grace_period: 1m30s
By default, stop waits 10 seconds for the container to exit before sending SIGKILL.
stop_signal
Sets an alternative signal to stop the container. By default stop uses SIGTERM. Setting an
alternative signal using stop_signal causes stop to send that signal instead.
stop_signal: SIGUSR1
sysctls
Kernel parameters to set in the container. You can use either an array or a dictionary.
sysctls:
net.core.somaxconn: 1024
net.ipv4.tcp_syncookies: 0
sysctls:
- net.core.somaxconn=1024
- net.ipv4.tcp_syncookies=0
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
tmpfs
Version 2 file format and up.
Mount a temporary file system inside the container. Can be a single value or a list.
tmpfs: /run
tmpfs:
- /run
- /tmp
Note: This option is ignored when deploying a stack in swarm mode with a (version 3-3.5) Compose
file.
Version 3.6 file format and up.
Mount a temporary file system inside the container. The size parameter specifies the size of the tmpfs
mount in bytes. It is unlimited by default.
- type: tmpfs
target: /app
tmpfs:
size: 1000
ulimits
Override the default ulimits for a container. You can either specify a single limit as an integer or
soft/hard limits as a mapping.
ulimits:
nproc: 65535
nofile:
soft: 20000
hard: 40000
userns_mode
userns_mode: "host"
Disables the user namespace for this service, if Docker daemon is configured with user
namespaces. See dockerd for more information.
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
volumes
Mount host paths or named volumes, specified as sub-options to a service.
You can mount a host path as part of a definition for a single service, and there is no need to define
it in the top level volumes key.
But, if you want to reuse a volume across multiple services, then define a named volume in the top-
level volumes key. Use named volumes with services, swarms, and stack files.
Note: The top-level volumes key defines a named volume and references it from each
service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
See Use volumes and Volume Plugins for general information on volumes.
This example shows a named volume (mydata) being used by the web service, and a bind mount
defined for a single service (first path under db service volumes). The db service also uses a named
volume called dbdata (second path under db service volumes), but defines it using the old string
format for mounting a named volume. Named volumes must be listed under the top-
level volumes key, as shown.
version: "3.7"
services:
web:
image: nginx:alpine
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
db:
image: postgres:latest
volumes:
- "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
- "dbdata:/var/lib/postgresql/data"
volumes:
mydata:
dbdata:
Note: See Use volumes and Volume Plugins for general information on volumes.
SHORT SYNTAX
Optionally specify a path on the host machine (HOST:CONTAINER), or an access mode
(HOST:CONTAINER:ro).
You can mount a relative path on the host, which expands relative to the directory of the Compose
configuration file being used. Relative paths should always begin with . or ..
volumes:
# Just specify a path and let the Engine create a volume
- /var/lib/mysql
# User-relative path
- ~/configs:/etc/configs/:ro
# Named volume
- datavolume:/var/lib/mysql
LONG SYNTAX
The long form syntax allows the configuration of additional fields that can’t be expressed in the short
form.
version: "3.7"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
If you do not use named volumes with specified sources, Docker creates an anonymous
volume for each task backing a service. Anonymous volumes do not persist after the associated
containers are removed.
If you want your data to persist, use a named volume and a volume driver that is multi-host aware,
so that the data is accessible from any node. Or, set constraints on the service so that its tasks are
deployed on a node that has the volume present.
As an example, the docker-stack.yml file for the votingapp sample in Docker Labs defines a service
called db that runs a postgres database. It is configured as a named volume to persist the data on
the swarm, and is constrained to run only on manager nodes. Here is the relevant snippet from that
file:
version: "3.7"
services:
db:
image: postgres:9.4
volumes:
- db-data:/var/lib/postgresql/data
networks:
- backend
deploy:
placement:
constraints: [node.role == manager]
On Docker Desktop for Mac, you can add a consistency flag to bind mounts to tune the performance of
shared filesystems. The flags are:
consistent: Full consistency. The container runtime and the host maintain an identical view
of the mount at all times. This is the default.
cached: The host’s view of the mount is authoritative. There may be delays before updates
made on the host are visible within a container.
delegated: The container runtime’s view of the mount is authoritative. There may be delays
before updates made in a container are visible on the host.
Here is an example of configuring a volume as cached:
version: "3.7"
services:
php:
image: php:7.1-fpm
ports:
- "9000"
volumes:
- .:/var/www/project:cached
Full detail on these flags, the problems they solve, and their docker run counterparts is in the
Docker Desktop for Mac topic Performance tuning for volume mounts (shared filesystems).
domainname, hostname, ipc, mac_address, privileged,
read_only, shm_size, stdin_open, tty, user, working_dir
Each of these is a single value, analogous to its docker run counterpart. Note that mac_address is a
legacy option.
user: postgresql
working_dir: /code
domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43
privileged: true
read_only: true
shm_size: 64M
stdin_open: true
tty: true
Specifying durations
Some configuration options, such as the interval and timeout sub-options for healthcheck, accept a
duration as a string in a format that looks like this:
2.5s
10s
1m30s
2h32m
5h34m56s
The supported units are us, ms, s, m and h.
Specifying byte values
Some configuration options, such as the shm_size sub-option for build, accept a byte value as a
string. The supported units are b, k, m and g, and their alternative notation kb, mb and gb. Decimal
values are not supported at this time.
See Use volumes and Volume Plugins for general information on volumes.
Here’s an example of a two-service setup where a database’s data directory is shared with another
service as a volume so that it can be periodically backed up:
version: "3.7"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
- data-volume:/var/lib/backup/data
volumes:
data-volume:
An entry under the top-level volumes key can be empty, in which case it uses the default driver
configured by the Engine (in most cases, this is the local driver). Optionally, you can configure it
with the following keys:
driver
Specify which volume driver should be used for this volume. Defaults to whatever driver the Docker
Engine has been configured to use, which in most cases is local. If the driver is not available, the
Engine returns an error when docker-compose up tries to create the volume.
driver: foobar
driver_opts
Specify a list of options as key-value pairs to pass to the driver for this volume. Those options are
driver-dependent - consult the driver’s documentation for more information. Optional.
volumes:
example:
driver_opts:
type: "nfs"
o: "addr=10.40.0.199,nolock,soft,rw"
device: ":/docker/example"
external
If set to true, specifies that this volume has been created outside of Compose. docker-compose
up does not attempt to create it, and raises an error if it doesn’t exist.
For version 3.3 and below of the format, external cannot be used in conjunction with other volume
configuration keys (driver, driver_opts, labels). This limitation no longer exists for version 3.4 and
above.
In the example below, instead of attempting to create a volume called [projectname]_data, Compose looks for an existing volume simply called data and mounts it into the db service’s containers.
version: "3.7"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
external: true
external.name was deprecated in the version 3.4 file format; use name instead.
You can also specify the name of the volume separately from the name used to refer to it within the
Compose file:
volumes:
data:
external:
name: actual-name-of-volume
External volumes that do not exist are created if you use docker stack deploy to launch the app
in swarm mode (instead of docker compose up). In swarm mode, a volume is automatically created
when it is defined by a service. As service tasks are scheduled on new nodes, swarmkit creates the
volume on the local node. To learn more, see moby/moby#29976.
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
com.example.description: "Database volume"
com.example.department: "IT/Ops"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Database volume"
- "com.example.department=IT/Ops"
- "com.example.label-with-empty-value"
name
Added in version 3.4 file format
Set a custom name for this volume. The name field can be used to reference volumes that contain
special characters. The name is used as is and will not be scoped with the stack name.
version: "3.7"
volumes:
data:
name: my-app-data
For a full explanation of Compose’s use of Docker networking features and all network driver
options, see the Networking guide.
For Docker Labs tutorials on networking, start with Designing Scalable, Portable Docker
Container Networks
driver
Specify which driver should be used for this network.
The default driver depends on how the Docker Engine you’re using is configured, but in most
instances it is bridge on a single host and overlay on a Swarm.
driver: overlay
BRIDGE
Docker defaults to using a bridge network on a single host. For examples of how to work with bridge
networks, see the Docker Labs tutorial on Bridge networking.
OVERLAY
The overlay driver creates a named network across multiple nodes in a swarm.
For a working example of how to build and use an overlay network with a service in swarm
mode, see the Docker Labs tutorial on Overlay networking and service discovery.
For an in-depth look at how it works under the hood, see the networking concepts lab on
the Overlay Driver Network Architecture.
HOST OR NONE
Use the host’s networking stack, or no networking. Equivalent to docker run --net=host or docker
run --net=none. Only used if you use docker stack commands. If you use the docker-
compose command, use network_mode instead.
If you want to use a particular network during a build, use the network key under build, as shown in the second YAML file example below.
The syntax for using built-in networks such as host and none is a little different. Define an external
network with the name host or none (that Docker has already created automatically) and an alias that
Compose can use (hostnet or nonet in the following examples), then grant the service access to that
network using the alias.
version: "3.7"
services:
web:
networks:
hostnet: {}
networks:
hostnet:
external: true
name: host
services:
web:
...
build:
...
network: host
context: .
...
services:
web:
...
networks:
nonet: {}
networks:
nonet:
external: true
name: none
driver_opts
Specify a list of options as key-value pairs to pass to the driver for this network. Those options are
driver-dependent - consult the driver’s documentation for more information. Optional.
driver_opts:
foo: "bar"
baz: 1
attachable
Note: Only supported for v3.2 and higher.
Only used when the driver is set to overlay. If set to true, then standalone containers can attach to
this network, in addition to services. If a standalone container attaches to an overlay network, it can
communicate with services and standalone containers that are also attached to the overlay network
from other Docker daemons.
networks:
mynet1:
driver: overlay
attachable: true
enable_ipv6
Enable IPv6 networking on this network.
enable_ipv6 requires you to use a version 2 Compose file, as this directive is not yet supported in
Swarm mode.
ipam
Specify custom IPAM config. This is an object with several properties, each of which is optional:
A full example:
ipam:
driver: default
config:
- subnet: 172.28.0.0/16
Note: Additional IPAM configurations, such as gateway, are only honored for version 2 at the
moment.
internal
By default, Docker also connects a bridge network to it to provide external connectivity. If you want
to create an externally isolated overlay network, you can set this option to true.
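A minimal sketch (the network name is illustrative):
networks:
  backend_net:
    driver: overlay
    internal: true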
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
com.example.description: "Financial transaction network"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Financial transaction network"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
external
If set to true, specifies that this network has been created outside of Compose. docker-compose
up does not attempt to create it, and raises an error if it doesn’t exist.
For version 3.3 and below of the format, external cannot be used in conjunction with other network
configuration keys (driver, driver_opts, ipam, internal). This limitation no longer exists for version
3.4 and above.
In the example below, proxy is the gateway to the outside world. Instead of attempting to create a network called [projectname]_outside, Compose looks for an existing network simply called outside and connects the proxy service’s containers to it.
version: "3.7"
services:
proxy:
build: ./proxy
networks:
- outside
- default
app:
build: ./app
networks:
- default
networks:
outside:
external: true
external.name was deprecated in the version 3.5 file format; use name instead.
You can also specify the name of the network separately from the name used to refer to it within the
Compose file:
version: "3.7"
networks:
outside:
external:
name: actual-name-of-network
name
Added in version 3.5 file format
Set a custom name for this network. The name field can be used to reference networks which
contain special characters. The name is used as is and will not be scoped with the stack name.
version: "3.7"
networks:
network1:
name: my-app-net
configs configuration reference
The top-level configs declaration defines or references configs that can be granted to the services in this stack. The source of the config is either file or external.
file: The config is created with the contents of the file at the specified path.
external: If set to true, specifies that this config has already been created. Docker does not
attempt to create it, and if it does not exist, a config not found error occurs.
name: The name of the config object in Docker. This field can be used to reference configs
that contain special characters. The name is used as is and will not be scoped with the stack
name. Introduced in version 3.5 file format.
Another variant for external configs is when the name of the config in Docker is different from the
name that exists within the service. The following example modifies the previous one to use the
external config called redis_config.
configs:
my_first_config:
file: ./config_data
my_second_config:
external:
name: redis_config
You still need to grant access to the config to each service in the stack.
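For instance, using the short syntax, a service could be granted access like this (the service and image names are illustrative):
services:
  redis:
    image: redis:latest
    configs:
      - my_first_config
      - my_second_config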
secrets configuration reference
The top-level secrets declaration defines or references secrets that can be granted to the services in this stack. The source of the secret is either file or external.
file: The secret is created with the contents of the file at the specified path.
external: If set to true, specifies that this secret has already been created. Docker does not
attempt to create it, and if it does not exist, a secret not found error occurs.
name: The name of the secret object in Docker. This field can be used to reference secrets
that contain special characters. The name is used as is and will not be scoped with the stack
name. Introduced in version 3.5 file format.
In this example, my_first_secret is created as <stack_name>_my_first_secret when the stack is
deployed, and my_second_secret already exists in Docker.
secrets:
my_first_secret:
file: ./secret_data
my_second_secret:
external: true
Another variant for external secrets is when the name of the secret in Docker is different from the
name that exists within the service. The following example modifies the previous one to use the
external secret called redis_secret.
Compose File v3.5 and above
secrets:
my_first_secret:
file: ./secret_data
my_second_secret:
external: true
name: redis_secret
You still need to grant access to the secrets to each service in the stack.
Variable substitution
Your configuration options can contain environment variables. Compose uses the variable values
from the shell environment in which docker-compose is run. For example, suppose the shell
contains POSTGRES_VERSION=9.3 and you supply this configuration:
db:
image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for the POSTGRES_VERSION environment variable in the shell and substitutes its value in. For this example, Compose resolves the image to postgres:9.3 before running the configuration.
If an environment variable is not set, Compose substitutes with an empty string. In the example
above, if POSTGRES_VERSION is not set, the value for the image option is postgres:.
You can set default values for environment variables using a .env file, which Compose automatically looks for. Values set in the shell environment override those set in the .env file.
Important: The .env file feature only works when you use the docker-compose up command and does not work with docker stack deploy.
Both $VARIABLE and ${VARIABLE} syntax are supported. Additionally, when using the 2.1 file format, it is possible to provide inline default values using typical shell syntax:
${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in the environment.
${VARIABLE-default} evaluates to default only if VARIABLE is unset in the environment.
Similarly, the following syntax allows you to specify mandatory variables:
${VARIABLE:?err} exits with an error message containing err if VARIABLE is unset or empty in the environment.
${VARIABLE?err} exits with an error message containing err if VARIABLE is unset in the environment.
If you forget and use a single dollar sign ($), Compose interprets the value as an environment variable and warns you that the variable is not set.
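For instance, an inline default could supply a fallback tag (the variable and fallback value are illustrative):
db:
  image: "postgres:${POSTGRES_VERSION:-9.3}"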
Extension fields
Added in version 3.4 file format.
It is possible to re-use configuration fragments using extension fields. Those special fields can be of any format as long as they are located at the root of your Compose file and their names start with the x- character sequence.
Note
Starting with the 3.7 format (for the 3.x series) and 2.4 format (for the 2.x series), extension fields are
also allowed at the root of service, volume, network, config and secret definitions.
version: '3.4'
x-custom:
items:
- a
- b
options:
max-size: '12m'
name: "custom"
The contents of those fields are ignored by Compose, but they can be inserted in your resource
definitions using YAML anchors. For example, if you want several of your services to use the same
logging configuration:
logging:
options:
max-size: '12m'
max-file: '5'
driver: json-file
You may write a Compose file as follows:
version: '3.4'
x-logging:
&default-logging
options:
max-size: '12m'
max-file: '5'
driver: json-file
services:
web:
image: myapp/web:latest
logging: *default-logging
db:
image: mysql:latest
logging: *default-logging
It is also possible to partially override values in extension fields using the YAML merge type. For
example:
version: '3.4'
x-volumes:
&default-volume
driver: foobar-storage
services:
web:
image: myapp/web:latest
volumes: ["vol1", "vol2", "vol3"]
volumes:
vol1: *default-volume
vol2:
<< : *default-volume
name: volume02
vol3:
<< : *default-volume
driver: default
name: volume-local
This table shows which Compose file versions support specific Docker releases.
Compose file format    Docker Engine release
3.7    18.06.0+
3.6    18.02.0+
3.5    17.12.0+
3.4    17.09.0+
3.3    17.06.0+
3.2    17.04.0+
3.1    1.13.1+
3.0    1.13.0+
2.4    17.12.0+
2.3    17.06.0+
2.2    1.13.0+
2.1    1.12.0+
2.0    1.10.0+
1.0    1.9.1+
In addition to the Compose file format versions shown in the table, Compose itself is on a release schedule, as shown in Compose releases, but file format versions do not necessarily increment with each release. For example, Compose file format 3.0 was first introduced in Compose release 1.10.0, and versioned gradually in subsequent releases.
This section contains a list of all configuration options supported by a service definition in version 2.
blkio_config
A set of configuration options to set block IO limits for this service.
version: "2.4"
services:
foo:
image: busybox
blkio_config:
weight: 300
weight_device:
- path: /dev/sda
weight: 400
device_read_bps:
- path: /dev/sdb
rate: '12mb'
device_read_iops:
- path: /dev/sdb
rate: 120
device_write_bps:
- path: /dev/sdb
rate: '1024k'
device_write_iops:
- path: /dev/sdb
rate: 30
DEVICE_READ_BPS, DEVICE_WRITE_BPS
Set a limit in bytes per second for read / write operations on a given device. Each item in the list must have two keys: path, defining the symbolic path to the affected device, and rate, either as an integer value representing the number of bytes or as a string expressing a byte value.
DEVICE_READ_IOPS, DEVICE_WRITE_IOPS
Set a limit in operations per second for read / write operations on a given device. Each item in the list must have two keys: path, defining the symbolic path to the affected device, and rate, as an integer value representing the permitted number of operations per second.
WEIGHT
Modify the proportion of bandwidth allocated to this service relative to other services. Takes an
integer value between 10 and 1000, with 500 being the default.
WEIGHT_DEVICE
Fine-tune bandwidth allocation by device. Each item in the list must have two keys: path, defining the symbolic path to the affected device, and weight, an integer value between 10 and 1000.
build
Configuration options that are applied at build time.
build can be specified either as a string containing a path to the build context, or an object with the
path specified under context and optionally dockerfile and args.
build: ./dir
build:
context: ./dir
dockerfile: Dockerfile-alternate
args:
buildno: 1
If you specify image as well as build, then Compose names the built image with the name and optional tag specified in image:
build: ./dir
image: webapp:tag
This results in an image named webapp and tagged tag, built from ./dir.
CACHE_FROM
Added in version 2.2 file format
A list of images that the engine uses for cache resolution.
build:
context: .
cache_from:
- alpine:latest
- corp/web_app:3.14
CONTEXT
Version 2 file format and up. In version 1, just use build.
Either a path to a directory containing a Dockerfile, or a URL to a git repository.
When the value supplied is a relative path, it is interpreted as relative to the location of the Compose file. This directory is also the build context that is sent to the Docker daemon.
Compose builds and tags it with a generated name, and uses that image thereafter.
build:
context: ./dir
DOCKERFILE
Alternate Dockerfile.
Compose uses an alternate file to build with. A build path must also be specified.
build:
context: .
dockerfile: Dockerfile-alternate
ARGS
Version 2 file format and up.
Add build arguments, which are environment variables accessible only during the build process.
First, specify the arguments in your Dockerfile:
ARG buildno
ARG gitcommithash
Then specify the arguments under the build key. You can pass a mapping or a list:
build:
context: .
args:
buildno: 1
gitcommithash: cdc3b19
build:
context: .
args:
- buildno=1
- gitcommithash=cdc3b19
Note: In your Dockerfile, if you specify ARG before the FROM instruction, ARG is not available in the build instructions under FROM. If you need an argument to be available in both places, also specify it under the FROM instruction. See Understand how ARGS and FROM interact for usage details.
You can omit the value when specifying a build argument, in which case its value at build time is the
value in the environment where Compose is running.
args:
- buildno
- gitcommithash
Note: YAML boolean values (true, false, yes, no, on, off) must be enclosed in quotes, so that the
parser interprets them as strings.
EXTRA_HOSTS
Add hostname mappings at build-time. Use the same values as the docker client --add-host parameter.
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the IP address and hostname is created in /etc/hosts inside containers for this build, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost
ISOLATION
Added in version 2.1 file format.
Specify a build’s container isolation technology. On Linux, the only supported value is default. On
Windows, acceptable values are default, process and hyperv. Refer to theDocker Engine docs for
details.
If unspecified, Compose will use the isolation value found in the service’s definition to determine
the value to use for builds.
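A minimal sketch for a Windows host (the chosen isolation value is illustrative; on Linux only default is supported):
version: "2.4"
services:
  web:
    build:
      context: .
      isolation: process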
LABELS
Added in version 2.1 file format
Add metadata to the resulting image using Docker labels. You can use either an array or a
dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
build:
context: .
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
build:
context: .
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
NETWORK
Added in version 2.2 file format
Set the network containers connect to for the RUN instructions during build.
build:
context: .
network: host
build:
context: .
network: custom_network_1
SHM_SIZE
Added in version 2.3 file format
Set the size of the /dev/shm partition for this build’s containers. Specify as an integer value
representing the number of bytes or as a string expressing a byte value.
build:
context: .
shm_size: '2gb'
build:
context: .
shm_size: 10000000
TARGET
Added in version 2.3 file format
Build the specified stage as defined inside the Dockerfile. See the multi-stage build docs for details.
build:
context: .
target: prod
cap_add, cap_drop
Add or drop container capabilities. See man 7 capabilities for a full list.
cap_add:
- ALL
cap_drop:
- NET_ADMIN
- SYS_ADMIN
command
Override the default command.
cgroup_parent
Specify an optional parent cgroup for the container.
cgroup_parent: m-executor-abcd
container_name
Specify a custom container name, rather than a generated default name.
container_name: my-web-container
Because Docker container names must be unique, you cannot scale a service beyond 1 container if
you have specified a custom name. Attempting to do so results in an error.
cpu_rt_runtime, cpu_rt_period
Added in version 2.2 file format
Configure CPU allocation parameters using the Docker daemon realtime scheduler.
cpu_rt_runtime: '400ms'
cpu_rt_period: '1400us'
device_cgroup_rules
Added in version 2.3 file format.
Add rules to the cgroup allowed devices list.
device_cgroup_rules:
- 'c 1:3 mr'
- 'a 7:* rmw'
devices
List of device mappings. Uses the same format as the --device docker client create option.
devices:
- "/dev/ttyUSB0:/dev/ttyUSB0"
depends_on
Version 2 file format and up.
Express dependency between services.
Simple example:
version: "2.4"
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
Note: depends_on does not wait for db and redis to be “ready” before starting web - only until they
have been started. If you need to wait for a service to be ready, see Controlling startup order for
more on this problem and strategies for solving it.
Added in version 2.1 file format.
A healthcheck indicates that you want a dependency to wait for another container to be “healthy” (as
indicated by a successful state from the healthcheck) before starting.
Example:
version: "2.4"
services:
web:
build: .
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
redis:
image: redis
db:
image: redis
healthcheck:
test: "exit 0"
In the above example, Compose waits for the redis service to be started (legacy behavior) and
the db service to be healthy before starting web.
dns
Custom DNS servers. Can be a single value or a list.
dns: 8.8.8.8
dns:
- 8.8.8.8
- 9.9.9.9
dns_opt
List of custom DNS options to be added to the container’s resolv.conf file.
dns_opt:
- use-vc
- no-tld-query
dns_search
Custom DNS search domains. Can be a single value or a list.
dns_search: example.com
dns_search:
- dc1.example.com
- dc2.example.com
tmpfs
Mount a temporary file system inside the container. Can be a single value or a list.
tmpfs: /run
tmpfs:
- /run
- /tmp
entrypoint
Override the default entrypoint.
entrypoint: /code/entrypoint.sh
entrypoint:
- php
- -d
- zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-
20100525/xdebug.so
- -d
- memory_limit=-1
- vendor/bin/phpunit
Note: Setting entrypoint both overrides any default entrypoint set on the service’s image with
the ENTRYPOINT Dockerfile instruction, and clears out any default command on the image - meaning
that if there’s a CMD instruction in the Dockerfile, it is ignored.
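If the image’s default command should still run behind the new entrypoint, re-declare it explicitly with command; a minimal sketch (the image, entrypoint path, and command are illustrative):
services:
  app:
    image: php:7.1-fpm
    entrypoint: /code/entrypoint.sh
    command: ["php-fpm"]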
env_file
Add environment variables from a file. Can be a single value or a list.
If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to
the directory that file is in.
Environment variables declared in the environment section override these values – this holds true
even if those values are empty or undefined.
env_file: .env
env_file:
- ./common.env
- ./apps/web.env
- /opt/secrets.env
Compose expects each line in an env file to be in VAR=VAL format. Lines beginning with # are processed as comments and are ignored. Blank lines are also ignored.
# Set Rails/Rack environment
RACK_ENV=development
Note: If your service specifies a build option, variables defined in environment files
are not automatically visible during the build. Use the args sub-option of build to define build-time
environment variables.
The value of VAL is used as is and not modified at all. For example, if the value is surrounded by
quotes (as is often the case of shell variables), the quotes are included in the value passed to
Compose.
Keep in mind that the order of files in the list is significant in determining the value assigned to a
variable that shows up more than once. The files in the list are processed from the top down. For the
same variable specified in file a.env and assigned a different value in file b.env, if b.env is listed
below (after), then the value from b.env stands. For example, given the following declaration
in docker-compose.yml:
services:
some-service:
env_file:
- a.env
- b.env
# a.env
VAR=1
and
# b.env
VAR=hello
$VAR is hello.
environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true, false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the YAML parser.
Environment variables with only a key are resolved to their values on the machine Compose is
running on, which can be helpful for secret or host-specific values.
environment:
RACK_ENV: development
SHOW: 'true'
SESSION_SECRET:
environment:
- RACK_ENV=development
- SHOW=true
- SESSION_SECRET
Note: If your service specifies a build option, variables defined in environment are not automatically visible during the build. Use the args sub-option of build to define build-time environment variables.
expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked
services. Only the internal port can be specified.
expose:
- "3000"
- "8000"
extends
Extend another service, in the current file or another, optionally overriding configuration.
You can use extends on any service together with other configuration keys. The extends value must be a dictionary defined with a required service and an optional file key.
extends:
file: common.yml
service: webapp
The service value is the name of the service being extended, for example web or database. The file value is the location of a Compose configuration file defining that service.
If you omit the file key, Compose looks for the service configuration in the current file. The file value
can be an absolute or relative path. If you specify a relative path, Compose treats it as relative to the
location of the current file.
You can extend a service that itself extends another. You can extend indefinitely. Compose does not
support circular references and docker-compose returns an error if it encounters one.
For more on extends, see the extends documentation.
external_links
Link to containers started outside this docker-compose.yml or even outside of Compose, especially
for containers that provide shared or common services. external_links follow semantics similar
to links when specifying both the container name and the link alias (CONTAINER:ALIAS).
external_links:
- redis_1
- project_db_1:mysql
- project_db_1:postgresql
Note: For version 2 file format, the externally-created containers must be connected to at least one
of the same networks as the service which is linking to them.
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the IP address and hostname is created in /etc/hosts inside containers for this service, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost
group_add
Specify additional groups (by name or number) which the user inside the container should be a
member of. Groups must exist in both the container and the host system to be added. An example of
where this is useful is when multiple containers (running as different users) need to all read or write
the same file on the host system. That file can be owned by a group shared by all the containers,
and specified in group_add. See the Docker documentation for more details.
A full example:
version: "2.4"
services:
myservice:
image: alpine
group_add:
- mail
Running id inside the created container shows that the user belongs to the mail group, which would
not have been the case if group_add were not used.
healthcheck
Version 2.1 file format and up.
Configure a check that’s run to determine whether or not containers for this service are “healthy”.
See the docs for the HEALTHCHECK Dockerfile instruction for details on how healthchecks work.
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
To disable any default healthcheck set by the image, you can use disable: true. This is equivalent
to specifying test: ["NONE"].
healthcheck:
disable: true
Note: The start_period option is a more recent feature and is only available with the 2.3 file format.
image
Specify the image to start the container from. Can either be a repository/tag or a partial image ID.
image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in
which case it builds it using the specified options and tags it with the specified tag.
init
Added in version 2.2 file format.
Run an init inside the container that forwards signals and reaps processes. Set this option to true to
enable this feature for the service.
version: "2.4"
services:
web:
image: alpine:latest
init: true
The default init binary that is used is Tini, and is installed in /usr/libexec/docker-init on the daemon host. You can configure the daemon to use a custom init binary through the init-path configuration option.
isolation
Added in version 2.1 file format.
Specify a container’s isolation technology. On Linux, the only supported value is default. On
Windows, acceptable values are default, process and hyperv. Refer to the Docker Engine docs for
details.
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
links
Link to containers in another service. Either specify both the service name and a link alias
("SERVICE:ALIAS"), or just the service name.
Links are a legacy option. We recommend using networks instead.
web:
links:
- "db"
- "db:database"
- "redis"
Containers for the linked service are reachable at a hostname identical to the alias, or the service
name if no alias was specified.
Links also express dependency between services in the same way as depends_on, so they
determine the order of service startup.
Note: If you define both links and networks, services with links between them must share at least
one network in common in order to communicate. We recommend using networks instead.
logging
Logging configuration for the service.
logging:
driver: syslog
options:
syslog-address: "tcp://192.168.0.42:123"
The driver name specifies a logging driver for the service’s containers, as with the --log-
driver option for docker run (documented here).
driver: "json-file"
driver: "syslog"
driver: "none"
Note: Only the json-file and journald drivers make the logs available directly from docker-compose up and docker-compose logs. Using any other driver does not print any logs.
Specify logging options for the logging driver with the options key, as with the --log-opt option for docker run.
Logging options are key-value pairs. An example of syslog options:
driver: "syslog"
options:
syslog-address: "tcp://192.168.0.42:123"
network_mode
Version 2 file format and up. Replaces the version 1 net option.
Network mode. Use the same values as the docker client --net parameter, plus the special
form service:[service name].
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
networks
Version 2 file format and up. Replaces the version 1 net option.
Networks to join, referencing entries under the top-level networks key.
services:
some-service:
networks:
- some-network
- other-network
ALIASES
Aliases (alternative hostnames) for this service on the network. Other containers on the same
network can use either the service name or this alias to connect to one of the service’s containers.
Since aliases is network-scoped, the same service can have different aliases on different networks.
Note: A network-wide alias can be shared by multiple containers, and even by multiple services. If it
is, then exactly which container the name resolves to is not guaranteed.
services:
some-service:
networks:
some-network:
aliases:
- alias1
- alias3
other-network:
aliases:
- alias2
In the example below, three services are provided (web, worker, and db), along with two networks
(new and legacy). The db service is reachable at the hostname db or database on the new network,
and at db or mysql on the legacy network.
version: "2.4"
services:
web:
build: ./web
networks:
- new
worker:
build: ./worker
networks:
- legacy
db:
image: mysql
networks:
new:
aliases:
- database
legacy:
aliases:
- mysql
networks:
new:
legacy:
IPV4_ADDRESS, IPV6_ADDRESS
Specify a static IP address for containers for this service when joining the network.
The corresponding network configuration in the top-level networks section must have an ipam block
with subnet and gateway configurations covering each static address. If IPv6 addressing is desired,
the enable_ipv6 option must be set.
An example:
version: "2.4"
services:
app:
image: busybox
command: ifconfig
networks:
app_net:
ipv4_address: 172.16.238.10
ipv6_address: 2001:3984:3989::10
networks:
app_net:
driver: bridge
enable_ipv6: true
ipam:
driver: default
config:
- subnet: 172.16.238.0/24
gateway: 172.16.238.1
- subnet: 2001:3984:3989::/64
gateway: 2001:3984:3989::1
LINK_LOCAL_IPS
Added in version 2.1 file format.
Specify a list of link-local IPs. Link-local IPs are special IPs which belong to a well known subnet and
are purely managed by the operator, usually dependent on the architecture where they are
deployed. Therefore they are not managed by docker (IPAM driver).
Example usage:
version: "2.4"
services:
app:
image: busybox
command: top
networks:
app_net:
link_local_ips:
- 57.123.22.11
- 57.123.22.13
networks:
app_net:
driver: bridge
PRIORITY
Specify a priority to indicate in which order Compose should connect the service’s containers to its
networks. If unspecified, the default value is 0.
In the following example, the app service connects to app_net_1 first as it has the highest priority. It
then connects to app_net_3, then app_net_2, which uses the default priority value of 0.
version: "2.4"
services:
app:
image: busybox
command: top
networks:
app_net_1:
priority: 1000
app_net_2:
app_net_3:
priority: 100
networks:
app_net_1:
app_net_2:
app_net_3:
Note: If multiple networks have the same priority, the connection order is undefined.
pid
pid: "host"
pid: "container:custom_container_1"
pid: "service:foobar"
If set to “host”, the service’s PID mode is the host PID mode. This turns on sharing of the PID address space between the container and the host operating system. Containers launched with this flag can access and manipulate other containers in the bare-metal machine’s namespace and vice versa.
Note: the service: and container: forms require version 2.1 or above
pids_limit
Added in version 2.1 file format.
Tunes a container’s PIDs limit. Set to -1 for unlimited PIDs.
pids_limit: 10
platform
Added in version 2.4 file format.
Target platform containers for this service will run on, using the os[/arch[/variant]] syntax, e.g.
platform: osx
platform: windows/amd64
platform: linux/arm64/v8
This parameter determines which version of the image will be pulled and/or on which platform the
service’s build will be performed.
ports
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral
host port is chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results
when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a
base-60 value. For this reason, we recommend always explicitly specifying your port mappings as
strings.
ports:
- "3000"
- "3000-3005"
- "8000:8000"
- "9090-9091:8080-8081"
- "49100:22"
- "127.0.0.1:8001:8001"
- "127.0.0.1:5000-5010:5000-5010"
- "6060:6060/udp"
- "12400-12500:1240"
runtime
Added in version 2.3 file format
Specify which runtime to use for the service’s containers. Default runtime and available runtimes are
listed in the output of docker info.
web:
image: busybox:latest
command: true
runtime: runc
scale
Added in version 2.2 file format
Specify the default number of containers to deploy for this service. Whenever you run docker-
compose up, Compose creates or removes containers to match the specified number. This value can
be overridden using the --scale flag.
web:
image: busybox:latest
command: echo 'scaled'
scale: 3
security_opt
Override the default labeling scheme for each container.
security_opt:
- label:user:USER
- label:role:ROLE
stop_grace_period
Specify how long to wait when attempting to stop a container if it doesn’t handle SIGTERM (or
whatever stop signal has been specified with stop_signal), before sending SIGKILL. Specified as
a duration.
stop_grace_period: 1s
stop_grace_period: 1m30s
By default, stop waits 10 seconds for the container to exit before sending SIGKILL.
stop_signal
Sets an alternative signal to stop the container. By default stop uses SIGTERM. Setting an
alternative signal using stop_signal causes stop to send that signal instead.
stop_signal: SIGUSR1
storage_opt
Added in version 2.1 file format.
Set storage driver options for this service.
storage_opt:
size: '1G'
sysctls
Added in version 2.1 file format.
Kernel parameters to set in the container. You can use either an array or a dictionary.
sysctls:
net.core.somaxconn: 1024
net.ipv4.tcp_syncookies: 0
sysctls:
- net.core.somaxconn=1024
- net.ipv4.tcp_syncookies=0
ulimits
Override the default ulimits for a container. You can either specify a single limit as an integer or
soft/hard limits as a mapping.
ulimits:
nproc: 65535
nofile:
soft: 20000
hard: 40000
userns_mode
Added in version 2.1 file format.
userns_mode: "host"
Disables the user namespace for this service, if Docker daemon is configured with user
namespaces. See dockerd for more information.
volumes
Mount host folders or named volumes. Named volumes need to be specified with the top-
level volumes key.
You can mount a relative path on the host, which expands relative to the directory of the Compose
configuration file being used. Relative paths should always begin with . or ...
SHORT SYNTAX
The short syntax uses the generic [SOURCE:]TARGET[:MODE] format, where SOURCE can be either a
host path or volume name. TARGET is the container path where the volume is mounted. Standard
modes are ro for read-only and rw for read-write (default).
volumes:
# Just specify a path and let the Engine create a volume
- /var/lib/mysql
# Named volume
- datavolume:/var/lib/mysql
LONG SYNTAX
Added in version 2.3 file format.
The long form syntax allows the configuration of additional fields that can’t be expressed in the short
form.
version: "2.4"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
Note: When creating bind mounts, using the long syntax requires the referenced folder to be created
beforehand. Using the short syntax creates the folder on the fly if it doesn’t exist. See the bind
mounts documentation for more information.
volume_driver
Specify a default volume driver to be used for all declared volumes on this service.
volume_driver: mydriver
Note: In version 2 files, this option only applies to anonymous volumes (those specified in the image,
or specified under volumes without an explicit named volume or host path). To configure the driver
for a named volume, use the driver key under the entry in the top-level volumes option.
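A minimal sketch of configuring the driver for a named volume via the top-level volumes key instead (the driver name is illustrative):
version: "2.4"
services:
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
    driver: mydriver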
volumes_from
Mount all of the volumes from another service or container, optionally specifying read-only access
(ro) or read-write (rw). If no access level is specified, then read-write is used.
volumes_from:
- service_name
- service_name:ro
- container:container_name
- container:container_name:rw
Notes
The container:... formats are only supported in the version 2 file format.
In version 1, you can use container names without marking them as such:
o service_name
o service_name:ro
o container_name
o container_name:rw
restart
no is the default restart policy, and it doesn’t restart a container under any circumstance.
When always is specified, the container always restarts. The on-failure policy restarts a container if the exit code indicates an on-failure error. The unless-stopped policy always restarts a container, except when the container has been stopped.
restart: "no"
restart: always
restart: on-failure
restart: unless-stopped
cpu_count, cpu_percent, cpus, cpu_shares, cpu_quota, cpu_period, cpuset, user, working_dir, domainname, hostname, ipc, mac_address, mem_limit, memswap_limit, mem_reservation, privileged, oom_score_adj, oom_kill_disable, read_only, shm_size, stdin_open, tty
Each of these is a single value, analogous to its docker run counterpart.
Note: The following options were added in version 2.2: cpu_count, cpu_percent, cpus. The following options were added in version 2.1: oom_kill_disable, cpu_period.
cpu_count: 2
cpu_percent: 50
cpus: 0.5
cpu_shares: 73
cpu_quota: 50000
cpu_period: 20ms
cpuset: 0,1
user: postgresql
working_dir: /code
domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43
mem_limit: 1000000000
memswap_limit: 2000000000
mem_reservation: 512m
privileged: true
oom_score_adj: 500
oom_kill_disable: true
read_only: true
shm_size: 64M
stdin_open: true
tty: true
Specifying durations
Some configuration options, such as the interval and timeout sub-options for healthcheck, accept a duration as a string in a format that looks like this:
2.5s
10s
1m30s
2h32m
5h34m56s
The supported units are us, ms, s, m and h.
Specifying byte values
Some configuration options, such as the shm_size sub-option for build, accept a byte value as a string in a format that looks like this:
2b
1024kb
2048k
300m
1gb
The supported units are b, k, m and g, and their alternative notation kb, mb and gb. Decimal values are not supported at this time.
Here’s an example of a two-service setup where a database’s data directory is shared with another
service as a volume so that it can be periodically backed up:
version: "2.4"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
- data-volume:/var/lib/backup/data
volumes:
data-volume:
An entry under the top-level volumes key can be empty, in which case it uses the default driver
configured by the Engine (in most cases, this is the local driver). Optionally, you can configure it
with the following keys:
driver
Specify which volume driver should be used for this volume. Defaults to whatever driver the Docker
Engine has been configured to use, which in most cases is local. If the driver is not available, the
Engine returns an error when docker-compose up tries to create the volume.
driver: foobar
driver_opts
Specify a list of options as key-value pairs to pass to the driver for this volume. Those options are
driver-dependent - consult the driver’s documentation for more information. Optional.
driver_opts:
foo: "bar"
baz: 1
external
If set to true, specifies that this volume has been created outside of Compose. docker-compose
up does not attempt to create it, and raises an error if it doesn’t exist.
For version 2.0 of the format, external cannot be used in conjunction with other volume
configuration keys (driver, driver_opts, labels). This limitation no longer exists forversion 2.1 and
above.
In the example below, instead of attempting to create a volume called [projectname]_data, Compose looks for an existing volume simply called data and mounts it into the db service’s containers.
version: "2.4"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
external: true
You can also specify the name of the volume separately from the name used to refer to it within the
Compose file:
volumes:
data:
external:
name: actual-name-of-volume
Note: In newer versions of Compose, the external.name property is deprecated in favor of simply
using the name property.
labels
Added in version 2.1 file format.
Add metadata to containers using Docker labels. You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
com.example.description: "Database volume"
com.example.department: "IT/Ops"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Database volume"
- "com.example.department=IT/Ops"
- "com.example.label-with-empty-value"
name
Added in version 2.1 file format
Set a custom name for this volume.
version: "2.4"
volumes:
data:
name: my-app-data
It can also be used in conjunction with the external property:
version: "2.4"
volumes:
data:
external: true
name: my-app-data
Network configuration reference
The top-level networks key lets you specify networks to be created.
driver
Specify which driver should be used for this network.
The default driver depends on how the Docker Engine you’re using is configured, but in most
instances it is bridge on a single host and overlay on a Swarm.
driver: overlay
Starting in Compose file format 2.1, overlay networks are always created as attachable, and this is
not configurable. This means that standalone containers can connect to overlay networks.
driver_opts
Specify a list of options as key-value pairs to pass to the driver for this network. Those options are
driver-dependent - consult the driver’s documentation for more information. Optional.
driver_opts:
foo: "bar"
baz: 1
enable_ipv6
Added in version 2.1 file format.
Enable IPv6 networking on this network.
ipam
Specify custom IPAM config. This is an object with several properties, each of which is optional.
A full example:
ipam:
driver: default
config:
- subnet: 172.28.0.0/16
ip_range: 172.28.5.0/24
gateway: 172.28.5.254
aux_addresses:
host1: 172.28.1.5
host2: 172.28.1.6
host3: 172.28.1.7
options:
foo: bar
baz: "0"
internal
By default, Docker also connects a bridge network to it to provide external connectivity. If you want
to create an externally isolated overlay network, you can set this option to true.
labels
Added in version 2.1 file format.
Add metadata to containers using Docker labels. You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
com.example.description: "Financial transaction network"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Financial transaction network"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
external
If set to true, specifies that this network has been created outside of Compose. docker-compose
up does not attempt to create it, and raises an error if it doesn’t exist.
For version 2.0 of the format, external cannot be used in conjunction with other network
configuration keys (driver, driver_opts, ipam, internal). This limitation no longer exists for version
2.1 and above.
In the example below, proxy is the gateway to the outside world. Instead of attempting to create a network called [projectname]_outside, Compose looks for an existing network simply called outside and connects the proxy service’s containers to it.
version: "2.4"
services:
proxy:
build: ./proxy
networks:
- outside
- default
app:
build: ./app
networks:
- default
networks:
outside:
external: true
You can also specify the name of the network separately from the name used to refer to it within the
Compose file:
networks:
outside:
external:
name: actual-name-of-network
name
Added in version 2.1 file format
Set a custom name for this network.
version: "2.4"
networks:
network1:
name: my-app-net
Variable substitution
Your configuration options can contain environment variables. Compose uses the variable values
from the shell environment in which docker-compose is run. For example, suppose the shell
contains POSTGRES_VERSION=9.3 and you supply this configuration:
db:
image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for the POSTGRES_VERSION environment variable in the shell and substitutes its value in. For this example, Compose resolves the image to postgres:9.3 before running the configuration.
If an environment variable is not set, Compose substitutes with an empty string. In the example
above, if POSTGRES_VERSION is not set, the value for the image option is postgres:.
You can set default values for environment variables using a .env file, which Compose automatically looks for. Values set in the shell environment override those set in the .env file.
Important: The .env file feature only works when you use the docker-compose up command and does not work with docker stack deploy.
Both $VARIABLE and ${VARIABLE} syntax are supported. Additionally, when using the 2.1 file format, it is possible to provide inline default values using typical shell syntax:
${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in the environment.
${VARIABLE-default} evaluates to default only if VARIABLE is unset in the environment.
Similarly, the following syntax allows you to specify mandatory variables:
${VARIABLE:?err} exits with an error message containing err if VARIABLE is unset or empty in the environment.
${VARIABLE?err} exits with an error message containing err if VARIABLE is unset in the environment.
If you forget and use a single dollar sign ($), Compose interprets the value as an environment variable and warns you that the variable is not set.
Extension fields
Added in version 2.1 file format.
It is possible to re-use configuration fragments using extension fields. Those special fields can be of any format as long as they are located at the root of your Compose file and their names start with the x- character sequence.
Note
Starting with the 3.7 format (for the 3.x series) and 2.4 format (for the 2.x series), extension fields are
also allowed at the root of service, volume, network, config and secret definitions.
version: '3.4'
x-custom:
items:
- a
- b
options:
max-size: '12m'
name: "custom"
The contents of those fields are ignored by Compose, but they can be inserted in your resource
definitions using YAML anchors. For example, if you want several of your services to use the same
logging configuration:
logging:
options:
max-size: '12m'
max-file: '5'
driver: json-file
You may write a Compose file as follows:
version: '3.4'
x-logging:
&default-logging
options:
max-size: '12m'
max-file: '5'
driver: json-file
services:
web:
image: myapp/web:latest
logging: *default-logging
db:
image: mysql:latest
logging: *default-logging
It is also possible to partially override values in extension fields using the YAML merge type. For
example:
version: '3.4'
x-volumes:
&default-volume
driver: foobar-storage
services:
web:
image: myapp/web:latest
volumes: ["vol1", "vol2", "vol3"]
volumes:
vol1: *default-volume
vol2:
<< : *default-volume
name: volume02
vol3:
<< : *default-volume
driver: default
name: volume-local
This table shows which Compose file versions support specific Docker releases.
Compose file format    Docker Engine release
3.7    18.06.0+
3.6    18.02.0+
3.5    17.12.0+
3.4    17.09.0+
3.3    17.06.0+
3.2    17.04.0+
3.1    1.13.1+
3.0    1.13.0+
2.4    17.12.0+
2.3    17.06.0+
2.2    1.13.0+
2.1    1.12.0+
2.0    1.10.0+
1.0    1.9.1+
In addition to the Compose file format versions shown in the table, Compose itself is on a release schedule, as shown in Compose releases, but file format versions do not necessarily increment with each release. For example, Compose file format 3.0 was first introduced in Compose release 1.10.0, and versioned gradually in subsequent releases.
This section contains a list of all configuration options supported by a service definition in version 1.
build
Configuration options that are applied at build time.
build: ./dir
Note
Only the string form (build: .) is allowed - not the object form that is allowed in Version 2
and up.
Using build together with image is not allowed. Attempting to do so results in an error.
DOCKERFILE
Alternate Dockerfile.
Compose uses an alternate file to build with. A build path must also be specified.
build: .
dockerfile: Dockerfile-alternate
Note
In the version 1 file format, dockerfile is different from newer versions in two ways:
o It appears alongside build, not as a sub-option (as shown above).
o Using dockerfile together with image is not allowed. Attempting to do so results in an error.
cap_add, cap_drop
Add or drop container capabilities. See man 7 capabilities for a full list.
cap_add:
- ALL
cap_drop:
- NET_ADMIN
- SYS_ADMIN
Note: These options are ignored when deploying a stack in swarm mode with a (version 3)
Compose file.
command
Override the default command.
cgroup_parent
Specify an optional parent cgroup for the container.
cgroup_parent: m-executor-abcd
container_name
Specify a custom container name, rather than a generated default name.
container_name: my-web-container
Because Docker container names must be unique, you cannot scale a service beyond 1 container if
you have specified a custom name. Attempting to do so results in an error.
devices
List of device mappings. Uses the same format as the --device docker client create option.
devices:
- "/dev/ttyUSB0:/dev/ttyUSB0"
dns
Custom DNS servers. Can be a single value or a list.
dns: 8.8.8.8
dns:
- 8.8.8.8
- 9.9.9.9
dns_search
Custom DNS search domains. Can be a single value or a list.
dns_search: example.com
dns_search:
- dc1.example.com
- dc2.example.com
entrypoint
Override the default entrypoint.
entrypoint: /code/entrypoint.sh
entrypoint:
- php
- -d
- zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-
20100525/xdebug.so
- -d
- memory_limit=-1
- vendor/bin/phpunit
Note: Setting entrypoint both overrides any default entrypoint set on the service’s image with
the ENTRYPOINT Dockerfile instruction, and clears out any default command on the image - meaning
that if there’s a CMD instruction in the Dockerfile, it is ignored.
env_file
Add environment variables from a file. Can be a single value or a list.
If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to
the directory that file is in.
env_file: .env
env_file:
- ./common.env
- ./apps/web.env
- /opt/secrets.env
Compose expects each line in an env file to be in VAR=VAL format. Lines beginning with # are processed as comments and are ignored. Blank lines are also ignored.
# Set Rails/Rack environment
RACK_ENV=development
Note: If your service specifies a build option, variables defined in environment files
are not automatically visible during the build.
The value of VAL is used as is and not modified at all. For example, if the value is surrounded by
quotes (as is often the case of shell variables), the quotes are included in the value passed to
Compose.
Keep in mind that the order of files in the list is significant in determining the value assigned to a
variable that shows up more than once. The files in the list are processed from the top down. For the
same variable specified in file a.env and assigned a different value in file b.env, if b.env is listed
below (after), then the value from b.env stands. For example, given the following declaration
in docker-compose.yml:
services:
some-service:
env_file:
- a.env
- b.env
# a.env
VAR=1
and
# b.env
VAR=hello
$VAR is hello.
environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true, false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the YAML parser.
Environment variables with only a key are resolved to their values on the machine Compose is
running on, which can be helpful for secret or host-specific values.
environment:
RACK_ENV: development
SHOW: 'true'
SESSION_SECRET:
environment:
- RACK_ENV=development
- SHOW=true
- SESSION_SECRET
Note: If your service specifies a build option, variables defined in environment are not automatically visible during the build.
expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked
services. Only the internal port can be specified.
expose:
- "3000"
- "8000"
extends
Extend another service, in the current file or another, optionally overriding configuration.
You can use extends on any service together with other configuration keys. The extends value must be a dictionary defined with a required service and an optional file key.
extends:
file: common.yml
service: webapp
The service value is the name of the service being extended, for example web or database. The file value is the location of a Compose configuration file defining that service.
If you omit the file key, Compose looks for the service configuration in the current file. The file value
can be an absolute or relative path. If you specify a relative path, Compose treats it as relative to the
location of the current file.
You can extend a service that itself extends another. You can extend indefinitely. Compose does not
support circular references and docker-compose returns an error if it encounters one.
For more on extends, see the extends documentation.
external_links
Link to containers started outside this docker-compose.yml or even outside of Compose, especially
for containers that provide shared or common services. external_links follow semantics similar
to links when specifying both the container name and the link alias (CONTAINER:ALIAS).
external_links:
- redis_1
- project_db_1:mysql
- project_db_1:postgresql
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the IP address and hostname is created in /etc/hosts inside containers for this service, e.g.:
162.242.195.82 somehost
50.31.209.229 otherhost
image
Specify the image to start the container from. Can either be a repository/tag or a partial image ID.
image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in
which case it builds it using the specified options and tags it with the specified tag.
Note: In the version 1 file format, using build together with image is not allowed. Attempting to do so
results in an error.
labels
Add metadata to containers using Docker labels. You can use either an array or a dictionary.
It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with
those used by other software.
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
links
Link to containers in another service. Either specify both the service name and a link alias
(SERVICE:ALIAS), or just the service name.
Links are a legacy option. We recommend using networks instead.
web:
links:
- db
- db:database
- redis
Containers for the linked service are reachable at a hostname identical to the alias, or the service
name if no alias was specified.
Links also express dependency between services in the same way as depends_on, so they
determine the order of service startup.
Note: If you define both links and networks, services with links between them must share at least
one network in common in order to communicate.
log_driver
Version 1 file format only. In version 2 and up, use logging.
Specify a log driver. The default is json-file.
log_driver: syslog
log_opt
Version 1 file format only. In version 2 and up, use logging.
Specify logging options as key-value pairs. An example of syslog options:
log_opt:
syslog-address: "tcp://192.168.0.42:123"
net
Version 1 file format only. In version 2 and up, use network_mode and networks.
Network mode. Use the same values as the docker client --net parameter. The container:... form
can take a service name instead of a container name or id.
net: "bridge"
net: "host"
net: "none"
net: "container:[service name or container name/id]"
pid
pid: "host"
Sets the PID mode to the host PID mode. This turns on sharing of the PID address space between
the container and the host operating system. Containers launched with this flag can access and
manipulate other containers in the bare-metal machine’s namespace, and vice versa.
ports
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral
host port is chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results
when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a
base-60 value. For this reason, we recommend always explicitly specifying your port mappings as
strings.
ports:
- "3000"
- "3000-3005"
- "8000:8000"
- "9090-9091:8080-8081"
- "49100:22"
- "127.0.0.1:8001:8001"
- "127.0.0.1:5000-5010:5000-5010"
- "6060:6060/udp"
security_opt
Override the default labeling scheme for each container.
security_opt:
- label:user:USER
- label:role:ROLE
stop_signal
Sets an alternative signal to stop the container. By default stop uses SIGTERM. Setting an
alternative signal using stop_signal causes stop to send that signal instead.
stop_signal: SIGUSR1
ulimits
Override the default ulimits for a container. You can either specify a single limit as an integer or
soft/hard limits as a mapping.
ulimits:
nproc: 65535
nofile:
soft: 20000
hard: 40000
volumes, volume_driver
Mount paths or named volumes, optionally specifying a path on the host machine (HOST:CONTAINER),
or an access mode (HOST:CONTAINER:ro). For version 2 files, named volumes need to be specified
with the top-level volumes key. When using version 1, the Docker Engine creates the named volume
automatically if it doesn’t exist.
You can mount a relative path on the host, which expands relative to the directory of the Compose
configuration file being used. Relative paths should always begin with . or ..
volumes:
# Just specify a path and let the Engine create a volume
- /var/lib/mysql
# User-relative path
- ~/configs:/etc/configs/:ro
# Named volume
- datavolume:/var/lib/mysql
There are several things to note, depending on which Compose file version you’re using:
For version 1 files, both named volumes and container volumes use the specified driver.
No path expansion is done if you have also specified a volume_driver. For example, if you
specify a mapping of ./foo:/data, the ./foo part is passed straight to the volume driver
without being expanded.
volumes_from
Mount all of the volumes from another service or container, optionally specifying read-only access
(ro) or read-write (rw). If no access level is specified, then read-write is used.
volumes_from:
- service_name
- service_name:ro
Each of the following options takes a single value, analogous to its docker run counterpart:
cpu_shares: 73
cpu_quota: 50000
cpuset: 0,1
user: postgresql
working_dir: /code
domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43
mem_limit: 1000000000
memswap_limit: 2000000000
privileged: true
restart: always
read_only: true
shm_size: 64M
stdin_open: true
tty: true
The Compose file is a YAML file defining services, networks, and volumes for a Docker application.
The Compose file formats are now described in these references, specific to each version.
Compatibility matrix
There are several versions of the Compose file format – 1, 2, 2.x, and 3.x.
This table shows which Compose file versions support specific Docker releases.
Compose file format    Docker Engine release
3.7    18.06.0+
3.6    18.02.0+
3.5    17.12.0+
3.4    17.09.0+
3.3    17.06.0+
3.2    17.04.0+
3.1    1.13.1+
3.0    1.13.0+
2.4    17.12.0+
2.3    17.06.0+
2.2    1.13.0+
2.1    1.12.0+
2.0    1.10.0+
1.0    1.9.1+
In addition to the Compose file format versions shown in the table, Compose itself is on a release
schedule, as shown in Compose releases, but file format versions do not necessarily increment with
each release. For example, Compose file format 3.0 was first introduced in Compose release 1.10.0,
and versioned gradually in subsequent releases.
Looking for more detail on Docker and Compose compatibility?
We recommend keeping up-to-date with newer releases as much as possible. However, if you are
using an older version of Docker and want to determine which Compose release is compatible, refer
to the Compose release notes. Each set of release notes gives details on which versions of Docker
Engine are supported, along with compatible Compose file format versions. (See also, the
discussion in issue #3404.)
For details on versions and how to upgrade, see Versioning and Upgrading.
Versioning
There are currently three versions of the Compose file format:
Version 1, the legacy format. This is specified by omitting a version key at the root of the
YAML.
Version 2.x. This is specified with a version: '2' or version: '2.1', etc., entry at the root of
the YAML.
Version 3.x, the latest and recommended version, designed to be cross-compatible between
Compose and the Docker Engine’s swarm mode. This is specified with a version:
'3' or version: '3.1', etc., entry at the root of the YAML.
v2 and v3 Declaration
Note: When specifying the Compose file version to use, make sure to specify both
the major and minor numbers. If no minor version is given, 0 is used by default and not the latest
minor version.
The Compatibility Matrix shows Compose file versions mapped to Docker Engine releases.
Note: If you’re using multiple Compose files or extending services, each file must be of the same
version - you cannot, for example, mix version 1 and 2 in a single project.
Compose does not take advantage of networking when you use version 1: every container is placed
on the default bridge network and is reachable from every other container at its IP address. You
need to use links to enable discovery between containers.
Example:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
links:
- redis
redis:
image: redis
Version 2
Compose files using the version 2 syntax must indicate the version number at the root of the
document. All services must be declared under the services key.
Version 2 files are supported by Compose 1.6.0+ and require a Docker Engine of version 1.10.0+.
Named volumes can be declared under the volumes key, and networks can be declared under
the networks key.
Note: When specifying the Compose file version to use, make sure to specify both
the major and minor numbers. If no minor version is given, 0 is used by default and not the
latest minor version. As a result, features added in later versions will not be supported. For
example:
version: "2"
is equivalent to:
version: "2.0"
Simple example:
version: "2.4"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
redis:
image: redis
version: "2.4"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
networks:
- front-tier
- back-tier
redis:
image: redis
volumes:
- redis-data:/var/lib/redis
networks:
- back-tier
volumes:
redis-data:
driver: local
networks:
front-tier:
driver: bridge
back-tier:
driver: bridge
aliases
The depends_on option can be used in place of links to indicate dependencies between
services and startup order.
version: "2.4"
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
ipv4_address, ipv6_address
Version 2.1
An upgrade of version 2 that introduces new parameters only available with Docker Engine
version 1.12.0+. Version 2.1 files are supported by Compose 1.9.0+. Introduces the following
additional parameters:
link_local_ips
isolation in build configurations and service definitions
labels for volumes and networks
name for volumes
userns_mode
healthcheck
sysctls
pids_limit
oom_kill_disable
cpu_period
Version 2.2
An upgrade of version 2.1 that introduces new parameters only available with Docker Engine
version 1.13.0+. Version 2.2 files are supported by Compose 1.13.0+. This version also allows you
to specify default scale numbers inside the service’s configuration.
init
scale
cpu_rt_runtime and cpu_rt_period
Version 2.3
An upgrade of version 2.2 that introduces new parameters only available with Docker Engine
version 17.06.0+. Version 2.3 files are supported by Compose 1.16.0+.
Version 3
Designed to be cross-compatible between Compose and the Docker Engine’s swarm mode, version
3 removes several options and adds several more.
Added: deploy
Note: When specifying the Compose file version to use, make sure to specify both
the major and minor numbers. If no minor version is given, 0 is used by default and not the
latest minor version. As a result, features added in later versions will not be supported. For
example:
version: "3"
is equivalent to:
version: "3.0"
Version 3.3
An upgrade of version 3 that introduces new parameters only available with Docker Engine
version 17.06.0 and higher.
build labels
credential_spec
configs
deploy endpoint_mode
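As an illustration (not taken from the reference itself), a minimal version 3.3 file might combine
deploy with endpoint_mode; the image name and replica count are placeholders:
version: "3.3"
services:
 web:
  image: example/web
  deploy:
   replicas: 2
   endpoint_mode: vip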
Version 3.4
An upgrade of version 3 that introduces new parameters. It is only available with Docker Engine
version 17.09.0 and higher.
Version 3.5
An upgrade of version 3 that introduces new parameters. It is only available with Docker Engine
version 17.12.0 and higher.
Version 3.6
An upgrade of version 3 that introduces new parameters. It is only available with Docker Engine
version 18.02.0 and higher.
Version 3.7
An upgrade of version 3 that introduces new parameters. It is only available with Docker Engine
version 18.06.0 and higher.
Upgrading
Version 2.x to 3.x
Between versions 2.x and 3.x, the structure of the Compose file is the same, but several options
have been removed:
volume_driver: Instead of setting the volume driver on the service, define a volume using
the top-level volumes option and specify the driver there.
version: "3.7"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
driver: mydriver
Version 1 to 2.x
In the majority of cases, moving from version 1 to 2 is a very simple process:
1. Indent the whole file by one level and put a services: key at the top.
2. Add a version: '2' line at the top of the file.
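As a sketch of those two steps, a version 1 file like the one shown earlier:
web:
 build: .
 ports:
  - "5000:5000"
 links:
  - redis
redis:
 image: redis
becomes:
version: '2'
services:
 web:
  build: .
  ports:
   - "5000:5000"
  links:
   - redis
 redis:
  image: redis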
It’s more complicated if you’re using particular configuration features:
dockerfile: this option now lives under the build key:
build:
 context: .
 dockerfile: Dockerfile-alternate
log_driver and log_opt: these now live under the logging key:
logging:
 driver: syslog
 options:
  syslog-address: "tcp://192.168.0.42:123"
links with environment variables: the environment variables created by links (such as DB_PORT) are
no longer created by the version 2 networking system. If your app relies on them, either connect to the
linked service directly by hostname or set the variables yourself:
links:
 - db
environment:
 - DB_PORT=tcp://db:5432
external_links: Compose uses Docker networks when running version 2 projects, so links
behave slightly differently. In particular, two containers must be connected to at least one
network in common in order to communicate, even if explicitly linked together.
Either connect the external container to your app’s default network, or connect both the
external container and your service’s containers to an external network.
net: this is now replaced by network_mode. If you’re using net: "container:[container name/id]", the value does not need to change.
net: "container:cont-name" -> network_mode: "container:cont-name"
net: "container:abc12345" -> network_mode: "container:abc12345"
volumes with named volumes: these must now be explicitly declared in a top-
level volumes section of your Compose file. If a service mounts a named volume called data,
you must declare a data volume in your top-level volumes section. The whole file might look
like this:
version: "2.4"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data: {}
By default, Compose creates a volume whose name is prefixed with your project name. If
you want it to just be called data, declare it as external:
volumes:
data:
external: true
Compatibility mode
docker-compose 1.20.0 introduces a new --compatibility flag designed to help developers
transition to version 3 more easily. When enabled, docker-compose reads the deploy section of each
service’s definition and attempts to translate it into the equivalent version 2 parameter. Currently,
deploy keys such as resources (limits and reservations), replicas, and restart_policy (condition and
max_attempts) are translated.
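To try it, pass the flag on the docker-compose command line itself; for example (assuming a version 3
file in the current directory):
$ docker-compose --compatibility config
$ docker-compose --compatibility up -d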
Note: This is a modified copy of the Docker Stacks and Distributed Application Bundles document in
the docker/docker-ce repo. It’s been updated to accurately reflect newer releases.
Overview
A Dockerfile can be built into an image, and containers can be created from that image. Similarly,
a docker-compose.yml can be built into a distributed application bundle, and stackscan be
created from that bundle. In that sense, the bundle is a multi-service distributable image format.
Docker Stacks and Distributed Application Bundles started as experimental features introduced in
Docker 1.12 and Docker Compose 1.8, alongside the concept of swarm mode, and nodes and
services in the Engine API. Neither Docker Engine nor the Docker Registry support distribution of
bundles, and the concept of a bundle is not the emphasis for new releases going forward.
However, swarm mode, multi-service applications, and stack files now are fully supported. A stack
file is a particular type of version 3 Compose file.
If you are just getting started with Docker and want to learn the best way to deploy multi-service
applications, a good place to start is the Get Started walkthrough. This shows you how to define a
service configuration in a Compose file, deploy the app, and use the relevant tools and commands.
Produce a bundle
The easiest way to produce a bundle is to generate it using docker-compose from an existing docker-
compose.yml. Of course, that’s just one possible way to proceed, in the same way that docker
build isn’t the only way to produce a Docker image.
From docker-compose:
$ docker-compose bundle
WARNING: Unsupported key 'network_mode' in services.nsqd - ignoring
WARNING: Unsupported key 'links' in services.nsqd - ignoring
WARNING: Unsupported key 'volumes' in services.nsqd - ignoring
[...]
Wrote bundle to vossibility-stack.dab
The docker deploy command described below is only included in experimental Docker builds. If
you’re on Mac or Windows, download the “Beta channel” version of Docker Desktop for Mac or
Docker Desktop for Windows to get it. If you’re on Linux, follow the instructions in the experimental
build README.
A stack is created using the docker deploy command:
# docker deploy --help
Options:
--file string Path to a Distributed Application Bundle file (Default:
STACK.dab)
--help Print usage
--with-registry-auth Send registry authentication details to Swarm agents
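For example, to deploy the bundle produced above (a sketch; the stack name is derived from
the .dab file name):
# docker deploy vossibility-stack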
# docker service ls
ID NAME REPLICAS IMAGE
COMMAND
29bv0vnlm903 vossibility-stack_lookupd 1
nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
/nsqlookupd
4awt47624qwh vossibility-stack_nsqd 1
nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662
/nsqd --data-path=/data --lookupd-tcp-address=lookupd:4160
4tjx9biia6fs vossibility-stack_elasticsearch 1
elasticsearch@sha256:12ac7c6af55d001f71800b83ba91a04f716e58d82e748fa6e5a7359eed2301aa
7563uuzr9eys vossibility-stack_kibana 1
kibana@sha256:6995a2d25709a62694a937b8a529ff36da92ebee74bafd7bf00e6caf6db2eb03
9gc5m4met4he vossibility-stack_logstash 1
logstash@sha256:2dc8bddd1bb4a5a34e8ebaf73749f6413c101b2edef6617f2f7713926d2141fe
logstash -f /etc/logstash/conf.d/logstash.conf
axqh55ipl40h vossibility-stack_vossibility-collector 1 icecrime/vossibility-
collector@sha256:f03f2977203ba6253988c18d04061c5ec7aab46bca9dfd89a9a1fa4500989fba --
config /config/config.toml --debug
Manage stacks
Stacks are managed using the docker stack command:
# docker stack --help
Commands:
config Print the stack configuration
deploy Create and update a stack
rm Remove the stack
services List the services in the stack
tasks List the tasks in the stack
Image (required) string
The image that the service runs. Docker images should be referenced with full content hash
to fully specify the deployment artifact for the service.
Example: postgres@sha256:e0a230a9f5b4e1b8b03bb3e8cf7322b0e42b7838c5c87f4545edb48f5
eb8f077
Command []string
Args []string
Env []string
Environment variables.
Labels map[string]string
Ports []Port
Service ports (composed of Port (int) and Protocol (string)). A service description can only
specify the container port to be exposed. These ports can be mapped on runtime hosts at
the operator's discretion.
WorkingDir string
User string
Networks []string
Networks that the service containers should be connected to. An entity deploying a bundle
should create networks as needed.
Note: Some configuration options are not yet supported in the DAB format, including volume
mounts.
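For orientation only, a bundle is a JSON document built from the fields above. The following is a
rough sketch; the top-level Version and Services keys and the placeholder image digests are
assumptions rather than definitive syntax:
{
 "Version": "0.1",
 "Services": {
  "web": {
   "Image": "example/web@sha256:0000000000000000000000000000000000000000000000000000000000000000",
   "Networks": ["default"],
   "Ports": [{"Protocol": "tcp", "Port": 8000}]
  },
  "db": {
   "Image": "postgres@sha256:1111111111111111111111111111111111111111111111111111111111111111",
   "Networks": ["default"]
  }
 }
}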
Docker Compose and Docker Swarm aim to have full integration, meaning you can point a Compose
app at a Swarm cluster and have it all just work as if you were using a single Docker host.
The actual extent of integration depends on which version of the Compose file format you are using:
1. If you’re using version 1 along with links, your app works, but Swarm schedules all
containers on one host, because links between containers do not work across hosts with the
old networking system.
2. If you’re using version 2, your app should work with no changes, subject to the limitations
described below, as long as the Swarm cluster is configured to use the overlay driver, or a custom
driver which supports multi-host networking.
Read Get started with multi-host networking to see how to set up a Swarm cluster with Docker
Machine and the overlay driver. Once you’ve got it running, deploying your app to it should be as
simple as:
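For example, with a legacy Swarm created through Docker Machine (the machine name swarm-master
is a placeholder):
$ eval "$(docker-machine env --swarm swarm-master)"
$ docker-compose up -d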
Limitations
Building images
Swarm can build an image from a Dockerfile just like a single-host Docker instance can, but the
resulting image only lives on a single node and won’t be distributed to other nodes.
If you want to use Compose to scale the service in question to multiple nodes, build the image, push
it to a registry such as Docker Hub, and reference it from docker-compose.yml:
$ docker build -t myusername/web .
$ docker push myusername/web
$ cat docker-compose.yml
web:
image: myusername/web
$ docker-compose up -d
$ docker-compose scale web=3
Multiple dependencies
If a service has multiple dependencies of the type which force co-scheduling (see Automatic
scheduling below), it’s possible that Swarm schedules the dependencies on different nodes, making
the dependent service impossible to schedule. For example, here foo needs to be co-scheduled
with bar and baz:
version: "2"
services:
foo:
image: foo
volumes_from: ["bar"]
network_mode: "service:baz"
bar:
image: bar
baz:
image: baz
The problem is that Swarm might first schedule bar and baz on different nodes (since they’re not
dependent on one another), making it impossible to pick an appropriate node for foo.
To work around this, use manual scheduling to ensure that all three services end up on the same
node:
version: "2"
services:
foo:
image: foo
volumes_from: ["bar"]
network_mode: "service:baz"
environment:
- "constraint:node==node-1"
bar:
image: bar
environment:
- "constraint:node==node-1"
baz:
image: baz
environment:
- "constraint:node==node-1"
If a service maps a port from the host, such as 80:8000, then you may get an error like this when
running docker-compose up on it after the first time:
docker: Error response from daemon: unable to find a node that satisfies
container==6ab2dfe36615ae786ef3fc35d641a260e3ea9663d6e69c5b70ce0ca6cb373c02.
The usual cause of this error is that the container has a volume (defined either in its image or in the
Compose file) without an explicit mapping, and so in order to preserve its data, Compose has
directed Swarm to schedule the new container on the same node as the old container. This results in
a port clash.
There are two viable workarounds for this problem:
Specify a named volume, and use a volume driver which is capable of mounting the volume
into the container regardless of what node it’s scheduled on.
Compose does not give Swarm any specific scheduling instructions if a service uses only
named volumes.
version: "2"
services:
web:
build: .
ports:
- "80:8000"
volumes:
- web-logs:/var/log/web
volumes:
web-logs:
driver: custom-volume-driver
Remove the old container before creating the new one. You lose any data in the volume.
$ docker-compose rm -f web
$ docker-compose up web
Scheduling containers
Automatic scheduling
Some configuration options result in containers being automatically scheduled on the same Swarm
node to ensure that they work correctly. These are: network_mode: "service:..." and network_mode:
"container:...", volumes_from, and links.
Manual scheduling
Swarm offers a rich set of scheduling and affinity hints, enabling you to control where containers are
located. They are specified via container environment variables, so you can use
Compose’s environment option to set them.
# Schedule containers on a specific node
environment:
- "constraint:node==node-1"
# Schedule containers on a node that has the 'storage' label set to 'ssd'
environment:
- "constraint:storage==ssd"
Syntax rules
These syntax rules apply to the .env file: Compose expects each line in an env file to be in VAR=VAL
format; lines beginning with # are treated as comments and ignored; blank lines are ignored; and there
is no special handling of quotation marks, so they become part of the value.
The following environment variables can be set in the .env file (or in your shell) to configure the
behavior of the docker-compose command line itself:
COMPOSE_API_VERSION
COMPOSE_CONVERT_WINDOWS_PATHS
COMPOSE_FILE
COMPOSE_HTTP_TIMEOUT
COMPOSE_TLS_VERSION
COMPOSE_PROJECT_NAME
DOCKER_CERT_PATH
DOCKER_HOST
DOCKER_TLS_VERIFY
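As an illustration, a .env file in the project directory might set a few of these alongside application
variables (the values are placeholders):
COMPOSE_PROJECT_NAME=myapp
COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
TAG=v1.5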
Notes
Values present in the environment at runtime always override those defined inside
the .env file. Similarly, values passed via command-line arguments take precedence as well.
Environment variables defined in the .env file are not automatically visible inside containers.
To set container-applicable environment variables, follow the guidelines in the
topic Environment variables in Compose, which describes how to pass shell environment
variables through to containers, define environment variables in Compose files, and more.
Environment variables in Compose
Estimated reading time: 4 minutes
There are multiple parts of Compose that deal with environment variables in one sense or another.
This page should help you find the information you need.
You can use environment variables from your shell to populate values inside a Compose file:
web:
image: "webapp:${TAG}"
For more information, see the Variable substitution section in the Compose file reference.
You can also pass a variable through from the shell to a service’s containers by listing it under the
environment key without giving it a value (for example, environment: - DEBUG). The value of the
DEBUG variable in the container is then taken from the value for the same variable in the shell in
which Compose is run.
You can set default values for environment variables referenced in the Compose file in an
environment file named .env. For example, if the .env file contains TAG=v1.5:
$ cat docker-compose.yml
version: '3'
services:
web:
image: "webapp:${TAG}"
When you run docker-compose up, the web service defined above uses the image webapp:v1.5. You
can verify this with the config command, which prints your resolved application config to the terminal:
$ docker-compose config
version: '3'
services:
web:
image: 'webapp:v1.5'
Values in the shell take precedence over those specified in the .env file. If you set TAG to a different
value in your shell, the substitution in image uses that instead:
$ export TAG=v2.0
$ docker-compose config
version: '3'
services:
web:
image: 'webapp:v2.0'
When you set the same environment variable in multiple files, here’s the priority used by Compose
to choose which value to use:
1. Compose file
2. Shell environment variables
3. Environment file
4. Dockerfile
5. Variable is not defined
In the example below, we set the same environment variable on an Environment file, and the
Compose file:
$ cat ./Docker/api/api.env
NODE_ENV=test
$ cat docker-compose.yml
version: '3'
services:
api:
image: 'node:6-alpine'
env_file:
- ./Docker/api/api.env
environment:
- NODE_ENV=production
When you run the container, the environment variable defined in the Compose file takes
precedence.
> process.env.NODE_ENV
'production'
An ARG or ENV setting in a Dockerfile takes effect only if there is no corresponding Docker Compose
entry for environment or env_file.
Specifics for NodeJS containers
If you have a package.json entry for script:start like NODE_ENV=test node server.js, then this
overrules any setting in your docker-compose.yml file.
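For reference, such a package.json entry looks like this (server.js is just an example filename):
{
 "scripts": {
  "start": "NODE_ENV=test node server.js"
 }
}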
Using multiple Compose files
By default, Compose reads two files: a docker-compose.yml and an optional
docker-compose.override.yml file. By convention, docker-compose.yml contains your base
configuration, and the override file contains configuration overrides for existing services, or entirely
new services.
If a service is defined in both files, Compose merges the configurations using the rules described
in Adding and overriding configuration.
To use multiple override files, or an override file with a different name, you can use the -f option to
specify the list of files. Compose merges files in the order they’re specified on the command line.
See the docker-compose command reference for more information about using -f.
When you use multiple configuration files, you must make sure all paths in the files are relative to the
base Compose file (the first Compose file specified with -f). This is required because override files
need not be valid Compose files. Override files can contain small fragments of configuration.
Tracking which fragment of a service is relative to which path is difficult and confusing, so to keep
paths easier to understand, all paths must be defined relative to the base file.
In this section, there are two common use cases for multiple Compose files: changing a Compose
app for different environments, and running administrative tasks against a Compose app.
DIFFERENT ENVIRONMENTS
A common use case for multiple files is changing a development Compose app for a production-like
environment (which may be production, staging or CI). To support these differences, you can split
your Compose configuration into a few different files:
Start with a base file that defines the canonical configuration for the services.
docker-compose.yml
web:
image: example/my_web_app:latest
links:
- db
- cache
db:
image: postgres:latest
cache:
image: redis:latest
In this example the development configuration exposes some ports to the host, mounts our code as
a volume, and builds the web image.
docker-compose.override.yml
web:
build: .
volumes:
- '.:/code'
ports:
- 8883:80
environment:
DEBUG: 'true'
db:
command: '-d'
ports:
- 5432:5432
cache:
ports:
- 6379:6379
Now, it would be nice to use this Compose app in a production environment. So, create another
override file (which might be stored in a different git repo or managed by a different team).
docker-compose.prod.yml
web:
ports:
- 80:80
environment:
PRODUCTION: 'true'
cache:
environment:
TTL: '500'
To deploy with this production Compose file, run:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
This deploys all three services using the configuration in docker-compose.yml and
docker-compose.prod.yml (but not the dev configuration in docker-compose.override.yml).
ADMINISTRATIVE TASKS
Another common use case is running ad hoc or administrative tasks against one or more services in
a Compose app. This example demonstrates running a database backup.
docker-compose.yml
web:
image: example/my_web_app:latest
links:
- db
db:
image: postgres:latest
docker-compose.admin.yml
dbadmin:
build: database_admin/
links:
- db
To start a normal environment run docker-compose up -d. To run a database backup, include
the docker-compose.admin.yml as well.
docker-compose -f docker-compose.yml -f docker-compose.admin.yml \
run dbadmin db-backup
Extending services
Note: The extends keyword is supported in earlier Compose file formats up to Compose file version
2.1 (see extends in v1 and extends in v2), but is not supported in Compose version 3.x. See
the Version 3 summary of keys added and removed, along with information on how to upgrade.
See moby/moby#31101 to follow the discussion thread on possibility of adding support for extends in
some form in future versions.
Docker Compose’s extends keyword enables sharing of common configurations among different
files, or even different projects entirely. Extending services is useful if you have several services that
reuse a common set of configuration options. Using extends you can define a common set of service
options in one place and refer to it from anywhere.
Keep in mind that links, volumes_from, and depends_on are never shared between services
using extends. These exceptions exist to avoid implicit dependencies; you always
define links and volumes_from locally. This ensures dependencies between services are clearly
visible when reading the current file. Defining these locally also ensures that changes to the
referenced file don’t break anything.
For example, your docker-compose.yml might define a web service as:
web:
 extends:
  file: common-services.yml
  service: webapp
This instructs Compose to re-use the configuration for the webapp service defined in the
common-services.yml file. Suppose that common-services.yml looks like this:
webapp:
build: .
ports:
- "8000:8000"
volumes:
- "/data"
In this case, you get exactly the same result as if you wrote docker-compose.yml with the
same build, ports and volumes configuration values defined directly under web.
You can go further and define (or re-define) configuration locally in docker-compose.yml:
web:
extends:
file: common-services.yml
service: webapp
environment:
- DEBUG=1
cpu_shares: 5
important_web:
extends: web
cpu_shares: 10
You can also write other services and link your web service to them:
web:
extends:
file: common-services.yml
service: webapp
environment:
- DEBUG=1
cpu_shares: 5
links:
- db
db:
image: postgres
Extending an individual service is useful when you have multiple services that have a common
configuration. The example below is a Compose app with two services: a web application and a
queue worker. Both services use the same codebase and share many configuration options.
app:
build: .
environment:
CONFIG_FILE_PATH: /code/config
API_KEY: xxxyyy
cpu_shares: 5
In a docker-compose.yml we define the concrete services which use the common configuration:
webapp:
extends:
file: common.yml
service: app
command: /code/run_web_app
ports:
- 8080:8080
links:
- queue
- db
queue_worker:
extends:
file: common.yml
service: app
command: /code/run_worker
links:
- queue
For single-value options like image, command or mem_limit, the new value replaces the old value.
# original service
command: python app.py
# local service
command: python otherapp.py
# result
command: python otherapp.py
For multi-value options like ports, expose, external_links, dns, dns_search, and tmpfs, Compose
concatenates both sets of values:
# original service
expose:
 - "3000"
# local service
expose:
- "4000"
- "5000"
# result
expose:
- "3000"
- "4000"
- "5000"
In the case of environment, labels, volumes, and devices, Compose “merges” entries together with
locally-defined values taking precedence. For environment and labels, the environment variable or
label name determines which value is used:
# original service
environment:
- FOO=original
- BAR=original
# local service
environment:
- BAR=local
- BAZ=local
# result
environment:
- FOO=original
- BAR=local
- BAZ=local
Entries for volumes and devices are merged using the mount path in the container:
# original service
volumes:
- ./original:/foo
- ./original:/bar
# local service
volumes:
- ./local:/bar
- ./local:/baz
# result
volumes:
- ./original:/foo
- ./local:/bar
- ./local:/baz
Networking in Compose
Estimated reading time: 5 minutes
This page applies to Compose file formats version 2 and higher. Networking features are not
supported for Compose file version 1 (legacy).
By default Compose sets up a single network for your app. Each container for a service joins the
default network and is both reachable by other containers on that network, and discoverable by them
at a hostname identical to the container name.
Note: Your app’s network is given a name based on the “project name”, which is based on the name
of the directory it lives in. You can override the project name with either the --project-name flag or
the COMPOSE_PROJECT_NAME environment variable.
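For example, either of the following runs the app under the project name myapp regardless of the
directory name:
$ docker-compose --project-name myapp up -d
$ COMPOSE_PROJECT_NAME=myapp docker-compose up -d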
For example, suppose your app is in a directory called myapp, and your docker-compose.ymllooks like
this:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
Update containers
If you make a configuration change to a service and run docker-compose up to update it, the old
container is removed and the new one joins the network under a different IP address but the same
name. Running containers can look up that name and connect to the new address, but the old
address stops working.
If any containers have connections open to the old container, they are closed. It is a container’s
responsibility to detect this condition, look up the name again and reconnect.
Links
Links allow you to define extra aliases by which a service is reachable from another service. They
are not required to enable services to communicate - by default, any service can reach any other
service at that service’s name. In the following example, db is reachable from web at the
hostnames db and database:
version: "3"
services:
web:
build: .
links:
- "db:database"
db:
image: postgres
Multi-host networking
Note: The instructions in this section refer to legacy Docker Swarm operations, and only work when
targeting a legacy Swarm cluster. For instructions on deploying a compose project to the newer
integrated swarm mode, consult the Docker Stacksdocumentation.
When deploying a Compose application to a Swarm cluster, you can make use of the built-
in overlay driver to enable multi-host communication between containers with no changes to your
Compose file or application code.
Consult the Getting started with multi-host networking to see how to set up a Swarm cluster. The
cluster uses the overlay driver by default, but you can specify it explicitly if you prefer - see below for
how to do this.
Instead of just using the default app network, you can specify your own networks with the top-level
networks key. Each service can then specify which networks to join:
version: "3"
services:
proxy:
build: ./proxy
networks:
- frontend
app:
build: ./app
networks:
- frontend
- backend
db:
image: postgres
networks:
- backend
networks:
frontend:
# Use a custom driver
driver: custom-driver-1
backend:
# Use a custom driver which takes special options
driver: custom-driver-2
driver_opts:
foo: "1"
bar: "2"
Networks can be configured with static IP addresses by setting the ipv4_address and/or
ipv6_address for each attached network.
For version 3.5 and above, a network can also be given a custom name using the name key:
version: "3.5"
networks:
frontend:
name: custom_frontend
driver: custom-driver-1
For full details of the network configuration options available, see the top-level networks key
reference and the service-level networks key reference.
Instead of (or as well as) specifying your own networks, you can also change the settings of the
app-wide default network by defining an entry under networks named default:
version: "2"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
networks:
default:
# Use a custom driver
driver: custom-driver-1
When you define your app with Compose in development, you can use this definition to run your
application in different environments such as CI, staging, and production.
The easiest way to deploy an application is to run it on a single server, similar to how you would run
your development environment. If you want to scale up your application, you can run Compose apps
on a Swarm cluster.
You probably need to make changes to your app configuration to make it ready for production.
These changes may include:
Removing any volume bindings for application code, so that code stays inside the container
and can’t be changed from outside
Binding to different ports on the host
Setting environment variables differently, such as when you need to decrease the verbosity
of logging, or to enable email sending
Specifying a restart policy like restart: always to avoid downtime
Adding extra services such as a log aggregator
For this reason, consider defining an additional Compose file, say production.yml, which specifies
production-appropriate configuration. This configuration file only needs to include the changes you’d
like to make from the original Compose file. The additional Compose file can be applied over the
original docker-compose.yml to create a new configuration.
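A minimal sketch of such a file, assuming the base file defines a web service (the port, restart policy,
and variable shown are illustrative):
version: "3"
services:
 web:
  ports:
   - "80:80"
  restart: always
  environment:
   LOG_LEVEL: warn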
Once you’ve got a second configuration file, tell Compose to use it with the -f option:
docker-compose -f docker-compose.yml -f production.yml up -d
See Using multiple compose files for a more complete example.
Deploying changes
When you make changes to your app code, remember to rebuild your image and recreate your app’s
containers. To redeploy a service called web, use:
$ docker-compose build web
$ docker-compose up --no-deps -d web
This first rebuilds the image for web and then stops, destroys, and recreates just the web service.
The --no-deps flag prevents Compose from also recreating any services which web depends on.
You can use Compose to deploy an app to a remote Docker host by setting
the DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH environment variables appropriately.
For tasks like this, Docker Machine makes managing local and remote Docker hosts very easy, and
is recommended even if you’re not deploying remotely.
Once you’ve set up your environment variables, all the normal docker-compose commands work with
no further configuration.
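For example, to point Compose at a remote daemon over TLS (the host name and certificate path are
placeholders):
$ export DOCKER_HOST="tcp://remote-host.example.com:2376"
$ export DOCKER_TLS_VERIFY="1"
$ export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/remote-host"
$ docker-compose up -d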
Docker Swarm, a Docker-native clustering system, exposes the same API as a single Docker host,
which means you can use Compose against a Swarm instance and run your apps across multiple
hosts.
Note: Environment variables are no longer the recommended method for connecting to linked
services. Instead, you should use the link name (by default, the name of the linked service) as
the hostname to connect to. See the docker-compose.yml documentation for details.
Environment variables are only populated if you’re using the legacy version 1 Compose file format.
Compose uses Docker links to expose services’ containers to one another. Each linked container
injects a set of environment variables, each of which begins with the uppercase name of the
container.
To see what environment variables are available to a service, run docker-compose run SERVICE env.
name_PORT
Full URL, such as DB_PORT=tcp://172.17.0.5:5432
name_PORT_num_protocol
Full URL, such as DB_PORT_5432_TCP=tcp://172.17.0.5:5432
name_PORT_num_protocol_ADDR
Container’s IP address, such as DB_PORT_5432_TCP_ADDR=172.17.0.5
name_PORT_num_protocol_PORT
Exposed port number, such as DB_PORT_5432_TCP_PORT=5432
name_PORT_num_protocol_PROTO
Protocol (tcp or udp), such as DB_PORT_5432_TCP_PROTO=tcp
name_NAME
Fully qualified container name, such as DB_1_NAME=/myapp_web_1/myapp_db_1
You can control the order of service startup and shutdown with the depends_on option. Compose
always starts and stops containers in dependency order, where dependencies are determined
by depends_on, links, volumes_from, and network_mode: "service:...".
However, for startup Compose does not wait until a container is “ready” (whatever that means for
your particular application) - only until it’s running. There’s a good reason for this.
The problem of waiting for a database (for example) to be ready is really just a subset of a much
larger problem of distributed systems. In production, your database could become unavailable or
move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after
a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a
connection is lost for any reason. However, if you don’t need this level of resilience, you can work
around the problem with a wrapper script:
Use a tool such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper
scripts which you can include in your application’s image to poll a given host and port until
it’s accepting TCP connections.
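As a sketch of this approach, assuming wait-for-it.sh has been copied into the web image and the app
is started with python app.py:
version: "2"
services:
 web:
  build: .
  depends_on:
   - db
  command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
 db:
  image: postgres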
Tip: There are limitations to this first solution. For example, it doesn’t verify when a specific
service is really ready. If you add more arguments to the command, use the bash
shift command with a loop, as shown in the next example.
Alternatively, write your own wrapper script to perform a more application-specific health
check. For example, you might want to wait until Postgres is definitely ready to accept
commands:
#!/bin/sh
# wait-for-postgres.sh
set -e
host="$1"
shift
cmd="$@"
until psql -h "$host" -U "postgres" -c '\q'; do
 >&2 echo "Postgres is unavailable - sleeping"
 sleep 1
done
>&2 echo "Postgres is up - executing command"
exec $cmd
You can use this as a wrapper script as in the previous example, by setting the service’s command,
for example:
command: ["./wait-for-postgres.sh", "db", "python", "app.py"]
The following samples show the various aspects of how to work with Docker Compose. As a
prerequisite, be sure to install Docker Compose if you have not already done so.
Quickstart: Compose and Django - Shows how to use Docker Compose to set up and run a
simple Django/PostgreSQL app.
Quickstart: Compose and Rails - Shows how to use Docker Compose to set up and run a
Rails/PostgreSQL app.
Quickstart: Compose and WordPress - Shows how to use Docker Compose to set up and
run WordPress in an isolated environment with Docker containers.
Get Started with Docker - This multi-part tutorial covers writing your first app, data storage,
networking, and swarms, and ends with your app running on production servers in the cloud.
Deploying an app to a Swarm - This tutorial from Docker Labs shows you how to create and
customize a sample voting app, deploy it to a swarm, test it, reconfigure the app, and
redeploy.
DTR CLI
docker/dtr overview
Estimated reading time: 1 minute
This tool has commands to install, configure, and backup Docker Trusted Registry (DTR). It also
allows uninstalling DTR. By default the tool runs in interactive mode. It prompts you for the values
needed.
Additional help is available for each command with the ‘--help’ option.
Usage
docker run -it --rm docker/dtr \
command [command options]
If not specified, docker/dtr uses the latest tag by default. To work with a different version, specify it
in the command. For example, docker run -it --rm docker/dtr:2.6.0.
Commands
Option Description
docker/dtr backup
Estimated reading time: 3 minutes
Usage
docker run -i --rm docker/dtr \
backup [command options] > backup.tar
Example Commands
BASIC
For a detailed explanation on the advanced example, see Back up your DTR metadata. To learn
more about the --log-driver option for docker run, see docker run reference.
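A basic invocation might look like the following sketch, passing the UCP credentials through the
environment variables listed below and setting --log-driver none so the tar stream written to stdout is
not also captured by the daemon’s logging driver (the username is a placeholder, and the tool prompts
interactively for anything it still needs):
$ docker run -i --rm \
 --log-driver none \
 -e UCP_USERNAME=admin \
 -e UCP_PASSWORD \
 docker/dtr \
 backup > backup.tar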
Description
This command creates a tar file with the contents of the volumes used by DTR, and prints it. You
can then use docker/dtr restore to restore the data from an existing backup.
Note:
This command only creates backups of configurations, and image metadata. It does not back
up users and organizations. Users and organizations can be backed up during a UCP
backup.
It also does not back up Docker images stored in your registry. You should implement a
separate backup policy for the Docker images stored in your registry, taking into
consideration whether your DTR installation is configured to store images on the filesystem
or is using a cloud provider.
This backup contains sensitive information and should be stored securely.
Using the --offline-backup flag temporarily shuts down the RethinkDB container. Take the
replica out of your load balancer to avoid downtime.
Options
Option Environment Variable Description
--help-extended  $DTR_EXTENDED_HELP  Display extended help text for a given command.
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
docker/dtr destroy
Estimated reading time: 1 minute
Usage
docker run -it --rm docker/dtr \
destroy [command options]
Description
This command forcefully removes all containers and volumes associated with a DTR replica without
notifying the rest of the cluster. To uninstall DTR, use this command on all replicas.
Use the remove command to gracefully scale down your DTR cluster.
Options
Option Environment Variable Description
--replica-id  $DTR_DESTROY_REPLICA_ID  The ID of the replica to destroy.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
docker/dtr emergency-repair
Estimated reading time: 3 minutes
Usage
docker run -it --rm docker/dtr \
emergency-repair [command options]
Description
This command repairs a DTR cluster that has lost quorum by reverting your cluster to a single DTR
replica.
There are three steps you can take to recover an unhealthy DTR cluster:
1. If the majority of replicas are healthy, remove the unhealthy nodes from the cluster, and join
new ones for high availability.
2. If the majority of replicas are unhealthy, use this command to revert your cluster to a single
DTR replica.
3. If you can’t repair your cluster to a single replica, you’ll have to restore from an existing
backup, using the restore command.
When you run this command, a DTR replica of your choice is repaired and turned into the only
replica in the whole DTR cluster. The containers for all the other DTR replicas are stopped and
removed. When using the force option, the volumes for these replicas are also deleted.
After repairing the cluster, you should use the join command to add more DTR replicas for high
availability.
Options
Option Environment Variable Description
--help-extended  $DTR_EXTENDED_HELP  Display extended help text for a given command.
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
docker/dtr install
Estimated reading time: 8 minutes
Usage
docker run -it --rm docker/dtr \
install [command options]
Description
This command installs Docker Trusted Registry (DTR) on a node managed by Docker Universal
Control Plane (UCP).
After installing DTR, you can join additional DTR replicas using docker/dtr join.
Example Usage
$ docker run -it --rm docker/dtr:2.7.0 install \
--ucp-node <UCP_NODE_HOSTNAME> \
--ucp-insecure-tls
Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a production deployment.
Options
Option Environment Variable Description
--http-proxy  $DTR_HTTP_PROXY  The HTTP proxy used for outgoing requests.
--https-proxy  $DTR_HTTPS_PROXY  The HTTPS proxy used for outgoing requests.
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
docker/dtr join
Estimated reading time: 3 minutes
Add a new replica to an existing DTR cluster. Use SSH to log into any node that is already part of
UCP.
Usage
docker run -it --rm \
docker/dtr:2.6.0 join \
--ucp-node <ucp-node-name> \
--ucp-insecure-tls
Description
This command creates a replica of an existing DTR on a node managed by Docker Universal
Control Plane (UCP).
Options
Option Environment Variable Description
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
docker/dtr reconfigure
Estimated reading time: 8 minutes
Usage
docker run -it --rm docker/dtr \
reconfigure [command options]
Description
This command changes DTR configuration settings. If you are using NFS as a storage volume,
see Use NFS for details on changes to the reconfiguration process.
DTR is restarted for the new configurations to take effect. To have no down time, configure your
DTR for high availability.
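For example, to set an HTTP proxy using the options listed below (the proxy address and username
are placeholders; remaining values are prompted for interactively):
$ docker run -it --rm docker/dtr \
 reconfigure \
 --ucp-username admin \
 --http-proxy http://proxy.example.com:3128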
Options
Option Environment Variable Description
--http-proxy  $DTR_HTTP_PROXY  The HTTP proxy used for outgoing requests.
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
docker/dtr remove
Estimated reading time: 1 minute
Description
This command gracefully scales down your DTR cluster by removing exactly one replica. All other
replicas must be healthy and will remain healthy after this operation.
Options
Option Environment Variable Description
--replica-id  $DTR_REMOVE_REPLICA_ID  DEPRECATED: Alias for --replica-ids.
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
docker/dtr restore
Estimated reading time: 8 minutes
Usage
docker run -i --rm docker/dtr \
restore [command options] < backup.tar
Description
This command performs a fresh installation of DTR, and reconfigures it with configuration data from
a tar file generated by docker/dtr backup. If you are restoring DTR after a failure, please make sure
you have destroyed the old DTR fully. See DTR disaster recovery for Docker’s recommended
recovery strategies based on your setup.
There are three steps you can take to recover an unhealthy DTR cluster:
1. If the majority of replicas are healthy, remove the unhealthy nodes from the cluster, and join
new nodes for high availability.
2. If the majority of replicas are unhealthy, use this command to revert your cluster to a single
DTR replica.
3. If you can’t repair your cluster to a single replica, you’ll have to restore from an existing
backup, using the restore command.
This command does not restore Docker images. You should implement a separate restore
procedure for the Docker images stored in your registry, taking into consideration whether your DTR
installation is configured to store images on the local filesystem or using a cloud provider.
After restoring the cluster, you should use the join command to add more DTR replicas for high
availability.
Options
Option Environment Variable Description
--help-extended  $DTR_EXTENDED_HELP  Display extended help text for a given command.
--http-proxy  $DTR_HTTP_PROXY  The HTTP proxy used for outgoing requests.
--https-proxy  $DTR_HTTPS_PROXY  The HTTPS proxy used for outgoing requests.
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
docker/dtr upgrade
Estimated reading time: 1 minute
Usage
docker run -it --rm docker/dtr \
upgrade [command options]
Description
This command upgrades DTR 2.5.x to the current version of this image.
Options
Option Environment Variable Description
--help-extended  $DTR_EXTENDED_HELP  Display extended help text for a given command.
--ucp-password  $UCP_PASSWORD  The UCP administrator password.
--ucp-username  $UCP_USERNAME  The UCP administrator username.
UCP CLI
docker/ucp overview
Estimated reading time: 2 minutes
This image has commands to install and manage Docker Universal Control Plane (UCP) on a
Docker Engine.
You can configure the commands using flags or environment variables. When using environment
variables, use the docker container run -e VARIABLE_NAME syntax to pass the value from your
shell, or docker container run -e VARIABLE_NAME=value to specify the value explicitly on the
command line.
The container running this image needs to be named ucp and bind-mount the Docker daemon
socket. Below you can find an example of how to run this image.
Additional help is available for each command with the --help flag.
Usage
docker container run -it --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
command [command arguments]
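For example, to supply the administrator credentials from your shell’s environment instead of flags
during an install (a sketch; UCP_ADMIN_USER and UCP_ADMIN_PASSWORD must already be
exported in your shell):
docker container run --rm -it \
 --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -e UCP_ADMIN_USER \
 -e UCP_ADMIN_PASSWORD \
 docker/ucp \
 install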
Commands
Option Description
dump-certs Print the public certificates used by this UCP web server
docker/ucp dump-certs
Estimated reading time: 1 minute
Usage
docker container run --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
dump-certs [command options]
Description
This command outputs the public certificates for the UCP web server running on this node. By
default, it prints the contents of the ca.pem and cert.pem files.
When integrating UCP and DTR, use this command with the --cluster --ca flags to configure DTR.
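For example, to capture the cluster root CA and certificate into a file for use with DTR (the output
file name is arbitrary):
docker container run --rm \
 --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
 docker/ucp \
 dump-certs --cluster --ca > ucp-cluster-ca.pem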
Options
Option Description
--cluster Print the internal UCP swarm root CA and cert instead of the public server cert
docker/ucp example-config
Estimated reading time: 1 minute
Usage
docker container run --rm -i \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
example-config
docker/ucp id
Estimated reading time: 1 minute
Usage
docker container run --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
id
Description
This command prints the ID of the UCP components running on this node. This ID matches what you
see when running the docker info command while using a client bundle.
Options
Option Description
docker/ucp images
Estimated reading time: 1 minute
Usage
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
images [command options]
Description
This command checks the UCP images that are available in this node, and pulls the ones that are
missing.
Options
Option Description
--list List all images used by UCP but don’t pull them
docker/ucp install
Estimated reading time: 6 minutes
Usage
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
install [command options]
Description
This command initializes a new swarm, turns a node into a manager, and installs Docker Universal
Control Plane (UCP).
If you’re joining more nodes to this swarm, make sure the ports UCP requires are open in your firewall.
If you have SELinux policies enabled for your Docker install, you will need to use docker container
run --rm -it --security-opt label=disable ... when running this command.
Options
Option Description
--admin-password value  The UCP administrator password [$UCP_ADMIN_PASSWORD]
--admin-username value  The UCP administrator username [$UCP_ADMIN_USER]
--binpack  Set the Docker Swarm scheduler to binpack mode. Used for backwards compatibility
--cloud-provider value  The cloud provider for the cluster
--controller-port value  Port for the web UI and API (default: 443)
--data-path-addr value  Address or interface to use for data path traffic. Format: IP address or network interface name [$UCP_DATA_PATH_ADDR]
--disable-tracking  Disable anonymous tracking and analytics
--dns-opt value  Set DNS options for the UCP containers [$DNS_OPT]
--dns-search value  Set custom DNS search domains for the UCP containers [$DNS_SEARCH]
--dns value  Set custom DNS servers for the UCP containers [$DNS]
--enable-profiling  Enable performance profiling
--existing-config  Use the latest existing UCP config during this installation. The install will fail if a config is not found
--external-server-cert  Customize the certificates used by the UCP web server
--external-service-lb value  Set the IP address of the load balancer that published services are expected to be reachable on
--force-insecure-tcp  Force install to continue even with unauthenticated Docker Engine ports.
--force-minimums  Force the install/upgrade even if the system does not meet the minimum requirements
--iscsiadm-path value  Path to the host iscsiadm binary. This option is applicable only when --storage-iscsi is specified
--iscsidb-path value  Path to the host iscsi DB. This option is applicable only when --storage-iscsi is specified
--kube-apiserver-port value  Port for the Kubernetes API server (default: 6443)
--nodeport-range value  Allowed port range for Kubernetes services of type NodePort (default: "32768-35535")
--pull value  Pull UCP images: ‘always’, when ‘missing’, or ‘never’ (default: “missing”)
--random  Set the Docker Swarm scheduler to random mode. Used for backwards compatibility
--registry-password value  Password to use when pulling images [$REGISTRY_PASSWORD]
--registry-username value  Username to use when pulling images [$REGISTRY_USERNAME]
--service-cluster-ip-range value  Kubernetes Cluster IP Range for Services (default: “10.96.0.0/16”)
--skip-cloud-provider-check  Disables checks which rely on detecting which (if any) cloud provider the cluster is currently running on
--storage-expt-enabled  Flag to enable experimental features in Kubernetes storage
--swarm-grpc-port value  Port for communication between nodes (default: 2377)
--swarm-port value  Port for the Docker Swarm manager. Used for backwards compatibility (default: 2376)
--unlock-key value  The unlock key for this swarm-mode cluster, if one exists [$UNLOCK_KEY]
docker/ucp port-check-server
Estimated reading time: 1 minute
Usage
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
port-check-server [command options]
Description
Checks the suitability of the node for a UCP installation.
Options
Option Description
docker/ucp restore
Estimated reading time: 2 minutes
Usage
docker container run --rm -i \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
restore [command options] < backup.tar
Description
This command installs a new UCP cluster that is populated with the state of a previous UCP
manager node using a tar file generated by the backup command. All UCP settings, users, teams
and permissions will be restored from the backup file. The Restore operation does not alter or
recover any containers, networks, volumes or services of an underlying cluster.
The restore command can be performed on any manager node of an existing cluster. If the current
node does not belong in a cluster, one will be initialized using the value of the --host-address flag.
When restoring on an existing swarm-mode cluster, no previous UCP components must be running
on any node of the cluster. This cleanup can be performed with the uninstall-ucp command.
If restore is performed on a different cluster than the one where the backup file was taken, the
Cluster Root CA of the old UCP installation will not be restored. This will invalidate any previously
issued Admin Client Bundles, and all administrators will be required to download new client bundles
after the operation is completed. Any existing client bundles for non-admin users will still be fully
operational.
By default, the backup tar file is read from stdin. You can also bind-mount the backup file
under /config/backup.tar, and run the restore command with the --interactive flag.
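For example, to restore from a bind-mounted backup file instead of stdin (the host path
/tmp/backup.tar is a placeholder):
docker container run --rm -i \
 --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v /tmp/backup.tar:/config/backup.tar \
 docker/ucp \
 restore --interactive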
Notes:
Run uninstall-ucp before attempting the restore operation on an existing UCP cluster.
If your swarm-mode cluster has lost quorum and the original set of managers are not
recoverable, you can attempt to recover a single-manager cluster with docker swarm init --
force-new-cluster.
You can restore from a backup that was taken on a different manager node or a different
cluster altogether.
Options
Option Description
--data-path-addr value  Address or interface to use for data path traffic
--force-minimums  Force the install/upgrade even if the system does not meet the minimum requirements
--passphrase value  Decrypt the backup tar file with the provided passphrase
--swarm-grpc-port value  Port for communication between nodes (default: 2377)
--unlock-key value  The unlock key for this swarm-mode cluster, if one exists.
docker/ucp uninstall-ucp
Estimated reading time: 1 minute
Usage
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
uninstall-ucp [command options]
Description
This command uninstalls UCP from the swarm, but preserves the swarm so that your applications
can continue running.
After UCP is uninstalled you can use the docker swarm leave and docker node rm commands to
remove nodes from the swarm.
Once UCP is uninstalled, you won’t be able to join nodes to the swarm unless UCP is installed
again.
Options
Option Description
docker/ucp upgrade
Estimated reading time: 1 minute
Description
This command upgrades the UCP running on this cluster.
Before performing an upgrade, you should perform a backup by using the backup command.
After upgrading UCP, go to the UCP web interface and confirm each node is healthy and that all
nodes have been upgraded successfully.
Options
Option Description
--force-minimums  Force the install/upgrade even if the system does not meet the minimum requirements
--manual-worker-upgrade  Whether to manually upgrade worker nodes. Defaults to false
--registry-password value  Password to use when pulling images
--registry-username value  Username to use when pulling images