% DOCKER(1) Docker User Manuals
% Docker Community
% JUNE 2014
# NAME
docker-build - Build a new image from the source code at PATH
# SYNOPSIS
**docker build**
[**--help**]
[**-f**|**--file**[=*PATH/Dockerfile*]]
[**--force-rm**[=*false*]]
[**--no-cache**[=*false*]]
[**--pull**[=*false*]]
[**-q**|**--quiet**[=*false*]]
[**--rm**[=*true*]]
[**-t**|**--tag**[=*TAG*]]
[**-m**|**--memory**[=*MEMORY*]]
[**--memory-swap**[=*MEMORY-SWAP*]]
[**-c**|**--cpu-shares**[=*0*]]
[**--cpu-period**[=*0*]]
[**--cpu-quota**[=*0*]]
[**--cpuset-cpus**[=*CPUSET-CPUS*]]
[**--cpuset-mems**[=*CPUSET-MEMS*]]
[**--cgroup-parent**[=*CGROUP-PARENT*]]
PATH | URL | -
# DESCRIPTION
This will read the Dockerfile from the directory specified in **PATH**.
It also sends any other files and directories found in that directory to the
Docker daemon. The contents of this directory will be used by **ADD** instructions
found within the Dockerfile.

Warning: depending on the contents of the directory, this can send a large amount
of data to the Docker daemon. The build is run by the Docker daemon, not by the
CLI, so the whole context must be transferred to the daemon. The Docker CLI reports
"Sending build context to Docker daemon" when the context is sent to the daemon.
When the URL to a tarball archive or to a single Dockerfile is given, no context is sent from
the client to the Docker daemon. When a Git repository is set as the **URL**, the repository is
cloned locally and then sent as the context.
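For example, each of the following supplies a build context in one of the three
forms listed in the synopsis (the repository URL is illustrative):

    # Build from the current directory
    docker build .

    # Build from a Git repository URL; the repository is cloned locally first
    docker build github.com/scollier/Fedora-Dockerfiles

    # Read a Dockerfile from STDIN; no build context is sent
    docker build - < Dockerfile
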
# OPTIONS
**-f**, **--file**=*PATH/Dockerfile*
Path to the Dockerfile to use. If the path is a relative path and you are
building from a local directory, then the path must be relative to that
directory. If you are building from a remote URL pointing to either a
tarball or a Git repository, then the path must be relative to the root of
the remote context. In all cases, the file must be within the build context.
The default is *Dockerfile*.
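For example (the file name and layout are illustrative), a Dockerfile stored in a
subdirectory of the local build context can be selected explicitly:

    docker build -f dockerfiles/Dockerfile.debug .
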
**--force-rm**=*true*|*false*
Always remove intermediate containers, even after unsuccessful builds. The default is *false*.
**--no-cache**=*true*|*false*
Do not use cache when building the image. The default is *false*.
**--help**
Print usage statement
**--pull**=*true*|*false*
Always attempt to pull a newer version of the image. The default is *false*.
**-q**, **--quiet**=*true*|*false*
Suppress the verbose output generated by the containers. The default is *false*.
**--rm**=*true*|*false*
Remove intermediate containers after a successful build. The default is *true*.
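As an illustration, the cache and pull behavior can be combined on a single build:

    # Ignore cached layers and always attempt to pull newer base images
    docker build --no-cache --pull .
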
**-t**, **--tag**=""
Repository name (and optionally a tag) to be applied to the resulting image in case of success
**-m**, **--memory**=*MEMORY*
Memory limit
**--memory-swap**=*MEMORY-SWAP*
Total memory (memory + swap), '-1' to disable swap.
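For example, the following illustrative invocation caps the build containers at
512 MB of memory and 1 GB of memory plus swap:

    docker build -m 512m --memory-swap 1g .
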
**-c**, **--cpu-shares**=*0*
CPU shares (relative weight).
By default, all containers get the same proportion of CPU cycles. You can
change this proportion by adjusting the container's CPU share weighting
relative to the weighting of all other running containers.
To modify the proportion from the default of 1024, use the **-c** or
**--cpu-shares** flag to set the weighting to 2 or higher.
The proportion is only applied when CPU-intensive processes are running.
When tasks in one container are idle, the other containers can use the
left-over CPU time. The actual amount of CPU time used varies depending on
the number of containers running on the system.
For example, consider three containers, one has a cpu-share of 1024 and
two others have a cpu-share setting of 512. When processes in all three
containers attempt to use 100% of CPU, the first container would receive
50% of the total CPU time. If you add a fourth container with a cpu-share
of 1024, the first container only gets 33% of the CPU. The remaining containers
receive 16.5%, 16.5% and 33% of the CPU.
On a multi-core system, the shares of CPU time are distributed across the CPU
cores. Even if a container is limited to less than 100% of CPU time, it can
use 100% of each individual CPU core.
For example, consider a system with more than three cores. If you start one
container **{C0}** with **-c=512** running one process, and another container
**{C1}** with **-c=1024** running two processes, this can result in the following
division of CPU shares:

    PID    container    CPU    CPU share
    100    {C0}         0      100% of CPU0
    101    {C1}         1      100% of CPU1
    102    {C1}         2      100% of CPU2

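For example, a build that should receive half of the default weighting can be
started as follows (the value is illustrative):

    docker build -c 512 .
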
**--cpu-period**=*0*
Limit the CPU CFS (Completely Fair Scheduler) period.
This flag causes the kernel to restrict the container's CPU usage to the period
you specify; it is typically used together with **--cpu-quota**.
**--cpu-quota**=*0*
Limit the CPU CFS (Completely Fair Scheduler) quota.
By default, containers run with the full CPU resource. This flag causes the
kernel to restrict the container's CPU usage to the quota you specify.
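For example (values are illustrative), a period of 100000 microseconds combined
with a quota of 50000 microseconds restricts the build containers to roughly half
of one CPU:

    docker build --cpu-period=100000 --cpu-quota=50000 .
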
**--cpuset-cpus**=*CPUSET-CPUS*
CPUs in which to allow execution (0-3, 0,1).
**--cpuset-mems**=*CPUSET-MEMS*
Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on
NUMA systems.
For example, if you have four memory nodes on your system (0-3), use `--cpuset-mems=0,1`
to ensure the processes in your Docker container only use memory from the first
two memory nodes.
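Combining both flags, the following illustrative invocation pins the build to the
first two CPUs and the first memory node:

    docker build --cpuset-cpus=0,1 --cpuset-mems=0 .
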
**--cgroup-parent**=*CGROUP-PARENT*
Path to `cgroups` under which the container's `cgroup` is created.
If the path is not absolute, the path is considered relative to the `cgroups` path of the init process.
Cgroups are created if they do not already exist.
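For example, assuming a parent cgroup named `docker-build` is acceptable on the
host, the build containers can be grouped under it:

    docker build --cgroup-parent=docker-build .
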
# EXAMPLES
## Building an image using a Dockerfile located inside the current directory
Docker images can be built using the build command and a Dockerfile:

    docker build .

During the build process Docker creates intermediate images. In order to
keep them, you must explicitly set `--rm=false`.

    docker build --rm=false .

A good practice is to make a sub-directory with a related name and create
the Dockerfile in that directory. For example, a directory called mongo may
contain a Dockerfile to create a Docker MongoDB image. Likewise, another
directory called httpd may be used to store Dockerfiles for Apache web
server images.
It is also a good practice to add the files required for the image to the
sub-directory. These files will then be specified with the `COPY` or `ADD`
instructions in the `Dockerfile`.
Note: if you include a tar file (a good practice), Docker automatically extracts
its contents into the target directory specified in the `ADD` instruction.
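As a sketch (the archive, image name, and target path are illustrative, and
`app.tar.gz` is assumed to exist in the build context), the extraction behavior
can be seen like this:

    # Illustrative only: write a minimal Dockerfile whose ADD extracts the archive
    printf 'FROM fedora\nADD app.tar.gz /opt/app/\n' > Dockerfile
    docker build -t myorg/app .
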
## Building an image and naming that image
A good practice is to give a name to the image you are building. There are
no hard rules here but it is best to give the names consideration.
The **-t**/**--tag** flag is used to name an image. Here are some examples:
Though it is not a good practice, image names can be arbitrary:

    docker build -t myimage .

A better approach is to provide a fully qualified and meaningful repository
name and tag (where the tag, in this context, is the qualifier after
the ":"). In this example we build a JBoss image for the Fedora repository
and give it the version 1.0:

    docker build -t fedora/jboss:1.0 .

The next example is for the "whenry" user repository and uses Fedora and
JBoss, giving it the version 2.1:

    docker build -t whenry/fedora-jboss:V2.1 .

If you do not provide a version tag then Docker will assign `latest`:

    docker build -t whenry/fedora-jboss .

When you list the images, the image above will have the tag `latest`.
So naming an image is arbitrary, but consideration should be given to
a useful convention that makes sense for consumers and that also takes
into account Docker community conventions.
## Building an image using a URL
This will clone the GitHub repository at the specified URL and use the clone as
the build context. The Dockerfile at the root of the repository is used as the
Dockerfile. This only works if the GitHub repository is a dedicated
repository.

    docker build github.com/scollier/Fedora-Dockerfiles/tree/master/apache

Note: You can set an arbitrary Git repository via the `git://` scheme.
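For example (the repository URL is illustrative), an explicit `git://` URL can be
passed directly as the build context:

    docker build git://github.com/scollier/Fedora-Dockerfiles
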
## Building an image using a URL to a tarballed context
This will send the URL itself to the Docker daemon. The daemon will fetch the
tarball archive, decompress it and use its contents as the build context. If you
pass an *-f PATH/Dockerfile* option as well, the system will look for that file
inside the contents of the tarball.

    docker build -f dev/Dockerfile https://10.10.10.1/docker/context.tar.gz

Note: supported compression formats are 'xz', 'bzip2', 'gzip' and 'identity' (no compression).
# HISTORY
March 2014, Originally compiled by William Henry (whenry at redhat dot com)
based on docker.com source material and internal work.
June 2014, updated by Sven Dowideit <SvenDowideit@home.org.au>