Docs: auto-conversion fixes plus Markdown markup and structure improvements.

 - Remove redundant characters and errors caused by the RST->MD conversion
   (e.g. `/#`, `/\`, `\<`, `/>`, etc.)
 - Fix broken inter-document links
 - Fix outbound links that are no longer active or have changed
 - Fix lists
 - Fix code blocks
 - Correct apostrophes
 - Replace redundant inline note marks for code with code marks
 - Fix broken image links
 - Remove non-functional title links
 - Correct broken cross-docs links
 - Improve readability

Note: This PR does not try to fix/amend:

 - Grammatical errors
 - Lexical errors
 - Linguistic-logic errors etc.

It just aims to fix the main structural and conversion errors, to serve as
a base for further amendments that will cover other issues, including but
not limited to those mentioned above.

Docker-DCO-1.1-Signed-off-by: O.S. Tezer <ostezer@gmail.com> (github: ostezer)

Update:

 - Fix backtick issues

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
This commit is contained in:
O.S.Tezer 2014-04-23 23:48:28 +03:00 committed by Tibor Vass
parent 149437ff87
commit 148a2be878
3 changed files with 635 additions and 731 deletions


@ -4,23 +4,21 @@ page_keywords: builder, docker, Dockerfile, automation, image creation
# Dockerfile Reference
**Docker can act as a builder** and read instructions from a text *Dockerfile*
to automate the steps you would otherwise take manually to create an image.
Executing `docker build` will run your steps and commit them along the way,
giving you a final image.
## Usage
To [*build*](../commandline/cli/#cli-build) an image from a source repository,
create a description file called Dockerfile at the root of your repository.
This file will describe the steps to assemble the image.
Then call `docker build` with the path of your source repository as argument
(for example, `.`):
    sudo docker build .
The path to the source repository defines where to find the *context* of
the build. The build is run by the Docker daemon, not by the CLI, so the
@ -30,7 +28,7 @@ whole context must be transferred to the daemon. The Docker CLI reports
You can specify a repository and tag at which to save the new image if
the build succeeds:
    sudo docker build -t shykes/myapp .
The Docker daemon will run your steps one-by-one, committing the result
to a new image if necessary, before finally outputting the ID of your
@ -38,12 +36,11 @@ new image. The Docker daemon will automatically clean up the context you
sent.
Note that each instruction is run independently, and causes a new image
to be created - so `RUN cd /tmp` will not have any effect on the next
instructions.
Whenever possible, Docker will re-use the intermediate images,
accelerating `docker build` significantly (indicated by `Using cache`):
    $ docker build -t SvenDowideit/ambassador .
    Uploading context 10.24 kB
@ -58,9 +55,9 @@ by `Using cache`):
    ---> 1a5ffc17324d
    Successfully built 1a5ffc17324d
When you're done with your build, you're ready to look into
[*Pushing a repository to its registry*](
../../use/workingwithrepository/#image-push).
## Format
@ -83,84 +80,73 @@ be treated as an argument. This allows statements like:
    # Comment
    RUN echo 'we are running some # of cool things'
Here is the set of instructions you can use in a Dockerfile
for building images.
## FROM
    FROM <image>
Or
    FROM <image>:<tag>
The `FROM` instruction sets the [*Base Image*](../../terms/image/#base-image-def)
for subsequent instructions. As such, a valid Dockerfile must have `FROM` as
its first instruction. The image can be any valid image. It is especially easy
to start by **pulling an image** from the [*Public Repositories*](
../../use/workingwithrepository/#using-public-repositories).
`FROM` must be the first non-comment instruction in the Dockerfile.
`FROM` can appear multiple times within a single Dockerfile in order to create
multiple images. Simply make a note of the last image id output by the commit
before each new `FROM` command.
If no `tag` is given to the `FROM` instruction, `latest` is assumed. If the
used tag does not exist, an error will be returned.
## MAINTAINER
    MAINTAINER <name>
The `MAINTAINER` instruction allows you to set the *Author* field of the
generated images.
## RUN
RUN has 2 forms:
- `RUN <command>` (the command is run in a shell - `/bin/sh -c`)
- `RUN ["executable", "param1", "param2"]` (*exec* form)
The `RUN` instruction will execute any commands in a new layer on top of the
current image and commit the results. The resulting committed image will be
used for the next step in the Dockerfile.
Layering `RUN` instructions and generating commits conforms to the core
concepts of Docker where commits are cheap and containers can be created from
any point in an image's history, much like source control.
The *exec* form makes it possible to avoid shell string munging, and to `RUN`
commands using a base image that does not contain `/bin/sh`.
### Known Issues (RUN)
- [Issue 783](https://github.com/dotcloud/docker/issues/783) is about file
  permissions problems that can occur when using the AUFS file system. You
  might notice it during an attempt to `rm` a file, for example. The issue
  describes a workaround.
- [Issue 2424](https://github.com/dotcloud/docker/issues/2424) Locale will
  not be set automatically.
## CMD
CMD has three forms:
- `CMD ["executable","param1","param2"]` (like an - `CMD ["executable","param1","param2"]` (like an *exec*, preferred form)
*exec*, preferred form) - `CMD ["param1","param2"]` (as *default parameters to ENTRYPOINT*)
- `CMD ["param1","param2"]` (as *default - `CMD command param1 param2` (as a *shell*)
parameters to ENTRYPOINT*)
- `CMD command param1 param2` (as a *shell*)
There can only be one CMD in a Dockerfile. If you list more than one CMD
then only the last CMD will take effect.
@ -169,83 +155,75 @@ then only the last CMD will take effect.
container.** These defaults can include an executable, or they can omit
the executable, in which case you must specify an ENTRYPOINT as well.
When used in the shell or exec formats, the `CMD` instruction sets the command
to be executed when running the image.
If you use the *shell* form of the CMD, then the `<command>` will execute in
`/bin/sh -c`:
    FROM ubuntu
    CMD echo "This is a test." | wc -
If you want to **run your** `<command>` **without a shell** then you must
express the command as a JSON array and give the full path to the executable.
**This array form is the preferred format of CMD.** Any additional parameters
must be individually expressed as strings in the array:
    FROM ubuntu
    CMD ["/usr/bin/wc","--help"]
If you would like your container to run the same executable every time, then
you should consider using `ENTRYPOINT` in combination with `CMD`. See
[*ENTRYPOINT*](#entrypoint).
If the user specifies arguments to `docker run` then they will override the
default specified in CMD.
> **Note**:
> Don't confuse `RUN` with `CMD`. `RUN` actually runs a command and commits
> the result; `CMD` does not execute anything at build time, but specifies
> the intended command for the image.
## EXPOSE
    EXPOSE <port> [<port>...]
The `EXPOSE` instruction informs Docker that the container will listen on the
specified network ports at runtime. Docker uses this information to interconnect
containers using links (see
[*links*](../../use/working_with_links_names/#working-with-links-names)),
and to setup port redirection on the host system (see [*Redirect Ports*](
../../use/port_redirection/#port-redirection)).
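For example, a hypothetical web-server image could declare its ports like so:

    # The container will listen on these ports at runtime
    EXPOSE 80 443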
## ENV
    ENV <key> <value>
The `ENV` instruction sets the environment variable `<key>` to the value
`<value>`. This value will be passed to all future `RUN` instructions. This is
functionally equivalent to prefixing the command with `<key>=<value>`.
The environment variables set using `ENV` will persist when a container is run
from the resulting image. You can view the values using `docker inspect`, and
change them using `docker run --env <key>=<value>`.
> **Note**:
> One example where this can cause unexpected consequences is setting
> `ENV DEBIAN_FRONTEND noninteractive`, which will persist when the container
> is run interactively; for example: `docker run -t -i image bash`
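A short sketch of how an `ENV` value flows into later instructions (the variable name here is an arbitrary example):

    ENV APP_HOME /opt/app
    # APP_HOME is visible to every subsequent RUN instruction
    RUN mkdir -p $APP_HOME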
## ADD
    ADD <src> <dest>
The `ADD` instruction will copy new files from `<src>` and add them to the
container's filesystem at path `<dest>`.
`<src>` must be the path to a file or directory relative to the source directory
being built (also called the *context* of the build) or a remote file URL.
`<dest>` is the absolute path to which the source will be copied inside the
destination container.
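For illustration, assuming the build context contains a `conf/` directory and the remote URL is reachable:

    # Copy a directory from the build context into the image
    ADD conf/ /etc/myapp/

    # Download a remote file to an explicit destination path
    ADD http://example.com/foobar /tmp/foobar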
All new files and directories are created with mode 0755, uid and gid 0.
@ -262,79 +240,64 @@ All new files and directories are created with mode 0755, uid and gid 0.
The copy obeys the following rules:
- The `<src>` path must be inside the *context* of the build;
  you cannot `ADD ../something /something`, because the first step of a
  `docker build` is to send the context directory (and subdirectories) to the
  docker daemon.
- If `<src>` is a URL and `<dest>` does not end with a trailing slash, then a
  file is downloaded from the URL and copied to `<dest>`.
- If `<src>` is a URL and `<dest>` does end with a trailing slash, then the
  filename is inferred from the URL and the file is downloaded to
  `<dest>/<filename>`. For instance, `ADD http://example.com/foobar /` would
  create the file `/foobar`. The URL must have a nontrivial path so that an
  appropriate filename can be discovered in this case (`http://example.com`
  will not work).
- If `<src>` is a directory, the entire directory is copied, including
  filesystem metadata.
- If `<src>` is a *local* tar archive in a recognized compression format
  (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources
  from *remote* URLs are **not** decompressed. When a directory is copied or
  unpacked, it has the same behavior as `tar -x`: the result is the union of:

    1. whatever existed at the destination path and
    2. the contents of the source tree, with conflicts resolved in favor of
       "2." on a file-by-file basis.

- If `<src>` is any other kind of file, it is copied individually along with
  its metadata. In this case, if `<dest>` ends with a trailing slash `/`, it
  will be considered a directory and the contents of `<src>` will be written
  at `<dest>/base(<src>)`.
- If `<dest>` does not end with a trailing slash, it will be considered a
  regular file and the contents of `<src>` will be written at `<dest>`.
- If `<dest>` doesn't exist, it is created along with all missing directories
  in its path.

## ENTRYPOINT
ENTRYPOINT has two forms:
- `ENTRYPOINT ["executable", "param1", "param2"]` - `ENTRYPOINT ["executable", "param1", "param2"]`
(like an *exec*, preferred form) (like an *exec*, preferred form)
- `ENTRYPOINT command param1 param2` (as a - `ENTRYPOINT command param1 param2`
*shell*) (as a *shell*)
There can only be one `ENTRYPOINT` in a Dockerfile. If you have more than one
`ENTRYPOINT`, then only the last one in the Dockerfile will have an effect.
An `ENTRYPOINT` helps you to configure a container that you can run as an
executable. That is, when you specify an `ENTRYPOINT`, then the whole container
runs as if it was just that executable.
The `ENTRYPOINT` instruction adds an entry command that will **not** be
overwritten when arguments are passed to `docker run`, unlike the behavior
of `CMD`. This allows arguments to be passed to the entrypoint, i.e.
`docker run <image> -d` will pass the "-d" argument to the ENTRYPOINT.
You can specify parameters either in the ENTRYPOINT JSON array (as in
"like an exec" above), or by using a CMD statement. Parameters in the
@ -342,13 +305,13 @@ ENTRYPOINT will not be overridden by the `docker run`
arguments, but parameters specified via CMD will be overridden
by `docker run` arguments.
Like a `CMD`, you can specify a plain string for the `ENTRYPOINT` and it will
execute in `/bin/sh -c`:
    FROM ubuntu
    ENTRYPOINT wc -l -
For example, that Dockerfile's image will *always* take stdin as input
("-") and print the number of lines ("-l"). If you wanted to make this
optional but default, you could use a CMD:
@ -356,44 +319,41 @@ optional but default, you could use a CMD:
CMD ["-l", "-"] CMD ["-l", "-"]
ENTRYPOINT ["/usr/bin/wc"] ENTRYPOINT ["/usr/bin/wc"]
## VOLUME
    VOLUME ["/data"]
The `VOLUME` instruction will create a mount point with the specified name
and mark it as holding externally mounted volumes from native host or other
containers. For more information/examples and mounting instructions via docker
client, refer to [*Share Directories via Volumes*](
../../use/working_with_volumes/#volume-def) documentation.
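A minimal sketch of declaring a volume and then mounting over it at run time (all paths are illustrative):

    # Mark /data as an externally mountable point
    VOLUME ["/data"]

At run time an operator could then supply a host directory for it, e.g. `sudo docker run -v /host/logs:/data <image>`.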
## USER
    USER daemon
The `USER` instruction sets the username or UID to use when running the image.
## WORKDIR
    WORKDIR /path/to/workdir
The `WORKDIR` instruction sets the working directory for the `RUN`, `CMD` and
`ENTRYPOINT` Dockerfile commands that follow it.
It can be used multiple times in the one Dockerfile. If a relative path
is provided, it will be relative to the path of the previous `WORKDIR`
instruction. For example:
    WORKDIR /a
    WORKDIR b
    WORKDIR c
    RUN pwd
The output of the final `pwd` command in this
Dockerfile would be `/a/b/c`.
## ONBUILD
    ONBUILD [INSTRUCTION]
The `ONBUILD` instruction adds to the image a
"trigger" instruction to be executed at a later time, when the image is
@ -410,7 +370,7 @@ daemon which may be customized with user-specific configuration.
For example, if your image is a reusable python application builder, it
will require application source code to be added in a particular
directory, and it might require a build script to be called *after*
that. You can't just call *ADD* and *RUN* now, because you don't yet
have access to the application source code, and it will be different for
each application build. You could simply provide application developers
with a boilerplate Dockerfile to copy-paste into their application, but
@ -420,23 +380,23 @@ mixes with application-specific code.
The solution is to use *ONBUILD* to register in advance instructions to
run later, during the next build stage.
Here's how it works:
1. When it encounters an *ONBUILD* instruction, the builder adds a
   trigger to the metadata of the image being built. The instruction
   does not otherwise affect the current build.
2. At the end of the build, a list of all triggers is stored in the
   image manifest, under the key *OnBuild*. They can be inspected with
   *docker inspect*.
3. Later the image may be used as a base for a new build, using the
   *FROM* instruction. As part of processing the *FROM* instruction,
   the downstream builder looks for *ONBUILD* triggers, and executes
   them in the same order they were registered. If any of the triggers
   fail, the *FROM* instruction is aborted which in turn causes the
   build to fail. If all triggers succeed, the FROM instruction
   completes and the build continues as usual.
4. Triggers are cleared from the final image after being executed. In
   other words they are not inherited by "grand-children" builds.
For example you might add something like this:
@ -445,7 +405,7 @@ For example you might add something like this:
    ONBUILD RUN /usr/local/bin/python-build --dir /app/src
    [...]
> **Warning**: Chaining ONBUILD instructions using ONBUILD ONBUILD isn't allowed.
> **Warning**: ONBUILD may not trigger FROM or MAINTAINER instructions.

File diff suppressed because it is too large


@ -2,7 +2,7 @@ page_title: Docker Run Reference
page_description: Configure containers at runtime
page_keywords: docker, run, configure, runtime
# Docker Run Reference
**Docker runs processes in isolated containers**. When an operator
executes `docker run`, she starts a process with its
@ -10,59 +10,60 @@ own file system, its own networking, and its own isolated process tree.
The [*Image*](../../terms/image/#image-def) which starts the process may
define defaults related to the binary to run, the networking to expose,
and more, but `docker run` gives final control to
the operator who starts the container from the image. That's the main
reason [*run*](../../commandline/cli/#cli-run) has more options than any
other `docker` command.
Every one of the [*Examples*](../../examples/#example-list) shows
running containers, and so here we try to give more in-depth guidance.
## General Form
As you've seen in the [*Examples*](../../examples/#example-list), the
basic run command takes this form:
    docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
To learn how to interpret the types of `[OPTIONS]`,
see [*Option types*](../../commandline/cli/#cli-options).
The list of `[OPTIONS]` breaks down into two groups:
1. Settings exclusive to operators, including:

    - Detached or Foreground running,
    - Container Identification,
    - Network settings, and
    - Runtime Constraints on CPU and Memory
    - Privileges and LXC Configuration

2. Settings shared between operators and developers, where operators can
   override defaults developers set in images at build time.
Together, the `docker run [OPTIONS]` give complete
control over runtime behavior to the operator, allowing them to override
all defaults set by the developer during `docker build`
and nearly all the defaults set by the Docker runtime itself.
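As a sketch of such overriding, an operator might replace an image's default command and environment in a single invocation (image name and values here are illustrative):

    sudo docker run --env FOO=bar ubuntu /bin/bash

Here `/bin/bash` overrides any CMD baked into the image, and `FOO=bar` supplements the ENV defaults set at build time.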
## Operator Exclusive Options
Only the operator (the person executing `docker run`) can set the
following options.
- [Detached vs Foreground](#detached-vs-foreground)
    - [Detached (-d)](#detached-d)
    - [Foreground](#foreground)
- [Container Identification](#container-identification)
    - [Name (name)](#name-name)
    - [PID Equivalent](#pid-equivalent)
- [Network Settings](#network-settings)
- [Clean Up (rm)](#clean-up-rm)
- [Runtime Constraints on CPU and
  Memory](#runtime-constraints-on-cpu-and-memory)
- [Runtime Privilege and LXC
  Configuration](#runtime-privilege-and-lxc-configuration)
## Detached vs Foreground
When starting a Docker container, you must first decide if you want to
run the container in the background in a "detached" mode or in the
default foreground mode:

    -d=false: Detached mode: Run container in the background, print new container id
### Detached (-d)
In detached mode (`-d=true` or just `-d`), all I/O should be done
through network connections or shared volumes because the container is
no longer listening to the commandline where you executed `docker run`.

You can reattach to a detached container with `docker`
[*attach*](commandline/cli/#attach). If you choose to run a
container in the detached mode, then you cannot use the `--rm` option.
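
A minimal sketch of the detached workflow, assuming an `ubuntu` image is
available locally (the container name and command are illustrative):

    # Start a long-running process detached; Docker prints the new container ID
    $ docker run -d --name ping-demo ubuntu /bin/sh -c "while true; do echo hello; sleep 1; done"

    # Later, reattach your console to the running container
    $ docker attach ping-demo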
### Foreground
In foreground mode (the default when `-d` is not specified), `docker run`
can start the process in the container and attach the console to the
process's standard input, output, and standard error. It can even
pretend to be a TTY (this is what most commandline executables expect)
and pass along signals. All of that is configurable:

    -a=[] : Attach to ``stdin``, ``stdout`` and/or ``stderr``
    -t=false : Allocate a pseudo-tty
    --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
    -i=false : Keep STDIN open even if not attached
If you do not specify `-a` then Docker will [attach everything
(stdin,stdout,stderr)](https://github.com/dotcloud/docker/blob/75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797).
You can specify to which of the three standard streams (`stdin`,
`stdout`, `stderr`) you'd like to connect instead, as in:

    docker run -a stdin -a stdout -i -t ubuntu /bin/bash
For interactive processes (like a shell) you will typically want a tty
as well as persistent standard input (`stdin`), so you'll use `-i -t`
together in most interactive cases.
## Container Identification
### Name (name)
The operator can identify a container in three ways:

- UUID long identifier
  ("f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778")
- UUID short identifier ("f78375b1c487")
- Name ("evil_ptolemy")
The UUID identifiers come from the Docker daemon, and if you do not
assign a name to the container with `--name` then
name when defining
(or any other place you need to identify a container). This works for
both background and foreground Docker containers.
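
For example, a named container can be referenced by that name anywhere a
container ID would work (the name and image below are illustrative):

    # Start a container with an operator-chosen name
    $ docker run -d --name web-test ubuntu /bin/sh -c "sleep 600"

    # The name now works wherever a container ID would
    $ docker logs web-test
    $ docker stop web-test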
### PID Equivalent
And finally, to help with automation, you can have Docker write the
container ID out to a file of your choosing. This is similar to how some
programs might write out their process ID to a file (you've seen them as
PID files):

    --cidfile="": Write the container ID to the file
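
As a sketch, assuming a writable path on the host (the file name and
image are illustrative):

    # Record the new container's ID in a file, much like a PID file
    $ docker run -d --cidfile=/tmp/web-test.cid ubuntu /bin/sh -c "sleep 600"

    # A script can then read the ID back to manage the container
    $ docker stop $(cat /tmp/web-test.cid)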
## Network Settings

    -n=true : Enable networking for this container
    --dns=[] : Set custom dns servers for the container
files or STDIN/STDOUT only.
Your container will use the same DNS servers as the host by default, but
you can override this with `--dns`.
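
For example, to point a single container at a specific resolver (the
server address and image are illustrative):

    # Use a public resolver instead of the host's DNS settings
    $ docker run --dns=8.8.8.8 ubuntu cat /etc/resolv.conf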
## Clean Up (rm)
By default a container's file system persists even after the container
exits. This makes debugging a lot easier (since you can inspect the
final state) and you retain all your data by default. But if you are
running short-term **foreground** processes, these container file
systems can really pile up. If instead you'd like Docker to
**automatically clean up the container and remove the file system when
the container exits**, you can add the `--rm` flag:

    --rm=false: Automatically remove the container when it exits (incompatible with -d)
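
A quick sketch of a throwaway container (image and command are
illustrative):

    # The container's file system is removed as soon as the command exits
    $ docker run --rm ubuntu echo "hello, then gone"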
## Runtime Constraints on CPU and Memory
The operator can also adjust the performance parameters of the
container:
the same priority and get the same proportion of CPU cycles, but you can
tell the kernel to give more shares of CPU time to one or more
containers when you start them via Docker.
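
As a sketch, assuming the `-m` (memory limit) and `-c` (relative CPU
shares) flags described in this section, with illustrative values:

    # Cap memory at 512 MB and give this container double the default CPU shares
    $ docker run -m 512m -c 2048 ubuntu /bin/bash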
## Runtime Privilege and LXC Configuration

    --privileged=false: Give extended privileges to this container
    --lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
By default, Docker containers are "unprivileged" and cannot, for
example, run a Docker daemon inside a Docker container. This is because
by default a container is not allowed to access any devices, but a
"privileged" container is given access to all devices (see
[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go)
and documentation on [cgroups
devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
When the operator executes `docker run --privileged`, Docker will enable
access to all devices on the host as well as set some configuration in
AppArmor to allow the container nearly all the same access to the host
as processes running outside containers on the host. Additional
information about running with `--privileged` is available on the
[Docker Blog](http://blog.docker.io/2013/09/docker-can-now-run-within-docker/).
If the Docker daemon was started using the `lxc` exec-driver
(`docker -d --exec-driver=lxc`) then the operator can also specify LXC
options using one or more `--lxc-conf` parameters. These can be new
parameters or override existing parameters from the
[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go).
Note that in the future, a given host's Docker daemon may not use LXC,
so this is an implementation-specific configuration meant for operators
already familiar with using LXC directly.
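
As a sketch, assuming the daemon is running with the `lxc` exec-driver
(the cgroup setting and image are illustrative):

    # Pin the container's processes to the first two CPU cores via raw LXC config
    $ docker run --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" ubuntu /bin/bash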
## Overriding Dockerfile Image Defaults
When a developer builds an image from a [*Dockerfile*](builder/#dockerbuilder)
or when she commits it, the developer can set a number of default
parameters that take effect when the image starts up as a container.
Four of the Dockerfile commands cannot be overridden at runtime: `FROM`,
`MAINTAINER`, `RUN`, and `ADD`. Everything else has a corresponding
override in `docker run`. We'll go through what the developer might have
set in each Dockerfile instruction and how the operator can override
that setting.
- [CMD (Default Command or Options)](#cmd-default-command-or-options)
- [ENTRYPOINT (Default Command to Execute at Runtime)](#entrypoint-default-command-to-execute-at-runtime)
- [EXPOSE (Incoming Ports)](#expose-incoming-ports)
- [ENV (Environment Variables)](#env-environment-variables)
- [VOLUME (Shared Filesystems)](#volume-shared-filesystems)
- [USER](#user)
- [WORKDIR](#workdir)
## CMD (Default Command or Options)
Recall the optional `COMMAND` in the Docker commandline:

    docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
This command is optional because the person who created the `IMAGE` may
have already provided a default `COMMAND` using the Dockerfile `CMD`. As
the operator (the person running a container from the image), you can
override that `CMD` just by specifying a new `COMMAND`.
If the image also specifies an `ENTRYPOINT` then the `CMD` or `COMMAND`
get appended as arguments to the `ENTRYPOINT`.
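
For example, if an image's Dockerfile ended with `CMD ["/bin/ls", "-l"]`
(an illustrative default), the operator could still run something else
entirely:

    # Ignore the image's default CMD and run a shell instead
    $ docker run -i -t ubuntu /bin/bash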
## ENTRYPOINT (Default Command to Execute at Runtime)

    --entrypoint="": Overwrite the default entrypoint set by the image
or two examples of how to pass more parameters to that ENTRYPOINT:

    docker run -i -t --entrypoint /bin/bash example/redis -c ls -l
    docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help
## EXPOSE (Incoming Ports)
The Dockerfile doesn't give much control over networking, only providing
the `EXPOSE` instruction to give a hint to the operator about what
incoming ports might provide services. The following options work with
or override the Dockerfile's exposed defaults:

    --expose=[]: Expose a port from the container
                without publishing it to your host
    (use 'docker port' to see the actual mapping)
    --link="" : Add link to another container (name:alias)
As mentioned previously, `EXPOSE` (and `--expose`) make a port available
**in** a container for incoming connections. The port number on the
inside of the container (where the service listens) does not need to be
the same number as the port exposed on the outside of the container
(where clients connect), so inside the container you might have an HTTP
service listening on port 80 (and so you `EXPOSE 80` in the Dockerfile),
but outside the container the port might be 42800.
To help a new client container reach the server container's internal
port `--expose`'d by the operator or `EXPOSE`'d by the developer, the
operator has three choices: start the server container with `-P` or
`-p`, or start the client container with `--link`.
If the operator uses `-P` or `-p` then Docker will make the exposed port
accessible on the host and the ports will be available to any client
that can reach the host. To find the map between the host ports and the
exposed ports, use `docker port`.
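
Continuing the HTTP example above, the mapping might be set up like this
(the image and container names are illustrative):

    # Publish container port 80 on host port 42800
    $ docker run -d -p 42800:80 --name http-test example/httpd

    # Or let Docker pick a free host port for every EXPOSEd port
    $ docker run -d -P --name http-auto example/httpd

    # Ask Docker which host port was chosen
    $ docker port http-auto 80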
If the operator uses `--link` when starting the new client container,
then the client container can access the exposed port via a private
networking interface. Docker will set some environment variables in the
client container to help indicate which interface and port to use.
## ENV (Environment Variables)
The operator can **set any environment variable** in the container by
using one or more `-e` flags, even overriding those already defined by
the developer with a Dockerfile `ENV`:

    $ docker run -e "deep=purple" --rm ubuntu /bin/bash -c export
    declare -x HOME="/"
Similarly the operator can set the **hostname** with `-h`.
`--link name:alias` also sets environment variables, using the *alias*
string to define environment variables within the container that give
the IP and PORT information for connecting to the service container.
Let's imagine we have a container running Redis:

    # Start the service container, named redis-name
    $ docker run -d --name redis-name dockerfiles/redis
    $ docker port 4241164edf6f 6379
    2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f
Yet we can get information about the Redis container's exposed ports
with `--link`. Choose an alias that will form a valid environment
variable!
    declare -x SHLVL="1"
    declare -x container="lxc"
And we can use that information to connect from another container as a
client:

    $ docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'
    172.17.0.32:6379>
## VOLUME (Shared Filesystems)

    -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
           If "container-dir" is missing, then docker creates a new volume.
    --volumes-from="": Mount all volumes from the given container(s)
The volumes commands are complex enough to have their own documentation
in section [*Share Directories via
Volumes*](../../use/working_with_volumes/#volume-def). A developer can
define one or more `VOLUME`s associated with an image, but only the
operator can give access from one container to another (or from a
container to a volume mounted on the host).
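
A brief sketch (the host path, container names, and image are
illustrative):

    # Mount a host directory read-only inside the container
    $ docker run -v /var/data:/data:ro ubuntu ls /data

    # Reuse the volumes defined by another, already-running container
    $ docker run --volumes-from data-container ubuntu ls /data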
## USER
The default user within a container is `root` (id = 0), but if the
developer created additional users, those are accessible too. The
developer can set a default user to run the first process with the
Dockerfile `USER` command, but the operator can override it:

    -u="": Username or UID
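
For example, assuming the image's `/etc/passwd` defines a `daemon` user:

    # Run the first process as the 'daemon' user instead of root
    $ docker run -u daemon ubuntu whoami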
## WORKDIR
The default working directory for running binaries within a container is
the root directory (`/`), but the developer can set a different default
with the Dockerfile `WORKDIR` command. The operator can override this
with:

    -w="": Working directory inside the container
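
For example (image is illustrative):

    # Override the image's default working directory
    $ docker run -w /tmp ubuntu pwd
    /tmp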