Docs: auto-conversion fixes plus Markdown markup and structure improvements.

 - Remove redundant characters and other errors caused by the RST->MD
   conversion (e.g. `/#`, `/\`, `\<`, `/>`)
 - Fix broken inter-document links
 - Fix outbound links that are no longer active or have changed
 - Fix lists
 - Fix code blocks
 - Correct apostrophes
 - Replace redundant inline note marks around code with code marks
 - Fix broken image links
 - Remove non-functional title links
 - Correct broken cross-document links
 - Improve readability

Note: This PR does not try to fix/amend:

 - Grammatical errors
 - Lexical errors
 - Linguistic/logical errors, etc.

It aims only to fix the main structural and conversion errors, serving as
a base for further amendments that will cover the rest, including but not
limited to the items mentioned above.

Docker-DCO-1.1-Signed-off-by: O.S. Tezer <ostezer@gmail.com> (github: ostezer)

Update:

 - Fix backtick issues

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)

page_keywords: builder, docker, Dockerfile, automation, image creation
# Dockerfile Reference
**Docker can act as a builder** and read instructions from a text *Dockerfile*
to automate the steps you would otherwise take manually to create an image.
Executing `docker build` will run your steps and commit them along the way,
giving you a final image.
## Usage
To [*build*](../commandline/cli/#cli-build) an image from a source repository,
create a description file called Dockerfile at the root of your repository.
This file will describe the steps to assemble the image.
Then call `docker build` with the path of your source repository as argument
(for example, `.`):
    sudo docker build .
The path to the source repository defines where to find the *context* of
the build. The build is run by the Docker daemon, not by the CLI, so the
whole context must be transferred to the daemon. The Docker CLI reports
`Uploading context` when the context is sent to the daemon.
You can specify a repository and tag at which to save the new image if
the build succeeds:
    sudo docker build -t shykes/myapp .
The Docker daemon will run your steps one-by-one, committing the result
to a new image if necessary, before finally outputting the ID of your
new image. The Docker daemon will automatically clean up the context you
sent.
Note that each instruction is run independently, and causes a new image
to be created - so `RUN cd /tmp` will not have any effect on the next
instructions.
Whenever possible, Docker will re-use the intermediate images,
accelerating `docker build` significantly (indicated by `Using cache`):
    $ docker build -t SvenDowideit/ambassador .
    Uploading context 10.24 kB
    [...]
    ---> 1a5ffc17324d
    Successfully built 1a5ffc17324d
When you're done with your build, you're ready to look into
[*Pushing a repository to its registry*](
../../use/workingwithrepository/#image-push).
## Format
A `#` marker anywhere else in the line will
be treated as an argument. This allows statements like:
    # Comment
    RUN echo 'we are running some # of cool things'
Here is the set of instructions you can use in a Dockerfile
for building images.
## FROM
    FROM <image>
Or
    FROM <image>:<tag>
The `FROM` instruction sets the [*Base Image*](../../terms/image/#base-image-def)
for subsequent instructions. As such, a valid Dockerfile must have `FROM` as
its first instruction. The image can be any valid image; it is especially easy
to start by **pulling an image** from the [*Public Repositories*](
../../use/workingwithrepository/#using-public-repositories).
`FROM` must be the first non-comment instruction in the Dockerfile.
`FROM` can appear multiple times within a single Dockerfile in order to create
multiple images. Simply make a note of the last image id output by the commit
before each new `FROM` command.
If no `tag` is given to the `FROM` instruction, `latest` is assumed. If the
used tag does not exist, an error will be returned.
## MAINTAINER
    MAINTAINER <name>
The `MAINTAINER` instruction allows you to set the *Author* field of the
generated images.
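For instance, a sketch with placeholder author details:

    # Name and address below are purely illustrative
    MAINTAINER John Doe <john.doe@example.com>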
## RUN
RUN has two forms:
- `RUN <command>` (the command is run in a shell - `/bin/sh -c`)
- `RUN ["executable", "param1", "param2"]` (*exec* form)
The `RUN` instruction will execute any commands in a new layer on top of the
current image and commit the results. The resulting committed image will be
used for the next step in the Dockerfile.
Layering `RUN` instructions and generating commits conforms to the core
concepts of Docker where commits are cheap and containers can be created from
any point in an image's history, much like source control.
The *exec* form makes it possible to avoid shell string munging, and to `RUN`
commands using a base image that does not contain `/bin/sh`.
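A quick sketch of both forms (the package and command names are illustrative):

    # Shell form, run via /bin/sh -c
    RUN apt-get update && apt-get install -y curl
    # Exec form, usable on images that lack /bin/sh
    RUN ["/bin/echo", "building"]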
### Known Issues (RUN)
- [Issue 783](https://github.com/dotcloud/docker/issues/783) is about file
permissions problems that can occur when using the AUFS file system. You
might notice it during an attempt to `rm` a file, for example. The issue
describes a workaround.
- [Issue 2424](https://github.com/dotcloud/docker/issues/2424) Locale will
not be set automatically.
## CMD
CMD has three forms:
- `CMD ["executable","param1","param2"]` (like an
*exec*, preferred form)
- `CMD ["param1","param2"]` (as *default
parameters to ENTRYPOINT*)
- `CMD command param1 param2` (as a *shell*)
- `CMD ["executable","param1","param2"]` (like an *exec*, preferred form)
- `CMD ["param1","param2"]` (as *default parameters to ENTRYPOINT*)
- `CMD command param1 param2` (as a *shell*)
There can only be one CMD in a Dockerfile. If you list more than one CMD
then only the last CMD will take effect.
**The main purpose of a CMD is to provide defaults for an executing
container.** These defaults can include an executable, or they can omit
the executable, in which case you must specify an ENTRYPOINT as well.
When used in the shell or exec formats, the `CMD` instruction sets the command
to be executed when running the image.
If you use the *shell* form of the CMD, then the `<command>` will execute in
`/bin/sh -c`:
    FROM ubuntu
    CMD echo "This is a test." | wc -
If you want to **run your** `<command>` **without a shell** then you must
express the command as a JSON array and give the full path to the executable.
**This array form is the preferred format of CMD.** Any additional parameters
must be individually expressed as strings in the array:
    FROM ubuntu
    CMD ["/usr/bin/wc","--help"]
If you would like your container to run the same executable every time, then
you should consider using `ENTRYPOINT` in combination with `CMD`. See
[*ENTRYPOINT*](#entrypoint).
If the user specifies arguments to `docker run` then they will override the
default specified in CMD.
> **Note**:
> Don't confuse `RUN` with `CMD`. `RUN` actually runs a command and commits
> the result; `CMD` does not execute anything at build time, but specifies
> the intended command for the image.
## EXPOSE
    EXPOSE <port> [<port>...]
The `EXPOSE` instruction informs Docker that the container will listen on the
specified network ports at runtime. Docker uses this information to interconnect
containers using links (see
[*links*](../../use/working_with_links_names/#working-with-links-names)),
and to set up port redirection on the host system (see [*Redirect Ports*](
../../use/port_redirection/#port-redirection)).
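For example, an image serving both HTTP and HTTPS might declare (the port
numbers are illustrative):

    EXPOSE 80 443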
## ENV
    ENV <key> <value>
The `ENV` instruction sets the environment variable `<key>` to the value
`<value>`. This value will be passed to all future `RUN` instructions. This is
functionally equivalent to prefixing the command with `<key>=<value>`.
The environment variables set using `ENV` will persist when a container is run
from the resulting image. You can view the values using `docker inspect`, and
change them using `docker run --env <key>=<value>`.
> **Note**:
> One example where this can cause unexpected consequences is setting
> `ENV DEBIAN_FRONTEND noninteractive`, which will persist when the container
> is run interactively; for example: `docker run -t -i image bash`
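As a sketch (the variable name and values are illustrative), a Dockerfile
might declare:

    ENV MY_VAR hello

and the operator could later override it with `docker run --env MY_VAR=world image`.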
## ADD
    ADD <src> <dest>
The `ADD` instruction will copy new files from `<src>` and add them to the
container's filesystem at path `<dest>`.
`<src>` must be the path to a file or directory relative to the source directory
being built (also called the *context* of the build) or a remote file URL.
`<dest>` is the absolute path to which the source will be copied inside the
destination container.
All new files and directories are created with mode 0755, uid and gid 0.
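For instance (the local paths here are illustrative):

    # Copy a file from the build context into the image
    ADD config.json /etc/myapp/config.json
    # Download a remote file into the image as /foobar
    ADD http://example.com/foobar /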
The copy obeys the following rules:
- The `<src>` path must be inside the *context* of the build;
you cannot `ADD ../something /something`, because the first step of a
`docker build` is to send the context directory (and subdirectories) to the
docker daemon.
- If `<src>` is a URL and `<dest>` does not end with a trailing slash, then a
file is downloaded from the URL and copied to `<dest>`.
- If `<src>` is a URL and `<dest>` does end with a trailing slash, then the
filename is inferred from the URL and the file is downloaded to
`<dest>/<filename>`. For instance, `ADD http://example.com/foobar /` would
create the file `/foobar`. The URL must have a nontrivial path so that an
appropriate filename can be discovered in this case (`http://example.com`
will not work).
- If `<src>` is a directory, the entire directory is copied, including
filesystem metadata.
- If `<src>` is a *local* tar archive in a recognized compression format
  (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources
  from *remote* URLs are **not** decompressed. When a directory is copied or
  unpacked, it has the same behavior as `tar -x`: the result is the union of:

    1. whatever existed at the destination path and
    2. the contents of the source tree, with conflicts resolved in favor of
       "2." on a file-by-file basis.

- If `<src>` is any other kind of file, it is copied individually along with
  its metadata. In this case, if `<dest>` ends with a trailing slash `/`, it
  will be considered a directory and the contents of `<src>` will be written
  at `<dest>/base(<src>)`.

- If `<dest>` does not end with a trailing slash, it will be considered a
  regular file and the contents of `<src>` will be written at `<dest>`.

- If `<dest>` doesn't exist, it is created along with all missing directories
  in its path.
## ENTRYPOINT
ENTRYPOINT has two forms:
- `ENTRYPOINT ["executable", "param1", "param2"]`
(like an *exec*, preferred form)
- `ENTRYPOINT command param1 param2` (as a
*shell*)
- `ENTRYPOINT ["executable", "param1", "param2"]`
(like an *exec*, preferred form)
- `ENTRYPOINT command param1 param2`
(as a *shell*)
There can only be one `ENTRYPOINT` in a Dockerfile. If you have more than one
`ENTRYPOINT`, then only the last one in the Dockerfile will have an effect.
An `ENTRYPOINT` helps you to configure a container that you can run as an
executable. That is, when you specify an `ENTRYPOINT`, then the whole container
runs as if it was just that executable.
The `ENTRYPOINT` instruction adds an entry command that will **not** be
overwritten when arguments are passed to `docker run`, unlike the behavior
of `CMD`. This allows arguments to be passed to the entrypoint. i.e.
`docker run <image> -d` will pass the "-d" argument to the ENTRYPOINT.
You can specify parameters either in the ENTRYPOINT JSON array (as in
"like an exec" above), or by using a CMD statement. Parameters in the
ENTRYPOINT will not be overridden by the `docker run`
arguments, but parameters specified via CMD will be overridden
by `docker run` arguments.
Like a `CMD`, you can specify a plain string for the `ENTRYPOINT` and it will
execute in `/bin/sh -c`:
    FROM ubuntu
    ENTRYPOINT wc -l -
For example, that Dockerfile's image will *always* take stdin as input
("-") and print the number of lines ("-l"). If you wanted to make this
optional but default, you could use a CMD:
    CMD ["-l", "-"]
    ENTRYPOINT ["/usr/bin/wc"]
## VOLUME
    VOLUME ["/data"]
The `VOLUME` instruction will create a mount point with the specified name
and mark it as holding externally mounted volumes from native host or other
containers. For more information/examples and mounting instructions via docker
client, refer to [*Share Directories via Volumes*](
../../use/working_with_volumes/#volume-def) documentation.
## USER
    USER daemon
The `USER` instruction sets the username or UID to use when running the image.
## WORKDIR
    WORKDIR /path/to/workdir
The `WORKDIR` instruction sets the working directory for the `RUN`, `CMD` and
`ENTRYPOINT` Dockerfile commands that follow it.
It can be used multiple times in the one Dockerfile. If a relative path
is provided, it will be relative to the path of the previous `WORKDIR`
instruction. For example:
    WORKDIR /a
    WORKDIR b
    WORKDIR c
    RUN pwd
The output of the final `pwd` command in this
Dockerfile would be `/a/b/c`.
## ONBUILD
    ONBUILD [INSTRUCTION]
The `ONBUILD` instruction adds to the image a "trigger" instruction to be
executed at a later time, when the image is used as the base for another
build. [...] daemon which may be customized with user-specific configuration.
For example, if your image is a reusable python application builder, it
will require application source code to be added in a particular
directory, and it might require a build script to be called *after*
that. You can't just call *ADD* and *RUN* now, because you don't yet
have access to the application source code, and it will be different for
each application build. You could simply provide application developers
with a boilerplate Dockerfile to copy-paste into their application, but
that is inefficient, error-prone and difficult to update because it
mixes with application-specific code.
The solution is to use *ONBUILD* to register in advance instructions to
run later, during the next build stage.
Here's how it works:
1. When it encounters an *ONBUILD* instruction, the builder adds a
trigger to the metadata of the image being built. The instruction
does not otherwise affect the current build.
2. At the end of the build, a list of all triggers is stored in the
image manifest, under the key *OnBuild*. They can be inspected with
*docker inspect*.
3. Later the image may be used as a base for a new build, using the
*FROM* instruction. As part of processing the *FROM* instruction,
the downstream builder looks for *ONBUILD* triggers, and executes
them in the same order they were registered. If any of the triggers
fail, the *FROM* instruction is aborted which in turn causes the
build to fail. If all triggers succeed, the FROM instruction
completes and the build continues as usual.
4. Triggers are cleared from the final image after being executed. In
other words they are not inherited by "grand-children" builds.
For example you might add something like this:
    ONBUILD RUN /usr/local/bin/python-build --dir /app/src
    [...]
> **Warning**: Chaining ONBUILD instructions using ONBUILD ONBUILD isn't allowed.
> **Warning**: ONBUILD may not trigger FROM or MAINTAINER instructions.


page_title: Docker Run Reference
page_description: Configure containers at runtime
page_keywords: docker, run, configure, runtime
# Docker Run Reference
**Docker runs processes in isolated containers**. When an operator
executes `docker run`, she starts a process with its
own file system, its own networking, and its own isolated process tree.
The [*Image*](../../terms/image/#image-def) which starts the process may
define defaults related to the binary to run, the networking to expose,
and more, but `docker run` gives final control to
the operator who starts the container from the image. That's the main
reason [*run*](../../commandline/cli/#cli-run) has more options than any
other `docker` command.
Every one of the [*Examples*](../../examples/#example-list) shows
running containers, and so here we try to give more in-depth guidance.
## General Form
As you've seen in the [*Examples*](../../examples/#example-list), the
basic run command takes this form:
    docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
To learn how to interpret the types of `[OPTIONS]`,
see [*Option types*](../../commandline/cli/#cli-options).
The list of `[OPTIONS]` breaks down into two groups:
1. Settings exclusive to operators, including:
    - Detached or Foreground running,
    - Container Identification,
    - Network settings, and
    - Runtime Constraints on CPU and Memory
    - Privileges and LXC Configuration
2. Settings shared between operators and developers, where operators can
   override defaults developers set in images at build time.
Together, the `docker run [OPTIONS]` give complete
control over runtime behavior to the operator, allowing them to override
all defaults set by the developer during `docker build`
and nearly all the defaults set by the Docker runtime itself.
## Operator Exclusive Options
Only the operator (the person executing `docker run`) can set the
following options.
- [Detached vs Foreground](#detached-vs-foreground)
    - [Detached (-d)](#detached-d)
    - [Foreground](#foreground)
- [Container Identification](#container-identification)
    - [Name (name)](#name-name)
    - [PID Equivalent](#pid-equivalent)
- [Network Settings](#network-settings)
- [Clean Up (rm)](#clean-up-rm)
- [Runtime Constraints on CPU and
  Memory](#runtime-constraints-on-cpu-and-memory)
- [Runtime Privilege and LXC
  Configuration](#runtime-privilege-and-lxc-configuration)
## Detached vs Foreground
When starting a Docker container, you must first decide if you want to
run the container in the background in a "detached" mode or in the
default foreground mode:
    -d=false: Detached mode: Run container in the background, print new container id
### Detached (-d)
In detached mode (`-d=true` or just `-d`), all I/O should be done
through network connections or shared volumes because the container is
no longer listening to the commandline where you executed `docker run`.
You can reattach to a detached container with `docker`
[*attach*](commandline/cli/#attach). If you choose to run a
container in the detached mode, then you cannot use the `--rm` option.
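A minimal sketch of a detached run (the command is an arbitrary placeholder):

    $ sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"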
### Foreground
In foreground mode (the default when `-d` is not specified), `docker run`
can start the process in the container and attach the console to the process's
standard input, output, and standard error. It can even pretend to be a TTY
(this is what most commandline executables expect) and pass along signals. All
of that is configurable:
    -a=[]           : Attach to stdin, stdout and/or stderr
    -t=false        : Allocate a pseudo-tty
    --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
    -i=false        : Keep STDIN open even if not attached
If you do not specify `-a` then Docker will [attach everything (stdin,stdout,stderr)](
https://github.com/dotcloud/docker/blob/75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797).
You can specify to which of the three standard streams (`stdin`, `stdout`, `stderr`)
you'd like to connect instead, as in:
    docker run -a stdin -a stdout -i -t ubuntu /bin/bash
For interactive processes (like a shell) you will typically want a tty as well as
persistent standard input (`stdin`), so you'll use `-i -t` together in most
interactive cases.
## Container Identification
### Name (name)
The operator can identify a container in three ways:
- UUID long identifier
  ("f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778")
- UUID short identifier ("f78375b1c487")
- Name ("evil\_ptolemy")
- Name ("evil_ptolemy")
The UUID identifiers come from the Docker daemon, and if you do not
assign a name to the container with `--name` then
[...] name when defining
(or any other place you need to identify a container). This works for
both background and foreground Docker containers.
### PID Equivalent
And finally, to help with automation, you can have Docker write the
container ID out to a file of your choosing. This is similar to how some
programs might write out their process ID to a file (you've seen them as
PID files):
    --cidfile="": Write the container ID to the file
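For example (the file path is illustrative):

    $ sudo docker run -d --cidfile /tmp/docker_test.cid ubuntu /bin/sh -c "sleep 60"
    $ cat /tmp/docker_test.cid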
## Network Settings
    -n=true   : Enable networking for this container
    --dns=[]  : Set custom dns servers for the container
[...] files or STDIN/STDOUT only.
Your container will use the same DNS servers as the host by default, but
you can override this with `--dns`.
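For example (the DNS server address is illustrative):

    $ sudo docker run --dns 8.8.8.8 ubuntu cat /etc/resolv.conf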
## Clean Up (rm)
By default a container's file system persists even after the container
exits. This makes debugging a lot easier (since you can inspect the
final state) and you retain all your data by default. But if you are
running short-term **foreground** processes, these container file
systems can really pile up. If instead you'd like Docker to
**automatically clean up the container and remove the file system when
the container exits**, you can add the `--rm` flag:
    --rm=false: Automatically remove the container when it exits (incompatible with -d)
## Runtime Constraints on CPU and Memory
The operator can also adjust the performance parameters of the
container:
    [...]

By default all containers run at the same priority and get the same
proportion of CPU cycles, but you can
tell the kernel to give more shares of CPU time to one or more
containers when you start them via Docker.
## Runtime Privilege and LXC Configuration
    --privileged=false: Give extended privileges to this container
    --lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
By default, Docker containers are "unprivileged" and cannot, for
example, run a Docker daemon inside a Docker container. This is because
by default a container is not allowed to access any devices, but a
"privileged" container is given access to all devices (see
[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go)
and documentation on [cgroups
devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
"privileged" container is given access to all devices (see [lxc-template.go](
https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go)
and documentation on [cgroups devices](
https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
When the operator executes `docker run --privileged`, Docker will enable
access to all devices on the host as well as set some configuration
in AppArmor to allow the container nearly all the same access to the
host as processes running outside containers on the host. Additional
information about running with `--privileged` is available on the
[Docker Blog](http://blog.docker.io/2013/09/docker-can-now-run-within-docker/).
If the Docker daemon was started using the `lxc` exec-driver
(`docker -d --exec-driver=lxc`) then the operator can also specify LXC options
using one or more `--lxc-conf` parameters. These can be new parameters or
override existing parameters from the [lxc-template.go](
https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go).
Note that in the future, a given host's Docker daemon may not use LXC, so this
is an implementation-specific configuration meant for operators already
familiar with using LXC directly.
## Overriding Dockerfile Image Defaults
When a developer builds an image from a [*Dockerfile*](builder/#dockerbuilder)
or when she commits it, the developer can set a number of default parameters
that take effect when the image starts up as a container.
Four of the Dockerfile commands cannot be overridden at runtime: `FROM`,
`MAINTAINER`, `RUN`, and `ADD`. Everything else has a corresponding override
in `docker run`. We'll go through what the developer might have set in each
Dockerfile instruction and how the operator can override that setting.
- [CMD (Default Command or Options)](#cmd-default-command-or-options)
- [ENTRYPOINT (Default Command to Execute at Runtime)](
  #entrypoint-default-command-to-execute-at-runtime)
- [EXPOSE (Incoming Ports)](#expose-incoming-ports)
- [ENV (Environment Variables)](#env-environment-variables)
- [VOLUME (Shared Filesystems)](#volume-shared-filesystems)
- [USER](#user)
- [WORKDIR](#workdir)
## CMD (Default Command or Options)
Recall the optional `COMMAND` in the Docker commandline:
    docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
This command is optional because the person who created the `IMAGE` may have
already provided a default `COMMAND` using the Dockerfile `CMD`. As the
operator (the person running a container from the image), you can override that
`CMD` just by specifying a new `COMMAND`.
If the image also specifies an `ENTRYPOINT` then the `CMD` or `COMMAND` get
appended as arguments to the `ENTRYPOINT`.
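For instance (the image is illustrative), the trailing command below overrides
whatever default `CMD` the image was built with:

    $ sudo docker run ubuntu /bin/ps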
## ENTRYPOINT (Default Command to Execute at Runtime)
    --entrypoint="": Overwrite the default entrypoint set by the image
[...] or two examples of how to pass more parameters to that ENTRYPOINT:
    docker run -i -t --entrypoint /bin/bash example/redis -c ls -l
    docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help
## EXPOSE (Incoming Ports)
The Dockerfile doesn't give much control over networking, only providing the
`EXPOSE` instruction to give a hint to the operator about what incoming ports
might provide services. The following options work with or override the
Dockerfile's exposed defaults:
    --expose=[]: Expose a port from the container
                 without publishing it to your host
    [...]
                 (use 'docker port' to see the actual mapping)
    --link="" : Add link to another container (name:alias)
As mentioned previously, `EXPOSE` (and `--expose`) make a port available **in**
a container for incoming connections. The port number on the inside of the
container (where the service listens) does not need to be the same number as the
port exposed on the outside of the container (where clients connect), so inside
the container you might have an HTTP service listening on port 80 (and so you
`EXPOSE 80` in the Dockerfile), but outside the container the port might be
42800.
To help a new client container reach the server container's internal port
`--expose`'d by the operator or `EXPOSE`'d by the developer, the operator
has three choices: start the server container with `-P` or `-p`, or start
the client container with `--link`.
If the operator uses `-P` or `-p` then Docker will make the exposed port
accessible on the host and the ports will be available to any client that
can reach the host. To find the map between the host ports and the exposed
ports, use `docker port`.
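For example (the image name and port numbers are illustrative), mapping host
port 42800 to the container's exposed port 80:

    $ sudo docker run -d -p 42800:80 example/webapp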
If the operator uses `--link` when starting the new client container, then the
client container can access the exposed port via a private networking interface.
Docker will set some environment variables in the client container to help
indicate which interface and port to use.
## ENV (Environment Variables)
The operator can **set any environment variable** in the container by using one
or more `-e` flags, even overriding those already defined by the developer with
a Dockerfile `ENV`:
    $ docker run -e "deep=purple" --rm ubuntu /bin/bash -c export
    declare -x HOME="/"
    [...]
Similarly the operator can set the **hostname** with `-h`.
`--link name:alias` also sets environment variables, using the *alias* string to
define environment variables within the container that give the IP and PORT
information for connecting to the service container. Let's imagine we have a
container running Redis:
    # Start the service container, named redis-name
    $ docker run -d --name redis-name dockerfiles/redis
    [...]
    $ docker port 4241164edf6f 6379
    2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f
Yet we can get information about the Redis container's exposed ports
with `--link`. Choose an alias that will form a valid environment variable!
    [...]
    declare -x SHLVL="1"
    declare -x container="lxc"
And we can use that information to connect from another container as a client:
    $ docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'
    172.17.0.32:6379>
## VOLUME (Shared Filesystems)
    -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
           If "container-dir" is missing, then docker creates a new volume.
    --volumes-from="": Mount all volumes from the given container(s)
The volumes commands are complex enough to have their own documentation in
section [*Share Directories via Volumes*](../../use/working_with_volumes/#volume-def).
A developer can define one or more `VOLUME`s associated with an image, but only the
operator can give access from one container to another (or from a container to a
volume mounted on the host).
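For instance (the host and container paths are illustrative):

    # Bind-mount a host directory read-write into the container
    $ sudo docker run -v /tmp/data:/data:rw ubuntu ls /data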
## USER
The default user within a container is `root` (id = 0), but if the developer
created additional users, those are accessible too. The developer can set a
default user to run the first process with the `Dockerfile USER` command,
but the operator can override it:
-u="": Username or UID
## WORKDIR
The default working directory for running binaries within a container is the
root directory (`/`), but the developer can set a different default with the
Dockerfile `WORKDIR` command. The operator can override this with:
-w="": Working directory inside the container