now, with shiny markdown

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@fosiki.com> (github: SvenDowideit)
This commit is contained in:
Sven Dowideit 2014-04-16 10:53:12 +10:00 committed by Tibor Vass
parent d39c8aea47
commit 88a30886e5
3 changed files with 2102 additions and 0 deletions


@@ -0,0 +1,510 @@
page_title: Dockerfile Reference
page_description: Dockerfiles use a simple DSL which allows you to automate the steps you would normally manually take to create an image.
page_keywords: builder, docker, Dockerfile, automation, image creation
# Dockerfile Reference
**Docker can act as a builder** and read instructions from a text
`Dockerfile` to automate the steps you would
otherwise take manually to create an image. Executing
`docker build` will run your steps and commit them
along the way, giving you a final image.
## Usage
To [*build*](../commandline/cli/#cli-build) an image from a source
repository, create a description file called `Dockerfile`
at the root of your repository. This file will describe the
steps to assemble the image.
Then call `docker build` with the path of your
source repository as argument (for example, `.`):
> `sudo docker build .`
The path to the source repository defines where to find the *context* of
the build. The build is run by the Docker daemon, not by the CLI, so the
whole context must be transferred to the daemon. The Docker CLI reports
"Uploading context" when the context is sent to the daemon.
You can specify a repository and tag at which to save the new image if
the build succeeds:
> `sudo docker build -t shykes/myapp .`
The Docker daemon will run your steps one-by-one, committing the result
to a new image if necessary, before finally outputting the ID of your
new image. The Docker daemon will automatically clean up the context you
sent.
Note that each instruction is run independently, and causes a new image
to be created - so `RUN cd /tmp` will not have any
effect on the next instructions.
Whenever possible, Docker will re-use the intermediate images,
accelerating `docker build` significantly (indicated
by `Using cache`):
$ docker build -t SvenDowideit/ambassador .
Uploading context 10.24 kB
Uploading context
Step 1 : FROM docker-ut
---> cbba202fe96b
Step 2 : MAINTAINER SvenDowideit@home.org.au
---> Using cache
---> 51182097be13
Step 3 : CMD env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/' | sh && top
---> Using cache
---> 1a5ffc17324d
Successfully built 1a5ffc17324d
When you're done with your build, you're ready to look into [*Pushing a
repository to its
registry*](../../use/workingwithrepository/#image-push).
## Format
Here is the format of the Dockerfile:
# Comment
INSTRUCTION arguments
Instructions are not case-sensitive; however, convention is for them to
be UPPERCASE in order to distinguish them from arguments more easily.
Docker evaluates the instructions in a Dockerfile in order. **The first
instruction must be \`FROM\`** in order to specify the [*Base
Image*](../../terms/image/#base-image-def) from which you are building.
Docker will treat lines that *begin* with `#` as a
comment. A `#` marker anywhere else in the line will
be treated as an argument. This allows statements like:
# Comment
RUN echo 'we are running some # of cool things'
Here is the set of instructions you can use in a `Dockerfile`
for building images.
## `FROM`
> `FROM <image>`
Or
> `FROM <image>:<tag>`
The `FROM` instruction sets the [*Base
Image*](../../terms/image/#base-image-def) for subsequent instructions.
As such, a valid Dockerfile must have `FROM` as its
first instruction. The image can be any valid image; it is especially
easy to start by **pulling an image** from the [*Public
Repositories*](../../use/workingwithrepository/#using-public-repositories).
`FROM` must be the first non-comment instruction in
the `Dockerfile`.
`FROM` can appear multiple times within a single
Dockerfile in order to create multiple images. Simply make a note of the
last image id output by the commit before each new `FROM`
command.
If no `tag` is given to the `FROM`
instruction, `latest` is assumed. If the
used tag does not exist, an error will be returned.
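For instance, either of the following would be a valid first line (the tag shown here is only illustrative):

    # no tag given, so "latest" is assumed
    FROM ubuntu

    # explicit tag
    FROM ubuntu:12.04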
## `MAINTAINER`
> `MAINTAINER <name>`
The `MAINTAINER` instruction allows you to set the
*Author* field of the generated images.
## `RUN`
RUN has 2 forms:
- `RUN <command>` (the command is run in a shell -
`/bin/sh -c`)
- `RUN ["executable", "param1", "param2"]` (*exec*
form)
The `RUN` instruction will execute any commands in a
new layer on top of the current image and commit the results. The
resulting committed image will be used for the next step in the
Dockerfile.
Layering `RUN` instructions and generating commits
conforms to the core concepts of Docker where commits are cheap and
containers can be created from any point in an image's history, much
like source control.
The *exec* form makes it possible to avoid shell string munging, and to
`RUN` commands using a base image that does not
contain `/bin/sh`.
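As a minimal sketch of the two forms (the package installed here is only illustrative):

    # shell form, run via /bin/sh -c
    RUN apt-get update && apt-get install -y curl

    # exec form, useful when the base image does not provide /bin/sh
    RUN ["/usr/bin/apt-get", "install", "-y", "curl"]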
### Known Issues (RUN)
- [Issue 783](https://github.com/dotcloud/docker/issues/783) is about
file permissions problems that can occur when using the AUFS file
system. You might notice it during an attempt to `rm`
a file, for example. The issue describes a workaround.
- [Issue 2424](https://github.com/dotcloud/docker/issues/2424) Locale
will not be set automatically.
## `CMD`
CMD has three forms:
- `CMD ["executable","param1","param2"]` (like an
*exec*, preferred form)
- `CMD ["param1","param2"]` (as *default
parameters to ENTRYPOINT*)
- `CMD command param1 param2` (as a *shell*)
There can only be one CMD in a Dockerfile. If you list more than one CMD
then only the last CMD will take effect.
**The main purpose of a CMD is to provide defaults for an executing
container.** These defaults can include an executable, or they can omit
the executable, in which case you must specify an ENTRYPOINT as well.
When used in the shell or exec formats, the `CMD`
instruction sets the command to be executed when running the image.
If you use the *shell* form of the CMD, then the `<command>`
will execute in `/bin/sh -c`:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to **run your** `<command>` **without a
shell** then you must express the command as a JSON array and give the
full path to the executable. **This array form is the preferred format
of CMD.** Any additional parameters must be individually expressed as
strings in the array:
FROM ubuntu
CMD ["/usr/bin/wc","--help"]
If you would like your container to run the same executable every time,
then you should consider using `ENTRYPOINT` in
combination with `CMD`. See
[*ENTRYPOINT*](#dockerfile-entrypoint).
If the user specifies arguments to `docker run` then
they will override the default specified in CMD.
Note
Don't confuse `RUN` with `CMD`.
`RUN` actually runs a command and commits the
result; `CMD` does not execute anything at build
time, but specifies the intended command for the image.
## `EXPOSE`
> `EXPOSE <port> [<port>...]`
The `EXPOSE` instruction informs Docker that the
container will listen on the specified network ports at runtime. Docker
uses this information to interconnect containers using links (see
[*links*](../../use/working_with_links_names/#working-with-links-names)),
and to set up port redirection on the host system (see [*Redirect
Ports*](../../use/port_redirection/#port-redirection)).
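A minimal sketch, assuming a service inside the image listens on port 80:

    FROM ubuntu
    # the HTTP service inside this image listens on port 80
    EXPOSE 80

At run time the operator can then publish that port on the host, for example with `docker run -p 8080:80 <image>`.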
## `ENV`
> `ENV <key> <value>`
The `ENV` instruction sets the environment variable
`<key>` to the value `<value>`.
This value will be passed to all future `RUN`
instructions. This is functionally equivalent to prefixing the command
with `<key>=<value>`.
The environment variables set using `ENV` will
persist when a container is run from the resulting image. You can view
the values using `docker inspect`, and change them
using `docker run --env <key>=<value>`.
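A minimal sketch (the variable name and value are only illustrative):

    FROM ubuntu
    ENV APP_HOME /opt/app
    RUN echo $APP_HOME

At run time, `docker run --env APP_HOME=/srv/app <image> env` would override the value set above.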
Note
One example where this can cause unexpected consequences is setting
`ENV DEBIAN_FRONTEND noninteractive`, which will
persist when the container is run interactively; for example:
`docker run -t -i image bash`
## `ADD`
> `ADD <src> <dest>`
The `ADD` instruction will copy new files from
`<src>` and add them to the container's filesystem at path
`<dest>`.
`<src>` must be the path to a file or directory
relative to the source directory being built (also called the *context*
of the build) or a remote file URL.
`<dest>` is the absolute path to which the source
will be copied inside the destination container.
All new files and directories are created with mode 0755, uid and gid 0.
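As a minimal sketch (the file and directory names here are only illustrative):

    FROM ubuntu
    # copy a single file from the build context into the image
    ADD run.sh /usr/local/bin/run.sh
    # copy a whole directory; its contents end up under /app/src
    ADD src /app/src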
Note
If you build using STDIN (`docker build - < somefile`),
there is no build context, so the Dockerfile can only
contain a URL-based ADD statement.
Note
If your URL files are protected using authentication, you will need to
use `RUN wget`, `RUN curl`,
or another tool from within the container, as ADD does not support
authentication.
The copy obeys the following rules:
- The `<src>` path must be inside the *context* of
the build; you cannot `ADD ../something /something`,
because the first step of a `docker build`
is to send the context directory (and subdirectories) to
the docker daemon.
- If `<src>` is a URL and `<dest>`
does not end with a trailing slash, then a file is
downloaded from the URL and copied to `<dest>`.
- If `<src>` is a URL and `<dest>`
does end with a trailing slash, then the filename is
inferred from the URL and the file is downloaded to
`<dest>/<filename>`. For instance,
`ADD http://example.com/foobar /` would create
the file `/foobar`. The URL must have a
nontrivial path so that an appropriate filename can be discovered in
this case (`http://example.com` will not work).
- If `<src>` is a directory, the entire directory
is copied, including filesystem metadata.
- If `<src>` is a *local* tar archive in a
recognized compression format (identity, gzip, bzip2 or xz) then it
is unpacked as a directory. Resources from *remote* URLs are **not**
decompressed.
When a directory is copied or unpacked, it has the same behavior as
`tar -x`: the result is the union of
1. whatever existed at the destination path and
2. the contents of the source tree,
with conflicts resolved in favor of "2." on a file-by-file basis.
- If `<src>` is any other kind of file, it is
copied individually along with its metadata. In this case, if
`<dest>` ends with a trailing slash
`/`, it will be considered a directory and the
contents of `<src>` will be written at
`<dest>/base(<src>)`.
- If `<dest>` does not end with a trailing slash,
it will be considered a regular file and the contents of
`<src>` will be written at `<dest>`.
- If `<dest>` doesn't exist, it is created along
with all missing directories in its path.
## `ENTRYPOINT`
ENTRYPOINT has two forms:
- `ENTRYPOINT ["executable", "param1", "param2"]`
(like an *exec*, preferred form)
- `ENTRYPOINT command param1 param2` (as a
*shell*)
There can only be one `ENTRYPOINT` in a Dockerfile.
If you have more than one `ENTRYPOINT`, then only
the last one in the Dockerfile will have an effect.
An `ENTRYPOINT` helps you to configure a container
that you can run as an executable. That is, when you specify an
`ENTRYPOINT`, then the whole container runs as if it
was just that executable.
The `ENTRYPOINT` instruction adds an entry command
that will **not** be overwritten when arguments are passed to
`docker run`, unlike the behavior of `CMD`.
This allows arguments to be passed to the entrypoint. For example,
`docker run <image> -d` will pass the "-d" argument
to the ENTRYPOINT.
You can specify parameters either in the ENTRYPOINT JSON array (as in
"like an exec" above), or by using a CMD statement. Parameters in the
ENTRYPOINT will not be overridden by the `docker run`
arguments, but parameters specified via CMD will be overridden
by `docker run` arguments.
Like a `CMD`, you can specify a plain string for the
ENTRYPOINT and it will execute in `/bin/sh -c`:
FROM ubuntu
ENTRYPOINT wc -l -
For example, that Dockerfile's image will *always* take stdin as input
("-") and print the number of lines ("-l"). If you wanted to make this
optional but default, you could use a CMD:
FROM ubuntu
CMD ["-l", "-"]
ENTRYPOINT ["/usr/bin/wc"]
## `VOLUME`
> `VOLUME ["/data"]`
The `VOLUME` instruction will create a mount point
with the specified name and mark it as holding externally mounted
volumes from native host or other containers. For more
information/examples and mounting instructions via the docker client, refer
to [*Share Directories via
Volumes*](../../use/working_with_volumes/#volume-def) documentation.
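A minimal sketch:

    FROM ubuntu
    RUN mkdir /data && echo "hello" > /data/greeting
    # mark /data as an externally mountable volume
    VOLUME ["/data"]

Another container could then reach that data at run time with `docker run --volumes-from <container>`.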
## `USER`
> `USER daemon`
The `USER` instruction sets the username or UID to
use when running the image.
## `WORKDIR`
> `WORKDIR /path/to/workdir`
The `WORKDIR` instruction sets the working directory
for the `RUN`, `CMD` and
`ENTRYPOINT` Dockerfile commands that follow it.
It can be used multiple times in the same Dockerfile. If a relative path
is provided, it will be relative to the path of the previous
`WORKDIR` instruction. For example:
    WORKDIR /a
    WORKDIR b
    WORKDIR c
    RUN pwd
The output of the final `pwd` command in this
Dockerfile would be `/a/b/c`.
## `ONBUILD`
> `ONBUILD [INSTRUCTION]`
The `ONBUILD` instruction adds to the image a
"trigger" instruction to be executed at a later time, when the image is
used as the base for another build. The trigger will be executed in the
context of the downstream build, as if it had been inserted immediately
after the *FROM* instruction in the downstream Dockerfile.
Any build instruction can be registered as a trigger.
This is useful if you are building an image which will be used as a base
to build other images, for example an application build environment or a
daemon which may be customized with user-specific configuration.
For example, if your image is a reusable python application builder, it
will require application source code to be added in a particular
directory, and it might require a build script to be called *after*
that. You can't just call *ADD* and *RUN* now, because you don't yet
have access to the application source code, and it will be different for
each application build. You could simply provide application developers
with a boilerplate Dockerfile to copy-paste into their application, but
that is inefficient, error-prone and difficult to update because it
mixes with application-specific code.
The solution is to use *ONBUILD* to register in advance instructions to
run later, during the next build stage.
Here's how it works:
1. When it encounters an *ONBUILD* instruction, the builder adds a
trigger to the metadata of the image being built. The instruction
does not otherwise affect the current build.
2. At the end of the build, a list of all triggers is stored in the
image manifest, under the key *OnBuild*. They can be inspected with
*docker inspect*.
3. Later the image may be used as a base for a new build, using the
*FROM* instruction. As part of processing the *FROM* instruction,
the downstream builder looks for *ONBUILD* triggers, and executes
them in the same order they were registered. If any of the triggers
fail, the *FROM* instruction is aborted which in turn causes the
build to fail. If all triggers succeed, the FROM instruction
completes and the build continues as usual.
4. Triggers are cleared from the final image after being executed. In
other words they are not inherited by "grand-children" builds.
For example you might add something like this:
[...]
ONBUILD ADD . /app/src
ONBUILD RUN /usr/local/bin/python-build --dir /app/src
[...]
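In a downstream build, a sketch like the following (the image name is only illustrative) would fire those triggers immediately after its `FROM` line, against the downstream build's own context:

    # Downstream Dockerfile
    FROM my-python-builder
    # at this point the ONBUILD ADD and ONBUILD RUN registered above
    # have already been executed against this application's source tree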
Warning
Chaining ONBUILD instructions using ONBUILD ONBUILD isn't allowed.
Warning
ONBUILD may not trigger FROM or MAINTAINER instructions.
## Dockerfile Examples
# Nginx
#
# VERSION 0.0.1
FROM ubuntu
MAINTAINER Guillaume J. Charmes <guillaume@docker.com>
# make sure the package repository is up to date
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y inotify-tools nginx apache2 openssh-server
# Firefox over VNC
#
# VERSION 0.3
FROM ubuntu
# make sure the package repository is up to date
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
# Install vnc, xvfb in order to create a 'fake' display and firefox
RUN apt-get install -y x11vnc xvfb firefox
RUN mkdir /.vnc
# Setup a password
RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart firefox (might not be the best way, but it does the trick)
RUN bash -c 'echo "firefox" >> /.bashrc'
EXPOSE 5900
CMD ["x11vnc", "-forever", "-usepw", "-create"]
# Multiple images example
#
# VERSION 0.1
FROM ubuntu
RUN echo foo > bar
# Will output something like ===> 907ad6c2736f
FROM ubuntu
RUN echo moo > oink
# Will output something like ===> 695d7793cbe4
# You'll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with
# /oink.

File diff suppressed because it is too large


@@ -0,0 +1,422 @@
page_title: Docker Run Reference
page_description: Configure containers at runtime
page_keywords: docker, run, configure, runtime
# [Docker Run Reference](#id2)
**Docker runs processes in isolated containers**. When an operator
executes `docker run`, she starts a process with its
own file system, its own networking, and its own isolated process tree.
The [*Image*](../../terms/image/#image-def) which starts the process may
define defaults related to the binary to run, the networking to expose,
and more, but `docker run` gives final control to
the operator who starts the container from the image. That's the main
reason [*run*](../commandline/cli/#cli-run) has more options than any
other `docker` command.
Every one of the [*Examples*](../../examples/#example-list) shows
running containers, and so here we try to give more in-depth guidance.
## [General Form](#id3)
As you've seen in the [*Examples*](../../examples/#example-list), the
basic run command takes this form:
docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
To learn how to interpret the types of `[OPTIONS]`,
see [*Option types*](../commandline/cli/#cli-options).
The list of `[OPTIONS]` breaks down into two groups:
1. Settings exclusive to operators, including:
- Detached or Foreground running,
- Container Identification,
- Network settings, and
- Runtime Constraints on CPU and Memory
- Privileges and LXC Configuration
2. Settings shared between operators and developers, where operators can
override defaults developers set in images at build time.
Together, the `docker run [OPTIONS]` give complete
control over runtime behavior to the operator, allowing them to override
all defaults set by the developer during `docker build`
and nearly all the defaults set by the Docker runtime itself.
## [Operator Exclusive Options](#id4)
Only the operator (the person executing `docker run`)
can set the following options.
- [Detached vs Foreground](#detached-vs-foreground)
- [Detached (-d)](#detached-d)
- [Foreground](#foreground)
- [Container Identification](#container-identification)
- [Name (name)](#name-name)
- [PID Equivalent](#pid-equivalent)
- [Network Settings](#network-settings)
- [Clean Up (rm)](#clean-up-rm)
- [Runtime Constraints on CPU and
Memory](#runtime-constraints-on-cpu-and-memory)
- [Runtime Privilege and LXC
Configuration](#runtime-privilege-and-lxc-configuration)
### [Detached vs Foreground](#id2)
When starting a Docker container, you must first decide if you want to
run the container in the background in a "detached" mode or in the
default foreground mode:
-d=false: Detached mode: Run container in the background, print new container id
#### [Detached (-d)](#id3)
In detached mode (`-d=true` or just `-d`),
all I/O should be done through network connections or shared
volumes because the container is no longer listening to the commandline
where you executed `docker run`. You can reattach to
a detached container with `docker`
[*attach*](../commandline/cli/#cli-attach). If you choose to run a
container in the detached mode, then you cannot use the `--rm`
option.
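For example, a minimal detached run and a later re-attach might look like this (the container ID shown is a placeholder):

    # start a long-running process in the background; the new container ID is printed
    $ sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
    # reattach to it later
    $ sudo docker attach <container_id>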
#### [Foreground](#id4)
In foreground mode (the default when `-d` is not
specified), `docker run` can start the process in
the container and attach the console to the process's standard input,
output, and standard error. It can even pretend to be a TTY (this is
what most commandline executables expect) and pass along signals. All of
that is configurable:
-a=[] : Attach to ``stdin``, ``stdout`` and/or ``stderr``
-t=false : Allocate a pseudo-tty
--sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
-i=false : Keep STDIN open even if not attached
If you do not specify `-a` then Docker will [attach
everything
(stdin,stdout,stderr)](https://github.com/dotcloud/docker/blob/75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797).
You can specify to which of the three standard streams
(`stdin`, `stdout`,
(`stdin`, `stdout`,
`stderr`) you'd like to connect instead, as in:
docker run -a stdin -a stdout -i -t ubuntu /bin/bash
For interactive processes (like a shell) you will typically want a tty
as well as persistent standard input (`stdin`), so
you'll use `-i -t` together in most interactive
cases.
### [Container Identification](#id5)
#### [Name (name)](#id6)
The operator can identify a container in three ways:
- UUID long identifier
("f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778")
- UUID short identifier ("f78375b1c487")
- Name ("evil\_ptolemy")
The UUID identifiers come from the Docker daemon, and if you do not
assign a name to the container with `--name` then
the daemon will also generate a random string name. The name can
become a handy way to add meaning to a container since you can use this
name when defining
[*links*](../../use/working_with_links_names/#working-with-links-names)
(or any other place you need to identify a container). This works for
both background and foreground Docker containers.
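For example (the image and container names here are only illustrative):

    # give the container an explicit name
    $ sudo docker run -d --name web mywebimage
    # the name can now be used anywhere a container must be identified
    $ sudo docker logs web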
#### [PID Equivalent](#id7)
And finally, to help with automation, you can have Docker write the
container ID out to a file of your choosing. This is similar to how some
programs might write out their process ID to a file (you've seen them as
PID files):
--cidfile="": Write the container ID to the file
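A small sketch of how a script might use it:

    # record the new container's ID, then stop it by reading the file back
    $ sudo docker run --cidfile=/tmp/web.cid -d ubuntu sleep 600
    $ sudo docker stop $(cat /tmp/web.cid)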
### [Network Settings](#id8)
-n=true : Enable networking for this container
--dns=[] : Set custom dns servers for the container
By default, all containers have networking enabled and they can make any
outgoing connections. The operator can completely disable networking
with `docker run -n` which disables all incoming and
outgoing networking. In cases like this, you would perform I/O through
files or STDIN/STDOUT only.
Your container will use the same DNS servers as the host by default, but
you can override this with `--dns`.
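For example, to point a container at specific DNS servers:

    $ sudo docker run --dns 8.8.8.8 --dns 8.8.4.4 ubuntu cat /etc/resolv.conf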
### [Clean Up (rm)](#id9)
By default a container's file system persists even after the container
exits. This makes debugging a lot easier (since you can inspect the
final state) and you retain all your data by default. But if you are
running short-term **foreground** processes, these container file
systems can really pile up. If instead you'd like Docker to
**automatically clean up the container and remove the file system when
the container exits**, you can add the `--rm` flag:
--rm=false: Automatically remove the container when it exits (incompatible with -d)
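For example, a one-off command whose container is removed as soon as it exits:

    $ sudo docker run --rm ubuntu echo "one-off job"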
### [Runtime Constraints on CPU and Memory](#id10)
The operator can also adjust the performance parameters of the
container:
-m="": Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
-c=0 : CPU shares (relative weight)
The operator can constrain the memory available to a container easily
with `docker run -m`. If the host supports swap
memory, then the `-m` memory setting can be larger
than physical RAM.
Similarly the operator can increase the priority of this container with
the `-c` option. By default, all containers run at
the same priority and get the same proportion of CPU cycles, but you can
tell the kernel to give more shares of CPU time to one or more
containers when you start them via Docker.
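For example (assuming the default CPU share weight of 1024, so 2048 is double weight):

    # cap memory at 512 MB and give the container twice the default CPU share
    $ sudo docker run -m 512m -c 2048 -i -t ubuntu /bin/bash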
### [Runtime Privilege and LXC Configuration](#id11)
--privileged=false: Give extended privileges to this container
--lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
By default, Docker containers are "unprivileged" and cannot, for
example, run a Docker daemon inside a Docker container. This is because
by default a container is not allowed to access any devices, but a
"privileged" container is given access to all devices (see
[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go)
and documentation on [cgroups
devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
When the operator executes `docker run --privileged`,
Docker will enable access to all devices on the host as
well as set some configuration in AppArmor to allow the container nearly
all the same access to the host as processes running outside containers
on the host. Additional information about running with
`--privileged` is available on the [Docker
Blog](http://blog.docker.io/2013/09/docker-can-now-run-within-docker/).
If the Docker daemon was started using the `lxc`
exec-driver (`docker -d --exec-driver=lxc`) then the
operator can also specify LXC options using one or more
`--lxc-conf` parameters. These can be new parameters
or override existing parameters from the
[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go).
Note that in the future, a given host's Docker daemon may not use LXC,
so this is an implementation-specific configuration meant for operators
already familiar with using LXC directly.
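For example:

    # give the container extended privileges
    $ sudo docker run --privileged -i -t ubuntu /bin/bash

    # pin the container to CPUs 0 and 1 (lxc exec-driver only)
    $ sudo docker run --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" -i -t ubuntu /bin/bash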
## Overriding `Dockerfile` Image Defaults
When a developer builds an image from a
[*Dockerfile*](../builder/#dockerbuilder) or when she commits it, the
developer can set a number of default parameters that take effect when
the image starts up as a container.
Four of the `Dockerfile` commands cannot be
overridden at runtime: `FROM, MAINTAINER, RUN`, and
`ADD`. Everything else has a corresponding override
in `docker run`. We'll go through what the developer
might have set in each `Dockerfile` instruction and
how the operator can override that setting.
- [CMD (Default Command or Options)](#cmd-default-command-or-options)
- [ENTRYPOINT (Default Command to Execute at
Runtime)](#entrypoint-default-command-to-execute-at-runtime)
- [EXPOSE (Incoming Ports)](#expose-incoming-ports)
- [ENV (Environment Variables)](#env-environment-variables)
- [VOLUME (Shared Filesystems)](#volume-shared-filesystems)
- [USER](#user)
- [WORKDIR](#workdir)
### [CMD (Default Command or Options)](#id12)
Recall the optional `COMMAND` in the Docker
commandline:
docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
This command is optional because the person who created the
`IMAGE` may have already provided a default
`COMMAND` using the `Dockerfile`
`CMD`. As the operator (the person running a
container from the image), you can override that `CMD`
just by specifying a new `COMMAND`.
If the image also specifies an `ENTRYPOINT` then the
`CMD` or `COMMAND` get appended
as arguments to the `ENTRYPOINT`.
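For example (the image name is only illustrative):

    # run the image's default CMD
    $ sudo docker run myimage
    # override the default CMD with an explicit COMMAND
    $ sudo docker run myimage /bin/ls -l /tmp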
### [ENTRYPOINT (Default Command to Execute at Runtime)](#id13)
--entrypoint="": Overwrite the default entrypoint set by the image
The ENTRYPOINT of an image is similar to a `COMMAND`
because it specifies what executable to run when the container starts,
but it is (purposely) more difficult to override. The
`ENTRYPOINT` gives a container its default nature or
behavior, so that when you set an `ENTRYPOINT` you
can run the container *as if it were that binary*, complete with default
options, and you can pass in more options via the `COMMAND`.
But sometimes an operator may want to run something else
inside the container, so you can override the default
`ENTRYPOINT` at runtime by using a string to specify
the new `ENTRYPOINT`. Here is an example of how to
run a shell in a container that has been set up to automatically run
something else (like `/usr/bin/redis-server`):
docker run -i -t --entrypoint /bin/bash example/redis
or two examples of how to pass more parameters to that ENTRYPOINT:
docker run -i -t --entrypoint /bin/bash example/redis -c ls -l
docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help
### [EXPOSE (Incoming Ports)](#id14)
The `Dockerfile` doesn't give much control over
networking, only providing the `EXPOSE` instruction
to give a hint to the operator about what incoming ports might provide
services. The following options work with or override the
`Dockerfile`'s exposed defaults:
--expose=[]: Expose a port from the container
without publishing it to your host
-P=false : Publish all exposed ports to the host interfaces
-p=[] : Publish a container's port to the host (format:
ip:hostPort:containerPort | ip::containerPort |
hostPort:containerPort)
(use 'docker port' to see the actual mapping)
--link="" : Add link to another container (name:alias)
As mentioned previously, `EXPOSE` (and
`--expose`) make a port available **in** a container
for incoming connections. The port number on the inside of the container
(where the service listens) does not need to be the same number as the
port exposed on the outside of the container (where clients connect), so
inside the container you might have an HTTP service listening on port 80
(and so you `EXPOSE 80` in the
`Dockerfile`), but outside the container the port
might be 42800.
To help a new client container reach the server container's internal
port `--expose`d by the operator or
`EXPOSE`d by the developer, the operator has three
choices: start the server container with `-P` or
`-p`, or start the client container with
`--link`.
If the operator uses `-P` or `-p`
then Docker will make the exposed port accessible on the host
and the ports will be available to any client that can reach the host.
To find the mapping between the host ports and the exposed ports, use
`docker port`.
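For example (the image name and ports here are only illustrative):

    # publish container port 80 on host port 42800
    $ sudo docker run -d -p 42800:80 mywebimage
    # ask Docker which host port was mapped to container port 80
    $ sudo docker port <container_id> 80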
If the operator uses `--link` when starting the new
client container, then the client container can access the exposed port
via a private networking interface. Docker will set some environment
variables in the client container to help indicate which interface and
port to use.
### [ENV (Environment Variables)](#id15)
The operator can **set any environment variable** in the container by
using one or more `-e` flags, even overriding those
already defined by the developer with a Dockerfile `ENV`:
$ docker run -e "deep=purple" --rm ubuntu /bin/bash -c export
declare -x HOME="/"
declare -x HOSTNAME="85bc26a0e200"
declare -x OLDPWD
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/"
declare -x SHLVL="1"
declare -x container="lxc"
declare -x deep="purple"
Similarly, the operator can set the **hostname** with `-h`.
`--link name:alias` also sets environment variables,
using the *alias* string to define environment variables within the
container that give the IP and PORT information for connecting to the
service container. Let's imagine we have a container running Redis:
# Start the service container, named redis-name
$ docker run -d --name redis-name dockerfiles/redis
4241164edf6f5aca5b0e9e4c9eccd899b0b8080c64c0cd26efe02166c73208f3
# The redis-name container exposed port 6379
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4241164edf6f dockerfiles/redis:latest /redis-stable/src/re 5 seconds ago Up 4 seconds 6379/tcp redis-name
# Note that there are no public ports exposed since we didn't use -p or -P
$ docker port 4241164edf6f 6379
2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f
Yet we can get information about the Redis container's exposed ports
with `--link`. Choose an alias that will form a
valid environment variable!
$ docker run --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c export
declare -x HOME="/"
declare -x HOSTNAME="acda7f7b1cdc"
declare -x OLDPWD
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/"
declare -x REDIS_ALIAS_NAME="/distracted_wright/redis"
declare -x REDIS_ALIAS_PORT="tcp://172.17.0.32:6379"
declare -x REDIS_ALIAS_PORT_6379_TCP="tcp://172.17.0.32:6379"
declare -x REDIS_ALIAS_PORT_6379_TCP_ADDR="172.17.0.32"
declare -x REDIS_ALIAS_PORT_6379_TCP_PORT="6379"
declare -x REDIS_ALIAS_PORT_6379_TCP_PROTO="tcp"
declare -x SHLVL="1"
declare -x container="lxc"
And we can use that information to connect from another container as a
client:
$ docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'
172.17.0.32:6379>
### [VOLUME (Shared Filesystems)](#id16)
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
If "container-dir" is missing, then docker creates a new volume.
--volumes-from="": Mount all volumes from the given container(s)
The volumes commands are complex enough to have their own documentation
in section [*Share Directories via
Volumes*](../../use/working_with_volumes/#volume-def). A developer can
define one or more `VOLUME`s associated with an
image, but only the operator can give access from one container to
another (or from a container to a volume mounted on the host).
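For example (the host directory and container name are only illustrative):

    # bind-mount a host directory read-only and create a fresh volume at /data
    $ sudo docker run -v /src/webapp:/opt/webapp:ro -v /data -i -t ubuntu /bin/bash
    # mount every volume defined by the container named "web"
    $ sudo docker run --volumes-from web -i -t ubuntu /bin/bash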
### [USER](#id17)
The default user within a container is `root` (id =
0), but if the developer created additional users, those are accessible
too. The developer can set a default user to run the first process with
the `Dockerfile USER` command, but the operator can
override it:
-u="": Username or UID
### [WORKDIR](#id18)
The default working directory for running binaries within a container is
the root directory (`/`), but the developer can set
a different default with the `Dockerfile WORKDIR`
command. The operator can override this with:
-w="": Working directory inside the container
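For example:

    # override the image's default working directory for this run
    $ sudo docker run -w /tmp ubuntu pwd
    /tmp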