From 28e2b92b58dfea848e491ef4329ea26b560166fa Mon Sep 17 00:00:00 2001 From: Sebastiaan van Stijn Date: Tue, 17 May 2022 11:00:51 +0200 Subject: [PATCH] docs: remove documentation about deprecated cluster-store This removes documentation related to legacy overlay networks using an external k/v store. Signed-off-by: Sebastiaan van Stijn --- docs/reference/commandline/dockerd.md | 52 -------- docs/reference/commandline/network_create.md | 79 ++++++++---- man/dockerd.8.md | 37 ------ man/src/network/create.md | 124 ++++++++++++------- 4 files changed, 132 insertions(+), 160 deletions(-) diff --git a/docs/reference/commandline/dockerd.md b/docs/reference/commandline/dockerd.md index 5261266ae2..145eca7393 100644 --- a/docs/reference/commandline/dockerd.md +++ b/docs/reference/commandline/dockerd.md @@ -1052,41 +1052,6 @@ Be careful setting `nproc` with the `ulimit` flag as `nproc` is designed by Linu set the maximum number of processes available to a user, not to a container. For details please check the [run](run.md) reference. -### Node discovery - -The `--cluster-advertise` option specifies the `host:port` or `interface:port` -combination that this particular daemon instance should use when advertising -itself to the cluster. The daemon is reached by remote hosts through this value. -If you specify an interface, make sure it includes the IP address of the actual -Docker host. For Engine installation created through `docker-machine`, the -interface is typically `eth1`. - -The daemon uses [libkv](https://github.com/docker/libkv/) to advertise -the node within the cluster. Some key-value backends support mutual -TLS. To configure the client TLS settings used by the daemon can be configured -using the `--cluster-store-opt` flag, specifying the paths to PEM encoded -files. 
For example: - -```console -$ sudo dockerd \ - --cluster-advertise 192.168.1.2:2376 \ - --cluster-store etcd://192.168.1.2:2379 \ - --cluster-store-opt kv.cacertfile=/path/to/ca.pem \ - --cluster-store-opt kv.certfile=/path/to/cert.pem \ - --cluster-store-opt kv.keyfile=/path/to/key.pem -``` - -The currently supported cluster store options are: - -| Option | Description | -|:----------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `discovery.heartbeat` | Specifies the heartbeat timer in seconds which is used by the daemon as a `keepalive` mechanism to make sure discovery module treats the node as alive in the cluster. If not configured, the default value is 20 seconds. | -| `discovery.ttl` | Specifies the TTL (time-to-live) in seconds which is used by the discovery module to timeout a node if a valid heartbeat is not received within the configured ttl value. If not configured, the default value is 60 seconds. | -| `kv.cacertfile` | Specifies the path to a local file with PEM encoded CA certificates to trust. | -| `kv.certfile` | Specifies the path to a local file with a PEM encoded certificate. This certificate is used as the client cert for communication with the Key/Value store. | -| `kv.keyfile` | Specifies the path to a local file with a PEM encoded private key. This private key is used as the client key for communication with the Key/Value store. | -| `kv.path` | Specifies the path in the Key/Value store. If not configured, the default value is 'docker/nodes'. 
| - ### Access authorization Docker's access authorization can be extended by authorization plugins that your @@ -1274,9 +1239,6 @@ This is a full example of the allowed configuration options on Linux: "bip": "", "bridge": "", "cgroup-parent": "", - "cluster-advertise": "", - "cluster-store": "", - "cluster-store-opts": {}, "containerd": "/run/containerd/containerd.sock", "containerd-namespace": "docker", "containerd-plugin-namespace": "docker-plugins", @@ -1402,8 +1364,6 @@ This is a full example of the allowed configuration options on Windows: "allow-nondistributable-artifacts": [], "authorization-plugins": [], "bridge": "", - "cluster-advertise": "", - "cluster-store": "", "containerd": "\\\\.\\pipe\\containerd-containerd", "containerd-namespace": "docker", "containerd-plugin-namespace": "docker-plugins", @@ -1471,9 +1431,6 @@ if there are conflicts, but it won't stop execution. The list of currently supported options that can be reconfigured is this: - `debug`: it changes the daemon to debug mode when set to true. -- `cluster-store`: it reloads the discovery store with the new address. -- `cluster-store-opts`: it uses the new options to reload the discovery store. -- `cluster-advertise`: it modifies the address advertised after reloading. - `labels`: it replaces the daemon labels with a new set of labels. - `live-restore`: Enables [keeping containers alive during daemon downtime](https://docs.docker.com/config/containers/live-restore/). - `max-concurrent-downloads`: it updates the max concurrent downloads for each pull. @@ -1491,15 +1448,6 @@ The list of currently supported options that can be reconfigured is this: - `shutdown-timeout`: it replaces the daemon's existing configuration timeout with a new timeout for shutting down all containers. - `features`: it explicitly enables or disables specific features. 
-Updating and reloading the cluster configurations such as `--cluster-store`, -`--cluster-advertise` and `--cluster-store-opts` will take effect only if -these configurations were not previously configured. If `--cluster-store` -has been provided in flags and `cluster-advertise` not, `cluster-advertise` -can be added in the configuration file without accompanied by `--cluster-store`. -Configuration reload will log a warning message if it detects a change in -previously configured cluster configurations. - - ### Run multiple daemons > **Note:** diff --git a/docs/reference/commandline/network_create.md b/docs/reference/commandline/network_create.md index 48d72eff90..1b68afdb99 100644 --- a/docs/reference/commandline/network_create.md +++ b/docs/reference/commandline/network_create.md @@ -51,34 +51,24 @@ $ docker network create -d bridge my-bridge-network Bridge networks are isolated networks on a single Engine installation. If you want to create a network that spans multiple Docker hosts each running an -Engine, you must create an `overlay` network. Unlike `bridge` networks, overlay -networks require some pre-existing conditions before you can create one. These -conditions are: +Engine, you must enable Swarm mode, and create an `overlay` network. To read more +about overlay networks with Swarm mode, see ["*use overlay networks*"](https://docs.docker.com/network/overlay/). -* Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores. -* A cluster of hosts with connectivity to the key-value store. -* A properly configured Engine `daemon` on each host in the cluster. - -The `dockerd` options that support the `overlay` network are: - -* `--cluster-store` -* `--cluster-store-opt` -* `--cluster-advertise` - -To read more about these options and how to configure them, see ["*Get started -with multi-host network*"](https://docs.docker.com/engine/userguide/networking/get-started-overlay). 
-
-While not required, it is a good idea to install Docker Swarm to
-manage the cluster that makes up your network. Swarm provides sophisticated
-discovery and server management tools that can assist your implementation.
-
-Once you have prepared the `overlay` network prerequisites you simply choose a
-Docker host in the cluster and issue the following to create the network:
+Once you have enabled swarm mode, you can create a swarm-scoped overlay network:
 
 ```console
-$ docker network create -d overlay my-multihost-network
+$ docker network create --scope=swarm --attachable -d overlay my-multihost-network
 ```
 
+By default, swarm-scoped networks do not allow manually started containers to
+be attached. This restriction is added to prevent someone who has access to
+a non-manager node in the swarm cluster from running a container that is able
+to access the network stack of a swarm service.
+
+The `--attachable` option used in the example above disables this restriction,
+and allows for both swarm services and manually started containers to attach to
+the overlay network.
+
 Network names must be unique. The Docker daemon attempts to identify naming
 conflicts but this is not guaranteed. It is the user's responsibility to avoid
 name conflicts.
@@ -121,9 +111,9 @@ disconnect` command.
 ### Specify advanced options
 
 When you create a network, Engine creates a non-overlapping subnetwork for the
-network by default. This subnetwork is not a subdivision of an existing
-network. It is purely for ip-addressing purposes. You can override this default
-and specify subnetwork values directly using the `--subnet` option. On a
+network by default. This subnetwork is not a subdivision of an existing network.
+It is purely for ip-addressing purposes. You can override this default and
+specify subnetwork values directly using the `--subnet` option.
On a `bridge` network you can only create a single subnet: ```console @@ -221,6 +211,43 @@ $ docker network create -d overlay \ my-ingress-network ``` +### Run services on predefined networks + +You can create services on the predefined docker networks `bridge` and `host`. + +```console +$ docker service create --name my-service \ + --network host \ + --replicas 2 \ + busybox top +``` + +### Swarm networks with local scope drivers + +You can create a swarm network with local scope network drivers. You do so +by promoting the network scope to `swarm` during the creation of the network. +You will then be able to use this network when creating services. + +```console +$ docker network create -d bridge \ + --scope swarm \ + --attachable \ + swarm-network +``` + +For network drivers which provide connectivity across hosts (ex. macvlan), if +node specific configurations are needed in order to plumb the network on each +host, you will supply that configuration via a configuration only network. +When you create the swarm scoped network, you will then specify the name of the +network which contains the configuration. 
+ + +```console +node1$ docker network create --config-only --subnet 192.168.100.0/24 --gateway 192.168.100.115 mv-config +node2$ docker network create --config-only --subnet 192.168.200.0/24 --gateway 192.168.200.202 mv-config +node1$ docker network create -d macvlan --scope swarm --config-from mv-config --attachable swarm-network +``` + ## Related commands * [network inspect](network_inspect.md) diff --git a/man/dockerd.8.md b/man/dockerd.8.md index 01a88803af..89bd69033c 100644 --- a/man/dockerd.8.md +++ b/man/dockerd.8.md @@ -12,9 +12,6 @@ dockerd - Enable daemon mode [**-b**|**--bridge**[=*BRIDGE*]] [**--bip**[=*BIP*]] [**--cgroup-parent**[=*[]*]] -[**--cluster-store**[=*[]*]] -[**--cluster-advertise**[=*[]*]] -[**--cluster-store-opt**[=*map[]*]] [**--config-file**[=*/etc/docker/daemon.json*]] [**--containerd**[=*SOCKET-PATH*]] [**--data-root**[=*/var/lib/docker*]] @@ -154,17 +151,6 @@ $ sudo dockerd --add-runtime runc=runc --add-runtime custom=/usr/local/bin/my-ru Set parent cgroup for all containers. Default is "/docker" for fs cgroup driver and "system.slice" for systemd cgroup driver. -**--cluster-store**="" - URL of the distributed storage backend - -**--cluster-advertise**="" - Specifies the 'host:port' or `interface:port` combination that this - particular daemon instance should use when advertising itself to the cluster. - The daemon is reached through this value. - -**--cluster-store-opt**="" - Specifies options for the Key/Value store. - **--config-file**="/etc/docker/daemon.json" Specifies the JSON file path to load the configuration from. @@ -780,29 +766,6 @@ cannot be smaller than **btrfs.min_space**. Example use: `docker daemon -s btrfs --storage-opt btrfs.min_space=10G` -# CLUSTER STORE OPTIONS - -The daemon uses libkv to advertise the node within the cluster. 
Some Key/Value -backends support mutual TLS, and the client TLS settings used by the daemon can -be configured using the **--cluster-store-opt** flag, specifying the paths to -PEM encoded files. - -#### kv.cacertfile - -Specifies the path to a local file with PEM encoded CA certificates to trust - -#### kv.certfile - -Specifies the path to a local file with a PEM encoded certificate. This -certificate is used as the client cert for communication with the Key/Value -store. - -#### kv.keyfile - -Specifies the path to a local file with a PEM encoded private key. This -private key is used as the client key for communication with the Key/Value -store. - # Access authorization Docker's access authorization can be extended by authorization plugins that diff --git a/man/src/network/create.md b/man/src/network/create.md index 81dd68ee4b..05ec39bfb8 100644 --- a/man/src/network/create.md +++ b/man/src/network/create.md @@ -5,7 +5,7 @@ network driver you can specify that `DRIVER` here also. If you don't specify the When you install Docker Engine it creates a `bridge` network automatically. This network corresponds to the `docker0` bridge that Engine has traditionally relied on. When you launch a new container with `docker run` it automatically connects to -this bridge network. You cannot remove this default bridge network but you can +this bridge network. You cannot remove this default bridge network, but you can create new ones using the `network create` command. ```console @@ -14,50 +14,51 @@ $ docker network create -d bridge my-bridge-network Bridge networks are isolated networks on a single Engine installation. If you want to create a network that spans multiple Docker hosts each running an -Engine, you must create an `overlay` network. Unlike `bridge` networks overlay -networks require some pre-existing conditions before you can create one. These -conditions are: +Engine, you must enable Swarm mode, and create an `overlay` network. 
To read more
+about overlay networks with Swarm mode, see ["*use overlay networks*"](https://docs.docker.com/network/overlay/).
 
-* Access to a key-value store. Engine supports Consul, Etcd, and Zookeeper (Distributed store) key-value stores.
-* A cluster of hosts with connectivity to the key-value store.
-* A properly configured Engine `daemon` on each host in the cluster.
-
-The `dockerd` options that support the `overlay` network are:
-
-* `--cluster-store`
-* `--cluster-store-opt`
-* `--cluster-advertise`
-
-To read more about these options and how to configure them, see ["*Get started
-with multi-host
-network*"](https://docs.docker.com/engine/userguide/networking/get-started-overlay/).
-
-It is also a good idea, though not required, that you install Docker Swarm on to
-manage the cluster that makes up your network. Swarm provides sophisticated
-discovery and server management that can assist your implementation.
-
-Once you have prepared the `overlay` network prerequisites you simply choose a
-Docker host in the cluster and issue the following to create the network:
+Once you have enabled swarm mode, you can create a swarm-scoped overlay network:
 
 ```console
-$ docker network create -d overlay my-multihost-network
+$ docker network create --scope=swarm --attachable -d overlay my-multihost-network
 ```
 
+By default, swarm-scoped networks do not allow manually started containers to
+be attached. This restriction is added to prevent someone who has access to
+a non-manager node in the swarm cluster from running a container that is able
+to access the network stack of a swarm service.
+
+The `--attachable` option used in the example above disables this restriction,
+and allows for both swarm services and manually started containers to attach to
+the overlay network.
+
 Network names must be unique. The Docker daemon attempts to identify naming
 conflicts but this is not guaranteed. It is the user's responsibility to avoid
 name conflicts.
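As a quick sketch of the attachable behavior described above (an illustrative transcript only; it assumes a daemon already running in swarm mode, and `standalone` is an arbitrary container name):

```console
$ docker network create --scope=swarm --attachable -d overlay my-multihost-network
$ docker run -itd --name standalone --network my-multihost-network busybox top
```

Without `--attachable`, the second command would fail, because only swarm services may attach to a swarm-scoped network.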
+### Overlay network limitations
+
+When you create networks using the default VIP-based endpoint mode, you should
+create overlay networks with `/24` blocks (the default), which limit you to 256
+IP addresses. This recommendation addresses
+[limitations with swarm mode](https://github.com/moby/moby/issues/30820). If you
+need more than 256 IP addresses, do not increase the IP block size. You can
+either use `dnsrr` endpoint mode with an external load balancer, or use multiple
+smaller overlay networks. See
+[Configure service discovery](https://docs.docker.com/engine/swarm/networking/#configure-service-discovery)
+for more information about different endpoint modes.
+
 ## Connect containers
 
-When you start a container use the `--network` flag to connect it to a network.
-This adds the `busybox` container to the `mynet` network.
+When you start a container, use the `--network` flag to connect it to a network.
+This example adds the `busybox` container to the `mynet` network:
 
 ```console
 $ docker run -itd --network=mynet busybox
 ```
 
 If you want to add a container to a network after the container is already
-running use the `docker network connect` subcommand.
+running, use the `docker network connect` subcommand.
 
 You can connect multiple containers to the same network. Once connected, the
 containers can communicate using only another container's IP address or name.
@@ -68,7 +69,7 @@ Engines can also communicate in this way.
 You can disconnect a container from a network using the `docker network
 disconnect` command.
 
-## Specifying advanced options
+### Specify advanced options
 
 When you create a network, Engine creates a non-overlapping subnetwork for the
 network by default. This subnetwork is not a subdivision of an existing network.
 It is purely for ip-addressing purposes. You can override this default and
 specify subnetwork values directly using the `--subnet` option.
On a `bridge` network you can only create a single subnet:
 
 ```console
-$ docker network create -d bridge --subnet=192.168.0.0/16 br0
+$ docker network create --driver=bridge --subnet=192.168.0.0/16 br0
 ```
 
 Additionally, you also specify the `--gateway` `--ip-range` and `--aux-address`
@@ -94,23 +95,59 @@ $ docker network create \
 
 If you omit the `--gateway` flag the Engine selects one for you from inside a
 preferred pool. For `overlay` networks and for network driver plugins that
-support it you can create multiple subnetworks.
+support it you can create multiple subnetworks. This example uses two `/25`
+subnet masks to adhere to the current guidance of not having more than 256 IPs
+in a single overlay network. Each of the subnetworks has 126 usable addresses.
 
 ```console
 $ docker network create -d overlay \
-  --subnet=192.168.0.0/16 \
-  --subnet=192.170.0.0/16 \
-  --gateway=192.168.0.100 \
-  --gateway=192.170.0.100 \
-  --ip-range=192.168.1.0/24 \
-  --aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \
-  --aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \
+  --subnet=192.168.10.0/25 \
+  --subnet=192.168.20.0/25 \
+  --gateway=192.168.10.100 \
+  --gateway=192.168.20.100 \
+  --aux-address="my-router=192.168.10.5" --aux-address="my-switch=192.168.10.6" \
+  --aux-address="my-printer=192.168.20.5" --aux-address="my-nas=192.168.20.6" \
   my-multihost-network
 ```
 
 Be sure that your subnetworks do not overlap. If they do, the network create
 fails and Engine returns an error.
 
+### Bridge driver options
+
+When creating a custom network, the default network driver (i.e. `bridge`) has
+additional options that can be passed.
The following are those options and the +equivalent docker daemon flags used for docker0 bridge: + +| Option | Equivalent | Description | +|--------------------------------------------------|-------------|-------------------------------------------------------| +| `com.docker.network.bridge.name` | - | Bridge name to be used when creating the Linux bridge | +| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading | +| `com.docker.network.bridge.enable_icc` | `--icc` | Enable or Disable Inter Container Connectivity | +| `com.docker.network.bridge.host_binding_ipv4` | `--ip` | Default IP when binding container ports | +| `com.docker.network.driver.mtu` | `--mtu` | Set the containers network MTU | +| `com.docker.network.container_iface_prefix` | - | Set a custom prefix for container interfaces | + +The following arguments can be passed to `docker network create` for any +network driver, again with their approximate equivalents to `docker daemon`. + +| Argument | Equivalent | Description | +|--------------|----------------|--------------------------------------------| +| `--gateway` | - | IPv4 or IPv6 Gateway for the master subnet | +| `--ip-range` | `--fixed-cidr` | Allocate IPs from a range | +| `--internal` | - | Restrict external access to the network | +| `--ipv6` | `--ipv6` | Enable IPv6 networking | +| `--subnet` | `--bip` | Subnet for network | + +For example, let's use `-o` or `--opt` options to specify an IP address binding +when publishing ports: + +```console +$ docker network create \ + -o "com.docker.network.bridge.host_binding_ipv4"="172.19.0.1" \ + simple-network +``` + ### Network internal mode By default, when you connect a container to an `overlay` network, Docker also @@ -130,7 +167,7 @@ is also available when creating the ingress network, besides the `--attachable` $ docker network create -d overlay \ --subnet=10.11.0.0/16 \ --ingress \ - --opt com.docker.network.mtu=9216 \ + --opt 
com.docker.network.driver.mtu=9216 \ --opt encrypted=true \ my-ingress-network ``` @@ -149,8 +186,8 @@ $ docker service create --name my-service \ ### Swarm networks with local scope drivers You can create a swarm network with local scope network drivers. You do so -by promoting the network scope to `swarm` during the creation of the network. -You will then be able to use this network when creating services. +by promoting the network scope to `swarm` during the creation of the network. +You will then be able to use this network when creating services. ```console $ docker network create -d bridge \ @@ -162,7 +199,7 @@ $ docker network create -d bridge \ For network drivers which provide connectivity across hosts (ex. macvlan), if node specific configurations are needed in order to plumb the network on each host, you will supply that configuration via a configuration only network. -When you create the swarm scoped network, you will then specify the name of the +When you create the swarm scoped network, you will then specify the name of the network which contains the configuration. @@ -172,6 +209,3 @@ node2$ docker network create --config-only --subnet 192.168.200.0/24 --gateway 1 node1$ docker network create -d macvlan --scope swarm --config-from mv-config --attachable swarm-network ``` - - -
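The subnet sizing guidance above (256 IPs for a `/24` block, 126 usable addresses per `/25`) follows from basic IPv4 subnet arithmetic. A small bash sketch of that arithmetic (the `usable` helper is illustrative shell, not a Docker command):

```shell
#!/usr/bin/env bash
# Usable host addresses in an IPv4 subnet: 2^(32 - prefix) total addresses,
# minus the network and broadcast addresses.
usable() { echo $(( (1 << (32 - $1)) - 2 )); }

usable 24   # /24, the recommended default block: 254 usable of 256 total
usable 25   # each /25 subnet in the overlay example: 126 usable
```

This is why splitting one `/24` into two `/25` subnets yields 126 usable addresses in each, rather than 128.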