This implements a special "RESET" value that can be used to reset the
list of capabilities to add/drop when updating a service.
Given the following service;
| CapDrop | CapAdd |
| -------------- | ------------- |
| CAP_SOME_CAP | |
When updating the service, and applying `--cap-drop RESET`, the "drop" list
is reset to its default:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | |
When updating the service, and applying `--cap-drop RESET`, combined with
`--cap-add CAP_SOME_CAP` and `--cap-drop CAP_SOME_OTHER_CAP`:
| CapDrop            | CapAdd        |
| ------------------ | ------------- |
| CAP_SOME_OTHER_CAP | CAP_SOME_CAP  |
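On the command line this could look as follows (a sketch; the `web` service name
is illustrative, and it assumes the service spec stores these values in the
`CapabilityAdd` / `CapabilityDrop` lists):

```bash
# "web" currently has CAP_SOME_CAP in its "drop" list
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.CapabilityDrop}}' web

# reset the "drop" list, then drop/add capabilities in the same update
docker service update \
  --cap-drop RESET \
  --cap-drop CAP_SOME_OTHER_CAP \
  --cap-add CAP_SOME_CAP \
  web

# the "drop" list now only contains CAP_SOME_OTHER_CAP; the "add" list contains CAP_SOME_CAP
docker service inspect \
  --format '{{json .Spec.TaskTemplate.ContainerSpec.CapabilityDrop}} {{json .Spec.TaskTemplate.ContainerSpec.CapabilityAdd}}' \
  web
```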
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Adding/removing capabilities when updating a service is considered a tri-state;
- if the capability was previously "dropped", then remove it from "CapabilityDrop",
but do NOT add it to "CapabilityAdd". However, if the capability was not yet in
the service's "CapabilityDrop", then simply add it to the service's "CapabilityAdd"
- likewise, if the capability was previously "added", then remove it from
"CapabilityAdd", but do NOT add it to "CapabilityDrop". If the capability was
not yet in the service's "CapabilityAdd", then simply add it to the service's
"CapabilityDrop".
In other words, given a service with the following:
| CapDrop | CapAdd |
| -------------- | ------------- |
| CAP_SOME_CAP | |
When updating the service, and applying `--cap-add CAP_SOME_CAP`, the previously
dropped capability is removed:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | |
When updating the service a second time, applying `--cap-add CAP_SOME_CAP`, the
capability is now added:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | CAP_SOME_CAP |
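On the command line, this two-step behavior could look like this (a sketch; the
`web` service name is illustrative):

```bash
# "web" starts out with CAP_SOME_CAP in its "drop" list

# first update: CAP_SOME_CAP is only removed from the "drop" list
docker service update --cap-add CAP_SOME_CAP web
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.CapabilityDrop}}' web
# the "drop" list is now empty, and nothing was added to the "add" list

# second update: CAP_SOME_CAP is now added to the "add" list
docker service update --cap-add CAP_SOME_CAP web
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.CapabilityAdd}}' web
# ["CAP_SOME_CAP"]
```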
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When creating and updating services, we need to avoid unneeded service churn.
The interaction of separate lists to "add" and "drop" capabilities, a special
("ALL") capability, as well as a "relaxed" format for accepted capabilities
(case-insensitive, `CAP_` prefix optional) make this rather involved.
This patch updates how we handle `--cap-add` / `--cap-drop` when _creating_ as
well as _updating_, with the following rules/assumptions applied:
- both existing capabilities (in the service spec) and new capabilities (values
passed through flags or in the compose-file) are normalized and de-duplicated before use.
- the special "ALL" capability is equivalent to "all capabilities" and taken
into account when normalizing capabilities. Combining "ALL" capabilities
and other capabilities is therefore equivalent to just specifying "ALL".
- adding capabilities takes precedence over dropping, which means that if
a capability is both set to be "dropped" and to be "added", it is removed
from the list to "drop".
- the final lists should be sorted and normalized to reduce service churn
- no validation of capabilities is handled by the client. Validation is
delegated to the daemon/server.
When deploying a service using a docker-compose file, the docker-compose file
is *mostly* handled as being "declarative". However, many of the issues outlined
above also apply to compose-files, so similar handling is applied to compose
files as well to prevent service churn.
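For example, under these rules (a sketch, using illustrative service names):

```bash
# "ALL" swallows any other capabilities, and values are case-insensitive with an
# optional "CAP_" prefix, so either of these normalizes to a single "all" entry:
docker service create --name web1 --cap-add ALL --cap-add NET_ADMIN nginx:alpine
docker service create --name web2 --cap-add all --cap-add CAP_NET_ADMIN nginx:alpine

# a capability that is both added and dropped ends up on the "add" list only;
# here the final spec drops CAP_SYS_TIME and adds CAP_NET_ADMIN
docker service create --name web3 \
  --cap-add NET_ADMIN --cap-drop NET_ADMIN --cap-drop SYS_TIME \
  nginx:alpine
```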
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The tabwriter was configured to have a min-width for columns of 20 positions.
This was quite wide, causing smaller columns to be printed with a large gap
between them.
Before:
```console
docker container stats
CONTAINER ID         NAME                 CPU %                MEM USAGE / LIMIT    MEM %                NET I/O              BLOCK I/O            PIDS
29184b3ae391         amazing_shirley      0.00%                800KiB / 1.944GiB    0.04%                1.44kB / 0B          0B / 0B              1
403c101bad56         agitated_swartz      0.15%                34.31MiB / 1.944GiB  1.72%                10.2MB / 206kB       0B / 0B              51
0dc4b7f6c6be         container2           0.00%                1.012MiB / 1.944GiB  0.05%                12.9kB / 0B          0B / 0B              5
2d99abcc6f62         container99          0.00%                972KiB / 1.944GiB    0.05%                13kB / 0B            0B / 0B              5
9f9aa90173ac         foo                  0.00%                820KiB / 1.944GiB    0.04%                13kB / 0B            0B / 0B              5

docker container ls
CONTAINER ID         IMAGE                COMMAND                  CREATED              STATUS               PORTS                NAMES
29184b3ae391         docker-cli-dev       "ash"                    4 hours ago          Up 4 hours                                amazing_shirley
403c101bad56         docker-dev:master    "hack/dind bash"         3 days ago           Up 3 days                                 agitated_swartz
0dc4b7f6c6be         nginx:alpine         "/docker-entrypoint.…"   4 days ago           Up 4 days            80/tcp               container2
2d99abcc6f62         nginx:alpine         "/docker-entrypoint.…"   4 days ago           Up 4 days            80/tcp               container99
9f9aa90173ac         nginx:alpine         "/docker-entrypoint.…"   4 days ago           Up 4 days            80/tcp               foo

docker image ls
REPOSITORY           TAG                    IMAGE ID             CREATED              SIZE
docker-cli-dev       latest                 5f603caa04aa         4 hours ago          610MB
docker-cli-native    latest                 9dd29f8d387b         4 hours ago          519MB
docker-dev           master                 8132bf7a199e         3 days ago           2.02GB
docker-dev           improve-build-errors   69e208994b3f         11 days ago          2.01GB
docker-dev           refactor-idtools       69e208994b3f         11 days ago          2.01GB
```
After:
```console
docker container stats
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O   PIDS
29184b3ae391   amazing_shirley   0.14%   5.703MiB / 1.944GiB   0.29%   1.44kB / 0B      0B / 0B     10
403c101bad56   agitated_swartz   0.15%   56.97MiB / 1.944GiB   2.86%   10.2MB / 206kB   0B / 0B     51
0dc4b7f6c6be   container2        0.00%   1016KiB / 1.944GiB    0.05%   12.9kB / 0B      0B / 0B     5
2d99abcc6f62   container99       0.00%   956KiB / 1.944GiB     0.05%   13kB / 0B        0B / 0B     5
9f9aa90173ac   foo               0.00%   980KiB / 1.944GiB     0.05%   13kB / 0B        0B / 0B     5

docker container ls
CONTAINER ID   IMAGE               COMMAND                  CREATED          STATUS          PORTS    NAMES
29184b3ae391   docker-cli-dev      "ash"                    12 minutes ago   Up 12 minutes            amazing_shirley
403c101bad56   docker-dev:master   "hack/dind bash"         3 days ago       Up 3 days                agitated_swartz
0dc4b7f6c6be   nginx:alpine        "/docker-entrypoint.…"   4 days ago       Up 4 days       80/tcp   container2
2d99abcc6f62   nginx:alpine        "/docker-entrypoint.…"   4 days ago       Up 4 days       80/tcp   container99
9f9aa90173ac   nginx:alpine        "/docker-entrypoint.…"   4 days ago       Up 4 days       80/tcp   foo

docker image ls
REPOSITORY          TAG                    IMAGE ID       CREATED       SIZE
docker-cli-dev      latest                 5f603caa04aa   4 hours ago   610MB
docker-cli-native   latest                 9dd29f8d387b   4 hours ago   519MB
docker-dev          master                 8132bf7a199e   3 days ago    2.02GB
docker-dev          improve-build-errors   69e208994b3f   11 days ago   2.01GB
docker-dev          refactor-idtools       69e208994b3f   11 days ago   2.01GB
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The vanity domain is down, and the project has moved
to a new location.
The vendor check started failing because of this:
Collecting initial packages
Download dependencies
unrecognized import path "vbom.ml/util" (https fetch: Get https://vbom.ml/util?go-get=1: dial tcp: lookup vbom.ml on 169.254.169.254:53: no such host)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When using `docker rm` / `docker container rm` with the `-f` / `--force` option, attempts to remove non-existing containers should print a warning, but should return a zero exit code ("successful").
Currently, a non-zero exit code is returned, marking the removal as "failed";
$ docker rm -fv 798c9471b695
Error: No such container: 798c9471b695
$ echo $?
1
The command should match the behavior of `rm` / `rm -f`, with the exception that
a warning is printed (instead of being silently ignored):
Running `rm` with `-f` silences output and returns a zero exit code:
touch some-file && rm -f no-such-file some-file; echo exit code: $?; ls -la
# exit code: 0
# total 0
# drwxr-xr-x 2 sebastiaan staff 64 Aug 14 12:17 .
# drwxr-xr-x 199 sebastiaan staff 6368 Aug 14 12:13 ..
mkdir some-directory && rm -rf no-such-directory some-directory; echo exit code: $?; ls -la
# exit code: 0
# total 0
# drwxr-xr-x 2 sebastiaan staff 64 Aug 14 12:17 .
# drwxr-xr-x 199 sebastiaan staff 6368 Aug 14 12:13 ..
Note that other reasons for a delete to fail should still result in a non-zero
exit code, matching the behavior of `rm`. For instance, in the example below,
the `rm` failed because directories can only be removed if the `-r` option is used;
touch some-file && mkdir some-directory && rm -f some-directory no-such-file some-file; echo exit code: $?; ls -la
# rm: some-directory: is a directory
# exit code: 1
# total 0
# drwxr-xr-x 3 sebastiaan staff 96 Aug 14 14:15 .
# drwxr-xr-x 199 sebastiaan staff 6368 Aug 14 12:13 ..
# drwxr-xr-x 2 sebastiaan staff 64 Aug 14 14:15 some-directory
This patch updates the `docker rm` / `docker container rm` command to not produce
an error when attempting to remove a missing container; instead it only prints
the error, but returns a zero (0) exit code.
With this patch applied:
docker create --name mycontainer busybox \
&& docker rm nosuchcontainer mycontainer; \
echo exit code: $?; \
docker ps -a --filter name=mycontainer
# df23cc8573f00e97d6e948b48d9ea7d75ce3b4faaab4fe1d3458d3bfa451f39d
# mycontainer
# Error: No such container: nosuchcontainer
# exit code: 0
# CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Combining `-add` and `-rm` flags on `docker service update` should make it
possible to explicitly replace existing options. The current order of
processing did not allow this, causing the `-rm` flag to remove properties
that were specified through `-add`. This behavior was inconsistent with
(for example) `--host-add` and `--host-rm`.
This patch updates the behavior to first remove properties, then
add new properties.
Note that there are still improvements to be made, e.g. to make the removal
more granular (making `--label-rm label=some-value` only remove the label
if its value matches `some-value`); these changes are left for a follow-up.
Before this change:
-----------------------------
Create a service with two env-vars
```bash
docker service create --env FOO=bar --env BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"FOO=bar",
"BAR=baz"
]
```
Update the service, with the intent to replace the value of `FOO` for a new value
```bash
docker service update --env-rm FOO --env-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"BAR=baz"
]
```
Create a service with two labels
```bash
docker service create --label FOO=bar --label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, with the intent to replace the value of `FOO` for a new value
```bash
docker service update --label-rm FOO --label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz"
}
```
Create a service with two container labels
```bash
docker service create --container-label FOO=bar --container-label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, with the intent to replace the value of `FOO` for a new value
```bash
docker service update --container-label-rm FOO --container-label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
}
```
With this patch applied:
--------------------------------
Create a service with two env-vars
```bash
docker service create --env FOO=bar --env BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"FOO=bar",
"BAR=baz"
]
```
Update the service, and replace the value of `FOO` for a new value
```bash
docker service update --env-rm FOO --env-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"BAR=baz",
"FOO=updated-foo"
]
```
Create a service with two labels
```bash
docker service create --label FOO=bar --label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, and replace the value of `FOO` for a new value
```bash
docker service update --label-rm FOO --label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "updated-foo"
}
```
Create a service with two container labels
```bash
docker service create --container-label FOO=bar --container-label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, and replace the value of `FOO` for a new value
```bash
docker service update --container-label-rm FOO --container-label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "updated-foo"
}
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When doing `docker service inspect --pretty` on services without
`TaskTemplate.Resources` or `TaskTemplate.Resources.Limits`, the command
fails. This is due to a missing check on ResourceLimitPids().
This bug was introduced by 395a6d560d
and produces the following error message:
```
Template parsing error: template: :139:10: executing "" at <.ResourceLimitPids>: error calling ResourceLimitPids: runtime error: invalid memory address or nil pointer dereference
```
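A minimal reproduction sketch (assuming a service created without any resource limits):

```bash
# no --limit-cpu / --limit-memory / --limit-pids, so the spec has no
# TaskTemplate.Resources.Limits, which triggered the error above
docker service create --name web nginx:alpine
docker service inspect --pretty web
```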
Signed-off-by: Albin Kerouanton <albin@akerouanton.name>
Both libraries provide similar functionality. We're currently using
Google Shlex in more places, so preferring that one for now, but we
could decide to switch to mattn/go-shellwords in the future if that
library is considered better (it looks to be more actively maintained,
but that may be related to it providing "more features").
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
In situations where `~/.docker/config.json` was a symlink, saving
the file would replace the symlink with a file, instead of updating
the target file location;
mkdir -p ~/.docker
touch ~/real-config.json
ln -s ~/real-config.json ~/.docker/config.json
ls -la ~/.docker/config.json
# lrwxrwxrwx 1 root root 22 Jun 23 12:34 /root/.docker/config.json -> /root/real-config.json
docker login
# Username: thajeztah
# Password:
# Login Succeeded
ls -la ~/.docker/config.json
# -rw-r--r-- 1 root root 229 Jun 23 12:36 /root/.docker/config.json
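With the symlink followed, the same sequence is expected to keep the symlink in
place and update the target file instead (a sketch of the intended behavior, not
verbatim output):

```bash
docker login
# Login Succeeded

ls -la ~/.docker/config.json
# lrwxrwxrwx 1 root root 22 Jun 23 12:34 /root/.docker/config.json -> /root/real-config.json

ls -la ~/real-config.json
# -rw------- 1 root root 229 Jun 23 12:36 /root/real-config.json
```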
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The wrapping made the code harder to read (and in some cases distracted
from the actual code flow).
Some of these functions take too many arguments; instead of hiding that,
it is probably better to make it apparent that something needs to be done
(and fix it :-)).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
- change `validateResolveImageFlag()` to only perform _validation_,
and not combine it with modifying the option.
- use a `switch` instead of `if` in `validateResolveImageFlag()`
- `deployServices()`: break up some `switch` cases to make the logic
easier to read and understand.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
- TestParseRunAttach: use subtests to reduce cyclomatic complexity
- TestParseRunWithInvalidArgs: use subtests, and check if the expected
error is returned.
- Removed parseMustError() as it was mostly redundant
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
These tests failed when running natively on macOS;
unknown server OS: darwin
Skipping them, like we do on Windows
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change, a warning would be printed if the `~/.docker/config.json`
file was empty:
mkdir -p ~/.docker && touch ~/.docker/config.json
docker pull busybox
WARNING: Error loading config file: /root/.docker/config.json: EOF
Using default tag: latest
....
Given that we also accept an empty "JSON" file (`{}`), it should be
okay to ignore an empty file, as it's effectively a configuration file
with no custom options set.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
I'm not sure if this fixes anything; however, I have seen some weird
behavior on Windows where temp config files are left around and there
don't seem to be any errors reported.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Previously, if a registry's AuthInfo was not present in the CLI config file,
docker logout could not be used to ask the credential helper to forget about it.
This causes problems for people working with multiple alternative config files,
and for cases like Docker Desktop with WSL 2, which uses the same win32
credential helper as the Windows CLI but a different config file, leading to
bugs where you cannot log out from a registry in WSL 2 if you logged in from
Windows, and vice-versa.
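For example (a sketch; the registry name is hypothetical):

```bash
# log in from the Windows CLI; credentials end up in the shared win32 credential helper
docker login registry.example.com

# from WSL 2 (different config.json, same credential helper): with this change,
# logout asks the helper to forget the credentials even though the registry is
# not listed in this config file
docker logout registry.example.com
```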
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
There's no need to perform an `os.Stat()` first, because
`os.Open()` also returns the same errors if the file does
not exist, or couldn't be opened for other reasons.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This prevents inconsistent errors when using a symlink, or when renaming
the binary;
Before this change;
ln -s $(which docker) toto
./toto rune
docker: 'rune' is not a docker command.
./toto run daslkjadslkjdaslkj
Unable to find image 'adslkjadslakdsj:latest' locally
./toto: Error response from daemon: pull access denied for adslkjadslakdsj, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
After this change:
ln -s $(which docker) toto
./toto rune
docker: 'rune' is not a docker command.
./toto run daslkjadslkjdaslkj
Unable to find image 'adslkjadslakdsj:latest' locally
docker: Error response from daemon: pull access denied for adslkjadslakdsj, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Previously we only set the platform when performing a pull, which is only
initiated if the pull policy is set to "always", or if the image reference
does not exist in the daemon.
The daemon now supports specifying the desired platform on container create,
so it can validate that the image reference matches the platform you expect
to get.
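For example (a sketch; image and platform are illustrative), the requested
platform is now sent to the daemon at create time, not only when a pull happens:

```bash
# the daemon can validate that the resolved image matches the requested platform
docker create --platform linux/amd64 --name amd64-test alpine
docker run --rm --platform linux/arm64 alpine uname -m
```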
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This is not currently used by the CLI, but can be used by
docker compose to bring this feature to parity with the
compose v2.4 schema.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This adds the currently selected "docker context" to the output
of "docker version", which allows users to see which context
is selected to produce the version output, and can be used (for example)
to set the prompt to the currently selected context:
(in `~/.bashrc`):
```bash
function docker_context_prompt() {
PS1="context: $(docker version --format='{{.Client.Context}}')> "
}
PROMPT_COMMAND=docker_context_prompt
```
After reloading the `~/.bashrc`, the prompt now shows the currently selected
`docker context`:
```bash
$ source ~/.bashrc
context: default> docker context create --docker host=unix:///var/run/docker.sock my-context
my-context
Successfully created context "my-context"
context: default> docker context use my-context
my-context
Current context is now "my-context"
context: my-context> docker context use default
default
Current context is now "default"
context: default>
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This flag type was not yet merged upstream, so instead of
using a fork of spf13/pflag, define the type locally, so that
we can vendor the upstream package again.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
* Added two new modes accepted by the `--mode` flag:
  * `replicated-job` creates a replicated job
  * `global-job` creates a global job
* When using `replicated-job` mode, the `--replicas` flag sets the
  `TotalCompletions` parameter of the job. This is the total number of
  tasks that will run.
* Added a new flag, `--max-concurrent`, for use with `replicated-job`
  mode. This flag sets the `MaxConcurrent` parameter of the job, which
  is the maximum number of replicas the job will run simultaneously
  (see the example after this list).
* When using `replicated-job` or `global-job` mode, using any of the
  update parameter flags results in an error, as jobs cannot be
  updated in the traditional sense.
* Updated the `docker service ls` UI to include the completion status
  (completed vs. total tasks) if the service is a job.
* Updated the progress bars UI for service creation and update to
  support jobs. For jobs, a bar is displayed covering the overall
  progress of the job (the number of tasks completed over the total
  number of tasks to complete).
* Added documentation explaining the use of the new flags, and of jobs
  in general.
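For example (a sketch, with an illustrative image), a replicated job that runs
10 tasks in total, at most 2 at a time:

```bash
docker service create \
  --name my-job \
  --mode replicated-job \
  --replicas 10 \
  --max-concurrent 2 \
  alpine sh -c 'echo "task done"'

# for jobs, the output includes the completion status (completed vs. total tasks)
docker service ls
```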
Signed-off-by: Drew Erny <derny@mirantis.com>
These packages are now living in their own repository. Updating
docker/docker to replace the dependencies.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This patch changes the package to lazily obtain the user's home-
directory on first use, instead of when initializing the package.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
67ebcd6dcf added an exception for
the "host-gateway" magic value to the validation rules, but didn't
add this value to any of the tests.
This patch adds the magic value to tests, to verify the validation
is skipped for this magic value.
Note that validation on the client side is "optional" and mostly
done to provide a more user-friendly error message for regular
values (IP-addresses).
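For reference, the magic value is used with `--add-host` like this (a sketch):

```bash
# "host-gateway" is resolved by the daemon, so client-side IP-address validation
# must not reject it
docker run --rm --add-host host.docker.internal:host-gateway alpine \
  ping -c1 host.docker.internal
```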
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
- Perform feature detection when actually needed, instead of during
initialization
- Version negotiation is performed either when making an API request,
or when (e.g.) running `docker help` (to hide unsupported features)
- Use a 2-second timeout when 'pinging' the daemon; this should be
sufficient for most cases, and if feature detection fails, the
daemon will still perform validation (and produce an error if needed)
- context.WithTimeout doesn't currently work with ssh connections (connhelper),
so we're only applying this timeout for tcp:// connections, otherwise
keep the old behavior.
Before this change:
time sh -c 'DOCKER_HOST=tcp://42.42.42.41:4242 docker help &> /dev/null'
real 0m32.919s
user 0m0.370s
sys 0m0.227s
time sh -c 'DOCKER_HOST=tcp://42.42.42.41:4242 docker context ls &> /dev/null'
real 0m32.072s
user 0m0.029s
sys 0m0.023s
After this change:
time sh -c 'DOCKER_HOST=tcp://42.42.42.41:4242 docker help &> /dev/null'
real 0m 2.28s
user 0m 0.03s
sys 0m 0.03s
time sh -c 'DOCKER_HOST=tcp://42.42.42.41:4242 docker context ls &> /dev/null'
real 0m 0.13s
user 0m 0.02s
sys 0m 0.02s
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `docker network prune` command removes unused custom networks,
but built-in networks won't be removed. This patch updates the
message to mention that it's only removing custom networks.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The trust tests were not resetting the environment after they
ran, which could cause subsequent tests to fail.
While at it, I also updated some other tests to use gotest.tools.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change, this would cause a panic:
docker run -it --rm -v 1:/1 alpine
panic: runtime error: index out of range
goroutine 1 [running]:
github.com/docker/cli/cli/compose/loader.isFilePath(0xc42027e058, 0x1, 0x557dcb978c20)
...
After this change, a correct error is returned:
docker run -it --rm -v 1:/1 alpine
docker: Error response from daemon: create 1: volume name is too short, names should be at least two alphanumeric characters.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When printing services' tasks with the `docker service ps` command, tasks are
grouped only by task slot. This leads to interleaving of tasks from different
services when `docker service ps` is called with multiple services. Besides
this, global services do not have slots at all, so their tasks are neither
grouped nor properly indented with `\_`.
With this patch all tasks are grouped by service ID, slot, and node ID (the
latter being relevant only for global services), which fixes issue 533.
Before this patch:
```console
docker service ps a b c
ID             NAME                             IMAGE          NODE             DESIRED STATE   CURRENT STATE                     ERROR   PORTS
xbzm6ed776yw   c.j1afavbqqhr21jvnid3nnfoyt      nginx:alpine   docker-desktop   Running         Running 5 seconds ago
4mcsovp8ckwn   \_ c.j1afavbqqhr21jvnid3nnfoyt   nginx:alpine   docker-desktop   Shutdown        Shutdown 6 seconds ago
qpcgdsx1r21a   b.1                              nginx:alpine   docker-desktop   Running         Running 2 seconds ago
kfjo1hly92l4   a.1                              nginx:alpine   docker-desktop   Running         Running 5 seconds ago
pubrerosvsw5   b.1                              nginx:alpine   docker-desktop   Shutdown        Shutdown 3 seconds ago
fu08gfi8tfyv   a.1                              nginx:alpine   docker-desktop   Shutdown        Shutdown 7 seconds ago
pu6qmgyoibq4   b.2                              nginx:alpine   docker-desktop   Running         Ready 1 second ago
tz1n4hjne6pk   \_ b.2                           nginx:alpine   docker-desktop   Shutdown        Shutdown less than a second ago
xq8dogqcbxd2   a.2                              nginx:alpine   docker-desktop   Running         Running 44 seconds ago
rm40lofzed0h   a.3                              nginx:alpine   docker-desktop   Running         Starting less than a second ago
sqqj2n9fpi82   b.3                              nginx:alpine   docker-desktop   Running         Running 5 seconds ago
prv3gymkvqk6   \_ b.3                           nginx:alpine   docker-desktop   Shutdown        Shutdown 6 seconds ago
qn7c7jmjuo76   a.3                              nginx:alpine   docker-desktop   Shutdown        Shutdown less than a second ago
wi9330mbabpg   a.4                              nginx:alpine   docker-desktop   Running         Running 2 seconds ago
p5oy6h7nkvc3   \_ a.4                           nginx:alpine   docker-desktop   Shutdown        Shutdown 3 seconds ago
```
After this patch:
```console
docker service ps a b c
ID             NAME                             IMAGE          NODE             DESIRED STATE   CURRENT STATE                     ERROR   PORTS
kfjo1hly92l4   a.1                              nginx:alpine   docker-desktop   Running         Running 32 seconds ago
fu08gfi8tfyv   \_ a.1                           nginx:alpine   docker-desktop   Shutdown        Shutdown 34 seconds ago
3pam0limnn24   a.2                              nginx:alpine   docker-desktop   Running         Running 23 seconds ago
xq8dogqcbxd2   \_ a.2                           nginx:alpine   docker-desktop   Shutdown        Shutdown 24 seconds ago
rm40lofzed0h   a.3                              nginx:alpine   docker-desktop   Running         Running 26 seconds ago
qn7c7jmjuo76   \_ a.3                           nginx:alpine   docker-desktop   Shutdown        Shutdown 27 seconds ago
wi9330mbabpg   a.4                              nginx:alpine   docker-desktop   Running         Running 29 seconds ago
p5oy6h7nkvc3   \_ a.4                           nginx:alpine   docker-desktop   Shutdown        Shutdown 30 seconds ago
qpcgdsx1r21a   b.1                              nginx:alpine   docker-desktop   Running         Running 29 seconds ago
pubrerosvsw5   \_ b.1                           nginx:alpine   docker-desktop   Shutdown        Shutdown 30 seconds ago
pu6qmgyoibq4   b.2                              nginx:alpine   docker-desktop   Running         Running 26 seconds ago
tz1n4hjne6pk   \_ b.2                           nginx:alpine   docker-desktop   Shutdown        Shutdown 27 seconds ago
sqqj2n9fpi82   b.3                              nginx:alpine   docker-desktop   Running         Running 32 seconds ago
prv3gymkvqk6   \_ b.3                           nginx:alpine   docker-desktop   Shutdown        Shutdown 33 seconds ago
xbzm6ed776yw   c.j1afavbqqhr21jvnid3nnfoyt      nginx:alpine   docker-desktop   Running         Running 32 seconds ago
4mcsovp8ckwn   \_ c.j1afavbqqhr21jvnid3nnfoyt   nginx:alpine   docker-desktop   Shutdown        Shutdown 33 seconds ago
```
Signed-off-by: Andrii Berehuliak <berkusandrew@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `docker search --automated` and `docker search --stars` options were
deprecated in release v1.12.0, and scheduled for removal in v17.09.
This patch removes the deprecated flags, in favor of their equivalent
`--filter` options (`docker search --filter=is-automated=<true|false>` and
`docker search --filter=stars=...`).
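For example, the equivalent `--filter` usage:

```bash
# previously: docker search --automated --stars 3 busybox
docker search --filter is-automated=true --filter stars=3 busybox
```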
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This is required for supporting some Kubernetes distributions such as
rancher/k3s.
It comes with a test case validating correct parsing of a k3s kubeconfig
file.
Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>