This reverts commit 3c87f01b18.
This commit introduced two regressions:
- spurious "Unsupported signal: <nil>. Discarding."
- docker start --attach hanging if the container does not
have a TTY attached
Reverting for now, while we dig deeper into what's causing
the regressions.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Prior to this change, progress bars would sometimes be hidden, and the function
would return early. In addition, the direction of the progress bars would sometimes
be "incrementing" (similar to "docker service update"), and sometimes be "decrementing"
(to indicate that a "rollback" is being performed).
This fix makes sure that we always proceed with the "verifying" step, and now
prints a message _after_ the verifying stage has completed:
$ docker service rollback foo
foo
overall progress: rolling back update: 5 out of 5 tasks
1/5: running [> ]
2/5: starting [===========> ]
3/5: starting [===========> ]
4/5: running [> ]
5/5: running [> ]
verify: Service converged
rollback: rollback completed
$ docker service rollback foo
foo
overall progress: rolling back update: 1 out of 1 tasks
1/1: running [> ]
verify: Service converged
rollback: rollback completed
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 104469be0b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change:
--------------------------------------------
$ docker service create --replicas=1 --name foo -p 8080:80 nginx:alpine
t33qvykv8y0zbz266rxynsbo3
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
$ echo $?
0
$ docker service update --replicas=5 foo
foo
overall progress: 5 out of 5 tasks
1/5: running [==================================================>]
2/5: running [==================================================>]
3/5: running [==================================================>]
4/5: running [==================================================>]
5/5: running [==================================================>]
verify: Service converged
$ echo $?
0
$ docker service rollback foo
foo
rollback: manually requested rollback
overall progress: rolling back update: 1 out of 1 tasks
1/1: running [> ]
verify: Service converged
$ echo $?
0
$ docker service rollback foo
foo
service rolled back: rollback completed
$ echo $?
1
After this change:
--------------------------------------------
$ docker service create --replicas=1 --name foo -p 8080:80 nginx:alpine
t33qvykv8y0zbz266rxynsbo3
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
$ echo $?
0
$ docker service update --replicas=5 foo
foo
overall progress: 5 out of 5 tasks
1/5: running [==================================================>]
2/5: running [==================================================>]
3/5: running [==================================================>]
4/5: running [==================================================>]
5/5: running [==================================================>]
verify: Waiting 1 seconds to verify that tasks are stable...
$ echo $?
0
$ docker service rollback foo
foo
rollback: manually requested rollback
overall progress: rolling back update: 1 out of 1 tasks
1/1: running [> ]
verify: Service converged
$ echo $?
0
$ docker service rollback foo
foo
service rolled back: rollback completed
$ echo $?
0
$ docker service ps foo
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
4dt4ms4c5qfb foo.1 nginx:alpine docker-desktop Running Running 2 minutes ago
Remaining issues with reconciliation
--------------------------------------------
Note that both before and after this change, the command sometimes terminates
early and does not wait for the service to reconcile; this is most apparent
when rolling back scales the service up (so more tasks are deployed):
$ docker service rollback foo
foo
service rolled back: rollback completed
$ docker service rollback foo
foo
rollback: manually requested rollback
overall progress: rolling back update: 1 out of 5 tasks
1/5: pending [=================================> ]
2/5: running [> ]
3/5: pending [=================================> ]
4/5: pending [=================================> ]
5/5: pending [=================================> ]
service rolled back: rollback completed
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ce26a165b0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit f32731f902 fixed a potential panic
when an error was returned while trying to get existing credentials.
However, other code paths use the result of `GetDefaultAuthConfig()` even when
an error is returned; because a `nil` was returned in the error case, those
code paths panicked.
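A minimal sketch of the defensive pattern (not the exact patch), using hypothetical stand-in types and helpers: return a usable zero-value auth config together with the error, so callers that only log the error never dereference a nil pointer.

```go
// Hypothetical sketch only: AuthConfig and the helpers below are stand-ins,
// not the CLI's real types. The point is the defensive pattern: return a
// usable zero-value config alongside the error, never a nil pointer.
package main

import (
	"errors"
	"fmt"
)

type AuthConfig struct {
	Username      string
	ServerAddress string
}

// getDefaultAuthConfig is a stand-in for GetDefaultAuthConfig: even when the
// credential lookup fails, it returns an empty-but-valid AuthConfig.
func getDefaultAuthConfig(serverAddress string) (AuthConfig, error) {
	creds, err := lookupCredentials(serverAddress)
	if err != nil {
		return AuthConfig{ServerAddress: serverAddress}, err
	}
	return creds, nil
}

func lookupCredentials(serverAddress string) (AuthConfig, error) {
	return AuthConfig{}, errors.New("credential helper unavailable")
}

func main() {
	authConfig, err := getDefaultAuthConfig("https://index.docker.io/v1/")
	if err != nil {
		fmt.Println("warning:", err)
	}
	// Safe even when err != nil: authConfig is a value, not a nil pointer.
	fmt.Println("using registry:", authConfig.ServerAddress)
}
```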
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c2820a7e3b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
On Windows, the os/exec.{Command,CommandContext,LookPath} functions
resolve command names that have neither path separators nor file extension
(e.g., "git") by first looking in the current working directory before
looking in the PATH environment variable.
The Go maintainers intended this to match cmd.exe's historical behavior.
However, it is pretty much never the desired behavior, so out of an abundance
of caution this patch prevents it when executing commands.
Examples of commands that docker.exe may execute: `git`, `docker-buildx` (or other CLI plugins), `docker-credential-wincred`, `docker`.
Note that this was prompted by the [Go 1.15.7 security fixes](https://blog.golang.org/path-security), but unlike in `go.exe`,
the Windows path lookups in docker are not in a code path that allows remote code execution, so there is no security impact on docker.
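As an illustration only (not the exact patch), one way to guard against this is to wrap the lookup and reject results that resolve to the working directory; `safeLookPath` below is a hypothetical helper.

```go
// Illustrative sketch: safeLookPath is a hypothetical wrapper, not the patch
// itself. It refuses lookups that resolve to the current working directory.
// (Since Go 1.19, os/exec reports this case itself via exec.ErrDot.)
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func safeLookPath(name string) (string, error) {
	path, err := exec.LookPath(name)
	if err != nil {
		return "", err
	}
	// A non-absolute result means the binary was found relative to the
	// working directory rather than via %PATH%.
	if !filepath.IsAbs(path) {
		return "", fmt.Errorf("%q resolved to %q in the current directory; refusing to execute it", name, path)
	}
	return path, nil
}

func main() {
	if path, err := safeLookPath("git"); err != nil {
		fmt.Println("lookup failed:", err)
	} else {
		fmt.Println("resolved:", path)
	}
}
```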
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8d199d5bba)
Signed-off-by: Tibor Vass <tibor@docker.com>
full diff: 75b288015a...c1f2f97bff
relevant changes:
- pkcs12: document that we use the wrong PEM type
- pkcs12: drop PKCS#12 attributes with unknown OIDs
- ocsp: Improve documentation for ParseResponse and ParseResponseForCert
other changes (not in vendor):
- ssh: improve error message for KeyboardInteractiveChallenge
- ssh: remove slow unnecessary diffie-hellman-group-exchange primality check
- ssh/terminal: replace with a golang.org/x/term wrapper
- Deprecates ssh/terminal in favor of golang.org/x/term
- ssh/terminal: add support for zos
- ssh/terminal: bump x/term dependency to fix js/nacl
- nacl/auth: use Size instead of KeySize for Sum output
- sha3: remove go:nocheckptr annotation
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This hack was added in an attempt to continue supporting the experimental
(non-buildkit) `--platform` option, by dynamically updating the API version
required if buildkit isn't enabled.
This hack didn't work, however, because at the moment the override was
added, the command was not yet attached to the "root" (`docker`) command,
and because of that, the command itself acted as the root command:
`cmd.Root()` returned the `build` command.
As a result, validation steps defined as `PersistentPreRunE` on the root
command were not executed, causing invalid flags/options to not produce
an error.
Attempts to use an alternative approach (for example, cobra supports both
a `PersistentPreRun` and `PersistentPreRunE`) did not work either, because
`PersistentPreRunE` takes precedence over `PersistentPreRun`, and only one
will be executed.
Now that `--platform` should be supported for cases other than just
experimental (LCOW), let's remove the 'experimental' check, and just assume
it's supported for API v1.32 and up.
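A hypothetical example (not the actual CLI wiring) demonstrating the cobra behavior described above: when a subcommand defines its own persistent pre-run hook, the root's `PersistentPreRunE` is not executed for that subcommand, and on a single command `PersistentPreRunE` takes precedence over `PersistentPreRun`.

```go
// Hypothetical example (not the actual CLI wiring) demonstrating the cobra
// behavior described above.
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{
		Use: "docker",
		PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("root validation (PersistentPreRunE)")
			return nil
		},
	}
	build := &cobra.Command{
		Use: "build",
		// Defining a persistent hook on the subcommand means cobra stops
		// here and never runs the root's PersistentPreRunE for "build".
		// On a single command, PersistentPreRunE also takes precedence
		// over PersistentPreRun; only one of the two is executed.
		PersistentPreRun: func(cmd *cobra.Command, args []string) {
			fmt.Println("build hook (PersistentPreRun)")
		},
		RunE: func(cmd *cobra.Command, args []string) error { return nil },
	}
	root.AddCommand(build)

	root.SetArgs([]string{"build"})
	_ = root.Execute() // prints only "build hook (PersistentPreRun)"
}
```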
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
While performance will be worse, we can safely ignore the --stream
option when used, and print a deprecation warning instead of failing
the build.
With this patch:
echo -e "FROM scratch\nLABEL foo=bar" | docker build --stream -
DEPRECATED: The experimental --stream flag has been removed and the build context
will be sent non-streaming. Enable BuildKit instead with DOCKER_BUILDKIT=1
to stream build context, see https://docs.docker.com/go/buildkit/
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM scratch
--->
Step 2/2 : LABEL foo=bar
---> Running in 99e4021085b6
Removing intermediate container 99e4021085b6
---> 1a7a41be241f
Successfully built 1a7a41be241f
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The CLI disabled experimental features by default, requiring users
to set a configuration option to enable them.
Disabling experimental features was a request from Enterprise users
that did not want experimental features to be accessible.
We are changing this policy, and now enable experimental features
by default. Experimental features may still change and/or be removed, and
will be highlighted in the documentation and "usage" output.
For example, the `docker manifest inspect --help` output now shows:
EXPERIMENTAL:
docker manifest inspect is an experimental feature.
Experimental features provide early access to product functionality. These features
may change between releases without warning or can be removed entirely from a future
release. Learn more about experimental features: https://docs.docker.com/go/experimental/
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When initializing the API client, the User-Agent was added to any custom
HTTPHeaders that were configured. However, because the headers were added to
the original map rather than a copy, the User-Agent was also saved to
config.json after `docker login` and `docker logout`:
Before this change:
$ cat ~/.docker/config.json
cat: can't open '/root/.docker/config.json': No such file or directory
$ docker login -u myusername
Password:
...
Login Succeeded
$ cat ~/.docker/config.json
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "<base64 auth>"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.12 (linux)"
}
}
$ docker logout
{
"auths": {},
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.12 (linux)"
}
}
After this change:
$ cat ~/.docker/config.json
cat: can't open '/root/.docker/config.json': No such file or directory
$ docker login -u myusername
Password:
...
Login Succeeded
$ cat ~/.docker/config.json
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "<base64 auth>"
}
}
}
$ docker logout
Removing login credentials for https://index.docker.io/v1/
$ cat ~/.docker/config.json
{
"auths": {}
}
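A minimal sketch of the fix (hypothetical helper, not the actual CLI code): build the request headers from a copy of the configured map, so the map backing config.json is never modified.

```go
// Hypothetical sketch of the fix: build the request headers from a copy of the
// configured map, so the map backing config.json is never modified.
package main

import "fmt"

func withUserAgent(configured map[string]string, userAgent string) map[string]string {
	headers := make(map[string]string, len(configured)+1)
	for k, v := range configured {
		headers[k] = v
	}
	headers["User-Agent"] = userAgent
	return headers
}

func main() {
	configHeaders := map[string]string{"X-Custom-Header": "example"}
	clientHeaders := withUserAgent(configHeaders, "Docker-Client/19.03.12 (linux)")

	fmt.Println(configHeaders) // map[X-Custom-Header:example] (unchanged)
	fmt.Println(clientHeaders) // User-Agent is only in the client's copy
}
```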
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This implements a special "RESET" value that can be used to reset the
list of capabilities to add/drop when updating a service.
Given the following service:
| CapDrop | CapAdd |
| -------------- | ------------- |
| CAP_SOME_CAP | |
When updating the service, and applying `--cap-drop RESET`, the "drop" list
is reset to its default:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | |
When updating the service, and applying `--cap-drop RESET`, combined with
`--cap-add CAP_SOME_CAP` and `--cap-drop CAP_SOME_OTHER_CAP`:
| CapDrop | CapAdd |
| -------------- | ------------- |
| CAP_SOME_OTHER_CAP | CAP_SOME_CAP |
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Adding/removing capabilities when updating a service is handled as a tri-state:
- if the capability was previously "dropped", then remove it from "CapabilityDrop",
but do NOT add it to "CapabilityAdd". However, if the capability was not yet in
the service's "CapabilityDrop", then simply add it to the service's "CapabilityAdd"
- likewise, if the capability was previously "added", then remove it from
"CapabilityAdd", but do NOT add it to "CapabilityDrop". If the capability was
not yet in the service's "CapabilityAdd", then simply add it to the service's
"CapabilityDrop".
In other words, given a service with the following:
| CapDrop | CapAdd |
| -------------- | ------------- |
| CAP_SOME_CAP | |
When updating the service, and applying `--cap-add CAP_SOME_CAP`, the previously
dropped capability is removed:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | |
When updating the service a second time, again applying `--cap-add CAP_SOME_CAP`,
the capability is now added:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | CAP_SOME_CAP |
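A minimal sketch of this tri-state logic for a single `--cap-add` value, using plain string slices in place of the service spec fields (illustrative only):

```go
// Illustrative sketch of the tri-state handling for a single --cap-add value,
// using plain string slices in place of the service spec fields.
package main

import "fmt"

// applyCapAdd removes the capability from the drop list if it was present
// there; only otherwise is it appended to the add list.
func applyCapAdd(capAdd, capDrop []string, capability string) (newAdd, newDrop []string) {
	wasDropped := false
	for _, c := range capDrop {
		if c == capability {
			wasDropped = true
			continue // un-drop it, but do not add it
		}
		newDrop = append(newDrop, c)
	}
	newAdd = append(newAdd, capAdd...)
	if !wasDropped {
		newAdd = append(newAdd, capability)
	}
	return newAdd, newDrop
}

func main() {
	// First update: CAP_SOME_CAP was dropped, so it is only removed from CapDrop.
	add, drop := applyCapAdd(nil, []string{"CAP_SOME_CAP"}, "CAP_SOME_CAP")
	fmt.Println(add, drop) // [] []

	// Second update: nothing to un-drop, so it is added to CapAdd.
	add, drop = applyCapAdd(add, drop, "CAP_SOME_CAP")
	fmt.Println(add, drop) // [CAP_SOME_CAP] []
}
```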
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When creating and updating services, we need to avoid unneeded service churn.
The interaction of separate lists to "add" and "drop" capabilities, a special
("ALL") capability, as well as a "relaxed" format for accepted capabilities
(case-insensitive, `CAP_` prefix optional) make this rather involved.
This patch updates how we handle `--cap-add` / `--cap-drop` when _creating_ as
well as _updating_, with the following rules/assumptions applied:
- both existing (service spec) and new (values passed through flags or in
the compose-file) capabilities are normalized and de-duplicated before use.
- the special "ALL" capability is equivalent to "all capabilities" and taken
into account when normalizing capabilities. Combining "ALL" capabilities
and other capabilities is therefore equivalent to just specifying "ALL".
- adding capabilities takes precedence over dropping, which means that if
a capability is both set to be "dropped" and to be "added", it is removed
from the list to "drop".
- the final lists should be sorted and normalized to reduce service churn.
- no validation of capabilities is handled by the client. Validation is
delegated to the daemon/server.
When deploying a service using a docker-compose file, the docker-compose file
is *mostly* handled as being "declarative". However, many of the issues outlined
above also apply to compose-files, so similar handling is applied to compose
files as well to prevent service churn.
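A minimal sketch of the normalization and precedence rules above, using plain string slices (illustrative only, not the actual implementation):

```go
// Illustrative sketch (not the actual implementation) of the normalization
// and precedence rules listed above.
package main

import (
	"fmt"
	"sort"
	"strings"
)

// normalizeCaps upper-cases, prefixes, de-duplicates, and sorts capabilities;
// if the special "ALL" value is present, the whole list collapses to ["ALL"].
func normalizeCaps(caps []string) []string {
	seen := map[string]struct{}{}
	for _, c := range caps {
		c = strings.ToUpper(c)
		if c == "ALL" {
			return []string{"ALL"}
		}
		if !strings.HasPrefix(c, "CAP_") {
			c = "CAP_" + c
		}
		seen[c] = struct{}{}
	}
	out := make([]string, 0, len(seen))
	for c := range seen {
		out = append(out, c)
	}
	sort.Strings(out)
	return out
}

func main() {
	add := normalizeCaps([]string{"chown", "CAP_CHOWN", "cap_sys_admin"})
	drop := normalizeCaps([]string{"CAP_SYS_ADMIN", "net_raw"})

	// Adding takes precedence over dropping: anything in "add" is removed
	// from the final "drop" list.
	inAdd := map[string]struct{}{}
	for _, c := range add {
		inAdd[c] = struct{}{}
	}
	finalDrop := make([]string, 0, len(drop))
	for _, c := range drop {
		if _, ok := inAdd[c]; !ok {
			finalDrop = append(finalDrop, c)
		}
	}
	fmt.Println("add: ", add)       // [CAP_CHOWN CAP_SYS_ADMIN]
	fmt.Println("drop:", finalDrop) // [CAP_NET_RAW]
}
```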
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The tabwriter was configured with a min-width for columns of 20 positions.
This seemed quite wide, and caused smaller columns to be printed with a large
gap between them.
Before:
docker container stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
29184b3ae391 amazing_shirley 0.00% 800KiB / 1.944GiB 0.04% 1.44kB / 0B 0B / 0B 1
403c101bad56 agitated_swartz 0.15% 34.31MiB / 1.944GiB 1.72% 10.2MB / 206kB 0B / 0B 51
0dc4b7f6c6be container2 0.00% 1.012MiB / 1.944GiB 0.05% 12.9kB / 0B 0B / 0B 5
2d99abcc6f62 container99 0.00% 972KiB / 1.944GiB 0.05% 13kB / 0B 0B / 0B 5
9f9aa90173ac foo 0.00% 820KiB / 1.944GiB 0.04% 13kB / 0B 0B / 0B 5
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
29184b3ae391 docker-cli-dev "ash" 4 hours ago Up 4 hours amazing_shirley
403c101bad56 docker-dev:master "hack/dind bash" 3 days ago Up 3 days agitated_swartz
0dc4b7f6c6be nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp container2
2d99abcc6f62 nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp container99
9f9aa90173ac nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp foo
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-cli-dev latest 5f603caa04aa 4 hours ago 610MB
docker-cli-native latest 9dd29f8d387b 4 hours ago 519MB
docker-dev master 8132bf7a199e 3 days ago 2.02GB
docker-dev improve-build-errors 69e208994b3f 11 days ago 2.01GB
docker-dev refactor-idtools 69e208994b3f 11 days ago 2.01GB
After:
docker container stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
29184b3ae391 amazing_shirley 0.14% 5.703MiB / 1.944GiB 0.29% 1.44kB / 0B 0B / 0B 10
403c101bad56 agitated_swartz 0.15% 56.97MiB / 1.944GiB 2.86% 10.2MB / 206kB 0B / 0B 51
0dc4b7f6c6be container2 0.00% 1016KiB / 1.944GiB 0.05% 12.9kB / 0B 0B / 0B 5
2d99abcc6f62 container99 0.00% 956KiB / 1.944GiB 0.05% 13kB / 0B 0B / 0B 5
9f9aa90173ac foo 0.00% 980KiB / 1.944GiB 0.05% 13kB / 0B 0B / 0B 5
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
29184b3ae391 docker-cli-dev "ash" 12 minutes ago Up 12 minutes amazing_shirley
403c101bad56 docker-dev:master "hack/dind bash" 3 days ago Up 3 days agitated_swartz
0dc4b7f6c6be nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp container2
2d99abcc6f62 nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp container99
9f9aa90173ac nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp foo
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-cli-dev latest 5f603caa04aa 4 hours ago 610MB
docker-cli-native latest 9dd29f8d387b 4 hours ago 519MB
docker-dev master 8132bf7a199e 3 days ago 2.02GB
docker-dev improve-build-errors 69e208994b3f 11 days ago 2.01GB
docker-dev refactor-idtools 69e208994b3f 11 days ago 2.01GB
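For illustration, the relevant knob is the `minwidth` argument to `text/tabwriter.NewWriter`; the values below are examples, not necessarily the ones used by the CLI:

```go
// Example of the knob involved: the second argument to tabwriter.NewWriter is
// the minimum cell width. The exact values used by the CLI may differ; these
// are illustrative.
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// minwidth=10 (instead of 20) lets short columns stay narrow, while
	// padding=3 still keeps adjacent columns separated.
	w := tabwriter.NewWriter(os.Stdout, 10, 1, 3, ' ', 0)
	fmt.Fprintln(w, "REPOSITORY\tTAG\tIMAGE ID\tCREATED\tSIZE")
	fmt.Fprintln(w, "docker-cli-dev\tlatest\t5f603caa04aa\t4 hours ago\t610MB")
	fmt.Fprintln(w, "docker-dev\tmaster\t8132bf7a199e\t3 days ago\t2.02GB")
	w.Flush()
}
```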
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The vanity domain is down, and the project has moved
to a new location.
The vendor check started failing because of this:
Collecting initial packages
Download dependencies
unrecognized import path "vbom.ml/util" (https fetch: Get https://vbom.ml/util?go-get=1: dial tcp: lookup vbom.ml on 169.254.169.254:53: no such host)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When using `docker rm` / `docker container rm` with the `-f` / `--force` option, attempts to remove non-existing containers should print a warning, but should return a zero exit code ("successful").
Currently, a non-zero exit code is returned, marking the removal as "failed":
$ docker rm -fv 798c9471b695
Error: No such container: 798c9471b695
$ echo $?
1
The command should match the behavior of `rm` / `rm -f`, with the exception that
a warning is printed (instead of the failure being silently ignored):
Running `rm` with `-f` silences output and returns a zero exit code:
touch some-file && rm -f no-such-file some-file; echo exit code: $?; ls -la
# exit code: 0
# total 0
# drwxr-xr-x 2 sebastiaan staff 64 Aug 14 12:17 .
# drwxr-xr-x 199 sebastiaan staff 6368 Aug 14 12:13 ..
mkdir some-directory && rm -rf no-such-directory some-directory; echo exit code: $?; ls -la
# exit code: 0
# total 0
# drwxr-xr-x 2 sebastiaan staff 64 Aug 14 12:17 .
# drwxr-xr-x 199 sebastiaan staff 6368 Aug 14 12:13 ..
Note that other reasons for a delete to fail should still result in a non-zero
exit code, matching the behavior of `rm`. For instance, in the example below,
the `rm` failed because directories can only be removed if the `-r` option is used:
touch some-file && mkdir some-directory && rm -f some-directory no-such-file some-file; echo exit code: $?; ls -la
# rm: some-directory: is a directory
# exit code: 1
# total 0
# drwxr-xr-x 3 sebastiaan staff 96 Aug 14 14:15 .
# drwxr-xr-x 199 sebastiaan staff 6368 Aug 14 12:13 ..
# drwxr-xr-x 2 sebastiaan staff 64 Aug 14 14:15 some-directory
This patch updates the `docker rm` / `docker container rm` command to not produce
an error when attempting to remove a missing container, and to instead only print
the error while returning a zero (0) exit code.
With this patch applied:
docker create --name mycontainer busybox \
&& docker rm nosuchcontainer mycontainer; \
echo exit code: $?; \
docker ps -a --filter name=mycontainer
# df23cc8573f00e97d6e948b48d9ea7d75ce3b4faaab4fe1d3458d3bfa451f39d
# mycontainer
# Error: No such container: nosuchcontainer
# exit code: 0
# CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
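A minimal sketch of the intended error handling (not the actual CLI code), assuming the daemon's "no such container" error can be detected with `errdefs.IsNotFound` from `github.com/docker/docker/errdefs`:

```go
// Hypothetical sketch of the intended behavior (not the actual CLI code):
// when --force is set, a "no such container" error is printed but does not
// cause a non-zero exit code.
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"github.com/docker/docker/errdefs"
)

type containerRemover interface {
	ContainerRemove(ctx context.Context, containerID string, options types.ContainerRemoveOptions) error
}

func removeContainers(ctx context.Context, apiClient containerRemover, ids []string, force bool) error {
	failed := false
	for _, id := range ids {
		err := apiClient.ContainerRemove(ctx, id, types.ContainerRemoveOptions{Force: force})
		switch {
		case err == nil:
			fmt.Println(id)
		case force && errdefs.IsNotFound(err):
			// Print as a warning, but do not mark the command as failed.
			fmt.Fprintln(os.Stderr, "Error:", err)
		default:
			failed = true
			fmt.Fprintln(os.Stderr, "Error:", err)
		}
	}
	if failed {
		return fmt.Errorf("one or more containers could not be removed")
	}
	return nil
}

func main() {
	apiClient, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := removeContainers(context.Background(), apiClient, os.Args[1:], true); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```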
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Combining `-add` and `-rm` flags on `docker service update` should
be usable to explicitly replace existing options. The current order
of processing did not allow this, causing the `-rm` flag to remove
properties that were specified in `-add`. This behavior was inconsistent
with (for example) `--host-add` and `--host-rm`.
This patch updates the behavior to first remove properties, then
add new properties.
Note that there are still some improvements to make, such as making the removal
more granular (e.g. having `--label-rm label=some-value` only remove
the label if its value matches `some-value`); these changes are left for
a follow-up.
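A minimal sketch of the new processing order for (for example) label updates, using a plain map instead of the service spec (illustrative only):

```go
// Illustrative sketch of the new ordering for (for example) label updates,
// using a plain map instead of the service spec.
package main

import "fmt"

func updateLabels(existing map[string]string, rm []string, add map[string]string) map[string]string {
	updated := make(map[string]string, len(existing)+len(add))
	for k, v := range existing {
		updated[k] = v
	}
	// 1. Apply removals first ...
	for _, k := range rm {
		delete(updated, k)
	}
	// 2. ... then additions, so an --add is never clobbered by an --rm.
	for k, v := range add {
		updated[k] = v
	}
	return updated
}

func main() {
	labels := map[string]string{"FOO": "bar", "BAR": "baz"}
	labels = updateLabels(labels, []string{"FOO"}, map[string]string{"FOO": "updated-foo"})
	fmt.Println(labels) // map[BAR:baz FOO:updated-foo]
}
```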
Before this change:
-----------------------------
Create a service with two env-vars
```bash
docker service create --env FOO=bar --env BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"FOO=bar",
"BAR=baz"
]
```
Update the service, with the intent to replace the value of `FOO` with a new value
```bash
docker service update --env-rm FOO --env-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"BAR=baz"
]
```
Create a service with two labels
```bash
docker service create --label FOO=bar --label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, with the intent to replace the value of `FOO` with a new value
```bash
docker service update --label-rm FOO --label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz"
}
```
Create a service with two container labels
```bash
docker service create --container-label FOO=bar --container-label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, with the intent to replace the value of `FOO` with a new value
```bash
docker service update --container-label-rm FOO --container-label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz"
}
```
With this patch applied:
--------------------------------
Create a service with two env-vars
```bash
docker service create --env FOO=bar --env BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"FOO=bar",
"BAR=baz"
]
```
Update the service, and replace the value of `FOO` with a new value
```bash
docker service update --env-rm FOO --env-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"BAR=baz",
"FOO=updated-foo"
]
```
Create a service with two labels
```bash
docker service create --label FOO=bar --label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, and replace the value of `FOO` with a new value
```bash
docker service update --label-rm FOO --label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "updated-foo"
}
```
Create a service with two container labels
```bash
docker service create --container-label FOO=bar --container-label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, and replace the value of `FOO` with a new value
```bash
docker service update --container-label-rm FOO --container-label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "updated-foo"
}
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When doing `docker service inspect --pretty` on services without
`TaskTemplate.Resources` or `TaskTemplate.Resources.Limits`, the command
fails. This is due to a missing check in `ResourceLimitPids()`.
This bug was introduced by 395a6d560d
and produces the following error message:
```
Template parsing error: template: :139:10: executing "" at <.ResourceLimitPids>: error calling ResourceLimitPids: runtime error: invalid memory address or nil pointer dereference
```
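A minimal sketch of the nil-guard, using hypothetical stand-in types rather than the real service spec structs:

```go
// Illustrative sketch using stand-in types (not the real service spec structs):
// the template helper returns 0 instead of dereferencing a nil Resources/Limits.
package main

import "fmt"

type Limit struct{ Pids int64 }
type Resources struct{ Limits *Limit }
type TaskTemplate struct{ Resources *Resources }

type serviceContext struct{ task TaskTemplate }

// ResourceLimitPids is safe to call from the --pretty template even when the
// service has no resource limits configured.
func (c serviceContext) ResourceLimitPids() int64 {
	if c.task.Resources == nil || c.task.Resources.Limits == nil {
		return 0
	}
	return c.task.Resources.Limits.Pids
}

func main() {
	fmt.Println(serviceContext{}.ResourceLimitPids()) // 0, no panic
	withLimits := serviceContext{task: TaskTemplate{Resources: &Resources{Limits: &Limit{Pids: 100}}}}
	fmt.Println(withLimits.ResourceLimitPids()) // 100
}
```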
Signed-off-by: Albin Kerouanton <albin@akerouanton.name>