including all the directives and a link to the documentation.
Signed-off-by: Silvin Lubecki <silvin.lubecki@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This commit fixes spelling mistakes (typos) at a few places in the codebase.
Signed-off-by: Amey Shrivastava <72866602+AmeyShrivastava@users.noreply.github.com>
Commit 330a003533
introduced "synchronous" service update and rollback, using progress bars to show
current status for each task.
As part of that change, progress bars were "reversed" when doing a rollback, to
indicate that status was rolled back to a previous state.
Reversing direction is somewhat confusing, as progress bars now return to their
"initial" state to indicate the rollback "completed"; for an "automatic" rollback, this
may be somewhat clear (progress bars "move to the right", then "roll back" if the
update failed), but when doing a manual rollback, it feels counter-intuitive
(rolling back is the _expected_ outcome).
This patch removes the code to reverse the direction of progress-bars, and makes
progress-bars always move from left ("start") to right ("finished").
Before this patch
----------------------------------------
1. create a service with automatic rollback on failure
$ docker service create --update-failure-action=rollback --name foo --tty --replicas=5 nginx:alpine
9xi1w3mv5sqtyexsuh78qg0cb
overall progress: 5 out of 5 tasks
1/5: running [==================================================>]
2/5: running [==================================================>]
3/5: running [==================================================>]
4/5: running [==================================================>]
5/5: running [==================================================>]
verify: Waiting 2 seconds to verify that tasks are stable...
2. update the service, making it fail after 3 seconds
$ docker service update --entrypoint="/bin/sh -c 'sleep 3; exit 1'" foo
overall progress: rolling back update: 2 out of 5 tasks
1/5: running [==================================================>]
2/5: running [==================================================>]
3/5: starting [============================================> ]
4/5: running [==================================================>]
5/5: running [==================================================>]
3. Once the service starts failing, automatic rollback is started; progress-bars now move in the reverse direction;
overall progress: rolling back update: 3 out of 5 tasks
1/5: ready [===========> ]
2/5: ready [===========> ]
3/5: running [> ]
4/5: running [> ]
5/5: running [> ]
4. When the rollback is completed, the progress bars are at the "start" to indicate they completed;
overall progress: rolling back update: 5 out of 5 tasks
1/5: running [> ]
2/5: running [> ]
3/5: running [> ]
4/5: running [> ]
5/5: running [> ]
rollback: update rolled back due to failure or early termination of task bndiu8a998agr8s6sjlg9tnrw
verify: Service converged
After this patch
----------------------------------------
Progress bars always go from left to right; also in a rollback situation;
After updating to the "faulty" entrypoint, tasks are deployed:
$ docker service update --entrypoint="/bin/sh -c 'sleep 3; exit 1'" foo
foo
overall progress: 1 out of 5 tasks
1/5:
2/5: running [==================================================>]
3/5: ready [======================================> ]
4/5:
5/5:
Once tasks start failing, rollback is started, and is presented the same way as a
regular update; progress bars go from left to right;
overall progress: rolling back update: 3 out of 5 tasks
1/5: ready [======================================> ]
2/5: starting [============================================> ]
3/5: running [==================================================>]
4/5: running [==================================================>]
5/5: running [==================================================>]
rollback: update rolled back due to failure or early termination of task c11dxd7ud3d5pq8g45qkb4rjx
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Prior to this change, progress bars would sometimes be hidden, and the function
would return early. In addition, the direction of the progress bars would sometimes
be "incrementing" (similar to "docker service update"), and sometimes be "decrementing"
(to indicate a "rollback" is being performed).
This fix makes sure that we always proceed with the "verifying" step, and now
prints a message _after_ the verifying stage has completed;
$ docker service rollback foo
foo
overall progress: rolling back update: 5 out of 5 tasks
1/5: running [> ]
2/5: starting [===========> ]
3/5: starting [===========> ]
4/5: running [> ]
5/5: running [> ]
verify: Service converged
rollback: rollback completed
$ docker service rollback foo
foo
overall progress: rolling back update: 1 out of 1 tasks
1/1: running [> ]
verify: Service converged
rollback: rollback completed
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change:
--------------------------------------------
$ docker service create --replicas=1 --name foo -p 8080:80 nginx:alpine
t33qvykv8y0zbz266rxynsbo3
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
$ echo $?
0
$ docker service update --replicas=5 foo
foo
overall progress: 5 out of 5 tasks
1/5: running [==================================================>]
2/5: running [==================================================>]
3/5: running [==================================================>]
4/5: running [==================================================>]
5/5: running [==================================================>]
verify: Service converged
$ echo $?
0
$ docker service rollback foo
foo
rollback: manually requested rollback
overall progress: rolling back update: 1 out of 1 tasks
1/1: running [> ]
verify: Service converged
$ echo $?
0
$ docker service rollback foo
foo
service rolled back: rollback completed
$ echo $?
1
After this change:
--------------------------------------------
$ docker service create --replicas=1 --name foo -p 8080:80 nginx:alpine
t33qvykv8y0zbz266rxynsbo3
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
$ echo $?
0
$ docker service update --replicas=5 foo
foo
overall progress: 5 out of 5 tasks
1/5: running [==================================================>]
2/5: running [==================================================>]
3/5: running [==================================================>]
4/5: running [==================================================>]
5/5: running [==================================================>]
verify: Waiting 1 seconds to verify that tasks are stable...
$ echo $?
0
$ docker service rollback foo
foo
rollback: manually requested rollback
overall progress: rolling back update: 1 out of 1 tasks
1/1: running [> ]
verify: Service converged
$ echo $?
0
$ docker service rollback foo
foo
service rolled back: rollback completed
$ echo $?
0
$ docker service ps foo
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
4dt4ms4c5qfb foo.1 nginx:alpine docker-desktop Running Running 2 minutes ago
Remaining issues with reconciliation
--------------------------------------------
Note that both before and after this change, the command sometimes terminates
early, and does not wait for the service to reconcile; this is most apparent
when the rollback scales the service up (so more tasks are deployed);
$ docker service rollback foo
foo
service rolled back: rollback completed
$ docker service rollback foo
foo
rollback: manually requested rollback
overall progress: rolling back update: 1 out of 5 tasks
1/5: pending [=================================> ]
2/5: running [> ]
3/5: pending [=================================> ]
4/5: pending [=================================> ]
5/5: pending [=================================> ]
service rolled back: rollback completed
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This implements a special "RESET" value that can be used to reset the
list of capabilities to add/drop when updating a service.
Given the following service;
| CapDrop | CapAdd |
| -------------- | ------------- |
| CAP_SOME_CAP | |
When updating the service, and applying `--cap-drop RESET`, the "drop" list
is reset to its default:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | |
When updating the service, and applying `--cap-drop RESET`, combined with
`--cap-add CAP_SOME_CAP` and `--cap-drop CAP_SOME_OTHER_CAP`:
| CapDrop | CapAdd |
| -------------- | ------------- |
| CAP_SOME_OTHER_CAP | CAP_SOME_CAP |
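A minimal Go sketch of how such a "RESET" sentinel can be handled when merging flag
values into the existing list (names and structure here are illustrative, not the
actual docker/cli code):

```go
package main

import "fmt"

// resetSentinel is the special value that clears the existing list before any
// other flag values are applied (name chosen for this sketch).
const resetSentinel = "RESET"

// mergeCapabilities returns the new capability list: it starts from the
// existing list, unless the flag values contain the RESET sentinel, in which
// case it starts from an empty (default) list.
func mergeCapabilities(existing, flagValues []string) []string {
    out := append([]string{}, existing...)
    for _, v := range flagValues {
        if v == resetSentinel {
            out = nil // reset to the default, regardless of position
            break
        }
    }
    for _, v := range flagValues {
        if v != resetSentinel {
            out = append(out, v)
        }
    }
    return out
}

func main() {
    current := []string{"CAP_SOME_CAP"}
    fmt.Println(mergeCapabilities(current, []string{"RESET"}))
    // []
    fmt.Println(mergeCapabilities(current, []string{"RESET", "CAP_SOME_OTHER_CAP"}))
    // [CAP_SOME_OTHER_CAP]
}
```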
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Adding/removing capabilities when updating a service is considered a tri-state;
- if the capability was previously "dropped", then remove it from "CapabilityDrop",
but do NOT add it to "CapabilityAdd". However, if the capability was not yet in
the service's "CapabilityDrop", then simply add it to the service's "CapabilityAdd"
- likewise, if the capability was previously "added", then remove it from
"CapabilityAdd", but do NOT add it to "CapabilityDrop". If the capability was
not yet in the service's "CapabilityAdd", then simply add it to the service's
"CapabilityDrop".
In other words, given a service with the following:
| CapDrop | CapAdd |
| -------------- | ------------- |
| CAP_SOME_CAP | |
When updating the service, and applying `--cap-add CAP_SOME_CAP`, the previously
dropped capability is removed:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | |
When updating the service a second time, applying `--cap-add CAP_SOME_CAP`, the
capability is now added:
| CapDrop | CapAdd |
| -------------- | ------------- |
| | CAP_SOME_CAP |
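The tri-state behaviour described above boils down to a small toggle; a
self-contained sketch (simplified, with made-up helper names rather than the real
update code):

```go
package main

import "fmt"

// removeFrom deletes name from list and reports whether it was present.
func removeFrom(list []string, name string) ([]string, bool) {
    var out []string
    found := false
    for _, c := range list {
        if c == name {
            found = true
            continue
        }
        out = append(out, c)
    }
    return out, found
}

// applyCapAdd implements the tri-state behaviour of `--cap-add`: a previously
// dropped capability is only removed from the "drop" list; otherwise it is
// appended to the "add" list.
func applyCapAdd(capAdd, capDrop []string, name string) ([]string, []string) {
    var wasDropped bool
    if capDrop, wasDropped = removeFrom(capDrop, name); wasDropped {
        return capAdd, capDrop // un-drop only, do NOT add
    }
    return append(capAdd, name), capDrop
}

func main() {
    capAdd, capDrop := []string{}, []string{"CAP_SOME_CAP"}

    // First update: CAP_SOME_CAP is removed from CapabilityDrop only.
    capAdd, capDrop = applyCapAdd(capAdd, capDrop, "CAP_SOME_CAP")
    fmt.Println(capAdd, capDrop) // [] []

    // Second update: CAP_SOME_CAP is now added to CapabilityAdd.
    capAdd, capDrop = applyCapAdd(capAdd, capDrop, "CAP_SOME_CAP")
    fmt.Println(capAdd, capDrop) // [CAP_SOME_CAP] []
}
```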
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When creating and updating services, we need to avoid unneeded service churn.
The interaction of separate lists to "add" and "drop" capabilities, a special
("ALL") capability, as well as a "relaxed" format for accepted capabilities
(case-insensitive, `CAP_` prefix optional) makes this rather involved.
This patch updates how we handle `--cap-add` / `--cap-drop` when _creating_ as
well as _updating_, with the following rules/assumptions applied:
- both existing capabilities (from the service spec) and new ones (passed through
flags or in the compose-file) are normalized and de-duplicated before use.
- the special "ALL" capability is equivalent to "all capabilities" and taken
into account when normalizing capabilities. Combining "ALL" capabilities
and other capabilities is therefore equivalent to just specifying "ALL".
- adding capabilities takes precedence over dropping, which means that if
a capability is both set to be "dropped" and to be "added", it is removed
from the list to "drop".
- the final lists should be sorted and normalized to reduce service churn
- no validation of capabilities is handled by the client. Validation is
delegated to the daemon/server.
When deploying a service using a docker-compose file, the docker-compose file
is *mostly* handled as being "declarative". However, many of the issues outlined
above also apply to compose-files, so similar handling is applied to compose
files as well to prevent service churn.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The tabwriter was configured to have a min-width for columns of 20 positions.
This seemed quite wide, and caused smaller columns to be printed with a large
gap between them.
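The layout is produced by Go's text/tabwriter, and minwidth is the knob being
changed here. A small stand-alone illustration; 20 matches the old value mentioned
above, while the smaller value is just an example and not necessarily what the CLI
now uses:

```go
package main

import (
    "fmt"
    "os"
    "text/tabwriter"
)

// printTable writes a small tab-separated table with the given minimum
// column width (the remaining tabwriter parameters are just plausible values).
func printTable(minwidth int) {
    // NewWriter(output, minwidth, tabwidth, padding, padchar, flags)
    w := tabwriter.NewWriter(os.Stdout, minwidth, 1, 3, ' ', 0)
    fmt.Fprintln(w, "REPOSITORY\tTAG\tIMAGE ID\tCREATED\tSIZE")
    fmt.Fprintln(w, "docker-cli-dev\tlatest\t5f603caa04aa\t4 hours ago\t610MB")
    w.Flush()
}

func main() {
    printTable(20) // wide gaps, comparable to the old behaviour
    printTable(10) // tighter columns, comparable to the new behaviour
}
```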
Before:
docker container stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
29184b3ae391 amazing_shirley 0.00% 800KiB / 1.944GiB 0.04% 1.44kB / 0B 0B / 0B 1
403c101bad56 agitated_swartz 0.15% 34.31MiB / 1.944GiB 1.72% 10.2MB / 206kB 0B / 0B 51
0dc4b7f6c6be container2 0.00% 1.012MiB / 1.944GiB 0.05% 12.9kB / 0B 0B / 0B 5
2d99abcc6f62 container99 0.00% 972KiB / 1.944GiB 0.05% 13kB / 0B 0B / 0B 5
9f9aa90173ac foo 0.00% 820KiB / 1.944GiB 0.04% 13kB / 0B 0B / 0B 5
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
29184b3ae391 docker-cli-dev "ash" 4 hours ago Up 4 hours amazing_shirley
403c101bad56 docker-dev:master "hack/dind bash" 3 days ago Up 3 days agitated_swartz
0dc4b7f6c6be nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp container2
2d99abcc6f62 nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp container99
9f9aa90173ac nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp foo
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-cli-dev latest 5f603caa04aa 4 hours ago 610MB
docker-cli-native latest 9dd29f8d387b 4 hours ago 519MB
docker-dev master 8132bf7a199e 3 days ago 2.02GB
docker-dev improve-build-errors 69e208994b3f 11 days ago 2.01GB
docker-dev refactor-idtools 69e208994b3f 11 days ago 2.01GB
After:
docker container stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
29184b3ae391 amazing_shirley 0.14% 5.703MiB / 1.944GiB 0.29% 1.44kB / 0B 0B / 0B 10
403c101bad56 agitated_swartz 0.15% 56.97MiB / 1.944GiB 2.86% 10.2MB / 206kB 0B / 0B 51
0dc4b7f6c6be container2 0.00% 1016KiB / 1.944GiB 0.05% 12.9kB / 0B 0B / 0B 5
2d99abcc6f62 container99 0.00% 956KiB / 1.944GiB 0.05% 13kB / 0B 0B / 0B 5
9f9aa90173ac foo 0.00% 980KiB / 1.944GiB 0.05% 13kB / 0B 0B / 0B 5
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
29184b3ae391 docker-cli-dev "ash" 12 minutes ago Up 12 minutes amazing_shirley
403c101bad56 docker-dev:master "hack/dind bash" 3 days ago Up 3 days agitated_swartz
0dc4b7f6c6be nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp container2
2d99abcc6f62 nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp container99
9f9aa90173ac nginx:alpine "/docker-entrypoint.…" 4 days ago Up 4 days 80/tcp foo
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-cli-dev latest 5f603caa04aa 4 hours ago 610MB
docker-cli-native latest 9dd29f8d387b 4 hours ago 519MB
docker-dev master 8132bf7a199e 3 days ago 2.02GB
docker-dev improve-build-errors 69e208994b3f 11 days ago 2.01GB
docker-dev refactor-idtools 69e208994b3f 11 days ago 2.01GB
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The vanity domain is down, and the project has moved
to a new location.
vendor check started failing because of this:
Collecting initial packages
Download dependencies
unrecognized import path "vbom.ml/util" (https fetch: Get https://vbom.ml/util?go-get=1: dial tcp: lookup vbom.ml on 169.254.169.254:53: no such host)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Combining `-add` and `-rm` flags on `docker service update` should
make it possible to explicitly replace existing options. The current order
of processing did not allow this, causing the `-rm` flag to remove
properties that were specified in `-add`. This behavior was inconsistent
with (for example) `--host-add` and `--host-rm`.
This patch updates the behavior to first remove properties, then
add new properties.
Note that there are still some improvements to make, e.g. making the removal
more granular (`--label-rm label=some-value` should only remove
the label if its value matches `some-value`); these changes are left for
a follow-up.
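Conceptually, the fix is an ordering change: apply removals before additions when
computing the new set. A simplified sketch using labels as an example (plain map,
made-up helper name, not the actual update code):

```go
package main

import (
    "fmt"
    "strings"
)

// updateLabels applies removals first, then additions, so that
// `--label-rm FOO --label-add FOO=new` replaces FOO instead of deleting it.
func updateLabels(current map[string]string, rm, add []string) map[string]string {
    out := map[string]string{}
    for k, v := range current {
        out[k] = v
    }
    for _, k := range rm {
        delete(out, k) // removals first
    }
    for _, kv := range add {
        k, v, _ := strings.Cut(kv, "=") // additions last, so they win
        out[k] = v
    }
    return out
}

func main() {
    labels := map[string]string{"FOO": "bar", "BAR": "baz"}
    fmt.Println(updateLabels(labels, []string{"FOO"}, []string{"FOO=updated-foo"}))
    // map[BAR:baz FOO:updated-foo]
}
```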
Before this change:
-----------------------------
Create a service with two env-vars
```bash
docker service create --env FOO=bar --env BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"FOO=bar",
"BAR=baz"
]
```
Update the service, with the intent to replace the value of `FOO` with a new value
```bash
docker service update --env-rm FOO --env-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"BAR=baz"
]
```
Create a service with two labels
```bash
docker service create --label FOO=bar --label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, with the intent to replace the value of `FOO` with a new value
```bash
docker service update --label-rm FOO --label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz"
}
```
Create a service with two container labels
```bash
docker service create --container-label FOO=bar --container-label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, with the intent to replace the value of `FOO` with a new value
```bash
docker service update --container-label-rm FOO --container-label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
}
```
With this patch applied:
--------------------------------
Create a service with two env-vars
```bash
docker service create --env FOO=bar --env BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"FOO=bar",
"BAR=baz"
]
```
Update the service, and replace the value of `FOO` with a new value
```bash
docker service update --env-rm FOO --env-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Env }}' test | jq .
[
"BAR=baz",
"FOO=updated-foo"
]
```
Create a service with two labels
```bash
docker service create --label FOO=bar --label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, and replace the value of `FOO` with a new value
```bash
docker service update --label-rm FOO --label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "updated-foo"
}
```
Create a service with two container labels
```bash
docker service create --container-label FOO=bar --container-label BAR=baz --name=test nginx:alpine
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "bar"
}
```
Update the service, and replace the value of `FOO` with a new value
```bash
docker service update --container-label-rm FOO --container-label-add FOO=updated-foo test
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Labels }}' test | jq .
{
"BAR": "baz",
"FOO": "updated-foo"
}
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When doing `docker service inspect --pretty` on services without
`TaskTemplate.Resources` or `TaskTemplate.Resources.Limits`, the command
fails. This is due to a missing check on ResourceLimitPids().
This bug was introduced by 395a6d560d
and produces the following error message:
```
Template parsing error: template: :139:10: executing "" at <.ResourceLimitPids>: error calling ResourceLimitPids: runtime error: invalid memory address or nil pointer dereference
```
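The fix is essentially a nil-check before dereferencing optional resource fields;
a hedged, stand-alone sketch with a simplified struct layout (not the actual
inspect-formatter code):

```go
package main

import "fmt"

type Limit struct {
    Pids int64
}

type Resources struct {
    Limits *Limit
}

type TaskTemplate struct {
    Resources *Resources
}

// ResourceLimitPids returns the PID limit, or 0 when no resources or limits
// are configured, instead of dereferencing a nil pointer.
func (t TaskTemplate) ResourceLimitPids() int64 {
    if t.Resources == nil || t.Resources.Limits == nil {
        return 0
    }
    return t.Resources.Limits.Pids
}

func main() {
    var t TaskTemplate // no Resources at all
    fmt.Println(t.ResourceLimitPids()) // 0, no panic
}
```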
Signed-off-by: Albin Kerouanton <albin@akerouanton.name>
* Added two new modes accepted by the `--mode` flag
* `replicated-job` creates a replicated job
* `global-job` creates a global job.
* When using `replicated-job` mode, the `replicas` flag sets the
`TotalCompletions` parameter of the job. This is the total number of
tasks that will run
* Added a new flag, `max-concurrent`, for use with `replicated-job`
mode. This flag sets the `MaxConcurrent` parameter of the job, which
is the maximum number of replicas the job will run simultaneously.
* When using `replicated-job` or `global-job` mode, using any of the
update parameter flags will result in an error, as jobs cannot be
updated in the traditional sense.
* Updated the `docker service ls` UI to include the completion status
(completed vs total tasks) if the service is a job.
* Updated the progress bars UI for service creation and update to
support jobs. For jobs, a single bar is displayed covering the overall
progress of the job (the number of tasks completed over the total
number of tasks to complete).
* Added documentation explaining the use of the new flags, and of jobs
in general.
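In terms of the service spec, the new flags roughly map onto the job-mode
parameters as sketched below (the types are simplified stand-ins, not the real
swarm API structs; field names follow the description above):

```go
package main

import (
    "errors"
    "fmt"
)

// ReplicatedJob holds the parameters set by the new flags (stand-in type).
type ReplicatedJob struct {
    MaxConcurrent    *uint64 // --max-concurrent
    TotalCompletions *uint64 // --replicas
}

// ServiceMode selects one of the (job) modes (stand-in type).
type ServiceMode struct {
    ReplicatedJob *ReplicatedJob
    GlobalJob     *struct{}
}

// jobMode builds the service mode for the two new --mode values.
func jobMode(mode string, replicas, maxConcurrent uint64) (ServiceMode, error) {
    switch mode {
    case "replicated-job":
        return ServiceMode{ReplicatedJob: &ReplicatedJob{
            MaxConcurrent:    &maxConcurrent,
            TotalCompletions: &replicas,
        }}, nil
    case "global-job":
        return ServiceMode{GlobalJob: &struct{}{}}, nil
    default:
        return ServiceMode{}, errors.New("not a job mode: " + mode)
    }
}

func main() {
    m, _ := jobMode("replicated-job", 10, 3)
    fmt.Println(*m.ReplicatedJob.TotalCompletions, *m.ReplicatedJob.MaxConcurrent) // 10 3
}
```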
Signed-off-by: Drew Erny <derny@mirantis.com>
67ebcd6dcf added an exception for
the "host-gateway" magic value to the validation rules, but didn't
add this value to any of the tests.
This patch adds the magic value to tests, to verify the validation
is skipped for this magic value.
Note that validation on the client side is "optional" and mostly
done to provide a more user-friendly error message for regular
values (IP-addresses).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Comments should have a leading space unless the comment is
for special purposes (go:generate, nolint:)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
```
cli/command/service/update_test.go:31:16: SA1012: do not pass a nil Context, even if a function permits it; pass context.TODO if you are unsure about which Context to use (staticcheck)
updateService(nil, nil, flags, spec)
^
cli/command/service/update_test.go:535:16: SA1012: do not pass a nil Context, even if a function permits it; pass context.TODO if you are unsure about which Context to use (staticcheck)
updateService(nil, nil, flags, spec)
^
cli/command/service/update_test.go:540:16: SA1012: do not pass a nil Context, even if a function permits it; pass context.TODO if you are unsure about which Context to use (staticcheck)
updateService(nil, nil, flags, spec)
^
```
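The usual fix for SA1012 is to pass context.TODO() (or context.Background())
instead of nil; a minimal illustration of the pattern (the updateService signature
below is a placeholder, not the real one):

```go
package main

import (
    "context"
    "fmt"
)

// updateService is a placeholder with a context-first signature, mirroring
// the calls flagged by staticcheck.
func updateService(ctx context.Context, name string) error {
    if ctx == nil { // this is what SA1012 guards against
        return fmt.Errorf("nil context")
    }
    return nil
}

func main() {
    // Instead of updateService(nil, "foo"), pass an explicit context:
    if err := updateService(context.TODO(), "foo"); err != nil {
        fmt.Println(err)
    }
}
```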
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
API v1.41 adds a new option to get the number of desired
and running tasks when listing services. This patch enables
this functionality, and provides a fallback mechanism when
the ServiceStatus is not available, which would be when
using an older API version.
Now that the swarm.Service struct captures this information,
the `ListInfo` type is no longer needed, so it is removed,
and the related list- and formatting functions have been
modified accordingly.
To reduce repetition, sorting the services has been moved
to the formatter. This is a slight change in behavior, but
all calls to the formatter performed this sort first, so
the change will not lead to user-facing changes.
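The fallback amounts to counting a service's tasks when the server did not return
a ServiceStatus; sketched below with simplified stand-in types (field names are
modelled on the description above, not imported from the real API package):

```go
package main

import "fmt"

// Simplified stand-ins for the API types involved.
type ServiceStatus struct {
    RunningTasks uint64
    DesiredTasks uint64
}

type Service struct {
    ID            string
    ServiceStatus *ServiceStatus // only populated when the server supports it
}

type Task struct {
    ServiceID    string
    DesiredState string
    CurrentState string
}

// fillServiceStatus computes RunningTasks/DesiredTasks from the task list for
// services where the server did not provide a ServiceStatus.
func fillServiceStatus(services []Service, tasks []Task) {
    for i := range services {
        if services[i].ServiceStatus != nil {
            continue // provided by the server, nothing to do
        }
        status := &ServiceStatus{}
        for _, t := range tasks {
            if t.ServiceID != services[i].ID {
                continue
            }
            if t.DesiredState == "running" {
                status.DesiredTasks++
            }
            if t.CurrentState == "running" {
                status.RunningTasks++
            }
        }
        services[i].ServiceStatus = status
    }
}

func main() {
    services := []Service{{ID: "svc1"}}
    tasks := []Task{
        {ServiceID: "svc1", DesiredState: "running", CurrentState: "running"},
        {ServiceID: "svc1", DesiredState: "running", CurrentState: "pending"},
    }
    fillServiceStatus(services, tasks)
    s := services[0].ServiceStatus
    fmt.Printf("%d/%d\n", s.RunningTasks, s.DesiredTasks) // 1/2
}
```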
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This patch:
- Adds new GlobalService and ServiceStatus options
- Makes the NodeList() function functional
- Minor improvement to the `newService()` function to allow passing options
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
```
cli/command/service/update.go:1007:43: Using a reference for the variable on range scope `entry` (scopelint)
if _, ok := portSet[portConfigToString(&entry)]; !ok {
^
cli/command/service/update.go:1008:32: Using a reference for the variable on range scope `entry` (scopelint)
portSet[portConfigToString(&entry)] = entry
^
cli/command/service/update.go:1034:44: Using a reference for the variable on range scope `port` (scopelint)
if _, ok := portSet[portConfigToString(&port)]; ok {
^
```
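The conventional fix for this warning is to copy the range variable before taking
its address; a small stand-alone illustration (not the actual update.go code):

```go
package main

import "fmt"

type PortConfig struct{ TargetPort uint32 }

func main() {
    ports := []PortConfig{{80}, {443}}
    seen := map[uint32]*PortConfig{}

    for _, entry := range ports {
        entry := entry // copy the range variable so &entry is stable per iteration
        if _, ok := seen[entry.TargetPort]; !ok {
            seen[entry.TargetPort] = &entry
        }
    }
    for port, cfg := range seen {
        fmt.Println(port, cfg.TargetPort)
    }
}
```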
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The github.com/flynn-archive/go-shlex package is a fork of Google/shlex,
and the repository is now archived, so let's switch to the maintained
version.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Adds tests for setting and updating swarm service CredentialSpecs,
especially when using a Config as a credential spec.
Signed-off-by: Drew Erny <drew.erny@docker.com>
Updates the CredentialSpec handling code for services to allow using
swarm Configs.
Additionally, fixes a bug where the `--credential-spec` flag would not
be respected on service updates.
Signed-off-by: Drew Erny <drew.erny@docker.com>
This patch fixes a bug where labels use the same behavior as `--env`, resulting
in the value being copied from an environment variable with the same name as the
label if no value is set (i.e. a simple key, no `=` sign, no value).
An earlier pull request addressed similar cases for `docker run`;
2b17f4c8a8, but this did not address the
same situation for (e.g.) `docker service create`.
Digging in history for this bug, I found that use of the `ValidateEnv`
function for labels was added in the original implementation of the labels feature in
abb5e9a077 (diff-ae476143d40e21ac0918630f7365ed3cR34)
However, the design never intended it to expand environment variables,
and use of this function was due to either a "copy/paste" of the
equivalent `--env` flags, or a misunderstanding (the name `ValidateEnv` does
not communicate that it also expands environment variables); the existing
`ValidateLabel` was designed for _engine_ labels (which required a value to
be set).
Following the initial implementation, other parts of the code followed
the same (incorrect) approach, therefore leading the bug to be introduced
in services as well.
This patch:
- updates `ValidateLabel` to match the expected validation
rules (this function is no longer used since 31dc5c0a9a,
and the daemon has its own implementation)
- corrects various locations in the code where `ValidateEnv` was used instead of `ValidateLabel`.
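The difference between the two validators can be illustrated with a small sketch:
an env-style validator expands a bare key from the process environment, while a
label validator should keep a bare key as an empty value (simplified stand-ins,
not the real opts package functions):

```go
package main

import (
    "fmt"
    "os"
    "strings"
)

// envStyle mimics the env-var behaviour: a bare key is expanded from the
// current environment ("KEY" -> "KEY=value-from-env").
func envStyle(val string) string {
    if strings.Contains(val, "=") {
        return val
    }
    if v, ok := os.LookupEnv(val); ok {
        return val + "=" + v
    }
    return val
}

// labelStyle keeps a bare key as a label with an empty value.
func labelStyle(val string) string {
    if strings.Contains(val, "=") {
        return val
    }
    return val + "="
}

func main() {
    os.Setenv("SOME_ENV_VAR", "I_AM_SOME_ENV_VAR")
    fmt.Println(envStyle("SOME_ENV_VAR"))   // SOME_ENV_VAR=I_AM_SOME_ENV_VAR
    fmt.Println(labelStyle("SOME_ENV_VAR")) // SOME_ENV_VAR=
}
```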
Before this patch:
```bash
export SOME_ENV_VAR=I_AM_SOME_ENV_VAR
docker service create --label SOME_ENV_VAR --tty --name test busybox
docker service inspect --format '{{json .Spec.Labels}}' test
{"SOME_ENV_VAR":"I_AM_SOME_ENV_VAR"}
```
After this patch:
```bash
export SOME_ENV_VAR=I_AM_SOME_ENV_VAR
docker service create --label SOME_ENV_VAR --tty --name test busybox
docker service inspect --format '{{json .Spec.Labels}}' test
{"SOME_ENV_VAR":""}
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
- make it possible to extract the formatter implementation from the
"common" code; that way, the formatter package stays small
- extract some formatters into their own packages
This essentially moves the "formatter" implementation of each type
into its respective package. The *main* reason to do that is to be
able to depend on `cli/command/formatter` without depending on the
implementation details of the formatters. As of now, depending on
`cli/command/formatter` means we depend on `docker/docker/api/types`,
`docker/licensing`, and more; that should not be the case. `formatter`
should hold the common code (or helpers) to easily create formatters,
not all formatter implementations.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
- Use `Contains` instead of `Include`
- Use `ToJSON` instead of `ToParam`
- Remove usage of `ParseFlag` as it is deprecated too
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Before this change:
----------------------------------------------------
Create a service with reservations and limits for memory and cpu:
docker service create --name test \
--limit-memory=100M --limit-cpu=1 \
--reserve-memory=100M --reserve-cpu=1 \
nginx:alpine
Verify the configuration
docker service inspect --format '{{json .Spec.TaskTemplate.Resources}}' test
{
"Limits": {
"NanoCPUs": 1000000000,
"MemoryBytes": 104857600
},
"Reservations": {
"NanoCPUs": 1000000000,
"MemoryBytes": 104857600
}
}
Update just CPU limit and reservation:
docker service update --limit-cpu=2 --reserve-cpu=2 test
Notice that the memory limit and reservation are not preserved:
docker service inspect --format '{{json .Spec.TaskTemplate.Resources}}' test
{
"Limits": {
"NanoCPUs": 2000000000
},
"Reservations": {
"NanoCPUs": 2000000000
}
}
Update just Memory limit and reservation:
docker service update --limit-memory=200M --reserve-memory=200M test
Notice that the CPU limit and reservation are not preserved:
docker service inspect --format '{{json .Spec.TaskTemplate.Resources}}' test
{
"Limits": {
"MemoryBytes": 209715200
},
"Reservations": {
"MemoryBytes": 209715200
}
}
After this change:
----------------------------------------------------
Create a service with reservations and limits for memory and cpu:
docker service create --name test \
--limit-memory=100M --limit-cpu=1 \
--reserve-memory=100M --reserve-cpu=1 \
nginx:alpine
Verify the configuration
docker service inspect --format '{{json .Spec.TaskTemplate.Resources}}' test
{
"Limits": {
"NanoCPUs": 1000000000,
"MemoryBytes": 104857600
},
"Reservations": {
"NanoCPUs": 1000000000,
"MemoryBytes": 104857600
}
}
Update just CPU limit and reservation:
docker service update --limit-cpu=2 --reserve-cpu=2 test
Confirm that the CPU limits/reservations are updated, but memory limit and reservation are preserved:
docker service inspect --format '{{json .Spec.TaskTemplate.Resources}}' test
{
"Limits": {
"NanoCPUs": 2000000000,
"MemoryBytes": 104857600
},
"Reservations": {
"NanoCPUs": 2000000000,
"MemoryBytes": 104857600
}
}
Update just Memory limit and reservation:
docker service update --limit-memory=200M --reserve-memory=200M test
Confirm that the Memory limits/reservations are updated, but the CPU limit and reservation are preserved:
docker service inspect --format '{{json .Spec.TaskTemplate.Resources}}' test
{
"Limits": {
"NanoCPUs": 2000000000,
"MemoryBytes": 209715200
},
"Reservations": {
"NanoCPUs": 2000000000,
"MemoryBytes": 209715200
}
}
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Since go 1.7, "context" is a standard package. Since go 1.9,
x/net/context merely provides some types aliased to those in
the standard context package.
The changes were performed by the following script:
for f in $(git ls-files \*.go | grep -v ^vendor/); do
sed -i 's|golang.org/x/net/context|context|' $f
goimports -w $f
for i in 1 2; do
awk '/^$/ {e=1; next;}
/\t"context"$/ {e=0;}
{if (e) {print ""; e=0}; print;}' < $f > $f.new && \
mv $f.new $f
goimports -w $f
done
done
[v2: do awk/goimports fixup twice]
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Removing a host by `<host>:<ip>` should only remove occurrences of the host with
a matching IP-address, instead of removing all entries for that host.
In addition, combining `--host-rm` and `--host-add` for the same host should
result in the new host being added.
This patch fixes the way the diff is calculated to allow combining
removing/adding, and to support entries having both a canonical host-name and aliases.
Aliases cannot be added by the CLI, but are supported in the Service spec, thus
should be taken into account:
Entries can be removed by either a specific `<host-name>:<ip-address>`
mapping, or by `<host>` alone:
- If both an IP-address and a host-name are provided, only remove the hostname
from entries that match the given IP-address.
- If only a host-name is provided, remove the hostname from any entry it
is part of (either as _canonical_ host-name, or as _alias_).
- If, after removing the host-name from an entry, no host-names remain in
the entry, the entry itself should be removed.
For example, the list of host-entries before processing could look like this:
hosts = &[]string{
"127.0.0.2 host3 host1 host2 host4",
"127.0.0.1 host1 host4",
"127.0.0.3 host1",
"127.0.0.1 host1",
}
Removing `host1` removes every occurrence:
hosts = &[]string{
"127.0.0.2 host3 host2 host4",
"127.0.0.1 host4",
}
Whereas removing `host1:127.0.0.1` only removes the host if the IP-address matches:
hosts = &[]string{
"127.0.0.2 host3 host1 host2 host4",
"127.0.0.1 host4",
"127.0.0.3 host1",
}
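The removal rules above can be sketched as follows (self-contained, using the same
"IP host [host...]" entry format as the examples; the helper name is made up, not
the actual diff code):

```go
package main

import (
    "fmt"
    "strings"
)

// removeHost removes a host-name from the entries. If toRemove is of the form
// "host:ip", only entries with that IP are affected; a bare "host" matches all
// entries. Entries left without any host-names are dropped entirely.
func removeHost(entries []string, toRemove string) []string {
    host, ip, hasIP := strings.Cut(toRemove, ":")

    var out []string
    for _, entry := range entries {
        fields := strings.Fields(entry) // fields[0] is the IP, the rest are host-names
        if hasIP && fields[0] != ip {
            out = append(out, entry) // IP does not match, keep entry untouched
            continue
        }
        kept := []string{fields[0]}
        for _, h := range fields[1:] {
            if h != host {
                kept = append(kept, h)
            }
        }
        if len(kept) > 1 { // drop the entry if no host-names remain
            out = append(out, strings.Join(kept, " "))
        }
    }
    return out
}

func main() {
    hosts := []string{
        "127.0.0.2 host3 host1 host2 host4",
        "127.0.0.1 host1 host4",
        "127.0.0.3 host1",
        "127.0.0.1 host1",
    }
    fmt.Println(removeHost(hosts, "host1"))           // removes every occurrence
    fmt.Println(removeHost(hosts, "host1:127.0.0.1")) // only where the IP matches
}
```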
Before this patch:
$ docker service create --name my-service --host foo:127.0.0.1 --host foo:127.0.0.2 --host foo:127.0.0.3 nginx:alpine
$ docker service update --host-rm foo:127.0.0.1 --host-add foo:127.0.0.4 my-service
$ docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Hosts}}' my-service
[]
After this patch is applied:
$ docker service create --name my-service --host foo:127.0.0.1 --host foo:127.0.0.2 --host foo:127.0.0.3 nginx:alpine
$ docker service update --host-rm foo:127.0.0.1 --host-add foo:127.0.0.4 my-service
$ docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Hosts}}' my-service
[127.0.0.2 foo 127.0.0.3 foo 127.0.0.4 foo]
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The "update" and "rollback" configurations were cross-wired, as a result, setting
`--rollback-*` options would override the service's update-options.
Creating a service with both update, and rollback configuration:
docker service create \
--name=test \
--update-failure-action=pause \
--update-max-failure-ratio=0.6 \
--update-monitor=3s \
--update-order=stop-first \
--update-parallelism=3 \
--rollback-failure-action=continue \
--rollback-max-failure-ratio=0.5 \
--rollback-monitor=4s \
--rollback-order=start-first \
--rollback-parallelism=2 \
--tty \
busybox
Before this change:
docker service inspect --format '{{json .Spec.UpdateConfig}}' test \
&& docker service inspect --format '{{json .Spec.RollbackConfig}}' test
Produces:
{"Parallelism":3,"FailureAction":"pause","Monitor":3000000000,"MaxFailureRatio":0.6,"Order":"stop-first"}
{"Parallelism":3,"FailureAction":"pause","Monitor":3000000000,"MaxFailureRatio":0.6,"Order":"stop-first"}
After this change:
{"Parallelism":3,"FailureAction":"pause","Monitor":3000000000,"MaxFailureRatio":0.6,"Order":"stop-first"}
{"Parallelism":2,"FailureAction":"continue","Monitor":4000000000,"MaxFailureRatio":0.5,"Order":"start-first"}
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Fix tests that failed when using cmp.Compare():
- internal/test/testutil/assert
- InDelta
Fix DeepEqual with kube metav1.Time
Convert some ErrorContains to assert
Signed-off-by: Daniel Nephin <dnephin@docker.com>
`--label-file` currently has the exact same behavior as `--env-file`: for any
placeholder (i.e. a simple key, no `=` sign, no value), it gets the
value from the environment variable with the same name.
For `--label-file` it should instead just add an empty label.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
When adding a network using `docker service update --network-add`,
the new network was added by _name_.
Existing entries in a service spec are listed by network ID, which
resulted in the CLI not detecting duplicate entries for the same
network.
This patch changes the behavior to always use the network-ID,
so that duplicate entries are correctly caught.
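A sketch of the dedupe-by-ID idea: resolve the name to an ID before comparing
against the spec (the lookup below is a stub; the real code asks the daemon):

```go
package main

import (
    "errors"
    "fmt"
)

// resolveNetworkID is a stand-in for asking the daemon to resolve a network
// name (or ID) to its canonical ID.
func resolveNetworkID(nameOrID string) string {
    ids := map[string]string{"foo": "9ot0ieagg5xv1gxd85m7y33eq"}
    if id, ok := ids[nameOrID]; ok {
        return id
    }
    return nameOrID
}

// addNetwork appends the network only if its canonical ID is not already
// attached, so "--network-add foo" cannot create a duplicate entry.
func addNetwork(attached []string, nameOrID string) ([]string, error) {
    id := resolveNetworkID(nameOrID)
    for _, existing := range attached {
        if existing == id {
            return attached, errors.New("service is already attached to network " + nameOrID)
        }
    }
    return append(attached, id), nil
}

func main() {
    attached := []string{"9ot0ieagg5xv1gxd85m7y33eq"} // existing spec stores the ID
    if _, err := addNetwork(attached, "foo"); err != nil {
        fmt.Println(err) // service is already attached to network foo
    }
}
```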
Before this change;
$ docker network create -d overlay foo
$ docker service create --name=test --network=foo nginx:alpine
$ docker service update --network-add foo test
$ docker service inspect --format '{{ json .Spec.TaskTemplate.Networks}}' test
[
{
"Target": "9ot0ieagg5xv1gxd85m7y33eq"
},
{
"Target": "9ot0ieagg5xv1gxd85m7y33eq"
}
]
After this change:
$ docker network create -d overlay foo
$ docker service create --name=test --network=foo nginx:alpine
$ docker service update --network-add foo test
service is already attached to network foo
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Extra hosts (`extra_hosts` in the compose-file, or `--host` for services) add
custom host/ip mappings to the container's `/etc/hosts`.
The current implementation used a `map[string]string{}` as intermediate
storage, and sorted the results alphabetically when converting to a service-spec.
As a result, duplicate hosts were removed, and order of host/ip mappings was not
preserved (in case the compose-file used a list instead of a map).
According to the **host.conf(5)** man page (http://man7.org/linux/man-pages/man5/host.conf.5.html)
multi Valid values are on and off. If set to on, the resolver
library will return all valid addresses for a host that
appears in the /etc/hosts file, instead of only the first.
This is off by default, as it may cause a substantial
performance loss at sites with large hosts files.
Multiple entries for a host are allowed, and even required for some situations,
for example, to add mappings for IPv4 and IPv6 addresses for a host, as illustrated
by the example hosts file in the **hosts(5)** man page (http://man7.org/linux/man-pages/man5/hosts.5.html):
# The following lines are desirable for IPv4 capable hosts
127.0.0.1 localhost
# 127.0.1.1 is often used for the FQDN of the machine
127.0.1.1 thishost.mydomain.org thishost
192.168.1.10 foo.mydomain.org foo
192.168.1.13 bar.mydomain.org bar
146.82.138.7 master.debian.org master
209.237.226.90 www.opensource.org
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
This patch changes the intermediate storage format to use a `[]string`, and only
sorts entries if the input format in the compose file is a mapping. If the input
format is a list, the original sort-order is preserved.
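Roughly, the change comes down to the distinction below: a list input keeps its
original order, while a map input (which has no inherent order) is sorted to keep
the generated spec stable (a simplified sketch, not the actual compose loader code):

```go
package main

import (
    "fmt"
    "sort"
)

// hostsFromList keeps the entries exactly as written in the compose file,
// including duplicates and the original order.
func hostsFromList(entries []string) []string {
    return append([]string{}, entries...)
}

// hostsFromMap has no inherent order, so the result is sorted to keep the
// generated service-spec stable between deployments.
func hostsFromMap(entries map[string]string) []string {
    out := make([]string, 0, len(entries))
    for host, ip := range entries {
        out = append(out, ip+" "+host)
    }
    sort.Strings(out)
    return out
}

func main() {
    fmt.Println(hostsFromList([]string{"::1 localhost", "127.0.0.1 localhost"}))
    fmt.Println(hostsFromMap(map[string]string{"localhost": "127.0.0.1", "foo": "192.168.1.10"}))
}
```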
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `--host-add` flag adds a new `host:ip` mapping. Even though
adding an entry is idempotent (adding the same mapping multiple
times does not update the service's definition), it does not
_update_ an existing mapping with a new IP-address (multiple
IP-addresses can be defined for a host).
This patch removes the "or update" part from the flag's
description.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
and enable the new WarnUnmatchedDirective to warn if a nolint is unnecessary.
Remove some unnecessary nolint comments.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Running `docker service ps --quiet` should print the
full, non-truncated ID, even if the `--no-trunc` option
is not set.
This patch disables truncation if the `--quiet` flag
is set.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit 330a0035334871d92207b583c1c36d52a244753f added a `--detach=false` option
to various service-related commands, with the intent to make this the default in
a future version (17.09).
This patch changes the default to use "interactive" (non-detached), allowing
users to override this by setting the `--detach` option.
To prevent problems when connecting to older daemon versions (17.05 and below,
see commit db60f25561), the detach option is
ignored for those versions, and detach is always true.
Before this change, a warning was printed to announce the upcoming default:
$ docker service create nginx:alpine
saxiyn3pe559d753730zr0xer
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
After this change, no warning is printed, and the command no longer detaches by default;
$ docker service create nginx:alpine
y9jujwzozi0hwgj5yaadzliq6
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
Setting the `--detach` flag makes the cli use the pre-17.06 behavior:
$ docker service create --detach nginx:alpine
280hjnzy0wzje5o56gr22a46n
Running against a 17.03 daemon, without specifying the `--detach` flag;
$ docker service create nginx:alpine
kqheg7ogj0kszoa34g4p73i8q
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>