Compare commits


50 Commits

Author SHA1 Message Date
ehsan-salamati 6cf72341df
Merge 94b0eb83ec into abb8e9b78a 2024-10-21 19:03:12 +01:00
Sebastiaan van Stijn abb8e9b78a
Merge pull request #5546 from thaJeztah/hints_coverage
cli/hints: add tests
2024-10-21 18:08:28 +02:00
Laura Brehm 7029147458
Merge pull request #5557 from thaJeztah/minor_linting_issues 2024-10-21 17:00:40 +01:00
Paweł Gronowski d2b87a0a3b
Merge pull request #5553 from thaJeztah/login_idempotent
cli/config/credentials: skip saving config-file if credentials didn't change
2024-10-21 15:23:26 +02:00
Sebastiaan van Stijn 24ee5f228a
Merge pull request #5551 from thaJeztah/fix_ConfigureAuth_deprecation
cli/command: ConfigureAuth: fix deprecation comment
2024-10-21 14:28:43 +02:00
Sebastiaan van Stijn 8b6133a2b7
Merge pull request #5544 from thaJeztah/bump_engine_28
vendor: github.com/docker/docker 36a3bd090489 (master, v28.0-dev)
2024-10-21 13:28:35 +02:00
Sebastiaan van Stijn d3f6867e4d
cli/config/credentials: skip saving config-file if credentials didn't change
Before this change, the config-file was always updated, even if there
were no changes to save. This could cause issues when the config-file
already had credentials set and was read-only for the current user.

For example, on NixOS, this poses a problem because `config.json` is a
symlink to a write-protected file;

    $ readlink ~/.docker/config.json
    /home/username/.config/sops-nix/secrets/ghcr_auth

    $ readlink -f ~/.docker/config.json
    /run/user/1000/secrets.d/28/ghcr_auth

This causes `docker login` to fail, even if no changes were to be made;

    Error saving credentials: rename /home/derek/.docker/config.json2180380217 /home/username/.config/sops-nix/secrets/ghcr_auth: invalid cross-device link

This patch updates the code to only update the config file if changes
were detected. If there's nothing to save, it skips updating the file,
and also skips printing the warning about credentials being stored
insecurely.
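
The gist of the change, as a minimal sketch (it mirrors the
`fileStore.Store` hunk further down; the warning output and error
handling are simplified here):

    // Store saves the given credentials, but returns early when the
    // stored credentials are already identical, so a read-only config
    // file is never rewritten needlessly.
    func (c *fileStore) Store(authConfig types.AuthConfig) error {
        authConfigs := c.file.GetAuthConfigs()
        if old, ok := authConfigs[authConfig.ServerAddress]; ok && old == authConfig {
            return nil // nothing changed; skip saving and skip the warning
        }
        authConfigs[authConfig.ServerAddress] = authConfig
        return c.file.Save()
    }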

With this patch applied:

    $ docker login -u yourname
    Password:

    WARNING! Your credentials are stored unencrypted in '/root/.docker/config.json'.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/go/credential-store/

    Login Succeeded

    $ docker login -u yourname
    Password:
    Login Succeeded

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-21 00:19:52 +02:00
Sebastiaan van Stijn 6b9083776f
cli/command: AddPlatformFlag: suppress unhandled error
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-20 17:51:36 +02:00
Sebastiaan van Stijn fb61156b05
cli/command/registry: fix minor linting issues
- fix camelCase naming of verifyLoginOptions
- suppress unhandled errors that can be ignored

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-20 17:51:12 +02:00
Sebastiaan van Stijn 062eecf14a
Merge pull request #5554 from albers/fix-completion-events-filter-daemon
Fix bash completion for `events --filter daemon=`
2024-10-19 17:55:12 +02:00
Harald Albers 3f7b156c85 Fix bash completion for `events --filter daemon=`
Signed-off-by: Harald Albers <github@albersweb.de>
2024-10-19 15:40:07 +00:00
Sebastiaan van Stijn 54e3685bcd
cli/command: ConfigureAuth: fix deprecation comment
Deprecation comments must have an empty line before them, otherwise tools
and linters may not recognise them. While fixing this, also updated the
reference to PromptUserForCredentials to be a docs-link to make it clickable.
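
A small before/after sketch of that convention (the actual fix is in
the `ConfigureAuth` hunk further down):

    // Before: the note is glued to the preceding paragraph, so tools
    // may not recognise the deprecation.
    //
    // ConfigureAuth handles prompting of user's username and password if needed.
    // Deprecated: use PromptUserForCredentials instead.

    // After: an empty comment line makes "Deprecated:" its own
    // paragraph, and [PromptUserForCredentials] renders as a doc-link.
    //
    // ConfigureAuth handles prompting of user's username and password if needed.
    //
    // Deprecated: use [PromptUserForCredentials] instead.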

Updates 6e4818e7d6.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-19 13:05:31 +02:00
Sebastiaan van Stijn 87acf77aef
cli/hints: add tests
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-19 00:48:16 +02:00
Sebastiaan van Stijn 9b525bc9d1
vendor: github.com/docker/docker 36a3bd090489 (master, v28.0-dev)
full diff: 164cae56ed...36a3bd0904

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-18 17:48:05 +02:00
Sebastiaan van Stijn 8a7c5ae68f
Merge pull request #5542 from thaJeztah/base_completion_tests
cmd/docker: add tests for flag-completions, and refactor
2024-10-18 12:07:02 +02:00
Sebastiaan van Stijn da9e984231
Merge pull request #5541 from thaJeztah/template_coverage
templates: add test for HeaderFunctions
2024-10-18 11:42:00 +02:00
Sebastiaan van Stijn 670f81803f
cmd/docker: add tests for flag-completions, and refactor
Remove registerCompletionFuncForGlobalFlags for now; the error it
returned was ignored, so it added little benefit beyond abstracting
things.

Split the underlying completion functions into separate functions,
and add some basic tests for them.

Remove the completions helper, as it no longer added much, and
removing it avoids the dependency on that package.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-18 11:23:55 +02:00
Paweł Gronowski 38653277af
Merge pull request #5539 from thaJeztah/bump_swarmkit
vendor: github.com/moby/swarmkit/v2 v2.0.0-20241017191044-e8ecf83ee08e
2024-10-18 10:44:41 +02:00
Sebastiaan van Stijn 12dcc6e25c
templates: add test for HeaderFunctions
Before:

    go test -test.coverprofile -
    PASS
    coverage: 65.2% of statements
    ok  	github.com/docker/cli/templates	0.607s

After:

    go test -test.coverprofile -
    PASS
    coverage: 95.7% of statements
    ok  	github.com/docker/cli/templates	0.259s

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-18 10:07:33 +02:00
Sebastiaan van Stijn cbbb917323
vendor: github.com/moby/swarmkit/v2 v2.0.0-20241017191044-e8ecf83ee08e
- add Unwrap error to custom error types
- removes dependency on github.com/rexray/gocsi
- fix CSI plugin load issue

full diff: ea1a7cec35...e8ecf83ee0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-17 23:01:19 +02:00
Paweł Gronowski 3590f946a3
Merge pull request #5535 from dvdksn/fix-image-tag-spec
docs: update prose about image tag/name format
2024-10-17 12:46:46 +02:00
David Karlsson 2c6b80491b docs: update prose about image tag/name format
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-17 12:36:32 +02:00
Paweł Gronowski 09e16fc9c6
Merge pull request #5537 from dvdksn/correct_events_limit
docs: corrected the max events returned
2024-10-17 11:26:09 +02:00
Laura Brehm dba4b15d6b
Merge pull request #5534 from thaJeztah/container_testfixes 2024-10-16 22:08:22 +01:00
David Karlsson 50ef0c58c2 docs: corrected the max events returned
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-16 12:21:51 +02:00
Sebastiaan van Stijn 35d7b1a7a6
cli/command/container: TestWaitExitOrRemoved use subtests
=== RUN   TestWaitExitOrRemoved
    === RUN   TestWaitExitOrRemoved/normal-container
    === RUN   TestWaitExitOrRemoved/give-me-exit-code-42
    === RUN   TestWaitExitOrRemoved/i-want-a-wait-error
    time="2024-10-13T18:48:14+02:00" level=error msg="Error waiting for container: removal failed"
    === RUN   TestWaitExitOrRemoved/non-existent-container-id
    time="2024-10-13T18:48:14+02:00" level=error msg="error waiting for container: no such container: non-existent-container-id"
    --- PASS: TestWaitExitOrRemoved (0.00s)
        --- PASS: TestWaitExitOrRemoved/normal-container (0.00s)
        --- PASS: TestWaitExitOrRemoved/give-me-exit-code-42 (0.00s)
        --- PASS: TestWaitExitOrRemoved/i-want-a-wait-error (0.00s)
        --- PASS: TestWaitExitOrRemoved/non-existent-container-id (0.00s)
    PASS

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-16 12:03:17 +02:00
Sebastiaan van Stijn 3b38dc67be
cli/command/container: set empty args in tests and discard output
Prevent some tests from failing when running from a pre-compiled
test binary, and discard output to make the test output less noisy.
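
The pattern applied in the tests, as a short sketch: without explicit
args, cobra falls back to parsing `os.Args[1:]`, which contains
`-test.*` flags when the tests run from a pre-compiled test binary.

    cmd := newListCommand(cli)
    cmd.SetArgs([]string{})  // don't let cobra pick up the test binary's os.Args
    cmd.SetOut(io.Discard)   // silence command output in the test logs
    cmd.SetErr(io.Discard)
    assert.NilError(t, cmd.Execute())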

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-16 12:01:25 +02:00
Sebastiaan van Stijn 31eeed7ca4
Merge pull request #5533 from thaJeztah/completions_coverage
cli/command/completion: add more unit-tests
2024-10-14 17:38:02 +02:00
Sebastiaan van Stijn 089448ba6d
Merge pull request #5532 from thaJeztah/update_badges
README: update pkg.go.dev badge, add OpenSSF scorecard
2024-10-14 17:18:23 +02:00
Sebastiaan van Stijn 6ed137f7dd
Merge pull request #5529 from thaJeztah/bump_deps
vendor assorted dependencies in preparation of engine update
2024-10-14 13:29:16 +02:00
Sebastiaan van Stijn e1c472a436
completion: add test for VolumeNames
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 19:08:56 +02:00
Sebastiaan van Stijn 302d73f990
completion: add test for NetworkNames
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 19:08:52 +02:00
Sebastiaan van Stijn ab418a38d8
completion: add test for ImageNames
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 19:08:26 +02:00
Sebastiaan van Stijn f3b4094eb0
completion: add test for ContainerNames
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 19:07:52 +02:00
Sebastiaan van Stijn be197da6b8
completion: add test for NoComplete
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 17:54:40 +02:00
Sebastiaan van Stijn 51713196c9
completion: add test for FromList
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 17:53:19 +02:00
Sebastiaan van Stijn a5ca5b33f1
completion: add test for FileNames
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 17:52:50 +02:00
Sebastiaan van Stijn 8f2e5662e7
completion: add test for EnvVarNames
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 17:48:49 +02:00
Sebastiaan van Stijn b8cddc63ad
completion: ContainerNames: don't panic on nil filter
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-13 17:47:07 +02:00
Sebastiaan van Stijn a58faf7971
README: update pkg.go.dev badge, add OpenSSF scorecard
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 22:12:38 +02:00
Sebastiaan van Stijn b6d27ff60e
vendor: google.golang.org/grpc v1.66.2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 21:52:41 +02:00
Sebastiaan van Stijn 200225f530
vendor: google.golang.org/protobuf v1.34.1
full diff: https://github.com/protocolbuffers/protobuf-go/compare/v1.33.0...v1.34.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 21:49:59 +02:00
Sebastiaan van Stijn 9599251d07
vendor: github.com/cespare/xxhash/v2 v2.3.0
full diff: https://github.com/cespare/xxhash/compare/v2.2.0...v2.3.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 21:48:45 +02:00
Sebastiaan van Stijn ea8aa2a419
vendor: golang.org/x/net v0.29.0
no changes in vendored code

full diff: https://github.com/golang/net/compare/v0.28.0...v0.29.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 21:47:34 +02:00
Sebastiaan van Stijn 61867feecf
vendor: golang.org/x/crypto v0.27.0
no changes in vendored code

full diff: https://github.com/golang/crypto/compare/v0.26.0...v0.27.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 21:46:59 +02:00
Sebastiaan van Stijn 843ae6d7e2
vendor: golang.org/x/term v0.24.0
full diff: https://github.com/golang/term/compare/v0.23.0...v0.24.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 21:46:04 +02:00
Sebastiaan van Stijn bea4ee6588
vendor: golang.org/x/text v0.18.0
no changes in vendored code

full diff: https://github.com/golang/text/compare/v0.17.0...v0.18.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 21:43:36 +02:00
Sebastiaan van Stijn a88ee33f71
vendor: golang.org/x/sys v0.25.0
full diff: https://github.com/golang/sys/compare/v0.24.0...v0.25.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-12 21:42:42 +02:00
Sebastiaan van Stijn 21eea1e003
Merge pull request #5527 from albers/completion-container-rm
Improve completion of containers for `docker rm`
2024-10-11 22:35:25 +02:00
Harald Albers 147630a309 Only complete removable containers if --force is not given
Signed-off-by: Harald Albers <github@albersweb.de>
2024-10-10 21:34:38 +00:00
201 changed files with 7359 additions and 4710 deletions

View File

@ -1,9 +1,10 @@
# Docker CLI
[![PkgGoDev](https://img.shields.io/badge/go.dev-docs-007d9c?logo=go&logoColor=white)](https://pkg.go.dev/github.com/docker/cli)
[![PkgGoDev](https://pkg.go.dev/badge/github.com/docker/cli)](https://pkg.go.dev/github.com/docker/cli)
[![Build Status](https://img.shields.io/github/actions/workflow/status/docker/cli/build.yml?branch=master&label=build&logo=github)](https://github.com/docker/cli/actions?query=workflow%3Abuild)
[![Test Status](https://img.shields.io/github/actions/workflow/status/docker/cli/test.yml?branch=master&label=test&logo=github)](https://github.com/docker/cli/actions?query=workflow%3Atest)
[![Go Report Card](https://goreportcard.com/badge/github.com/docker/cli)](https://goreportcard.com/report/github.com/docker/cli)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/docker/cli/badge)](https://scorecard.dev/viewer/?uri=github.com/docker/cli)
[![Codecov](https://img.shields.io/codecov/c/github/docker/cli?logo=codecov)](https://codecov.io/gh/docker/cli)
## About

View File

@ -59,7 +59,7 @@ func ContainerNames(dockerCLI APIClientProvider, all bool, filters ...func(conta
for _, ctr := range list {
skip := false
for _, fn := range filters {
if !fn(ctr) {
if fn != nil && !fn(ctr) {
skip = true
break
}

View File

@ -1,15 +1,350 @@
package completion
import (
"context"
"errors"
"sort"
"testing"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/client"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/spf13/cobra"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/env"
)
type fakeCLI struct {
*fakeClient
}
// Client implements [APIClientProvider].
func (c fakeCLI) Client() client.APIClient {
return c.fakeClient
}
type fakeClient struct {
client.Client
containerListFunc func(options container.ListOptions) ([]container.Summary, error)
imageListFunc func(options image.ListOptions) ([]image.Summary, error)
networkListFunc func(ctx context.Context, options network.ListOptions) ([]network.Summary, error)
volumeListFunc func(filter filters.Args) (volume.ListResponse, error)
}
func (c *fakeClient) ContainerList(_ context.Context, options container.ListOptions) ([]container.Summary, error) {
if c.containerListFunc != nil {
return c.containerListFunc(options)
}
return []container.Summary{}, nil
}
func (c *fakeClient) ImageList(_ context.Context, options image.ListOptions) ([]image.Summary, error) {
if c.imageListFunc != nil {
return c.imageListFunc(options)
}
return []image.Summary{}, nil
}
func (c *fakeClient) NetworkList(ctx context.Context, options network.ListOptions) ([]network.Summary, error) {
if c.networkListFunc != nil {
return c.networkListFunc(ctx, options)
}
return []network.Inspect{}, nil
}
func (c *fakeClient) VolumeList(_ context.Context, options volume.ListOptions) (volume.ListResponse, error) {
if c.volumeListFunc != nil {
return c.volumeListFunc(options.Filters)
}
return volume.ListResponse{}, nil
}
func TestCompleteContainerNames(t *testing.T) {
tests := []struct {
doc string
showAll, showIDs bool
filters []func(container.Summary) bool
containers []container.Summary
expOut []string
expOpts container.ListOptions
expDirective cobra.ShellCompDirective
}{
{
doc: "no results",
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "all containers",
showAll: true,
containers: []container.Summary{
{ID: "id-c", State: "running", Names: []string{"/container-c", "/container-c/link-b"}},
{ID: "id-b", State: "created", Names: []string{"/container-b"}},
{ID: "id-a", State: "exited", Names: []string{"/container-a"}},
},
expOut: []string{"container-c", "container-c/link-b", "container-b", "container-a"},
expOpts: container.ListOptions{All: true},
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "all containers with ids",
showAll: true,
showIDs: true,
containers: []container.Summary{
{ID: "id-c", State: "running", Names: []string{"/container-c", "/container-c/link-b"}},
{ID: "id-b", State: "created", Names: []string{"/container-b"}},
{ID: "id-a", State: "exited", Names: []string{"/container-a"}},
},
expOut: []string{"id-c", "container-c", "container-c/link-b", "id-b", "container-b", "id-a", "container-a"},
expOpts: container.ListOptions{All: true},
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "only running containers",
showAll: false,
containers: []container.Summary{
{ID: "id-c", State: "running", Names: []string{"/container-c", "/container-c/link-b"}},
},
expOut: []string{"container-c", "container-c/link-b"},
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "with filter",
showAll: true,
filters: []func(container.Summary) bool{
func(container container.Summary) bool { return container.State == "created" },
},
containers: []container.Summary{
{ID: "id-c", State: "running", Names: []string{"/container-c", "/container-c/link-b"}},
{ID: "id-b", State: "created", Names: []string{"/container-b"}},
{ID: "id-a", State: "exited", Names: []string{"/container-a"}},
},
expOut: []string{"container-b"},
expOpts: container.ListOptions{All: true},
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "multiple filters",
showAll: true,
filters: []func(container.Summary) bool{
func(container container.Summary) bool { return container.ID == "id-a" },
func(container container.Summary) bool { return container.State == "created" },
},
containers: []container.Summary{
{ID: "id-c", State: "running", Names: []string{"/container-c", "/container-c/link-b"}},
{ID: "id-b", State: "created", Names: []string{"/container-b"}},
{ID: "id-a", State: "created", Names: []string{"/container-a"}},
},
expOut: []string{"container-a"},
expOpts: container.ListOptions{All: true},
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "with error",
expDirective: cobra.ShellCompDirectiveError,
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.doc, func(t *testing.T) {
if tc.showIDs {
t.Setenv("DOCKER_COMPLETION_SHOW_CONTAINER_IDS", "yes")
}
comp := ContainerNames(fakeCLI{&fakeClient{
containerListFunc: func(opts container.ListOptions) ([]container.Summary, error) {
assert.Check(t, is.DeepEqual(opts, tc.expOpts, cmpopts.IgnoreUnexported(container.ListOptions{}, filters.Args{})))
if tc.expDirective == cobra.ShellCompDirectiveError {
return nil, errors.New("some error occurred")
}
return tc.containers, nil
},
}}, tc.showAll, tc.filters...)
containers, directives := comp(&cobra.Command{}, nil, "")
assert.Check(t, is.Equal(directives&tc.expDirective, tc.expDirective))
assert.Check(t, is.DeepEqual(containers, tc.expOut))
})
}
}
func TestCompleteEnvVarNames(t *testing.T) {
env.PatchAll(t, map[string]string{
"ENV_A": "hello-a",
"ENV_B": "hello-b",
})
values, directives := EnvVarNames(nil, nil, "")
assert.Check(t, is.Equal(directives&cobra.ShellCompDirectiveNoFileComp, cobra.ShellCompDirectiveNoFileComp), "Should not perform file completion")
sort.Strings(values)
expected := []string{"ENV_A", "ENV_B"}
assert.Check(t, is.DeepEqual(values, expected))
}
func TestCompleteFileNames(t *testing.T) {
values, directives := FileNames(nil, nil, "")
assert.Check(t, is.Equal(directives, cobra.ShellCompDirectiveDefault))
assert.Check(t, is.Len(values, 0))
}
func TestCompleteFromList(t *testing.T) {
expected := []string{"one", "two", "three"}
values, directives := FromList(expected...)(nil, nil, "")
assert.Check(t, is.Equal(directives&cobra.ShellCompDirectiveNoFileComp, cobra.ShellCompDirectiveNoFileComp), "Should not perform file completion")
assert.Check(t, is.DeepEqual(values, expected))
}
func TestCompleteImageNames(t *testing.T) {
tests := []struct {
doc string
images []image.Summary
expOut []string
expDirective cobra.ShellCompDirective
}{
{
doc: "no results",
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "with results",
images: []image.Summary{
{RepoTags: []string{"image-c:latest", "image-c:other"}},
{RepoTags: []string{"image-b:latest", "image-b:other"}},
{RepoTags: []string{"image-a:latest", "image-a:other"}},
},
expOut: []string{"image-c:latest", "image-c:other", "image-b:latest", "image-b:other", "image-a:latest", "image-a:other"},
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "with error",
expDirective: cobra.ShellCompDirectiveError,
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.doc, func(t *testing.T) {
comp := ImageNames(fakeCLI{&fakeClient{
imageListFunc: func(options image.ListOptions) ([]image.Summary, error) {
if tc.expDirective == cobra.ShellCompDirectiveError {
return nil, errors.New("some error occurred")
}
return tc.images, nil
},
}})
volumes, directives := comp(&cobra.Command{}, nil, "")
assert.Check(t, is.Equal(directives&tc.expDirective, tc.expDirective))
assert.Check(t, is.DeepEqual(volumes, tc.expOut))
})
}
}
func TestCompleteNetworkNames(t *testing.T) {
tests := []struct {
doc string
networks []network.Summary
expOut []string
expDirective cobra.ShellCompDirective
}{
{
doc: "no results",
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "with results",
networks: []network.Summary{
{ID: "nw-c", Name: "network-c"},
{ID: "nw-b", Name: "network-b"},
{ID: "nw-a", Name: "network-a"},
},
expOut: []string{"network-c", "network-b", "network-a"},
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "with error",
expDirective: cobra.ShellCompDirectiveError,
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.doc, func(t *testing.T) {
comp := NetworkNames(fakeCLI{&fakeClient{
networkListFunc: func(ctx context.Context, options network.ListOptions) ([]network.Summary, error) {
if tc.expDirective == cobra.ShellCompDirectiveError {
return nil, errors.New("some error occurred")
}
return tc.networks, nil
},
}})
volumes, directives := comp(&cobra.Command{}, nil, "")
assert.Check(t, is.Equal(directives&tc.expDirective, tc.expDirective))
assert.Check(t, is.DeepEqual(volumes, tc.expOut))
})
}
}
func TestCompleteNoComplete(t *testing.T) {
values, directives := NoComplete(nil, nil, "")
assert.Check(t, is.Equal(directives, cobra.ShellCompDirectiveNoFileComp))
assert.Check(t, is.Len(values, 0))
}
func TestCompletePlatforms(t *testing.T) {
values, directives := Platforms(nil, nil, "")
assert.Check(t, is.Equal(directives&cobra.ShellCompDirectiveNoFileComp, cobra.ShellCompDirectiveNoFileComp), "Should not perform file completion")
assert.Check(t, is.DeepEqual(values, commonPlatforms))
}
func TestCompleteVolumeNames(t *testing.T) {
tests := []struct {
doc string
volumes []*volume.Volume
expOut []string
expDirective cobra.ShellCompDirective
}{
{
doc: "no results",
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "with results",
volumes: []*volume.Volume{
{Name: "volume-c"},
{Name: "volume-b"},
{Name: "volume-a"},
},
expOut: []string{"volume-c", "volume-b", "volume-a"},
expDirective: cobra.ShellCompDirectiveNoFileComp,
},
{
doc: "with error",
expDirective: cobra.ShellCompDirectiveError,
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.doc, func(t *testing.T) {
comp := VolumeNames(fakeCLI{&fakeClient{
volumeListFunc: func(filter filters.Args) (volume.ListResponse, error) {
if tc.expDirective == cobra.ShellCompDirectiveError {
return volume.ListResponse{}, errors.New("some error occurred")
}
return volume.ListResponse{Volumes: tc.volumes}, nil
},
}})
volumes, directives := comp(&cobra.Command{}, nil, "")
assert.Check(t, is.Equal(directives&tc.expDirective, tc.expDirective))
assert.Check(t, is.DeepEqual(volumes, tc.expOut))
})
}
}

View File

@ -127,7 +127,6 @@ func TestContainerListBuildContainerListOptions(t *testing.T) {
func TestContainerListErrors(t *testing.T) {
testCases := []struct {
args []string
flags map[string]string
containerListFunc func(container.ListOptions) ([]container.Summary, error)
expectedError string
@ -157,10 +156,10 @@ func TestContainerListErrors(t *testing.T) {
containerListFunc: tc.containerListFunc,
}),
)
cmd.SetArgs(tc.args)
for key, value := range tc.flags {
assert.Check(t, cmd.Flags().Set(key, value))
}
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.ErrorContains(t, cmd.Execute(), tc.expectedError)
@ -180,6 +179,9 @@ func TestContainerListWithoutFormat(t *testing.T) {
},
})
cmd := newListCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.NilError(t, cmd.Execute())
golden.Assert(t, cli.OutBuffer().String(), "container-list-without-format.golden")
}
@ -194,6 +196,9 @@ func TestContainerListNoTrunc(t *testing.T) {
},
})
cmd := newListCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.Check(t, cmd.Flags().Set("no-trunc", "true"))
assert.NilError(t, cmd.Execute())
golden.Assert(t, cli.OutBuffer().String(), "container-list-without-format-no-trunc.golden")
@ -210,6 +215,9 @@ func TestContainerListNamesMultipleTime(t *testing.T) {
},
})
cmd := newListCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.Check(t, cmd.Flags().Set("format", "{{.Names}} {{.Names}}"))
assert.NilError(t, cmd.Execute())
golden.Assert(t, cli.OutBuffer().String(), "container-list-format-name-name.golden")
@ -226,6 +234,9 @@ func TestContainerListFormatTemplateWithArg(t *testing.T) {
},
})
cmd := newListCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.Check(t, cmd.Flags().Set("format", `{{.Names}} {{.Label "some.label"}}`))
assert.NilError(t, cmd.Execute())
golden.Assert(t, cli.OutBuffer().String(), "container-list-format-with-arg.golden")
@ -275,6 +286,9 @@ func TestContainerListFormatSizeSetsOption(t *testing.T) {
},
})
cmd := newListCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.Check(t, cmd.Flags().Set("format", tc.format))
if tc.sizeFlag != "" {
assert.Check(t, cmd.Flags().Set("size", tc.sizeFlag))
@ -297,6 +311,9 @@ func TestContainerListWithConfigFormat(t *testing.T) {
PsFormat: "{{ .Names }} {{ .Image }} {{ .Labels }} {{ .Size}}",
})
cmd := newListCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.NilError(t, cmd.Execute())
golden.Assert(t, cli.OutBuffer().String(), "container-list-with-config-format.golden")
}
@ -314,6 +331,9 @@ func TestContainerListWithFormat(t *testing.T) {
t.Run("with format", func(t *testing.T) {
cli.OutBuffer().Reset()
cmd := newListCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.Check(t, cmd.Flags().Set("format", "{{ .Names }} {{ .Image }} {{ .Labels }}"))
assert.NilError(t, cmd.Execute())
golden.Assert(t, cli.OutBuffer().String(), "container-list-with-format.golden")
@ -322,6 +342,9 @@ func TestContainerListWithFormat(t *testing.T) {
t.Run("with format and quiet", func(t *testing.T) {
cli.OutBuffer().Reset()
cmd := newListCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
assert.Check(t, cmd.Flags().Set("format", "{{ .Names }} {{ .Image }} {{ .Labels }}"))
assert.Check(t, cmd.Flags().Set("quiet", "true"))
assert.NilError(t, cmd.Execute())

View File

@ -21,6 +21,7 @@ func TestContainerPrunePromptTermination(t *testing.T) {
},
})
cmd := NewPruneCommand(cli)
cmd.SetArgs([]string{})
cmd.SetOut(io.Discard)
cmd.SetErr(io.Discard)
test.TerminatePrompt(ctx, t, cmd, cli)

View File

@ -38,7 +38,9 @@ func NewRmCommand(dockerCli command.Cli) *cobra.Command {
Annotations: map[string]string{
"aliases": "docker container rm, docker container remove, docker rm",
},
ValidArgsFunction: completion.ContainerNames(dockerCli, true),
ValidArgsFunction: completion.ContainerNames(dockerCli, true, func(ctr container.Summary) bool {
return opts.force || ctr.State == "exited" || ctr.State == "created"
}),
}
flags := cmd.Flags()

View File

@ -38,7 +38,7 @@ func waitFn(cid string) (<-chan container.WaitResponse, <-chan error) {
}
func TestWaitExitOrRemoved(t *testing.T) {
testcases := []struct {
tests := []struct {
cid string
exitCode int
}{
@ -61,9 +61,11 @@ func TestWaitExitOrRemoved(t *testing.T) {
}
client := &fakeClient{waitFunc: waitFn, Version: api.DefaultVersion}
for _, testcase := range testcases {
statusC := waitExitOrRemoved(context.Background(), client, testcase.cid, true)
exitCode := <-statusC
assert.Check(t, is.Equal(testcase.exitCode, exitCode))
for _, tc := range tests {
t.Run(tc.cid, func(t *testing.T) {
statusC := waitExitOrRemoved(context.Background(), client, tc.cid, true)
exitCode := <-statusC
assert.Check(t, is.Equal(tc.exitCode, exitCode))
})
}
}

View File

@ -86,7 +86,8 @@ func GetDefaultAuthConfig(cfg *configfile.ConfigFile, checkCredStore bool, serve
}
// ConfigureAuth handles prompting of user's username and password if needed.
// Deprecated: use PromptUserForCredentials instead.
//
// Deprecated: use [PromptUserForCredentials] instead.
func ConfigureAuth(ctx context.Context, cli Cli, flUser, flPassword string, authConfig *registrytypes.AuthConfig, _ bool) error {
defaultUsername := authConfig.Username
serverAddress := authConfig.ServerAddress

View File

@ -58,9 +58,9 @@ func NewLoginCommand(dockerCli command.Cli) *cobra.Command {
return cmd
}
func verifyloginOptions(dockerCli command.Cli, opts *loginOptions) error {
func verifyLoginOptions(dockerCli command.Cli, opts *loginOptions) error {
if opts.password != "" {
fmt.Fprintln(dockerCli.Err(), "WARNING! Using --password via the CLI is insecure. Use --password-stdin.")
_, _ = fmt.Fprintln(dockerCli.Err(), "WARNING! Using --password via the CLI is insecure. Use --password-stdin.")
if opts.passwordStdin {
return errors.New("--password and --password-stdin are mutually exclusive")
}
@ -83,7 +83,7 @@ func verifyloginOptions(dockerCli command.Cli, opts *loginOptions) error {
}
func runLogin(ctx context.Context, dockerCli command.Cli, opts loginOptions) error {
if err := verifyloginOptions(dockerCli, &opts); err != nil {
if err := verifyLoginOptions(dockerCli, &opts); err != nil {
return err
}
var (
@ -174,7 +174,7 @@ func loginUser(ctx context.Context, dockerCli command.Cli, opts loginOptions, de
if !errors.Is(err, manager.ErrDeviceLoginStartFail) {
return response, err
}
fmt.Fprint(dockerCli.Err(), "Failed to start web-based login - falling back to command line login...\n\n")
_, _ = fmt.Fprint(dockerCli.Err(), "Failed to start web-based login - falling back to command line login...\n\n")
}
return loginWithUsernameAndPassword(ctx, dockerCli, opts, defaultUsername, serverAddress)

View File

@ -199,7 +199,7 @@ func PruneFilters(dockerCli Cli, pruneFilters filters.Args) filters.Args {
// AddPlatformFlag adds `platform` to a set of flags for API version 1.32 and later.
func AddPlatformFlag(flags *pflag.FlagSet, target *string) {
flags.StringVar(target, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Set platform if server is multi-platform capable")
flags.SetAnnotation("platform", "version", []string{"1.32"})
_ = flags.SetAnnotation("platform", "version", []string{"1.32"})
}
// ValidateOutputPath validates the output paths of the `export` and `save` commands.

View File

@ -30,6 +30,10 @@ func NewFileStore(file store) Store {
// Erase removes the given credentials from the file store.
func (c *fileStore) Erase(serverAddress string) error {
if _, exists := c.file.GetAuthConfigs()[serverAddress]; !exists {
// nothing to do; no credentials found for the given serverAddress
return nil
}
delete(c.file.GetAuthConfigs(), serverAddress)
return c.file.Save()
}
@ -70,9 +74,14 @@ https://docs.docker.com/go/credential-store/
// CLI invocation (no need to warn the user multiple times per command).
var alreadyPrinted atomic.Bool
// Store saves the given credentials in the file store.
// Store saves the given credentials in the file store. This function is
// idempotent and does not update the file if credentials did not change.
func (c *fileStore) Store(authConfig types.AuthConfig) error {
authConfigs := c.file.GetAuthConfigs()
if oldAuthConfig, ok := authConfigs[authConfig.ServerAddress]; ok && oldAuthConfig == authConfig {
// Credentials didn't change, so skip updating the configuration file.
return nil
}
authConfigs[authConfig.ServerAddress] = authConfig
if err := c.file.Save(); err != nil {
return err

View File

@ -5,7 +5,9 @@ import (
"strconv"
)
// Enabled returns whether cli hints are enabled or not
// Enabled returns whether cli hints are enabled or not. Hints are enabled by
// default, but can be disabled through the "DOCKER_CLI_HINTS" environment
// variable.
func Enabled() bool {
if v := os.Getenv("DOCKER_CLI_HINTS"); v != "" {
enabled, err := strconv.ParseBool(v)

cli/hints/hints_test.go (new file, 52 lines)
View File

@ -0,0 +1,52 @@
package hints
import (
"testing"
"gotest.tools/v3/assert"
)
func TestEnabled(t *testing.T) {
tests := []struct {
doc string
env string
expected bool
}{
{
doc: "default",
expected: true,
},
{
doc: "DOCKER_CLI_HINTS=1",
env: "1",
expected: true,
},
{
doc: "DOCKER_CLI_HINTS=true",
env: "true",
expected: true,
},
{
doc: "DOCKER_CLI_HINTS=0",
env: "0",
expected: false,
},
{
doc: "DOCKER_CLI_HINTS=false",
env: "false",
expected: false,
},
{
doc: "DOCKER_CLI_HINTS=not-a-bool",
env: "not-a-bool",
expected: true,
},
}
for _, tc := range tests {
t.Run(tc.doc, func(t *testing.T) {
t.Setenv("DOCKER_CLI_HINTS", tc.env)
assert.Equal(t, Enabled(), tc.expected)
})
}
}

View File

@ -1,7 +1,6 @@
package main
import (
"github.com/docker/cli/cli/command/completion"
"github.com/docker/cli/cli/context/store"
"github.com/spf13/cobra"
)
@ -10,18 +9,15 @@ type contextStoreProvider interface {
ContextStore() store.Store
}
func registerCompletionFuncForGlobalFlags(dockerCLI contextStoreProvider, cmd *cobra.Command) error {
err := cmd.RegisterFlagCompletionFunc("context", func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) {
func completeContextNames(dockerCLI contextStoreProvider) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) {
return func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) {
names, _ := store.Names(dockerCLI.ContextStore())
return names, cobra.ShellCompDirectiveNoFileComp
})
if err != nil {
return err
}
err = cmd.RegisterFlagCompletionFunc("log-level", completion.FromList("debug", "info", "warn", "error", "fatal"))
if err != nil {
return err
}
return nil
}
var logLevels = []string{"debug", "info", "warn", "error", "fatal", "panic"}
func completeLogLevels(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) {
return cobra.FixedCompletions(logLevels, cobra.ShellCompDirectiveNoFileComp)(nil, nil, "")
}

View File

@ -0,0 +1,49 @@
package main
import (
"testing"
"github.com/docker/cli/cli/context/store"
"github.com/spf13/cobra"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
type fakeCLI struct {
contextStore store.Store
}
func (c *fakeCLI) ContextStore() store.Store {
return c.contextStore
}
type fakeContextStore struct {
store.Store
names []string
}
func (f fakeContextStore) List() (c []store.Metadata, _ error) {
for _, name := range f.names {
c = append(c, store.Metadata{Name: name})
}
return c, nil
}
func TestCompleteContextNames(t *testing.T) {
expectedNames := []string{"context-b", "context-c", "context-a"}
cli := &fakeCLI{
contextStore: fakeContextStore{
names: expectedNames,
},
}
values, directives := completeContextNames(cli)(nil, nil, "")
assert.Check(t, is.Equal(directives, cobra.ShellCompDirectiveNoFileComp))
assert.Check(t, is.DeepEqual(values, expectedNames))
}
func TestCompleteLogLevels(t *testing.T) {
values, directives := completeLogLevels(nil, nil, "")
assert.Check(t, is.Equal(directives, cobra.ShellCompDirectiveNoFileComp))
assert.Check(t, is.DeepEqual(values, logLevels))
}

View File

@ -100,7 +100,11 @@ func newDockerCommand(dockerCli *command.DockerCli) *cli.TopLevelCommand {
cmd.SetErr(dockerCli.Err())
opts, helpCmd = cli.SetupRootCommand(cmd)
_ = registerCompletionFuncForGlobalFlags(dockerCli, cmd)
// TODO(thaJeztah): move configuring completion for these flags to where the flags are added.
_ = cmd.RegisterFlagCompletionFunc("context", completeContextNames(dockerCli))
_ = cmd.RegisterFlagCompletionFunc("log-level", completeLogLevels)
cmd.Flags().BoolP("version", "v", false, "Print version information and quit")
setFlagErrorFunc(dockerCli, cmd)

View File

@ -5081,7 +5081,7 @@ _docker_system_events() {
return
;;
daemon)
local name=$(__docker_q info | sed -n 's/^\(ID\|Name\): //p')
local name=$(__docker_q info --format '{{.Info.Name}} {{.Info.ID}}')
COMPREPLY=( $( compgen -W "$name" -- "${cur##*=}" ) )
return
;;

View File

@ -3,7 +3,7 @@ module github.com/docker/cli/docs/generate
// dummy go.mod to avoid dealing with dependencies specific
// to docs generation and not really part of the project.
go 1.16
go 1.22.0
//require (
// github.com/docker/cli v0.0.0+incompatible

View File

@ -12,38 +12,50 @@ Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
## Description
A full image name has the following format and components:
A Docker image reference consists of several components that describe where the
image is stored and its identity. These components are:
`[HOST[:PORT_NUMBER]/]PATH`
```text
[HOST[:PORT]/]NAMESPACE/REPOSITORY[:TAG]
```
- `HOST`: The optional registry hostname specifies where the image is located.
The hostname must comply with standard DNS rules, but may not contain
underscores. If you don't specify a hostname, the command uses Docker's public
registry at `registry-1.docker.io` by default. Note that `docker.io` is the
canonical reference for Docker's public registry.
- `PORT_NUMBER`: If a hostname is present, it may optionally be followed by a
registry port number in the format `:8080`.
- `PATH`: The path consists of slash-separated components. Each
component may contain lowercase letters, digits and separators. A separator is
defined as a period, one or two underscores, or one or more hyphens. A component
may not start or end with a separator. While the
[OCI Distribution Specification](https://github.com/opencontainers/distribution-spec)
supports more than two slash-separated components, most registries only support
two slash-separated components. For Docker's public registry, the path format is
as follows:
- `[NAMESPACE/]REPOSITORY`: The first, optional component is typically a
user's or an organization's namespace. The second, mandatory component is the
repository name. When the namespace is not present, Docker uses `library`
as the default namespace.
`HOST`
: Specifies the registry location where the image resides. If omitted, Docker
defaults to Docker Hub (`docker.io`).
After the image name, the optional `TAG` is a custom, human-readable manifest
identifier that's typically a specific version or variant of an image. The tag
must be valid ASCII and can contain lowercase and uppercase letters, digits,
underscores, periods, and hyphens. It can't start with a period or hyphen and
must be no longer than 128 characters. If you don't specify a tag, the command uses `latest` by default.
`PORT`
: An optional port number for the registry, if necessary (for example, `:5000`).
You can group your images together using names and tags, and then
[push](image_push.md) them to a registry.
`NAMESPACE/REPOSITORY`
: The namespace (optional) usually represents a user or organization. The
repository is required and identifies the specific image. If the namespace is
omitted, Docker defaults to `library`, the namespace reserved for Docker
Official Images.
`TAG`
: An optional identifier used to specify a particular version or variant of the
image. If no tag is provided, Docker defaults to `latest`.
### Example image references
`example.com:5000/team/my-app:2.0`
- Host: `example.com`
- Port: `5000`
- Namespace: `team`
- Repository: `my-app`
- Tag: `2.0`
`alpine`
- Host: `docker.io` (default)
- Namespace: `library` (default)
- Repository: `alpine`
- Tag: `latest` (default)
For more information on the structure and rules of image naming, refer to the
[Distribution reference](https://pkg.go.dev/github.com/distribution/reference#pkg-overview)
as the canonical definition of the format.
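
The defaults described above can also be observed programmatically;
the following is a hedged Go sketch using the
`github.com/distribution/reference` package referred to above
(illustrative only, not part of the CLI documentation):

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	// ParseNormalizedNamed applies the defaults described above:
	// docker.io as the registry host and library/ as the namespace.
	named, err := reference.ParseNormalizedNamed("alpine")
	if err != nil {
		panic(err)
	}
	named = reference.TagNameOnly(named) // appends :latest when no tag is given
	fmt.Println(reference.Domain(named)) // docker.io
	fmt.Println(reference.Path(named))   // library/alpine
	fmt.Println(named.String())          // docker.io/library/alpine:latest
}
```
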
## Examples

View File

@ -26,7 +26,7 @@ per Docker object type. Different event types have different scopes. Local
scoped events are only seen on the node they take place on, and Swarm scoped
events are seen on all managers.
Only the last 1000 log events are returned. You can use filters to further limit
Only the last 256 log events are returned. You can use filters to further limit
the number of events returned.
### Object types
@ -156,7 +156,7 @@ that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap
seconds (aka Unix epoch or Unix time), and the optional .nanoseconds field is a
fraction of a second no more than nine digits long.
Only the last 1000 log events are returned. You can use filters to further limit
Only the last 256 log events are returned. You can use filters to further limit
the number of events returned.
#### <a name="filter"></a> Filtering (--filter)

View File

@ -3,7 +3,7 @@ module github.com/docker/cli/man
// dummy go.mod to avoid dealing with dependencies specific
// to manpages generation and not really part of the project.
go 1.16
go 1.22.0
//require (
// github.com/docker/cli v0.0.0+incompatible

View File

@ -18,7 +18,7 @@ init() {
cat > go.mod <<EOL
module github.com/docker/cli
go 1.19
go 1.22.0
EOL
}

View File

@ -89,3 +89,55 @@ func TestParseTruncateFunction(t *testing.T) {
})
}
}
func TestHeaderFunctions(t *testing.T) {
const source = "hello world"
tests := []struct {
doc string
template string
}{
{
doc: "json",
template: `{{ json .}}`,
},
{
doc: "split",
template: `{{ split . ","}}`,
},
{
doc: "join",
template: `{{ join . ","}}`,
},
{
doc: "title",
template: `{{ title .}}`,
},
{
doc: "lower",
template: `{{ lower .}}`,
},
{
doc: "upper",
template: `{{ upper .}}`,
},
{
doc: "truncate",
template: `{{ truncate . 2}}`,
},
}
for _, tc := range tests {
t.Run(tc.doc, func(t *testing.T) {
tmpl, err := New("").Funcs(HeaderFunctions).Parse(tc.template)
assert.NilError(t, err)
var b bytes.Buffer
assert.NilError(t, tmpl.Execute(&b, source))
// All header-functions are currently stubs, and don't modify the input.
expected := source
assert.Equal(t, expected, b.String())
})
}
}

View File

@ -4,7 +4,7 @@ module github.com/docker/cli
// There is no 'go.mod' file, as that would imply opting in for all the rules
// around SemVer, which this repo cannot abide by as it uses CalVer.
go 1.21.0
go 1.22.0
require (
dario.cat/mergo v1.0.1
@ -13,7 +13,7 @@ require (
github.com/distribution/reference v0.6.0
github.com/docker/cli-docs-tool v0.8.0
github.com/docker/distribution v2.8.3+incompatible
github.com/docker/docker v27.0.2-0.20240912171519-164cae56ed95+incompatible // master (v-next)
github.com/docker/docker v27.0.2-0.20241018142220-36a3bd090489+incompatible // master (v-next)
github.com/docker/docker-credential-helpers v0.8.2
github.com/docker/go-connections v0.5.0
github.com/docker/go-units v0.5.0
@ -25,7 +25,7 @@ require (
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
github.com/mattn/go-runewidth v0.0.15
github.com/moby/patternmatcher v0.6.0
github.com/moby/swarmkit/v2 v2.0.0-20240611172349-ea1a7cec35cb
github.com/moby/swarmkit/v2 v2.0.0-20241017191044-e8ecf83ee08e
github.com/moby/sys/capability v0.3.0
github.com/moby/sys/sequential v0.6.0
github.com/moby/sys/signal v0.7.1
@ -49,9 +49,9 @@ require (
go.opentelemetry.io/otel/sdk/metric v1.21.0
go.opentelemetry.io/otel/trace v1.21.0
golang.org/x/sync v0.8.0
golang.org/x/sys v0.24.0
golang.org/x/term v0.23.0
golang.org/x/text v0.17.0
golang.org/x/sys v0.25.0
golang.org/x/term v0.24.0
golang.org/x/text v0.18.0
gopkg.in/yaml.v2 v2.4.0
gotest.tools/v3 v3.5.1
tags.cncf.io/container-device-interface v0.8.0
@ -63,7 +63,7 @@ require (
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
@ -95,11 +95,11 @@ require (
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
golang.org/x/crypto v0.26.0 // indirect
golang.org/x/net v0.28.0 // indirect
golang.org/x/crypto v0.27.0 // indirect
golang.org/x/net v0.29.0 // indirect
golang.org/x/time v0.6.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/grpc v1.62.0 // indirect
google.golang.org/protobuf v1.33.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240604185151-ef581f913117 // indirect
google.golang.org/grpc v1.66.2 // indirect
google.golang.org/protobuf v1.34.1 // indirect
)

View File

@ -29,8 +29,8 @@ github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqy
github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004/go.mod h1:yMWuSON2oQp+43nFtAV/uvKQIFpSPerB57DCt9t8sSA=
github.com/cloudflare/cfssl v1.6.4 h1:NMOvfrEjFfC63K3SGXgAnFdsgkmiq4kATme5BfcqrO8=
github.com/cloudflare/cfssl v1.6.4/go.mod h1:8b3CQMxfWPAeom3zBnGJ6sd+G1NkL5TXqmDXacb+1J0=
@ -57,8 +57,8 @@ github.com/docker/cli-docs-tool v0.8.0/go.mod h1:8TQQ3E7mOXoYUs811LiPdUnAhXrcVsB
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v27.0.2-0.20240912171519-164cae56ed95+incompatible h1:HRK75BHG33htes7s+v/fJ8saCNw3B7f3spcgLsvbLRQ=
github.com/docker/docker v27.0.2-0.20240912171519-164cae56ed95+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v27.0.2-0.20241018142220-36a3bd090489+incompatible h1:utxxyIvPGk7UmtlGHirUyNUP2Spf8yL660PCbmb7tsk=
github.com/docker/docker v27.0.2-0.20241018142220-36a3bd090489+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.8.2 h1:bX3YxiGzFP5sOXWc3bTPEXdEaZSeVMrFgOr3T+zrFAo=
github.com/docker/docker-credential-helpers v0.8.2/go.mod h1:P3ci7E3lwkZg6XiHdRKft1KckHiO9a2rNtyFbZ/ry9M=
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c h1:lzqkGL9b3znc+ZUgi7FlLnqjQhcXxkNM/quxIjBVMD0=
@ -105,8 +105,8 @@ github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXP
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
github.com/golang/glog v1.2.0 h1:uCdmnmatrKCgMBlM4rMuJZWOkPDqdbZPnrMXDY4gI68=
github.com/golang/glog v1.2.0/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
github.com/golang/glog v1.2.1 h1:OptwRhECazUx5ix5TTWC3EZhsZEHWcYWY4FQHTIubm4=
github.com/golang/glog v1.2.1/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@ -180,8 +180,8 @@ github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3N
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/patternmatcher v0.6.0 h1:GmP9lR19aU5GqSSFko+5pRqHi+Ohk1O69aFiKkVGiPk=
github.com/moby/patternmatcher v0.6.0/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
github.com/moby/swarmkit/v2 v2.0.0-20240611172349-ea1a7cec35cb h1:1UTTg2EgO3nuyV03wREDzldqqePzQ4+0a5G1C1y1bIo=
github.com/moby/swarmkit/v2 v2.0.0-20240611172349-ea1a7cec35cb/go.mod h1:kNy225f/gWAnF8wPftteMc5nbAHhrH+HUfvyjmhFjeQ=
github.com/moby/swarmkit/v2 v2.0.0-20241017191044-e8ecf83ee08e h1:1yC8fRqStY6NirU/swI74fsrHvZVMbtxsHcvl8YpzDg=
github.com/moby/swarmkit/v2 v2.0.0-20241017191044-e8ecf83ee08e/go.mod h1:mTTGIAz/59OGZR5Qe+QByIe3Nxc+sSuJkrsStFhr6Lg=
github.com/moby/sys/capability v0.3.0 h1:kEP+y6te0gEXIaeQhIi0s7vKs/w0RPoH1qPa6jROcVg=
github.com/moby/sys/capability v0.3.0/go.mod h1:4g9IK291rVkms3LKCDOoYlnV8xKwoDTpIrNEE35Wq0I=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
@ -336,8 +336,8 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20201117144127-c1f2f97bffc9/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.26.0 h1:RrRspgV4mU+YwB4FYnuBoKsUapNIL5cohGAmSH3azsw=
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
golang.org/x/crypto v0.27.0 h1:GXm2NjJrPaiv/h1tb2UH8QfgC/hOf/+z0p6PT8o1w7A=
golang.org/x/crypto v0.27.0/go.mod h1:1Xngt8kV6Dvbssa53Ziq6Eqn0HqbZi5Z6R0ZpwQzt70=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
@ -353,8 +353,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.28.0 h1:a9JDOJc5GMUJ0+UDqmLT86WiEy7iWyIhz8gz8E4e5hE=
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo=
golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -385,24 +385,24 @@ golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.24.0 h1:Twjiwq9dn6R1fQcyiK+wQyHWfaz/BJB+YIpzU/Cv3Xg=
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34=
golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.23.0 h1:F6D4vR+EHoL9/sWAWgAR1H2DcHr4PareCbAaCo1RpuU=
golang.org/x/term v0.23.0/go.mod h1:DgV24QBUrK6jhZXl+20l6UWznPlwAHm1Q1mGHtydmSk=
golang.org/x/term v0.24.0 h1:Mh5cbb+Zk2hqqXNO7S1iTjEphVL+jb8ZWaqh/g+JWkM=
golang.org/x/term v0.24.0/go.mod h1:lOBK/LVxemqiMij05LGJ0tzNr8xlmwBRJ81PX6wVLH8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.17.0 h1:XtiM5bkSOt+ewxlOE/aE/AKEHibwj/6gvWMl9Rsh0Qc=
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.18.0 h1:XvMDiNzPAl0jr17s6W9lcaIhGUfUORdGCNsuLmPG224=
golang.org/x/text v0.18.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@ -416,19 +416,17 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 h1:KAeGQVN3M9nD0/bQXnr/ClcEMJ968gUXJQ9pwfSynuQ=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80/go.mod h1:cc8bqMqtv9gMOr0zHg2Vzff5ULhhL2IXP4sbcn32Dro=
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 h1:Lj5rbfG876hIAYFjqiJnPHfhXbv+nzTWfm04Fg/XSVU=
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80/go.mod h1:4jWUdICTdgc3Ibxmr8nAJiiLHwQBY0UI0XZcEMaFKaA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 h1:AjyfHzEPEFp/NpvfN5g+KDla3EMojjhRVZc1i7cj+oM=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80/go.mod h1:PAREbraiVEVGVdTZsVWjSbbTtSyGbAgIIvni8a8CD5s=
google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117 h1:+rdxYoE3E5htTEWIe15GlN6IfvbURM//Jt0mmkmm6ZU=
google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117/go.mod h1:OimBR/bc1wPO9iV4NC2bpyjy3VnAwZh5EBPQdtaE5oo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240604185151-ef581f913117 h1:1GBuWVLM/KMVUv1t1En5Gs+gFZCNd360GGb4sSxtrhU=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240604185151-ef581f913117/go.mod h1:EfXuqaE1J41VCDicxHzUDm+8rk+7ZdXzHV0IhO/I6s0=
google.golang.org/grpc v1.0.5/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.62.0 h1:HQKZ/fa1bXkX1oFOvSjmZEUL8wLSaZTjCcLAlmZRtdk=
google.golang.org/grpc v1.62.0/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=
google.golang.org/grpc v1.66.2 h1:3QdXkuq3Bkh7w+ywLdLvM56cmGvQHUMZpiCzt6Rqaoo=
google.golang.org/grpc v1.66.2/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/cenkalti/backoff.v2 v2.2.1 h1:eJ9UAg01/HIHG987TwxvnzK2MgxXq97YY6rYDpY9aII=

View File

@ -70,3 +70,5 @@ benchstat <(go test -benchtime 500ms -count 15 -bench 'Sum64$')
- [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics)
- [FreeCache](https://github.com/coocood/freecache)
- [FastCache](https://github.com/VictoriaMetrics/fastcache)
- [Ristretto](https://github.com/dgraph-io/ristretto)
- [Badger](https://github.com/dgraph-io/badger)

View File

@ -19,10 +19,13 @@ const (
// Store the primes in an array as well.
//
// The consts are used when possible in Go code to avoid MOVs but we need a
// contiguous array of the assembly code.
// contiguous array for the assembly code.
var primes = [...]uint64{prime1, prime2, prime3, prime4, prime5}
// Digest implements hash.Hash64.
//
// Note that a zero-valued Digest is not ready to receive writes.
// Call Reset or create a Digest using New before calling other methods.
type Digest struct {
v1 uint64
v2 uint64
@ -33,19 +36,31 @@ type Digest struct {
n int // how much of mem is used
}
// New creates a new Digest that computes the 64-bit xxHash algorithm.
// New creates a new Digest with a zero seed.
func New() *Digest {
return NewWithSeed(0)
}
// NewWithSeed creates a new Digest with the given seed.
func NewWithSeed(seed uint64) *Digest {
var d Digest
d.Reset()
d.ResetWithSeed(seed)
return &d
}
// Reset clears the Digest's state so that it can be reused.
// It uses a seed value of zero.
func (d *Digest) Reset() {
d.v1 = primes[0] + prime2
d.v2 = prime2
d.v3 = 0
d.v4 = -primes[0]
d.ResetWithSeed(0)
}
// ResetWithSeed clears the Digest's state so that it can be reused.
// It uses the given seed to initialize the state.
func (d *Digest) ResetWithSeed(seed uint64) {
d.v1 = seed + prime1 + prime2
d.v2 = seed + prime2
d.v3 = seed
d.v4 = seed - prime1
d.total = 0
d.n = 0
}
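
For orientation, a minimal sketch of how the new seeded xxhash API might be used; it assumes the usual github.com/cespare/xxhash/v2 import path and is not part of the diff above:

    package main

    import (
        "fmt"

        "github.com/cespare/xxhash/v2"
    )

    func main() {
        // Zero-seed digest; equivalent to calling xxhash.Sum64 directly.
        d := xxhash.New()
        d.Write([]byte("hello"))
        fmt.Println(d.Sum64())

        // Seeded digest; ResetWithSeed lets the same Digest be reused with a
        // different seed without allocating a new one.
        s := xxhash.NewWithSeed(42)
        s.Write([]byte("hello"))
        fmt.Println(s.Sum64())
        s.ResetWithSeed(7)
    }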

View File

@ -6,7 +6,7 @@
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
// Sum64 computes the 64-bit xxHash digest of b with a zero seed.
//
//go:noescape
func Sum64(b []byte) uint64

View File

@ -3,7 +3,7 @@
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
// Sum64 computes the 64-bit xxHash digest of b with a zero seed.
func Sum64(b []byte) uint64 {
// A simpler version would be
// d := New()

View File

@ -5,7 +5,7 @@
package xxhash
// Sum64String computes the 64-bit xxHash digest of s.
// Sum64String computes the 64-bit xxHash digest of s with a zero seed.
func Sum64String(s string) uint64 {
return Sum64([]byte(s))
}

View File

@ -33,7 +33,7 @@ import (
//
// See https://github.com/golang/go/issues/42739 for discussion.
// Sum64String computes the 64-bit xxHash digest of s.
// Sum64String computes the 64-bit xxHash digest of s with a zero seed.
// It may be faster than Sum64([]byte(s)) by avoiding a copy.
func Sum64String(s string) uint64 {
b := *(*[]byte)(unsafe.Pointer(&sliceHeader{s, len(s)}))

View File

@ -6005,7 +6005,7 @@ definitions:
accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates
from unknown CAs) communication.
By default, local registries (`127.0.0.0/8`) are configured as
By default, local registries (`::1/128` and `127.0.0.0/8`) are configured as
insecure. All other registries are secure. Communicating with an
insecure registry is not possible if the daemon assumes that registry
is secure.
@ -6170,6 +6170,8 @@ definitions:
Expected:
description: |
Commit ID of external tool expected by dockerd as set at build time.
**Deprecated**: This field is deprecated and will be omitted in API v1.49.
type: "string"
example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4"
@ -7881,10 +7883,12 @@ paths:
type: "string"
- name: "h"
in: "query"
required: true
description: "Height of the TTY session in characters"
type: "integer"
- name: "w"
in: "query"
required: true
description: "Width of the TTY session in characters"
type: "integer"
tags: ["Container"]
@ -10236,10 +10240,12 @@ paths:
type: "string"
- name: "h"
in: "query"
required: true
description: "Height of the TTY session in characters"
type: "integer"
- name: "w"
in: "query"
required: true
description: "Width of the TTY session in characters"
type: "integer"
tags: ["Exec"]

View File

@ -137,8 +137,13 @@ type PluginsInfo struct {
// Commit holds the Git-commit (SHA1) that a binary was built from, as reported
// in the version-string of external tools, such as containerd, or runC.
type Commit struct {
ID string // ID is the actual commit ID of external tool.
Expected string // Expected is the commit ID of external tool expected by dockerd as set at build time.
// ID is the actual commit ID or version of external tool.
ID string
// Expected is the commit ID of external tool expected by dockerd as set at build time.
//
// Deprecated: this field is no longer used in API v1.49, but kept for backward-compatibility with older API versions.
Expected string
}
// NetworkAddressPool is a temp struct used by [Info] struct.

View File

@ -172,4 +172,6 @@ type BuildCachePruneOptions struct {
All bool
KeepStorage int64
Filters filters.Args
// FIXME(thaJeztah): add new options; see https://github.com/moby/moby/issues/48639
}

View File

@ -2,7 +2,7 @@
Package client is a Go client for the Docker Engine API.
For more information about the Engine API, see the documentation:
https://docs.docker.com/engine/api/
https://docs.docker.com/reference/api/engine/
# Usage

View File

@ -5,6 +5,8 @@ import (
"encoding/json"
"net/url"
"path"
"sort"
"strings"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
@ -12,12 +14,6 @@ import (
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
type configWrapper struct {
*container.Config
HostConfig *container.HostConfig
NetworkingConfig *network.NetworkingConfig
}
// ContainerCreate creates a new container based on the given configuration.
// It can be associated with a name, but it's not mandatory.
func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *ocispec.Platform, containerName string) (container.CreateResponse, error) {
@ -58,6 +54,9 @@ func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config
// When using API under 1.42, the Linux daemon doesn't respect the ConsoleSize
hostConfig.ConsoleSize = [2]uint{0, 0}
}
hostConfig.CapAdd = normalizeCapabilities(hostConfig.CapAdd)
hostConfig.CapDrop = normalizeCapabilities(hostConfig.CapDrop)
}
// Since API 1.44, the container-wide MacAddress is deprecated and will trigger a WARNING if it's specified.
@ -74,7 +73,7 @@ func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config
query.Set("name", containerName)
}
body := configWrapper{
body := container.CreateRequest{
Config: config,
HostConfig: hostConfig,
NetworkingConfig: networkingConfig,
@ -114,3 +113,42 @@ func hasEndpointSpecificMacAddress(networkingConfig *network.NetworkingConfig) b
}
return false
}
// allCapabilities is a magic value for "all capabilities"
const allCapabilities = "ALL"
// normalizeCapabilities normalizes capabilities to their canonical form,
// removes duplicates, and sorts the results.
//
// It is similar to [github.com/docker/docker/oci/caps.NormalizeLegacyCapabilities],
// but performs no validation based on supported capabilities.
func normalizeCapabilities(caps []string) []string {
var normalized []string
unique := make(map[string]struct{})
for _, c := range caps {
c = normalizeCap(c)
if _, ok := unique[c]; ok {
continue
}
unique[c] = struct{}{}
normalized = append(normalized, c)
}
sort.Strings(normalized)
return normalized
}
// normalizeCap normalizes a capability to its canonical format by upper-casing
// and adding a "CAP_" prefix (if not yet present). It also accepts the "ALL"
// magic-value.
func normalizeCap(cap string) string {
cap = strings.ToUpper(cap)
if cap == allCapabilities {
return cap
}
if !strings.HasPrefix(cap, "CAP_") {
cap = "CAP_" + cap
}
return cap
}
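
A standalone sketch of the normalization behaviour these helpers implement; the logic is re-implemented here for illustration because the real helpers are unexported in the client package:

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // normalize mirrors the helpers above: upper-case, add the "CAP_" prefix
    // unless the value is the magic "ALL", de-duplicate, and sort.
    func normalize(caps []string) []string {
        seen := make(map[string]struct{})
        var out []string
        for _, c := range caps {
            c = strings.ToUpper(c)
            if c != "ALL" && !strings.HasPrefix(c, "CAP_") {
                c = "CAP_" + c
            }
            if _, ok := seen[c]; ok {
                continue
            }
            seen[c] = struct{}{}
            out = append(out, c)
        }
        sort.Strings(out)
        return out
    }

    func main() {
        fmt.Println(normalize([]string{"chown", "NET_ADMIN", "cap_net_admin", "all"}))
        // Output: [ALL CAP_CHOWN CAP_NET_ADMIN]
    }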

View File

@ -19,9 +19,10 @@ func (cli *Client) ContainerExecResize(ctx context.Context, execID string, optio
}
func (cli *Client) resize(ctx context.Context, basePath string, height, width uint) error {
// FIXME(thaJeztah): the API / backend accepts uint32, but container.ResizeOptions uses uint.
query := url.Values{}
query.Set("h", strconv.Itoa(int(height)))
query.Set("w", strconv.Itoa(int(width)))
query.Set("h", strconv.FormatUint(uint64(height), 10))
query.Set("w", strconv.FormatUint(uint64(width), 10))
resp, err := cli.post(ctx, basePath+"/resize", query, nil, nil)
ensureReaderClosed(resp)

View File

@ -12,6 +12,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
)
// ImageBuild sends a request to the daemon to build images.
@ -44,10 +45,15 @@ func (cli *Client) ImageBuild(ctx context.Context, buildContext io.Reader, optio
}
func (cli *Client) imageBuildOptionsToQuery(ctx context.Context, options types.ImageBuildOptions) (url.Values, error) {
query := url.Values{
"t": options.Tags,
"securityopt": options.SecurityOpt,
"extrahosts": options.ExtraHosts,
query := url.Values{}
if len(options.Tags) > 0 {
query["t"] = options.Tags
}
if len(options.SecurityOpt) > 0 {
query["securityopt"] = options.SecurityOpt
}
if len(options.ExtraHosts) > 0 {
query["extrahosts"] = options.ExtraHosts
}
if options.SuppressOutput {
query.Set("q", "1")
@ -58,9 +64,11 @@ func (cli *Client) imageBuildOptionsToQuery(ctx context.Context, options types.I
if options.NoCache {
query.Set("nocache", "1")
}
if options.Remove {
query.Set("rm", "1")
} else {
if !options.Remove {
// only send value when opting out because the daemon's default is
// to remove intermediate containers after a successful build,
//
// TODO(thaJeztah): deprecate "Remove" option, and provide a "NoRemove" or "Keep" option instead.
query.Set("rm", "0")
}
@ -83,42 +91,70 @@ func (cli *Client) imageBuildOptionsToQuery(ctx context.Context, options types.I
query.Set("isolation", string(options.Isolation))
}
query.Set("cpusetcpus", options.CPUSetCPUs)
query.Set("networkmode", options.NetworkMode)
query.Set("cpusetmems", options.CPUSetMems)
query.Set("cpushares", strconv.FormatInt(options.CPUShares, 10))
query.Set("cpuquota", strconv.FormatInt(options.CPUQuota, 10))
query.Set("cpuperiod", strconv.FormatInt(options.CPUPeriod, 10))
query.Set("memory", strconv.FormatInt(options.Memory, 10))
query.Set("memswap", strconv.FormatInt(options.MemorySwap, 10))
query.Set("cgroupparent", options.CgroupParent)
query.Set("shmsize", strconv.FormatInt(options.ShmSize, 10))
query.Set("dockerfile", options.Dockerfile)
query.Set("target", options.Target)
ulimitsJSON, err := json.Marshal(options.Ulimits)
if err != nil {
return query, err
if options.CPUSetCPUs != "" {
query.Set("cpusetcpus", options.CPUSetCPUs)
}
query.Set("ulimits", string(ulimitsJSON))
buildArgsJSON, err := json.Marshal(options.BuildArgs)
if err != nil {
return query, err
if options.NetworkMode != "" && options.NetworkMode != network.NetworkDefault {
query.Set("networkmode", options.NetworkMode)
}
query.Set("buildargs", string(buildArgsJSON))
labelsJSON, err := json.Marshal(options.Labels)
if err != nil {
return query, err
if options.CPUSetMems != "" {
query.Set("cpusetmems", options.CPUSetMems)
}
query.Set("labels", string(labelsJSON))
cacheFromJSON, err := json.Marshal(options.CacheFrom)
if err != nil {
return query, err
if options.CPUShares != 0 {
query.Set("cpushares", strconv.FormatInt(options.CPUShares, 10))
}
if options.CPUQuota != 0 {
query.Set("cpuquota", strconv.FormatInt(options.CPUQuota, 10))
}
if options.CPUPeriod != 0 {
query.Set("cpuperiod", strconv.FormatInt(options.CPUPeriod, 10))
}
if options.Memory != 0 {
query.Set("memory", strconv.FormatInt(options.Memory, 10))
}
if options.MemorySwap != 0 {
query.Set("memswap", strconv.FormatInt(options.MemorySwap, 10))
}
if options.CgroupParent != "" {
query.Set("cgroupparent", options.CgroupParent)
}
if options.ShmSize != 0 {
query.Set("shmsize", strconv.FormatInt(options.ShmSize, 10))
}
if options.Dockerfile != "" {
query.Set("dockerfile", options.Dockerfile)
}
if options.Target != "" {
query.Set("target", options.Target)
}
if len(options.Ulimits) != 0 {
ulimitsJSON, err := json.Marshal(options.Ulimits)
if err != nil {
return query, err
}
query.Set("ulimits", string(ulimitsJSON))
}
if len(options.BuildArgs) != 0 {
buildArgsJSON, err := json.Marshal(options.BuildArgs)
if err != nil {
return query, err
}
query.Set("buildargs", string(buildArgsJSON))
}
if len(options.Labels) != 0 {
labelsJSON, err := json.Marshal(options.Labels)
if err != nil {
return query, err
}
query.Set("labels", string(labelsJSON))
}
if len(options.CacheFrom) != 0 {
cacheFromJSON, err := json.Marshal(options.CacheFrom)
if err != nil {
return query, err
}
query.Set("cachefrom", string(cacheFromJSON))
}
query.Set("cachefrom", string(cacheFromJSON))
if options.SessionID != "" {
query.Set("session", options.SessionID)
}
@ -131,7 +167,9 @@ func (cli *Client) imageBuildOptionsToQuery(ctx context.Context, options types.I
if options.BuildID != "" {
query.Set("buildid", options.BuildID)
}
query.Set("version", string(options.Version))
if options.Version != "" {
query.Set("version", string(options.Version))
}
if options.Outputs != nil {
outputsJSON, err := json.Marshal(options.Outputs)
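
The change above follows a single pattern: a query parameter is only sent when its value differs from the daemon default. A small self-contained sketch of that pattern (the helper name is hypothetical, not part of the client):

    package main

    import (
        "fmt"
        "net/url"
        "strconv"
    )

    // setIfNotZero adds a query parameter only when the value is non-zero,
    // so daemon defaults are not spelled out in the request URL.
    func setIfNotZero(q url.Values, key string, v int64) {
        if v != 0 {
            q.Set(key, strconv.FormatInt(v, 10))
        }
    }

    func main() {
        q := url.Values{}
        setIfNotZero(q, "memory", 0)      // omitted
        setIfNotZero(q, "cpushares", 512) // included
        fmt.Println(q.Encode())           // "cpushares=512"
    }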

View File

@ -654,7 +654,7 @@ func (ta *tarAppender) addTarFile(path, name string) error {
ta.Buffer.Reset(ta.TarWriter)
defer ta.Buffer.Reset(nil)
_, err = io.Copy(ta.Buffer, file)
_, err = pools.Copy(ta.Buffer, file)
file.Close()
if err != nil {
return err
@ -705,7 +705,7 @@ func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader, o
if err != nil {
return err
}
if _, err := io.Copy(file, reader); err != nil {
if _, err := pools.Copy(file, reader); err != nil {
file.Close()
return err
}
@ -1375,7 +1375,7 @@ func (archiver *Archiver) CopyFileWithTar(src, dst string) (err error) {
if err := tw.WriteHeader(hdr); err != nil {
return err
}
if _, err := io.Copy(tw, srcF); err != nil {
if _, err := pools.Copy(tw, srcF); err != nil {
return err
}
return nil
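
For context, pools.Copy is the buffer-pool variant of io.Copy; a minimal usage sketch, assuming the github.com/docker/docker/pkg/pools import path:

    package main

    import (
        "bytes"
        "fmt"
        "strings"

        "github.com/docker/docker/pkg/pools"
    )

    func main() {
        // pools.Copy behaves like io.Copy but takes its intermediate buffer
        // from a shared pool instead of allocating a fresh one per call.
        var dst bytes.Buffer
        n, err := pools.Copy(&dst, strings.NewReader("hello"))
        fmt.Println(n, err, dst.String())
    }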

View File

@ -184,7 +184,7 @@ func (config *serviceConfig) loadMirrors(mirrors []string) error {
func (config *serviceConfig) loadInsecureRegistries(registries []string) error {
// Localhost is by default considered as an insecure registry. This is a
// stop-gap for people who are running a private registry on localhost.
registries = append(registries, "127.0.0.0/8")
registries = append(registries, "::1/128", "127.0.0.0/8")
var (
insecureRegistryCIDRs = make([]*registry.NetIPNet, 0)

View File

@ -552,6 +552,7 @@ ccflags="$@"
$2 !~ /^RTC_VL_(ACCURACY|BACKUP|DATA)/ &&
$2 ~ /^(NETLINK|NLM|NLMSG|NLA|IFA|IFAN|RT|RTC|RTCF|RTN|RTPROT|RTNH|ARPHRD|ETH_P|NETNSA)_/ ||
$2 ~ /^SOCK_|SK_DIAG_|SKNLGRP_$/ ||
$2 ~ /^(CONNECT|SAE)_/ ||
$2 ~ /^FIORDCHK$/ ||
$2 ~ /^SIOC/ ||
$2 ~ /^TIOC/ ||

View File

@ -566,6 +566,43 @@ func PthreadFchdir(fd int) (err error) {
return pthread_fchdir_np(fd)
}
// Connectx calls connectx(2) to initiate a connection on a socket.
//
// srcIf, srcAddr, and dstAddr are filled into a [SaEndpoints] struct and passed as the endpoints argument.
//
// - srcIf is the optional source interface index. 0 means unspecified.
// - srcAddr is the optional source address. nil means unspecified.
// - dstAddr is the destination address.
//
// On success, Connectx returns the number of bytes enqueued for transmission.
func Connectx(fd int, srcIf uint32, srcAddr, dstAddr Sockaddr, associd SaeAssocID, flags uint32, iov []Iovec, connid *SaeConnID) (n uintptr, err error) {
endpoints := SaEndpoints{
Srcif: srcIf,
}
if srcAddr != nil {
addrp, addrlen, err := srcAddr.sockaddr()
if err != nil {
return 0, err
}
endpoints.Srcaddr = (*RawSockaddr)(addrp)
endpoints.Srcaddrlen = uint32(addrlen)
}
if dstAddr != nil {
addrp, addrlen, err := dstAddr.sockaddr()
if err != nil {
return 0, err
}
endpoints.Dstaddr = (*RawSockaddr)(addrp)
endpoints.Dstaddrlen = uint32(addrlen)
}
err = connectx(fd, &endpoints, associd, flags, iov, &n, connid)
return
}
//sys connectx(fd int, endpoints *SaEndpoints, associd SaeAssocID, flags uint32, iov []Iovec, n *uintptr, connid *SaeConnID) (err error)
//sys sendfile(infd int, outfd int, offset int64, len *int64, hdtr unsafe.Pointer, flags int) (err error)
//sys shmat(id int, addr uintptr, flag int) (ret uintptr, err error)
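
A rough sketch of calling the new wrapper, based only on the signature documented above (darwin-only; the destination address is hypothetical):

    //go:build darwin

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        fd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, 0)
        if err != nil {
            panic(err)
        }
        defer unix.Close(fd)

        // No source interface or source address; let the kernel choose.
        dst := &unix.SockaddrInet4{Port: 80, Addr: [4]byte{127, 0, 0, 1}}
        n, err := unix.Connectx(fd, 0, nil, dst, unix.SAE_ASSOCID_ANY, 0, nil, nil)
        fmt.Println("bytes enqueued:", n, "err:", err)
    }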

View File

@ -11,6 +11,7 @@ package unix
int ioctl(int, unsigned long int, uintptr_t);
*/
import "C"
import "unsafe"
func ioctl(fd int, req uint, arg uintptr) (err error) {
r0, er := C.ioctl(C.int(fd), C.ulong(req), C.uintptr_t(arg))

View File

@ -237,6 +237,9 @@ const (
CLOCK_UPTIME_RAW_APPROX = 0x9
CLONE_NOFOLLOW = 0x1
CLONE_NOOWNERCOPY = 0x2
CONNECT_DATA_AUTHENTICATED = 0x4
CONNECT_DATA_IDEMPOTENT = 0x2
CONNECT_RESUME_ON_READ_WRITE = 0x1
CR0 = 0x0
CR1 = 0x1000
CR2 = 0x2000
@ -1265,6 +1268,10 @@ const (
RTV_SSTHRESH = 0x20
RUSAGE_CHILDREN = -0x1
RUSAGE_SELF = 0x0
SAE_ASSOCID_ALL = 0xffffffff
SAE_ASSOCID_ANY = 0x0
SAE_CONNID_ALL = 0xffffffff
SAE_CONNID_ANY = 0x0
SCM_CREDS = 0x3
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2

View File

@ -237,6 +237,9 @@ const (
CLOCK_UPTIME_RAW_APPROX = 0x9
CLONE_NOFOLLOW = 0x1
CLONE_NOOWNERCOPY = 0x2
CONNECT_DATA_AUTHENTICATED = 0x4
CONNECT_DATA_IDEMPOTENT = 0x2
CONNECT_RESUME_ON_READ_WRITE = 0x1
CR0 = 0x0
CR1 = 0x1000
CR2 = 0x2000
@ -1265,6 +1268,10 @@ const (
RTV_SSTHRESH = 0x20
RUSAGE_CHILDREN = -0x1
RUSAGE_SELF = 0x0
SAE_ASSOCID_ALL = 0xffffffff
SAE_ASSOCID_ANY = 0x0
SAE_CONNID_ALL = 0xffffffff
SAE_CONNID_ANY = 0x0
SCM_CREDS = 0x3
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2

View File

@ -581,6 +581,8 @@ const (
AT_EMPTY_PATH = 0x1000
AT_REMOVEDIR = 0x200
RENAME_NOREPLACE = 1 << 0
ST_RDONLY = 1
ST_NOSUID = 2
)
const (

View File

@ -841,6 +841,26 @@ var libc_pthread_fchdir_np_trampoline_addr uintptr
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func connectx(fd int, endpoints *SaEndpoints, associd SaeAssocID, flags uint32, iov []Iovec, n *uintptr, connid *SaeConnID) (err error) {
var _p0 unsafe.Pointer
if len(iov) > 0 {
_p0 = unsafe.Pointer(&iov[0])
} else {
_p0 = unsafe.Pointer(&_zero)
}
_, _, e1 := syscall_syscall9(libc_connectx_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(endpoints)), uintptr(associd), uintptr(flags), uintptr(_p0), uintptr(len(iov)), uintptr(unsafe.Pointer(n)), uintptr(unsafe.Pointer(connid)), 0)
if e1 != 0 {
err = errnoErr(e1)
}
return
}
var libc_connectx_trampoline_addr uintptr
//go:cgo_import_dynamic libc_connectx connectx "/usr/lib/libSystem.B.dylib"
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func sendfile(infd int, outfd int, offset int64, len *int64, hdtr unsafe.Pointer, flags int) (err error) {
_, _, e1 := syscall_syscall6(libc_sendfile_trampoline_addr, uintptr(infd), uintptr(outfd), uintptr(offset), uintptr(unsafe.Pointer(len)), uintptr(hdtr), uintptr(flags))
if e1 != 0 {

View File

@ -248,6 +248,11 @@ TEXT libc_pthread_fchdir_np_trampoline<>(SB),NOSPLIT,$0-0
GLOBL ·libc_pthread_fchdir_np_trampoline_addr(SB), RODATA, $8
DATA ·libc_pthread_fchdir_np_trampoline_addr(SB)/8, $libc_pthread_fchdir_np_trampoline<>(SB)
TEXT libc_connectx_trampoline<>(SB),NOSPLIT,$0-0
JMP libc_connectx(SB)
GLOBL ·libc_connectx_trampoline_addr(SB), RODATA, $8
DATA ·libc_connectx_trampoline_addr(SB)/8, $libc_connectx_trampoline<>(SB)
TEXT libc_sendfile_trampoline<>(SB),NOSPLIT,$0-0
JMP libc_sendfile(SB)
GLOBL ·libc_sendfile_trampoline_addr(SB), RODATA, $8

View File

@ -841,6 +841,26 @@ var libc_pthread_fchdir_np_trampoline_addr uintptr
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func connectx(fd int, endpoints *SaEndpoints, associd SaeAssocID, flags uint32, iov []Iovec, n *uintptr, connid *SaeConnID) (err error) {
var _p0 unsafe.Pointer
if len(iov) > 0 {
_p0 = unsafe.Pointer(&iov[0])
} else {
_p0 = unsafe.Pointer(&_zero)
}
_, _, e1 := syscall_syscall9(libc_connectx_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(endpoints)), uintptr(associd), uintptr(flags), uintptr(_p0), uintptr(len(iov)), uintptr(unsafe.Pointer(n)), uintptr(unsafe.Pointer(connid)), 0)
if e1 != 0 {
err = errnoErr(e1)
}
return
}
var libc_connectx_trampoline_addr uintptr
//go:cgo_import_dynamic libc_connectx connectx "/usr/lib/libSystem.B.dylib"
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func sendfile(infd int, outfd int, offset int64, len *int64, hdtr unsafe.Pointer, flags int) (err error) {
_, _, e1 := syscall_syscall6(libc_sendfile_trampoline_addr, uintptr(infd), uintptr(outfd), uintptr(offset), uintptr(unsafe.Pointer(len)), uintptr(hdtr), uintptr(flags))
if e1 != 0 {

View File

@ -248,6 +248,11 @@ TEXT libc_pthread_fchdir_np_trampoline<>(SB),NOSPLIT,$0-0
GLOBL ·libc_pthread_fchdir_np_trampoline_addr(SB), RODATA, $8
DATA ·libc_pthread_fchdir_np_trampoline_addr(SB)/8, $libc_pthread_fchdir_np_trampoline<>(SB)
TEXT libc_connectx_trampoline<>(SB),NOSPLIT,$0-0
JMP libc_connectx(SB)
GLOBL ·libc_connectx_trampoline_addr(SB), RODATA, $8
DATA ·libc_connectx_trampoline_addr(SB)/8, $libc_connectx_trampoline<>(SB)
TEXT libc_sendfile_trampoline<>(SB),NOSPLIT,$0-0
JMP libc_sendfile(SB)
GLOBL ·libc_sendfile_trampoline_addr(SB), RODATA, $8

View File

@ -306,6 +306,19 @@ type XVSockPgen struct {
type _Socklen uint32
type SaeAssocID uint32
type SaeConnID uint32
type SaEndpoints struct {
Srcif uint32
Srcaddr *RawSockaddr
Srcaddrlen uint32
Dstaddr *RawSockaddr
Dstaddrlen uint32
_ [4]byte
}
type Xucred struct {
Version uint32
Uid uint32

View File

@ -306,6 +306,19 @@ type XVSockPgen struct {
type _Socklen uint32
type SaeAssocID uint32
type SaeConnID uint32
type SaEndpoints struct {
Srcif uint32
Srcaddr *RawSockaddr
Srcaddrlen uint32
Dstaddr *RawSockaddr
Dstaddrlen uint32
_ [4]byte
}
type Xucred struct {
Version uint32
Uid uint32

View File

@ -625,6 +625,7 @@ const (
POLLRDNORM = 0x40
POLLWRBAND = 0x100
POLLWRNORM = 0x4
POLLRDHUP = 0x4000
)
type CapRights struct {

View File

@ -630,6 +630,7 @@ const (
POLLRDNORM = 0x40
POLLWRBAND = 0x100
POLLWRNORM = 0x4
POLLRDHUP = 0x4000
)
type CapRights struct {

View File

@ -616,6 +616,7 @@ const (
POLLRDNORM = 0x40
POLLWRBAND = 0x100
POLLWRNORM = 0x4
POLLRDHUP = 0x4000
)
type CapRights struct {

View File

@ -610,6 +610,7 @@ const (
POLLRDNORM = 0x40
POLLWRBAND = 0x100
POLLWRNORM = 0x4
POLLRDHUP = 0x4000
)
type CapRights struct {

View File

@ -612,6 +612,7 @@ const (
POLLRDNORM = 0x40
POLLWRBAND = 0x100
POLLWRNORM = 0x4
POLLRDHUP = 0x4000
)
type CapRights struct {

View File

@ -2486,7 +2486,7 @@ type XDPMmapOffsets struct {
type XDPUmemReg struct {
Addr uint64
Len uint64
Chunk_size uint32
Size uint32
Headroom uint32
Flags uint32
Tx_metadata_len uint32

View File

@ -727,6 +727,37 @@ const (
RISCV_HWPROBE_EXT_ZBA = 0x8
RISCV_HWPROBE_EXT_ZBB = 0x10
RISCV_HWPROBE_EXT_ZBS = 0x20
RISCV_HWPROBE_EXT_ZICBOZ = 0x40
RISCV_HWPROBE_EXT_ZBC = 0x80
RISCV_HWPROBE_EXT_ZBKB = 0x100
RISCV_HWPROBE_EXT_ZBKC = 0x200
RISCV_HWPROBE_EXT_ZBKX = 0x400
RISCV_HWPROBE_EXT_ZKND = 0x800
RISCV_HWPROBE_EXT_ZKNE = 0x1000
RISCV_HWPROBE_EXT_ZKNH = 0x2000
RISCV_HWPROBE_EXT_ZKSED = 0x4000
RISCV_HWPROBE_EXT_ZKSH = 0x8000
RISCV_HWPROBE_EXT_ZKT = 0x10000
RISCV_HWPROBE_EXT_ZVBB = 0x20000
RISCV_HWPROBE_EXT_ZVBC = 0x40000
RISCV_HWPROBE_EXT_ZVKB = 0x80000
RISCV_HWPROBE_EXT_ZVKG = 0x100000
RISCV_HWPROBE_EXT_ZVKNED = 0x200000
RISCV_HWPROBE_EXT_ZVKNHA = 0x400000
RISCV_HWPROBE_EXT_ZVKNHB = 0x800000
RISCV_HWPROBE_EXT_ZVKSED = 0x1000000
RISCV_HWPROBE_EXT_ZVKSH = 0x2000000
RISCV_HWPROBE_EXT_ZVKT = 0x4000000
RISCV_HWPROBE_EXT_ZFH = 0x8000000
RISCV_HWPROBE_EXT_ZFHMIN = 0x10000000
RISCV_HWPROBE_EXT_ZIHINTNTL = 0x20000000
RISCV_HWPROBE_EXT_ZVFH = 0x40000000
RISCV_HWPROBE_EXT_ZVFHMIN = 0x80000000
RISCV_HWPROBE_EXT_ZFA = 0x100000000
RISCV_HWPROBE_EXT_ZTSO = 0x200000000
RISCV_HWPROBE_EXT_ZACAS = 0x400000000
RISCV_HWPROBE_EXT_ZICOND = 0x800000000
RISCV_HWPROBE_EXT_ZIHINTPAUSE = 0x1000000000
RISCV_HWPROBE_KEY_CPUPERF_0 = 0x5
RISCV_HWPROBE_MISALIGNED_UNKNOWN = 0x0
RISCV_HWPROBE_MISALIGNED_EMULATED = 0x1
@ -734,4 +765,6 @@ const (
RISCV_HWPROBE_MISALIGNED_FAST = 0x3
RISCV_HWPROBE_MISALIGNED_UNSUPPORTED = 0x4
RISCV_HWPROBE_MISALIGNED_MASK = 0x7
RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE = 0x6
RISCV_HWPROBE_WHICH_CPUS = 0x1
)

View File

@ -313,6 +313,10 @@ func NewCallbackCDecl(fn interface{}) uintptr {
//sys SetConsoleMode(console Handle, mode uint32) (err error) = kernel32.SetConsoleMode
//sys GetConsoleScreenBufferInfo(console Handle, info *ConsoleScreenBufferInfo) (err error) = kernel32.GetConsoleScreenBufferInfo
//sys setConsoleCursorPosition(console Handle, position uint32) (err error) = kernel32.SetConsoleCursorPosition
//sys GetConsoleCP() (cp uint32, err error) = kernel32.GetConsoleCP
//sys GetConsoleOutputCP() (cp uint32, err error) = kernel32.GetConsoleOutputCP
//sys SetConsoleCP(cp uint32) (err error) = kernel32.SetConsoleCP
//sys SetConsoleOutputCP(cp uint32) (err error) = kernel32.SetConsoleOutputCP
//sys WriteConsole(console Handle, buf *uint16, towrite uint32, written *uint32, reserved *byte) (err error) = kernel32.WriteConsoleW
//sys ReadConsole(console Handle, buf *uint16, toread uint32, read *uint32, inputControl *byte) (err error) = kernel32.ReadConsoleW
//sys resizePseudoConsole(pconsole Handle, size uint32) (hr error) = kernel32.ResizePseudoConsole
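
A small Windows-only sketch of how the newly exposed code-page calls could be used, matching the signatures added above:

    //go:build windows

    package main

    import (
        "fmt"

        "golang.org/x/sys/windows"
    )

    func main() {
        // Read the current output code page, switch to UTF-8 (65001),
        // and restore the original value on exit.
        old, err := windows.GetConsoleOutputCP()
        if err != nil {
            panic(err)
        }
        defer windows.SetConsoleOutputCP(old)

        if err := windows.SetConsoleOutputCP(65001); err != nil {
            panic(err)
        }
        fmt.Println("previous code page:", old)
    }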

View File

@ -1060,6 +1060,7 @@ const (
SIO_GET_EXTENSION_FUNCTION_POINTER = IOC_INOUT | IOC_WS2 | 6
SIO_KEEPALIVE_VALS = IOC_IN | IOC_VENDOR | 4
SIO_UDP_CONNRESET = IOC_IN | IOC_VENDOR | 12
SIO_UDP_NETRESET = IOC_IN | IOC_VENDOR | 15
// cf. http://support.microsoft.com/default.aspx?scid=kb;en-us;257460

View File

@ -247,7 +247,9 @@ var (
procGetCommandLineW = modkernel32.NewProc("GetCommandLineW")
procGetComputerNameExW = modkernel32.NewProc("GetComputerNameExW")
procGetComputerNameW = modkernel32.NewProc("GetComputerNameW")
procGetConsoleCP = modkernel32.NewProc("GetConsoleCP")
procGetConsoleMode = modkernel32.NewProc("GetConsoleMode")
procGetConsoleOutputCP = modkernel32.NewProc("GetConsoleOutputCP")
procGetConsoleScreenBufferInfo = modkernel32.NewProc("GetConsoleScreenBufferInfo")
procGetCurrentDirectoryW = modkernel32.NewProc("GetCurrentDirectoryW")
procGetCurrentProcessId = modkernel32.NewProc("GetCurrentProcessId")
@ -347,8 +349,10 @@ var (
procSetCommMask = modkernel32.NewProc("SetCommMask")
procSetCommState = modkernel32.NewProc("SetCommState")
procSetCommTimeouts = modkernel32.NewProc("SetCommTimeouts")
procSetConsoleCP = modkernel32.NewProc("SetConsoleCP")
procSetConsoleCursorPosition = modkernel32.NewProc("SetConsoleCursorPosition")
procSetConsoleMode = modkernel32.NewProc("SetConsoleMode")
procSetConsoleOutputCP = modkernel32.NewProc("SetConsoleOutputCP")
procSetCurrentDirectoryW = modkernel32.NewProc("SetCurrentDirectoryW")
procSetDefaultDllDirectories = modkernel32.NewProc("SetDefaultDllDirectories")
procSetDllDirectoryW = modkernel32.NewProc("SetDllDirectoryW")
@ -2162,6 +2166,15 @@ func GetComputerName(buf *uint16, n *uint32) (err error) {
return
}
func GetConsoleCP() (cp uint32, err error) {
r0, _, e1 := syscall.Syscall(procGetConsoleCP.Addr(), 0, 0, 0, 0)
cp = uint32(r0)
if cp == 0 {
err = errnoErr(e1)
}
return
}
func GetConsoleMode(console Handle, mode *uint32) (err error) {
r1, _, e1 := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(console), uintptr(unsafe.Pointer(mode)), 0)
if r1 == 0 {
@ -2170,6 +2183,15 @@ func GetConsoleMode(console Handle, mode *uint32) (err error) {
return
}
func GetConsoleOutputCP() (cp uint32, err error) {
r0, _, e1 := syscall.Syscall(procGetConsoleOutputCP.Addr(), 0, 0, 0, 0)
cp = uint32(r0)
if cp == 0 {
err = errnoErr(e1)
}
return
}
func GetConsoleScreenBufferInfo(console Handle, info *ConsoleScreenBufferInfo) (err error) {
r1, _, e1 := syscall.Syscall(procGetConsoleScreenBufferInfo.Addr(), 2, uintptr(console), uintptr(unsafe.Pointer(info)), 0)
if r1 == 0 {
@ -3038,6 +3060,14 @@ func SetCommTimeouts(handle Handle, timeouts *CommTimeouts) (err error) {
return
}
func SetConsoleCP(cp uint32) (err error) {
r1, _, e1 := syscall.Syscall(procSetConsoleCP.Addr(), 1, uintptr(cp), 0, 0)
if r1 == 0 {
err = errnoErr(e1)
}
return
}
func setConsoleCursorPosition(console Handle, position uint32) (err error) {
r1, _, e1 := syscall.Syscall(procSetConsoleCursorPosition.Addr(), 2, uintptr(console), uintptr(position), 0)
if r1 == 0 {
@ -3054,6 +3084,14 @@ func SetConsoleMode(console Handle, mode uint32) (err error) {
return
}
func SetConsoleOutputCP(cp uint32) (err error) {
r1, _, e1 := syscall.Syscall(procSetConsoleOutputCP.Addr(), 1, uintptr(cp), 0, 0)
if r1 == 0 {
err = errnoErr(e1)
}
return
}
func SetCurrentDirectory(path *uint16) (err error) {
r1, _, e1 := syscall.Syscall(procSetCurrentDirectoryW.Addr(), 1, uintptr(unsafe.Pointer(path)), 0, 0)
if r1 == 0 {

View File

@ -26,6 +26,7 @@ func makeRaw(fd int) (*State, error) {
return nil, err
}
raw := st &^ (windows.ENABLE_ECHO_INPUT | windows.ENABLE_PROCESSED_INPUT | windows.ENABLE_LINE_INPUT | windows.ENABLE_PROCESSED_OUTPUT)
raw |= windows.ENABLE_VIRTUAL_TERMINAL_INPUT
if err := windows.SetConsoleMode(windows.Handle(fd), raw); err != nil {
return nil, err
}

View File

@ -1,4 +1,4 @@
// Copyright 2023 Google LLC
// Copyright 2024 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.21.9
// protoc v4.24.4
// source: google/api/httpbody.proto
package httpbody

View File

@ -1,4 +1,4 @@
// Copyright 2022 Google LLC
// Copyright 2024 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.21.9
// protoc v4.24.4
// source: google/rpc/error_details.proto
package errdetails

View File

@ -1,4 +1,4 @@
// Copyright 2022 Google LLC
// Copyright 2024 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.21.9
// protoc v4.24.4
// source: google/rpc/status.proto
package status

View File

@ -66,7 +66,7 @@ How to get your contributions merged smoothly and quickly.
- **All tests need to be passing** before your change can be merged. We
recommend you **run tests locally** before creating your PR to catch breakages
early on.
- `VET_SKIP_PROTO=1 ./vet.sh` to catch vet errors
- `./scripts/vet.sh` to catch vet errors
- `go test -cpu 1,4 -timeout 7m ./...` to run the tests
- `go test -race -cpu 1,4 -timeout 7m ./...` to run tests in race mode

View File

@ -9,20 +9,28 @@ for general contribution guidelines.
## Maintainers (in alphabetical order)
- [cesarghali](https://github.com/cesarghali), Google LLC
- [aranjans](https://github.com/aranjans), Google LLC
- [arjan-bal](https://github.com/arjan-bal), Google LLC
- [arvindbr8](https://github.com/arvindbr8), Google LLC
- [atollena](https://github.com/atollena), Datadog, Inc.
- [dfawley](https://github.com/dfawley), Google LLC
- [easwars](https://github.com/easwars), Google LLC
- [menghanl](https://github.com/menghanl), Google LLC
- [srini100](https://github.com/srini100), Google LLC
- [erm-g](https://github.com/erm-g), Google LLC
- [gtcooke94](https://github.com/gtcooke94), Google LLC
- [purnesh42h](https://github.com/purnesh42h), Google LLC
- [zasweq](https://github.com/zasweq), Google LLC
## Emeritus Maintainers (in alphabetical order)
- [adelez](https://github.com/adelez), Google LLC
- [canguler](https://github.com/canguler), Google LLC
- [iamqizhao](https://github.com/iamqizhao), Google LLC
- [jadekler](https://github.com/jadekler), Google LLC
- [jtattermusch](https://github.com/jtattermusch), Google LLC
- [lyuxuan](https://github.com/lyuxuan), Google LLC
- [makmukhi](https://github.com/makmukhi), Google LLC
- [matt-kwong](https://github.com/matt-kwong), Google LLC
- [nicolasnoble](https://github.com/nicolasnoble), Google LLC
- [yongni](https://github.com/yongni), Google LLC
- [adelez](https://github.com/adelez)
- [canguler](https://github.com/canguler)
- [cesarghali](https://github.com/cesarghali)
- [iamqizhao](https://github.com/iamqizhao)
- [jeanbza](https://github.com/jeanbza)
- [jtattermusch](https://github.com/jtattermusch)
- [lyuxuan](https://github.com/lyuxuan)
- [makmukhi](https://github.com/makmukhi)
- [matt-kwong](https://github.com/matt-kwong)
- [menghanl](https://github.com/menghanl)
- [nicolasnoble](https://github.com/nicolasnoble)
- [srini100](https://github.com/srini100)
- [yongni](https://github.com/yongni)

View File

@ -30,17 +30,20 @@ testdeps:
GO111MODULE=on go get -d -v -t google.golang.org/grpc/...
vet: vetdeps
./vet.sh
./scripts/vet.sh
vetdeps:
./vet.sh -install
./scripts/vet.sh -install
.PHONY: \
all \
build \
clean \
deps \
proto \
test \
testsubmodule \
testrace \
testdeps \
vet \
vetdeps

View File

@ -10,7 +10,7 @@ RPC framework that puts mobile and HTTP/2 first. For more information see the
## Prerequisites
- **[Go][]**: any one of the **three latest major** [releases][go-releases].
- **[Go][]**: any one of the **two latest major** [releases][go-releases].
## Installation

View File

@ -1,3 +1,3 @@
# Security Policy
For information on gRPC Security Policy and reporting potentional security issues, please see [gRPC CVE Process](https://github.com/grpc/proposal/blob/master/P4-grpc-cve-process.md).
For information on gRPC Security Policy and reporting potential security issues, please see [gRPC CVE Process](https://github.com/grpc/proposal/blob/master/P4-grpc-cve-process.md).

View File

@ -39,7 +39,7 @@ type Config struct {
MaxDelay time.Duration
}
// DefaultConfig is a backoff configuration with the default values specfied
// DefaultConfig is a backoff configuration with the default values specified
// at https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md.
//
// This should be useful for callers who want to configure backoff with

View File

@ -30,6 +30,7 @@ import (
"google.golang.org/grpc/channelz"
"google.golang.org/grpc/connectivity"
"google.golang.org/grpc/credentials"
estats "google.golang.org/grpc/experimental/stats"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/internal"
"google.golang.org/grpc/metadata"
@ -54,13 +55,14 @@ var (
// an init() function), and is not thread-safe. If multiple Balancers are
// registered with the same name, the one registered last will take effect.
func Register(b Builder) {
if strings.ToLower(b.Name()) != b.Name() {
name := strings.ToLower(b.Name())
if name != b.Name() {
// TODO: Skip the use of strings.ToLower() to index the map after v1.59
// is released to switch to case sensitive balancer registry. Also,
// remove this warning and update the docstrings for Register and Get.
logger.Warningf("Balancer registered with name %q. grpc-go will be switching to case sensitive balancer registries soon", b.Name())
}
m[strings.ToLower(b.Name())] = b
m[name] = b
}
// unregisterForTesting deletes the balancer with the given name from the
@ -71,8 +73,21 @@ func unregisterForTesting(name string) {
delete(m, name)
}
// connectedAddress returns the connected address for a SubConnState. The
// address is only valid if the state is READY.
func connectedAddress(scs SubConnState) resolver.Address {
return scs.connectedAddress
}
// setConnectedAddress sets the connected address for a SubConnState.
func setConnectedAddress(scs *SubConnState, addr resolver.Address) {
scs.connectedAddress = addr
}
func init() {
internal.BalancerUnregister = unregisterForTesting
internal.ConnectedAddress = connectedAddress
internal.SetConnectedAddress = setConnectedAddress
}
// Get returns the resolver builder registered with the given name.
@ -232,8 +247,8 @@ type BuildOptions struct {
// implementations which do not communicate with a remote load balancer
// server can ignore this field.
Authority string
// ChannelzParentID is the parent ClientConn's channelz ID.
ChannelzParentID *channelz.Identifier
// ChannelzParent is the parent ClientConn's channelz channel.
ChannelzParent channelz.Identifier
// CustomUserAgent is the custom user agent set on the parent ClientConn.
// The balancer should set the same custom user agent if it creates a
// ClientConn.
@ -242,6 +257,10 @@ type BuildOptions struct {
// same resolver.Target as passed to the resolver. See the documentation for
// the resolver.Target type for details about what it contains.
Target resolver.Target
// MetricsRecorder is the metrics recorder that balancers can use to record
// metrics. Balancer implementations which do not register metrics on
// metrics registry and record on them can ignore this field.
MetricsRecorder estats.MetricsRecorder
}
// Builder creates a balancer.
@ -409,6 +428,9 @@ type SubConnState struct {
// ConnectionError is set if the ConnectivityState is TransientFailure,
// describing the reason the SubConn failed. Otherwise, it is nil.
ConnectionError error
// connectedAddress contains the connected address when ConnectivityState is
// Ready. Otherwise, it is indeterminate.
connectedAddress resolver.Address
}
// ClientConnState describes the state of a ClientConn relevant to the

View File

@ -16,54 +16,60 @@
*
*/
package grpc
// Package pickfirst contains the pick_first load balancing policy.
package pickfirst
import (
"encoding/json"
"errors"
"fmt"
"math/rand"
"google.golang.org/grpc/balancer"
"google.golang.org/grpc/connectivity"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/internal"
internalgrpclog "google.golang.org/grpc/internal/grpclog"
"google.golang.org/grpc/internal/grpcrand"
"google.golang.org/grpc/internal/pretty"
"google.golang.org/grpc/resolver"
"google.golang.org/grpc/serviceconfig"
)
const (
// PickFirstBalancerName is the name of the pick_first balancer.
PickFirstBalancerName = "pick_first"
logPrefix = "[pick-first-lb %p] "
)
func newPickfirstBuilder() balancer.Builder {
return &pickfirstBuilder{}
func init() {
balancer.Register(pickfirstBuilder{})
internal.ShuffleAddressListForTesting = func(n int, swap func(i, j int)) { rand.Shuffle(n, swap) }
}
var logger = grpclog.Component("pick-first-lb")
const (
// Name is the name of the pick_first balancer.
Name = "pick_first"
logPrefix = "[pick-first-lb %p] "
)
type pickfirstBuilder struct{}
func (*pickfirstBuilder) Build(cc balancer.ClientConn, opt balancer.BuildOptions) balancer.Balancer {
func (pickfirstBuilder) Build(cc balancer.ClientConn, opt balancer.BuildOptions) balancer.Balancer {
b := &pickfirstBalancer{cc: cc}
b.logger = internalgrpclog.NewPrefixLogger(logger, fmt.Sprintf(logPrefix, b))
return b
}
func (*pickfirstBuilder) Name() string {
return PickFirstBalancerName
func (pickfirstBuilder) Name() string {
return Name
}
type pfConfig struct {
serviceconfig.LoadBalancingConfig `json:"-"`
// If set to true, instructs the LB policy to shuffle the order of the list
// of addresses received from the name resolver before attempting to
// of endpoints received from the name resolver before attempting to
// connect to them.
ShuffleAddressList bool `json:"shuffleAddressList"`
}
func (*pickfirstBuilder) ParseConfig(js json.RawMessage) (serviceconfig.LoadBalancingConfig, error) {
func (pickfirstBuilder) ParseConfig(js json.RawMessage) (serviceconfig.LoadBalancingConfig, error) {
var cfg pfConfig
if err := json.Unmarshal(js, &cfg); err != nil {
return nil, fmt.Errorf("pickfirst: unable to unmarshal LB policy config: %s, error: %v", string(js), err)
@ -97,9 +103,14 @@ func (b *pickfirstBalancer) ResolverError(err error) {
})
}
type Shuffler interface {
ShuffleAddressListForTesting(n int, swap func(i, j int))
}
func ShuffleAddressListForTesting(n int, swap func(i, j int)) { rand.Shuffle(n, swap) }
func (b *pickfirstBalancer) UpdateClientConnState(state balancer.ClientConnState) error {
addrs := state.ResolverState.Addresses
if len(addrs) == 0 {
if len(state.ResolverState.Addresses) == 0 && len(state.ResolverState.Endpoints) == 0 {
// The resolver reported an empty address list. Treat it like an error by
// calling b.ResolverError.
if b.subConn != nil {
@ -111,22 +122,49 @@ func (b *pickfirstBalancer) UpdateClientConnState(state balancer.ClientConnState
b.ResolverError(errors.New("produced zero addresses"))
return balancer.ErrBadResolverState
}
// We don't have to guard this block with the env var because ParseConfig
// already does so.
cfg, ok := state.BalancerConfig.(pfConfig)
if state.BalancerConfig != nil && !ok {
return fmt.Errorf("pickfirst: received illegal BalancerConfig (type %T): %v", state.BalancerConfig, state.BalancerConfig)
}
if cfg.ShuffleAddressList {
addrs = append([]resolver.Address{}, addrs...)
grpcrand.Shuffle(len(addrs), func(i, j int) { addrs[i], addrs[j] = addrs[j], addrs[i] })
}
if b.logger.V(2) {
b.logger.Infof("Received new config %s, resolver state %s", pretty.ToJSON(cfg), pretty.ToJSON(state.ResolverState))
}
var addrs []resolver.Address
if endpoints := state.ResolverState.Endpoints; len(endpoints) != 0 {
// Perform the optional shuffling described in gRFC A62. The shuffling will
// change the order of endpoints but not touch the order of the addresses
// within each endpoint. - A61
if cfg.ShuffleAddressList {
endpoints = append([]resolver.Endpoint{}, endpoints...)
internal.ShuffleAddressListForTesting.(func(int, func(int, int)))(len(endpoints), func(i, j int) { endpoints[i], endpoints[j] = endpoints[j], endpoints[i] })
}
// "Flatten the list by concatenating the ordered list of addresses for each
// of the endpoints, in order." - A61
for _, endpoint := range endpoints {
// "In the flattened list, interleave addresses from the two address
// families, as per RFC-8304 section 4." - A61
// TODO: support the above language.
addrs = append(addrs, endpoint.Addresses...)
}
} else {
// Endpoints not set, process addresses until we migrate resolver
// emissions fully to Endpoints. The top channel does wrap emitted
// addresses with endpoints, however some balancers such as weighted
// target do not forward the corresponding correct endpoints down/split
// endpoints properly. Once all balancers correctly forward endpoints
// down, can delete this else conditional.
addrs = state.ResolverState.Addresses
if cfg.ShuffleAddressList {
addrs = append([]resolver.Address{}, addrs...)
rand.Shuffle(len(addrs), func(i, j int) { addrs[i], addrs[j] = addrs[j], addrs[i] })
}
}
if b.subConn != nil {
b.cc.UpdateAddresses(b.subConn, addrs)
return nil
@ -243,7 +281,3 @@ func (i *idlePicker) Pick(balancer.PickInfo) (balancer.PickResult, error) {
i.subConn.Connect()
return balancer.PickResult{}, balancer.ErrNoSubConnAvailable
}
func init() {
balancer.Register(newPickfirstBuilder())
}
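
A standalone sketch of the "flatten the list" step from gRFC A61 quoted in the comments above; interleaving by address family remains a TODO upstream:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/resolver"
    )

    // flatten concatenates each endpoint's ordered addresses into one list,
    // preserving endpoint order and the address order within each endpoint.
    func flatten(endpoints []resolver.Endpoint) []resolver.Address {
        var addrs []resolver.Address
        for _, ep := range endpoints {
            addrs = append(addrs, ep.Addresses...)
        }
        return addrs
    }

    func main() {
        eps := []resolver.Endpoint{
            {Addresses: []resolver.Address{{Addr: "10.0.0.1:443"}, {Addr: "10.0.0.2:443"}}},
            {Addresses: []resolver.Address{{Addr: "10.0.1.1:443"}}},
        }
        for _, a := range flatten(eps) {
            fmt.Println(a.Addr)
        }
    }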

View File

@ -22,12 +22,12 @@
package roundrobin
import (
"math/rand"
"sync/atomic"
"google.golang.org/grpc/balancer"
"google.golang.org/grpc/balancer/base"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/internal/grpcrand"
)
// Name is the name of round_robin balancer.
@ -60,7 +60,7 @@ func (*rrPickerBuilder) Build(info base.PickerBuildInfo) balancer.Picker {
// Start at a random index, as the same RR balancer rebuilds a new
// picker when SubConn states change, and we don't want to apply excess
// load to the first server in the list.
next: uint32(grpcrand.Intn(len(scs))),
next: uint32(rand.Intn(len(scs))),
}
}

View File

@ -21,17 +21,19 @@ package grpc
import (
"context"
"fmt"
"strings"
"sync"
"google.golang.org/grpc/balancer"
"google.golang.org/grpc/connectivity"
"google.golang.org/grpc/internal"
"google.golang.org/grpc/internal/balancer/gracefulswitch"
"google.golang.org/grpc/internal/channelz"
"google.golang.org/grpc/internal/grpcsync"
"google.golang.org/grpc/resolver"
)
var setConnectedAddress = internal.SetConnectedAddress.(func(*balancer.SubConnState, resolver.Address))
// ccBalancerWrapper sits between the ClientConn and the Balancer.
//
// ccBalancerWrapper implements methods corresponding to the ones on the
@ -66,19 +68,21 @@ type ccBalancerWrapper struct {
}
// newCCBalancerWrapper creates a new balancer wrapper in idle state. The
// underlying balancer is not created until the switchTo() method is invoked.
// underlying balancer is not created until the updateClientConnState() method
// is invoked.
func newCCBalancerWrapper(cc *ClientConn) *ccBalancerWrapper {
ctx, cancel := context.WithCancel(cc.ctx)
ccb := &ccBalancerWrapper{
cc: cc,
opts: balancer.BuildOptions{
DialCreds: cc.dopts.copts.TransportCredentials,
CredsBundle: cc.dopts.copts.CredsBundle,
Dialer: cc.dopts.copts.Dialer,
Authority: cc.authority,
CustomUserAgent: cc.dopts.copts.UserAgent,
ChannelzParentID: cc.channelzID,
Target: cc.parsedTarget,
DialCreds: cc.dopts.copts.TransportCredentials,
CredsBundle: cc.dopts.copts.CredsBundle,
Dialer: cc.dopts.copts.Dialer,
Authority: cc.authority,
CustomUserAgent: cc.dopts.copts.UserAgent,
ChannelzParent: cc.channelz,
Target: cc.parsedTarget,
MetricsRecorder: cc.metricsRecorderList,
},
serializer: grpcsync.NewCallbackSerializer(ctx),
serializerCancel: cancel,
@ -92,27 +96,38 @@ func newCCBalancerWrapper(cc *ClientConn) *ccBalancerWrapper {
// it is safe to call into the balancer here.
func (ccb *ccBalancerWrapper) updateClientConnState(ccs *balancer.ClientConnState) error {
errCh := make(chan error)
ok := ccb.serializer.Schedule(func(ctx context.Context) {
uccs := func(ctx context.Context) {
defer close(errCh)
if ctx.Err() != nil || ccb.balancer == nil {
return
}
name := gracefulswitch.ChildName(ccs.BalancerConfig)
if ccb.curBalancerName != name {
ccb.curBalancerName = name
channelz.Infof(logger, ccb.cc.channelz, "Channel switches to new LB policy %q", name)
}
err := ccb.balancer.UpdateClientConnState(*ccs)
if logger.V(2) && err != nil {
logger.Infof("error from balancer.UpdateClientConnState: %v", err)
}
errCh <- err
})
if !ok {
return nil
}
onFailure := func() { close(errCh) }
// UpdateClientConnState can race with Close, and when the latter wins, the
// serializer is closed, and the attempt to schedule the callback will fail.
// It is acceptable to ignore this failure. But since we want to handle the
// state update in a blocking fashion (when we successfully schedule the
// callback), we have to use the ScheduleOr method and not the MaybeSchedule
// method on the serializer.
ccb.serializer.ScheduleOr(uccs, onFailure)
return <-errCh
}
// resolverError is invoked by grpc to push a resolver error to the underlying
// balancer. The call to the balancer is executed from the serializer.
func (ccb *ccBalancerWrapper) resolverError(err error) {
ccb.serializer.Schedule(func(ctx context.Context) {
ccb.serializer.TrySchedule(func(ctx context.Context) {
if ctx.Err() != nil || ccb.balancer == nil {
return
}
@ -120,54 +135,6 @@ func (ccb *ccBalancerWrapper) resolverError(err error) {
})
}
// switchTo is invoked by grpc to instruct the balancer wrapper to switch to the
// LB policy identified by name.
//
// ClientConn calls newCCBalancerWrapper() at creation time. Upon receipt of the
// first good update from the name resolver, it determines the LB policy to use
// and invokes the switchTo() method. Upon receipt of every subsequent update
// from the name resolver, it invokes this method.
//
// the ccBalancerWrapper keeps track of the current LB policy name, and skips
// the graceful balancer switching process if the name does not change.
func (ccb *ccBalancerWrapper) switchTo(name string) {
ccb.serializer.Schedule(func(ctx context.Context) {
if ctx.Err() != nil || ccb.balancer == nil {
return
}
// TODO: Other languages use case-sensitive balancer registries. We should
// switch as well. See: https://github.com/grpc/grpc-go/issues/5288.
if strings.EqualFold(ccb.curBalancerName, name) {
return
}
ccb.buildLoadBalancingPolicy(name)
})
}
// buildLoadBalancingPolicy performs the following:
// - retrieve a balancer builder for the given name. Use the default LB
// policy, pick_first, if no LB policy with name is found in the registry.
// - instruct the gracefulswitch balancer to switch to the above builder. This
// will actually build the new balancer.
// - update the `curBalancerName` field
//
// Must be called from a serializer callback.
func (ccb *ccBalancerWrapper) buildLoadBalancingPolicy(name string) {
builder := balancer.Get(name)
if builder == nil {
channelz.Warningf(logger, ccb.cc.channelzID, "Channel switches to new LB policy %q, since the specified LB policy %q was not registered", PickFirstBalancerName, name)
builder = newPickfirstBuilder()
} else {
channelz.Infof(logger, ccb.cc.channelzID, "Channel switches to new LB policy %q", name)
}
if err := ccb.balancer.SwitchTo(builder); err != nil {
channelz.Errorf(logger, ccb.cc.channelzID, "Channel failed to build new LB policy %q: %v", name, err)
return
}
ccb.curBalancerName = builder.Name()
}
// close initiates async shutdown of the wrapper. cc.mu must be held when
// calling this function. To determine the wrapper has finished shutting down,
// the channel should block on ccb.serializer.Done() without cc.mu held.
@ -175,8 +142,8 @@ func (ccb *ccBalancerWrapper) close() {
ccb.mu.Lock()
ccb.closed = true
ccb.mu.Unlock()
channelz.Info(logger, ccb.cc.channelzID, "ccBalancerWrapper: closing")
ccb.serializer.Schedule(func(context.Context) {
channelz.Info(logger, ccb.cc.channelz, "ccBalancerWrapper: closing")
ccb.serializer.TrySchedule(func(context.Context) {
if ccb.balancer == nil {
return
}
@ -188,7 +155,7 @@ func (ccb *ccBalancerWrapper) close() {
// exitIdle invokes the balancer's exitIdle method in the serializer.
func (ccb *ccBalancerWrapper) exitIdle() {
ccb.serializer.Schedule(func(ctx context.Context) {
ccb.serializer.TrySchedule(func(ctx context.Context) {
if ctx.Err() != nil || ccb.balancer == nil {
return
}
@ -212,7 +179,7 @@ func (ccb *ccBalancerWrapper) NewSubConn(addrs []resolver.Address, opts balancer
}
ac, err := ccb.cc.newAddrConnLocked(addrs, opts)
if err != nil {
channelz.Warningf(logger, ccb.cc.channelzID, "acBalancerWrapper: NewSubConn: failed to newAddrConn: %v", err)
channelz.Warningf(logger, ccb.cc.channelz, "acBalancerWrapper: NewSubConn: failed to newAddrConn: %v", err)
return nil, err
}
acbw := &acBalancerWrapper{
@ -241,6 +208,10 @@ func (ccb *ccBalancerWrapper) UpdateAddresses(sc balancer.SubConn, addrs []resol
func (ccb *ccBalancerWrapper) UpdateState(s balancer.State) {
ccb.cc.mu.Lock()
defer ccb.cc.mu.Unlock()
if ccb.cc.conns == nil {
// The CC has been closed; ignore this update.
return
}
ccb.mu.Lock()
if ccb.closed {
@ -291,20 +262,34 @@ type acBalancerWrapper struct {
// updateState is invoked by grpc to push a subConn state update to the
// underlying balancer.
func (acbw *acBalancerWrapper) updateState(s connectivity.State, err error) {
acbw.ccb.serializer.Schedule(func(ctx context.Context) {
func (acbw *acBalancerWrapper) updateState(s connectivity.State, curAddr resolver.Address, err error) {
acbw.ccb.serializer.TrySchedule(func(ctx context.Context) {
if ctx.Err() != nil || acbw.ccb.balancer == nil {
return
}
// Even though it is optional for balancers, gracefulswitch ensures
// opts.StateListener is set, so this cannot ever be nil.
// TODO: delete this comment when UpdateSubConnState is removed.
acbw.stateListener(balancer.SubConnState{ConnectivityState: s, ConnectionError: err})
scs := balancer.SubConnState{ConnectivityState: s, ConnectionError: err}
if s == connectivity.Ready {
setConnectedAddress(&scs, curAddr)
}
acbw.stateListener(scs)
acbw.ac.mu.Lock()
defer acbw.ac.mu.Unlock()
if s == connectivity.Ready {
// When changing states to READY, reset stateReadyChan. Wait until
// after we notify the LB policy's listener(s) in order to prevent
// ac.getTransport() from unblocking before the LB policy starts
// tracking the subchannel as READY.
close(acbw.ac.stateReadyChan)
acbw.ac.stateReadyChan = make(chan struct{})
}
})
}
func (acbw *acBalancerWrapper) String() string {
return fmt.Sprintf("SubConn(id:%d)", acbw.ac.channelzID.Int())
return fmt.Sprintf("SubConn(id:%d)", acbw.ac.channelz.ID)
}
func (acbw *acBalancerWrapper) UpdateAddresses(addrs []resolver.Address) {

View File

@ -18,8 +18,8 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.32.0
// protoc v4.25.2
// protoc-gen-go v1.34.1
// protoc v5.27.1
// source: grpc/binlog/v1/binarylog.proto
package grpc_binarylog_v1

View File

@ -24,6 +24,7 @@ import (
"fmt"
"math"
"net/url"
"slices"
"strings"
"sync"
"sync/atomic"
@ -31,14 +32,15 @@ import (
"google.golang.org/grpc/balancer"
"google.golang.org/grpc/balancer/base"
"google.golang.org/grpc/balancer/pickfirst"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/connectivity"
"google.golang.org/grpc/internal"
"google.golang.org/grpc/internal/channelz"
"google.golang.org/grpc/internal/grpcsync"
"google.golang.org/grpc/internal/idle"
"google.golang.org/grpc/internal/pretty"
iresolver "google.golang.org/grpc/internal/resolver"
"google.golang.org/grpc/internal/stats"
"google.golang.org/grpc/internal/transport"
"google.golang.org/grpc/keepalive"
"google.golang.org/grpc/resolver"
@ -67,12 +69,14 @@ var (
errConnDrain = errors.New("grpc: the connection is drained")
// errConnClosing indicates that the connection is closing.
errConnClosing = errors.New("grpc: the connection is closing")
// errConnIdling indicates the the connection is being closed as the channel
// errConnIdling indicates the connection is being closed as the channel
// is moving to an idle mode due to inactivity.
errConnIdling = errors.New("grpc: the connection is closing due to channel idleness")
// invalidDefaultServiceConfigErrPrefix is used to prefix the json parsing error for the default
// service config.
invalidDefaultServiceConfigErrPrefix = "grpc: the provided default service config is invalid"
// PickFirstBalancerName is the name of the pick_first balancer.
PickFirstBalancerName = pickfirst.Name
)
// The following errors are returned from Dial and DialContext
@ -101,11 +105,6 @@ const (
defaultReadBufSize = 32 * 1024
)
// Dial creates a client connection to the given target.
func Dial(target string, opts ...DialOption) (*ClientConn, error) {
return DialContext(context.Background(), target, opts...)
}
type defaultConfigSelector struct {
sc *ServiceConfig
}
@ -117,13 +116,23 @@ func (dcs *defaultConfigSelector) SelectConfig(rpcInfo iresolver.RPCInfo) (*ires
}, nil
}
// newClient returns a new client in idle mode.
func newClient(target string, opts ...DialOption) (conn *ClientConn, err error) {
// NewClient creates a new gRPC "channel" for the target URI provided. No I/O
// is performed. Use of the ClientConn for RPCs will automatically cause it to
// connect. Connect may be used to manually create a connection, but for most
// users this is unnecessary.
//
// The target name syntax is defined in
// https://github.com/grpc/grpc/blob/master/doc/naming.md. e.g. to use dns
// resolver, a "dns:///" prefix should be applied to the target.
//
// The DialOptions returned by WithBlock, WithTimeout,
// WithReturnConnectionError, and FailOnNonTempDialError are ignored by this
// function.
func NewClient(target string, opts ...DialOption) (conn *ClientConn, err error) {
cc := &ClientConn{
target: target,
conns: make(map[*addrConn]struct{}),
dopts: defaultDialOptions(),
czData: new(channelzData),
}
cc.retryThrottler.Store((*retryThrottler)(nil))
@ -148,6 +157,16 @@ func newClient(target string, opts ...DialOption) (conn *ClientConn, err error)
for _, opt := range opts {
opt.apply(&cc.dopts)
}
// Determine the resolver to use.
if err := cc.initParsedTargetAndResolverBuilder(); err != nil {
return nil, err
}
for _, opt := range globalPerTargetDialOptions {
opt.DialOptionForTarget(cc.parsedTarget.URL).apply(&cc.dopts)
}
chainUnaryClientInterceptors(cc)
chainStreamClientInterceptors(cc)
@ -156,7 +175,7 @@ func newClient(target string, opts ...DialOption) (conn *ClientConn, err error)
}
if cc.dopts.defaultServiceConfigRawJSON != nil {
scpr := parseServiceConfig(*cc.dopts.defaultServiceConfigRawJSON)
scpr := parseServiceConfig(*cc.dopts.defaultServiceConfigRawJSON, cc.dopts.maxCallAttempts)
if scpr.Err != nil {
return nil, fmt.Errorf("%s: %v", invalidDefaultServiceConfigErrPrefix, scpr.Err)
}
@ -164,66 +183,57 @@ func newClient(target string, opts ...DialOption) (conn *ClientConn, err error)
}
cc.mkp = cc.dopts.copts.KeepaliveParams
// Register ClientConn with channelz.
if err = cc.initAuthority(); err != nil {
return nil, err
}
// Register ClientConn with channelz. Note that this is only done after
// channel creation cannot fail.
cc.channelzRegistration(target)
channelz.Infof(logger, cc.channelz, "parsed dial target is: %#v", cc.parsedTarget)
channelz.Infof(logger, cc.channelz, "Channel authority set to %q", cc.authority)
// TODO: Ideally it should be impossible to error from this function after
// channelz registration. This will require removing some channelz logs
// from the following functions that can error. Errors can be returned to
// the user, and successful logs can be emitted here, after the checks have
// passed and channelz is subsequently registered.
// Determine the resolver to use.
if err := cc.parseTargetAndFindResolver(); err != nil {
channelz.RemoveEntry(cc.channelzID)
return nil, err
}
if err = cc.determineAuthority(); err != nil {
channelz.RemoveEntry(cc.channelzID)
return nil, err
}
cc.csMgr = newConnectivityStateManager(cc.ctx, cc.channelzID)
cc.csMgr = newConnectivityStateManager(cc.ctx, cc.channelz)
cc.pickerWrapper = newPickerWrapper(cc.dopts.copts.StatsHandlers)
cc.metricsRecorderList = stats.NewMetricsRecorderList(cc.dopts.copts.StatsHandlers)
cc.initIdleStateLocked() // Safe to call without the lock, since nothing else has a reference to cc.
cc.idlenessMgr = idle.NewManager((*idler)(cc), cc.dopts.idleTimeout)
return cc, nil
}
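For context, a minimal sketch (not from this diff) of how the new NewClient entry point is typically used; the address and the insecure credentials are placeholders for the example.

package main

import (
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    // NewClient performs no I/O; the channel starts in idle mode and its
    // default name-resolution scheme is "dns".
    conn, err := grpc.NewClient("dns:///localhost:50051",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("grpc.NewClient: %v", err)
    }
    defer conn.Close()

    // Optional: connect eagerly instead of waiting for the first RPC to kick
    // the channel out of idle.
    conn.Connect()
    log.Printf("channel state: %v", conn.GetState())
}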
// DialContext creates a client connection to the given target. By default, it's
// a non-blocking dial (the function won't wait for connections to be
// established, and connecting happens in the background). To make it a blocking
// dial, use WithBlock() dial option.
// Dial calls DialContext(context.Background(), target, opts...).
//
// In the non-blocking case, the ctx does not act against the connection. It
// only controls the setup steps.
// Deprecated: use NewClient instead. Will be supported throughout 1.x.
func Dial(target string, opts ...DialOption) (*ClientConn, error) {
return DialContext(context.Background(), target, opts...)
}
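A hedged migration sketch, assuming the same imports as the example above: as the comments on DialContext below note, Dial keeps "passthrough" as its default scheme, so a literal switch to NewClient may need the scheme spelled out to preserve behavior (for example with a custom dialer). The address is a placeholder.

func newConn() (*grpc.ClientConn, error) {
    // Before (deprecated):
    //   return grpc.Dial("10.0.0.1:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
    // After, keeping the old passthrough resolution behavior explicit:
    return grpc.NewClient("passthrough:///10.0.0.1:50051",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
}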
// DialContext calls NewClient and then exits idle mode. If WithBlock(true) is
// used, it calls Connect and WaitForStateChange until either the context
// expires or the state of the ClientConn is Ready.
//
// In the blocking case, ctx can be used to cancel or expire the pending
// connection. Once this function returns, the cancellation and expiration of
// ctx will be noop. Users should call ClientConn.Close to terminate all the
// pending operations after this function returns.
// One subtle difference between NewClient and Dial and DialContext is that the
// former uses "dns" as the default name resolver, while the latter use
// "passthrough" for backward compatibility. This distinction should not matter
// to most users, but could matter to legacy users that specify a custom dialer
// and expect it to receive the target string directly.
//
// The target name syntax is defined in
// https://github.com/grpc/grpc/blob/master/doc/naming.md.
// e.g. to use dns resolver, a "dns:///" prefix should be applied to the target.
// Deprecated: use NewClient instead. Will be supported throughout 1.x.
func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) {
cc, err := newClient(target, opts...)
// At the end of this method, we kick the channel out of idle, rather than
// waiting for the first rpc.
opts = append([]DialOption{withDefaultScheme("passthrough")}, opts...)
cc, err := NewClient(target, opts...)
if err != nil {
return nil, err
}
// We start the channel off in idle mode, but kick it out of idle now,
// instead of waiting for the first RPC. Other gRPC implementations do wait
// for the first RPC to kick the channel out of idle. But doing so would be
// a major behavior change for our users who are used to seeing the channel
// active after Dial.
//
// Taking this approach of kicking it out of idle at the end of this method
// allows us to share the code between channel creation and exiting idle
// mode. This will also make it easy for us to switch to starting the
// channel off in idle, i.e. by making newClient exported.
// instead of waiting for the first RPC. This is the legacy behavior of
// Dial.
defer func() {
if err != nil {
cc.Close()
@ -291,17 +301,17 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
// addTraceEvent is a helper method to add a trace event on the channel. If the
// channel is a nested one, the same event is also added on the parent channel.
func (cc *ClientConn) addTraceEvent(msg string) {
ted := &channelz.TraceEventDesc{
ted := &channelz.TraceEvent{
Desc: fmt.Sprintf("Channel %s", msg),
Severity: channelz.CtInfo,
}
if cc.dopts.channelzParentID != nil {
ted.Parent = &channelz.TraceEventDesc{
Desc: fmt.Sprintf("Nested channel(id:%d) %s", cc.channelzID.Int(), msg),
if cc.dopts.channelzParent != nil {
ted.Parent = &channelz.TraceEvent{
Desc: fmt.Sprintf("Nested channel(id:%d) %s", cc.channelz.ID, msg),
Severity: channelz.CtInfo,
}
}
channelz.AddTraceEvent(logger, cc.channelzID, 0, ted)
channelz.AddTraceEvent(logger, cc.channelz, 0, ted)
}
type idler ClientConn
@ -418,14 +428,15 @@ func (cc *ClientConn) validateTransportCredentials() error {
}
// channelzRegistration registers the newly created ClientConn with channelz and
// stores the returned identifier in `cc.channelzID` and `cc.csMgr.channelzID`.
// A channelz trace event is emitted for ClientConn creation. If the newly
// created ClientConn is a nested one, i.e a valid parent ClientConn ID is
// specified via a dial option, the trace event is also added to the parent.
// stores the returned identifier in `cc.channelz`. A channelz trace event is
// emitted for ClientConn creation. If the newly created ClientConn is a nested
// one, i.e a valid parent ClientConn ID is specified via a dial option, the
// trace event is also added to the parent.
//
// Doesn't grab cc.mu as this method is expected to be called only at Dial time.
func (cc *ClientConn) channelzRegistration(target string) {
cc.channelzID = channelz.RegisterChannel(&channelzChannel{cc}, cc.dopts.channelzParentID, target)
parentChannel, _ := cc.dopts.channelzParent.(*channelz.Channel)
cc.channelz = channelz.RegisterChannel(parentChannel, target)
cc.addTraceEvent("created")
}
@ -492,11 +503,11 @@ func getChainStreamer(interceptors []StreamClientInterceptor, curr int, finalStr
}
// newConnectivityStateManager creates a connectivityStateManager with
// the specified id.
func newConnectivityStateManager(ctx context.Context, id *channelz.Identifier) *connectivityStateManager {
// the specified channel.
func newConnectivityStateManager(ctx context.Context, channel *channelz.Channel) *connectivityStateManager {
return &connectivityStateManager{
channelzID: id,
pubSub: grpcsync.NewPubSub(ctx),
channelz: channel,
pubSub: grpcsync.NewPubSub(ctx),
}
}
@ -510,7 +521,7 @@ type connectivityStateManager struct {
mu sync.Mutex
state connectivity.State
notifyChan chan struct{}
channelzID *channelz.Identifier
channelz *channelz.Channel
pubSub *grpcsync.PubSub
}
@ -527,9 +538,10 @@ func (csm *connectivityStateManager) updateState(state connectivity.State) {
return
}
csm.state = state
csm.channelz.ChannelMetrics.State.Store(&state)
csm.pubSub.Publish(state)
channelz.Infof(logger, csm.channelzID, "Channel Connectivity change to %v", state)
channelz.Infof(logger, csm.channelz, "Channel Connectivity change to %v", state)
if csm.notifyChan != nil {
// There are other goroutines waiting on this channel.
close(csm.notifyChan)
@ -583,20 +595,20 @@ type ClientConn struct {
cancel context.CancelFunc // Cancelled on close.
// The following are initialized at dial time, and are read-only after that.
target string // User's dial target.
parsedTarget resolver.Target // See parseTargetAndFindResolver().
authority string // See determineAuthority().
dopts dialOptions // Default and user specified dial options.
channelzID *channelz.Identifier // Channelz identifier for the channel.
resolverBuilder resolver.Builder // See parseTargetAndFindResolver().
idlenessMgr *idle.Manager
target string // User's dial target.
parsedTarget resolver.Target // See initParsedTargetAndResolverBuilder().
authority string // See initAuthority().
dopts dialOptions // Default and user specified dial options.
channelz *channelz.Channel // Channelz object.
resolverBuilder resolver.Builder // See initParsedTargetAndResolverBuilder().
idlenessMgr *idle.Manager
metricsRecorderList *stats.MetricsRecorderList
// The following provide their own synchronization, and therefore don't
// require cc.mu to be held to access them.
csMgr *connectivityStateManager
pickerWrapper *pickerWrapper
safeConfigSelector iresolver.SafeConfigSelector
czData *channelzData
retryThrottler atomic.Value // Updated from service config.
// mu protects the following fields.
@ -620,11 +632,6 @@ type ClientConn struct {
// WaitForStateChange waits until the connectivity.State of ClientConn changes from sourceState or
// ctx expires. A true value is returned in former case and false in latter.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool {
ch := cc.csMgr.getNotifyChan()
if cc.csMgr.getState() != sourceState {
@ -639,11 +646,6 @@ func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connec
}
// GetState returns the connectivity.State of ClientConn.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a later
// release.
func (cc *ClientConn) GetState() connectivity.State {
return cc.csMgr.getState()
}
@ -690,7 +692,7 @@ func (cc *ClientConn) waitForResolvedAddrs(ctx context.Context) error {
var emptyServiceConfig *ServiceConfig
func init() {
cfg := parseServiceConfig("{}")
cfg := parseServiceConfig("{}", defaultMaxCallAttempts)
if cfg.Err != nil {
panic(fmt.Sprintf("impossible error parsing empty service config: %v", cfg.Err))
}
@ -707,15 +709,15 @@ func init() {
}
}
func (cc *ClientConn) maybeApplyDefaultServiceConfig(addrs []resolver.Address) {
func (cc *ClientConn) maybeApplyDefaultServiceConfig() {
if cc.sc != nil {
cc.applyServiceConfigAndBalancer(cc.sc, nil, addrs)
cc.applyServiceConfigAndBalancer(cc.sc, nil)
return
}
if cc.dopts.defaultServiceConfig != nil {
cc.applyServiceConfigAndBalancer(cc.dopts.defaultServiceConfig, &defaultConfigSelector{cc.dopts.defaultServiceConfig}, addrs)
cc.applyServiceConfigAndBalancer(cc.dopts.defaultServiceConfig, &defaultConfigSelector{cc.dopts.defaultServiceConfig})
} else {
cc.applyServiceConfigAndBalancer(emptyServiceConfig, &defaultConfigSelector{emptyServiceConfig}, addrs)
cc.applyServiceConfigAndBalancer(emptyServiceConfig, &defaultConfigSelector{emptyServiceConfig})
}
}
@ -733,7 +735,7 @@ func (cc *ClientConn) updateResolverStateAndUnlock(s resolver.State, err error)
// May need to apply the initial service config in case the resolver
// doesn't support service configs, or doesn't provide a service config
// with the new addresses.
cc.maybeApplyDefaultServiceConfig(nil)
cc.maybeApplyDefaultServiceConfig()
cc.balancerWrapper.resolverError(err)
@ -744,10 +746,10 @@ func (cc *ClientConn) updateResolverStateAndUnlock(s resolver.State, err error)
var ret error
if cc.dopts.disableServiceConfig {
channelz.Infof(logger, cc.channelzID, "ignoring service config from resolver (%v) and applying the default because service config is disabled", s.ServiceConfig)
cc.maybeApplyDefaultServiceConfig(s.Addresses)
channelz.Infof(logger, cc.channelz, "ignoring service config from resolver (%v) and applying the default because service config is disabled", s.ServiceConfig)
cc.maybeApplyDefaultServiceConfig()
} else if s.ServiceConfig == nil {
cc.maybeApplyDefaultServiceConfig(s.Addresses)
cc.maybeApplyDefaultServiceConfig()
// TODO: do we need to apply a failing LB policy if there is no
// default, per the error handling design?
} else {
@ -755,12 +757,12 @@ func (cc *ClientConn) updateResolverStateAndUnlock(s resolver.State, err error)
configSelector := iresolver.GetConfigSelector(s)
if configSelector != nil {
if len(s.ServiceConfig.Config.(*ServiceConfig).Methods) != 0 {
channelz.Infof(logger, cc.channelzID, "method configs in service config will be ignored due to presence of config selector")
channelz.Infof(logger, cc.channelz, "method configs in service config will be ignored due to presence of config selector")
}
} else {
configSelector = &defaultConfigSelector{sc}
}
cc.applyServiceConfigAndBalancer(sc, configSelector, s.Addresses)
cc.applyServiceConfigAndBalancer(sc, configSelector)
} else {
ret = balancer.ErrBadResolverState
if cc.sc == nil {
@ -775,7 +777,7 @@ func (cc *ClientConn) updateResolverStateAndUnlock(s resolver.State, err error)
var balCfg serviceconfig.LoadBalancingConfig
if cc.sc != nil && cc.sc.lbConfig != nil {
balCfg = cc.sc.lbConfig.cfg
balCfg = cc.sc.lbConfig
}
bw := cc.balancerWrapper
cc.mu.Unlock()
@ -806,17 +808,11 @@ func (cc *ClientConn) applyFailingLBLocked(sc *serviceconfig.ParseResult) {
cc.csMgr.updateState(connectivity.TransientFailure)
}
// Makes a copy of the input addresses slice and clears out the balancer
// attributes field. Addresses are passed during subconn creation and address
// update operations. In both cases, we will clear the balancer attributes by
// calling this function, and therefore we will be able to use the Equal method
// provided by the resolver.Address type for comparison.
func copyAddressesWithoutBalancerAttributes(in []resolver.Address) []resolver.Address {
// Makes a copy of the input addresses slice. Addresses are passed during
// subconn creation and address update operations.
func copyAddresses(in []resolver.Address) []resolver.Address {
out := make([]resolver.Address, len(in))
for i := range in {
out[i] = in[i]
out[i].BalancerAttributes = nil
}
copy(out, in)
return out
}
@ -829,27 +825,25 @@ func (cc *ClientConn) newAddrConnLocked(addrs []resolver.Address, opts balancer.
}
ac := &addrConn{
state: connectivity.Idle,
cc: cc,
addrs: copyAddressesWithoutBalancerAttributes(addrs),
scopts: opts,
dopts: cc.dopts,
czData: new(channelzData),
resetBackoff: make(chan struct{}),
stateChan: make(chan struct{}),
state: connectivity.Idle,
cc: cc,
addrs: copyAddresses(addrs),
scopts: opts,
dopts: cc.dopts,
channelz: channelz.RegisterSubChannel(cc.channelz, ""),
resetBackoff: make(chan struct{}),
stateReadyChan: make(chan struct{}),
}
ac.ctx, ac.cancel = context.WithCancel(cc.ctx)
// Start with our address set to the first address; this may be updated if
// we connect to different addresses.
ac.channelz.ChannelMetrics.Target.Store(&addrs[0].Addr)
var err error
ac.channelzID, err = channelz.RegisterSubChannel(ac, cc.channelzID, "")
if err != nil {
return nil, err
}
channelz.AddTraceEvent(logger, ac.channelzID, 0, &channelz.TraceEventDesc{
channelz.AddTraceEvent(logger, ac.channelz, 0, &channelz.TraceEvent{
Desc: "Subchannel created",
Severity: channelz.CtInfo,
Parent: &channelz.TraceEventDesc{
Desc: fmt.Sprintf("Subchannel(id:%d) created", ac.channelzID.Int()),
Parent: &channelz.TraceEvent{
Desc: fmt.Sprintf("Subchannel(id:%d) created", ac.channelz.ID),
Severity: channelz.CtInfo,
},
})
@ -872,38 +866,27 @@ func (cc *ClientConn) removeAddrConn(ac *addrConn, err error) {
ac.tearDown(err)
}
func (cc *ClientConn) channelzMetric() *channelz.ChannelInternalMetric {
return &channelz.ChannelInternalMetric{
State: cc.GetState(),
Target: cc.target,
CallsStarted: atomic.LoadInt64(&cc.czData.callsStarted),
CallsSucceeded: atomic.LoadInt64(&cc.czData.callsSucceeded),
CallsFailed: atomic.LoadInt64(&cc.czData.callsFailed),
LastCallStartedTimestamp: time.Unix(0, atomic.LoadInt64(&cc.czData.lastCallStartedTime)),
}
}
// Target returns the target string of the ClientConn.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func (cc *ClientConn) Target() string {
return cc.target
}
// CanonicalTarget returns the canonical target string of the ClientConn.
func (cc *ClientConn) CanonicalTarget() string {
return cc.parsedTarget.String()
}
func (cc *ClientConn) incrCallsStarted() {
atomic.AddInt64(&cc.czData.callsStarted, 1)
atomic.StoreInt64(&cc.czData.lastCallStartedTime, time.Now().UnixNano())
cc.channelz.ChannelMetrics.CallsStarted.Add(1)
cc.channelz.ChannelMetrics.LastCallStartedTimestamp.Store(time.Now().UnixNano())
}
func (cc *ClientConn) incrCallsSucceeded() {
atomic.AddInt64(&cc.czData.callsSucceeded, 1)
cc.channelz.ChannelMetrics.CallsSucceeded.Add(1)
}
func (cc *ClientConn) incrCallsFailed() {
atomic.AddInt64(&cc.czData.callsFailed, 1)
cc.channelz.ChannelMetrics.CallsFailed.Add(1)
}
// connect starts creating a transport.
@ -925,32 +908,37 @@ func (ac *addrConn) connect() error {
ac.mu.Unlock()
return nil
}
ac.mu.Unlock()
ac.resetTransport()
ac.resetTransportAndUnlock()
return nil
}
func equalAddresses(a, b []resolver.Address) bool {
if len(a) != len(b) {
return false
}
for i, v := range a {
if !v.Equal(b[i]) {
return false
}
}
return true
// equalAddressIgnoringBalAttributes returns true if a and b are considered equal.
// This is different from the Equal method on the resolver.Address type which
// considers all fields to determine equality. Here, we only consider fields
// that are meaningful to the subConn.
func equalAddressIgnoringBalAttributes(a, b *resolver.Address) bool {
return a.Addr == b.Addr && a.ServerName == b.ServerName &&
a.Attributes.Equal(b.Attributes) &&
a.Metadata == b.Metadata
}
func equalAddressesIgnoringBalAttributes(a, b []resolver.Address) bool {
return slices.EqualFunc(a, b, func(a, b resolver.Address) bool { return equalAddressIgnoringBalAttributes(&a, &b) })
}
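The new helpers above lean on slices.EqualFunc from the Go 1.21+ standard library. A toy, self-contained illustration of the same pattern, with a made-up addr type standing in for resolver.Address:

package main

import (
    "fmt"
    "slices"
)

type addr struct {
    host string
    meta string // stand-in for balancer attributes, deliberately ignored below
}

func main() {
    a := []addr{{host: "10.0.0.1:443", meta: "weight=1"}}
    b := []addr{{host: "10.0.0.1:443", meta: "weight=9"}}

    // Element-wise comparison with a custom predicate, mirroring how
    // equalAddressesIgnoringBalAttributes compares resolver.Address values.
    equal := slices.EqualFunc(a, b, func(x, y addr) bool { return x.host == y.host })
    fmt.Println(equal) // true: equal lengths and every predicate call returned true
}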
// updateAddrs updates ac.addrs with the new addresses list and handles active
// connections or connection attempts.
func (ac *addrConn) updateAddrs(addrs []resolver.Address) {
ac.mu.Lock()
channelz.Infof(logger, ac.channelzID, "addrConn: updateAddrs curAddr: %v, addrs: %v", pretty.ToJSON(ac.curAddr), pretty.ToJSON(addrs))
addrs = copyAddresses(addrs)
limit := len(addrs)
if limit > 5 {
limit = 5
}
channelz.Infof(logger, ac.channelz, "addrConn: updateAddrs addrs (%d of %d): %v", limit, len(addrs), addrs[:limit])
addrs = copyAddressesWithoutBalancerAttributes(addrs)
if equalAddresses(ac.addrs, addrs) {
ac.mu.Lock()
if equalAddressesIgnoringBalAttributes(ac.addrs, addrs) {
ac.mu.Unlock()
return
}
@ -969,7 +957,7 @@ func (ac *addrConn) updateAddrs(addrs []resolver.Address) {
// Try to find the connected address.
for _, a := range addrs {
a.ServerName = ac.cc.getServerName(a)
if a.Equal(ac.curAddr) {
if equalAddressIgnoringBalAttributes(&a, &ac.curAddr) {
// We are connected to a valid address, so do nothing but
// update the addresses.
ac.mu.Unlock()
@ -995,11 +983,9 @@ func (ac *addrConn) updateAddrs(addrs []resolver.Address) {
ac.updateConnectivityState(connectivity.Idle, nil)
}
ac.mu.Unlock()
// Since we were connecting/connected, we should start a new connection
// attempt.
go ac.resetTransport()
go ac.resetTransportAndUnlock()
}
// getServerName determines the serverName to be used in the connection
@ -1067,7 +1053,7 @@ func (cc *ClientConn) getTransport(ctx context.Context, failfast bool, method st
})
}
func (cc *ClientConn) applyServiceConfigAndBalancer(sc *ServiceConfig, configSelector iresolver.ConfigSelector, addrs []resolver.Address) {
func (cc *ClientConn) applyServiceConfigAndBalancer(sc *ServiceConfig, configSelector iresolver.ConfigSelector) {
if sc == nil {
// should never reach here.
return
@ -1088,17 +1074,6 @@ func (cc *ClientConn) applyServiceConfigAndBalancer(sc *ServiceConfig, configSel
} else {
cc.retryThrottler.Store((*retryThrottler)(nil))
}
var newBalancerName string
if cc.sc == nil || (cc.sc.lbConfig == nil && cc.sc.LB == nil) {
// No service config or no LB policy specified in config.
newBalancerName = PickFirstBalancerName
} else if cc.sc.lbConfig != nil {
newBalancerName = cc.sc.lbConfig.name
} else { // cc.sc.LB != nil
newBalancerName = *cc.sc.LB
}
cc.balancerWrapper.switchTo(newBalancerName)
}
func (cc *ClientConn) resolveNow(o resolver.ResolveNowOptions) {
@ -1174,7 +1149,7 @@ func (cc *ClientConn) Close() error {
// TraceEvent needs to be called before RemoveEntry, as TraceEvent may add
// trace reference to the entity being deleted, and thus prevent it from being
// deleted right away.
channelz.RemoveEntry(cc.channelzID)
channelz.RemoveEntry(cc.channelz.ID)
return nil
}
@ -1195,19 +1170,22 @@ type addrConn struct {
// is received, transport is closed, ac has been torn down).
transport transport.ClientTransport // The current transport.
// This mutex is used on the RPC path, so its usage should be minimized as
// much as possible.
// TODO: Find a lock-free way to retrieve the transport and state from the
// addrConn.
mu sync.Mutex
curAddr resolver.Address // The current address.
addrs []resolver.Address // All addresses that the resolver resolved to.
// Use updateConnectivityState for updating addrConn's connectivity state.
state connectivity.State
stateChan chan struct{} // closed and recreated on every state change.
state connectivity.State
stateReadyChan chan struct{} // closed and recreated on every READY state change.
backoffIdx int // Needs to be stateful for resetConnectBackoff.
resetBackoff chan struct{}
channelzID *channelz.Identifier
czData *channelzData
channelz *channelz.SubChannel
}
// Note: this requires a lock on ac.mu.
@ -1215,16 +1193,14 @@ func (ac *addrConn) updateConnectivityState(s connectivity.State, lastErr error)
if ac.state == s {
return
}
// When changing states, reset the state change channel.
close(ac.stateChan)
ac.stateChan = make(chan struct{})
ac.state = s
ac.channelz.ChannelMetrics.State.Store(&s)
if lastErr == nil {
channelz.Infof(logger, ac.channelzID, "Subchannel Connectivity change to %v", s)
channelz.Infof(logger, ac.channelz, "Subchannel Connectivity change to %v", s)
} else {
channelz.Infof(logger, ac.channelzID, "Subchannel Connectivity change to %v, last error: %s", s, lastErr)
channelz.Infof(logger, ac.channelz, "Subchannel Connectivity change to %v, last error: %s", s, lastErr)
}
ac.acbw.updateState(s, lastErr)
ac.acbw.updateState(s, ac.curAddr, lastErr)
}
// adjustParams updates parameters used to create transports upon
@ -1241,8 +1217,10 @@ func (ac *addrConn) adjustParams(r transport.GoAwayReason) {
}
}
func (ac *addrConn) resetTransport() {
ac.mu.Lock()
// resetTransportAndUnlock unconditionally connects the addrConn.
//
// ac.mu must be held by the caller, and this function will guarantee it is released.
func (ac *addrConn) resetTransportAndUnlock() {
acCtx := ac.ctx
if acCtx.Err() != nil {
ac.mu.Unlock()
@ -1320,6 +1298,7 @@ func (ac *addrConn) resetTransport() {
func (ac *addrConn) tryAllAddrs(ctx context.Context, addrs []resolver.Address, connectDeadline time.Time) error {
var firstConnErr error
for _, addr := range addrs {
ac.channelz.ChannelMetrics.Target.Store(&addr.Addr)
if ctx.Err() != nil {
return errConnClosing
}
@ -1335,7 +1314,7 @@ func (ac *addrConn) tryAllAddrs(ctx context.Context, addrs []resolver.Address, c
}
ac.mu.Unlock()
channelz.Infof(logger, ac.channelzID, "Subchannel picks a new address %q to connect", addr.Addr)
channelz.Infof(logger, ac.channelz, "Subchannel picks a new address %q to connect", addr.Addr)
err := ac.createTransport(ctx, addr, copts, connectDeadline)
if err == nil {
@ -1388,7 +1367,7 @@ func (ac *addrConn) createTransport(ctx context.Context, addr resolver.Address,
connectCtx, cancel := context.WithDeadline(ctx, connectDeadline)
defer cancel()
copts.ChannelzParentID = ac.channelzID
copts.ChannelzParent = ac.channelz
newTr, err := transport.NewClientTransport(connectCtx, ac.cc.ctx, addr, copts, onClose)
if err != nil {
@ -1397,7 +1376,7 @@ func (ac *addrConn) createTransport(ctx context.Context, addr resolver.Address,
}
// newTr is either nil, or closed.
hcancel()
channelz.Warningf(logger, ac.channelzID, "grpc: addrConn.createTransport failed to connect to %s. Err: %v", addr, err)
channelz.Warningf(logger, ac.channelz, "grpc: addrConn.createTransport failed to connect to %s. Err: %v", addr, err)
return err
}
@ -1469,7 +1448,7 @@ func (ac *addrConn) startHealthCheck(ctx context.Context) {
// The health package is not imported to set health check function.
//
// TODO: add a link to the health check doc in the error message.
channelz.Error(logger, ac.channelzID, "Health check is requested but health check function is not set.")
channelz.Error(logger, ac.channelz, "Health check is requested but health check function is not set.")
return
}
@ -1499,9 +1478,9 @@ func (ac *addrConn) startHealthCheck(ctx context.Context) {
err := ac.cc.dopts.healthCheckFunc(ctx, newStream, setConnectivityState, healthCheckConfig.ServiceName)
if err != nil {
if status.Code(err) == codes.Unimplemented {
channelz.Error(logger, ac.channelzID, "Subchannel health check is unimplemented at server side, thus health check is disabled")
channelz.Error(logger, ac.channelz, "Subchannel health check is unimplemented at server side, thus health check is disabled")
} else {
channelz.Errorf(logger, ac.channelzID, "Health checking failed: %v", err)
channelz.Errorf(logger, ac.channelz, "Health checking failed: %v", err)
}
}
}()
@ -1531,7 +1510,7 @@ func (ac *addrConn) getReadyTransport() transport.ClientTransport {
func (ac *addrConn) getTransport(ctx context.Context) (transport.ClientTransport, error) {
for ctx.Err() == nil {
ac.mu.Lock()
t, state, sc := ac.transport, ac.state, ac.stateChan
t, state, sc := ac.transport, ac.state, ac.stateReadyChan
ac.mu.Unlock()
if state == connectivity.Ready {
return t, nil
@ -1566,18 +1545,18 @@ func (ac *addrConn) tearDown(err error) {
ac.cancel()
ac.curAddr = resolver.Address{}
channelz.AddTraceEvent(logger, ac.channelzID, 0, &channelz.TraceEventDesc{
channelz.AddTraceEvent(logger, ac.channelz, 0, &channelz.TraceEvent{
Desc: "Subchannel deleted",
Severity: channelz.CtInfo,
Parent: &channelz.TraceEventDesc{
Desc: fmt.Sprintf("Subchannel(id:%d) deleted", ac.channelzID.Int()),
Parent: &channelz.TraceEvent{
Desc: fmt.Sprintf("Subchannel(id:%d) deleted", ac.channelz.ID),
Severity: channelz.CtInfo,
},
})
// TraceEvent needs to be called before RemoveEntry, as TraceEvent may add
// trace reference to the entity being deleted, and thus prevent it from
// being deleted right away.
channelz.RemoveEntry(ac.channelzID)
channelz.RemoveEntry(ac.channelz.ID)
ac.mu.Unlock()
// We have to release the lock before the call to GracefulClose/Close here
@ -1594,7 +1573,7 @@ func (ac *addrConn) tearDown(err error) {
} else {
// Hard close the transport when the channel is entering idle or is
// being shutdown. In the case where the channel is being shutdown,
// closing of transports is also taken care of by cancelation of cc.ctx.
// closing of transports is also taken care of by cancellation of cc.ctx.
// But in the case where the channel is entering idle, we need to
// explicitly close the transports here. Instead of distinguishing
// between these two cases, it is simpler to close the transport
@ -1604,39 +1583,6 @@ func (ac *addrConn) tearDown(err error) {
}
}
func (ac *addrConn) getState() connectivity.State {
ac.mu.Lock()
defer ac.mu.Unlock()
return ac.state
}
func (ac *addrConn) ChannelzMetric() *channelz.ChannelInternalMetric {
ac.mu.Lock()
addr := ac.curAddr.Addr
ac.mu.Unlock()
return &channelz.ChannelInternalMetric{
State: ac.getState(),
Target: addr,
CallsStarted: atomic.LoadInt64(&ac.czData.callsStarted),
CallsSucceeded: atomic.LoadInt64(&ac.czData.callsSucceeded),
CallsFailed: atomic.LoadInt64(&ac.czData.callsFailed),
LastCallStartedTimestamp: time.Unix(0, atomic.LoadInt64(&ac.czData.lastCallStartedTime)),
}
}
func (ac *addrConn) incrCallsStarted() {
atomic.AddInt64(&ac.czData.callsStarted, 1)
atomic.StoreInt64(&ac.czData.lastCallStartedTime, time.Now().UnixNano())
}
func (ac *addrConn) incrCallsSucceeded() {
atomic.AddInt64(&ac.czData.callsSucceeded, 1)
}
func (ac *addrConn) incrCallsFailed() {
atomic.AddInt64(&ac.czData.callsFailed, 1)
}
type retryThrottler struct {
max float64
thresh float64
@ -1674,12 +1620,17 @@ func (rt *retryThrottler) successfulRPC() {
}
}
type channelzChannel struct {
cc *ClientConn
func (ac *addrConn) incrCallsStarted() {
ac.channelz.ChannelMetrics.CallsStarted.Add(1)
ac.channelz.ChannelMetrics.LastCallStartedTimestamp.Store(time.Now().UnixNano())
}
func (c *channelzChannel) ChannelzMetric() *channelz.ChannelInternalMetric {
return c.cc.channelzMetric()
func (ac *addrConn) incrCallsSucceeded() {
ac.channelz.ChannelMetrics.CallsSucceeded.Add(1)
}
func (ac *addrConn) incrCallsFailed() {
ac.channelz.ChannelMetrics.CallsFailed.Add(1)
}
// ErrClientConnTimeout indicates that the ClientConn cannot establish the
@ -1713,22 +1664,19 @@ func (cc *ClientConn) connectionError() error {
return cc.lastConnectionError
}
// parseTargetAndFindResolver parses the user's dial target and stores the
// parsed target in `cc.parsedTarget`.
// initParsedTargetAndResolverBuilder parses the user's dial target and stores
// the parsed target in `cc.parsedTarget`.
//
// The resolver to use is determined based on the scheme in the parsed target
// and the same is stored in `cc.resolverBuilder`.
//
// Doesn't grab cc.mu as this method is expected to be called only at Dial time.
func (cc *ClientConn) parseTargetAndFindResolver() error {
channelz.Infof(logger, cc.channelzID, "original dial target is: %q", cc.target)
func (cc *ClientConn) initParsedTargetAndResolverBuilder() error {
logger.Infof("original dial target is: %q", cc.target)
var rb resolver.Builder
parsedTarget, err := parseTarget(cc.target)
if err != nil {
channelz.Infof(logger, cc.channelzID, "dial target %q parse failed: %v", cc.target, err)
} else {
channelz.Infof(logger, cc.channelzID, "parsed dial target is: %#v", parsedTarget)
if err == nil {
rb = cc.getResolver(parsedTarget.URL.Scheme)
if rb != nil {
cc.parsedTarget = parsedTarget
@ -1740,17 +1688,19 @@ func (cc *ClientConn) parseTargetAndFindResolver() error {
// We are here because the user's dial target did not contain a scheme or
// specified an unregistered scheme. We should fallback to the default
// scheme, except when a custom dialer is specified in which case, we should
// always use passthrough scheme.
defScheme := resolver.GetDefaultScheme()
channelz.Infof(logger, cc.channelzID, "fallback to scheme %q", defScheme)
// always use passthrough scheme. For either case, we need to respect any overridden
// global defaults set by the user.
defScheme := cc.dopts.defaultScheme
if internal.UserSetDefaultScheme {
defScheme = resolver.GetDefaultScheme()
}
canonicalTarget := defScheme + ":///" + cc.target
parsedTarget, err = parseTarget(canonicalTarget)
if err != nil {
channelz.Infof(logger, cc.channelzID, "dial target %q parse failed: %v", canonicalTarget, err)
return err
}
channelz.Infof(logger, cc.channelzID, "parsed dial target is: %+v", parsedTarget)
rb = cc.getResolver(parsedTarget.URL.Scheme)
if rb == nil {
return fmt.Errorf("could not get resolver for default scheme: %q", parsedTarget.URL.Scheme)
@ -1772,6 +1722,8 @@ func parseTarget(target string) (resolver.Target, error) {
return resolver.Target{URL: *u}, nil
}
// encodeAuthority escapes the authority string based on valid chars defined in
// https://datatracker.ietf.org/doc/html/rfc3986#section-3.2.
func encodeAuthority(authority string) string {
const upperhex = "0123456789ABCDEF"
@ -1788,7 +1740,7 @@ func encodeAuthority(authority string) string {
return false
case '!', '$', '&', '\'', '(', ')', '*', '+', ',', ';', '=': // Subdelim characters
return false
case ':', '[', ']', '@': // Authority related delimeters
case ':', '[', ']', '@': // Authority related delimiters
return false
}
// Everything else must be escaped.
@ -1838,7 +1790,7 @@ func encodeAuthority(authority string) string {
// credentials do not match the authority configured through the dial option.
//
// Doesn't grab cc.mu as this method is expected to be called only at Dial time.
func (cc *ClientConn) determineAuthority() error {
func (cc *ClientConn) initAuthority() error {
dopts := cc.dopts
// Historically, we had two options for users to specify the serverName or
// authority for a channel. One was through the transport credentials
@ -1871,6 +1823,5 @@ func (cc *ClientConn) determineAuthority() error {
} else {
cc.authority = encodeAuthority(endpoint)
}
channelz.Infof(logger, cc.channelzID, "Channel authority set to %q", cc.authority)
return nil
}

View File

@ -21,18 +21,73 @@ package grpc
import (
"google.golang.org/grpc/encoding"
_ "google.golang.org/grpc/encoding/proto" // to register the Codec for "proto"
"google.golang.org/grpc/mem"
)
// baseCodec contains the functionality of both Codec and encoding.Codec, but
// omits the name/string, which vary between the two and are not needed for
// anything besides the registry in the encoding package.
// baseCodec captures the new encoding.CodecV2 interface without the Name
// function, allowing it to be implemented by older Codec and encoding.Codec
// implementations. The omitted Name function is only needed for the registry in
// the encoding package and is not part of the core functionality.
type baseCodec interface {
Marshal(v any) ([]byte, error)
Unmarshal(data []byte, v any) error
Marshal(v any) (mem.BufferSlice, error)
Unmarshal(data mem.BufferSlice, v any) error
}
var _ baseCodec = Codec(nil)
var _ baseCodec = encoding.Codec(nil)
// getCodec returns an encoding.CodecV2 for the codec of the given name (if
// registered). It first checks the V1 registry with encoding.GetCodec and, if a
// Codec is registered there, wraps it with newCodecV1Bridge to turn it into an
// encoding.CodecV2. Otherwise it returns the result of encoding.GetCodecV2,
// which is nil if nothing is registered under that name.
func getCodec(name string) encoding.CodecV2 {
if codecV1 := encoding.GetCodec(name); codecV1 != nil {
return newCodecV1Bridge(codecV1)
}
return encoding.GetCodecV2(name)
}
func newCodecV0Bridge(c Codec) baseCodec {
return codecV0Bridge{codec: c}
}
func newCodecV1Bridge(c encoding.Codec) encoding.CodecV2 {
return codecV1Bridge{
codecV0Bridge: codecV0Bridge{codec: c},
name: c.Name(),
}
}
var _ baseCodec = codecV0Bridge{}
type codecV0Bridge struct {
codec interface {
Marshal(v any) ([]byte, error)
Unmarshal(data []byte, v any) error
}
}
func (c codecV0Bridge) Marshal(v any) (mem.BufferSlice, error) {
data, err := c.codec.Marshal(v)
if err != nil {
return nil, err
}
return mem.BufferSlice{mem.NewBuffer(&data, nil)}, nil
}
func (c codecV0Bridge) Unmarshal(data mem.BufferSlice, v any) (err error) {
return c.codec.Unmarshal(data.Materialize(), v)
}
var _ encoding.CodecV2 = codecV1Bridge{}
type codecV1Bridge struct {
codecV0Bridge
name string
}
func (c codecV1Bridge) Name() string {
return c.name
}
// Codec defines the interface gRPC uses to encode and decode messages.
// Note that implementations of this interface must be thread safe;

View File

@ -1,17 +0,0 @@
#!/usr/bin/env bash
# This script serves as an example to demonstrate how to generate the gRPC-Go
# interface and the related messages from .proto file.
#
# It assumes the installation of i) Google proto buffer compiler at
# https://github.com/google/protobuf (after v2.6.1) and ii) the Go codegen
# plugin at https://github.com/golang/protobuf (after 2015-02-20). If you have
# not, please install them first.
#
# We recommend running this script at $GOPATH/src.
#
# If this is not what you need, feel free to make your own scripts. Again, this
# script is for demonstration purpose.
#
proto=$1
protoc --go_out=plugins=grpc:. $proto

View File

@ -235,7 +235,7 @@ func (c *Code) UnmarshalJSON(b []byte) error {
if ci, err := strconv.ParseUint(string(b), 10, 32); err == nil {
if ci >= _maxCode {
return fmt.Errorf("invalid code: %q", ci)
return fmt.Errorf("invalid code: %d", ci)
}
*c = Code(ci)

View File

@ -28,9 +28,9 @@ import (
"fmt"
"net"
"github.com/golang/protobuf/proto"
"google.golang.org/grpc/attributes"
icredentials "google.golang.org/grpc/internal/credentials"
"google.golang.org/protobuf/proto"
)
// PerRPCCredentials defines the common interface for the credentials which need to
@ -237,7 +237,7 @@ func ClientHandshakeInfoFromContext(ctx context.Context) ClientHandshakeInfo {
}
// CheckSecurityLevel checks if a connection's security level is greater than or equal to the specified one.
// It returns success if 1) the condition is satisified or 2) AuthInfo struct does not implement GetCommonAuthInfo() method
// It returns success if 1) the condition is satisfied or 2) AuthInfo struct does not implement GetCommonAuthInfo() method
// or 3) CommonAuthInfo.SecurityLevel has an invalid zero value. For 2) and 3), it is for the purpose of backward-compatibility.
//
// This API is experimental.

View File

@ -27,9 +27,13 @@ import (
"net/url"
"os"
"google.golang.org/grpc/grpclog"
credinternal "google.golang.org/grpc/internal/credentials"
"google.golang.org/grpc/internal/envconfig"
)
var logger = grpclog.Component("credentials")
// TLSInfo contains the auth information for a TLS authenticated connection.
// It implements the AuthInfo interface.
type TLSInfo struct {
@ -112,6 +116,22 @@ func (c *tlsCreds) ClientHandshake(ctx context.Context, authority string, rawCon
conn.Close()
return nil, nil, ctx.Err()
}
// The negotiated protocol can be either of the following:
// 1. h2: When the server supports ALPN. Only HTTP/2 can be negotiated since
// it is the only protocol advertised by the client during the handshake.
// The tls library ensures that the server chooses a protocol advertised
// by the client.
// 2. "" (empty string): If the server doesn't support ALPN. ALPN is a requirement
// for using HTTP/2 over TLS. We can terminate the connection immediately.
np := conn.ConnectionState().NegotiatedProtocol
if np == "" {
if envconfig.EnforceALPNEnabled {
conn.Close()
return nil, nil, fmt.Errorf("credentials: cannot check peer: missing selected ALPN property")
}
logger.Warningf("Allowing TLS connection to server %q with ALPN disabled. TLS connections to servers with ALPN disabled will be disallowed in future grpc-go releases", cfg.ServerName)
}
tlsInfo := TLSInfo{
State: conn.ConnectionState(),
CommonAuthInfo: CommonAuthInfo{
@ -131,8 +151,20 @@ func (c *tlsCreds) ServerHandshake(rawConn net.Conn) (net.Conn, AuthInfo, error)
conn.Close()
return nil, nil, err
}
cs := conn.ConnectionState()
// The negotiated application protocol can be empty only if the client doesn't
// support ALPN. In such cases, we can close the connection since ALPN is required
// for using HTTP/2 over TLS.
if cs.NegotiatedProtocol == "" {
if envconfig.EnforceALPNEnabled {
conn.Close()
return nil, nil, fmt.Errorf("credentials: cannot check peer: missing selected ALPN property")
} else if logger.V(2) {
logger.Info("Allowing TLS connection from client with ALPN disabled. TLS connections with ALPN disabled will be disallowed in future grpc-go releases")
}
}
tlsInfo := TLSInfo{
State: conn.ConnectionState(),
State: cs,
CommonAuthInfo: CommonAuthInfo{
SecurityLevel: PrivacyAndIntegrity,
},

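Background for the ALPN enforcement above: NegotiatedProtocol comes back empty when the peer's TLS configuration advertises no application protocol. gRPC's own credentials.NewTLS normally advertises "h2" on its own, so the failure typically involves a TLS endpoint or proxy that is not configured through gRPC. A sketch of a plain crypto/tls server config that satisfies the check; the certificate paths and port are placeholders.

package main

import (
    "crypto/tls"
    "log"
)

func main() {
    cert, err := tls.LoadX509KeyPair("server.crt", "server.key") // placeholder paths
    if err != nil {
        log.Fatal(err)
    }
    cfg := &tls.Config{
        Certificates: []tls.Certificate{cert},
        // Advertising "h2" lets ALPN select HTTP/2, so NegotiatedProtocol is
        // non-empty and the enforcement path above does not reject the peer.
        NextProtos: []string{"h2"},
    }
    ln, err := tls.Listen("tcp", ":8443", cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer ln.Close()
    // Hand ln off to a server, e.g. grpc.NewServer().Serve(ln).
}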
View File

@ -21,6 +21,7 @@ package grpc
import (
"context"
"net"
"net/url"
"time"
"google.golang.org/grpc/backoff"
@ -32,10 +33,16 @@ import (
"google.golang.org/grpc/internal/binarylog"
"google.golang.org/grpc/internal/transport"
"google.golang.org/grpc/keepalive"
"google.golang.org/grpc/mem"
"google.golang.org/grpc/resolver"
"google.golang.org/grpc/stats"
)
const (
// https://github.com/grpc/proposal/blob/master/A6-client-retries.md#limits-on-retries-and-hedges
defaultMaxCallAttempts = 5
)
func init() {
internal.AddGlobalDialOptions = func(opt ...DialOption) {
globalDialOptions = append(globalDialOptions, opt...)
@ -43,10 +50,18 @@ func init() {
internal.ClearGlobalDialOptions = func() {
globalDialOptions = nil
}
internal.AddGlobalPerTargetDialOptions = func(opt any) {
if ptdo, ok := opt.(perTargetDialOption); ok {
globalPerTargetDialOptions = append(globalPerTargetDialOptions, ptdo)
}
}
internal.ClearGlobalPerTargetDialOptions = func() {
globalPerTargetDialOptions = nil
}
internal.WithBinaryLogger = withBinaryLogger
internal.JoinDialOptions = newJoinDialOption
internal.DisableGlobalDialOptions = newDisableGlobalDialOptions
internal.WithRecvBufferPool = withRecvBufferPool
internal.WithBufferPool = withBufferPool
}
// dialOptions configure a Dial call. dialOptions are set by the DialOption
@ -68,7 +83,7 @@ type dialOptions struct {
binaryLogger binarylog.Logger
copts transport.ConnectOptions
callOptions []CallOption
channelzParentID *channelz.Identifier
channelzParent channelz.Identifier
disableServiceConfig bool
disableRetry bool
disableHealthCheck bool
@ -78,7 +93,8 @@ type dialOptions struct {
defaultServiceConfigRawJSON *string
resolvers []resolver.Builder
idleTimeout time.Duration
recvBufferPool SharedBufferPool
defaultScheme string
maxCallAttempts int
}
// DialOption configures how we set up the connection.
@ -88,6 +104,19 @@ type DialOption interface {
var globalDialOptions []DialOption
// perTargetDialOption takes a parsed target and returns a dial option to apply.
//
// This gets called after NewClient() parses the target, and allows per-target
// configuration to be set through a returned DialOption. The returned DialOption
// will not take effect if it specifies a resolver builder, as the resolver is
// already chosen while the target is parsed.
type perTargetDialOption interface {
// DialOption returns a Dial Option to apply.
DialOptionForTarget(parsedTarget url.URL) DialOption
}
var globalPerTargetDialOptions []perTargetDialOption
// EmptyDialOption does not alter the dial configuration. It can be embedded in
// another structure to build custom dial options.
//
@ -154,9 +183,7 @@ func WithSharedWriteBuffer(val bool) DialOption {
}
// WithWriteBufferSize determines how much data can be batched before doing a
// write on the wire. The corresponding memory allocation for this buffer will
// be twice the size to keep syscalls low. The default value for this buffer is
// 32KB.
// write on the wire. The default value for this buffer is 32KB.
//
// Zero or negative values will disable the write buffer such that each write
// will be on underlying connection. Note: A Send call may not directly
@ -301,6 +328,9 @@ func withBackoff(bs internalbackoff.Strategy) DialOption {
//
// Use of this feature is not recommended. For more information, please see:
// https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md
//
// Deprecated: this DialOption is not supported by NewClient.
// Will be supported throughout 1.x.
func WithBlock() DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.block = true
@ -315,10 +345,8 @@ func WithBlock() DialOption {
// Use of this feature is not recommended. For more information, please see:
// https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
// Deprecated: this DialOption is not supported by NewClient.
// Will be supported throughout 1.x.
func WithReturnConnectionError() DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.block = true
@ -388,8 +416,8 @@ func WithCredentialsBundle(b credentials.Bundle) DialOption {
// WithTimeout returns a DialOption that configures a timeout for dialing a
// ClientConn initially. This is valid if and only if WithBlock() is present.
//
// Deprecated: use DialContext instead of Dial and context.WithTimeout
// instead. Will be supported throughout 1.x.
// Deprecated: this DialOption is not supported by NewClient.
// Will be supported throughout 1.x.
func WithTimeout(d time.Duration) DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.timeout = d
@ -471,9 +499,8 @@ func withBinaryLogger(bl binarylog.Logger) DialOption {
// Use of this feature is not recommended. For more information, please see:
// https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// Deprecated: this DialOption is not supported by NewClient.
// This API may be changed or removed in a
// later release.
func FailOnNonTempDialError(f bool) DialOption {
return newFuncDialOption(func(o *dialOptions) {
@ -555,9 +582,9 @@ func WithAuthority(a string) DialOption {
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func WithChannelzParentID(id *channelz.Identifier) DialOption {
func WithChannelzParentID(c channelz.Identifier) DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.channelzParentID = id
o.channelzParent = c
})
}
@ -602,12 +629,22 @@ func WithDisableRetry() DialOption {
})
}
// MaxHeaderListSizeDialOption is a DialOption that specifies the maximum
// (uncompressed) size of header list that the client is prepared to accept.
type MaxHeaderListSizeDialOption struct {
MaxHeaderListSize uint32
}
func (o MaxHeaderListSizeDialOption) apply(do *dialOptions) {
do.copts.MaxHeaderListSize = &o.MaxHeaderListSize
}
// WithMaxHeaderListSize returns a DialOption that specifies the maximum
// (uncompressed) size of header list that the client is prepared to accept.
func WithMaxHeaderListSize(s uint32) DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.copts.MaxHeaderListSize = &s
})
return MaxHeaderListSizeDialOption{
MaxHeaderListSize: s,
}
}
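Usage sketch for the reworked option, assuming the same imports as the earlier NewClient example; the target and the 1 MiB limit are illustrative.

func dialWithHeaderLimit() (*grpc.ClientConn, error) {
    return grpc.NewClient("dns:///api.example.com:443",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        // Reject server metadata (header lists) larger than 1 MiB, uncompressed.
        grpc.WithMaxHeaderListSize(1<<20),
    )
}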
// WithDisableHealthCheck disables the LB channel health checking for all
@ -640,15 +677,17 @@ func defaultDialOptions() dialOptions {
WriteBufferSize: defaultWriteBufSize,
UseProxy: true,
UserAgent: grpcUA,
BufferPool: mem.DefaultBufferPool(),
},
bs: internalbackoff.DefaultExponential,
healthCheckFunc: internal.HealthCheckFunc,
idleTimeout: 30 * time.Minute,
recvBufferPool: nopBufferPool{},
defaultScheme: "dns",
maxCallAttempts: defaultMaxCallAttempts,
}
}
// withGetMinConnectDeadline specifies the function that clientconn uses to
// withMinConnectDeadline specifies the function that clientconn uses to
// get minConnectDeadline. This can be used to make connection attempts happen
// faster/slower.
//
@ -659,6 +698,14 @@ func withMinConnectDeadline(f func() time.Duration) DialOption {
})
}
// withDefaultScheme is used to allow Dial to use "passthrough" as the default
// name resolver, while NewClient uses "dns" otherwise.
func withDefaultScheme(s string) DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.defaultScheme = s
})
}
// WithResolvers allows a list of resolver implementations to be registered
// locally with the ClientConn without needing to be globally registered via
// resolver.Register. They will be matched against the scheme used for the
@ -694,25 +741,25 @@ func WithIdleTimeout(d time.Duration) DialOption {
})
}
// WithRecvBufferPool returns a DialOption that configures the ClientConn
// to use the provided shared buffer pool for parsing incoming messages. Depending
// on the application's workload, this could result in reduced memory allocation.
// WithMaxCallAttempts returns a DialOption that configures the maximum number
// of attempts per call (including retries and hedging) using the channel.
// Service owners may specify a higher value for these parameters, but higher
// values will be treated as equal to the maximum value by the client
// implementation. This mitigates security concerns related to the service
// config being transferred to the client via DNS.
//
// If you are unsure about how to implement a memory pool but want to utilize one,
// begin with grpc.NewSharedBufferPool.
//
// Note: The shared buffer pool feature will not be active if any of the following
// options are used: WithStatsHandler, EnableTracing, or binary logging. In such
// cases, the shared buffer pool will be ignored.
//
// Deprecated: use experimental.WithRecvBufferPool instead. Will be deleted in
// v1.60.0 or later.
func WithRecvBufferPool(bufferPool SharedBufferPool) DialOption {
return withRecvBufferPool(bufferPool)
}
func withRecvBufferPool(bufferPool SharedBufferPool) DialOption {
// A value of 5 will be used if this dial option is not set or n < 2.
func WithMaxCallAttempts(n int) DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.recvBufferPool = bufferPool
if n < 2 {
n = defaultMaxCallAttempts
}
o.maxCallAttempts = n
})
}
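Usage sketch, same assumptions as the earlier fragments; the attempt count of 3 is illustrative.

func dialWithRetryCap() (*grpc.ClientConn, error) {
    return grpc.NewClient("dns:///api.example.com:443",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        // At most 3 attempts per RPC (the original call plus retries/hedges);
        // values below 2 fall back to the default of 5, per the option above.
        grpc.WithMaxCallAttempts(3),
    )
}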
func withBufferPool(bufferPool mem.BufferPool) DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.copts.BufferPool = bufferPool
})
}

View File

@ -16,7 +16,7 @@
*
*/
//go:generate ./regenerate.sh
//go:generate ./scripts/regenerate.sh
/*
Package grpc implements an RPC system called gRPC.

View File

@ -94,7 +94,7 @@ type Codec interface {
Name() string
}
var registeredCodecs = make(map[string]Codec)
var registeredCodecs = make(map[string]any)
// RegisterCodec registers the provided Codec for use with all gRPC clients and
// servers.
@ -126,5 +126,6 @@ func RegisterCodec(codec Codec) {
//
// The content-subtype is expected to be lowercase.
func GetCodec(contentSubtype string) Codec {
return registeredCodecs[contentSubtype]
c, _ := registeredCodecs[contentSubtype].(Codec)
return c
}
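A hedged sketch of what the shared, any-typed registry means for lookups. Per the proto codec changes further down in this diff, the built-in "proto" codec now registers through RegisterCodecV2, so the V1 accessor may no longer see it.

package main

import (
    "fmt"

    "google.golang.org/grpc/encoding"
    _ "google.golang.org/grpc/encoding/proto" // registers the built-in "proto" codec
)

func main() {
    // Each accessor type-asserts the stored value to its own interface, so a
    // name registered only as a CodecV2 is invisible to the V1 lookup.
    fmt.Println(encoding.GetCodec("proto") == nil)   // expected: true after this change
    fmt.Println(encoding.GetCodecV2("proto") != nil) // expected: true
}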

81 vendor/google.golang.org/grpc/encoding/encoding_v2.go generated vendored Normal file
View File

@ -0,0 +1,81 @@
/*
*
* Copyright 2024 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package encoding
import (
"strings"
"google.golang.org/grpc/mem"
)
// CodecV2 defines the interface gRPC uses to encode and decode messages. Note
// that implementations of this interface must be thread safe; a CodecV2's
// methods can be called from concurrent goroutines.
type CodecV2 interface {
// Marshal returns the wire format of v. The buffers in the returned
// [mem.BufferSlice] must have at least one reference each, which will be freed
// by gRPC when they are no longer needed.
Marshal(v any) (out mem.BufferSlice, err error)
// Unmarshal parses the wire format into v. Note that data will be freed as soon
// as this function returns. If the codec wishes to guarantee access to the data
// after this function, it must take its own reference that it frees when it is
// no longer needed.
Unmarshal(data mem.BufferSlice, v any) error
// Name returns the name of the Codec implementation. The returned string
// will be used as part of content type in transmission. The result must be
// static; the result cannot change between calls.
Name() string
}
// RegisterCodecV2 registers the provided CodecV2 for use with all gRPC clients and
// servers.
//
// The CodecV2 will be stored and looked up by result of its Name() method, which
// should match the content-subtype of the encoding handled by the CodecV2. This
// is case-insensitive, and is stored and looked up as lowercase. If the
// result of calling Name() is an empty string, RegisterCodecV2 will panic. See
// Content-Type on
// https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests for
// more details.
//
// If both a Codec and CodecV2 are registered with the same name, the CodecV2
// will be used.
//
// NOTE: this function must only be called during initialization time (i.e. in
// an init() function), and is not thread-safe. If multiple Codecs are
// registered with the same name, the one registered last will take effect.
func RegisterCodecV2(codec CodecV2) {
if codec == nil {
panic("cannot register a nil CodecV2")
}
if codec.Name() == "" {
panic("cannot register CodecV2 with empty string result for Name()")
}
contentSubtype := strings.ToLower(codec.Name())
registeredCodecs[contentSubtype] = codec
}
// GetCodecV2 gets a registered CodecV2 by content-subtype, or nil if no CodecV2 is
// registered for the content-subtype.
//
// The content-subtype is expected to be lowercase.
func GetCodecV2(contentSubtype string) CodecV2 {
c, _ := registeredCodecs[contentSubtype].(CodecV2)
return c
}
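An illustrative custom CodecV2; the "myjson" name and the JSON wire format are made up for the sketch, and a real codec must also be safe for concurrent use.

package myjson

import (
    "encoding/json"

    "google.golang.org/grpc/encoding"
    "google.golang.org/grpc/mem"
)

type codec struct{}

func (codec) Marshal(v any) (mem.BufferSlice, error) {
    b, err := json.Marshal(v)
    if err != nil {
        return nil, err
    }
    // Wrap the plain []byte in a single-element BufferSlice, similar to the
    // V0/V1 bridging code earlier in this diff.
    return mem.BufferSlice{mem.SliceBuffer(b)}, nil
}

func (codec) Unmarshal(data mem.BufferSlice, v any) error {
    // Materialize flattens the slice of buffers into one contiguous []byte.
    return json.Unmarshal(data.Materialize(), v)
}

func (codec) Name() string { return "myjson" }

func init() { encoding.RegisterCodecV2(codec{}) }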

View File

@ -1,6 +1,6 @@
/*
*
* Copyright 2018 gRPC authors.
* Copyright 2024 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -24,6 +24,7 @@ import (
"fmt"
"google.golang.org/grpc/encoding"
"google.golang.org/grpc/mem"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/protoadapt"
)
@ -32,28 +33,51 @@ import (
const Name = "proto"
func init() {
encoding.RegisterCodec(codec{})
encoding.RegisterCodecV2(&codecV2{})
}
// codec is a Codec implementation with protobuf. It is the default codec for gRPC.
type codec struct{}
// codec is a CodecV2 implementation with protobuf. It is the default codec for
// gRPC.
type codecV2 struct{}
func (codec) Marshal(v any) ([]byte, error) {
func (c *codecV2) Marshal(v any) (data mem.BufferSlice, err error) {
vv := messageV2Of(v)
if vv == nil {
return nil, fmt.Errorf("failed to marshal, message is %T, want proto.Message", v)
return nil, fmt.Errorf("proto: failed to marshal, message is %T, want proto.Message", v)
}
return proto.Marshal(vv)
size := proto.Size(vv)
if mem.IsBelowBufferPoolingThreshold(size) {
buf, err := proto.Marshal(vv)
if err != nil {
return nil, err
}
data = append(data, mem.SliceBuffer(buf))
} else {
pool := mem.DefaultBufferPool()
buf := pool.Get(size)
if _, err := (proto.MarshalOptions{}).MarshalAppend((*buf)[:0], vv); err != nil {
pool.Put(buf)
return nil, err
}
data = append(data, mem.NewBuffer(buf, pool))
}
return data, nil
}
func (codec) Unmarshal(data []byte, v any) error {
func (c *codecV2) Unmarshal(data mem.BufferSlice, v any) (err error) {
vv := messageV2Of(v)
if vv == nil {
return fmt.Errorf("failed to unmarshal, message is %T, want proto.Message", v)
}
return proto.Unmarshal(data, vv)
buf := data.MaterializeToBuffer(mem.DefaultBufferPool())
defer buf.Free()
// TODO: Upgrade proto.Unmarshal to support mem.BufferSlice. Right now, it's not
// really possible without a major overhaul of the proto package, but the
// vtprotobuf library may be able to support this.
return proto.Unmarshal(buf.ReadOnlyData(), vv)
}
func messageV2Of(v any) proto.Message {
@ -67,6 +91,6 @@ func messageV2Of(v any) proto.Message {
return nil
}
func (codec) Name() string {
func (c *codecV2) Name() string {
return Name
}
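The size-then-append pattern used above can be tried outside gRPC with a plain byte slice; durationpb is just a convenient proto.Message for the sketch, and the buffer pool is replaced by a local make.

package main

import (
    "fmt"
    "time"

    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/types/known/durationpb"
)

func main() {
    msg := durationpb.New(3 * time.Second)

    // proto.Size lets the caller allocate (or borrow from a pool) exactly
    // enough space before marshaling.
    buf := make([]byte, 0, proto.Size(msg))

    out, err := (proto.MarshalOptions{}).MarshalAppend(buf, msg)
    if err != nil {
        panic(err)
    }
    fmt.Printf("marshaled %d bytes into a buffer of capacity %d\n", len(out), cap(out))
}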

View File

@ -0,0 +1,269 @@
/*
*
* Copyright 2024 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package stats
import (
"maps"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/internal"
)
func init() {
internal.SnapshotMetricRegistryForTesting = snapshotMetricsRegistryForTesting
}
var logger = grpclog.Component("metrics-registry")
// DefaultMetrics are the default metrics registered through global metrics
// registry. This is written to at initialization time only, and is read only
// after initialization.
var DefaultMetrics = NewMetrics()
// MetricDescriptor is the data for a registered metric.
type MetricDescriptor struct {
// The name of this metric. This name must be unique across the whole binary
// (including any per call metrics). See
// https://github.com/grpc/proposal/blob/master/A79-non-per-call-metrics-architecture.md#metric-instrument-naming-conventions
// for metric naming conventions.
Name Metric
// The description of this metric.
Description string
// The unit (e.g. entries, seconds) of this metric.
Unit string
// The required label keys for this metric. These are intended to be attached
// to metrics emitted from a stats handler.
Labels []string
// The optional label keys for this metric. These are intended to be attached
// to metrics emitted from a stats handler if configured.
OptionalLabels []string
// Whether this metric is on by default.
Default bool
// The type of metric. This is set by the metric registry, and not intended
// to be set by a component registering a metric.
Type MetricType
// Bounds are the bounds of this metric. This only applies to histogram
// metrics. If unset or set with length 0, stats handlers will fall back to
// default bounds.
Bounds []float64
}
// MetricType is the type of metric.
type MetricType int
// Type of metric supported by this instrument registry.
const (
MetricTypeIntCount MetricType = iota
MetricTypeFloatCount
MetricTypeIntHisto
MetricTypeFloatHisto
MetricTypeIntGauge
)
// Int64CountHandle is a typed handle for an int count metric. This handle
// is passed at the recording point in order to know which metric to record
// on.
type Int64CountHandle MetricDescriptor
// Descriptor returns the int64 count handle typecast to a pointer to a
// MetricDescriptor.
func (h *Int64CountHandle) Descriptor() *MetricDescriptor {
return (*MetricDescriptor)(h)
}
// Record records the int64 count value on the metrics recorder provided.
func (h *Int64CountHandle) Record(recorder MetricsRecorder, incr int64, labels ...string) {
recorder.RecordInt64Count(h, incr, labels...)
}
// Float64CountHandle is a typed handle for a float count metric. This handle is
// passed at the recording point in order to know which metric to record on.
type Float64CountHandle MetricDescriptor
// Descriptor returns the float64 count handle typecast to a pointer to a
// MetricDescriptor.
func (h *Float64CountHandle) Descriptor() *MetricDescriptor {
return (*MetricDescriptor)(h)
}
// Record records the float64 count value on the metrics recorder provided.
func (h *Float64CountHandle) Record(recorder MetricsRecorder, incr float64, labels ...string) {
recorder.RecordFloat64Count(h, incr, labels...)
}
// Int64HistoHandle is a typed handle for an int histogram metric. This handle
// is passed at the recording point in order to know which metric to record on.
type Int64HistoHandle MetricDescriptor
// Descriptor returns the int64 histo handle typecast to a pointer to a
// MetricDescriptor.
func (h *Int64HistoHandle) Descriptor() *MetricDescriptor {
return (*MetricDescriptor)(h)
}
// Record records the int64 histo value on the metrics recorder provided.
func (h *Int64HistoHandle) Record(recorder MetricsRecorder, incr int64, labels ...string) {
recorder.RecordInt64Histo(h, incr, labels...)
}
// Float64HistoHandle is a typed handle for a float histogram metric. This
// handle is passed at the recording point in order to know which metric to
// record on.
type Float64HistoHandle MetricDescriptor
// Descriptor returns the float64 histo handle typecast to a pointer to a
// MetricDescriptor.
func (h *Float64HistoHandle) Descriptor() *MetricDescriptor {
return (*MetricDescriptor)(h)
}
// Record records the float64 histo value on the metrics recorder provided.
func (h *Float64HistoHandle) Record(recorder MetricsRecorder, incr float64, labels ...string) {
recorder.RecordFloat64Histo(h, incr, labels...)
}
// Int64GaugeHandle is a typed handle for an int gauge metric. This handle is
// passed at the recording point in order to know which metric to record on.
type Int64GaugeHandle MetricDescriptor
// Descriptor returns the int64 gauge handle typecast to a pointer to a
// MetricDescriptor.
func (h *Int64GaugeHandle) Descriptor() *MetricDescriptor {
return (*MetricDescriptor)(h)
}
// Record records the int64 gauge value on the metrics recorder provided.
func (h *Int64GaugeHandle) Record(recorder MetricsRecorder, incr int64, labels ...string) {
recorder.RecordInt64Gauge(h, incr, labels...)
}
// registeredMetrics are the registered metric descriptor names.
var registeredMetrics = make(map[Metric]bool)
// metricsRegistry contains all of the registered metrics.
//
// This is written to only at init time, and read only after that.
var metricsRegistry = make(map[Metric]*MetricDescriptor)
// DescriptorForMetric returns the MetricDescriptor from the global registry.
//
// Returns nil if MetricDescriptor not present.
func DescriptorForMetric(metric Metric) *MetricDescriptor {
return metricsRegistry[metric]
}
func registerMetric(name Metric, def bool) {
if registeredMetrics[name] {
logger.Fatalf("metric %v already registered", name)
}
registeredMetrics[name] = true
if def {
DefaultMetrics = DefaultMetrics.Add(name)
}
}
// RegisterInt64Count registers the metric description onto the global registry.
// It returns a typed handle to use when recording data.
//
// NOTE: this function must only be called during initialization time (i.e. in
// an init() function), and is not thread-safe. If multiple metrics are
// registered with the same name, this function will panic.
func RegisterInt64Count(descriptor MetricDescriptor) *Int64CountHandle {
registerMetric(descriptor.Name, descriptor.Default)
descriptor.Type = MetricTypeIntCount
descPtr := &descriptor
metricsRegistry[descriptor.Name] = descPtr
return (*Int64CountHandle)(descPtr)
}
// RegisterFloat64Count registers the metric description onto the global
// registry. It returns a typed handle to use when recording data.
//
// NOTE: this function must only be called during initialization time (i.e. in
// an init() function), and is not thread-safe. If multiple metrics are
// registered with the same name, this function will panic.
func RegisterFloat64Count(descriptor MetricDescriptor) *Float64CountHandle {
registerMetric(descriptor.Name, descriptor.Default)
descriptor.Type = MetricTypeFloatCount
descPtr := &descriptor
metricsRegistry[descriptor.Name] = descPtr
return (*Float64CountHandle)(descPtr)
}
// RegisterInt64Histo registers the metric description onto the global registry.
// It returns a typed handle to use when recording data.
//
// NOTE: this function must only be called during initialization time (i.e. in
// an init() function), and is not thread-safe. If multiple metrics are
// registered with the same name, this function will panic.
func RegisterInt64Histo(descriptor MetricDescriptor) *Int64HistoHandle {
registerMetric(descriptor.Name, descriptor.Default)
descriptor.Type = MetricTypeIntHisto
descPtr := &descriptor
metricsRegistry[descriptor.Name] = descPtr
return (*Int64HistoHandle)(descPtr)
}
// RegisterFloat64Histo registers the metric description onto the global
// registry. It returns a typed handle to use when recording data.
//
// NOTE: this function must only be called during initialization time (i.e. in
// an init() function), and is not thread-safe. If multiple metrics are
// registered with the same name, this function will panic.
func RegisterFloat64Histo(descriptor MetricDescriptor) *Float64HistoHandle {
registerMetric(descriptor.Name, descriptor.Default)
descriptor.Type = MetricTypeFloatHisto
descPtr := &descriptor
metricsRegistry[descriptor.Name] = descPtr
return (*Float64HistoHandle)(descPtr)
}
// RegisterInt64Gauge registers the metric description onto the global registry.
// It returns a typed handle to use when recording data.
//
// NOTE: this function must only be called during initialization time (i.e. in
// an init() function), and is not thread-safe. If multiple metrics are
// registered with the same name, this function will panic.
func RegisterInt64Gauge(descriptor MetricDescriptor) *Int64GaugeHandle {
registerMetric(descriptor.Name, descriptor.Default)
descriptor.Type = MetricTypeIntGauge
descPtr := &descriptor
metricsRegistry[descriptor.Name] = descPtr
return (*Int64GaugeHandle)(descPtr)
}
// snapshotMetricsRegistryForTesting snapshots the global data of the metrics
// registry. Returns a cleanup function that sets the metrics registry to its
// original state.
func snapshotMetricsRegistryForTesting() func() {
oldDefaultMetrics := DefaultMetrics
oldRegisteredMetrics := registeredMetrics
oldMetricsRegistry := metricsRegistry
registeredMetrics = make(map[Metric]bool)
metricsRegistry = make(map[Metric]*MetricDescriptor)
maps.Copy(registeredMetrics, oldRegisteredMetrics)
maps.Copy(metricsRegistry, oldMetricsRegistry)
return func() {
DefaultMetrics = oldDefaultMetrics
registeredMetrics = oldRegisteredMetrics
metricsRegistry = oldMetricsRegistry
}
}
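Taken together, the registry above is meant to be populated from init-time code: a component registers a descriptor once, keeps the returned typed handle, and later records through whatever MetricsRecorder the installed stats handler passes in. A hedged sketch; the import path and the metric names are assumptions for illustration, not taken from this diff:

package example

import (
	estats "google.golang.org/grpc/experimental/stats" // assumed import path
)

// Registered at package init time, as the registry requires; the descriptor
// values here are hypothetical.
var connCountHandle = estats.RegisterInt64Count(estats.MetricDescriptor{
	Name:        "example.client.connections",
	Description: "Number of connections opened by the example component.",
	Unit:        "connection",
	Labels:      []string{"grpc.target"},
	Default:     false,
})

// recordConnection is called at a recording point where a stats handler has
// supplied the MetricsRecorder; the label values must line up with Labels.
func recordConnection(recorder estats.MetricsRecorder, target string) {
	connCountHandle.Record(recorder, 1, target)
}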

View File

@ -0,0 +1,114 @@
/*
*
* Copyright 2024 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
// Package stats contains experimental metrics/stats APIs.
package stats
import "maps"
// MetricsRecorder records on metrics derived from metric registry.
type MetricsRecorder interface {
// RecordInt64Count records the measurement alongside labels on the int
// count associated with the provided handle.
RecordInt64Count(handle *Int64CountHandle, incr int64, labels ...string)
// RecordFloat64Count records the measurement alongside labels on the float
// count associated with the provided handle.
RecordFloat64Count(handle *Float64CountHandle, incr float64, labels ...string)
// RecordInt64Histo records the measurement alongside labels on the int
// histo associated with the provided handle.
RecordInt64Histo(handle *Int64HistoHandle, incr int64, labels ...string)
// RecordFloat64Histo records the measurement alongside labels on the float
// histo associated with the provided handle.
RecordFloat64Histo(handle *Float64HistoHandle, incr float64, labels ...string)
// RecordInt64Gauge records the measurement alongside labels on the int
// gauge associated with the provided handle.
RecordInt64Gauge(handle *Int64GaugeHandle, incr int64, labels ...string)
}
// Metric is an identifier for a metric.
type Metric string
// Metrics is a set of metrics to record. Once created, Metrics is immutable;
// however, Add and Remove can make copies with specific metrics added or
// removed, respectively.
//
// Do not construct directly; use NewMetrics instead.
type Metrics struct {
// metrics are the set of metrics to initialize.
metrics map[Metric]bool
}
// NewMetrics returns a Metrics containing the provided metrics.
func NewMetrics(metrics ...Metric) *Metrics {
newMetrics := make(map[Metric]bool)
for _, metric := range metrics {
newMetrics[metric] = true
}
return &Metrics{
metrics: newMetrics,
}
}
// Metrics returns the metrics set. The returned map is read-only and must not
// be modified.
func (m *Metrics) Metrics() map[Metric]bool {
return m.metrics
}
// Add adds the metrics to the metrics set and returns a new copy with the
// additional metrics.
func (m *Metrics) Add(metrics ...Metric) *Metrics {
newMetrics := make(map[Metric]bool)
for metric := range m.metrics {
newMetrics[metric] = true
}
for _, metric := range metrics {
newMetrics[metric] = true
}
return &Metrics{
metrics: newMetrics,
}
}
// Join joins the metrics passed in with the metrics set, and returns a new copy
// with the merged metrics.
func (m *Metrics) Join(metrics *Metrics) *Metrics {
newMetrics := make(map[Metric]bool)
maps.Copy(newMetrics, m.metrics)
maps.Copy(newMetrics, metrics.metrics)
return &Metrics{
metrics: newMetrics,
}
}
// Remove removes the metrics from the metrics set and returns a new copy with
// the metrics removed.
func (m *Metrics) Remove(metrics ...Metric) *Metrics {
newMetrics := make(map[Metric]bool)
for metric := range m.metrics {
newMetrics[metric] = true
}
for _, metric := range metrics {
delete(newMetrics, metric)
}
return &Metrics{
metrics: newMetrics,
}
}
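Because Add, Remove, and Join all return fresh copies, a metrics set can be derived from DefaultMetrics without mutating it. A small sketch using the same assumed import path as above and hypothetical metric names:

package example

import estats "google.golang.org/grpc/experimental/stats" // assumed import path

func metricSet() *estats.Metrics {
	// Each call returns a new copy; estats.DefaultMetrics itself is untouched.
	set := estats.DefaultMetrics.Add("example.client.connections") // hypothetical name
	set = set.Remove("example.client.retries")                     // hypothetical name
	// Join merges two sets into a third copy.
	return set.Join(estats.NewMetrics("example.server.handled")) // hypothetical name
}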

View File

@ -20,8 +20,6 @@ package grpclog
import (
"fmt"
"google.golang.org/grpc/internal/grpclog"
)
// componentData records the settings for a component.
@ -33,22 +31,22 @@ var cache = map[string]*componentData{}
func (c *componentData) InfoDepth(depth int, args ...any) {
args = append([]any{"[" + string(c.name) + "]"}, args...)
grpclog.InfoDepth(depth+1, args...)
InfoDepth(depth+1, args...)
}
func (c *componentData) WarningDepth(depth int, args ...any) {
args = append([]any{"[" + string(c.name) + "]"}, args...)
grpclog.WarningDepth(depth+1, args...)
WarningDepth(depth+1, args...)
}
func (c *componentData) ErrorDepth(depth int, args ...any) {
args = append([]any{"[" + string(c.name) + "]"}, args...)
grpclog.ErrorDepth(depth+1, args...)
ErrorDepth(depth+1, args...)
}
func (c *componentData) FatalDepth(depth int, args ...any) {
args = append([]any{"[" + string(c.name) + "]"}, args...)
grpclog.FatalDepth(depth+1, args...)
FatalDepth(depth+1, args...)
}
func (c *componentData) Info(args ...any) {

View File

@ -18,18 +18,15 @@
// Package grpclog defines logging for grpc.
//
// All logs in transport and grpclb packages only go to verbose level 2.
// All logs in other packages in grpc are logged regardless of the verbosity level.
//
// In the default logger,
// severity level can be set by environment variable GRPC_GO_LOG_SEVERITY_LEVEL,
// verbosity level can be set by GRPC_GO_LOG_VERBOSITY_LEVEL.
package grpclog // import "google.golang.org/grpc/grpclog"
// In the default logger, severity level can be set by environment variable
// GRPC_GO_LOG_SEVERITY_LEVEL, verbosity level can be set by
// GRPC_GO_LOG_VERBOSITY_LEVEL.
package grpclog
import (
"os"
"google.golang.org/grpc/internal/grpclog"
"google.golang.org/grpc/grpclog/internal"
)
func init() {
@ -38,58 +35,58 @@ func init() {
// V reports whether verbosity level l is at least the requested verbose level.
func V(l int) bool {
return grpclog.Logger.V(l)
return internal.LoggerV2Impl.V(l)
}
// Info logs to the INFO log.
func Info(args ...any) {
grpclog.Logger.Info(args...)
internal.LoggerV2Impl.Info(args...)
}
// Infof logs to the INFO log. Arguments are handled in the manner of fmt.Printf.
func Infof(format string, args ...any) {
grpclog.Logger.Infof(format, args...)
internal.LoggerV2Impl.Infof(format, args...)
}
// Infoln logs to the INFO log. Arguments are handled in the manner of fmt.Println.
func Infoln(args ...any) {
grpclog.Logger.Infoln(args...)
internal.LoggerV2Impl.Infoln(args...)
}
// Warning logs to the WARNING log.
func Warning(args ...any) {
grpclog.Logger.Warning(args...)
internal.LoggerV2Impl.Warning(args...)
}
// Warningf logs to the WARNING log. Arguments are handled in the manner of fmt.Printf.
func Warningf(format string, args ...any) {
grpclog.Logger.Warningf(format, args...)
internal.LoggerV2Impl.Warningf(format, args...)
}
// Warningln logs to the WARNING log. Arguments are handled in the manner of fmt.Println.
func Warningln(args ...any) {
grpclog.Logger.Warningln(args...)
internal.LoggerV2Impl.Warningln(args...)
}
// Error logs to the ERROR log.
func Error(args ...any) {
grpclog.Logger.Error(args...)
internal.LoggerV2Impl.Error(args...)
}
// Errorf logs to the ERROR log. Arguments are handled in the manner of fmt.Printf.
func Errorf(format string, args ...any) {
grpclog.Logger.Errorf(format, args...)
internal.LoggerV2Impl.Errorf(format, args...)
}
// Errorln logs to the ERROR log. Arguments are handled in the manner of fmt.Println.
func Errorln(args ...any) {
grpclog.Logger.Errorln(args...)
internal.LoggerV2Impl.Errorln(args...)
}
// Fatal logs to the FATAL log. Arguments are handled in the manner of fmt.Print.
// It calls os.Exit() with exit code 1.
func Fatal(args ...any) {
grpclog.Logger.Fatal(args...)
internal.LoggerV2Impl.Fatal(args...)
// Make sure fatal logs will exit.
os.Exit(1)
}
@ -97,15 +94,15 @@ func Fatal(args ...any) {
// Fatalf logs to the FATAL log. Arguments are handled in the manner of fmt.Printf.
// It calls os.Exit() with exit code 1.
func Fatalf(format string, args ...any) {
grpclog.Logger.Fatalf(format, args...)
internal.LoggerV2Impl.Fatalf(format, args...)
// Make sure fatal logs will exit.
os.Exit(1)
}
// Fatalln logs to the FATAL log. Arguments are handled in the manner of fmt.Println.
// It calle os.Exit()) with exit code 1.
// It calls os.Exit() with exit code 1.
func Fatalln(args ...any) {
grpclog.Logger.Fatalln(args...)
internal.LoggerV2Impl.Fatalln(args...)
// Make sure fatal logs will exit.
os.Exit(1)
}
@ -114,19 +111,76 @@ func Fatalln(args ...any) {
//
// Deprecated: use Info.
func Print(args ...any) {
grpclog.Logger.Info(args...)
internal.LoggerV2Impl.Info(args...)
}
// Printf prints to the logger. Arguments are handled in the manner of fmt.Printf.
//
// Deprecated: use Infof.
func Printf(format string, args ...any) {
grpclog.Logger.Infof(format, args...)
internal.LoggerV2Impl.Infof(format, args...)
}
// Println prints to the logger. Arguments are handled in the manner of fmt.Println.
//
// Deprecated: use Infoln.
func Println(args ...any) {
grpclog.Logger.Infoln(args...)
internal.LoggerV2Impl.Infoln(args...)
}
// InfoDepth logs to the INFO log at the specified depth.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func InfoDepth(depth int, args ...any) {
if internal.DepthLoggerV2Impl != nil {
internal.DepthLoggerV2Impl.InfoDepth(depth, args...)
} else {
internal.LoggerV2Impl.Infoln(args...)
}
}
// WarningDepth logs to the WARNING log at the specified depth.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func WarningDepth(depth int, args ...any) {
if internal.DepthLoggerV2Impl != nil {
internal.DepthLoggerV2Impl.WarningDepth(depth, args...)
} else {
internal.LoggerV2Impl.Warningln(args...)
}
}
// ErrorDepth logs to the ERROR log at the specified depth.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func ErrorDepth(depth int, args ...any) {
if internal.DepthLoggerV2Impl != nil {
internal.DepthLoggerV2Impl.ErrorDepth(depth, args...)
} else {
internal.LoggerV2Impl.Errorln(args...)
}
}
// FatalDepth logs to the FATAL log at the specified depth.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func FatalDepth(depth int, args ...any) {
if internal.DepthLoggerV2Impl != nil {
internal.DepthLoggerV2Impl.FatalDepth(depth, args...)
} else {
internal.LoggerV2Impl.Fatalln(args...)
}
os.Exit(1)
}
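Component loggers continue to prefix messages with their component name and now forward through the exported depth helpers above, while the default logger's severity and verbosity are still controlled by GRPC_GO_LOG_SEVERITY_LEVEL and GRPC_GO_LOG_VERBOSITY_LEVEL. A typical use of the public API; the component name and messages are made up:

package example

import "google.golang.org/grpc/grpclog"

// Component loggers are usually created once per package.
var logger = grpclog.Component("example-component")

func doWork() {
	if logger.V(2) { // only log when verbosity is at least 2
		logger.Infof("starting work on %q", "task-1")
	}
	logger.Warning("work took longer than expected") // prefixed with [example-component]
}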

View File

@ -1,9 +1,6 @@
//go:build !linux
// +build !linux
/*
*
* Copyright 2018 gRPC authors.
* Copyright 2024 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -19,9 +16,11 @@
*
*/
package channelz
// Package internal contains functionality internal to the grpclog package.
package internal
// GetSocketOption gets the socket option info of the conn.
func GetSocketOption(c any) *SocketOptionData {
return nil
}
// LoggerV2Impl is the logger used for the non-depth log functions.
var LoggerV2Impl LoggerV2
// DepthLoggerV2Impl is the logger used for the depth log functions.
var DepthLoggerV2Impl DepthLoggerV2

View File

@ -0,0 +1,87 @@
/*
*
* Copyright 2024 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package internal
// Logger mimics golang's standard Logger as an interface.
//
// Deprecated: use LoggerV2.
type Logger interface {
Fatal(args ...any)
Fatalf(format string, args ...any)
Fatalln(args ...any)
Print(args ...any)
Printf(format string, args ...any)
Println(args ...any)
}
// LoggerWrapper wraps Logger into a LoggerV2.
type LoggerWrapper struct {
Logger
}
// Info logs to INFO log. Arguments are handled in the manner of fmt.Print.
func (l *LoggerWrapper) Info(args ...any) {
l.Logger.Print(args...)
}
// Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println.
func (l *LoggerWrapper) Infoln(args ...any) {
l.Logger.Println(args...)
}
// Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf.
func (l *LoggerWrapper) Infof(format string, args ...any) {
l.Logger.Printf(format, args...)
}
// Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print.
func (l *LoggerWrapper) Warning(args ...any) {
l.Logger.Print(args...)
}
// Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println.
func (l *LoggerWrapper) Warningln(args ...any) {
l.Logger.Println(args...)
}
// Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf.
func (l *LoggerWrapper) Warningf(format string, args ...any) {
l.Logger.Printf(format, args...)
}
// Error logs to ERROR log. Arguments are handled in the manner of fmt.Print.
func (l *LoggerWrapper) Error(args ...any) {
l.Logger.Print(args...)
}
// Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
func (l *LoggerWrapper) Errorln(args ...any) {
l.Logger.Println(args...)
}
// Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
func (l *LoggerWrapper) Errorf(format string, args ...any) {
l.Logger.Printf(format, args...)
}
// V reports whether verbosity level l is at least the requested verbose level.
func (*LoggerWrapper) V(l int) bool {
// Returns true for all verbosity levels.
return true
}

View File

@ -1,6 +1,6 @@
/*
*
* Copyright 2020 gRPC authors.
* Copyright 2024 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -16,59 +16,17 @@
*
*/
// Package grpclog (internal) defines depth logging for grpc.
package grpclog
package internal
import (
"encoding/json"
"fmt"
"io"
"log"
"os"
)
// Logger is the logger used for the non-depth log functions.
var Logger LoggerV2
// DepthLogger is the logger used for the depth log functions.
var DepthLogger DepthLoggerV2
// InfoDepth logs to the INFO log at the specified depth.
func InfoDepth(depth int, args ...any) {
if DepthLogger != nil {
DepthLogger.InfoDepth(depth, args...)
} else {
Logger.Infoln(args...)
}
}
// WarningDepth logs to the WARNING log at the specified depth.
func WarningDepth(depth int, args ...any) {
if DepthLogger != nil {
DepthLogger.WarningDepth(depth, args...)
} else {
Logger.Warningln(args...)
}
}
// ErrorDepth logs to the ERROR log at the specified depth.
func ErrorDepth(depth int, args ...any) {
if DepthLogger != nil {
DepthLogger.ErrorDepth(depth, args...)
} else {
Logger.Errorln(args...)
}
}
// FatalDepth logs to the FATAL log at the specified depth.
func FatalDepth(depth int, args ...any) {
if DepthLogger != nil {
DepthLogger.FatalDepth(depth, args...)
} else {
Logger.Fatalln(args...)
}
os.Exit(1)
}
// LoggerV2 does underlying logging work for grpclog.
// This is a copy of the LoggerV2 defined in the external grpclog package. It
// is defined here to avoid a circular dependency.
type LoggerV2 interface {
// Info logs to INFO log. Arguments are handled in the manner of fmt.Print.
Info(args ...any)
@ -107,14 +65,13 @@ type LoggerV2 interface {
// DepthLoggerV2 logs at a specified call frame. If a LoggerV2 also implements
// DepthLoggerV2, the below functions will be called with the appropriate stack
// depth set for trivial functions the logger may ignore.
// This is a copy of the DepthLoggerV2 defined in the external grpclog package.
// It is defined here to avoid a circular dependency.
//
// # Experimental
//
// Notice: This type is EXPERIMENTAL and may be changed or removed in a
// later release.
type DepthLoggerV2 interface {
LoggerV2
// InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Println.
InfoDepth(depth int, args ...any)
// WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Println.
@ -124,3 +81,124 @@ type DepthLoggerV2 interface {
// FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Println.
FatalDepth(depth int, args ...any)
}
const (
// infoLog indicates Info severity.
infoLog int = iota
// warningLog indicates Warning severity.
warningLog
// errorLog indicates Error severity.
errorLog
// fatalLog indicates Fatal severity.
fatalLog
)
// severityName contains the string representation of each severity.
var severityName = []string{
infoLog: "INFO",
warningLog: "WARNING",
errorLog: "ERROR",
fatalLog: "FATAL",
}
// loggerT is the default logger used by grpclog.
type loggerT struct {
m []*log.Logger
v int
jsonFormat bool
}
func (g *loggerT) output(severity int, s string) {
sevStr := severityName[severity]
if !g.jsonFormat {
g.m[severity].Output(2, fmt.Sprintf("%v: %v", sevStr, s))
return
}
// TODO: we can also include the logging component, but that needs more
// (API) changes.
b, _ := json.Marshal(map[string]string{
"severity": sevStr,
"message": s,
})
g.m[severity].Output(2, string(b))
}
func (g *loggerT) Info(args ...any) {
g.output(infoLog, fmt.Sprint(args...))
}
func (g *loggerT) Infoln(args ...any) {
g.output(infoLog, fmt.Sprintln(args...))
}
func (g *loggerT) Infof(format string, args ...any) {
g.output(infoLog, fmt.Sprintf(format, args...))
}
func (g *loggerT) Warning(args ...any) {
g.output(warningLog, fmt.Sprint(args...))
}
func (g *loggerT) Warningln(args ...any) {
g.output(warningLog, fmt.Sprintln(args...))
}
func (g *loggerT) Warningf(format string, args ...any) {
g.output(warningLog, fmt.Sprintf(format, args...))
}
func (g *loggerT) Error(args ...any) {
g.output(errorLog, fmt.Sprint(args...))
}
func (g *loggerT) Errorln(args ...any) {
g.output(errorLog, fmt.Sprintln(args...))
}
func (g *loggerT) Errorf(format string, args ...any) {
g.output(errorLog, fmt.Sprintf(format, args...))
}
func (g *loggerT) Fatal(args ...any) {
g.output(fatalLog, fmt.Sprint(args...))
os.Exit(1)
}
func (g *loggerT) Fatalln(args ...any) {
g.output(fatalLog, fmt.Sprintln(args...))
os.Exit(1)
}
func (g *loggerT) Fatalf(format string, args ...any) {
g.output(fatalLog, fmt.Sprintf(format, args...))
os.Exit(1)
}
func (g *loggerT) V(l int) bool {
return l <= g.v
}
// LoggerV2Config configures the LoggerV2 implementation.
type LoggerV2Config struct {
// Verbosity sets the verbosity level of the logger.
Verbosity int
// FormatJSON controls whether the logger should output logs in JSON format.
FormatJSON bool
}
// NewLoggerV2 creates a new LoggerV2 instance with the provided configuration.
// The infoW, warningW, and errorW writers are used to write log messages of
// different severity levels.
func NewLoggerV2(infoW, warningW, errorW io.Writer, c LoggerV2Config) LoggerV2 {
var m []*log.Logger
flag := log.LstdFlags
if c.FormatJSON {
flag = 0
}
m = append(m, log.New(infoW, "", flag))
m = append(m, log.New(io.MultiWriter(infoW, warningW), "", flag))
ew := io.MultiWriter(infoW, warningW, errorW) // ew will be used for error and fatal.
m = append(m, log.New(ew, "", flag))
m = append(m, log.New(ew, "", flag))
return &loggerT{m: m, v: c.Verbosity, jsonFormat: c.FormatJSON}
}
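NewLoggerV2 fans the writers out by severity: warnings also land in the info writer, and errors and fatals in all three, with FormatJSON switching each line to a JSON object carrying severity and message fields. A short sketch; grpclog/internal is only importable from within gRPC itself, so this is illustrative rather than something applications would call directly:

package internal

import "os"

func exampleLogger() LoggerV2 {
	l := NewLoggerV2(os.Stdout, os.Stdout, os.Stderr, LoggerV2Config{
		Verbosity:  2,    // V(2) and below report true
		FormatJSON: true, // emit JSON lines with severity and message, no log flags
	})
	l.Infof("connected to %s", "example.invalid:443") // hypothetical message
	return l
}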

View File

@ -18,70 +18,17 @@
package grpclog
import "google.golang.org/grpc/internal/grpclog"
import "google.golang.org/grpc/grpclog/internal"
// Logger mimics golang's standard Logger as an interface.
//
// Deprecated: use LoggerV2.
type Logger interface {
Fatal(args ...any)
Fatalf(format string, args ...any)
Fatalln(args ...any)
Print(args ...any)
Printf(format string, args ...any)
Println(args ...any)
}
type Logger internal.Logger
// SetLogger sets the logger that is used in grpc. Call only from
// init() functions.
//
// Deprecated: use SetLoggerV2.
func SetLogger(l Logger) {
grpclog.Logger = &loggerWrapper{Logger: l}
}
// loggerWrapper wraps Logger into a LoggerV2.
type loggerWrapper struct {
Logger
}
func (g *loggerWrapper) Info(args ...any) {
g.Logger.Print(args...)
}
func (g *loggerWrapper) Infoln(args ...any) {
g.Logger.Println(args...)
}
func (g *loggerWrapper) Infof(format string, args ...any) {
g.Logger.Printf(format, args...)
}
func (g *loggerWrapper) Warning(args ...any) {
g.Logger.Print(args...)
}
func (g *loggerWrapper) Warningln(args ...any) {
g.Logger.Println(args...)
}
func (g *loggerWrapper) Warningf(format string, args ...any) {
g.Logger.Printf(format, args...)
}
func (g *loggerWrapper) Error(args ...any) {
g.Logger.Print(args...)
}
func (g *loggerWrapper) Errorln(args ...any) {
g.Logger.Println(args...)
}
func (g *loggerWrapper) Errorf(format string, args ...any) {
g.Logger.Printf(format, args...)
}
func (g *loggerWrapper) V(l int) bool {
// Returns true for all verbose level.
return true
internal.LoggerV2Impl = &internal.LoggerWrapper{Logger: l}
}
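SetLogger now only wraps the legacy Logger in internal.LoggerWrapper. Since the standard library's *log.Logger already provides Print/Printf/Println and Fatal/Fatalf/Fatalln, the deprecated path can still be exercised as sketched below, though SetLoggerV2 remains the recommended replacement:

package example

import (
	"log"
	"os"

	"google.golang.org/grpc/grpclog"
)

func init() {
	// Deprecated path: *log.Logger satisfies grpclog.Logger, so it can be
	// installed directly; new code should use SetLoggerV2 instead.
	grpclog.SetLogger(log.New(os.Stderr, "grpc: ", log.LstdFlags))
}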

View File

@ -19,52 +19,16 @@
package grpclog
import (
"encoding/json"
"fmt"
"io"
"log"
"os"
"strconv"
"strings"
"google.golang.org/grpc/internal/grpclog"
"google.golang.org/grpc/grpclog/internal"
)
// LoggerV2 does underlying logging work for grpclog.
type LoggerV2 interface {
// Info logs to INFO log. Arguments are handled in the manner of fmt.Print.
Info(args ...any)
// Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println.
Infoln(args ...any)
// Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf.
Infof(format string, args ...any)
// Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print.
Warning(args ...any)
// Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println.
Warningln(args ...any)
// Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf.
Warningf(format string, args ...any)
// Error logs to ERROR log. Arguments are handled in the manner of fmt.Print.
Error(args ...any)
// Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
Errorln(args ...any)
// Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
Errorf(format string, args ...any)
// Fatal logs to ERROR log. Arguments are handled in the manner of fmt.Print.
// gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code.
Fatal(args ...any)
// Fatalln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
// gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code.
Fatalln(args ...any)
// Fatalf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
// gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code.
Fatalf(format string, args ...any)
// V reports whether verbosity level l is at least the requested verbose level.
V(l int) bool
}
type LoggerV2 internal.LoggerV2
// SetLoggerV2 sets logger that is used in grpc to a V2 logger.
// Not mutex-protected, should be called before any gRPC functions.
@ -72,34 +36,8 @@ func SetLoggerV2(l LoggerV2) {
if _, ok := l.(*componentData); ok {
panic("cannot use component logger as grpclog logger")
}
grpclog.Logger = l
grpclog.DepthLogger, _ = l.(grpclog.DepthLoggerV2)
}
const (
// infoLog indicates Info severity.
infoLog int = iota
// warningLog indicates Warning severity.
warningLog
// errorLog indicates Error severity.
errorLog
// fatalLog indicates Fatal severity.
fatalLog
)
// severityName contains the string representation of each severity.
var severityName = []string{
infoLog: "INFO",
warningLog: "WARNING",
errorLog: "ERROR",
fatalLog: "FATAL",
}
// loggerT is the default logger used by grpclog.
type loggerT struct {
m []*log.Logger
v int
jsonFormat bool
internal.LoggerV2Impl = l
internal.DepthLoggerV2Impl, _ = l.(internal.DepthLoggerV2)
}
// NewLoggerV2 creates a loggerV2 with the provided writers.
@ -108,32 +46,13 @@ type loggerT struct {
// Warning logs will be written to warningW and infoW.
// Info logs will be written to infoW.
func NewLoggerV2(infoW, warningW, errorW io.Writer) LoggerV2 {
return newLoggerV2WithConfig(infoW, warningW, errorW, loggerV2Config{})
return internal.NewLoggerV2(infoW, warningW, errorW, internal.LoggerV2Config{})
}
// NewLoggerV2WithVerbosity creates a loggerV2 with the provided writers and
// verbosity level.
func NewLoggerV2WithVerbosity(infoW, warningW, errorW io.Writer, v int) LoggerV2 {
return newLoggerV2WithConfig(infoW, warningW, errorW, loggerV2Config{verbose: v})
}
type loggerV2Config struct {
verbose int
jsonFormat bool
}
func newLoggerV2WithConfig(infoW, warningW, errorW io.Writer, c loggerV2Config) LoggerV2 {
var m []*log.Logger
flag := log.LstdFlags
if c.jsonFormat {
flag = 0
}
m = append(m, log.New(infoW, "", flag))
m = append(m, log.New(io.MultiWriter(infoW, warningW), "", flag))
ew := io.MultiWriter(infoW, warningW, errorW) // ew will be used for error and fatal.
m = append(m, log.New(ew, "", flag))
m = append(m, log.New(ew, "", flag))
return &loggerT{m: m, v: c.verbose, jsonFormat: c.jsonFormat}
return internal.NewLoggerV2(infoW, warningW, errorW, internal.LoggerV2Config{Verbosity: v})
}
// newLoggerV2 creates a loggerV2 to be used as default logger.
@ -161,82 +80,12 @@ func newLoggerV2() LoggerV2 {
jsonFormat := strings.EqualFold(os.Getenv("GRPC_GO_LOG_FORMATTER"), "json")
return newLoggerV2WithConfig(infoW, warningW, errorW, loggerV2Config{
verbose: v,
jsonFormat: jsonFormat,
return internal.NewLoggerV2(infoW, warningW, errorW, internal.LoggerV2Config{
Verbosity: v,
FormatJSON: jsonFormat,
})
}
func (g *loggerT) output(severity int, s string) {
sevStr := severityName[severity]
if !g.jsonFormat {
g.m[severity].Output(2, fmt.Sprintf("%v: %v", sevStr, s))
return
}
// TODO: we can also include the logging component, but that needs more
// (API) changes.
b, _ := json.Marshal(map[string]string{
"severity": sevStr,
"message": s,
})
g.m[severity].Output(2, string(b))
}
func (g *loggerT) Info(args ...any) {
g.output(infoLog, fmt.Sprint(args...))
}
func (g *loggerT) Infoln(args ...any) {
g.output(infoLog, fmt.Sprintln(args...))
}
func (g *loggerT) Infof(format string, args ...any) {
g.output(infoLog, fmt.Sprintf(format, args...))
}
func (g *loggerT) Warning(args ...any) {
g.output(warningLog, fmt.Sprint(args...))
}
func (g *loggerT) Warningln(args ...any) {
g.output(warningLog, fmt.Sprintln(args...))
}
func (g *loggerT) Warningf(format string, args ...any) {
g.output(warningLog, fmt.Sprintf(format, args...))
}
func (g *loggerT) Error(args ...any) {
g.output(errorLog, fmt.Sprint(args...))
}
func (g *loggerT) Errorln(args ...any) {
g.output(errorLog, fmt.Sprintln(args...))
}
func (g *loggerT) Errorf(format string, args ...any) {
g.output(errorLog, fmt.Sprintf(format, args...))
}
func (g *loggerT) Fatal(args ...any) {
g.output(fatalLog, fmt.Sprint(args...))
os.Exit(1)
}
func (g *loggerT) Fatalln(args ...any) {
g.output(fatalLog, fmt.Sprintln(args...))
os.Exit(1)
}
func (g *loggerT) Fatalf(format string, args ...any) {
g.output(fatalLog, fmt.Sprintf(format, args...))
os.Exit(1)
}
func (g *loggerT) V(l int) bool {
return l <= g.v
}
// DepthLoggerV2 logs at a specified call frame. If a LoggerV2 also implements
// DepthLoggerV2, the below functions will be called with the appropriate stack
// depth set for trivial functions the logger may ignore.
@ -245,14 +94,4 @@ func (g *loggerT) V(l int) bool {
//
// Notice: This type is EXPERIMENTAL and may be changed or removed in a
// later release.
type DepthLoggerV2 interface {
LoggerV2
// InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Println.
InfoDepth(depth int, args ...any)
// WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Println.
WarningDepth(depth int, args ...any)
// ErrorDepth logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Println.
ErrorDepth(depth int, args ...any)
// FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Println.
FatalDepth(depth int, args ...any)
}
type DepthLoggerV2 internal.DepthLoggerV2
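With LoggerV2 and DepthLoggerV2 now defined in terms of the internal package, the public constructors behave as before; a custom logger is still installed via SetLoggerV2 before any other gRPC call, e.g.:

package example

import (
	"os"

	"google.golang.org/grpc/grpclog"
)

func init() {
	// Info and warnings to stdout, errors to stderr, verbosity level 2.
	// SetLoggerV2 is not mutex-protected, so it must run before gRPC is used.
	grpclog.SetLoggerV2(grpclog.NewLoggerV2WithVerbosity(os.Stdout, os.Stdout, os.Stderr, 2))
}

Leaving the default logger in place and tuning it through GRPC_GO_LOG_SEVERITY_LEVEL, GRPC_GO_LOG_VERBOSITY_LEVEL, and GRPC_GO_LOG_FORMATTER remains an alternative.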

View File

@ -17,8 +17,8 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.32.0
// protoc v4.25.2
// protoc-gen-go v1.34.1
// protoc v5.27.1
// source: grpc/health/v1/health.proto
package grpc_health_v1

View File

@ -17,8 +17,8 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc v4.25.2
// - protoc-gen-go-grpc v1.5.1
// - protoc v5.27.1
// source: grpc/health/v1/health.proto
package grpc_health_v1
@ -32,8 +32,8 @@ import (
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
Health_Check_FullMethodName = "/grpc.health.v1.Health/Check"
@ -43,6 +43,10 @@ const (
// HealthClient is the client API for Health service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
//
// Health is gRPC's mechanism for checking whether a server is able to handle
// RPCs. Its semantics are documented in
// https://github.com/grpc/grpc/blob/master/doc/health-checking.md.
type HealthClient interface {
// Check gets the health of the specified service. If the requested service
// is unknown, the call will fail with status NOT_FOUND. If the caller does
@ -69,7 +73,7 @@ type HealthClient interface {
// should assume this method is not supported and should not retry the
// call. If the call terminates with any other status (including OK),
// clients should retry the call with appropriate exponential backoff.
Watch(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (Health_WatchClient, error)
Watch(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[HealthCheckResponse], error)
}
type healthClient struct {
@ -81,20 +85,22 @@ func NewHealthClient(cc grpc.ClientConnInterface) HealthClient {
}
func (c *healthClient) Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(HealthCheckResponse)
err := c.cc.Invoke(ctx, Health_Check_FullMethodName, in, out, opts...)
err := c.cc.Invoke(ctx, Health_Check_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *healthClient) Watch(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (Health_WatchClient, error) {
stream, err := c.cc.NewStream(ctx, &Health_ServiceDesc.Streams[0], Health_Watch_FullMethodName, opts...)
func (c *healthClient) Watch(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[HealthCheckResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Health_ServiceDesc.Streams[0], Health_Watch_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &healthWatchClient{stream}
x := &grpc.GenericClientStream[HealthCheckRequest, HealthCheckResponse]{ClientStream: stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
@ -104,26 +110,16 @@ func (c *healthClient) Watch(ctx context.Context, in *HealthCheckRequest, opts .
return x, nil
}
type Health_WatchClient interface {
Recv() (*HealthCheckResponse, error)
grpc.ClientStream
}
type healthWatchClient struct {
grpc.ClientStream
}
func (x *healthWatchClient) Recv() (*HealthCheckResponse, error) {
m := new(HealthCheckResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Health_WatchClient = grpc.ServerStreamingClient[HealthCheckResponse]
// HealthServer is the server API for Health service.
// All implementations should embed UnimplementedHealthServer
// for forward compatibility
// for forward compatibility.
//
// Health is gRPC's mechanism for checking whether a server is able to handle
// RPCs. Its semantics are documented in
// https://github.com/grpc/grpc/blob/master/doc/health-checking.md.
type HealthServer interface {
// Check gets the health of the specified service. If the requested service
// is unknown, the call will fail with status NOT_FOUND. If the caller does
@ -150,19 +146,23 @@ type HealthServer interface {
// should assume this method is not supported and should not retry the
// call. If the call terminates with any other status (including OK),
// clients should retry the call with appropriate exponential backoff.
Watch(*HealthCheckRequest, Health_WatchServer) error
Watch(*HealthCheckRequest, grpc.ServerStreamingServer[HealthCheckResponse]) error
}
// UnimplementedHealthServer should be embedded to have forward compatible implementations.
type UnimplementedHealthServer struct {
}
// UnimplementedHealthServer should be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedHealthServer struct{}
func (UnimplementedHealthServer) Check(context.Context, *HealthCheckRequest) (*HealthCheckResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Check not implemented")
}
func (UnimplementedHealthServer) Watch(*HealthCheckRequest, Health_WatchServer) error {
func (UnimplementedHealthServer) Watch(*HealthCheckRequest, grpc.ServerStreamingServer[HealthCheckResponse]) error {
return status.Errorf(codes.Unimplemented, "method Watch not implemented")
}
func (UnimplementedHealthServer) testEmbeddedByValue() {}
// UnsafeHealthServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to HealthServer will
@ -172,6 +172,13 @@ type UnsafeHealthServer interface {
}
func RegisterHealthServer(s grpc.ServiceRegistrar, srv HealthServer) {
// If the following call panics, it indicates UnimplementedHealthServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Health_ServiceDesc, srv)
}
@ -198,21 +205,11 @@ func _Health_Watch_Handler(srv interface{}, stream grpc.ServerStream) error {
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(HealthServer).Watch(m, &healthWatchServer{stream})
return srv.(HealthServer).Watch(m, &grpc.GenericServerStream[HealthCheckRequest, HealthCheckResponse]{ServerStream: stream})
}
type Health_WatchServer interface {
Send(*HealthCheckResponse) error
grpc.ServerStream
}
type healthWatchServer struct {
grpc.ServerStream
}
func (x *healthWatchServer) Send(m *HealthCheckResponse) error {
return x.ServerStream.SendMsg(m)
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Health_WatchServer = grpc.ServerStreamingServer[HealthCheckResponse]
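The regenerated stubs swap the hand-written Health_WatchClient/Health_WatchServer wrappers for the generic grpc.ServerStreamingClient and grpc.ServerStreamingServer types, keeping the old names as aliases. A minimal server sketch against the new signatures, assuming the conventional healthpb import alias:

package example

import (
	"context"

	"google.golang.org/grpc"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// exampleHealthServer embeds UnimplementedHealthServer by value, which the
// regenerated RegisterHealthServer now checks at registration time.
type exampleHealthServer struct {
	healthpb.UnimplementedHealthServer
}

func (exampleHealthServer) Check(ctx context.Context, req *healthpb.HealthCheckRequest) (*healthpb.HealthCheckResponse, error) {
	return &healthpb.HealthCheckResponse{Status: healthpb.HealthCheckResponse_SERVING}, nil
}

func (exampleHealthServer) Watch(req *healthpb.HealthCheckRequest, stream grpc.ServerStreamingServer[healthpb.HealthCheckResponse]) error {
	// Send a single snapshot; a real implementation would keep streaming updates.
	return stream.Send(&healthpb.HealthCheckResponse{Status: healthpb.HealthCheckResponse_SERVING})
}

func register(s *grpc.Server) {
	healthpb.RegisterHealthServer(s, exampleHealthServer{})
}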
// Health_ServiceDesc is the grpc.ServiceDesc for Health service.
// It's only intended for direct use with grpc.RegisterService,

View File

@ -25,10 +25,10 @@ package backoff
import (
"context"
"errors"
"math/rand"
"time"
grpcbackoff "google.golang.org/grpc/backoff"
"google.golang.org/grpc/internal/grpcrand"
)
// Strategy defines the methodology for backing off after a grpc connection
@ -67,7 +67,7 @@ func (bc Exponential) Backoff(retries int) time.Duration {
}
// Randomize backoff delays so that if a cluster of requests start at
// the same time, they won't operate in lockstep.
backoff *= 1 + bc.Config.Jitter*(grpcrand.Float64()*2-1)
backoff *= 1 + bc.Config.Jitter*(rand.Float64()*2-1)
if backoff < 0 {
return 0
}
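The only functional piece here is the jitter term: the computed delay is scaled by a uniform factor in roughly [1-Jitter, 1+Jitter] and clamped at zero, now using math/rand directly instead of the internal grpcrand helper. A standalone sketch of the same formula; the function name is hypothetical:

package example

import (
	"math/rand"
	"time"
)

// jittered perturbs d by up to ±jitter of its value and clamps at zero,
// mirroring the formula in the hunk above.
func jittered(d time.Duration, jitter float64) time.Duration {
	f := float64(d) * (1 + jitter*(rand.Float64()*2-1)) // uniform in [d*(1-jitter), d*(1+jitter))
	if f < 0 {
		return 0
	}
	return time.Duration(f)
}

For example, jittered(time.Second, 0.2) yields a delay between roughly 800ms and 1.2s.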

Some files were not shown because too many files have changed in this diff.