vendor: google.golang.org/grpc v1.58.3

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn 2023-11-01 16:03:44 +01:00
parent 7841493823
commit aa24d611bd
GPG Key ID: 76698F39D527CE8C
64 changed files with 1487 additions and 864 deletions

View File

@ -84,7 +84,7 @@ require (
	golang.org/x/net v0.17.0 // indirect
	golang.org/x/time v0.3.0 // indirect
	golang.org/x/tools v0.10.0 // indirect
-	google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 // indirect
-	google.golang.org/grpc v1.56.3 // indirect
	google.golang.org/grpc v1.58.3 // indirect
	google.golang.org/protobuf v1.31.0 // indirect
)

View File

@ -355,11 +355,13 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A=
-google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU=
google.golang.org/genproto v0.0.0-20230711160842-782d3b101e98 h1:Z0hjGZePRE0ZBWotvtrwxFNrNE9CUAGtplaDK5NNI/g=
google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98 h1:FmF5cCW94Ij59cfpoLiwTgodWmm60eEV0CjlsVg2fuw=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 h1:bVf09lpb+OJbByTj913DRJioFFAjf/ZGxEz7MajTp2U=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98/go.mod h1:TUfxEVdsvPg18p6AslUXFoLdpED4oBnGwyqk3dV1XzM=
google.golang.org/grpc v1.0.5/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
-google.golang.org/grpc v1.56.3 h1:8I4C0Yq1EjstUzUJzpcRVbuYA2mODtEmpWiQoN/b2nc=
-google.golang.org/grpc v1.56.3/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s=
google.golang.org/grpc v1.58.3 h1:BjnpXut1btbtgN/6sp+brB2Kbm2LjNXnidYujAVbSoQ=
google.golang.org/grpc v1.58.3/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=

View File

@ -14,21 +14,14 @@ RPC framework that puts mobile and HTTP/2 first. For more information see the
## Installation

-With [Go module][] support (Go 1.11+), simply add the following import
Simply add the following import to your code, and then `go [build|run|test]`
will automatically fetch the necessary dependencies:

```go
import "google.golang.org/grpc"
```

-to your code, and then `go [build|run|test]` will automatically fetch the
-necessary dependencies.

-Otherwise, to install the `grpc-go` package, run the following command:

-```console
-$ go get -u google.golang.org/grpc
-```

> **Note:** If you are trying to access `grpc-go` from **China**, see the
> [FAQ](#FAQ) below.

@ -56,15 +49,6 @@ To build Go code, there are several options:

- Set up a VPN and access google.golang.org through that.

-- Without Go module support: `git clone` the repo manually:
-
-  ```sh
-  git clone https://github.com/grpc/grpc-go.git $GOPATH/src/google.golang.org/grpc
-  ```
-
-  You will need to do the same for all of grpc's dependencies in `golang.org`,
-  e.g. `golang.org/x/net`.

- With Go module support: it is possible to use the `replace` feature of `go
  mod` to create aliases for golang.org packages. In your project's directory:

@ -76,33 +60,13 @@ To build Go code, there are several options:
  ```

  Again, this will need to be done for all transitive dependencies hosted on
-  golang.org as well. For details, refer to [golang/go issue #28652](https://github.com/golang/go/issues/28652).
  golang.org as well. For details, refer to [golang/go issue
  #28652](https://github.com/golang/go/issues/28652).

### Compiling error, undefined: grpc.SupportPackageIsVersion

-#### If you are using Go modules:
-
-Ensure your gRPC-Go version is `require`d at the appropriate version in
-the same module containing the generated `.pb.go` files. For example,
-`SupportPackageIsVersion6` needs `v1.27.0`, so in your `go.mod` file:
-
-```go
-module <your module name>
-
-require (
-    google.golang.org/grpc v1.27.0
-)
-```
-
-#### If you are *not* using Go modules:
-
-Update the `proto` package, gRPC package, and rebuild the `.proto` files:
-
-```sh
-go get -u github.com/golang/protobuf/{proto,protoc-gen-go}
-go get -u google.golang.org/grpc
-protoc --go_out=plugins=grpc:. *.proto
-```
Please update to the latest version of gRPC-Go using
`go get google.golang.org/grpc`.

### How to turn on logging

@ -121,9 +85,11 @@ possible reasons, including:
1. mis-configured transport credentials, connection failed on handshaking
1. bytes disrupted, possibly by a proxy in between
1. server shutdown
-1. Keepalive parameters caused connection shutdown, for example if you have configured
-   your server to terminate connections regularly to [trigger DNS lookups](https://github.com/grpc/grpc-go/issues/3170#issuecomment-552517779).
-   If this is the case, you may want to increase your [MaxConnectionAgeGrace](https://pkg.go.dev/google.golang.org/grpc/keepalive?tab=doc#ServerParameters),
1. Keepalive parameters caused connection shutdown, for example if you have
   configured your server to terminate connections regularly to [trigger DNS
   lookups](https://github.com/grpc/grpc-go/issues/3170#issuecomment-552517779).
   If this is the case, you may want to increase your
   [MaxConnectionAgeGrace](https://pkg.go.dev/google.golang.org/grpc/keepalive?tab=doc#ServerParameters),
   to allow longer RPC calls to finish.

It can be tricky to debug this because the error happens on the client side but
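
The keepalive item above points at MaxConnectionAgeGrace. As a rough illustration (not part of this diff), a server that recycles connections to force DNS re-resolution could be configured along these lines; the durations are placeholder values:

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	// Close connections every 30 minutes so clients re-resolve DNS, but give
	// in-flight RPCs up to 5 extra minutes to finish before the hard close.
	srv := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		MaxConnectionAge:      30 * time.Minute,
		MaxConnectionAgeGrace: 5 * time.Minute,
	}))
	_ = srv // register services and call srv.Serve(lis) in a real program
}
```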

View File

@ -34,26 +34,26 @@ import (
// key/value pairs. Keys must be hashable, and users should define their own
// types for keys. Values should not be modified after they are added to an
// Attributes or if they were received from one. If values implement 'Equal(o
-// interface{}) bool', it will be called by (*Attributes).Equal to determine
-// whether two values with the same key should be considered equal.
// any) bool', it will be called by (*Attributes).Equal to determine whether
// two values with the same key should be considered equal.
type Attributes struct {
-	m map[interface{}]interface{}
	m map[any]any
}

// New returns a new Attributes containing the key/value pair.
-func New(key, value interface{}) *Attributes {
-	return &Attributes{m: map[interface{}]interface{}{key: value}}
func New(key, value any) *Attributes {
	return &Attributes{m: map[any]any{key: value}}
}

// WithValue returns a new Attributes containing the previous keys and values
// and the new key/value pair. If the same key appears multiple times, the
// last value overwrites all previous values for that key. To remove an
// existing key, use a nil value. value should not be modified later.
-func (a *Attributes) WithValue(key, value interface{}) *Attributes {
func (a *Attributes) WithValue(key, value any) *Attributes {
	if a == nil {
		return New(key, value)
	}
-	n := &Attributes{m: make(map[interface{}]interface{}, len(a.m)+1)}
	n := &Attributes{m: make(map[any]any, len(a.m)+1)}
	for k, v := range a.m {
		n.m[k] = v
	}
@ -63,20 +63,19 @@ func (a *Attributes) WithValue(key, value interface{}) *Attributes {
// Value returns the value associated with these attributes for key, or nil if
// no value is associated with key. The returned value should not be modified.
-func (a *Attributes) Value(key interface{}) interface{} {
func (a *Attributes) Value(key any) any {
	if a == nil {
		return nil
	}
	return a.m[key]
}

-// Equal returns whether a and o are equivalent. If 'Equal(o interface{})
-// bool' is implemented for a value in the attributes, it is called to
-// determine if the value matches the one stored in the other attributes. If
-// Equal is not implemented, standard equality is used to determine if the two
-// values are equal. Note that some types (e.g. maps) aren't comparable by
-// default, so they must be wrapped in a struct, or in an alias type, with Equal
-// defined.
// Equal returns whether a and o are equivalent. If 'Equal(o any) bool' is
// implemented for a value in the attributes, it is called to determine if the
// value matches the one stored in the other attributes. If Equal is not
// implemented, standard equality is used to determine if the two values are
// equal. Note that some types (e.g. maps) aren't comparable by default, so
// they must be wrapped in a struct, or in an alias type, with Equal defined.
func (a *Attributes) Equal(o *Attributes) bool {
	if a == nil && o == nil {
		return true
@ -93,7 +92,7 @@ func (a *Attributes) Equal(o *Attributes) bool {
			// o missing element of a
			return false
		}
-		if eq, ok := v.(interface{ Equal(o interface{}) bool }); ok {
		if eq, ok := v.(interface{ Equal(o any) bool }); ok {
			if !eq.Equal(ov) {
				return false
			}
@ -112,19 +111,31 @@ func (a *Attributes) String() string {
	sb.WriteString("{")
	first := true
	for k, v := range a.m {
-		var key, val string
-		if str, ok := k.(interface{ String() string }); ok {
-			key = str.String()
-		}
-		if str, ok := v.(interface{ String() string }); ok {
-			val = str.String()
-		}
		if !first {
			sb.WriteString(", ")
		}
-		sb.WriteString(fmt.Sprintf("%q: %q, ", key, val))
		sb.WriteString(fmt.Sprintf("%q: %q ", str(k), str(v)))
		first = false
	}
	sb.WriteString("}")
	return sb.String()
}

func str(x any) string {
	if v, ok := x.(fmt.Stringer); ok {
		return v.String()
	} else if v, ok := x.(string); ok {
		return v
	}
	return fmt.Sprintf("<%p>", x)
}

// MarshalJSON helps implement the json.Marshaler interface, thereby rendering
// the Attributes correctly when printing (via pretty.JSON) structs containing
// Attributes as fields.
//
// Is it impossible to unmarshal attributes from a JSON representation and this
// method is meant only for debugging purposes.
func (a *Attributes) MarshalJSON() ([]byte, error) {
	return []byte(a.String()), nil
}
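
For context on the attributes.go changes above (interface{} replaced by any, plus the new str/MarshalJSON debugging helpers), here is a minimal usage sketch of the public API; regionKey is a made-up key type for illustration:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/attributes"
)

// regionKey is a private, comparable key type, as the package docs recommend
// (users should define their own types for keys).
type regionKey struct{}

func main() {
	a := attributes.New(regionKey{}, "eu-west-1")
	// WithValue returns a new *Attributes; the last value for a key wins.
	b := a.WithValue(regionKey{}, "us-east-1")

	fmt.Println(a.Value(regionKey{})) // eu-west-1
	fmt.Println(b.Value(regionKey{})) // us-east-1
	fmt.Println(a.Equal(b))           // false: plain strings are compared with ==
}
```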

View File

@ -105,8 +105,8 @@ type SubConn interface {
	//
	// This will trigger a state transition for the SubConn.
	//
-	// Deprecated: This method is now part of the ClientConn interface and will
-	// eventually be removed from here.
	// Deprecated: this method will be removed. Create new SubConns for new
	// addresses instead.
	UpdateAddresses([]resolver.Address)
	// Connect starts the connecting for this SubConn.
	Connect()
@ -115,6 +115,13 @@ type SubConn interface {
	// creates a new one and returns it. Returns a close function which must
	// be called when the Producer is no longer needed.
	GetOrBuildProducer(ProducerBuilder) (p Producer, close func())
	// Shutdown shuts down the SubConn gracefully. Any started RPCs will be
	// allowed to complete. No future calls should be made on the SubConn.
	// One final state update will be delivered to the StateListener (or
	// UpdateSubConnState; deprecated) with ConnectivityState of Shutdown to
	// indicate the shutdown operation. This may be delivered before
	// in-progress RPCs are complete and the actual connection is closed.
	Shutdown()
}

// NewSubConnOptions contains options to create new SubConn.
@ -129,6 +136,11 @@ type NewSubConnOptions struct {
	// HealthCheckEnabled indicates whether health check service should be
	// enabled on this SubConn
	HealthCheckEnabled bool
	// StateListener is called when the state of the subconn changes. If nil,
	// Balancer.UpdateSubConnState will be called instead. Will never be
	// invoked until after Connect() is called on the SubConn created with
	// these options.
	StateListener func(SubConnState)
}

// State contains the balancer's state relevant to the gRPC ClientConn.
@ -150,16 +162,24 @@ type ClientConn interface {
	// NewSubConn is called by balancer to create a new SubConn.
	// It doesn't block and wait for the connections to be established.
	// Behaviors of the SubConn can be controlled by options.
	//
	// Deprecated: please be aware that in a future version, SubConns will only
	// support one address per SubConn.
	NewSubConn([]resolver.Address, NewSubConnOptions) (SubConn, error)
	// RemoveSubConn removes the SubConn from ClientConn.
	// The SubConn will be shutdown.
	//
	// Deprecated: use SubConn.Shutdown instead.
	RemoveSubConn(SubConn)
	// UpdateAddresses updates the addresses used in the passed in SubConn.
	// gRPC checks if the currently connected address is still in the new list.
	// If so, the connection will be kept. Else, the connection will be
	// gracefully closed, and a new connection will be created.
	//
-	// This will trigger a state transition for the SubConn.
	// This may trigger a state transition for the SubConn.
	//
	// Deprecated: this method will be removed. Create new SubConns for new
	// addresses instead.
	UpdateAddresses(SubConn, []resolver.Address)
	// UpdateState notifies gRPC that the balancer's internal state has
@ -250,7 +270,7 @@ type DoneInfo struct {
	// trailing metadata.
	//
	// The only supported type now is *orca_v3.LoadReport.
-	ServerLoad interface{}
	ServerLoad any
}

var (
@ -343,9 +363,13 @@ type Balancer interface {
	ResolverError(error)
	// UpdateSubConnState is called by gRPC when the state of a SubConn
	// changes.
	//
	// Deprecated: Use NewSubConnOptions.StateListener when creating the
	// SubConn instead.
	UpdateSubConnState(SubConn, SubConnState)
-	// Close closes the balancer. The balancer is not required to call
-	// ClientConn.RemoveSubConn for its existing SubConns.
	// Close closes the balancer. The balancer is not currently required to
	// call SubConn.Shutdown for its existing SubConns; however, this will be
	// required in a future release, so it is recommended.
	Close()
}

@ -390,15 +414,14 @@ var ErrBadResolverState = errors.New("bad resolver state")
type ProducerBuilder interface {
	// Build creates a Producer. The first parameter is always a
	// grpc.ClientConnInterface (a type to allow creating RPCs/streams on the
-	// associated SubConn), but is declared as interface{} to avoid a
-	// dependency cycle. Should also return a close function that will be
-	// called when all references to the Producer have been given up.
-	Build(grpcClientConnInterface interface{}) (p Producer, close func())
	// associated SubConn), but is declared as `any` to avoid a dependency
	// cycle. Should also return a close function that will be called when all
	// references to the Producer have been given up.
	Build(grpcClientConnInterface any) (p Producer, close func())
}

// A Producer is a type shared among potentially many consumers. It is
// associated with a SubConn, and an implementation will typically contain
// other methods to provide additional functionality, e.g. configuration or
// subscription registration.
-type Producer interface {
-}
type Producer any
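
The balancer.go hunks above deprecate Balancer.UpdateSubConnState and ClientConn.RemoveSubConn in favour of NewSubConnOptions.StateListener and the new SubConn.Shutdown. A rough sketch of the new pattern inside a hypothetical custom balancer (newSubConn and dropSubConn are illustrative helpers, not grpc-go API):

```go
package custombalancer

import (
	"google.golang.org/grpc/balancer"
	"google.golang.org/grpc/connectivity"
	"google.golang.org/grpc/resolver"
)

// newSubConn declares the SubConn variable first so the StateListener closure
// can capture it, and attaches the listener via NewSubConnOptions instead of
// relying on the deprecated Balancer.UpdateSubConnState callback.
func newSubConn(cc balancer.ClientConn, addr resolver.Address) (balancer.SubConn, error) {
	var sc balancer.SubConn
	opts := balancer.NewSubConnOptions{
		StateListener: func(state balancer.SubConnState) {
			switch state.ConnectivityState {
			case connectivity.Idle:
				sc.Connect() // reconnect when the connection goes idle
			case connectivity.Shutdown:
				// Final notification for this SubConn; release any references.
			}
		},
	}
	sc, err := cc.NewSubConn([]resolver.Address{addr}, opts)
	if err != nil {
		return nil, err
	}
	sc.Connect()
	return sc, nil
}

// dropSubConn shuts a SubConn down directly; cc.RemoveSubConn(sc) is
// deprecated in this release in favour of SubConn.Shutdown.
func dropSubConn(sc balancer.SubConn) { sc.Shutdown() }
```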

View File

@ -105,7 +105,12 @@ func (b *baseBalancer) UpdateClientConnState(s balancer.ClientConnState) error {
		addrsSet.Set(a, nil)
		if _, ok := b.subConns.Get(a); !ok {
			// a is a new address (not existing in b.subConns).
-			sc, err := b.cc.NewSubConn([]resolver.Address{a}, balancer.NewSubConnOptions{HealthCheckEnabled: b.config.HealthCheck})
			var sc balancer.SubConn
			opts := balancer.NewSubConnOptions{
				HealthCheckEnabled: b.config.HealthCheck,
				StateListener:      func(scs balancer.SubConnState) { b.updateSubConnState(sc, scs) },
			}
			sc, err := b.cc.NewSubConn([]resolver.Address{a}, opts)
			if err != nil {
				logger.Warningf("base.baseBalancer: failed to create new SubConn: %v", err)
				continue
@ -121,10 +126,10 @@ func (b *baseBalancer) UpdateClientConnState(s balancer.ClientConnState) error {
		sc := sci.(balancer.SubConn)
		// a was removed by resolver.
		if _, ok := addrsSet.Get(a); !ok {
-			b.cc.RemoveSubConn(sc)
			sc.Shutdown()
			b.subConns.Delete(a)
			// Keep the state of this sc in b.scStates until sc's state becomes Shutdown.
-			// The entry will be deleted in UpdateSubConnState.
			// The entry will be deleted in updateSubConnState.
		}
	}
	// If resolver state contains no addresses, return an error so ClientConn
@ -177,7 +182,12 @@ func (b *baseBalancer) regeneratePicker() {
	b.picker = b.pickerBuilder.Build(PickerBuildInfo{ReadySCs: readySCs})
}

// UpdateSubConnState is a nop because a StateListener is always set in NewSubConn.
func (b *baseBalancer) UpdateSubConnState(sc balancer.SubConn, state balancer.SubConnState) {
	logger.Errorf("base.baseBalancer: UpdateSubConnState(%v, %+v) called unexpectedly", sc, state)
}

func (b *baseBalancer) updateSubConnState(sc balancer.SubConn, state balancer.SubConnState) {
	s := state.ConnectivityState
	if logger.V(2) {
		logger.Infof("base.baseBalancer: handle SubConn state change: %p, %v", sc, s)
@ -204,8 +214,8 @@ func (b *baseBalancer) UpdateSubConnState(sc balancer.SubConn, state balancer.Su
	case connectivity.Idle:
		sc.Connect()
	case connectivity.Shutdown:
-		// When an address was removed by resolver, b called RemoveSubConn but
-		// kept the sc's state in scStates. Remove state for this sc here.
		// When an address was removed by resolver, b called Shutdown but kept
		// the sc's state in scStates. Remove state for this sc here.
		delete(b.scStates, sc)
	case connectivity.TransientFailure:
		// Save error to be reported via picker.
@ -226,7 +236,7 @@ func (b *baseBalancer) UpdateSubConnState(sc balancer.SubConn, state balancer.Su
}

// Close is a nop because base balancer doesn't have internal state to clean up,
-// and it doesn't need to call RemoveSubConn for the SubConns.
// and it doesn't need to call Shutdown for the SubConns.
func (b *baseBalancer) Close() {
}

View File

@ -99,20 +99,6 @@ func (ccb *ccBalancerWrapper) updateClientConnState(ccs *balancer.ClientConnStat
	// lock held. But the lock guards only the scheduling part. The actual
	// callback is called asynchronously without the lock being held.
	ok := ccb.serializer.Schedule(func(_ context.Context) {
-		// If the addresses specified in the update contain addresses of type
-		// "grpclb" and the selected LB policy is not "grpclb", these addresses
-		// will be filtered out and ccs will be modified with the updated
-		// address list.
-		if ccb.curBalancerName != grpclbName {
-			var addrs []resolver.Address
-			for _, addr := range ccs.ResolverState.Addresses {
-				if addr.Type == resolver.GRPCLB {
-					continue
-				}
-				addrs = append(addrs, addr)
-			}
-			ccs.ResolverState.Addresses = addrs
-		}
		errCh <- ccb.balancer.UpdateClientConnState(*ccs)
	})
	if !ok {
@ -139,7 +125,9 @@ func (ccb *ccBalancerWrapper) updateClientConnState(ccs *balancer.ClientConnStat
func (ccb *ccBalancerWrapper) updateSubConnState(sc balancer.SubConn, s connectivity.State, err error) {
	ccb.mu.Lock()
	ccb.serializer.Schedule(func(_ context.Context) {
-		ccb.balancer.UpdateSubConnState(sc, balancer.SubConnState{ConnectivityState: s, ConnectionError: err})
		// Even though it is optional for balancers, gracefulswitch ensures
		// opts.StateListener is set, so this cannot ever be nil.
		sc.(*acBalancerWrapper).stateListener(balancer.SubConnState{ConnectivityState: s, ConnectionError: err})
	})
	ccb.mu.Unlock()
}
@ -221,7 +209,7 @@ func (ccb *ccBalancerWrapper) closeBalancer(m ccbMode) {
	}

	ccb.mode = m
-	done := ccb.serializer.Done
	done := ccb.serializer.Done()
	b := ccb.balancer
	ok := ccb.serializer.Schedule(func(_ context.Context) {
		// Close the serializer to ensure that no more calls from gRPC are sent
@ -238,11 +226,9 @@ func (ccb *ccBalancerWrapper) closeBalancer(m ccbMode) {
	}
	ccb.mu.Unlock()

-	// Give enqueued callbacks a chance to finish.
	// Give enqueued callbacks a chance to finish before closing the balancer.
	<-done
-	// Spawn a goroutine to close the balancer (since it may block trying to
-	// cleanup all allocated resources) and return early.
-	go b.Close()
	b.Close()
}

// exitIdleMode is invoked by grpc when the channel exits idle mode either
@ -314,29 +300,19 @@ func (ccb *ccBalancerWrapper) NewSubConn(addrs []resolver.Address, opts balancer
		channelz.Warningf(logger, ccb.cc.channelzID, "acBalancerWrapper: NewSubConn: failed to newAddrConn: %v", err)
		return nil, err
	}
-	acbw := &acBalancerWrapper{ac: ac, producers: make(map[balancer.ProducerBuilder]*refCountedProducer)}
	acbw := &acBalancerWrapper{
		ccb:           ccb,
		ac:            ac,
		producers:     make(map[balancer.ProducerBuilder]*refCountedProducer),
		stateListener: opts.StateListener,
	}
	ac.acbw = acbw
	return acbw, nil
}

func (ccb *ccBalancerWrapper) RemoveSubConn(sc balancer.SubConn) {
-	if ccb.isIdleOrClosed() {
-		// It it safe to ignore this call when the balancer is closed or in idle
-		// because the ClientConn takes care of closing the connections.
-		//
-		// Not returning early from here when the balancer is closed or in idle
-		// leads to a deadlock though, because of the following sequence of
-		// calls when holding cc.mu:
-		// cc.exitIdleMode --> ccb.enterIdleMode --> gsw.Close -->
-		// ccb.RemoveAddrConn --> cc.removeAddrConn
-		return
-	}
-	acbw, ok := sc.(*acBalancerWrapper)
-	if !ok {
-		return
-	}
-	ccb.cc.removeAddrConn(acbw.ac, errConnDrain)
	// The graceful switch balancer will never call this.
	logger.Errorf("ccb RemoveSubConn(%v) called unexpectedly, sc")
}

func (ccb *ccBalancerWrapper) UpdateAddresses(sc balancer.SubConn, addrs []resolver.Address) {
@ -380,7 +356,9 @@ func (ccb *ccBalancerWrapper) Target() string {
// acBalancerWrapper is a wrapper on top of ac for balancers.
// It implements balancer.SubConn interface.
type acBalancerWrapper struct {
	ac            *addrConn          // read-only
	ccb           *ccBalancerWrapper // read-only
	stateListener func(balancer.SubConnState)

	mu        sync.Mutex
	producers map[balancer.ProducerBuilder]*refCountedProducer
@ -398,6 +376,23 @@ func (acbw *acBalancerWrapper) Connect() {
	go acbw.ac.connect()
}

func (acbw *acBalancerWrapper) Shutdown() {
	ccb := acbw.ccb
	if ccb.isIdleOrClosed() {
		// It it safe to ignore this call when the balancer is closed or in idle
		// because the ClientConn takes care of closing the connections.
		//
		// Not returning early from here when the balancer is closed or in idle
		// leads to a deadlock though, because of the following sequence of
		// calls when holding cc.mu:
		// cc.exitIdleMode --> ccb.enterIdleMode --> gsw.Close -->
		// ccb.RemoveAddrConn --> cc.removeAddrConn
		return
	}
	ccb.cc.removeAddrConn(acbw.ac, errConnDrain)
}

// NewStream begins a streaming RPC on the addrConn. If the addrConn is not
// ready, blocks until it is or ctx expires. Returns an error when the context
// expires or the addrConn is shut down.
@ -411,7 +406,7 @@ func (acbw *acBalancerWrapper) NewStream(ctx context.Context, desc *StreamDesc,
// Invoke performs a unary RPC. If the addrConn is not ready, returns
// errSubConnNotReady.
-func (acbw *acBalancerWrapper) Invoke(ctx context.Context, method string, args interface{}, reply interface{}, opts ...CallOption) error {
func (acbw *acBalancerWrapper) Invoke(ctx context.Context, method string, args any, reply any, opts ...CallOption) error {
	cs, err := acbw.NewStream(ctx, unaryStreamDesc, method, opts...)
	if err != nil {
		return err

View File

@ -18,7 +18,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
-// protoc-gen-go v1.30.0
// protoc-gen-go v1.31.0
// protoc v4.22.0
// source: grpc/binlog/v1/binarylog.proto

View File

@ -26,12 +26,7 @@ import (
// received. This is typically called by generated code.
//
// All errors returned by Invoke are compatible with the status package.
-func (cc *ClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...CallOption) error {
func (cc *ClientConn) Invoke(ctx context.Context, method string, args, reply any, opts ...CallOption) error {
-	if err := cc.idlenessMgr.onCallBegin(); err != nil {
-		return err
-	}
-	defer cc.idlenessMgr.onCallEnd()
	// allow interceptor to see all applicable call options, which means those
	// configured as defaults from dial option as well as per-call options
	opts = combine(cc.dopts.callOptions, opts)
@ -61,13 +56,13 @@ func combine(o1 []CallOption, o2 []CallOption) []CallOption {
// received. This is typically called by generated code.
//
// DEPRECATED: Use ClientConn.Invoke instead.
-func Invoke(ctx context.Context, method string, args, reply interface{}, cc *ClientConn, opts ...CallOption) error {
func Invoke(ctx context.Context, method string, args, reply any, cc *ClientConn, opts ...CallOption) error {
	return cc.Invoke(ctx, method, args, reply, opts...)
}

var unaryStreamDesc = &StreamDesc{ServerStreams: false, ClientStreams: false}

-func invoke(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error {
func invoke(ctx context.Context, method string, req, reply any, cc *ClientConn, opts ...CallOption) error {
	cs, err := newClientStream(ctx, unaryStreamDesc, cc, method, opts...)
	if err != nil {
		return err
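
The call.go hunk above only changes Invoke's signatures from interface{} to any; calling them is unchanged. A small sketch, assuming a server on localhost:50051 that exposes the standard gRPC health service (generated stubs normally make this call for you):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	cc, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer cc.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Invoke takes the full method name plus request and reply values.
	req := &healthpb.HealthCheckRequest{}
	res := &healthpb.HealthCheckResponse{}
	if err := cc.Invoke(ctx, "/grpc.health.v1.Health/Check", req, res); err != nil {
		log.Fatal(err)
	}
	log.Println("health status:", res.GetStatus())
}
```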

View File

@ -34,9 +34,12 @@ import (
"google.golang.org/grpc/codes" "google.golang.org/grpc/codes"
"google.golang.org/grpc/connectivity" "google.golang.org/grpc/connectivity"
"google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials"
"google.golang.org/grpc/internal"
"google.golang.org/grpc/internal/backoff" "google.golang.org/grpc/internal/backoff"
"google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/channelz"
"google.golang.org/grpc/internal/grpcsync" "google.golang.org/grpc/internal/grpcsync"
"google.golang.org/grpc/internal/idle"
"google.golang.org/grpc/internal/pretty"
iresolver "google.golang.org/grpc/internal/resolver" iresolver "google.golang.org/grpc/internal/resolver"
"google.golang.org/grpc/internal/transport" "google.golang.org/grpc/internal/transport"
"google.golang.org/grpc/keepalive" "google.golang.org/grpc/keepalive"
@ -53,8 +56,6 @@ import (
const ( const (
// minimum time to give a connection to complete // minimum time to give a connection to complete
minConnectTimeout = 20 * time.Second minConnectTimeout = 20 * time.Second
// must match grpclbName in grpclb/grpclb.go
grpclbName = "grpclb"
) )
var ( var (
@ -137,7 +138,6 @@ func (dcs *defaultConfigSelector) SelectConfig(rpcInfo iresolver.RPCInfo) (*ires
func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) { func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) {
cc := &ClientConn{ cc := &ClientConn{
target: target, target: target,
csMgr: &connectivityStateManager{},
conns: make(map[*addrConn]struct{}), conns: make(map[*addrConn]struct{}),
dopts: defaultDialOptions(), dopts: defaultDialOptions(),
czData: new(channelzData), czData: new(channelzData),
@ -190,6 +190,8 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
// Register ClientConn with channelz. // Register ClientConn with channelz.
cc.channelzRegistration(target) cc.channelzRegistration(target)
cc.csMgr = newConnectivityStateManager(cc.ctx, cc.channelzID)
if err := cc.validateTransportCredentials(); err != nil { if err := cc.validateTransportCredentials(); err != nil {
return nil, err return nil, err
} }
@ -265,7 +267,7 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
// Configure idleness support with configured idle timeout or default idle // Configure idleness support with configured idle timeout or default idle
// timeout duration. Idleness can be explicitly disabled by the user, by // timeout duration. Idleness can be explicitly disabled by the user, by
// setting the dial option to 0. // setting the dial option to 0.
cc.idlenessMgr = newIdlenessManager(cc, cc.dopts.idleTimeout) cc.idlenessMgr = idle.NewManager(idle.ManagerOptions{Enforcer: (*idler)(cc), Timeout: cc.dopts.idleTimeout, Logger: logger})
// Return early for non-blocking dials. // Return early for non-blocking dials.
if !cc.dopts.block { if !cc.dopts.block {
@ -316,6 +318,16 @@ func (cc *ClientConn) addTraceEvent(msg string) {
channelz.AddTraceEvent(logger, cc.channelzID, 0, ted) channelz.AddTraceEvent(logger, cc.channelzID, 0, ted)
} }
type idler ClientConn
func (i *idler) EnterIdleMode() error {
return (*ClientConn)(i).enterIdleMode()
}
func (i *idler) ExitIdleMode() error {
return (*ClientConn)(i).exitIdleMode()
}
// exitIdleMode moves the channel out of idle mode by recreating the name // exitIdleMode moves the channel out of idle mode by recreating the name
// resolver and load balancer. // resolver and load balancer.
func (cc *ClientConn) exitIdleMode() error { func (cc *ClientConn) exitIdleMode() error {
@ -326,7 +338,7 @@ func (cc *ClientConn) exitIdleMode() error {
} }
if cc.idlenessState != ccIdlenessStateIdle { if cc.idlenessState != ccIdlenessStateIdle {
cc.mu.Unlock() cc.mu.Unlock()
logger.Info("ClientConn asked to exit idle mode when not in idle mode") channelz.Infof(logger, cc.channelzID, "ClientConn asked to exit idle mode, current mode is %v", cc.idlenessState)
return nil return nil
} }
@ -349,7 +361,7 @@ func (cc *ClientConn) exitIdleMode() error {
cc.idlenessState = ccIdlenessStateExitingIdle cc.idlenessState = ccIdlenessStateExitingIdle
exitedIdle := false exitedIdle := false
if cc.blockingpicker == nil { if cc.blockingpicker == nil {
cc.blockingpicker = newPickerWrapper() cc.blockingpicker = newPickerWrapper(cc.dopts.copts.StatsHandlers)
} else { } else {
cc.blockingpicker.exitIdleMode() cc.blockingpicker.exitIdleMode()
exitedIdle = true exitedIdle = true
@ -397,7 +409,8 @@ func (cc *ClientConn) enterIdleMode() error {
return ErrClientConnClosing return ErrClientConnClosing
} }
if cc.idlenessState != ccIdlenessStateActive { if cc.idlenessState != ccIdlenessStateActive {
logger.Error("ClientConn asked to enter idle mode when not active") channelz.Errorf(logger, cc.channelzID, "ClientConn asked to enter idle mode, current mode is %v", cc.idlenessState)
cc.mu.Unlock()
return nil return nil
} }
@ -474,7 +487,6 @@ func (cc *ClientConn) validateTransportCredentials() error {
func (cc *ClientConn) channelzRegistration(target string) { func (cc *ClientConn) channelzRegistration(target string) {
cc.channelzID = channelz.RegisterChannel(&channelzChannel{cc}, cc.dopts.channelzParentID, target) cc.channelzID = channelz.RegisterChannel(&channelzChannel{cc}, cc.dopts.channelzParentID, target)
cc.addTraceEvent("created") cc.addTraceEvent("created")
cc.csMgr.channelzID = cc.channelzID
} }
// chainUnaryClientInterceptors chains all unary client interceptors into one. // chainUnaryClientInterceptors chains all unary client interceptors into one.
@ -491,7 +503,7 @@ func chainUnaryClientInterceptors(cc *ClientConn) {
} else if len(interceptors) == 1 { } else if len(interceptors) == 1 {
chainedInt = interceptors[0] chainedInt = interceptors[0]
} else { } else {
chainedInt = func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error { chainedInt = func(ctx context.Context, method string, req, reply any, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error {
return interceptors[0](ctx, method, req, reply, cc, getChainUnaryInvoker(interceptors, 0, invoker), opts...) return interceptors[0](ctx, method, req, reply, cc, getChainUnaryInvoker(interceptors, 0, invoker), opts...)
} }
} }
@ -503,7 +515,7 @@ func getChainUnaryInvoker(interceptors []UnaryClientInterceptor, curr int, final
if curr == len(interceptors)-1 { if curr == len(interceptors)-1 {
return finalInvoker return finalInvoker
} }
return func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error { return func(ctx context.Context, method string, req, reply any, cc *ClientConn, opts ...CallOption) error {
return interceptors[curr+1](ctx, method, req, reply, cc, getChainUnaryInvoker(interceptors, curr+1, finalInvoker), opts...) return interceptors[curr+1](ctx, method, req, reply, cc, getChainUnaryInvoker(interceptors, curr+1, finalInvoker), opts...)
} }
} }
@ -539,13 +551,27 @@ func getChainStreamer(interceptors []StreamClientInterceptor, curr int, finalStr
} }
} }
// newConnectivityStateManager creates an connectivityStateManager with
// the specified id.
func newConnectivityStateManager(ctx context.Context, id *channelz.Identifier) *connectivityStateManager {
return &connectivityStateManager{
channelzID: id,
pubSub: grpcsync.NewPubSub(ctx),
}
}
// connectivityStateManager keeps the connectivity.State of ClientConn. // connectivityStateManager keeps the connectivity.State of ClientConn.
// This struct will eventually be exported so the balancers can access it. // This struct will eventually be exported so the balancers can access it.
//
// TODO: If possible, get rid of the `connectivityStateManager` type, and
// provide this functionality using the `PubSub`, to avoid keeping track of
// the connectivity state at two places.
type connectivityStateManager struct { type connectivityStateManager struct {
mu sync.Mutex mu sync.Mutex
state connectivity.State state connectivity.State
notifyChan chan struct{} notifyChan chan struct{}
channelzID *channelz.Identifier channelzID *channelz.Identifier
pubSub *grpcsync.PubSub
} }
// updateState updates the connectivity.State of ClientConn. // updateState updates the connectivity.State of ClientConn.
@ -561,6 +587,8 @@ func (csm *connectivityStateManager) updateState(state connectivity.State) {
return return
} }
csm.state = state csm.state = state
csm.pubSub.Publish(state)
channelz.Infof(logger, csm.channelzID, "Channel Connectivity change to %v", state) channelz.Infof(logger, csm.channelzID, "Channel Connectivity change to %v", state)
if csm.notifyChan != nil { if csm.notifyChan != nil {
// There are other goroutines waiting on this channel. // There are other goroutines waiting on this channel.
@ -590,7 +618,7 @@ func (csm *connectivityStateManager) getNotifyChan() <-chan struct{} {
type ClientConnInterface interface { type ClientConnInterface interface {
// Invoke performs a unary RPC and returns after the response is received // Invoke performs a unary RPC and returns after the response is received
// into reply. // into reply.
Invoke(ctx context.Context, method string, args interface{}, reply interface{}, opts ...CallOption) error Invoke(ctx context.Context, method string, args any, reply any, opts ...CallOption) error
// NewStream begins a streaming RPC. // NewStream begins a streaming RPC.
NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error) NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error)
} }
@ -622,7 +650,7 @@ type ClientConn struct {
channelzID *channelz.Identifier // Channelz identifier for the channel. channelzID *channelz.Identifier // Channelz identifier for the channel.
resolverBuilder resolver.Builder // See parseTargetAndFindResolver(). resolverBuilder resolver.Builder // See parseTargetAndFindResolver().
balancerWrapper *ccBalancerWrapper // Uses gracefulswitch.balancer underneath. balancerWrapper *ccBalancerWrapper // Uses gracefulswitch.balancer underneath.
idlenessMgr idlenessManager idlenessMgr idle.Manager
// The following provide their own synchronization, and therefore don't // The following provide their own synchronization, and therefore don't
// require cc.mu to be held to access them. // require cc.mu to be held to access them.
@ -668,6 +696,19 @@ const (
ccIdlenessStateExitingIdle ccIdlenessStateExitingIdle
) )
func (s ccIdlenessState) String() string {
switch s {
case ccIdlenessStateActive:
return "active"
case ccIdlenessStateIdle:
return "idle"
case ccIdlenessStateExitingIdle:
return "exitingIdle"
default:
return "unknown"
}
}
// WaitForStateChange waits until the connectivity.State of ClientConn changes from sourceState or // WaitForStateChange waits until the connectivity.State of ClientConn changes from sourceState or
// ctx expires. A true value is returned in former case and false in latter. // ctx expires. A true value is returned in former case and false in latter.
// //
@ -759,6 +800,10 @@ func init() {
panic(fmt.Sprintf("impossible error parsing empty service config: %v", cfg.Err)) panic(fmt.Sprintf("impossible error parsing empty service config: %v", cfg.Err))
} }
emptyServiceConfig = cfg.Config.(*ServiceConfig) emptyServiceConfig = cfg.Config.(*ServiceConfig)
internal.SubscribeToConnectivityStateChanges = func(cc *ClientConn, s grpcsync.Subscriber) func() {
return cc.csMgr.pubSub.Subscribe(s)
}
} }
func (cc *ClientConn) maybeApplyDefaultServiceConfig(addrs []resolver.Address) { func (cc *ClientConn) maybeApplyDefaultServiceConfig(addrs []resolver.Address) {
@ -867,6 +912,20 @@ func (cc *ClientConn) handleSubConnStateChange(sc balancer.SubConn, s connectivi
cc.balancerWrapper.updateSubConnState(sc, s, err) cc.balancerWrapper.updateSubConnState(sc, s, err)
} }
// Makes a copy of the input addresses slice and clears out the balancer
// attributes field. Addresses are passed during subconn creation and address
// update operations. In both cases, we will clear the balancer attributes by
// calling this function, and therefore we will be able to use the Equal method
// provided by the resolver.Address type for comparison.
func copyAddressesWithoutBalancerAttributes(in []resolver.Address) []resolver.Address {
out := make([]resolver.Address, len(in))
for i := range in {
out[i] = in[i]
out[i].BalancerAttributes = nil
}
return out
}
// newAddrConn creates an addrConn for addrs and adds it to cc.conns. // newAddrConn creates an addrConn for addrs and adds it to cc.conns.
// //
// Caller needs to make sure len(addrs) > 0. // Caller needs to make sure len(addrs) > 0.
@ -874,7 +933,7 @@ func (cc *ClientConn) newAddrConn(addrs []resolver.Address, opts balancer.NewSub
ac := &addrConn{ ac := &addrConn{
state: connectivity.Idle, state: connectivity.Idle,
cc: cc, cc: cc,
addrs: addrs, addrs: copyAddressesWithoutBalancerAttributes(addrs),
scopts: opts, scopts: opts,
dopts: cc.dopts, dopts: cc.dopts,
czData: new(channelzData), czData: new(channelzData),
@ -995,8 +1054,9 @@ func equalAddresses(a, b []resolver.Address) bool {
// connections or connection attempts. // connections or connection attempts.
func (ac *addrConn) updateAddrs(addrs []resolver.Address) { func (ac *addrConn) updateAddrs(addrs []resolver.Address) {
ac.mu.Lock() ac.mu.Lock()
channelz.Infof(logger, ac.channelzID, "addrConn: updateAddrs curAddr: %v, addrs: %v", ac.curAddr, addrs) channelz.Infof(logger, ac.channelzID, "addrConn: updateAddrs curAddr: %v, addrs: %v", pretty.ToJSON(ac.curAddr), pretty.ToJSON(addrs))
addrs = copyAddressesWithoutBalancerAttributes(addrs)
if equalAddresses(ac.addrs, addrs) { if equalAddresses(ac.addrs, addrs) {
ac.mu.Unlock() ac.mu.Unlock()
return return
@ -1031,8 +1091,8 @@ func (ac *addrConn) updateAddrs(addrs []resolver.Address) {
ac.cancel() ac.cancel()
ac.ctx, ac.cancel = context.WithCancel(ac.cc.ctx) ac.ctx, ac.cancel = context.WithCancel(ac.cc.ctx)
// We have to defer here because GracefulClose => Close => onClose, which // We have to defer here because GracefulClose => onClose, which requires
// requires locking ac.mu. // locking ac.mu.
if ac.transport != nil { if ac.transport != nil {
defer ac.transport.GracefulClose() defer ac.transport.GracefulClose()
ac.transport = nil ac.transport = nil
@ -1137,23 +1197,13 @@ func (cc *ClientConn) applyServiceConfigAndBalancer(sc *ServiceConfig, configSel
	}

	var newBalancerName string
-	if cc.sc != nil && cc.sc.lbConfig != nil {
	if cc.sc == nil || (cc.sc.lbConfig == nil && cc.sc.LB == nil) {
		// No service config or no LB policy specified in config.
		newBalancerName = PickFirstBalancerName
	} else if cc.sc.lbConfig != nil {
		newBalancerName = cc.sc.lbConfig.name
-	} else {
-		var isGRPCLB bool
-		for _, a := range addrs {
-			if a.Type == resolver.GRPCLB {
-				isGRPCLB = true
-				break
-			}
-		}
-		if isGRPCLB {
-			newBalancerName = grpclbName
-		} else if cc.sc != nil && cc.sc.LB != nil {
-			newBalancerName = *cc.sc.LB
-		} else {
-			newBalancerName = PickFirstBalancerName
-		}
	} else { // cc.sc.LB != nil
		newBalancerName = *cc.sc.LB
	}
	cc.balancerWrapper.switchTo(newBalancerName)
}
@ -1192,7 +1242,10 @@ func (cc *ClientConn) ResetConnectBackoff() {
// Close tears down the ClientConn and all underlying connections. // Close tears down the ClientConn and all underlying connections.
func (cc *ClientConn) Close() error { func (cc *ClientConn) Close() error {
defer cc.cancel() defer func() {
cc.cancel()
<-cc.csMgr.pubSub.Done()
}()
cc.mu.Lock() cc.mu.Lock()
if cc.conns == nil { if cc.conns == nil {
@ -1226,7 +1279,7 @@ func (cc *ClientConn) Close() error {
rWrapper.close() rWrapper.close()
} }
if idlenessMgr != nil { if idlenessMgr != nil {
idlenessMgr.close() idlenessMgr.Close()
} }
for ac := range conns { for ac := range conns {
@ -1336,12 +1389,14 @@ func (ac *addrConn) resetTransport() {
	if err := ac.tryAllAddrs(acCtx, addrs, connectDeadline); err != nil {
		ac.cc.resolveNow(resolver.ResolveNowOptions{})
-		// After exhausting all addresses, the addrConn enters
-		// TRANSIENT_FAILURE.
		ac.mu.Lock()
		if acCtx.Err() != nil {
			// addrConn was torn down.
			ac.mu.Unlock()
			return
		}
-		ac.mu.Lock()
		// After exhausting all addresses, the addrConn enters
		// TRANSIENT_FAILURE.
		ac.updateConnectivityState(connectivity.TransientFailure, err)

		// Backoff.
@ -1537,7 +1592,7 @@ func (ac *addrConn) startHealthCheck(ctx context.Context) {
// Set up the health check helper functions. // Set up the health check helper functions.
currentTr := ac.transport currentTr := ac.transport
newStream := func(method string) (interface{}, error) { newStream := func(method string) (any, error) {
ac.mu.Lock() ac.mu.Lock()
if ac.transport != currentTr { if ac.transport != currentTr {
ac.mu.Unlock() ac.mu.Unlock()
@ -1625,16 +1680,7 @@ func (ac *addrConn) tearDown(err error) {
ac.updateConnectivityState(connectivity.Shutdown, nil) ac.updateConnectivityState(connectivity.Shutdown, nil)
ac.cancel() ac.cancel()
ac.curAddr = resolver.Address{} ac.curAddr = resolver.Address{}
if err == errConnDrain && curTr != nil {
// GracefulClose(...) may be executed multiple times when
// i) receiving multiple GoAway frames from the server; or
// ii) there are concurrent name resolver/Balancer triggered
// address removal and GoAway.
// We have to unlock and re-lock here because GracefulClose => Close => onClose, which requires locking ac.mu.
ac.mu.Unlock()
curTr.GracefulClose()
ac.mu.Lock()
}
channelz.AddTraceEvent(logger, ac.channelzID, 0, &channelz.TraceEventDesc{ channelz.AddTraceEvent(logger, ac.channelzID, 0, &channelz.TraceEventDesc{
Desc: "Subchannel deleted", Desc: "Subchannel deleted",
Severity: channelz.CtInfo, Severity: channelz.CtInfo,
@ -1648,6 +1694,29 @@ func (ac *addrConn) tearDown(err error) {
// being deleted right away. // being deleted right away.
channelz.RemoveEntry(ac.channelzID) channelz.RemoveEntry(ac.channelzID)
ac.mu.Unlock() ac.mu.Unlock()
// We have to release the lock before the call to GracefulClose/Close here
// because both of them call onClose(), which requires locking ac.mu.
if curTr != nil {
if err == errConnDrain {
// Close the transport gracefully when the subConn is being shutdown.
//
// GracefulClose() may be executed multiple times if:
// - multiple GoAway frames are received from the server
// - there are concurrent name resolver or balancer triggered
// address removal and GoAway
curTr.GracefulClose()
} else {
// Hard close the transport when the channel is entering idle or is
// being shutdown. In the case where the channel is being shutdown,
// closing of transports is also taken care of by cancelation of cc.ctx.
// But in the case where the channel is entering idle, we need to
// explicitly close the transports here. Instead of distinguishing
// between these two cases, it is simpler to close the transport
// unconditionally here.
curTr.Close(err)
}
}
} }
func (ac *addrConn) getState() connectivity.State { func (ac *addrConn) getState() connectivity.State {
@ -1807,19 +1876,70 @@ func (cc *ClientConn) parseTargetAndFindResolver() error {
}

// parseTarget uses RFC 3986 semantics to parse the given target into a
-// resolver.Target struct containing scheme, authority and url. Query
-// params are stripped from the endpoint.
// resolver.Target struct containing url. Query params are stripped from the
// endpoint.
func parseTarget(target string) (resolver.Target, error) {
	u, err := url.Parse(target)
	if err != nil {
		return resolver.Target{}, err
	}
-	return resolver.Target{
-		Scheme:    u.Scheme,
-		Authority: u.Host,
-		URL:       *u,
-	}, nil
	return resolver.Target{URL: *u}, nil
}

func encodeAuthority(authority string) string {
	const upperhex = "0123456789ABCDEF"
// Return for characters that must be escaped as per
// Valid chars are mentioned here:
// https://datatracker.ietf.org/doc/html/rfc3986#section-3.2
shouldEscape := func(c byte) bool {
// Alphanum are always allowed.
if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || '0' <= c && c <= '9' {
return false
}
switch c {
case '-', '_', '.', '~': // Unreserved characters
return false
case '!', '$', '&', '\'', '(', ')', '*', '+', ',', ';', '=': // Subdelim characters
return false
case ':', '[', ']', '@': // Authority related delimeters
return false
}
// Everything else must be escaped.
return true
}
hexCount := 0
for i := 0; i < len(authority); i++ {
c := authority[i]
if shouldEscape(c) {
hexCount++
}
}
if hexCount == 0 {
return authority
}
required := len(authority) + 2*hexCount
t := make([]byte, required)
j := 0
// This logic is a barebones version of escape in the go net/url library.
for i := 0; i < len(authority); i++ {
switch c := authority[i]; {
case shouldEscape(c):
t[j] = '%'
t[j+1] = upperhex[c>>4]
t[j+2] = upperhex[c&15]
j += 3
default:
t[j] = authority[i]
j++
}
}
return string(t)
} }
// Determine channel authority. The order of precedence is as follows:
@ -1872,7 +1992,11 @@ func (cc *ClientConn) determineAuthority() error {
		// the channel authority given the user's dial target. For resolvers
		// which don't implement this interface, we will use the endpoint from
		// "scheme://authority/endpoint" as the default authority.
-		cc.authority = endpoint
		// Escape the endpoint to handle use cases where the endpoint
		// might not be a valid authority by default.
		// For example an endpoint which has multiple paths like
		// 'a/b/c', which is not a valid authority by default.
		cc.authority = encodeAuthority(endpoint)
	}
	channelz.Infof(logger, cc.channelzID, "Channel authority set to %q", cc.authority)
	return nil
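
With parseTarget now populating only the URL field of resolver.Target (see the hunk above), custom resolvers read the scheme, authority and endpoint through Target.URL. A brief sketch; the values in the comments assume a dial target of "dns://8.8.8.8/example.com:443":

```go
package exampleresolver

import "google.golang.org/grpc/resolver"

// splitTarget shows where the pieces of a dial target now live: the
// deprecated Scheme and Authority fields are no longer populated here, so
// everything comes from Target.URL (plus the Endpoint helper).
func splitTarget(t resolver.Target) (scheme, authority, endpoint string) {
	scheme = t.URL.Scheme   // "dns"
	authority = t.URL.Host  // "8.8.8.8"
	endpoint = t.Endpoint() // "example.com:443"
	return scheme, authority, endpoint
}
```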

View File

@ -27,8 +27,8 @@ import (
// omits the name/string, which vary between the two and are not needed for
// anything besides the registry in the encoding package.
type baseCodec interface {
-	Marshal(v interface{}) ([]byte, error)
-	Unmarshal(data []byte, v interface{}) error
	Marshal(v any) ([]byte, error)
	Unmarshal(data []byte, v any) error
}

var _ baseCodec = Codec(nil)
@ -41,9 +41,9 @@ var _ baseCodec = encoding.Codec(nil)
// Deprecated: use encoding.Codec instead.
type Codec interface {
	// Marshal returns the wire format of v.
-	Marshal(v interface{}) ([]byte, error)
	Marshal(v any) ([]byte, error)
	// Unmarshal parses the wire format into v.
-	Unmarshal(data []byte, v interface{}) error
	Unmarshal(data []byte, v any) error
	// String returns the name of the Codec implementation. This is unused by
	// gRPC.
	String() string

View File

@ -78,6 +78,7 @@ type dialOptions struct {
	defaultServiceConfigRawJSON *string
	resolvers                   []resolver.Builder
	idleTimeout                 time.Duration
	recvBufferPool              SharedBufferPool
}

// DialOption configures how we set up the connection.
@ -138,6 +139,20 @@ func newJoinDialOption(opts ...DialOption) DialOption {
	return &joinDialOption{opts: opts}
}
// WithSharedWriteBuffer allows reusing the per-connection transport write buffer.
// If this option is set to true, every connection will release the buffer after
// flushing the data on the wire.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func WithSharedWriteBuffer(val bool) DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.copts.SharedWriteBuffer = val
})
}
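A hedged usage sketch for the option above (the target "localhost:50051" and the insecure credentials are placeholders, not taken from this change):

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Opt in to the experimental shared write buffer for this client.
	conn, err := grpc.Dial(
		"localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithSharedWriteBuffer(true),
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
}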
// WithWriteBufferSize determines how much data can be batched before doing a // WithWriteBufferSize determines how much data can be batched before doing a
// write on the wire. The corresponding memory allocation for this buffer will // write on the wire. The corresponding memory allocation for this buffer will
// be twice the size to keep syscalls low. The default value for this buffer is // be twice the size to keep syscalls low. The default value for this buffer is
@ -628,6 +643,7 @@ func defaultDialOptions() dialOptions {
ReadBufferSize: defaultReadBufSize, ReadBufferSize: defaultReadBufSize,
UseProxy: true, UseProxy: true,
}, },
recvBufferPool: nopBufferPool{},
} }
} }
@ -676,3 +692,24 @@ func WithIdleTimeout(d time.Duration) DialOption {
o.idleTimeout = d o.idleTimeout = d
}) })
} }
// WithRecvBufferPool returns a DialOption that configures the ClientConn
// to use the provided shared buffer pool for parsing incoming messages. Depending
// on the application's workload, this could result in reduced memory allocation.
//
// If you are unsure about how to implement a memory pool but want to utilize one,
// begin with grpc.NewSharedBufferPool.
//
// Note: The shared buffer pool feature will not be active if any of the following
// options are used: WithStatsHandler, EnableTracing, or binary logging. In such
// cases, the shared buffer pool will be ignored.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func WithRecvBufferPool(bufferPool SharedBufferPool) DialOption {
return newFuncDialOption(func(o *dialOptions) {
o.recvBufferPool = bufferPool
})
}
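Continuing the dial sketch shown earlier (same imports, same placeholder target), the receive-buffer pool might be wired in like this; NewSharedBufferPool is the constructor the doc comment above points to:

// Share one receive-buffer pool across the client's streams.
pool := grpc.NewSharedBufferPool()
conn, err := grpc.Dial(
	"localhost:50051",
	grpc.WithTransportCredentials(insecure.NewCredentials()),
	grpc.WithRecvBufferPool(pool),
)
if err != nil {
	log.Fatalf("dial failed: %v", err)
}
defer conn.Close()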


@ -90,9 +90,9 @@ func GetCompressor(name string) Compressor {
// methods can be called from concurrent goroutines. // methods can be called from concurrent goroutines.
type Codec interface { type Codec interface {
// Marshal returns the wire format of v. // Marshal returns the wire format of v.
Marshal(v interface{}) ([]byte, error) Marshal(v any) ([]byte, error)
// Unmarshal parses the wire format into v. // Unmarshal parses the wire format into v.
Unmarshal(data []byte, v interface{}) error Unmarshal(data []byte, v any) error
// Name returns the name of the Codec implementation. The returned string // Name returns the name of the Codec implementation. The returned string
// will be used as part of content type in transmission. The result must be // will be used as part of content type in transmission. The result must be
// static; the result cannot change between calls. // static; the result cannot change between calls.
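As a sketch of how this interface is typically satisfied (the json codec and package name are illustrative assumptions, not part of this change), a codec is registered once at init time and can then be selected per call via grpc.CallContentSubtype("json"):

package jsoncodec

import (
	"encoding/json"

	"google.golang.org/grpc/encoding"
)

// jsonCodec marshals messages with encoding/json instead of protobuf.
type jsonCodec struct{}

func (jsonCodec) Marshal(v any) ([]byte, error)      { return json.Marshal(v) }
func (jsonCodec) Unmarshal(data []byte, v any) error { return json.Unmarshal(data, v) }
func (jsonCodec) Name() string                       { return "json" }

func init() {
	// Registers the codec under the content-subtype "json".
	encoding.RegisterCodec(jsonCodec{})
}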


@ -37,7 +37,7 @@ func init() {
// codec is a Codec implementation with protobuf. It is the default codec for gRPC. // codec is a Codec implementation with protobuf. It is the default codec for gRPC.
type codec struct{} type codec struct{}
func (codec) Marshal(v interface{}) ([]byte, error) { func (codec) Marshal(v any) ([]byte, error) {
vv, ok := v.(proto.Message) vv, ok := v.(proto.Message)
if !ok { if !ok {
return nil, fmt.Errorf("failed to marshal, message is %T, want proto.Message", v) return nil, fmt.Errorf("failed to marshal, message is %T, want proto.Message", v)
@ -45,7 +45,7 @@ func (codec) Marshal(v interface{}) ([]byte, error) {
return proto.Marshal(vv) return proto.Marshal(vv)
} }
func (codec) Unmarshal(data []byte, v interface{}) error { func (codec) Unmarshal(data []byte, v any) error {
vv, ok := v.(proto.Message) vv, ok := v.(proto.Message)
if !ok { if !ok {
return fmt.Errorf("failed to unmarshal, message is %T, want proto.Message", v) return fmt.Errorf("failed to unmarshal, message is %T, want proto.Message", v)


@ -31,71 +31,71 @@ type componentData struct {
var cache = map[string]*componentData{} var cache = map[string]*componentData{}
func (c *componentData) InfoDepth(depth int, args ...interface{}) { func (c *componentData) InfoDepth(depth int, args ...any) {
args = append([]interface{}{"[" + string(c.name) + "]"}, args...) args = append([]any{"[" + string(c.name) + "]"}, args...)
grpclog.InfoDepth(depth+1, args...) grpclog.InfoDepth(depth+1, args...)
} }
func (c *componentData) WarningDepth(depth int, args ...interface{}) { func (c *componentData) WarningDepth(depth int, args ...any) {
args = append([]interface{}{"[" + string(c.name) + "]"}, args...) args = append([]any{"[" + string(c.name) + "]"}, args...)
grpclog.WarningDepth(depth+1, args...) grpclog.WarningDepth(depth+1, args...)
} }
func (c *componentData) ErrorDepth(depth int, args ...interface{}) { func (c *componentData) ErrorDepth(depth int, args ...any) {
args = append([]interface{}{"[" + string(c.name) + "]"}, args...) args = append([]any{"[" + string(c.name) + "]"}, args...)
grpclog.ErrorDepth(depth+1, args...) grpclog.ErrorDepth(depth+1, args...)
} }
func (c *componentData) FatalDepth(depth int, args ...interface{}) { func (c *componentData) FatalDepth(depth int, args ...any) {
args = append([]interface{}{"[" + string(c.name) + "]"}, args...) args = append([]any{"[" + string(c.name) + "]"}, args...)
grpclog.FatalDepth(depth+1, args...) grpclog.FatalDepth(depth+1, args...)
} }
func (c *componentData) Info(args ...interface{}) { func (c *componentData) Info(args ...any) {
c.InfoDepth(1, args...) c.InfoDepth(1, args...)
} }
func (c *componentData) Warning(args ...interface{}) { func (c *componentData) Warning(args ...any) {
c.WarningDepth(1, args...) c.WarningDepth(1, args...)
} }
func (c *componentData) Error(args ...interface{}) { func (c *componentData) Error(args ...any) {
c.ErrorDepth(1, args...) c.ErrorDepth(1, args...)
} }
func (c *componentData) Fatal(args ...interface{}) { func (c *componentData) Fatal(args ...any) {
c.FatalDepth(1, args...) c.FatalDepth(1, args...)
} }
func (c *componentData) Infof(format string, args ...interface{}) { func (c *componentData) Infof(format string, args ...any) {
c.InfoDepth(1, fmt.Sprintf(format, args...)) c.InfoDepth(1, fmt.Sprintf(format, args...))
} }
func (c *componentData) Warningf(format string, args ...interface{}) { func (c *componentData) Warningf(format string, args ...any) {
c.WarningDepth(1, fmt.Sprintf(format, args...)) c.WarningDepth(1, fmt.Sprintf(format, args...))
} }
func (c *componentData) Errorf(format string, args ...interface{}) { func (c *componentData) Errorf(format string, args ...any) {
c.ErrorDepth(1, fmt.Sprintf(format, args...)) c.ErrorDepth(1, fmt.Sprintf(format, args...))
} }
func (c *componentData) Fatalf(format string, args ...interface{}) { func (c *componentData) Fatalf(format string, args ...any) {
c.FatalDepth(1, fmt.Sprintf(format, args...)) c.FatalDepth(1, fmt.Sprintf(format, args...))
} }
func (c *componentData) Infoln(args ...interface{}) { func (c *componentData) Infoln(args ...any) {
c.InfoDepth(1, args...) c.InfoDepth(1, args...)
} }
func (c *componentData) Warningln(args ...interface{}) { func (c *componentData) Warningln(args ...any) {
c.WarningDepth(1, args...) c.WarningDepth(1, args...)
} }
func (c *componentData) Errorln(args ...interface{}) { func (c *componentData) Errorln(args ...any) {
c.ErrorDepth(1, args...) c.ErrorDepth(1, args...)
} }
func (c *componentData) Fatalln(args ...interface{}) { func (c *componentData) Fatalln(args ...any) {
c.FatalDepth(1, args...) c.FatalDepth(1, args...)
} }
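Component loggers like the ones above are normally obtained through grpclog.Component; a small hedged sketch (the component name and messages are arbitrary):

package main

import "google.golang.org/grpc/grpclog"

// Component returns a logger that prefixes every entry with "[routing]".
var logger = grpclog.Component("routing")

func main() {
	logger.Infof("picked backend %s", "10.0.0.1:443")
	logger.Warning("backend list is empty; falling back to default")
}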


@ -42,53 +42,53 @@ func V(l int) bool {
} }
// Info logs to the INFO log. // Info logs to the INFO log.
func Info(args ...interface{}) { func Info(args ...any) {
grpclog.Logger.Info(args...) grpclog.Logger.Info(args...)
} }
// Infof logs to the INFO log. Arguments are handled in the manner of fmt.Printf. // Infof logs to the INFO log. Arguments are handled in the manner of fmt.Printf.
func Infof(format string, args ...interface{}) { func Infof(format string, args ...any) {
grpclog.Logger.Infof(format, args...) grpclog.Logger.Infof(format, args...)
} }
// Infoln logs to the INFO log. Arguments are handled in the manner of fmt.Println. // Infoln logs to the INFO log. Arguments are handled in the manner of fmt.Println.
func Infoln(args ...interface{}) { func Infoln(args ...any) {
grpclog.Logger.Infoln(args...) grpclog.Logger.Infoln(args...)
} }
// Warning logs to the WARNING log. // Warning logs to the WARNING log.
func Warning(args ...interface{}) { func Warning(args ...any) {
grpclog.Logger.Warning(args...) grpclog.Logger.Warning(args...)
} }
// Warningf logs to the WARNING log. Arguments are handled in the manner of fmt.Printf. // Warningf logs to the WARNING log. Arguments are handled in the manner of fmt.Printf.
func Warningf(format string, args ...interface{}) { func Warningf(format string, args ...any) {
grpclog.Logger.Warningf(format, args...) grpclog.Logger.Warningf(format, args...)
} }
// Warningln logs to the WARNING log. Arguments are handled in the manner of fmt.Println. // Warningln logs to the WARNING log. Arguments are handled in the manner of fmt.Println.
func Warningln(args ...interface{}) { func Warningln(args ...any) {
grpclog.Logger.Warningln(args...) grpclog.Logger.Warningln(args...)
} }
// Error logs to the ERROR log. // Error logs to the ERROR log.
func Error(args ...interface{}) { func Error(args ...any) {
grpclog.Logger.Error(args...) grpclog.Logger.Error(args...)
} }
// Errorf logs to the ERROR log. Arguments are handled in the manner of fmt.Printf. // Errorf logs to the ERROR log. Arguments are handled in the manner of fmt.Printf.
func Errorf(format string, args ...interface{}) { func Errorf(format string, args ...any) {
grpclog.Logger.Errorf(format, args...) grpclog.Logger.Errorf(format, args...)
} }
// Errorln logs to the ERROR log. Arguments are handled in the manner of fmt.Println. // Errorln logs to the ERROR log. Arguments are handled in the manner of fmt.Println.
func Errorln(args ...interface{}) { func Errorln(args ...any) {
grpclog.Logger.Errorln(args...) grpclog.Logger.Errorln(args...)
} }
// Fatal logs to the FATAL log. Arguments are handled in the manner of fmt.Print. // Fatal logs to the FATAL log. Arguments are handled in the manner of fmt.Print.
// It calls os.Exit() with exit code 1. // It calls os.Exit() with exit code 1.
func Fatal(args ...interface{}) { func Fatal(args ...any) {
grpclog.Logger.Fatal(args...) grpclog.Logger.Fatal(args...)
// Make sure fatal logs will exit. // Make sure fatal logs will exit.
os.Exit(1) os.Exit(1)
@ -96,7 +96,7 @@ func Fatal(args ...interface{}) {
// Fatalf logs to the FATAL log. Arguments are handled in the manner of fmt.Printf. // Fatalf logs to the FATAL log. Arguments are handled in the manner of fmt.Printf.
// It calls os.Exit() with exit code 1. // It calls os.Exit() with exit code 1.
func Fatalf(format string, args ...interface{}) { func Fatalf(format string, args ...any) {
grpclog.Logger.Fatalf(format, args...) grpclog.Logger.Fatalf(format, args...)
// Make sure fatal logs will exit. // Make sure fatal logs will exit.
os.Exit(1) os.Exit(1)
@ -104,7 +104,7 @@ func Fatalf(format string, args ...interface{}) {
// Fatalln logs to the FATAL log. Arguments are handled in the manner of fmt.Println. // Fatalln logs to the FATAL log. Arguments are handled in the manner of fmt.Println.
// It calls os.Exit() with exit code 1. // It calls os.Exit() with exit code 1.
func Fatalln(args ...interface{}) { func Fatalln(args ...any) {
grpclog.Logger.Fatalln(args...) grpclog.Logger.Fatalln(args...)
// Make sure fatal logs will exit. // Make sure fatal logs will exit.
os.Exit(1) os.Exit(1)
@ -113,20 +113,20 @@ func Fatalln(args ...interface{}) {
// Print prints to the logger. Arguments are handled in the manner of fmt.Print. // Print prints to the logger. Arguments are handled in the manner of fmt.Print.
// //
// Deprecated: use Info. // Deprecated: use Info.
func Print(args ...interface{}) { func Print(args ...any) {
grpclog.Logger.Info(args...) grpclog.Logger.Info(args...)
} }
// Printf prints to the logger. Arguments are handled in the manner of fmt.Printf. // Printf prints to the logger. Arguments are handled in the manner of fmt.Printf.
// //
// Deprecated: use Infof. // Deprecated: use Infof.
func Printf(format string, args ...interface{}) { func Printf(format string, args ...any) {
grpclog.Logger.Infof(format, args...) grpclog.Logger.Infof(format, args...)
} }
// Println prints to the logger. Arguments are handled in the manner of fmt.Println. // Println prints to the logger. Arguments are handled in the manner of fmt.Println.
// //
// Deprecated: use Infoln. // Deprecated: use Infoln.
func Println(args ...interface{}) { func Println(args ...any) {
grpclog.Logger.Infoln(args...) grpclog.Logger.Infoln(args...)
} }


@ -24,12 +24,12 @@ import "google.golang.org/grpc/internal/grpclog"
// //
// Deprecated: use LoggerV2. // Deprecated: use LoggerV2.
type Logger interface { type Logger interface {
Fatal(args ...interface{}) Fatal(args ...any)
Fatalf(format string, args ...interface{}) Fatalf(format string, args ...any)
Fatalln(args ...interface{}) Fatalln(args ...any)
Print(args ...interface{}) Print(args ...any)
Printf(format string, args ...interface{}) Printf(format string, args ...any)
Println(args ...interface{}) Println(args ...any)
} }
// SetLogger sets the logger that is used in grpc. Call only from // SetLogger sets the logger that is used in grpc. Call only from
@ -45,39 +45,39 @@ type loggerWrapper struct {
Logger Logger
} }
func (g *loggerWrapper) Info(args ...interface{}) { func (g *loggerWrapper) Info(args ...any) {
g.Logger.Print(args...) g.Logger.Print(args...)
} }
func (g *loggerWrapper) Infoln(args ...interface{}) { func (g *loggerWrapper) Infoln(args ...any) {
g.Logger.Println(args...) g.Logger.Println(args...)
} }
func (g *loggerWrapper) Infof(format string, args ...interface{}) { func (g *loggerWrapper) Infof(format string, args ...any) {
g.Logger.Printf(format, args...) g.Logger.Printf(format, args...)
} }
func (g *loggerWrapper) Warning(args ...interface{}) { func (g *loggerWrapper) Warning(args ...any) {
g.Logger.Print(args...) g.Logger.Print(args...)
} }
func (g *loggerWrapper) Warningln(args ...interface{}) { func (g *loggerWrapper) Warningln(args ...any) {
g.Logger.Println(args...) g.Logger.Println(args...)
} }
func (g *loggerWrapper) Warningf(format string, args ...interface{}) { func (g *loggerWrapper) Warningf(format string, args ...any) {
g.Logger.Printf(format, args...) g.Logger.Printf(format, args...)
} }
func (g *loggerWrapper) Error(args ...interface{}) { func (g *loggerWrapper) Error(args ...any) {
g.Logger.Print(args...) g.Logger.Print(args...)
} }
func (g *loggerWrapper) Errorln(args ...interface{}) { func (g *loggerWrapper) Errorln(args ...any) {
g.Logger.Println(args...) g.Logger.Println(args...)
} }
func (g *loggerWrapper) Errorf(format string, args ...interface{}) { func (g *loggerWrapper) Errorf(format string, args ...any) {
g.Logger.Printf(format, args...) g.Logger.Printf(format, args...)
} }


@ -33,35 +33,35 @@ import (
// LoggerV2 does underlying logging work for grpclog. // LoggerV2 does underlying logging work for grpclog.
type LoggerV2 interface { type LoggerV2 interface {
// Info logs to INFO log. Arguments are handled in the manner of fmt.Print. // Info logs to INFO log. Arguments are handled in the manner of fmt.Print.
Info(args ...interface{}) Info(args ...any)
// Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println. // Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println.
Infoln(args ...interface{}) Infoln(args ...any)
// Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf. // Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf.
Infof(format string, args ...interface{}) Infof(format string, args ...any)
// Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print. // Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print.
Warning(args ...interface{}) Warning(args ...any)
// Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println. // Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println.
Warningln(args ...interface{}) Warningln(args ...any)
// Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf. // Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf.
Warningf(format string, args ...interface{}) Warningf(format string, args ...any)
// Error logs to ERROR log. Arguments are handled in the manner of fmt.Print. // Error logs to ERROR log. Arguments are handled in the manner of fmt.Print.
Error(args ...interface{}) Error(args ...any)
// Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println. // Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
Errorln(args ...interface{}) Errorln(args ...any)
// Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf. // Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
Errorf(format string, args ...interface{}) Errorf(format string, args ...any)
// Fatal logs to ERROR log. Arguments are handled in the manner of fmt.Print. // Fatal logs to ERROR log. Arguments are handled in the manner of fmt.Print.
// gRPC ensures that all Fatal logs will exit with os.Exit(1). // gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code. // Implementations may also call os.Exit() with a non-zero exit code.
Fatal(args ...interface{}) Fatal(args ...any)
// Fatalln logs to ERROR log. Arguments are handled in the manner of fmt.Println. // Fatalln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
// gRPC ensures that all Fatal logs will exit with os.Exit(1). // gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code. // Implementations may also call os.Exit() with a non-zero exit code.
Fatalln(args ...interface{}) Fatalln(args ...any)
// Fatalf logs to ERROR log. Arguments are handled in the manner of fmt.Printf. // Fatalf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
// gRPC ensures that all Fatal logs will exit with os.Exit(1). // gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code. // Implementations may also call os.Exit() with a non-zero exit code.
Fatalf(format string, args ...interface{}) Fatalf(format string, args ...any)
// V reports whether verbosity level l is at least the requested verbose level. // V reports whether verbosity level l is at least the requested verbose level.
V(l int) bool V(l int) bool
} }
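A hedged sketch of installing a custom LoggerV2 through the package's own constructors (the writer choices are illustrative); SetLoggerV2 should run before any other gRPC call:

package main

import (
	"io"
	"os"

	"google.golang.org/grpc/grpclog"
)

func init() {
	// Silence INFO, keep WARNING and ERROR on stderr, verbosity 0.
	grpclog.SetLoggerV2(grpclog.NewLoggerV2WithVerbosity(io.Discard, os.Stderr, os.Stderr, 0))
}

func main() {
	grpclog.Warning("custom LoggerV2 installed")
}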
@ -182,53 +182,53 @@ func (g *loggerT) output(severity int, s string) {
g.m[severity].Output(2, string(b)) g.m[severity].Output(2, string(b))
} }
func (g *loggerT) Info(args ...interface{}) { func (g *loggerT) Info(args ...any) {
g.output(infoLog, fmt.Sprint(args...)) g.output(infoLog, fmt.Sprint(args...))
} }
func (g *loggerT) Infoln(args ...interface{}) { func (g *loggerT) Infoln(args ...any) {
g.output(infoLog, fmt.Sprintln(args...)) g.output(infoLog, fmt.Sprintln(args...))
} }
func (g *loggerT) Infof(format string, args ...interface{}) { func (g *loggerT) Infof(format string, args ...any) {
g.output(infoLog, fmt.Sprintf(format, args...)) g.output(infoLog, fmt.Sprintf(format, args...))
} }
func (g *loggerT) Warning(args ...interface{}) { func (g *loggerT) Warning(args ...any) {
g.output(warningLog, fmt.Sprint(args...)) g.output(warningLog, fmt.Sprint(args...))
} }
func (g *loggerT) Warningln(args ...interface{}) { func (g *loggerT) Warningln(args ...any) {
g.output(warningLog, fmt.Sprintln(args...)) g.output(warningLog, fmt.Sprintln(args...))
} }
func (g *loggerT) Warningf(format string, args ...interface{}) { func (g *loggerT) Warningf(format string, args ...any) {
g.output(warningLog, fmt.Sprintf(format, args...)) g.output(warningLog, fmt.Sprintf(format, args...))
} }
func (g *loggerT) Error(args ...interface{}) { func (g *loggerT) Error(args ...any) {
g.output(errorLog, fmt.Sprint(args...)) g.output(errorLog, fmt.Sprint(args...))
} }
func (g *loggerT) Errorln(args ...interface{}) { func (g *loggerT) Errorln(args ...any) {
g.output(errorLog, fmt.Sprintln(args...)) g.output(errorLog, fmt.Sprintln(args...))
} }
func (g *loggerT) Errorf(format string, args ...interface{}) { func (g *loggerT) Errorf(format string, args ...any) {
g.output(errorLog, fmt.Sprintf(format, args...)) g.output(errorLog, fmt.Sprintf(format, args...))
} }
func (g *loggerT) Fatal(args ...interface{}) { func (g *loggerT) Fatal(args ...any) {
g.output(fatalLog, fmt.Sprint(args...)) g.output(fatalLog, fmt.Sprint(args...))
os.Exit(1) os.Exit(1)
} }
func (g *loggerT) Fatalln(args ...interface{}) { func (g *loggerT) Fatalln(args ...any) {
g.output(fatalLog, fmt.Sprintln(args...)) g.output(fatalLog, fmt.Sprintln(args...))
os.Exit(1) os.Exit(1)
} }
func (g *loggerT) Fatalf(format string, args ...interface{}) { func (g *loggerT) Fatalf(format string, args ...any) {
g.output(fatalLog, fmt.Sprintf(format, args...)) g.output(fatalLog, fmt.Sprintf(format, args...))
os.Exit(1) os.Exit(1)
} }
@ -248,11 +248,11 @@ func (g *loggerT) V(l int) bool {
type DepthLoggerV2 interface { type DepthLoggerV2 interface {
LoggerV2 LoggerV2
// InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Println. // InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Println.
InfoDepth(depth int, args ...interface{}) InfoDepth(depth int, args ...any)
// WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Println. // WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Println.
WarningDepth(depth int, args ...interface{}) WarningDepth(depth int, args ...any)
// ErrorDepth logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Println. // ErrorDepth logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Println.
ErrorDepth(depth int, args ...interface{}) ErrorDepth(depth int, args ...any)
// FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Println. // FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Println.
FatalDepth(depth int, args ...interface{}) FatalDepth(depth int, args ...any)
} }


@ -23,7 +23,7 @@ import (
) )
// UnaryInvoker is called by UnaryClientInterceptor to complete RPCs. // UnaryInvoker is called by UnaryClientInterceptor to complete RPCs.
type UnaryInvoker func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error type UnaryInvoker func(ctx context.Context, method string, req, reply any, cc *ClientConn, opts ...CallOption) error
// UnaryClientInterceptor intercepts the execution of a unary RPC on the client. // UnaryClientInterceptor intercepts the execution of a unary RPC on the client.
// Unary interceptors can be specified as a DialOption, using // Unary interceptors can be specified as a DialOption, using
@ -40,7 +40,7 @@ type UnaryInvoker func(ctx context.Context, method string, req, reply interface{
// defaults from the ClientConn as well as per-call options. // defaults from the ClientConn as well as per-call options.
// //
// The returned error must be compatible with the status package. // The returned error must be compatible with the status package.
type UnaryClientInterceptor func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error type UnaryClientInterceptor func(ctx context.Context, method string, req, reply any, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error
// Streamer is called by StreamClientInterceptor to create a ClientStream. // Streamer is called by StreamClientInterceptor to create a ClientStream.
type Streamer func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error) type Streamer func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error)
@ -66,7 +66,7 @@ type StreamClientInterceptor func(ctx context.Context, desc *StreamDesc, cc *Cli
// server side. All per-rpc information may be mutated by the interceptor. // server side. All per-rpc information may be mutated by the interceptor.
type UnaryServerInfo struct { type UnaryServerInfo struct {
// Server is the service implementation the user provides. This is read-only. // Server is the service implementation the user provides. This is read-only.
Server interface{} Server any
// FullMethod is the full RPC method string, i.e., /package.service/method. // FullMethod is the full RPC method string, i.e., /package.service/method.
FullMethod string FullMethod string
} }
@ -78,13 +78,13 @@ type UnaryServerInfo struct {
// status package, or be one of the context errors. Otherwise, gRPC will use // status package, or be one of the context errors. Otherwise, gRPC will use
// codes.Unknown as the status code and err.Error() as the status message of the // codes.Unknown as the status code and err.Error() as the status message of the
// RPC. // RPC.
type UnaryHandler func(ctx context.Context, req interface{}) (interface{}, error) type UnaryHandler func(ctx context.Context, req any) (any, error)
// UnaryServerInterceptor provides a hook to intercept the execution of a unary RPC on the server. info // UnaryServerInterceptor provides a hook to intercept the execution of a unary RPC on the server. info
// contains all the information of this RPC the interceptor can operate on. And handler is the wrapper // contains all the information of this RPC the interceptor can operate on. And handler is the wrapper
// of the service method implementation. It is the responsibility of the interceptor to invoke handler // of the service method implementation. It is the responsibility of the interceptor to invoke handler
// to complete the RPC. // to complete the RPC.
type UnaryServerInterceptor func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (resp interface{}, err error) type UnaryServerInterceptor func(ctx context.Context, req any, info *UnaryServerInfo, handler UnaryHandler) (resp any, err error)
// StreamServerInfo consists of various information about a streaming RPC on // StreamServerInfo consists of various information about a streaming RPC on
// server side. All per-rpc information may be mutated by the interceptor. // server side. All per-rpc information may be mutated by the interceptor.
@ -101,4 +101,4 @@ type StreamServerInfo struct {
// info contains all the information of this RPC the interceptor can operate on. And handler is the // info contains all the information of this RPC the interceptor can operate on. And handler is the
// service method implementation. It is the responsibility of the interceptor to invoke handler to // service method implementation. It is the responsibility of the interceptor to invoke handler to
// complete the RPC. // complete the RPC.
type StreamServerInterceptor func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error type StreamServerInterceptor func(srv any, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error
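To make the signatures above concrete, here is a hedged sketch of a logging unary server interceptor (the function name and log text are invented):

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
)

// loggingUnary satisfies grpc.UnaryServerInterceptor: it logs the full method
// name, delegates to the real handler, and logs any resulting error.
func loggingUnary(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
	log.Printf("unary call %s", info.FullMethod)
	resp, err := handler(ctx, req)
	if err != nil {
		log.Printf("%s failed: %v", info.FullMethod, err)
	}
	return resp, err
}

func main() {
	// Installed once per server; streaming RPCs would use grpc.StreamInterceptor.
	_ = grpc.NewServer(grpc.UnaryInterceptor(loggingUnary))
}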


@ -200,8 +200,8 @@ func (gsb *Balancer) ExitIdle() {
} }
} }
// UpdateSubConnState forwards the update to the appropriate child. // updateSubConnState forwards the update to the appropriate child.
func (gsb *Balancer) UpdateSubConnState(sc balancer.SubConn, state balancer.SubConnState) { func (gsb *Balancer) updateSubConnState(sc balancer.SubConn, state balancer.SubConnState, cb func(balancer.SubConnState)) {
gsb.currentMu.Lock() gsb.currentMu.Lock()
defer gsb.currentMu.Unlock() defer gsb.currentMu.Unlock()
gsb.mu.Lock() gsb.mu.Lock()
@ -214,13 +214,26 @@ func (gsb *Balancer) UpdateSubConnState(sc balancer.SubConn, state balancer.SubC
} else if gsb.balancerPending != nil && gsb.balancerPending.subconns[sc] { } else if gsb.balancerPending != nil && gsb.balancerPending.subconns[sc] {
balToUpdate = gsb.balancerPending balToUpdate = gsb.balancerPending
} }
gsb.mu.Unlock()
if balToUpdate == nil { if balToUpdate == nil {
// SubConn belonged to a stale lb policy that has not yet fully closed, // SubConn belonged to a stale lb policy that has not yet fully closed,
// or the balancer was already closed. // or the balancer was already closed.
gsb.mu.Unlock()
return return
} }
balToUpdate.UpdateSubConnState(sc, state) if state.ConnectivityState == connectivity.Shutdown {
delete(balToUpdate.subconns, sc)
}
gsb.mu.Unlock()
if cb != nil {
cb(state)
} else {
balToUpdate.UpdateSubConnState(sc, state)
}
}
// UpdateSubConnState forwards the update to the appropriate child.
func (gsb *Balancer) UpdateSubConnState(sc balancer.SubConn, state balancer.SubConnState) {
gsb.updateSubConnState(sc, state, nil)
} }
// Close closes any active child balancers. // Close closes any active child balancers.
@ -242,7 +255,7 @@ func (gsb *Balancer) Close() {
// //
// It implements the balancer.ClientConn interface and is passed down in that // It implements the balancer.ClientConn interface and is passed down in that
// capacity to the wrapped balancer. It maintains a set of subConns created by // capacity to the wrapped balancer. It maintains a set of subConns created by
// the wrapped balancer and calls from the latter to create/update/remove // the wrapped balancer and calls from the latter to create/update/shutdown
// SubConns update this set before being forwarded to the parent ClientConn. // SubConns update this set before being forwarded to the parent ClientConn.
// State updates from the wrapped balancer can result in invocation of the // State updates from the wrapped balancer can result in invocation of the
// graceful switch logic. // graceful switch logic.
@ -254,21 +267,10 @@ type balancerWrapper struct {
subconns map[balancer.SubConn]bool // subconns created by this balancer subconns map[balancer.SubConn]bool // subconns created by this balancer
} }
func (bw *balancerWrapper) UpdateSubConnState(sc balancer.SubConn, state balancer.SubConnState) { // Close closes the underlying LB policy and shuts down the subconns it
if state.ConnectivityState == connectivity.Shutdown { // created. bw must not be referenced via balancerCurrent or balancerPending in
bw.gsb.mu.Lock() // gsb when called. gsb.mu must not be held. Does not panic with a nil
delete(bw.subconns, sc) // receiver.
bw.gsb.mu.Unlock()
}
// There is no need to protect this read with a mutex, as the write to the
// Balancer field happens in SwitchTo, which completes before this can be
// called.
bw.Balancer.UpdateSubConnState(sc, state)
}
// Close closes the underlying LB policy and removes the subconns it created. bw
// must not be referenced via balancerCurrent or balancerPending in gsb when
// called. gsb.mu must not be held. Does not panic with a nil receiver.
func (bw *balancerWrapper) Close() { func (bw *balancerWrapper) Close() {
// before Close is called. // before Close is called.
if bw == nil { if bw == nil {
@ -281,7 +283,7 @@ func (bw *balancerWrapper) Close() {
bw.Balancer.Close() bw.Balancer.Close()
bw.gsb.mu.Lock() bw.gsb.mu.Lock()
for sc := range bw.subconns { for sc := range bw.subconns {
bw.gsb.cc.RemoveSubConn(sc) sc.Shutdown()
} }
bw.gsb.mu.Unlock() bw.gsb.mu.Unlock()
} }
@ -335,13 +337,16 @@ func (bw *balancerWrapper) NewSubConn(addrs []resolver.Address, opts balancer.Ne
} }
bw.gsb.mu.Unlock() bw.gsb.mu.Unlock()
var sc balancer.SubConn
oldListener := opts.StateListener
opts.StateListener = func(state balancer.SubConnState) { bw.gsb.updateSubConnState(sc, state, oldListener) }
sc, err := bw.gsb.cc.NewSubConn(addrs, opts) sc, err := bw.gsb.cc.NewSubConn(addrs, opts)
if err != nil { if err != nil {
return nil, err return nil, err
} }
bw.gsb.mu.Lock() bw.gsb.mu.Lock()
if !bw.gsb.balancerCurrentOrPending(bw) { // balancer was closed during this call if !bw.gsb.balancerCurrentOrPending(bw) { // balancer was closed during this call
bw.gsb.cc.RemoveSubConn(sc) sc.Shutdown()
bw.gsb.mu.Unlock() bw.gsb.mu.Unlock()
return nil, fmt.Errorf("%T at address %p that called NewSubConn is deleted", bw, bw) return nil, fmt.Errorf("%T at address %p that called NewSubConn is deleted", bw, bw)
} }
@ -360,13 +365,9 @@ func (bw *balancerWrapper) ResolveNow(opts resolver.ResolveNowOptions) {
} }
func (bw *balancerWrapper) RemoveSubConn(sc balancer.SubConn) { func (bw *balancerWrapper) RemoveSubConn(sc balancer.SubConn) {
bw.gsb.mu.Lock() // Note: existing third party balancers may call this, so it must remain
if !bw.gsb.balancerCurrentOrPending(bw) { // until RemoveSubConn is fully removed.
bw.gsb.mu.Unlock() sc.Shutdown()
return
}
bw.gsb.mu.Unlock()
bw.gsb.cc.RemoveSubConn(sc)
} }
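The StateListener wrapping seen in NewSubConn above is a general pattern; a hedged standalone sketch of how any custom LB policy might do the same (the function name and log text are invented):

package lbsketch

import (
	"log"

	"google.golang.org/grpc/balancer"
	"google.golang.org/grpc/resolver"
)

// newTrackedSubConn wraps the caller-supplied StateListener so this policy can
// observe every connectivity transition before forwarding it on.
func newTrackedSubConn(cc balancer.ClientConn, addrs []resolver.Address, opts balancer.NewSubConnOptions) (balancer.SubConn, error) {
	prev := opts.StateListener
	opts.StateListener = func(state balancer.SubConnState) {
		log.Printf("subconn state -> %v", state.ConnectivityState)
		if prev != nil {
			prev(state)
		}
	}
	return cc.NewSubConn(addrs, opts)
}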
func (bw *balancerWrapper) UpdateAddresses(sc balancer.SubConn, addrs []resolver.Address) { func (bw *balancerWrapper) UpdateAddresses(sc balancer.SubConn, addrs []resolver.Address) {


@ -25,7 +25,7 @@ import (
// Parser converts loads from metadata into a concrete type. // Parser converts loads from metadata into a concrete type.
type Parser interface { type Parser interface {
// Parse parses loads from metadata. // Parse parses loads from metadata.
Parse(md metadata.MD) interface{} Parse(md metadata.MD) any
} }
var parser Parser var parser Parser
@ -38,7 +38,7 @@ func SetParser(lr Parser) {
} }
// Parse calls parser.Read(). // Parse calls parser.Read().
func Parse(md metadata.MD) interface{} { func Parse(md metadata.MD) any {
if parser == nil { if parser == nil {
return nil return nil
} }


@ -230,7 +230,7 @@ type ClientMessage struct {
OnClientSide bool OnClientSide bool
// Message can be a proto.Message or []byte. Other message formats are not // Message can be a proto.Message or []byte. Other message formats are not
// supported. // supported.
Message interface{} Message any
} }
func (c *ClientMessage) toProto() *binlogpb.GrpcLogEntry { func (c *ClientMessage) toProto() *binlogpb.GrpcLogEntry {
@ -270,7 +270,7 @@ type ServerMessage struct {
OnClientSide bool OnClientSide bool
// Message can be a proto.Message or []byte. Other message formats are not // Message can be a proto.Message or []byte. Other message formats are not
// supported. // supported.
Message interface{} Message any
} }
func (c *ServerMessage) toProto() *binlogpb.GrpcLogEntry { func (c *ServerMessage) toProto() *binlogpb.GrpcLogEntry {


@ -28,25 +28,25 @@ import "sync"
// the underlying mutex used for synchronization. // the underlying mutex used for synchronization.
// //
// Unbounded supports values of any type to be stored in it by using a channel // Unbounded supports values of any type to be stored in it by using a channel
// of `interface{}`. This means that a call to Put() incurs an extra memory // of `any`. This means that a call to Put() incurs an extra memory allocation,
// allocation, and also that users need a type assertion while reading. For // and also that users need a type assertion while reading. For performance
// performance critical code paths, using Unbounded is strongly discouraged and // critical code paths, using Unbounded is strongly discouraged and defining a
// defining a new type specific implementation of this buffer is preferred. See // new type specific implementation of this buffer is preferred. See
// internal/transport/transport.go for an example of this. // internal/transport/transport.go for an example of this.
type Unbounded struct { type Unbounded struct {
c chan interface{} c chan any
closed bool closed bool
mu sync.Mutex mu sync.Mutex
backlog []interface{} backlog []any
} }
// NewUnbounded returns a new instance of Unbounded. // NewUnbounded returns a new instance of Unbounded.
func NewUnbounded() *Unbounded { func NewUnbounded() *Unbounded {
return &Unbounded{c: make(chan interface{}, 1)} return &Unbounded{c: make(chan any, 1)}
} }
// Put adds t to the unbounded buffer. // Put adds t to the unbounded buffer.
func (b *Unbounded) Put(t interface{}) { func (b *Unbounded) Put(t any) {
b.mu.Lock() b.mu.Lock()
defer b.mu.Unlock() defer b.mu.Unlock()
if b.closed { if b.closed {
@ -89,7 +89,7 @@ func (b *Unbounded) Load() {
// //
// If the unbounded buffer is closed, the read channel returned by this method // If the unbounded buffer is closed, the read channel returned by this method
// is closed. // is closed.
func (b *Unbounded) Get() <-chan interface{} { func (b *Unbounded) Get() <-chan any {
return b.c return b.c
} }
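As the comment above recommends, performance-critical paths define a type-specific variant instead of storing `any`; a hedged sketch of that pattern (the item type is invented):

package itembuffer

import "sync"

// item stands in for whatever concrete type the hot path produces.
type item struct{ seq int }

// Unbounded mirrors the Put/Load/Get contract above, but specialized to item
// so readers need no type assertion and values are not boxed into an interface.
type Unbounded struct {
	c       chan item
	mu      sync.Mutex
	backlog []item
}

func New() *Unbounded { return &Unbounded{c: make(chan item, 1)} }

func (b *Unbounded) Put(t item) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if len(b.backlog) == 0 {
		select {
		case b.c <- t:
			return
		default:
		}
	}
	b.backlog = append(b.backlog, t)
}

// Load moves one backlog entry into the read channel; callers invoke it after
// consuming a value received from Get().
func (b *Unbounded) Load() {
	b.mu.Lock()
	defer b.mu.Unlock()
	if len(b.backlog) > 0 {
		select {
		case b.c <- b.backlog[0]:
			b.backlog = b.backlog[1:]
		default:
		}
	}
}

func (b *Unbounded) Get() <-chan item { return b.c }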


@ -24,9 +24,7 @@
package channelz package channelz
import ( import (
"context"
"errors" "errors"
"fmt"
"sort" "sort"
"sync" "sync"
"sync/atomic" "sync/atomic"
@ -40,8 +38,11 @@ const (
) )
var ( var (
db dbWrapper // IDGen is the global channelz entity ID generator. It should not be used
idGen idGenerator // outside this package except by tests.
IDGen IDGenerator
db dbWrapper
// EntryPerPage defines the number of channelz entries to be shown on a web page. // EntryPerPage defines the number of channelz entries to be shown on a web page.
EntryPerPage = int64(50) EntryPerPage = int64(50)
curState int32 curState int32
@ -52,14 +53,14 @@ var (
func TurnOn() { func TurnOn() {
if !IsOn() { if !IsOn() {
db.set(newChannelMap()) db.set(newChannelMap())
idGen.reset() IDGen.Reset()
atomic.StoreInt32(&curState, 1) atomic.StoreInt32(&curState, 1)
} }
} }
// IsOn returns whether channelz data collection is on. // IsOn returns whether channelz data collection is on.
func IsOn() bool { func IsOn() bool {
return atomic.CompareAndSwapInt32(&curState, 1, 1) return atomic.LoadInt32(&curState) == 1
} }
// SetMaxTraceEntry sets maximum number of trace entry per entity (i.e. channel/subchannel). // SetMaxTraceEntry sets maximum number of trace entry per entity (i.e. channel/subchannel).
@ -97,43 +98,6 @@ func (d *dbWrapper) get() *channelMap {
return d.DB return d.DB
} }
// NewChannelzStorageForTesting initializes channelz data storage and id
// generator for testing purposes.
//
// Returns a cleanup function to be invoked by the test, which waits for up to
// 10s for all channelz state to be reset by the grpc goroutines when those
// entities get closed. This cleanup function helps with ensuring that tests
// don't mess up each other.
func NewChannelzStorageForTesting() (cleanup func() error) {
db.set(newChannelMap())
idGen.reset()
return func() error {
cm := db.get()
if cm == nil {
return nil
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
ticker := time.NewTicker(10 * time.Millisecond)
defer ticker.Stop()
for {
cm.mu.RLock()
topLevelChannels, servers, channels, subChannels, listenSockets, normalSockets := len(cm.topLevelChannels), len(cm.servers), len(cm.channels), len(cm.subChannels), len(cm.listenSockets), len(cm.normalSockets)
cm.mu.RUnlock()
if err := ctx.Err(); err != nil {
return fmt.Errorf("after 10s the channelz map has not been cleaned up yet, topchannels: %d, servers: %d, channels: %d, subchannels: %d, listen sockets: %d, normal sockets: %d", topLevelChannels, servers, channels, subChannels, listenSockets, normalSockets)
}
if topLevelChannels == 0 && servers == 0 && channels == 0 && subChannels == 0 && listenSockets == 0 && normalSockets == 0 {
return nil
}
<-ticker.C
}
}
}
// GetTopChannels returns a slice of top channel's ChannelMetric, along with a // GetTopChannels returns a slice of top channel's ChannelMetric, along with a
// boolean indicating whether there's more top channels to be queried for. // boolean indicating whether there's more top channels to be queried for.
// //
@ -193,7 +157,7 @@ func GetServer(id int64) *ServerMetric {
// //
// If channelz is not turned ON, the channelz database is not mutated. // If channelz is not turned ON, the channelz database is not mutated.
func RegisterChannel(c Channel, pid *Identifier, ref string) *Identifier { func RegisterChannel(c Channel, pid *Identifier, ref string) *Identifier {
id := idGen.genID() id := IDGen.genID()
var parent int64 var parent int64
isTopChannel := true isTopChannel := true
if pid != nil { if pid != nil {
@ -229,7 +193,7 @@ func RegisterSubChannel(c Channel, pid *Identifier, ref string) (*Identifier, er
if pid == nil { if pid == nil {
return nil, errors.New("a SubChannel's parent id cannot be nil") return nil, errors.New("a SubChannel's parent id cannot be nil")
} }
id := idGen.genID() id := IDGen.genID()
if !IsOn() { if !IsOn() {
return newIdentifer(RefSubChannel, id, pid), nil return newIdentifer(RefSubChannel, id, pid), nil
} }
@ -251,7 +215,7 @@ func RegisterSubChannel(c Channel, pid *Identifier, ref string) (*Identifier, er
// //
// If channelz is not turned ON, the channelz database is not mutated. // If channelz is not turned ON, the channelz database is not mutated.
func RegisterServer(s Server, ref string) *Identifier { func RegisterServer(s Server, ref string) *Identifier {
id := idGen.genID() id := IDGen.genID()
if !IsOn() { if !IsOn() {
return newIdentifer(RefServer, id, nil) return newIdentifer(RefServer, id, nil)
} }
@ -277,7 +241,7 @@ func RegisterListenSocket(s Socket, pid *Identifier, ref string) (*Identifier, e
if pid == nil { if pid == nil {
return nil, errors.New("a ListenSocket's parent id cannot be 0") return nil, errors.New("a ListenSocket's parent id cannot be 0")
} }
id := idGen.genID() id := IDGen.genID()
if !IsOn() { if !IsOn() {
return newIdentifer(RefListenSocket, id, pid), nil return newIdentifer(RefListenSocket, id, pid), nil
} }
@ -297,7 +261,7 @@ func RegisterNormalSocket(s Socket, pid *Identifier, ref string) (*Identifier, e
if pid == nil { if pid == nil {
return nil, errors.New("a NormalSocket's parent id cannot be 0") return nil, errors.New("a NormalSocket's parent id cannot be 0")
} }
id := idGen.genID() id := IDGen.genID()
if !IsOn() { if !IsOn() {
return newIdentifer(RefNormalSocket, id, pid), nil return newIdentifer(RefNormalSocket, id, pid), nil
} }
@ -776,14 +740,17 @@ func (c *channelMap) GetServer(id int64) *ServerMetric {
return sm return sm
} }
type idGenerator struct { // IDGenerator is an incrementing atomic that tracks IDs for channelz entities.
type IDGenerator struct {
id int64 id int64
} }
func (i *idGenerator) reset() { // Reset resets the generated ID back to zero. Should only be used at
// initialization or by tests sensitive to the ID number.
func (i *IDGenerator) Reset() {
atomic.StoreInt64(&i.id, 0) atomic.StoreInt64(&i.id, 0)
} }
func (i *idGenerator) genID() int64 { func (i *IDGenerator) genID() int64 {
return atomic.AddInt64(&i.id, 1) return atomic.AddInt64(&i.id, 1)
} }


@ -31,7 +31,7 @@ func withParens(id *Identifier) string {
} }
// Info logs and adds a trace event if channelz is on. // Info logs and adds a trace event if channelz is on.
func Info(l grpclog.DepthLoggerV2, id *Identifier, args ...interface{}) { func Info(l grpclog.DepthLoggerV2, id *Identifier, args ...any) {
AddTraceEvent(l, id, 1, &TraceEventDesc{ AddTraceEvent(l, id, 1, &TraceEventDesc{
Desc: fmt.Sprint(args...), Desc: fmt.Sprint(args...),
Severity: CtInfo, Severity: CtInfo,
@ -39,7 +39,7 @@ func Info(l grpclog.DepthLoggerV2, id *Identifier, args ...interface{}) {
} }
// Infof logs and adds a trace event if channelz is on. // Infof logs and adds a trace event if channelz is on.
func Infof(l grpclog.DepthLoggerV2, id *Identifier, format string, args ...interface{}) { func Infof(l grpclog.DepthLoggerV2, id *Identifier, format string, args ...any) {
AddTraceEvent(l, id, 1, &TraceEventDesc{ AddTraceEvent(l, id, 1, &TraceEventDesc{
Desc: fmt.Sprintf(format, args...), Desc: fmt.Sprintf(format, args...),
Severity: CtInfo, Severity: CtInfo,
@ -47,7 +47,7 @@ func Infof(l grpclog.DepthLoggerV2, id *Identifier, format string, args ...inter
} }
// Warning logs and adds a trace event if channelz is on. // Warning logs and adds a trace event if channelz is on.
func Warning(l grpclog.DepthLoggerV2, id *Identifier, args ...interface{}) { func Warning(l grpclog.DepthLoggerV2, id *Identifier, args ...any) {
AddTraceEvent(l, id, 1, &TraceEventDesc{ AddTraceEvent(l, id, 1, &TraceEventDesc{
Desc: fmt.Sprint(args...), Desc: fmt.Sprint(args...),
Severity: CtWarning, Severity: CtWarning,
@ -55,7 +55,7 @@ func Warning(l grpclog.DepthLoggerV2, id *Identifier, args ...interface{}) {
} }
// Warningf logs and adds a trace event if channelz is on. // Warningf logs and adds a trace event if channelz is on.
func Warningf(l grpclog.DepthLoggerV2, id *Identifier, format string, args ...interface{}) { func Warningf(l grpclog.DepthLoggerV2, id *Identifier, format string, args ...any) {
AddTraceEvent(l, id, 1, &TraceEventDesc{ AddTraceEvent(l, id, 1, &TraceEventDesc{
Desc: fmt.Sprintf(format, args...), Desc: fmt.Sprintf(format, args...),
Severity: CtWarning, Severity: CtWarning,
@ -63,7 +63,7 @@ func Warningf(l grpclog.DepthLoggerV2, id *Identifier, format string, args ...in
} }
// Error logs and adds a trace event if channelz is on. // Error logs and adds a trace event if channelz is on.
func Error(l grpclog.DepthLoggerV2, id *Identifier, args ...interface{}) { func Error(l grpclog.DepthLoggerV2, id *Identifier, args ...any) {
AddTraceEvent(l, id, 1, &TraceEventDesc{ AddTraceEvent(l, id, 1, &TraceEventDesc{
Desc: fmt.Sprint(args...), Desc: fmt.Sprint(args...),
Severity: CtError, Severity: CtError,
@ -71,7 +71,7 @@ func Error(l grpclog.DepthLoggerV2, id *Identifier, args ...interface{}) {
} }
// Errorf logs and adds a trace event if channelz is on. // Errorf logs and adds a trace event if channelz is on.
func Errorf(l grpclog.DepthLoggerV2, id *Identifier, format string, args ...interface{}) { func Errorf(l grpclog.DepthLoggerV2, id *Identifier, format string, args ...any) {
AddTraceEvent(l, id, 1, &TraceEventDesc{ AddTraceEvent(l, id, 1, &TraceEventDesc{
Desc: fmt.Sprintf(format, args...), Desc: fmt.Sprintf(format, args...),
Severity: CtError, Severity: CtError,


@ -628,6 +628,7 @@ type tracedChannel interface {
type channelTrace struct { type channelTrace struct {
cm *channelMap cm *channelMap
clearCalled bool
createdTime time.Time createdTime time.Time
eventCount int64 eventCount int64
mu sync.Mutex mu sync.Mutex
@ -656,6 +657,10 @@ func (c *channelTrace) append(e *TraceEvent) {
} }
func (c *channelTrace) clear() { func (c *channelTrace) clear() {
if c.clearCalled {
return
}
c.clearCalled = true
c.mu.Lock() c.mu.Lock()
for _, e := range c.events { for _, e := range c.events {
if e.RefID != 0 { if e.RefID != 0 {


@ -23,7 +23,7 @@ import (
) )
// GetSocketOption gets the socket option info of the conn. // GetSocketOption gets the socket option info of the conn.
func GetSocketOption(socket interface{}) *SocketOptionData { func GetSocketOption(socket any) *SocketOptionData {
c, ok := socket.(syscall.Conn) c, ok := socket.(syscall.Conn)
if !ok { if !ok {
return nil return nil


@ -22,6 +22,6 @@
package channelz package channelz
// GetSocketOption gets the socket option info of the conn. // GetSocketOption gets the socket option info of the conn.
func GetSocketOption(c interface{}) *SocketOptionData { func GetSocketOption(c any) *SocketOptionData {
return nil return nil
} }


@ -25,12 +25,12 @@ import (
type requestInfoKey struct{} type requestInfoKey struct{}
// NewRequestInfoContext creates a context with ri. // NewRequestInfoContext creates a context with ri.
func NewRequestInfoContext(ctx context.Context, ri interface{}) context.Context { func NewRequestInfoContext(ctx context.Context, ri any) context.Context {
return context.WithValue(ctx, requestInfoKey{}, ri) return context.WithValue(ctx, requestInfoKey{}, ri)
} }
// RequestInfoFromContext extracts the RequestInfo from ctx. // RequestInfoFromContext extracts the RequestInfo from ctx.
func RequestInfoFromContext(ctx context.Context) interface{} { func RequestInfoFromContext(ctx context.Context) any {
return ctx.Value(requestInfoKey{}) return ctx.Value(requestInfoKey{})
} }
@ -39,11 +39,11 @@ func RequestInfoFromContext(ctx context.Context) interface{} {
type clientHandshakeInfoKey struct{} type clientHandshakeInfoKey struct{}
// ClientHandshakeInfoFromContext extracts the ClientHandshakeInfo from ctx. // ClientHandshakeInfoFromContext extracts the ClientHandshakeInfo from ctx.
func ClientHandshakeInfoFromContext(ctx context.Context) interface{} { func ClientHandshakeInfoFromContext(ctx context.Context) any {
return ctx.Value(clientHandshakeInfoKey{}) return ctx.Value(clientHandshakeInfoKey{})
} }
// NewClientHandshakeInfoContext creates a context with chi. // NewClientHandshakeInfoContext creates a context with chi.
func NewClientHandshakeInfoContext(ctx context.Context, chi interface{}) context.Context { func NewClientHandshakeInfoContext(ctx context.Context, chi any) context.Context {
return context.WithValue(ctx, clientHandshakeInfoKey{}, chi) return context.WithValue(ctx, clientHandshakeInfoKey{}, chi)
} }
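The helpers above follow the standard unexported-context-key pattern; a hedged standalone sketch of the same idea (package, type, and field names are invented):

package requestmeta

import "context"

// key is unexported, so only this package can attach or retrieve the value.
type key struct{}

// Info is an example payload carried on the context.
type Info struct {
	Method string
}

// NewContext returns a copy of ctx carrying info.
func NewContext(ctx context.Context, info Info) context.Context {
	return context.WithValue(ctx, key{}, info)
}

// FromContext extracts the Info, reporting whether one was present.
func FromContext(ctx context.Context) (Info, bool) {
	info, ok := ctx.Value(key{}).(Info)
	return info, ok
}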


@ -37,9 +37,15 @@ var (
// checking which NACKs configs specifying ring sizes > 8*1024*1024 (~8M). // checking which NACKs configs specifying ring sizes > 8*1024*1024 (~8M).
RingHashCap = uint64FromEnv("GRPC_RING_HASH_CAP", 4096, 1, 8*1024*1024) RingHashCap = uint64FromEnv("GRPC_RING_HASH_CAP", 4096, 1, 8*1024*1024)
// PickFirstLBConfig is set if we should support configuration of the // PickFirstLBConfig is set if we should support configuration of the
// pick_first LB policy, which can be enabled by setting the environment // pick_first LB policy.
// variable "GRPC_EXPERIMENTAL_PICKFIRST_LB_CONFIG" to "true". PickFirstLBConfig = boolFromEnv("GRPC_EXPERIMENTAL_PICKFIRST_LB_CONFIG", true)
PickFirstLBConfig = boolFromEnv("GRPC_EXPERIMENTAL_PICKFIRST_LB_CONFIG", false) // LeastRequestLB is set if we should support the least_request_experimental
// LB policy, which can be enabled by setting the environment variable
// "GRPC_EXPERIMENTAL_ENABLE_LEAST_REQUEST" to "true".
LeastRequestLB = boolFromEnv("GRPC_EXPERIMENTAL_ENABLE_LEAST_REQUEST", false)
// ALTSMaxConcurrentHandshakes is the maximum number of concurrent ALTS
// handshakes that can be performed.
ALTSMaxConcurrentHandshakes = uint64FromEnv("GRPC_ALTS_MAX_CONCURRENT_HANDSHAKES", 100, 1, 100)
) )
func boolFromEnv(envVar string, def bool) bool { func boolFromEnv(envVar string, def bool) bool {


@ -30,7 +30,7 @@ var Logger LoggerV2
var DepthLogger DepthLoggerV2 var DepthLogger DepthLoggerV2
// InfoDepth logs to the INFO log at the specified depth. // InfoDepth logs to the INFO log at the specified depth.
func InfoDepth(depth int, args ...interface{}) { func InfoDepth(depth int, args ...any) {
if DepthLogger != nil { if DepthLogger != nil {
DepthLogger.InfoDepth(depth, args...) DepthLogger.InfoDepth(depth, args...)
} else { } else {
@ -39,7 +39,7 @@ func InfoDepth(depth int, args ...interface{}) {
} }
// WarningDepth logs to the WARNING log at the specified depth. // WarningDepth logs to the WARNING log at the specified depth.
func WarningDepth(depth int, args ...interface{}) { func WarningDepth(depth int, args ...any) {
if DepthLogger != nil { if DepthLogger != nil {
DepthLogger.WarningDepth(depth, args...) DepthLogger.WarningDepth(depth, args...)
} else { } else {
@ -48,7 +48,7 @@ func WarningDepth(depth int, args ...interface{}) {
} }
// ErrorDepth logs to the ERROR log at the specified depth. // ErrorDepth logs to the ERROR log at the specified depth.
func ErrorDepth(depth int, args ...interface{}) { func ErrorDepth(depth int, args ...any) {
if DepthLogger != nil { if DepthLogger != nil {
DepthLogger.ErrorDepth(depth, args...) DepthLogger.ErrorDepth(depth, args...)
} else { } else {
@ -57,7 +57,7 @@ func ErrorDepth(depth int, args ...interface{}) {
} }
// FatalDepth logs to the FATAL log at the specified depth. // FatalDepth logs to the FATAL log at the specified depth.
func FatalDepth(depth int, args ...interface{}) { func FatalDepth(depth int, args ...any) {
if DepthLogger != nil { if DepthLogger != nil {
DepthLogger.FatalDepth(depth, args...) DepthLogger.FatalDepth(depth, args...)
} else { } else {
@ -71,35 +71,35 @@ func FatalDepth(depth int, args ...interface{}) {
// is defined here to avoid a circular dependency. // is defined here to avoid a circular dependency.
type LoggerV2 interface { type LoggerV2 interface {
// Info logs to INFO log. Arguments are handled in the manner of fmt.Print. // Info logs to INFO log. Arguments are handled in the manner of fmt.Print.
Info(args ...interface{}) Info(args ...any)
// Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println. // Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println.
Infoln(args ...interface{}) Infoln(args ...any)
// Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf. // Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf.
Infof(format string, args ...interface{}) Infof(format string, args ...any)
// Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print. // Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print.
Warning(args ...interface{}) Warning(args ...any)
// Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println. // Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println.
Warningln(args ...interface{}) Warningln(args ...any)
// Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf. // Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf.
Warningf(format string, args ...interface{}) Warningf(format string, args ...any)
// Error logs to ERROR log. Arguments are handled in the manner of fmt.Print. // Error logs to ERROR log. Arguments are handled in the manner of fmt.Print.
Error(args ...interface{}) Error(args ...any)
// Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println. // Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
Errorln(args ...interface{}) Errorln(args ...any)
// Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf. // Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
Errorf(format string, args ...interface{}) Errorf(format string, args ...any)
// Fatal logs to ERROR log. Arguments are handled in the manner of fmt.Print. // Fatal logs to ERROR log. Arguments are handled in the manner of fmt.Print.
// gRPC ensures that all Fatal logs will exit with os.Exit(1). // gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code. // Implementations may also call os.Exit() with a non-zero exit code.
Fatal(args ...interface{}) Fatal(args ...any)
// Fatalln logs to ERROR log. Arguments are handled in the manner of fmt.Println. // Fatalln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
// gRPC ensures that all Fatal logs will exit with os.Exit(1). // gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code. // Implementations may also call os.Exit() with a non-zero exit code.
Fatalln(args ...interface{}) Fatalln(args ...any)
// Fatalf logs to ERROR log. Arguments are handled in the manner of fmt.Printf. // Fatalf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
// gRPC ensures that all Fatal logs will exit with os.Exit(1). // gRPC ensures that all Fatal logs will exit with os.Exit(1).
// Implementations may also call os.Exit() with a non-zero exit code. // Implementations may also call os.Exit() with a non-zero exit code.
Fatalf(format string, args ...interface{}) Fatalf(format string, args ...any)
// V reports whether verbosity level l is at least the requested verbose level. // V reports whether verbosity level l is at least the requested verbose level.
V(l int) bool V(l int) bool
} }
@ -116,11 +116,11 @@ type LoggerV2 interface {
// later release. // later release.
type DepthLoggerV2 interface { type DepthLoggerV2 interface {
// InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Println. // InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Println.
InfoDepth(depth int, args ...interface{}) InfoDepth(depth int, args ...any)
// WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Println. // WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Println.
WarningDepth(depth int, args ...interface{}) WarningDepth(depth int, args ...any)
// ErrorDepth logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Println. // ErrorDepth logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Println.
ErrorDepth(depth int, args ...interface{}) ErrorDepth(depth int, args ...any)
// FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Println. // FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Println.
FatalDepth(depth int, args ...interface{}) FatalDepth(depth int, args ...any)
} }
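The interface{} to any change in LoggerV2/DepthLoggerV2 above is mechanical. For callers, the usual way to satisfy LoggerV2 is through the public grpclog constructors rather than a hand-written implementation; a minimal sketch, assuming only the public helpers NewLoggerV2, SetLoggerV2 and Infof:

    package main

    import (
        "os"

        "google.golang.org/grpc/grpclog"
    )

    func main() {
        // Route INFO to stdout and WARNING/ERROR to stderr. NewLoggerV2 returns
        // a LoggerV2 backed by the standard library's log package.
        logger := grpclog.NewLoggerV2(os.Stdout, os.Stderr, os.Stderr)

        // SetLoggerV2 installs the logger globally; call it before any other
        // gRPC function, as it is not safe for concurrent use.
        grpclog.SetLoggerV2(logger)

        grpclog.Infof("gRPC logging configured, pid=%d", os.Getpid())
    }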

View File

@ -31,7 +31,7 @@ type PrefixLogger struct {
} }
// Infof does info logging. // Infof does info logging.
func (pl *PrefixLogger) Infof(format string, args ...interface{}) { func (pl *PrefixLogger) Infof(format string, args ...any) {
if pl != nil { if pl != nil {
// Handle nil, so the tests can pass in a nil logger. // Handle nil, so the tests can pass in a nil logger.
format = pl.prefix + format format = pl.prefix + format
@ -42,7 +42,7 @@ func (pl *PrefixLogger) Infof(format string, args ...interface{}) {
} }
// Warningf does warning logging. // Warningf does warning logging.
func (pl *PrefixLogger) Warningf(format string, args ...interface{}) { func (pl *PrefixLogger) Warningf(format string, args ...any) {
if pl != nil { if pl != nil {
format = pl.prefix + format format = pl.prefix + format
pl.logger.WarningDepth(1, fmt.Sprintf(format, args...)) pl.logger.WarningDepth(1, fmt.Sprintf(format, args...))
@ -52,7 +52,7 @@ func (pl *PrefixLogger) Warningf(format string, args ...interface{}) {
} }
// Errorf does error logging. // Errorf does error logging.
func (pl *PrefixLogger) Errorf(format string, args ...interface{}) { func (pl *PrefixLogger) Errorf(format string, args ...any) {
if pl != nil { if pl != nil {
format = pl.prefix + format format = pl.prefix + format
pl.logger.ErrorDepth(1, fmt.Sprintf(format, args...)) pl.logger.ErrorDepth(1, fmt.Sprintf(format, args...))
@ -62,7 +62,7 @@ func (pl *PrefixLogger) Errorf(format string, args ...interface{}) {
} }
// Debugf does info logging at verbose level 2. // Debugf does info logging at verbose level 2.
func (pl *PrefixLogger) Debugf(format string, args ...interface{}) { func (pl *PrefixLogger) Debugf(format string, args ...any) {
// TODO(6044): Refactor interfaces LoggerV2 and DepthLogger, and maybe // TODO(6044): Refactor interfaces LoggerV2 and DepthLogger, and maybe
// rewrite PrefixLogger a little to ensure that we don't use the global // rewrite PrefixLogger a little to ensure that we don't use the global
// `Logger` here, and instead use the `logger` field. // `Logger` here, and instead use the `logger` field.
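PrefixLogger simply prepends a fixed prefix and forwards to a depth logger, staying safe on a nil receiver. The types below (depthLogger, stdDepthLogger, prefixLogger) are illustrative stand-ins for these internal types, not the gRPC implementations; a sketch of the same pattern over the standard library's log package:

    package main

    import (
        "fmt"
        "log"
    )

    type depthLogger interface {
        InfoDepth(depth int, args ...any)
    }

    type stdDepthLogger struct{}

    func (stdDepthLogger) InfoDepth(depth int, args ...any) {
        // log.Output's calldepth plays the role of the depth argument here.
        log.Output(depth+2, fmt.Sprint(args...))
    }

    type prefixLogger struct {
        logger depthLogger
        prefix string
    }

    // Infof mirrors the nil-safe pattern above: a nil *prefixLogger silently
    // drops the message, which lets tests pass in a nil logger.
    func (pl *prefixLogger) Infof(format string, args ...any) {
        if pl == nil {
            return
        }
        pl.logger.InfoDepth(1, fmt.Sprintf(pl.prefix+format, args...))
    }

    func main() {
        pl := &prefixLogger{logger: stdDepthLogger{}, prefix: "[transport] "}
        pl.Infof("stream %d closed", 3)

        var nilPL *prefixLogger
        nilPL.Infof("this is silently dropped") // safe: nil receiver
    }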

View File

@ -80,6 +80,13 @@ func Uint32() uint32 {
return r.Uint32() return r.Uint32()
} }
// ExpFloat64 implements rand.ExpFloat64 on the grpcrand global source.
func ExpFloat64() float64 {
mu.Lock()
defer mu.Unlock()
return r.ExpFloat64()
}
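The new ExpFloat64 follows the same pattern as the other grpcrand wrappers: math/rand.Rand is not safe for concurrent use, so every accessor takes a package-level mutex. A self-contained sketch of that pattern (names here are illustrative, not the internal grpcrand identifiers):

    package main

    import (
        "fmt"
        "math/rand"
        "sync"
        "time"
    )

    // A mutex-guarded global source: all access to r goes through the lock.
    var (
        mu sync.Mutex
        r  = rand.New(rand.NewSource(time.Now().UnixNano()))
    )

    func expFloat64() float64 {
        mu.Lock()
        defer mu.Unlock()
        return r.ExpFloat64()
    }

    func main() {
        // ExpFloat64 draws from an exponential distribution with mean 1,
        // useful e.g. for randomized retry/backoff jitter.
        fmt.Println(expFloat64())
    }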
// Shuffle implements rand.Shuffle on the grpcrand global source. // Shuffle implements rand.Shuffle on the grpcrand global source.
var Shuffle = func(n int, f func(int, int)) { var Shuffle = func(n int, f func(int, int)) {
mu.Lock() mu.Lock()

View File

@ -32,10 +32,10 @@ import (
// //
// This type is safe for concurrent access. // This type is safe for concurrent access.
type CallbackSerializer struct { type CallbackSerializer struct {
// Done is closed once the serializer is shut down completely, i.e. all // done is closed once the serializer is shut down completely, i.e. all
// scheduled callbacks are executed and the serializer has deallocated all // scheduled callbacks are executed and the serializer has deallocated all
// its resources. // its resources.
Done chan struct{} done chan struct{}
callbacks *buffer.Unbounded callbacks *buffer.Unbounded
closedMu sync.Mutex closedMu sync.Mutex
@ -48,12 +48,12 @@ type CallbackSerializer struct {
// callbacks will be added once this context is canceled, and any pending un-run // callbacks will be added once this context is canceled, and any pending un-run
// callbacks will be executed before the serializer is shut down. // callbacks will be executed before the serializer is shut down.
func NewCallbackSerializer(ctx context.Context) *CallbackSerializer { func NewCallbackSerializer(ctx context.Context) *CallbackSerializer {
t := &CallbackSerializer{ cs := &CallbackSerializer{
Done: make(chan struct{}), done: make(chan struct{}),
callbacks: buffer.NewUnbounded(), callbacks: buffer.NewUnbounded(),
} }
go t.run(ctx) go cs.run(ctx)
return t return cs
} }
// Schedule adds a callback to be scheduled after existing callbacks are run. // Schedule adds a callback to be scheduled after existing callbacks are run.
@ -64,56 +64,62 @@ func NewCallbackSerializer(ctx context.Context) *CallbackSerializer {
// Return value indicates if the callback was successfully added to the list of // Return value indicates if the callback was successfully added to the list of
// callbacks to be executed by the serializer. It is not possible to add // callbacks to be executed by the serializer. It is not possible to add
// callbacks once the context passed to NewCallbackSerializer is cancelled. // callbacks once the context passed to NewCallbackSerializer is cancelled.
func (t *CallbackSerializer) Schedule(f func(ctx context.Context)) bool { func (cs *CallbackSerializer) Schedule(f func(ctx context.Context)) bool {
t.closedMu.Lock() cs.closedMu.Lock()
defer t.closedMu.Unlock() defer cs.closedMu.Unlock()
if t.closed { if cs.closed {
return false return false
} }
t.callbacks.Put(f) cs.callbacks.Put(f)
return true return true
} }
func (t *CallbackSerializer) run(ctx context.Context) { func (cs *CallbackSerializer) run(ctx context.Context) {
var backlog []func(context.Context) var backlog []func(context.Context)
defer close(t.Done) defer close(cs.done)
for ctx.Err() == nil { for ctx.Err() == nil {
select { select {
case <-ctx.Done(): case <-ctx.Done():
// Do nothing here. Next iteration of the for loop will not happen, // Do nothing here. Next iteration of the for loop will not happen,
// since ctx.Err() would be non-nil. // since ctx.Err() would be non-nil.
case callback, ok := <-t.callbacks.Get(): case callback, ok := <-cs.callbacks.Get():
if !ok { if !ok {
return return
} }
t.callbacks.Load() cs.callbacks.Load()
callback.(func(ctx context.Context))(ctx) callback.(func(ctx context.Context))(ctx)
} }
} }
// Fetch pending callbacks if any, and execute them before returning from // Fetch pending callbacks if any, and execute them before returning from
// this method and closing t.Done. // this method and closing cs.done.
t.closedMu.Lock() cs.closedMu.Lock()
t.closed = true cs.closed = true
backlog = t.fetchPendingCallbacks() backlog = cs.fetchPendingCallbacks()
t.callbacks.Close() cs.callbacks.Close()
t.closedMu.Unlock() cs.closedMu.Unlock()
for _, b := range backlog { for _, b := range backlog {
b(ctx) b(ctx)
} }
} }
func (t *CallbackSerializer) fetchPendingCallbacks() []func(context.Context) { func (cs *CallbackSerializer) fetchPendingCallbacks() []func(context.Context) {
var backlog []func(context.Context) var backlog []func(context.Context)
for { for {
select { select {
case b := <-t.callbacks.Get(): case b := <-cs.callbacks.Get():
backlog = append(backlog, b.(func(context.Context))) backlog = append(backlog, b.(func(context.Context)))
t.callbacks.Load() cs.callbacks.Load()
default: default:
return backlog return backlog
} }
} }
} }
// Done returns a channel that is closed after the context passed to
// NewCallbackSerializer is canceled and all callbacks have been executed.
func (cs *CallbackSerializer) Done() <-chan struct{} {
return cs.done
}
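The change above turns the exported Done field into an unexported done channel exposed through a Done() accessor. A hedged usage sketch follows; since grpcsync is an internal package, code like this can only live inside the grpc module itself:

    package grpcexample // hypothetical package inside the grpc module

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/internal/grpcsync"
    )

    func serializerExample() {
        ctx, cancel := context.WithCancel(context.Background())
        cs := grpcsync.NewCallbackSerializer(ctx)

        // Callbacks run one at a time, in scheduling order, on one goroutine.
        for i := 0; i < 3; i++ {
            i := i
            cs.Schedule(func(context.Context) { fmt.Println("callback", i) })
        }

        // Canceling the context shuts the serializer down: no new callbacks
        // are accepted, pending ones still run, and then Done() is closed.
        cancel()
        <-cs.Done()
    }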

View File

@ -0,0 +1,121 @@
/*
*
* Copyright 2023 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package grpcsync
import (
"context"
"sync"
)
// Subscriber represents an entity that is subscribed to messages published on
// a PubSub. It wraps the callback to be invoked by the PubSub when a new
// message is published.
type Subscriber interface {
// OnMessage is invoked when a new message is published. Implementations
// must not block in this method.
OnMessage(msg any)
}
// PubSub is a simple one-to-many publish-subscribe system that supports
// messages of arbitrary type. It guarantees that messages are delivered in
// the same order in which they were published.
//
// Publisher invokes the Publish() method to publish new messages, while
// subscribers interested in receiving these messages register a callback
// via the Subscribe() method.
//
// Once a PubSub is stopped, no more messages can be published, but any pending
// published messages will be delivered to the subscribers. Done may be used
// to determine when all published messages have been delivered.
type PubSub struct {
cs *CallbackSerializer
// Access to the fields below is guarded by this mutex.
mu sync.Mutex
msg any
subscribers map[Subscriber]bool
}
// NewPubSub returns a new PubSub instance. Users should cancel the
// provided context to shut down the PubSub.
func NewPubSub(ctx context.Context) *PubSub {
return &PubSub{
cs: NewCallbackSerializer(ctx),
subscribers: map[Subscriber]bool{},
}
}
// Subscribe registers the provided Subscriber to the PubSub.
//
// If the PubSub contains a previously published message, the Subscriber's
// OnMessage() callback will be invoked asynchronously with the existing
// message to begin with, and subsequently for every newly published message.
//
// The caller is responsible for invoking the returned cancel function to
// unsubscribe itself from the PubSub.
func (ps *PubSub) Subscribe(sub Subscriber) (cancel func()) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.subscribers[sub] = true
if ps.msg != nil {
msg := ps.msg
ps.cs.Schedule(func(context.Context) {
ps.mu.Lock()
defer ps.mu.Unlock()
if !ps.subscribers[sub] {
return
}
sub.OnMessage(msg)
})
}
return func() {
ps.mu.Lock()
defer ps.mu.Unlock()
delete(ps.subscribers, sub)
}
}
// Publish publishes the provided message to the PubSub, and invokes
// callbacks registered by subscribers asynchronously.
func (ps *PubSub) Publish(msg any) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.msg = msg
for sub := range ps.subscribers {
s := sub
ps.cs.Schedule(func(context.Context) {
ps.mu.Lock()
defer ps.mu.Unlock()
if !ps.subscribers[s] {
return
}
s.OnMessage(msg)
})
}
}
// Done returns a channel that is closed after the context passed to NewPubSub
// is canceled and all updates have been sent to subscribers.
func (ps *PubSub) Done() <-chan struct{} {
return ps.cs.Done()
}
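A hedged usage sketch of the new PubSub type; as with the serializer above, grpcsync is internal, so this only compiles from within the grpc module, and stateWatcher is an illustrative Subscriber:

    package grpcexample // hypothetical package inside the grpc module

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/internal/grpcsync"
    )

    // stateWatcher is an illustrative Subscriber; OnMessage must not block.
    type stateWatcher struct{}

    func (stateWatcher) OnMessage(msg any) { fmt.Println("connectivity:", msg) }

    func pubsubExample() {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        ps := grpcsync.NewPubSub(ctx)

        // A late subscriber first receives the most recently published
        // message, then every subsequent one, in publish order.
        ps.Publish("READY")
        unsubscribe := ps.Subscribe(stateWatcher{})
        defer unsubscribe()

        ps.Publish("IDLE")
    }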

View File

@ -16,7 +16,9 @@
* *
*/ */
package grpc // Package idle contains a component for managing idleness (entering and exiting)
// based on RPC activity.
package idle
import ( import (
"fmt" "fmt"
@ -24,6 +26,8 @@ import (
"sync" "sync"
"sync/atomic" "sync/atomic"
"time" "time"
"google.golang.org/grpc/grpclog"
) )
// For overriding in unit tests. // For overriding in unit tests.
@ -31,31 +35,31 @@ var timeAfterFunc = func(d time.Duration, f func()) *time.Timer {
return time.AfterFunc(d, f) return time.AfterFunc(d, f)
} }
// idlenessEnforcer is the functionality provided by grpc.ClientConn to enter // Enforcer is the functionality provided by grpc.ClientConn to enter
// and exit from idle mode. // and exit from idle mode.
type idlenessEnforcer interface { type Enforcer interface {
exitIdleMode() error ExitIdleMode() error
enterIdleMode() error EnterIdleMode() error
} }
// idlenessManager defines the functionality required to track RPC activity on a // Manager defines the functionality required to track RPC activity on a
// channel. // channel.
type idlenessManager interface { type Manager interface {
onCallBegin() error OnCallBegin() error
onCallEnd() OnCallEnd()
close() Close()
} }
type noopIdlenessManager struct{} type noopManager struct{}
func (noopIdlenessManager) onCallBegin() error { return nil } func (noopManager) OnCallBegin() error { return nil }
func (noopIdlenessManager) onCallEnd() {} func (noopManager) OnCallEnd() {}
func (noopIdlenessManager) close() {} func (noopManager) Close() {}
// idlenessManagerImpl implements the idlenessManager interface. It uses atomic // manager implements the Manager interface. It uses atomic operations to
// operations to synchronize access to shared state and a mutex to guarantee // synchronize access to shared state and a mutex to guarantee mutual exclusion
// mutual exclusion in a critical section. // in a critical section.
type idlenessManagerImpl struct { type manager struct {
// State accessed atomically. // State accessed atomically.
lastCallEndTime int64 // Unix timestamp in nanos; time when the most recent RPC completed. lastCallEndTime int64 // Unix timestamp in nanos; time when the most recent RPC completed.
activeCallsCount int32 // Count of active RPCs; -math.MaxInt32 means channel is idle or is trying to get there. activeCallsCount int32 // Count of active RPCs; -math.MaxInt32 means channel is idle or is trying to get there.
@ -64,14 +68,15 @@ type idlenessManagerImpl struct {
// Can be accessed without atomics or mutex since these are set at creation // Can be accessed without atomics or mutex since these are set at creation
// time and read-only after that. // time and read-only after that.
enforcer idlenessEnforcer // Functionality provided by grpc.ClientConn. enforcer Enforcer // Functionality provided by grpc.ClientConn.
timeout int64 // Idle timeout duration nanos stored as an int64. timeout int64 // Idle timeout duration nanos stored as an int64.
logger grpclog.LoggerV2
// idleMu is used to guarantee mutual exclusion in two scenarios: // idleMu is used to guarantee mutual exclusion in two scenarios:
// - Opposing intentions: // - Opposing intentions:
// - a: Idle timeout has fired and handleIdleTimeout() is trying to put // - a: Idle timeout has fired and handleIdleTimeout() is trying to put
// the channel in idle mode because the channel has been inactive. // the channel in idle mode because the channel has been inactive.
// - b: At the same time an RPC is made on the channel, and onCallBegin() // - b: At the same time an RPC is made on the channel, and OnCallBegin()
// is trying to prevent the channel from going idle. // is trying to prevent the channel from going idle.
// - Competing intentions: // - Competing intentions:
// - The channel is in idle mode and there are multiple RPCs starting at // - The channel is in idle mode and there are multiple RPCs starting at
@ -83,28 +88,37 @@ type idlenessManagerImpl struct {
timer *time.Timer timer *time.Timer
} }
// newIdlenessManager creates a new idleness manager implementation for the // ManagerOptions is a collection of options used by
// NewManager.
type ManagerOptions struct {
Enforcer Enforcer
Timeout time.Duration
Logger grpclog.LoggerV2
}
// NewManager creates a new idleness manager implementation for the
// given idle timeout. // given idle timeout.
func newIdlenessManager(enforcer idlenessEnforcer, idleTimeout time.Duration) idlenessManager { func NewManager(opts ManagerOptions) Manager {
if idleTimeout == 0 { if opts.Timeout == 0 {
return noopIdlenessManager{} return noopManager{}
} }
i := &idlenessManagerImpl{ m := &manager{
enforcer: enforcer, enforcer: opts.Enforcer,
timeout: int64(idleTimeout), timeout: int64(opts.Timeout),
logger: opts.Logger,
} }
i.timer = timeAfterFunc(idleTimeout, i.handleIdleTimeout) m.timer = timeAfterFunc(opts.Timeout, m.handleIdleTimeout)
return i return m
} }
// resetIdleTimer resets the idle timer to the given duration. This method // resetIdleTimer resets the idle timer to the given duration. This method
// should only be called from the timer callback. // should only be called from the timer callback.
func (i *idlenessManagerImpl) resetIdleTimer(d time.Duration) { func (m *manager) resetIdleTimer(d time.Duration) {
i.idleMu.Lock() m.idleMu.Lock()
defer i.idleMu.Unlock() defer m.idleMu.Unlock()
if i.timer == nil { if m.timer == nil {
// Only close sets timer to nil. We are done. // Only close sets timer to nil. We are done.
return return
} }
@ -112,47 +126,47 @@ func (i *idlenessManagerImpl) resetIdleTimer(d time.Duration) {
// It is safe to ignore the return value from Reset() because this method is // It is safe to ignore the return value from Reset() because this method is
// only ever called from the timer callback, which means the timer has // only ever called from the timer callback, which means the timer has
// already fired. // already fired.
i.timer.Reset(d) m.timer.Reset(d)
} }
// handleIdleTimeout is the timer callback that is invoked upon expiry of the // handleIdleTimeout is the timer callback that is invoked upon expiry of the
// configured idle timeout. The channel is considered inactive if there are no // configured idle timeout. The channel is considered inactive if there are no
// ongoing calls and no RPC activity since the last time the timer fired. // ongoing calls and no RPC activity since the last time the timer fired.
func (i *idlenessManagerImpl) handleIdleTimeout() { func (m *manager) handleIdleTimeout() {
if i.isClosed() { if m.isClosed() {
return return
} }
if atomic.LoadInt32(&i.activeCallsCount) > 0 { if atomic.LoadInt32(&m.activeCallsCount) > 0 {
i.resetIdleTimer(time.Duration(i.timeout)) m.resetIdleTimer(time.Duration(m.timeout))
return return
} }
// There has been activity on the channel since we last got here. Reset the // There has been activity on the channel since we last got here. Reset the
// timer and return. // timer and return.
if atomic.LoadInt32(&i.activeSinceLastTimerCheck) == 1 { if atomic.LoadInt32(&m.activeSinceLastTimerCheck) == 1 {
// Set the timer to fire after a duration of idle timeout, calculated // Set the timer to fire after a duration of idle timeout, calculated
// from the time the most recent RPC completed. // from the time the most recent RPC completed.
atomic.StoreInt32(&i.activeSinceLastTimerCheck, 0) atomic.StoreInt32(&m.activeSinceLastTimerCheck, 0)
i.resetIdleTimer(time.Duration(atomic.LoadInt64(&i.lastCallEndTime) + i.timeout - time.Now().UnixNano())) m.resetIdleTimer(time.Duration(atomic.LoadInt64(&m.lastCallEndTime) + m.timeout - time.Now().UnixNano()))
return return
} }
// This CAS operation is extremely likely to succeed given that there has // This CAS operation is extremely likely to succeed given that there has
// been no activity since the last time we were here. Setting the // been no activity since the last time we were here. Setting the
// activeCallsCount to -math.MaxInt32 indicates to onCallBegin() that the // activeCallsCount to -math.MaxInt32 indicates to OnCallBegin() that the
// channel is either in idle mode or is trying to get there. // channel is either in idle mode or is trying to get there.
if !atomic.CompareAndSwapInt32(&i.activeCallsCount, 0, -math.MaxInt32) { if !atomic.CompareAndSwapInt32(&m.activeCallsCount, 0, -math.MaxInt32) {
// This CAS operation can fail if an RPC started after we checked for // This CAS operation can fail if an RPC started after we checked for
// activity at the top of this method, or one was ongoing from before // activity at the top of this method, or one was ongoing from before
// the last time we were here. In both cases, reset the timer and return. // the last time we were here. In both cases, reset the timer and return.
i.resetIdleTimer(time.Duration(i.timeout)) m.resetIdleTimer(time.Duration(m.timeout))
return return
} }
// Now that we've set the active calls count to -math.MaxInt32, it's time to // Now that we've set the active calls count to -math.MaxInt32, it's time to
// actually move to idle mode. // actually move to idle mode.
if i.tryEnterIdleMode() { if m.tryEnterIdleMode() {
// Successfully entered idle mode. No timer needed until we exit idle. // Successfully entered idle mode. No timer needed until we exit idle.
return return
} }
@ -160,8 +174,8 @@ func (i *idlenessManagerImpl) handleIdleTimeout() {
// Failed to enter idle mode due to a concurrent RPC that kept the channel // Failed to enter idle mode due to a concurrent RPC that kept the channel
// active, or because of an error from the channel. Undo the attempt to // active, or because of an error from the channel. Undo the attempt to
// enter idle, and reset the timer to try again later. // enter idle, and reset the timer to try again later.
atomic.AddInt32(&i.activeCallsCount, math.MaxInt32) atomic.AddInt32(&m.activeCallsCount, math.MaxInt32)
i.resetIdleTimer(time.Duration(i.timeout)) m.resetIdleTimer(time.Duration(m.timeout))
} }
// tryEnterIdleMode instructs the channel to enter idle mode. But before // tryEnterIdleMode instructs the channel to enter idle mode. But before
@ -171,15 +185,15 @@ func (i *idlenessManagerImpl) handleIdleTimeout() {
// Return value indicates whether or not the channel moved to idle mode. // Return value indicates whether or not the channel moved to idle mode.
// //
// Holds idleMu which ensures mutual exclusion with exitIdleMode. // Holds idleMu which ensures mutual exclusion with exitIdleMode.
func (i *idlenessManagerImpl) tryEnterIdleMode() bool { func (m *manager) tryEnterIdleMode() bool {
i.idleMu.Lock() m.idleMu.Lock()
defer i.idleMu.Unlock() defer m.idleMu.Unlock()
if atomic.LoadInt32(&i.activeCallsCount) != -math.MaxInt32 { if atomic.LoadInt32(&m.activeCallsCount) != -math.MaxInt32 {
// We raced and lost to a new RPC. Very rare, but stop entering idle. // We raced and lost to a new RPC. Very rare, but stop entering idle.
return false return false
} }
if atomic.LoadInt32(&i.activeSinceLastTimerCheck) == 1 { if atomic.LoadInt32(&m.activeSinceLastTimerCheck) == 1 {
// A very short RPC could have come in (and also finished) after we // A very short RPC could have come in (and also finished) after we
// checked for calls count and activity in handleIdleTimeout(), but // checked for calls count and activity in handleIdleTimeout(), but
// before the CAS operation. So, we need to check for activity again. // before the CAS operation. So, we need to check for activity again.
@ -189,99 +203,99 @@ func (i *idlenessManagerImpl) tryEnterIdleMode() bool {
// No new RPCs have come in since we last set the active calls count value // No new RPCs have come in since we last set the active calls count value
// -math.MaxInt32 in the timer callback. And since we have the lock, it is // -math.MaxInt32 in the timer callback. And since we have the lock, it is
// safe to enter idle mode now. // safe to enter idle mode now.
if err := i.enforcer.enterIdleMode(); err != nil { if err := m.enforcer.EnterIdleMode(); err != nil {
logger.Errorf("Failed to enter idle mode: %v", err) m.logger.Errorf("Failed to enter idle mode: %v", err)
return false return false
} }
// Successfully entered idle mode. // Successfully entered idle mode.
i.actuallyIdle = true m.actuallyIdle = true
return true return true
} }
// onCallBegin is invoked at the start of every RPC. // OnCallBegin is invoked at the start of every RPC.
func (i *idlenessManagerImpl) onCallBegin() error { func (m *manager) OnCallBegin() error {
if i.isClosed() { if m.isClosed() {
return nil return nil
} }
if atomic.AddInt32(&i.activeCallsCount, 1) > 0 { if atomic.AddInt32(&m.activeCallsCount, 1) > 0 {
// Channel is not idle now. Set the activity bit and allow the call. // Channel is not idle now. Set the activity bit and allow the call.
atomic.StoreInt32(&i.activeSinceLastTimerCheck, 1) atomic.StoreInt32(&m.activeSinceLastTimerCheck, 1)
return nil return nil
} }
// Channel is either in idle mode or is in the process of moving to idle // Channel is either in idle mode or is in the process of moving to idle
// mode. Attempt to exit idle mode to allow this RPC. // mode. Attempt to exit idle mode to allow this RPC.
if err := i.exitIdleMode(); err != nil { if err := m.exitIdleMode(); err != nil {
// Undo the increment to calls count, and return an error causing the // Undo the increment to calls count, and return an error causing the
// RPC to fail. // RPC to fail.
atomic.AddInt32(&i.activeCallsCount, -1) atomic.AddInt32(&m.activeCallsCount, -1)
return err return err
} }
atomic.StoreInt32(&i.activeSinceLastTimerCheck, 1) atomic.StoreInt32(&m.activeSinceLastTimerCheck, 1)
return nil return nil
} }
// exitIdleMode instructs the channel to exit idle mode. // exitIdleMode instructs the channel to exit idle mode.
// //
// Holds idleMu which ensures mutual exclusion with tryEnterIdleMode. // Holds idleMu which ensures mutual exclusion with tryEnterIdleMode.
func (i *idlenessManagerImpl) exitIdleMode() error { func (m *manager) exitIdleMode() error {
i.idleMu.Lock() m.idleMu.Lock()
defer i.idleMu.Unlock() defer m.idleMu.Unlock()
if !i.actuallyIdle { if !m.actuallyIdle {
// This can happen in two scenarios: // This can happen in two scenarios:
// - handleIdleTimeout() set the calls count to -math.MaxInt32 and called // - handleIdleTimeout() set the calls count to -math.MaxInt32 and called
// tryEnterIdleMode(). But before the latter could grab the lock, an RPC // tryEnterIdleMode(). But before the latter could grab the lock, an RPC
// came in and onCallBegin() noticed that the calls count is negative. // came in and OnCallBegin() noticed that the calls count is negative.
// - Channel is in idle mode, and multiple new RPCs come in at the same // - Channel is in idle mode, and multiple new RPCs come in at the same
// time, all of them notice a negative calls count in onCallBegin and get // time, all of them notice a negative calls count in OnCallBegin and get
// here. The first one to get the lock would get the channel to exit idle. // here. The first one to get the lock would get the channel to exit idle.
// //
// Either way, nothing to do here. // Either way, nothing to do here.
return nil return nil
} }
if err := i.enforcer.exitIdleMode(); err != nil { if err := m.enforcer.ExitIdleMode(); err != nil {
return fmt.Errorf("channel failed to exit idle mode: %v", err) return fmt.Errorf("channel failed to exit idle mode: %v", err)
} }
// Undo the idle entry process. This also respects any new RPC attempts. // Undo the idle entry process. This also respects any new RPC attempts.
atomic.AddInt32(&i.activeCallsCount, math.MaxInt32) atomic.AddInt32(&m.activeCallsCount, math.MaxInt32)
i.actuallyIdle = false m.actuallyIdle = false
// Start a new timer to fire after the configured idle timeout. // Start a new timer to fire after the configured idle timeout.
i.timer = timeAfterFunc(time.Duration(i.timeout), i.handleIdleTimeout) m.timer = timeAfterFunc(time.Duration(m.timeout), m.handleIdleTimeout)
return nil return nil
} }
// onCallEnd is invoked at the end of every RPC. // OnCallEnd is invoked at the end of every RPC.
func (i *idlenessManagerImpl) onCallEnd() { func (m *manager) OnCallEnd() {
if i.isClosed() { if m.isClosed() {
return return
} }
// Record the time at which the most recent call finished. // Record the time at which the most recent call finished.
atomic.StoreInt64(&i.lastCallEndTime, time.Now().UnixNano()) atomic.StoreInt64(&m.lastCallEndTime, time.Now().UnixNano())
// Decrement the active calls count. This count can temporarily go negative // Decrement the active calls count. This count can temporarily go negative
// when the timer callback is in the process of moving the channel to idle // when the timer callback is in the process of moving the channel to idle
// mode, but one or more RPCs come in and complete before the timer callback // mode, but one or more RPCs come in and complete before the timer callback
// can get done with the process of moving to idle mode. // can get done with the process of moving to idle mode.
atomic.AddInt32(&i.activeCallsCount, -1) atomic.AddInt32(&m.activeCallsCount, -1)
} }
func (i *idlenessManagerImpl) isClosed() bool { func (m *manager) isClosed() bool {
return atomic.LoadInt32(&i.closed) == 1 return atomic.LoadInt32(&m.closed) == 1
} }
func (i *idlenessManagerImpl) close() { func (m *manager) Close() {
atomic.StoreInt32(&i.closed, 1) atomic.StoreInt32(&m.closed, 1)
i.idleMu.Lock() m.idleMu.Lock()
i.timer.Stop() m.timer.Stop()
i.timer = nil m.timer = nil
i.idleMu.Unlock() m.idleMu.Unlock()
} }
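The idleness manager moved from package grpc into internal/idle and now takes its dependencies through ManagerOptions. A hedged sketch of how a caller inside the grpc module might wire it up; fakeEnforcer is an illustrative stand-in for the ClientConn, which is the real Enforcer:

    package grpcexample // hypothetical package inside the grpc module

    import (
        "os"
        "time"

        "google.golang.org/grpc/grpclog"
        "google.golang.org/grpc/internal/idle"
    )

    type fakeEnforcer struct{}

    func (fakeEnforcer) EnterIdleMode() error { return nil }
    func (fakeEnforcer) ExitIdleMode() error  { return nil }

    func idleExample() {
        m := idle.NewManager(idle.ManagerOptions{
            Enforcer: fakeEnforcer{},
            Timeout:  30 * time.Minute, // a zero Timeout returns the no-op manager
            Logger:   grpclog.NewLoggerV2(os.Stdout, os.Stderr, os.Stderr),
        })
        defer m.Close()

        // Every RPC brackets its lifetime with OnCallBegin/OnCallEnd so the
        // manager can track activity; OnCallBegin also forces an exit from
        // idle mode if the channel had gone idle.
        if err := m.OnCallBegin(); err != nil {
            return // channel failed to exit idle mode
        }
        defer m.OnCallEnd()
        // ... issue the RPC ...
    }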

View File

@ -30,7 +30,7 @@ import (
var ( var (
// WithHealthCheckFunc is set by dialoptions.go // WithHealthCheckFunc is set by dialoptions.go
WithHealthCheckFunc interface{} // func (HealthChecker) DialOption WithHealthCheckFunc any // func (HealthChecker) DialOption
// HealthCheckFunc is used to provide client-side LB channel health checking // HealthCheckFunc is used to provide client-side LB channel health checking
HealthCheckFunc HealthChecker HealthCheckFunc HealthChecker
// BalancerUnregister is exported by package balancer to unregister a balancer. // BalancerUnregister is exported by package balancer to unregister a balancer.
@ -38,8 +38,12 @@ var (
// KeepaliveMinPingTime is the minimum ping interval. This must be 10s by // KeepaliveMinPingTime is the minimum ping interval. This must be 10s by
// default, but tests may wish to set it lower for convenience. // default, but tests may wish to set it lower for convenience.
KeepaliveMinPingTime = 10 * time.Second KeepaliveMinPingTime = 10 * time.Second
// KeepaliveMinServerPingTime is the minimum ping interval for servers.
// This must be 1s by default, but tests may wish to set it lower for
// convenience.
KeepaliveMinServerPingTime = time.Second
// ParseServiceConfig parses a JSON representation of the service config. // ParseServiceConfig parses a JSON representation of the service config.
ParseServiceConfig interface{} // func(string) *serviceconfig.ParseResult ParseServiceConfig any // func(string) *serviceconfig.ParseResult
// EqualServiceConfigForTesting is for testing service config generation and // EqualServiceConfigForTesting is for testing service config generation and
// parsing. Both a and b should be returned by ParseServiceConfig. // parsing. Both a and b should be returned by ParseServiceConfig.
// This function compares the config without rawJSON stripped, in case the // This function compares the config without rawJSON stripped, in case the
@ -49,33 +53,33 @@ var (
// given name. This is set by package certprovider for use from xDS // given name. This is set by package certprovider for use from xDS
// bootstrap code while parsing certificate provider configs in the // bootstrap code while parsing certificate provider configs in the
// bootstrap file. // bootstrap file.
GetCertificateProviderBuilder interface{} // func(string) certprovider.Builder GetCertificateProviderBuilder any // func(string) certprovider.Builder
// GetXDSHandshakeInfoForTesting returns a pointer to the xds.HandshakeInfo // GetXDSHandshakeInfoForTesting returns a pointer to the xds.HandshakeInfo
// stored in the passed in attributes. This is set by // stored in the passed in attributes. This is set by
// credentials/xds/xds.go. // credentials/xds/xds.go.
GetXDSHandshakeInfoForTesting interface{} // func (*attributes.Attributes) *xds.HandshakeInfo GetXDSHandshakeInfoForTesting any // func (*attributes.Attributes) *xds.HandshakeInfo
// GetServerCredentials returns the transport credentials configured on a // GetServerCredentials returns the transport credentials configured on a
// gRPC server. An xDS-enabled server needs to know what type of credentials // gRPC server. An xDS-enabled server needs to know what type of credentials
// is configured on the underlying gRPC server. This is set by server.go. // is configured on the underlying gRPC server. This is set by server.go.
GetServerCredentials interface{} // func (*grpc.Server) credentials.TransportCredentials GetServerCredentials any // func (*grpc.Server) credentials.TransportCredentials
// CanonicalString returns the canonical string of the code defined here: // CanonicalString returns the canonical string of the code defined here:
// https://github.com/grpc/grpc/blob/master/doc/statuscodes.md. // https://github.com/grpc/grpc/blob/master/doc/statuscodes.md.
// //
// This is used in the 1.0 release of gcp/observability, and thus must not be // This is used in the 1.0 release of gcp/observability, and thus must not be
// deleted or changed. // deleted or changed.
CanonicalString interface{} // func (codes.Code) string CanonicalString any // func (codes.Code) string
// DrainServerTransports initiates a graceful close of existing connections // DrainServerTransports initiates a graceful close of existing connections
// on a gRPC server accepted on the provided listener address. An // on a gRPC server accepted on the provided listener address. An
// xDS-enabled server invokes this method on a grpc.Server when a particular // xDS-enabled server invokes this method on a grpc.Server when a particular
// listener moves to "not-serving" mode. // listener moves to "not-serving" mode.
DrainServerTransports interface{} // func(*grpc.Server, string) DrainServerTransports any // func(*grpc.Server, string)
// AddGlobalServerOptions adds an array of ServerOption that will be // AddGlobalServerOptions adds an array of ServerOption that will be
// effective globally for newly created servers. The priority will be: 1. // effective globally for newly created servers. The priority will be: 1.
// user-provided; 2. this method; 3. default values. // user-provided; 2. this method; 3. default values.
// //
// This is used in the 1.0 release of gcp/observability, and thus must not be // This is used in the 1.0 release of gcp/observability, and thus must not be
// deleted or changed. // deleted or changed.
AddGlobalServerOptions interface{} // func(opt ...ServerOption) AddGlobalServerOptions any // func(opt ...ServerOption)
// ClearGlobalServerOptions clears the array of extra ServerOption. This // ClearGlobalServerOptions clears the array of extra ServerOption. This
// method is useful in testing and benchmarking. // method is useful in testing and benchmarking.
// //
@ -88,14 +92,14 @@ var (
// //
// This is used in the 1.0 release of gcp/observability, and thus must not be // This is used in the 1.0 release of gcp/observability, and thus must not be
// deleted or changed. // deleted or changed.
AddGlobalDialOptions interface{} // func(opt ...DialOption) AddGlobalDialOptions any // func(opt ...DialOption)
// DisableGlobalDialOptions returns a DialOption that prevents the // DisableGlobalDialOptions returns a DialOption that prevents the
// ClientConn from applying the global DialOptions (set via // ClientConn from applying the global DialOptions (set via
// AddGlobalDialOptions). // AddGlobalDialOptions).
// //
// This is used in the 1.0 release of gcp/observability, and thus must not be // This is used in the 1.0 release of gcp/observability, and thus must not be
// deleted or changed. // deleted or changed.
DisableGlobalDialOptions interface{} // func() grpc.DialOption DisableGlobalDialOptions any // func() grpc.DialOption
// ClearGlobalDialOptions clears the array of extra DialOption. This // ClearGlobalDialOptions clears the array of extra DialOption. This
// method is useful in testing and benchmarking. // method is useful in testing and benchmarking.
// //
@ -104,23 +108,26 @@ var (
ClearGlobalDialOptions func() ClearGlobalDialOptions func()
// JoinDialOptions combines the dial options passed as arguments into a // JoinDialOptions combines the dial options passed as arguments into a
// single dial option. // single dial option.
JoinDialOptions interface{} // func(...grpc.DialOption) grpc.DialOption JoinDialOptions any // func(...grpc.DialOption) grpc.DialOption
// JoinServerOptions combines the server options passed as arguments into a // JoinServerOptions combines the server options passed as arguments into a
// single server option. // single server option.
JoinServerOptions interface{} // func(...grpc.ServerOption) grpc.ServerOption JoinServerOptions any // func(...grpc.ServerOption) grpc.ServerOption
// WithBinaryLogger returns a DialOption that specifies the binary logger // WithBinaryLogger returns a DialOption that specifies the binary logger
// for a ClientConn. // for a ClientConn.
// //
// This is used in the 1.0 release of gcp/observability, and thus must not be // This is used in the 1.0 release of gcp/observability, and thus must not be
// deleted or changed. // deleted or changed.
WithBinaryLogger interface{} // func(binarylog.Logger) grpc.DialOption WithBinaryLogger any // func(binarylog.Logger) grpc.DialOption
// BinaryLogger returns a ServerOption that can set the binary logger for a // BinaryLogger returns a ServerOption that can set the binary logger for a
// server. // server.
// //
// This is used in the 1.0 release of gcp/observability, and thus must not be // This is used in the 1.0 release of gcp/observability, and thus must not be
// deleted or changed. // deleted or changed.
BinaryLogger interface{} // func(binarylog.Logger) grpc.ServerOption BinaryLogger any // func(binarylog.Logger) grpc.ServerOption
// SubscribeToConnectivityStateChanges adds a grpcsync.Subscriber to a provided grpc.ClientConn
SubscribeToConnectivityStateChanges any // func(*grpc.ClientConn, grpcsync.Subscriber)
// NewXDSResolverWithConfigForTesting creates a new xds resolver builder using // NewXDSResolverWithConfigForTesting creates a new xds resolver builder using
// the provided xds bootstrap config instead of the global configuration from // the provided xds bootstrap config instead of the global configuration from
@ -131,7 +138,7 @@ var (
// //
// This function should ONLY be used for testing and may not work with some // This function should ONLY be used for testing and may not work with some
// other features, including the CSDS service. // other features, including the CSDS service.
NewXDSResolverWithConfigForTesting interface{} // func([]byte) (resolver.Builder, error) NewXDSResolverWithConfigForTesting any // func([]byte) (resolver.Builder, error)
// RegisterRLSClusterSpecifierPluginForTesting registers the RLS Cluster // RegisterRLSClusterSpecifierPluginForTesting registers the RLS Cluster
// Specifier Plugin for testing purposes, regardless of the XDSRLS environment // Specifier Plugin for testing purposes, regardless of the XDSRLS environment
@ -163,7 +170,11 @@ var (
UnregisterRBACHTTPFilterForTesting func() UnregisterRBACHTTPFilterForTesting func()
// ORCAAllowAnyMinReportingInterval is for examples/orca use ONLY. // ORCAAllowAnyMinReportingInterval is for examples/orca use ONLY.
ORCAAllowAnyMinReportingInterval interface{} // func(so *orca.ServiceOptions) ORCAAllowAnyMinReportingInterval any // func(so *orca.ServiceOptions)
// GRPCResolverSchemeExtraMetadata determines when gRPC will add extra
// metadata to RPCs.
GRPCResolverSchemeExtraMetadata string = "xds"
) )
// HealthChecker defines the signature of the client-side LB channel health checking function. // HealthChecker defines the signature of the client-side LB channel health checking function.
@ -174,7 +185,7 @@ var (
// //
// The health checking protocol is defined at: // The health checking protocol is defined at:
// https://github.com/grpc/grpc/blob/master/doc/health-checking.md // https://github.com/grpc/grpc/blob/master/doc/health-checking.md
type HealthChecker func(ctx context.Context, newStream func(string) (interface{}, error), setConnectivityState func(connectivity.State, error), serviceName string) error type HealthChecker func(ctx context.Context, newStream func(string) (any, error), setConnectivityState func(connectivity.State, error), serviceName string) error
const ( const (
// CredsBundleModeFallback switches GoogleDefaultCreds to fallback mode. // CredsBundleModeFallback switches GoogleDefaultCreds to fallback mode.

View File

@ -35,7 +35,7 @@ const mdKey = mdKeyType("grpc.internal.address.metadata")
type mdValue metadata.MD type mdValue metadata.MD
func (m mdValue) Equal(o interface{}) bool { func (m mdValue) Equal(o any) bool {
om, ok := o.(mdValue) om, ok := o.(mdValue)
if !ok { if !ok {
return false return false
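mdValue implements Equal so that metadata stored in attributes can be compared by content. The same pattern is available to any value placed in the public attributes package; a small sketch with an illustrative region type:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/attributes"
    )

    // region shows the same pattern as mdValue above: values stored in
    // attributes.Attributes may implement Equal(any) bool so two Attributes
    // can be compared by content instead of by interface identity.
    type region string

    func (r region) Equal(o any) bool {
        other, ok := o.(region)
        return ok && other == r
    }

    func main() {
        a := attributes.New("region", region("us-east1"))
        b := attributes.New("region", region("us-east1"))
        fmt.Println(a.Equal(b)) // true, because region implements Equal
    }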

View File

@ -35,7 +35,7 @@ const jsonIndent = " "
// ToJSON marshals the input into a json string. // ToJSON marshals the input into a json string.
// //
// If marshal fails, it falls back to fmt.Sprintf("%+v"). // If marshal fails, it falls back to fmt.Sprintf("%+v").
func ToJSON(e interface{}) string { func ToJSON(e any) string {
switch ee := e.(type) { switch ee := e.(type) {
case protov1.Message: case protov1.Message:
mm := jsonpb.Marshaler{Indent: jsonIndent} mm := jsonpb.Marshaler{Indent: jsonIndent}
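ToJSON tries the protobuf JSON marshalers first and falls back to fmt.Sprintf("%+v"). A self-contained sketch of that fallback idea for ordinary Go values (the proto special-casing is omitted; toJSON here is illustrative, not the internal helper):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // toJSON marshals with indentation and, if that fails, falls back to a
    // fmt.Sprintf("%+v") rendering, mirroring the strategy above.
    func toJSON(e any) string {
        ret, err := json.MarshalIndent(e, "", "  ")
        if err != nil {
            return fmt.Sprintf("%+v", e)
        }
        return string(ret)
    }

    func main() {
        fmt.Println(toJSON(map[string]any{"service": "echo", "timeout": "1s"}))
    }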

View File

@ -92,7 +92,7 @@ type ClientStream interface {
// calling RecvMsg on the same stream at the same time, but it is not safe // calling RecvMsg on the same stream at the same time, but it is not safe
// to call SendMsg on the same stream in different goroutines. It is also // to call SendMsg on the same stream in different goroutines. It is also
// not safe to call CloseSend concurrently with SendMsg. // not safe to call CloseSend concurrently with SendMsg.
SendMsg(m interface{}) error SendMsg(m any) error
// RecvMsg blocks until it receives a message into m or the stream is // RecvMsg blocks until it receives a message into m or the stream is
// done. It returns io.EOF when the stream completes successfully. On // done. It returns io.EOF when the stream completes successfully. On
// any other error, the stream is aborted and the error contains the RPC // any other error, the stream is aborted and the error contains the RPC
@ -101,7 +101,7 @@ type ClientStream interface {
// It is safe to have a goroutine calling SendMsg and another goroutine // It is safe to have a goroutine calling SendMsg and another goroutine
// calling RecvMsg on the same stream at the same time, but it is not // calling RecvMsg on the same stream at the same time, but it is not
// safe to call RecvMsg on the same stream in different goroutines. // safe to call RecvMsg on the same stream in different goroutines.
RecvMsg(m interface{}) error RecvMsg(m any) error
} }
// ClientInterceptor is an interceptor for gRPC client streams. // ClientInterceptor is an interceptor for gRPC client streams.

View File

@ -62,7 +62,8 @@ const (
defaultPort = "443" defaultPort = "443"
defaultDNSSvrPort = "53" defaultDNSSvrPort = "53"
golang = "GO" golang = "GO"
// txtPrefix is the prefix string to be prepended to the host name for txt
// record lookup.
txtPrefix = "_grpc_config." txtPrefix = "_grpc_config."
// In DNS, service config is encoded in a TXT record via the mechanism // In DNS, service config is encoded in a TXT record via the mechanism
// described in RFC-1464 using the attribute name grpc_config. // described in RFC-1464 using the attribute name grpc_config.
@ -86,14 +87,14 @@ var (
minDNSResRate = 30 * time.Second minDNSResRate = 30 * time.Second
) )
var customAuthorityDialler = func(authority string) func(ctx context.Context, network, address string) (net.Conn, error) { var addressDialer = func(address string) func(context.Context, string, string) (net.Conn, error) {
return func(ctx context.Context, network, address string) (net.Conn, error) { return func(ctx context.Context, network, _ string) (net.Conn, error) {
var dialer net.Dialer var dialer net.Dialer
return dialer.DialContext(ctx, network, authority) return dialer.DialContext(ctx, network, address)
} }
} }
var customAuthorityResolver = func(authority string) (netResolver, error) { var newNetResolver = func(authority string) (netResolver, error) {
host, port, err := parseTarget(authority, defaultDNSSvrPort) host, port, err := parseTarget(authority, defaultDNSSvrPort)
if err != nil { if err != nil {
return nil, err return nil, err
@ -103,7 +104,7 @@ var customAuthorityResolver = func(authority string) (netResolver, error) {
return &net.Resolver{ return &net.Resolver{
PreferGo: true, PreferGo: true,
Dial: customAuthorityDialler(authorityWithPort), Dial: addressDialer(authorityWithPort),
}, nil }, nil
} }
@ -114,7 +115,8 @@ func NewBuilder() resolver.Builder {
type dnsBuilder struct{} type dnsBuilder struct{}
// Build creates and starts a DNS resolver that watches the name resolution of the target. // Build creates and starts a DNS resolver that watches the name resolution of
// the target.
func (b *dnsBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error) { func (b *dnsBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error) {
host, port, err := parseTarget(target.Endpoint(), defaultPort) host, port, err := parseTarget(target.Endpoint(), defaultPort)
if err != nil { if err != nil {
@ -143,7 +145,7 @@ func (b *dnsBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts
if target.URL.Host == "" { if target.URL.Host == "" {
d.resolver = defaultResolver d.resolver = defaultResolver
} else { } else {
d.resolver, err = customAuthorityResolver(target.URL.Host) d.resolver, err = newNetResolver(target.URL.Host)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -180,19 +182,22 @@ type dnsResolver struct {
ctx context.Context ctx context.Context
cancel context.CancelFunc cancel context.CancelFunc
cc resolver.ClientConn cc resolver.ClientConn
// rn channel is used by ResolveNow() to force an immediate resolution of the target. // rn channel is used by ResolveNow() to force an immediate resolution of the
// target.
rn chan struct{} rn chan struct{}
// wg is used to enforce that Close() returns only after the watcher()
// goroutine has finished. Otherwise, a data race is possible. [Race Example]
// In dns_resolver_test we replace the real lookup functions with mocked ones
// to facilitate testing. If Close() doesn't wait for the watcher() goroutine
// to finish, the race detector sometimes warns that the lookup (READ of the
// lookup function pointers) inside the watcher() goroutine races with
// replaceNetFunc (WRITE of the lookup function pointers).
wg sync.WaitGroup wg sync.WaitGroup
disableServiceConfig bool disableServiceConfig bool
} }
// ResolveNow invokes an immediate resolution of the target that this
// dnsResolver watches.
func (d *dnsResolver) ResolveNow(resolver.ResolveNowOptions) { func (d *dnsResolver) ResolveNow(resolver.ResolveNowOptions) {
select { select {
case d.rn <- struct{}{}: case d.rn <- struct{}{}:
@ -220,8 +225,8 @@ func (d *dnsResolver) watcher() {
var timer *time.Timer var timer *time.Timer
if err == nil { if err == nil {
// Success resolving, wait for the next ResolveNow. However, also wait 30 seconds at the very least // Success resolving, wait for the next ResolveNow. However, also wait 30
// to prevent constantly re-resolving. // seconds at the very least to prevent constantly re-resolving.
backoffIndex = 1 backoffIndex = 1
timer = newTimerDNSResRate(minDNSResRate) timer = newTimerDNSResRate(minDNSResRate)
select { select {
@ -231,7 +236,8 @@ func (d *dnsResolver) watcher() {
case <-d.rn: case <-d.rn:
} }
} else { } else {
// Poll on an error found in DNS Resolver or an error received from ClientConn. // Poll on an error found in DNS Resolver or an error received from
// ClientConn.
timer = newTimer(backoff.DefaultExponential.Backoff(backoffIndex)) timer = newTimer(backoff.DefaultExponential.Backoff(backoffIndex))
backoffIndex++ backoffIndex++
} }
@ -278,7 +284,8 @@ func (d *dnsResolver) lookupSRV() ([]resolver.Address, error) {
} }
func handleDNSError(err error, lookupType string) error { func handleDNSError(err error, lookupType string) error {
if dnsErr, ok := err.(*net.DNSError); ok && !dnsErr.IsTimeout && !dnsErr.IsTemporary { dnsErr, ok := err.(*net.DNSError)
if ok && !dnsErr.IsTimeout && !dnsErr.IsTemporary {
// Timeouts and temporary errors should be communicated to gRPC to // Timeouts and temporary errors should be communicated to gRPC to
// attempt another DNS query (with backoff). Other errors should be // attempt another DNS query (with backoff). Other errors should be
// suppressed (they may represent the absence of a TXT record). // suppressed (they may represent the absence of a TXT record).
@ -307,10 +314,12 @@ func (d *dnsResolver) lookupTXT() *serviceconfig.ParseResult {
res += s res += s
} }
// TXT record must have "grpc_config=" attribute in order to be used as service config. // TXT record must have "grpc_config=" attribute in order to be used as
// service config.
if !strings.HasPrefix(res, txtAttribute) { if !strings.HasPrefix(res, txtAttribute) {
logger.Warningf("dns: TXT record %v missing %v attribute", res, txtAttribute) logger.Warningf("dns: TXT record %v missing %v attribute", res, txtAttribute)
// This is not an error; it is the equivalent of not having a service config. // This is not an error; it is the equivalent of not having a service
// config.
return nil return nil
} }
sc := canaryingSC(strings.TrimPrefix(res, txtAttribute)) sc := canaryingSC(strings.TrimPrefix(res, txtAttribute))
@ -352,9 +361,10 @@ func (d *dnsResolver) lookup() (*resolver.State, error) {
return &state, nil return &state, nil
} }
// formatIP returns ok = false if addr is not a valid textual representation of
// an IP address. If addr is an IPv4 address, return the addr and ok = true.
// If addr is an IPv6 address, return the addr enclosed in square brackets and
// ok = true.
func formatIP(addr string) (addrIP string, ok bool) { func formatIP(addr string) (addrIP string, ok bool) {
ip := net.ParseIP(addr) ip := net.ParseIP(addr)
if ip == nil { if ip == nil {
@ -366,10 +376,10 @@ func formatIP(addr string) (addrIP string, ok bool) {
return "[" + addr + "]", true return "[" + addr + "]", true
} }
// parseTarget takes the user input target string and default port, returns
// formatted host and port info. If target doesn't specify a port, set the port
// to be the defaultPort. If target is in IPv6 format and host-name is enclosed
// in square brackets, brackets are stripped when setting the host.
// examples: // examples:
// target: "www.google.com" defaultPort: "443" returns host: "www.google.com", port: "443" // target: "www.google.com" defaultPort: "443" returns host: "www.google.com", port: "443"
// target: "ipv4-host:80" defaultPort: "443" returns host: "ipv4-host", port: "80" // target: "ipv4-host:80" defaultPort: "443" returns host: "ipv4-host", port: "80"
@ -385,12 +395,14 @@ func parseTarget(target, defaultPort string) (host, port string, err error) {
} }
if host, port, err = net.SplitHostPort(target); err == nil { if host, port, err = net.SplitHostPort(target); err == nil {
if port == "" { if port == "" {
// If the port field is empty (target ends with colon), e.g. "[::1]:", this is an error. // If the port field is empty (target ends with colon), e.g. "[::1]:",
// this is an error.
return "", "", errEndsWithColon return "", "", errEndsWithColon
} }
// target has port, i.e ipv4-host:port, [ipv6-host]:port, host-name:port // target has port, i.e ipv4-host:port, [ipv6-host]:port, host-name:port
if host == "" { if host == "" {
// Keep consistent with net.Dial(): If the host is empty, as in ":80", the local system is assumed. // Keep consistent with net.Dial(): If the host is empty, as in ":80",
// the local system is assumed.
host = "localhost" host = "localhost"
} }
return host, port, nil return host, port, nil
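The parseTarget rules documented above (default port for bare hosts and IPs, error on a trailing colon, empty host meaning localhost) can be expressed compactly with net.SplitHostPort. splitTarget below is an illustrative re-implementation of those semantics, not the resolver's own function:

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    func splitTarget(target, defaultPort string) (host, port string, err error) {
        if target == "" {
            return "", "", errors.New("missing address")
        }
        if ip := net.ParseIP(target); ip != nil {
            // Bare IP without a port, e.g. "2001:db8::1" or "10.0.0.1".
            return target, defaultPort, nil
        }
        if host, port, err = net.SplitHostPort(target); err == nil {
            if port == "" {
                // Target ends with a colon, e.g. "[::1]:".
                return "", "", errors.New(`target ends with ":" but has no port`)
            }
            if host == "" {
                host = "localhost" // consistent with net.Dial(":80")
            }
            return host, port, nil
        }
        if host, port, err = net.SplitHostPort(target + ":" + defaultPort); err == nil {
            // Plain host name such as "www.google.com".
            return host, port, nil
        }
        return "", "", fmt.Errorf("invalid target address %q", target)
    }

    func main() {
        fmt.Println(splitTarget("[::1]:8080", "443"))    // ::1 8080 <nil>
        fmt.Println(splitTarget("www.google.com", "443")) // www.google.com 443 <nil>
    }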

View File

@ -49,7 +49,7 @@ func New(c codes.Code, msg string) *Status {
} }
// Newf returns New(c, fmt.Sprintf(format, a...)). // Newf returns New(c, fmt.Sprintf(format, a...)).
func Newf(c codes.Code, format string, a ...interface{}) *Status { func Newf(c codes.Code, format string, a ...any) *Status {
return New(c, fmt.Sprintf(format, a...)) return New(c, fmt.Sprintf(format, a...))
} }
@ -64,7 +64,7 @@ func Err(c codes.Code, msg string) error {
} }
// Errorf returns Error(c, fmt.Sprintf(format, a...)). // Errorf returns Error(c, fmt.Sprintf(format, a...)).
func Errorf(c codes.Code, format string, a ...interface{}) error { func Errorf(c codes.Code, format string, a ...any) error {
return Err(c, fmt.Sprintf(format, a...)) return Err(c, fmt.Sprintf(format, a...))
} }
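Newf and Errorf are thin wrappers over fmt.Sprintf plus the status constructors. A short usage sketch with the public status and codes packages, showing how the code and message are recovered on the other side:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func findUser(id string) error {
        // Errorf builds a gRPC error carrying a code and a formatted message.
        return status.Errorf(codes.NotFound, "user %q not found", id)
    }

    func main() {
        err := findUser("alice")

        // FromError recovers the *Status on the receiving side.
        if st, ok := status.FromError(err); ok {
            fmt.Println(st.Code(), st.Message()) // NotFound user "alice" not found
        }
    }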
@ -120,11 +120,11 @@ func (s *Status) WithDetails(details ...proto.Message) (*Status, error) {
// Details returns a slice of details messages attached to the status. // Details returns a slice of details messages attached to the status.
// If a detail cannot be decoded, the error is returned in place of the detail. // If a detail cannot be decoded, the error is returned in place of the detail.
func (s *Status) Details() []interface{} { func (s *Status) Details() []any {
if s == nil || s.s == nil { if s == nil || s.s == nil {
return nil return nil
} }
details := make([]interface{}, 0, len(s.s.Details)) details := make([]any, 0, len(s.s.Details))
for _, any := range s.s.Details { for _, any := range s.s.Details {
detail := &ptypes.DynamicAny{} detail := &ptypes.DynamicAny{}
if err := ptypes.UnmarshalAny(any, detail); err != nil { if err := ptypes.UnmarshalAny(any, detail); err != nil {

View File

@ -40,7 +40,7 @@ var updateHeaderTblSize = func(e *hpack.Encoder, v uint32) {
} }
type itemNode struct { type itemNode struct {
it interface{} it any
next *itemNode next *itemNode
} }
@ -49,7 +49,7 @@ type itemList struct {
tail *itemNode tail *itemNode
} }
func (il *itemList) enqueue(i interface{}) { func (il *itemList) enqueue(i any) {
n := &itemNode{it: i} n := &itemNode{it: i}
if il.tail == nil { if il.tail == nil {
il.head, il.tail = n, n il.head, il.tail = n, n
@ -61,11 +61,11 @@ func (il *itemList) enqueue(i interface{}) {
// peek returns the first item in the list without removing it from the // peek returns the first item in the list without removing it from the
// list. // list.
func (il *itemList) peek() interface{} { func (il *itemList) peek() any {
return il.head.it return il.head.it
} }
func (il *itemList) dequeue() interface{} { func (il *itemList) dequeue() any {
if il.head == nil { if il.head == nil {
return nil return nil
} }
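itemList is a plain singly linked FIFO used by the control buffer; enqueue appends at the tail and dequeue pops the head, so control frames are handled in order. A self-contained sketch of the same structure (illustrative names, not the transport's types):

    package main

    import "fmt"

    type node struct {
        it   any
        next *node
    }

    type queue struct {
        head, tail *node
    }

    // enqueue appends an item at the tail.
    func (q *queue) enqueue(i any) {
        n := &node{it: i}
        if q.tail == nil {
            q.head, q.tail = n, n
            return
        }
        q.tail.next = n
        q.tail = n
    }

    // dequeue pops the head, returning nil when the queue is empty.
    func (q *queue) dequeue() any {
        if q.head == nil {
            return nil
        }
        i := q.head.it
        q.head = q.head.next
        if q.head == nil {
            q.tail = nil
        }
        return i
    }

    func main() {
        var q queue
        q.enqueue("settings")
        q.enqueue("headers")
        fmt.Println(q.dequeue(), q.dequeue(), q.dequeue()) // settings headers <nil>
    }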
@ -336,7 +336,7 @@ func (c *controlBuffer) put(it cbItem) error {
return err return err
} }
func (c *controlBuffer) executeAndPut(f func(it interface{}) bool, it cbItem) (bool, error) { func (c *controlBuffer) executeAndPut(f func(it any) bool, it cbItem) (bool, error) {
var wakeUp bool var wakeUp bool
c.mu.Lock() c.mu.Lock()
if c.err != nil { if c.err != nil {
@ -373,7 +373,7 @@ func (c *controlBuffer) executeAndPut(f func(it interface{}) bool, it cbItem) (b
} }
// Note argument f should never be nil. // Note argument f should never be nil.
func (c *controlBuffer) execute(f func(it interface{}) bool, it interface{}) (bool, error) { func (c *controlBuffer) execute(f func(it any) bool, it any) (bool, error) {
c.mu.Lock() c.mu.Lock()
if c.err != nil { if c.err != nil {
c.mu.Unlock() c.mu.Unlock()
@ -387,7 +387,7 @@ func (c *controlBuffer) execute(f func(it interface{}) bool, it interface{}) (bo
return true, nil return true, nil
} }
func (c *controlBuffer) get(block bool) (interface{}, error) { func (c *controlBuffer) get(block bool) (any, error) {
for { for {
c.mu.Lock() c.mu.Lock()
if c.err != nil { if c.err != nil {
@ -830,7 +830,7 @@ func (l *loopyWriter) goAwayHandler(g *goAway) error {
return nil return nil
} }
func (l *loopyWriter) handle(i interface{}) error { func (l *loopyWriter) handle(i any) error {
switch i := i.(type) { switch i := i.(type) {
case *incomingWindowUpdate: case *incomingWindowUpdate:
l.incomingWindowUpdateHandler(i) l.incomingWindowUpdateHandler(i)
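executeAndPut runs the caller's predicate and the enqueue while holding the control buffer's lock, which is what lets callers (for example the stream-quota and header-list-size checks in NewStream, in the next file) keep their check atomic with queuing the item. A stripped-down, standalone sketch of that check-then-enqueue contract; the type and names below are mine, not the transport's internals:

    package main

    import (
        "fmt"
        "sync"
    )

    // cbuf illustrates the "check, then enqueue, under one lock" contract
    // that controlBuffer.executeAndPut offers its callers.
    type cbuf struct {
        mu    sync.Mutex
        items []any
    }

    // executeAndPut runs f while holding the lock and appends it only if f
    // returns true, so the check and the enqueue form a single atomic step.
    func (c *cbuf) executeAndPut(f func(it any) bool, it any) bool {
        c.mu.Lock()
        defer c.mu.Unlock()
        if f != nil && !f(it) {
            return false
        }
        c.items = append(c.items, it)
        return true
    }

    func main() {
        quota := 1
        c := &cbuf{}
        check := func(any) bool { quota--; return quota >= 0 }
        fmt.Println(c.executeAndPut(check, "first"))  // true: quota consumed
        fmt.Println(c.executeAndPut(check, "second")) // false: quota exhausted
    }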

View File

@ -330,7 +330,7 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts
readerDone: make(chan struct{}), readerDone: make(chan struct{}),
writerDone: make(chan struct{}), writerDone: make(chan struct{}),
goAway: make(chan struct{}), goAway: make(chan struct{}),
framer: newFramer(conn, writeBufSize, readBufSize, maxHeaderListSize), framer: newFramer(conn, writeBufSize, readBufSize, opts.SharedWriteBuffer, maxHeaderListSize),
fc: &trInFlow{limit: uint32(icwz)}, fc: &trInFlow{limit: uint32(icwz)},
scheme: scheme, scheme: scheme,
activeStreams: make(map[uint32]*Stream), activeStreams: make(map[uint32]*Stream),
@ -762,7 +762,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream,
firstTry := true firstTry := true
var ch chan struct{} var ch chan struct{}
transportDrainRequired := false transportDrainRequired := false
checkForStreamQuota := func(it interface{}) bool { checkForStreamQuota := func(it any) bool {
if t.streamQuota <= 0 { // Can go negative if server decreases it. if t.streamQuota <= 0 { // Can go negative if server decreases it.
if firstTry { if firstTry {
t.waitingStreams++ t.waitingStreams++
@ -800,7 +800,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream,
return true return true
} }
var hdrListSizeErr error var hdrListSizeErr error
checkForHeaderListSize := func(it interface{}) bool { checkForHeaderListSize := func(it any) bool {
if t.maxSendHeaderListSize == nil { if t.maxSendHeaderListSize == nil {
return true return true
} }
@ -815,7 +815,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream,
return true return true
} }
for { for {
success, err := t.controlBuf.executeAndPut(func(it interface{}) bool { success, err := t.controlBuf.executeAndPut(func(it any) bool {
return checkForHeaderListSize(it) && checkForStreamQuota(it) return checkForHeaderListSize(it) && checkForStreamQuota(it)
}, hdr) }, hdr)
if err != nil { if err != nil {
@ -927,7 +927,7 @@ func (t *http2Client) closeStream(s *Stream, err error, rst bool, rstCode http2.
rst: rst, rst: rst,
rstCode: rstCode, rstCode: rstCode,
} }
addBackStreamQuota := func(interface{}) bool { addBackStreamQuota := func(any) bool {
t.streamQuota++ t.streamQuota++
if t.streamQuota > 0 && t.waitingStreams > 0 { if t.streamQuota > 0 && t.waitingStreams > 0 {
select { select {
@ -1080,7 +1080,7 @@ func (t *http2Client) updateWindow(s *Stream, n uint32) {
// for the transport and the stream based on the current bdp // for the transport and the stream based on the current bdp
// estimation. // estimation.
func (t *http2Client) updateFlowControl(n uint32) { func (t *http2Client) updateFlowControl(n uint32) {
updateIWS := func(interface{}) bool { updateIWS := func(any) bool {
t.initialWindowSize = int32(n) t.initialWindowSize = int32(n)
t.mu.Lock() t.mu.Lock()
for _, s := range t.activeStreams { for _, s := range t.activeStreams {
@ -1233,7 +1233,7 @@ func (t *http2Client) handleSettings(f *http2.SettingsFrame, isFirst bool) {
} }
updateFuncs = append(updateFuncs, updateStreamQuota) updateFuncs = append(updateFuncs, updateStreamQuota)
} }
t.controlBuf.executeAndPut(func(interface{}) bool { t.controlBuf.executeAndPut(func(any) bool {
for _, f := range updateFuncs { for _, f := range updateFuncs {
f() f()
} }
@ -1505,14 +1505,15 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
return return
} }
isHeader := false // For headers, set them in s.header and close headerChan. For trailers or
// trailers-only, closeStream will set the trailers and close headerChan as
// If headerChan hasn't been closed yet // needed.
if atomic.CompareAndSwapUint32(&s.headerChanClosed, 0, 1) { if !endStream {
s.headerValid = true // If headerChan hasn't been closed yet (expected, given we checked it
if !endStream { // above, but something else could have potentially closed the whole
// HEADERS frame block carries a Response-Headers. // stream).
isHeader = true if atomic.CompareAndSwapUint32(&s.headerChanClosed, 0, 1) {
s.headerValid = true
// These values can be set without any synchronization because // These values can be set without any synchronization because
// stream goroutine will read it only after seeing a closed // stream goroutine will read it only after seeing a closed
// headerChan which we'll close after setting this. // headerChan which we'll close after setting this.
@ -1520,15 +1521,12 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
if len(mdata) > 0 { if len(mdata) > 0 {
s.header = mdata s.header = mdata
} }
} else { close(s.headerChan)
// HEADERS frame block carries a Trailers-Only.
s.noHeaders = true
} }
close(s.headerChan)
} }
for _, sh := range t.statsHandlers { for _, sh := range t.statsHandlers {
if isHeader { if !endStream {
inHeader := &stats.InHeader{ inHeader := &stats.InHeader{
Client: true, Client: true,
WireLength: int(frame.Header().Length), WireLength: int(frame.Header().Length),
@ -1554,9 +1552,10 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
statusGen = status.New(rawStatusCode, grpcMessage) statusGen = status.New(rawStatusCode, grpcMessage)
} }
// if client received END_STREAM from server while stream was still active, send RST_STREAM // If client received END_STREAM from server while stream was still active,
rst := s.getState() == streamActive // send RST_STREAM.
t.closeStream(s, io.EOF, rst, http2.ErrCodeNo, statusGen, mdata, true) rstStream := s.getState() == streamActive
t.closeStream(s, io.EOF, rstStream, http2.ErrCodeNo, statusGen, mdata, true)
} }
// readServerPreface reads and handles the initial settings frame from the // readServerPreface reads and handles the initial settings frame from the

View File

@ -165,7 +165,7 @@ func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport,
if config.MaxHeaderListSize != nil { if config.MaxHeaderListSize != nil {
maxHeaderListSize = *config.MaxHeaderListSize maxHeaderListSize = *config.MaxHeaderListSize
} }
framer := newFramer(conn, writeBufSize, readBufSize, maxHeaderListSize) framer := newFramer(conn, writeBufSize, readBufSize, config.SharedWriteBuffer, maxHeaderListSize)
// Send initial settings as connection preface to client. // Send initial settings as connection preface to client.
isettings := []http2.Setting{{ isettings := []http2.Setting{{
ID: http2.SettingMaxFrameSize, ID: http2.SettingMaxFrameSize,
@ -233,7 +233,7 @@ func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport,
kp.Timeout = defaultServerKeepaliveTimeout kp.Timeout = defaultServerKeepaliveTimeout
} }
if kp.Time != infinity { if kp.Time != infinity {
if err = syscall.SetTCPUserTimeout(conn, kp.Timeout); err != nil { if err = syscall.SetTCPUserTimeout(rawConn, kp.Timeout); err != nil {
return nil, connectionErrorf(false, err, "transport: failed to set TCP_USER_TIMEOUT: %v", err) return nil, connectionErrorf(false, err, "transport: failed to set TCP_USER_TIMEOUT: %v", err)
} }
} }
@ -850,7 +850,7 @@ func (t *http2Server) handleSettings(f *http2.SettingsFrame) {
} }
return nil return nil
}) })
t.controlBuf.executeAndPut(func(interface{}) bool { t.controlBuf.executeAndPut(func(any) bool {
for _, f := range updateFuncs { for _, f := range updateFuncs {
f() f()
} }
@ -934,7 +934,7 @@ func appendHeaderFieldsFromMD(headerFields []hpack.HeaderField, md metadata.MD)
return headerFields return headerFields
} }
func (t *http2Server) checkForHeaderListSize(it interface{}) bool { func (t *http2Server) checkForHeaderListSize(it any) bool {
if t.maxSendHeaderListSize == nil { if t.maxSendHeaderListSize == nil {
return true return true
} }

View File

@ -30,6 +30,7 @@ import (
"net/url" "net/url"
"strconv" "strconv"
"strings" "strings"
"sync"
"time" "time"
"unicode/utf8" "unicode/utf8"
@ -309,6 +310,7 @@ func decodeGrpcMessageUnchecked(msg string) string {
} }
type bufWriter struct { type bufWriter struct {
pool *sync.Pool
buf []byte buf []byte
offset int offset int
batchSize int batchSize int
@ -316,12 +318,17 @@ type bufWriter struct {
err error err error
} }
func newBufWriter(conn net.Conn, batchSize int) *bufWriter { func newBufWriter(conn net.Conn, batchSize int, pool *sync.Pool) *bufWriter {
return &bufWriter{ w := &bufWriter{
buf: make([]byte, batchSize*2),
batchSize: batchSize, batchSize: batchSize,
conn: conn, conn: conn,
pool: pool,
} }
// this indicates that we should use non shared buf
if pool == nil {
w.buf = make([]byte, batchSize)
}
return w
} }
func (w *bufWriter) Write(b []byte) (n int, err error) { func (w *bufWriter) Write(b []byte) (n int, err error) {
@ -332,19 +339,34 @@ func (w *bufWriter) Write(b []byte) (n int, err error) {
n, err = w.conn.Write(b) n, err = w.conn.Write(b)
return n, toIOError(err) return n, toIOError(err)
} }
if w.buf == nil {
b := w.pool.Get().(*[]byte)
w.buf = *b
}
for len(b) > 0 { for len(b) > 0 {
nn := copy(w.buf[w.offset:], b) nn := copy(w.buf[w.offset:], b)
b = b[nn:] b = b[nn:]
w.offset += nn w.offset += nn
n += nn n += nn
if w.offset >= w.batchSize { if w.offset >= w.batchSize {
err = w.Flush() err = w.flushKeepBuffer()
} }
} }
return n, err return n, err
} }
func (w *bufWriter) Flush() error { func (w *bufWriter) Flush() error {
err := w.flushKeepBuffer()
// Only release the buffer if we are in a "shared" mode
if w.buf != nil && w.pool != nil {
b := w.buf
w.pool.Put(&b)
w.buf = nil
}
return err
}
func (w *bufWriter) flushKeepBuffer() error {
if w.err != nil { if w.err != nil {
return w.err return w.err
} }
@ -381,7 +403,10 @@ type framer struct {
fr *http2.Framer fr *http2.Framer
} }
func newFramer(conn net.Conn, writeBufferSize, readBufferSize int, maxHeaderListSize uint32) *framer { var writeBufferPoolMap map[int]*sync.Pool = make(map[int]*sync.Pool)
var writeBufferMutex sync.Mutex
func newFramer(conn net.Conn, writeBufferSize, readBufferSize int, sharedWriteBuffer bool, maxHeaderListSize uint32) *framer {
if writeBufferSize < 0 { if writeBufferSize < 0 {
writeBufferSize = 0 writeBufferSize = 0
} }
@ -389,7 +414,11 @@ func newFramer(conn net.Conn, writeBufferSize, readBufferSize int, maxHeaderList
if readBufferSize > 0 { if readBufferSize > 0 {
r = bufio.NewReaderSize(r, readBufferSize) r = bufio.NewReaderSize(r, readBufferSize)
} }
w := newBufWriter(conn, writeBufferSize) var pool *sync.Pool
if sharedWriteBuffer {
pool = getWriteBufferPool(writeBufferSize)
}
w := newBufWriter(conn, writeBufferSize, pool)
f := &framer{ f := &framer{
writer: w, writer: w,
fr: http2.NewFramer(w, r), fr: http2.NewFramer(w, r),
@ -403,6 +432,24 @@ func newFramer(conn net.Conn, writeBufferSize, readBufferSize int, maxHeaderList
return f return f
} }
func getWriteBufferPool(writeBufferSize int) *sync.Pool {
writeBufferMutex.Lock()
defer writeBufferMutex.Unlock()
size := writeBufferSize * 2
pool, ok := writeBufferPoolMap[size]
if ok {
return pool
}
pool = &sync.Pool{
New: func() any {
b := make([]byte, size)
return &b
},
}
writeBufferPoolMap[size] = pool
return pool
}
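newFramer now takes a sharedWriteBuffer flag and, when it is set, hands the bufWriter a sync.Pool keyed by twice the configured write-buffer size, so a connection only holds a buffer between its first Write and the next Flush. A hedged sketch of enabling this from the client side; I am assuming the dial option is named WithSharedWriteBuffer in this release, mirroring the SharedWriteBuffer server option added further down in server.go:

    package main

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // Opt this connection into the shared write-buffer pool. Buffers are
        // pooled per write-buffer size, taken on the first Write and returned
        // to the pool on Flush.
        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithWriteBufferSize(64*1024),
            // Assumed name for the dial option behind ConnectOptions.SharedWriteBuffer.
            grpc.WithSharedWriteBuffer(true),
        )
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()
    }

Keying the pool by size is what allows connections created with different WriteBufferSize settings to coexist without ever exchanging buffers.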
// parseDialTarget returns the network and address to pass to dialer. // parseDialTarget returns the network and address to pass to dialer.
func parseDialTarget(target string) (string, string) { func parseDialTarget(target string) (string, string) {
net := "tcp" net := "tcp"

View File

@ -43,10 +43,6 @@ import (
"google.golang.org/grpc/tap" "google.golang.org/grpc/tap"
) )
// ErrNoHeaders is used as a signal that a trailers only response was received,
// and is not a real error.
var ErrNoHeaders = errors.New("stream has no headers")
const logLevel = 2 const logLevel = 2
type bufferPool struct { type bufferPool struct {
@ -56,7 +52,7 @@ type bufferPool struct {
func newBufferPool() *bufferPool { func newBufferPool() *bufferPool {
return &bufferPool{ return &bufferPool{
pool: sync.Pool{ pool: sync.Pool{
New: func() interface{} { New: func() any {
return new(bytes.Buffer) return new(bytes.Buffer)
}, },
}, },
@ -390,14 +386,10 @@ func (s *Stream) Header() (metadata.MD, error) {
} }
s.waitOnHeader() s.waitOnHeader()
if !s.headerValid { if !s.headerValid || s.noHeaders {
return nil, s.status.Err() return nil, s.status.Err()
} }
if s.noHeaders {
return nil, ErrNoHeaders
}
return s.header.Copy(), nil return s.header.Copy(), nil
} }
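With transport.ErrNoHeaders gone, Stream.Header treats a trailers-only response like any other case where headers are not valid and returns the stream's status error. A sketch of what that looks like from a client stream; the target, service and method names are placeholders:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Placeholder stream descriptor; generated stubs normally provide this.
        desc := &grpc.StreamDesc{StreamName: "Watch", ServerStreams: true}
        cs, err := conn.NewStream(ctx, desc, "/example.Service/Watch")
        if err != nil {
            log.Fatalf("new stream: %v", err)
        }

        // A trailers-only response now surfaces the RPC's status error here
        // (with nil metadata) instead of the removed transport.ErrNoHeaders.
        md, err := cs.Header()
        if err != nil {
            log.Printf("headers unavailable, RPC failed: %v", err)
            return
        }
        log.Printf("headers: %v", md)
    }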
@ -559,6 +551,7 @@ type ServerConfig struct {
InitialConnWindowSize int32 InitialConnWindowSize int32
WriteBufferSize int WriteBufferSize int
ReadBufferSize int ReadBufferSize int
SharedWriteBuffer bool
ChannelzParentID *channelz.Identifier ChannelzParentID *channelz.Identifier
MaxHeaderListSize *uint32 MaxHeaderListSize *uint32
HeaderTableSize *uint32 HeaderTableSize *uint32
@ -592,6 +585,8 @@ type ConnectOptions struct {
WriteBufferSize int WriteBufferSize int
// ReadBufferSize sets the size of read buffer, which in turn determines how much data can be read at most for one read syscall. // ReadBufferSize sets the size of read buffer, which in turn determines how much data can be read at most for one read syscall.
ReadBufferSize int ReadBufferSize int
// SharedWriteBuffer indicates whether connections should reuse write buffer
SharedWriteBuffer bool
// ChannelzParentID sets the addrConn id which initiate the creation of this client transport. // ChannelzParentID sets the addrConn id which initiate the creation of this client transport.
ChannelzParentID *channelz.Identifier ChannelzParentID *channelz.Identifier
// MaxHeaderListSize sets the max (uncompressed) size of header list that is prepared to be received. // MaxHeaderListSize sets the max (uncompressed) size of header list that is prepared to be received.
@ -736,7 +731,7 @@ type ServerTransport interface {
} }
// connectionErrorf creates an ConnectionError with the specified error description. // connectionErrorf creates an ConnectionError with the specified error description.
func connectionErrorf(temp bool, e error, format string, a ...interface{}) ConnectionError { func connectionErrorf(temp bool, e error, format string, a ...any) ConnectionError {
return ConnectionError{ return ConnectionError{
Desc: fmt.Sprintf(format, a...), Desc: fmt.Sprintf(format, a...),
temp: temp, temp: temp,

View File

@ -28,21 +28,26 @@ import (
"google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/channelz"
istatus "google.golang.org/grpc/internal/status" istatus "google.golang.org/grpc/internal/status"
"google.golang.org/grpc/internal/transport" "google.golang.org/grpc/internal/transport"
"google.golang.org/grpc/stats"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
) )
// pickerWrapper is a wrapper of balancer.Picker. It blocks on certain pick // pickerWrapper is a wrapper of balancer.Picker. It blocks on certain pick
// actions and unblock when there's a picker update. // actions and unblock when there's a picker update.
type pickerWrapper struct { type pickerWrapper struct {
mu sync.Mutex mu sync.Mutex
done bool done bool
idle bool idle bool
blockingCh chan struct{} blockingCh chan struct{}
picker balancer.Picker picker balancer.Picker
statsHandlers []stats.Handler // to record blocking picker calls
} }
func newPickerWrapper() *pickerWrapper { func newPickerWrapper(statsHandlers []stats.Handler) *pickerWrapper {
return &pickerWrapper{blockingCh: make(chan struct{})} return &pickerWrapper{
blockingCh: make(chan struct{}),
statsHandlers: statsHandlers,
}
} }
// updatePicker is called by UpdateBalancerState. It unblocks all blocked pick. // updatePicker is called by UpdateBalancerState. It unblocks all blocked pick.
@ -95,6 +100,7 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.
var ch chan struct{} var ch chan struct{}
var lastPickErr error var lastPickErr error
for { for {
pw.mu.Lock() pw.mu.Lock()
if pw.done { if pw.done {
@ -129,6 +135,20 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.
continue continue
} }
// If the channel is set, it means that the pick call had to wait for a
// new picker at some point. Either it's the first iteration and this
// function received the first picker, or a picker errored with
// ErrNoSubConnAvailable or errored with failfast set to false, which
// will trigger a continue to the next iteration. In the first case this
// conditional will hit if this call had to block (the channel is set).
// In the second case, the only way it will get to this conditional is
// if there is a new picker.
if ch != nil {
for _, sh := range pw.statsHandlers {
sh.HandleRPC(ctx, &stats.PickerUpdated{})
}
}
ch = pw.blockingCh ch = pw.blockingCh
p := pw.picker p := pw.picker
pw.mu.Unlock() pw.mu.Unlock()
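pickerWrapper.pick now notifies registered stats handlers with stats.PickerUpdated whenever an RPC's pick had to block until a new picker arrived. A minimal handler that counts those events; the handler type is mine, only the event and the WithStatsHandler hook are gRPC's:

    package main

    import (
        "context"
        "log"
        "sync/atomic"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/stats"
    )

    // blockedPicks counts RPCs whose pick had to wait for a picker update.
    type blockedPicks struct{ n atomic.Int64 }

    func (b *blockedPicks) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context {
        return ctx
    }

    func (b *blockedPicks) HandleRPC(_ context.Context, s stats.RPCStats) {
        if _, ok := s.(*stats.PickerUpdated); ok { // event emitted by pickerWrapper.pick
            b.n.Add(1)
        }
    }

    func (b *blockedPicks) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
        return ctx
    }

    func (b *blockedPicks) HandleConn(context.Context, stats.ConnStats) {}

    func main() {
        h := &blockedPicks{}
        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithStatsHandler(h))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()
        log.Printf("picks that blocked so far: %d", h.n.Load())
    }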

View File

@ -26,12 +26,18 @@ import (
"google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer"
"google.golang.org/grpc/connectivity" "google.golang.org/grpc/connectivity"
"google.golang.org/grpc/internal/envconfig" "google.golang.org/grpc/internal/envconfig"
internalgrpclog "google.golang.org/grpc/internal/grpclog"
"google.golang.org/grpc/internal/grpcrand" "google.golang.org/grpc/internal/grpcrand"
"google.golang.org/grpc/internal/pretty"
"google.golang.org/grpc/resolver"
"google.golang.org/grpc/serviceconfig" "google.golang.org/grpc/serviceconfig"
) )
// PickFirstBalancerName is the name of the pick_first balancer. const (
const PickFirstBalancerName = "pick_first" // PickFirstBalancerName is the name of the pick_first balancer.
PickFirstBalancerName = "pick_first"
logPrefix = "[pick-first-lb %p] "
)
func newPickfirstBuilder() balancer.Builder { func newPickfirstBuilder() balancer.Builder {
return &pickfirstBuilder{} return &pickfirstBuilder{}
@ -40,7 +46,9 @@ func newPickfirstBuilder() balancer.Builder {
type pickfirstBuilder struct{} type pickfirstBuilder struct{}
func (*pickfirstBuilder) Build(cc balancer.ClientConn, opt balancer.BuildOptions) balancer.Balancer { func (*pickfirstBuilder) Build(cc balancer.ClientConn, opt balancer.BuildOptions) balancer.Balancer {
return &pickfirstBalancer{cc: cc} b := &pickfirstBalancer{cc: cc}
b.logger = internalgrpclog.NewPrefixLogger(logger, fmt.Sprintf(logPrefix, b))
return b
} }
func (*pickfirstBuilder) Name() string { func (*pickfirstBuilder) Name() string {
@ -57,23 +65,36 @@ type pfConfig struct {
} }
func (*pickfirstBuilder) ParseConfig(js json.RawMessage) (serviceconfig.LoadBalancingConfig, error) { func (*pickfirstBuilder) ParseConfig(js json.RawMessage) (serviceconfig.LoadBalancingConfig, error) {
cfg := &pfConfig{} if !envconfig.PickFirstLBConfig {
if err := json.Unmarshal(js, cfg); err != nil { // Prior to supporting loadbalancing configuration, the pick_first LB
// policy did not implement the balancer.ConfigParser interface. This
// meant that if a non-empty configuration was passed to it, the service
// config unmarshaling code would throw a warning log, but would
// continue using the pick_first LB policy. The code below ensures the
// same behavior is retained if the env var is not set.
if string(js) != "{}" {
logger.Warningf("Ignoring non-empty balancer configuration %q for the pick_first LB policy", string(js))
}
return nil, nil
}
var cfg pfConfig
if err := json.Unmarshal(js, &cfg); err != nil {
return nil, fmt.Errorf("pickfirst: unable to unmarshal LB policy config: %s, error: %v", string(js), err) return nil, fmt.Errorf("pickfirst: unable to unmarshal LB policy config: %s, error: %v", string(js), err)
} }
return cfg, nil return cfg, nil
} }
type pickfirstBalancer struct { type pickfirstBalancer struct {
logger *internalgrpclog.PrefixLogger
state connectivity.State state connectivity.State
cc balancer.ClientConn cc balancer.ClientConn
subConn balancer.SubConn subConn balancer.SubConn
cfg *pfConfig
} }
func (b *pickfirstBalancer) ResolverError(err error) { func (b *pickfirstBalancer) ResolverError(err error) {
if logger.V(2) { if b.logger.V(2) {
logger.Infof("pickfirstBalancer: ResolverError called with error: %v", err) b.logger.Infof("Received error from the name resolver: %v", err)
} }
if b.subConn == nil { if b.subConn == nil {
b.state = connectivity.TransientFailure b.state = connectivity.TransientFailure
@ -96,35 +117,44 @@ func (b *pickfirstBalancer) UpdateClientConnState(state balancer.ClientConnState
// The resolver reported an empty address list. Treat it like an error by // The resolver reported an empty address list. Treat it like an error by
// calling b.ResolverError. // calling b.ResolverError.
if b.subConn != nil { if b.subConn != nil {
// Remove the old subConn. All addresses were removed, so it is no longer // Shut down the old subConn. All addresses were removed, so it is
// valid. // no longer valid.
b.cc.RemoveSubConn(b.subConn) b.subConn.Shutdown()
b.subConn = nil b.subConn = nil
} }
b.ResolverError(errors.New("produced zero addresses")) b.ResolverError(errors.New("produced zero addresses"))
return balancer.ErrBadResolverState return balancer.ErrBadResolverState
} }
if state.BalancerConfig != nil { // We don't have to guard this block with the env var because ParseConfig
cfg, ok := state.BalancerConfig.(*pfConfig) // already does so.
if !ok { cfg, ok := state.BalancerConfig.(pfConfig)
return fmt.Errorf("pickfirstBalancer: received nil or illegal BalancerConfig (type %T): %v", state.BalancerConfig, state.BalancerConfig) if state.BalancerConfig != nil && !ok {
} return fmt.Errorf("pickfirst: received illegal BalancerConfig (type %T): %v", state.BalancerConfig, state.BalancerConfig)
b.cfg = cfg
} }
if cfg.ShuffleAddressList {
if envconfig.PickFirstLBConfig && b.cfg != nil && b.cfg.ShuffleAddressList { addrs = append([]resolver.Address{}, addrs...)
grpcrand.Shuffle(len(addrs), func(i, j int) { addrs[i], addrs[j] = addrs[j], addrs[i] }) grpcrand.Shuffle(len(addrs), func(i, j int) { addrs[i], addrs[j] = addrs[j], addrs[i] })
} }
if b.logger.V(2) {
b.logger.Infof("Received new config %s, resolver state %s", pretty.ToJSON(cfg), pretty.ToJSON(state.ResolverState))
}
if b.subConn != nil { if b.subConn != nil {
b.cc.UpdateAddresses(b.subConn, addrs) b.cc.UpdateAddresses(b.subConn, addrs)
return nil return nil
} }
subConn, err := b.cc.NewSubConn(addrs, balancer.NewSubConnOptions{}) var subConn balancer.SubConn
subConn, err := b.cc.NewSubConn(addrs, balancer.NewSubConnOptions{
StateListener: func(state balancer.SubConnState) {
b.updateSubConnState(subConn, state)
},
})
if err != nil { if err != nil {
if logger.V(2) { if b.logger.V(2) {
logger.Errorf("pickfirstBalancer: failed to NewSubConn: %v", err) b.logger.Infof("Failed to create new SubConn: %v", err)
} }
b.state = connectivity.TransientFailure b.state = connectivity.TransientFailure
b.cc.UpdateState(balancer.State{ b.cc.UpdateState(balancer.State{
@ -143,13 +173,19 @@ func (b *pickfirstBalancer) UpdateClientConnState(state balancer.ClientConnState
return nil return nil
} }
// UpdateSubConnState is unused as a StateListener is always registered when
// creating SubConns.
func (b *pickfirstBalancer) UpdateSubConnState(subConn balancer.SubConn, state balancer.SubConnState) { func (b *pickfirstBalancer) UpdateSubConnState(subConn balancer.SubConn, state balancer.SubConnState) {
if logger.V(2) { b.logger.Errorf("UpdateSubConnState(%v, %+v) called unexpectedly", subConn, state)
logger.Infof("pickfirstBalancer: UpdateSubConnState: %p, %v", subConn, state) }
func (b *pickfirstBalancer) updateSubConnState(subConn balancer.SubConn, state balancer.SubConnState) {
if b.logger.V(2) {
b.logger.Infof("Received SubConn state update: %p, %+v", subConn, state)
} }
if b.subConn != subConn { if b.subConn != subConn {
if logger.V(2) { if b.logger.V(2) {
logger.Infof("pickfirstBalancer: ignored state change because subConn is not recognized") b.logger.Infof("Ignored state change because subConn is not recognized")
} }
return return
} }
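pick_first can now parse a load-balancing config and optionally shuffle the resolved address list, gated on envconfig.PickFirstLBConfig. A sketch of opting in through the default service config; the exact environment variable backing that flag (I believe GRPC_EXPERIMENTAL_PICKFIRST_LB_CONFIG) is my assumption, as is the target:

    package main

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // shuffleAddressList maps to the field parsed by
        // pickfirstBuilder.ParseConfig above; it is honored only when the
        // env flag behind envconfig.PickFirstLBConfig is enabled.
        const sc = `{
          "loadBalancingConfig": [
            { "pick_first": { "shuffleAddressList": true } }
          ]
        }`

        conn, err := grpc.Dial("dns:///example.internal:50051", // placeholder target
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithDefaultServiceConfig(sc),
        )
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()
    }

Note the compatibility behavior preserved above: with the flag off, a non-empty pick_first config is logged and ignored rather than rejected.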

View File

@ -37,7 +37,7 @@ type PreparedMsg struct {
} }
// Encode marshalls and compresses the message using the codec and compressor for the stream. // Encode marshalls and compresses the message using the codec and compressor for the stream.
func (p *PreparedMsg) Encode(s Stream, msg interface{}) error { func (p *PreparedMsg) Encode(s Stream, msg any) error {
ctx := s.Context() ctx := s.Context()
rpcInfo, ok := rpcInfoFromContext(ctx) rpcInfo, ok := rpcInfoFromContext(ctx)
if !ok { if !ok {

View File

@ -20,7 +20,7 @@ package resolver
type addressMapEntry struct { type addressMapEntry struct {
addr Address addr Address
value interface{} value any
} }
// AddressMap is a map of addresses to arbitrary values taking into account // AddressMap is a map of addresses to arbitrary values taking into account
@ -69,7 +69,7 @@ func (l addressMapEntryList) find(addr Address) int {
} }
// Get returns the value for the address in the map, if present. // Get returns the value for the address in the map, if present.
func (a *AddressMap) Get(addr Address) (value interface{}, ok bool) { func (a *AddressMap) Get(addr Address) (value any, ok bool) {
addrKey := toMapKey(&addr) addrKey := toMapKey(&addr)
entryList := a.m[addrKey] entryList := a.m[addrKey]
if entry := entryList.find(addr); entry != -1 { if entry := entryList.find(addr); entry != -1 {
@ -79,7 +79,7 @@ func (a *AddressMap) Get(addr Address) (value interface{}, ok bool) {
} }
// Set updates or adds the value to the address in the map. // Set updates or adds the value to the address in the map.
func (a *AddressMap) Set(addr Address, value interface{}) { func (a *AddressMap) Set(addr Address, value any) {
addrKey := toMapKey(&addr) addrKey := toMapKey(&addr)
entryList := a.m[addrKey] entryList := a.m[addrKey]
if entry := entryList.find(addr); entry != -1 { if entry := entryList.find(addr); entry != -1 {
@ -127,8 +127,8 @@ func (a *AddressMap) Keys() []Address {
} }
// Values returns a slice of all current map values. // Values returns a slice of all current map values.
func (a *AddressMap) Values() []interface{} { func (a *AddressMap) Values() []any {
ret := make([]interface{}, 0, a.Len()) ret := make([]any, 0, a.Len())
for _, entryList := range a.m { for _, entryList := range a.m {
for _, entry := range entryList { for _, entry := range entryList {
ret = append(ret, entry.value) ret = append(ret, entry.value)
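AddressMap's accessors are now typed with any. A short sketch of how an LB policy might use it to keep per-address state; note that the address string is not the whole key, since ServerName and Attributes also distinguish entries:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/resolver"
    )

    func main() {
        m := resolver.NewAddressMap()

        // Values are stored as `any` (previously interface{}); these two
        // addresses are distinct entries because their ServerName differs.
        m.Set(resolver.Address{Addr: "10.0.0.1:443"}, 3)
        m.Set(resolver.Address{Addr: "10.0.0.1:443", ServerName: "backend-a"}, 7)

        if v, ok := m.Get(resolver.Address{Addr: "10.0.0.1:443"}); ok {
            fmt.Println("weight:", v.(int))
        }
        fmt.Println("entries:", m.Len(), "values:", m.Values())
    }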

View File

@ -77,25 +77,6 @@ func GetDefaultScheme() string {
return defaultScheme return defaultScheme
} }
// AddressType indicates the address type returned by name resolution.
//
// Deprecated: use Attributes in Address instead.
type AddressType uint8
const (
// Backend indicates the address is for a backend server.
//
// Deprecated: use Attributes in Address instead.
Backend AddressType = iota
// GRPCLB indicates the address is for a grpclb load balancer.
//
// Deprecated: to select the GRPCLB load balancing policy, use a service
// config with a corresponding loadBalancingConfig. To supply balancer
// addresses to the GRPCLB load balancing policy, set State.Attributes
// using balancer/grpclb/state.Set.
GRPCLB
)
// Address represents a server the client connects to. // Address represents a server the client connects to.
// //
// # Experimental // # Experimental
@ -111,9 +92,6 @@ type Address struct {
// the address, instead of the hostname from the Dial target string. In most cases, // the address, instead of the hostname from the Dial target string. In most cases,
// this should not be set. // this should not be set.
// //
// If Type is GRPCLB, ServerName should be the name of the remote load
// balancer, not the name of the backend.
//
// WARNING: ServerName must only be populated with trusted values. It // WARNING: ServerName must only be populated with trusted values. It
// is insecure to populate it with data from untrusted inputs since untrusted // is insecure to populate it with data from untrusted inputs since untrusted
// values could be used to bypass the authority checks performed by TLS. // values could be used to bypass the authority checks performed by TLS.
@ -126,27 +104,29 @@ type Address struct {
// BalancerAttributes contains arbitrary data about this address intended // BalancerAttributes contains arbitrary data about this address intended
// for consumption by the LB policy. These attributes do not affect SubConn // for consumption by the LB policy. These attributes do not affect SubConn
// creation, connection establishment, handshaking, etc. // creation, connection establishment, handshaking, etc.
BalancerAttributes *attributes.Attributes
// Type is the type of this address.
// //
// Deprecated: use Attributes instead. // Deprecated: when an Address is inside an Endpoint, this field should not
Type AddressType // be used, and it will eventually be removed entirely.
BalancerAttributes *attributes.Attributes
// Metadata is the information associated with Addr, which may be used // Metadata is the information associated with Addr, which may be used
// to make load balancing decision. // to make load balancing decision.
// //
// Deprecated: use Attributes instead. // Deprecated: use Attributes instead.
Metadata interface{} Metadata any
} }
// Equal returns whether a and o are identical. Metadata is compared directly, // Equal returns whether a and o are identical. Metadata is compared directly,
// not with any recursive introspection. // not with any recursive introspection.
//
// This method compares all fields of the address. When used to tell apart
// addresses during subchannel creation or connection establishment, it might be
// more appropriate for the caller to implement custom equality logic.
func (a Address) Equal(o Address) bool { func (a Address) Equal(o Address) bool {
return a.Addr == o.Addr && a.ServerName == o.ServerName && return a.Addr == o.Addr && a.ServerName == o.ServerName &&
a.Attributes.Equal(o.Attributes) && a.Attributes.Equal(o.Attributes) &&
a.BalancerAttributes.Equal(o.BalancerAttributes) && a.BalancerAttributes.Equal(o.BalancerAttributes) &&
a.Type == o.Type && a.Metadata == o.Metadata a.Metadata == o.Metadata
} }
// String returns JSON formatted string representation of the address. // String returns JSON formatted string representation of the address.
@ -190,11 +170,37 @@ type BuildOptions struct {
Dialer func(context.Context, string) (net.Conn, error) Dialer func(context.Context, string) (net.Conn, error)
} }
// An Endpoint is one network endpoint, or server, which may have multiple
// addresses with which it can be accessed.
type Endpoint struct {
// Addresses contains a list of addresses used to access this endpoint.
Addresses []Address
// Attributes contains arbitrary data about this endpoint intended for
// consumption by the LB policy.
Attributes *attributes.Attributes
}
// State contains the current Resolver state relevant to the ClientConn. // State contains the current Resolver state relevant to the ClientConn.
type State struct { type State struct {
// Addresses is the latest set of resolved addresses for the target. // Addresses is the latest set of resolved addresses for the target.
//
// If a resolver sets Addresses but does not set Endpoints, one Endpoint
// will be created for each Address before the State is passed to the LB
// policy. The BalancerAttributes of each entry in Addresses will be set
// in Endpoints.Attributes, and be cleared in the Endpoint's Address's
// BalancerAttributes.
//
// Soon, Addresses will be deprecated and replaced fully by Endpoints.
Addresses []Address Addresses []Address
// Endpoints is the latest set of resolved endpoints for the target.
//
// If a resolver produces a State containing Endpoints but not Addresses,
// it must take care to ensure the LB policies it selects will support
// Endpoints.
Endpoints []Endpoint
// ServiceConfig contains the result from parsing the latest service // ServiceConfig contains the result from parsing the latest service
// config. If it is nil, it indicates no service config is present or the // config. If it is nil, it indicates no service config is present or the
// resolver does not provide service configs. // resolver does not provide service configs.
@ -254,20 +260,7 @@ type ClientConn interface {
// target does not contain a scheme or if the parsed scheme is not registered // target does not contain a scheme or if the parsed scheme is not registered
// (i.e. no corresponding resolver available to resolve the endpoint), we will // (i.e. no corresponding resolver available to resolve the endpoint), we will
// apply the default scheme, and will attempt to reparse it. // apply the default scheme, and will attempt to reparse it.
//
// Examples:
//
// - "dns://some_authority/foo.bar"
// Target{Scheme: "dns", Authority: "some_authority", Endpoint: "foo.bar"}
// - "foo.bar"
// Target{Scheme: resolver.GetDefaultScheme(), Endpoint: "foo.bar"}
// - "unknown_scheme://authority/endpoint"
// Target{Scheme: resolver.GetDefaultScheme(), Endpoint: "unknown_scheme://authority/endpoint"}
type Target struct { type Target struct {
// Deprecated: use URL.Scheme instead.
Scheme string
// Deprecated: use URL.Host instead.
Authority string
// URL contains the parsed dial target with an optional default scheme added // URL contains the parsed dial target with an optional default scheme added
// to it if the original dial target contained no scheme or contained an // to it if the original dial target contained no scheme or contained an
// unregistered scheme. Any query params specified in the original dial // unregistered scheme. Any query params specified in the original dial
@ -321,10 +314,3 @@ type Resolver interface {
// Close closes the resolver. // Close closes the resolver.
Close() Close()
} }
// UnregisterForTesting removes the resolver builder with the given scheme from the
// resolver map.
// This function is for testing only.
func UnregisterForTesting(scheme string) {
delete(m, scheme)
}
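Endpoint and State.Endpoints are new in this release: a resolver can group several addresses that reach the same server, and, as the resolver_conn_wrapper change later in this diff shows, a state carrying only Addresses gets a single-address Endpoint synthesized per entry. A sketch of the state a custom resolver might now build; the addresses are illustrative:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/resolver"
    )

    // buildState shows the shape a custom resolver could hand to
    // ClientConn.UpdateState now that Endpoints exist alongside Addresses.
    func buildState() resolver.State {
        return resolver.State{
            Endpoints: []resolver.Endpoint{
                {
                    // One endpoint (server) reachable over two addresses.
                    Addresses: []resolver.Address{
                        {Addr: "10.0.0.1:443"},
                        {Addr: "[2001:db8::1]:443"},
                    },
                },
                {
                    Addresses: []resolver.Address{{Addr: "10.0.0.2:443"}},
                },
            },
        }
    }

    func main() {
        s := buildState()
        fmt.Println("endpoints:", len(s.Endpoints))
    }

As the new State comment warns, a resolver that emits Endpoints without Addresses must make sure the selected LB policies actually understand Endpoints.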

View File

@ -133,7 +133,7 @@ func (ccr *ccResolverWrapper) close() {
ccr.mu.Unlock() ccr.mu.Unlock()
// Give enqueued callbacks a chance to finish. // Give enqueued callbacks a chance to finish.
<-ccr.serializer.Done <-ccr.serializer.Done()
// Spawn a goroutine to close the resolver (since it may block trying to // Spawn a goroutine to close the resolver (since it may block trying to
// cleanup all allocated resources) and return early. // cleanup all allocated resources) and return early.
@ -152,6 +152,14 @@ func (ccr *ccResolverWrapper) serializerScheduleLocked(f func(context.Context))
// which includes addresses and service config. // which includes addresses and service config.
func (ccr *ccResolverWrapper) UpdateState(s resolver.State) error { func (ccr *ccResolverWrapper) UpdateState(s resolver.State) error {
errCh := make(chan error, 1) errCh := make(chan error, 1)
if s.Endpoints == nil {
s.Endpoints = make([]resolver.Endpoint, 0, len(s.Addresses))
for _, a := range s.Addresses {
ep := resolver.Endpoint{Addresses: []resolver.Address{a}, Attributes: a.BalancerAttributes}
ep.Addresses[0].BalancerAttributes = nil
s.Endpoints = append(s.Endpoints, ep)
}
}
ok := ccr.serializer.Schedule(func(context.Context) { ok := ccr.serializer.Schedule(func(context.Context) {
ccr.addChannelzTraceEvent(s) ccr.addChannelzTraceEvent(s)
ccr.curState = s ccr.curState = s

View File

@ -75,7 +75,7 @@ func NewGZIPCompressorWithLevel(level int) (Compressor, error) {
} }
return &gzipCompressor{ return &gzipCompressor{
pool: sync.Pool{ pool: sync.Pool{
New: func() interface{} { New: func() any {
w, err := gzip.NewWriterLevel(io.Discard, level) w, err := gzip.NewWriterLevel(io.Discard, level)
if err != nil { if err != nil {
panic(err) panic(err)
@ -577,6 +577,9 @@ type parser struct {
// The header of a gRPC message. Find more detail at // The header of a gRPC message. Find more detail at
// https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md // https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md
header [5]byte header [5]byte
// recvBufferPool is the pool of shared receive buffers.
recvBufferPool SharedBufferPool
} }
// recvMsg reads a complete gRPC message from the stream. // recvMsg reads a complete gRPC message from the stream.
@ -610,9 +613,7 @@ func (p *parser) recvMsg(maxReceiveMessageSize int) (pf payloadFormat, msg []byt
if int(length) > maxReceiveMessageSize { if int(length) > maxReceiveMessageSize {
return 0, nil, status.Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", length, maxReceiveMessageSize) return 0, nil, status.Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", length, maxReceiveMessageSize)
} }
// TODO(bradfitz,zhaoq): garbage. reuse buffer after proto decoding instead msg = p.recvBufferPool.Get(int(length))
// of making it for each message:
msg = make([]byte, int(length))
if _, err := p.r.Read(msg); err != nil { if _, err := p.r.Read(msg); err != nil {
if err == io.EOF { if err == io.EOF {
err = io.ErrUnexpectedEOF err = io.ErrUnexpectedEOF
@ -625,7 +626,7 @@ func (p *parser) recvMsg(maxReceiveMessageSize int) (pf payloadFormat, msg []byt
// encode serializes msg and returns a buffer containing the message, or an // encode serializes msg and returns a buffer containing the message, or an
// error if it is too large to be transmitted by grpc. If msg is nil, it // error if it is too large to be transmitted by grpc. If msg is nil, it
// generates an empty message. // generates an empty message.
func encode(c baseCodec, msg interface{}) ([]byte, error) { func encode(c baseCodec, msg any) ([]byte, error) {
if msg == nil { // NOTE: typed nils will not be caught by this check if msg == nil { // NOTE: typed nils will not be caught by this check
return nil, nil return nil, nil
} }
@ -692,7 +693,7 @@ func msgHeader(data, compData []byte) (hdr []byte, payload []byte) {
return hdr, data return hdr, data
} }
func outPayload(client bool, msg interface{}, data, payload []byte, t time.Time) *stats.OutPayload { func outPayload(client bool, msg any, data, payload []byte, t time.Time) *stats.OutPayload {
return &stats.OutPayload{ return &stats.OutPayload{
Client: client, Client: client,
Payload: msg, Payload: msg,
@ -726,12 +727,12 @@ type payloadInfo struct {
} }
func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxReceiveMessageSize int, payInfo *payloadInfo, compressor encoding.Compressor) ([]byte, error) { func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxReceiveMessageSize int, payInfo *payloadInfo, compressor encoding.Compressor) ([]byte, error) {
pf, d, err := p.recvMsg(maxReceiveMessageSize) pf, buf, err := p.recvMsg(maxReceiveMessageSize)
if err != nil { if err != nil {
return nil, err return nil, err
} }
if payInfo != nil { if payInfo != nil {
payInfo.compressedLength = len(d) payInfo.compressedLength = len(buf)
} }
if st := checkRecvPayload(pf, s.RecvCompress(), compressor != nil || dc != nil); st != nil { if st := checkRecvPayload(pf, s.RecvCompress(), compressor != nil || dc != nil); st != nil {
@ -743,10 +744,10 @@ func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxRecei
// To match legacy behavior, if the decompressor is set by WithDecompressor or RPCDecompressor, // To match legacy behavior, if the decompressor is set by WithDecompressor or RPCDecompressor,
// use this decompressor as the default. // use this decompressor as the default.
if dc != nil { if dc != nil {
d, err = dc.Do(bytes.NewReader(d)) buf, err = dc.Do(bytes.NewReader(buf))
size = len(d) size = len(buf)
} else { } else {
d, size, err = decompress(compressor, d, maxReceiveMessageSize) buf, size, err = decompress(compressor, buf, maxReceiveMessageSize)
} }
if err != nil { if err != nil {
return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message: %v", err) return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message: %v", err)
@ -757,7 +758,7 @@ func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxRecei
return nil, status.Errorf(codes.ResourceExhausted, "grpc: received message after decompression larger than max (%d vs. %d)", size, maxReceiveMessageSize) return nil, status.Errorf(codes.ResourceExhausted, "grpc: received message after decompression larger than max (%d vs. %d)", size, maxReceiveMessageSize)
} }
} }
return d, nil return buf, nil
} }
// Using compressor, decompress d, returning data and size. // Using compressor, decompress d, returning data and size.
@ -791,16 +792,18 @@ func decompress(compressor encoding.Compressor, d []byte, maxReceiveMessageSize
// For the two compressor parameters, both should not be set, but if they are, // For the two compressor parameters, both should not be set, but if they are,
// dc takes precedence over compressor. // dc takes precedence over compressor.
// TODO(dfawley): wrap the old compressor/decompressor using the new API? // TODO(dfawley): wrap the old compressor/decompressor using the new API?
func recv(p *parser, c baseCodec, s *transport.Stream, dc Decompressor, m interface{}, maxReceiveMessageSize int, payInfo *payloadInfo, compressor encoding.Compressor) error { func recv(p *parser, c baseCodec, s *transport.Stream, dc Decompressor, m any, maxReceiveMessageSize int, payInfo *payloadInfo, compressor encoding.Compressor) error {
d, err := recvAndDecompress(p, s, dc, maxReceiveMessageSize, payInfo, compressor) buf, err := recvAndDecompress(p, s, dc, maxReceiveMessageSize, payInfo, compressor)
if err != nil { if err != nil {
return err return err
} }
if err := c.Unmarshal(d, m); err != nil { if err := c.Unmarshal(buf, m); err != nil {
return status.Errorf(codes.Internal, "grpc: failed to unmarshal the received message: %v", err) return status.Errorf(codes.Internal, "grpc: failed to unmarshal the received message: %v", err)
} }
if payInfo != nil { if payInfo != nil {
payInfo.uncompressedBytes = d payInfo.uncompressedBytes = buf
} else {
p.recvBufferPool.Put(&buf)
} }
return nil return nil
} }
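The parser now draws its receive buffers from a SharedBufferPool and, in recv, returns them when no stats handler or binary logger retains the payload. grpc.NewSharedBufferPool is the stock implementation; the sketch below is a deliberately naive pool that only satisfies the same Get/Put contract, to show the interface shape the parser relies on here (treat the exact interface spelling as inferred from these call sites):

    package main

    import "sync"

    // naivePool is a minimal stand-in for the SharedBufferPool contract used
    // above: Get returns a slice of the requested length, Put hands it back.
    // grpc.NewSharedBufferPool() is the tiered, production implementation.
    type naivePool struct {
        p sync.Pool
    }

    func newNaivePool() *naivePool {
        return &naivePool{p: sync.Pool{New: func() any {
            b := make([]byte, 0, 16*1024)
            return &b
        }}}
    }

    func (np *naivePool) Get(length int) []byte {
        b := *np.p.Get().(*[]byte)
        if cap(b) < length {
            // Too small for this message; allocate instead of reusing.
            return make([]byte, length)
        }
        return b[:length]
    }

    func (np *naivePool) Put(b *[]byte) {
        np.p.Put(b)
    }

    func main() {
        pool := newNaivePool()
        buf := pool.Get(1024) // e.g. the length read from the gRPC message header
        pool.Put(&buf)
    }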
@ -860,19 +863,22 @@ func ErrorDesc(err error) string {
// Errorf returns nil if c is OK. // Errorf returns nil if c is OK.
// //
// Deprecated: use status.Errorf instead. // Deprecated: use status.Errorf instead.
func Errorf(c codes.Code, format string, a ...interface{}) error { func Errorf(c codes.Code, format string, a ...any) error {
return status.Errorf(c, format, a...) return status.Errorf(c, format, a...)
} }
var errContextCanceled = status.Error(codes.Canceled, context.Canceled.Error())
var errContextDeadline = status.Error(codes.DeadlineExceeded, context.DeadlineExceeded.Error())
// toRPCErr converts an error into an error from the status package. // toRPCErr converts an error into an error from the status package.
func toRPCErr(err error) error { func toRPCErr(err error) error {
switch err { switch err {
case nil, io.EOF: case nil, io.EOF:
return err return err
case context.DeadlineExceeded: case context.DeadlineExceeded:
return status.Error(codes.DeadlineExceeded, err.Error()) return errContextDeadline
case context.Canceled: case context.Canceled:
return status.Error(codes.Canceled, err.Error()) return errContextCanceled
case io.ErrUnexpectedEOF: case io.ErrUnexpectedEOF:
return status.Error(codes.Internal, err.Error()) return status.Error(codes.Internal, err.Error())
} }

View File

@ -86,7 +86,7 @@ func init() {
var statusOK = status.New(codes.OK, "") var statusOK = status.New(codes.OK, "")
var logger = grpclog.Component("core") var logger = grpclog.Component("core")
type methodHandler func(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor UnaryServerInterceptor) (interface{}, error) type methodHandler func(srv any, ctx context.Context, dec func(any) error, interceptor UnaryServerInterceptor) (any, error)
// MethodDesc represents an RPC service's method specification. // MethodDesc represents an RPC service's method specification.
type MethodDesc struct { type MethodDesc struct {
@ -99,20 +99,20 @@ type ServiceDesc struct {
ServiceName string ServiceName string
// The pointer to the service interface. Used to check whether the user // The pointer to the service interface. Used to check whether the user
// provided implementation satisfies the interface requirements. // provided implementation satisfies the interface requirements.
HandlerType interface{} HandlerType any
Methods []MethodDesc Methods []MethodDesc
Streams []StreamDesc Streams []StreamDesc
Metadata interface{} Metadata any
} }
// serviceInfo wraps information about a service. It is very similar to // serviceInfo wraps information about a service. It is very similar to
// ServiceDesc and is constructed from it for internal purposes. // ServiceDesc and is constructed from it for internal purposes.
type serviceInfo struct { type serviceInfo struct {
// Contains the implementation for the methods in this service. // Contains the implementation for the methods in this service.
serviceImpl interface{} serviceImpl any
methods map[string]*MethodDesc methods map[string]*MethodDesc
streams map[string]*StreamDesc streams map[string]*StreamDesc
mdata interface{} mdata any
} }
// Server is a gRPC server to serve RPC requests. // Server is a gRPC server to serve RPC requests.
@ -164,10 +164,12 @@ type serverOptions struct {
initialConnWindowSize int32 initialConnWindowSize int32
writeBufferSize int writeBufferSize int
readBufferSize int readBufferSize int
sharedWriteBuffer bool
connectionTimeout time.Duration connectionTimeout time.Duration
maxHeaderListSize *uint32 maxHeaderListSize *uint32
headerTableSize *uint32 headerTableSize *uint32
numServerWorkers uint32 numServerWorkers uint32
recvBufferPool SharedBufferPool
} }
var defaultServerOptions = serverOptions{ var defaultServerOptions = serverOptions{
@ -177,6 +179,7 @@ var defaultServerOptions = serverOptions{
connectionTimeout: 120 * time.Second, connectionTimeout: 120 * time.Second,
writeBufferSize: defaultWriteBufSize, writeBufferSize: defaultWriteBufSize,
readBufferSize: defaultReadBufSize, readBufferSize: defaultReadBufSize,
recvBufferPool: nopBufferPool{},
} }
var globalServerOptions []ServerOption var globalServerOptions []ServerOption
@ -228,6 +231,20 @@ func newJoinServerOption(opts ...ServerOption) ServerOption {
return &joinServerOption{opts: opts} return &joinServerOption{opts: opts}
} }
// SharedWriteBuffer allows reusing per-connection transport write buffer.
// If this option is set to true every connection will release the buffer after
// flushing the data on the wire.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func SharedWriteBuffer(val bool) ServerOption {
return newFuncServerOption(func(o *serverOptions) {
o.sharedWriteBuffer = val
})
}
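A sketch of using the new experimental server option together with WriteBufferSize; per the doc comment on WriteBufferSize, the actual allocation is twice the requested size, which is the size the shared pool in http_util.go is keyed on. The listen address is a placeholder:

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatalf("listen: %v", err)
        }

        s := grpc.NewServer(
            grpc.WriteBufferSize(64*1024), // the pool key ends up being 2x this size
            grpc.SharedWriteBuffer(true),  // experimental: release the buffer after each flush
        )
        // Register services here, then serve.
        if err := s.Serve(lis); err != nil {
            log.Fatalf("serve: %v", err)
        }
    }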
// WriteBufferSize determines how much data can be batched before doing a write // WriteBufferSize determines how much data can be batched before doing a write
// on the wire. The corresponding memory allocation for this buffer will be // on the wire. The corresponding memory allocation for this buffer will be
// twice the size to keep syscalls low. The default value for this buffer is // twice the size to keep syscalls low. The default value for this buffer is
@ -268,9 +285,9 @@ func InitialConnWindowSize(s int32) ServerOption {
// KeepaliveParams returns a ServerOption that sets keepalive and max-age parameters for the server. // KeepaliveParams returns a ServerOption that sets keepalive and max-age parameters for the server.
func KeepaliveParams(kp keepalive.ServerParameters) ServerOption { func KeepaliveParams(kp keepalive.ServerParameters) ServerOption {
if kp.Time > 0 && kp.Time < time.Second { if kp.Time > 0 && kp.Time < internal.KeepaliveMinServerPingTime {
logger.Warning("Adjusting keepalive ping interval to minimum period of 1s") logger.Warning("Adjusting keepalive ping interval to minimum period of 1s")
kp.Time = time.Second kp.Time = internal.KeepaliveMinServerPingTime
} }
return newFuncServerOption(func(o *serverOptions) { return newFuncServerOption(func(o *serverOptions) {
@ -550,6 +567,27 @@ func NumStreamWorkers(numServerWorkers uint32) ServerOption {
}) })
} }
// RecvBufferPool returns a ServerOption that configures the server
// to use the provided shared buffer pool for parsing incoming messages. Depending
// on the application's workload, this could result in reduced memory allocation.
//
// If you are unsure about how to implement a memory pool but want to utilize one,
// begin with grpc.NewSharedBufferPool.
//
// Note: The shared buffer pool feature will not be active if any of the following
// options are used: StatsHandler, EnableTracing, or binary logging. In such
// cases, the shared buffer pool will be ignored.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func RecvBufferPool(bufferPool SharedBufferPool) ServerOption {
return newFuncServerOption(func(o *serverOptions) {
o.recvBufferPool = bufferPool
})
}
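A sketch of wiring the option up; the configured pool is handed to every parser created in processUnaryRPC and processStreamingRPC below. Note the caveat in the doc comment: stats handlers, tracing and binary logging keep the uncompressed payload alive, in which case recv skips returning buffers to the pool.

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatalf("listen: %v", err)
        }

        // grpc.NewSharedBufferPool() is the stock SharedBufferPool; each
        // stream's parser Gets and Puts its receive buffers from it.
        s := grpc.NewServer(
            grpc.RecvBufferPool(grpc.NewSharedBufferPool()),
        )
        if err := s.Serve(lis); err != nil {
            log.Fatalf("serve: %v", err)
        }
    }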
// serverWorkerResetThreshold defines how often the stack must be reset. Every // serverWorkerResetThreshold defines how often the stack must be reset. Every
// N requests, by spawning a new goroutine in its place, a worker can reset its // N requests, by spawning a new goroutine in its place, a worker can reset its
// stack so that large stacks don't live in memory forever. 2^16 should allow // stack so that large stacks don't live in memory forever. 2^16 should allow
@ -625,7 +663,7 @@ func NewServer(opt ...ServerOption) *Server {
// printf records an event in s's event log, unless s has been stopped. // printf records an event in s's event log, unless s has been stopped.
// REQUIRES s.mu is held. // REQUIRES s.mu is held.
func (s *Server) printf(format string, a ...interface{}) { func (s *Server) printf(format string, a ...any) {
if s.events != nil { if s.events != nil {
s.events.Printf(format, a...) s.events.Printf(format, a...)
} }
@ -633,7 +671,7 @@ func (s *Server) printf(format string, a ...interface{}) {
// errorf records an error in s's event log, unless s has been stopped. // errorf records an error in s's event log, unless s has been stopped.
// REQUIRES s.mu is held. // REQUIRES s.mu is held.
func (s *Server) errorf(format string, a ...interface{}) { func (s *Server) errorf(format string, a ...any) {
if s.events != nil { if s.events != nil {
s.events.Errorf(format, a...) s.events.Errorf(format, a...)
} }
@ -648,14 +686,14 @@ type ServiceRegistrar interface {
// once the server has started serving. // once the server has started serving.
// desc describes the service and its methods and handlers. impl is the // desc describes the service and its methods and handlers. impl is the
// service implementation which is passed to the method handlers. // service implementation which is passed to the method handlers.
RegisterService(desc *ServiceDesc, impl interface{}) RegisterService(desc *ServiceDesc, impl any)
} }
// RegisterService registers a service and its implementation to the gRPC // RegisterService registers a service and its implementation to the gRPC
// server. It is called from the IDL generated code. This must be called before // server. It is called from the IDL generated code. This must be called before
// invoking Serve. If ss is non-nil (for legacy code), its type is checked to // invoking Serve. If ss is non-nil (for legacy code), its type is checked to
// ensure it implements sd.HandlerType. // ensure it implements sd.HandlerType.
func (s *Server) RegisterService(sd *ServiceDesc, ss interface{}) { func (s *Server) RegisterService(sd *ServiceDesc, ss any) {
if ss != nil { if ss != nil {
ht := reflect.TypeOf(sd.HandlerType).Elem() ht := reflect.TypeOf(sd.HandlerType).Elem()
st := reflect.TypeOf(ss) st := reflect.TypeOf(ss)
@ -666,7 +704,7 @@ func (s *Server) RegisterService(sd *ServiceDesc, ss interface{}) {
s.register(sd, ss) s.register(sd, ss)
} }
func (s *Server) register(sd *ServiceDesc, ss interface{}) { func (s *Server) register(sd *ServiceDesc, ss any) {
s.mu.Lock() s.mu.Lock()
defer s.mu.Unlock() defer s.mu.Unlock()
s.printf("RegisterService(%q)", sd.ServiceName) s.printf("RegisterService(%q)", sd.ServiceName)
@ -707,7 +745,7 @@ type MethodInfo struct {
type ServiceInfo struct { type ServiceInfo struct {
Methods []MethodInfo Methods []MethodInfo
// Metadata is the metadata specified in ServiceDesc when registering service. // Metadata is the metadata specified in ServiceDesc when registering service.
Metadata interface{} Metadata any
} }
// GetServiceInfo returns a map from service names to ServiceInfo. // GetServiceInfo returns a map from service names to ServiceInfo.
@ -908,6 +946,7 @@ func (s *Server) newHTTP2Transport(c net.Conn) transport.ServerTransport {
InitialConnWindowSize: s.opts.initialConnWindowSize, InitialConnWindowSize: s.opts.initialConnWindowSize,
WriteBufferSize: s.opts.writeBufferSize, WriteBufferSize: s.opts.writeBufferSize,
ReadBufferSize: s.opts.readBufferSize, ReadBufferSize: s.opts.readBufferSize,
SharedWriteBuffer: s.opts.sharedWriteBuffer,
ChannelzParentID: s.channelzID, ChannelzParentID: s.channelzID,
MaxHeaderListSize: s.opts.maxHeaderListSize, MaxHeaderListSize: s.opts.maxHeaderListSize,
HeaderTableSize: s.opts.headerTableSize, HeaderTableSize: s.opts.headerTableSize,
@ -1094,7 +1133,7 @@ func (s *Server) incrCallsFailed() {
atomic.AddInt64(&s.czData.callsFailed, 1) atomic.AddInt64(&s.czData.callsFailed, 1)
} }
func (s *Server) sendResponse(t transport.ServerTransport, stream *transport.Stream, msg interface{}, cp Compressor, opts *transport.Options, comp encoding.Compressor) error { func (s *Server) sendResponse(t transport.ServerTransport, stream *transport.Stream, msg any, cp Compressor, opts *transport.Options, comp encoding.Compressor) error {
data, err := encode(s.getCodec(stream.ContentSubtype()), msg) data, err := encode(s.getCodec(stream.ContentSubtype()), msg)
if err != nil { if err != nil {
channelz.Error(logger, s.channelzID, "grpc: server failed to encode response: ", err) channelz.Error(logger, s.channelzID, "grpc: server failed to encode response: ", err)
@ -1141,7 +1180,7 @@ func chainUnaryServerInterceptors(s *Server) {
} }
func chainUnaryInterceptors(interceptors []UnaryServerInterceptor) UnaryServerInterceptor { func chainUnaryInterceptors(interceptors []UnaryServerInterceptor) UnaryServerInterceptor {
return func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (interface{}, error) { return func(ctx context.Context, req any, info *UnaryServerInfo, handler UnaryHandler) (any, error) {
return interceptors[0](ctx, req, info, getChainUnaryHandler(interceptors, 0, info, handler)) return interceptors[0](ctx, req, info, getChainUnaryHandler(interceptors, 0, info, handler))
} }
} }
@ -1150,7 +1189,7 @@ func getChainUnaryHandler(interceptors []UnaryServerInterceptor, curr int, info
if curr == len(interceptors)-1 { if curr == len(interceptors)-1 {
return finalHandler return finalHandler
} }
return func(ctx context.Context, req interface{}) (interface{}, error) { return func(ctx context.Context, req any) (any, error) {
return interceptors[curr+1](ctx, req, info, getChainUnaryHandler(interceptors, curr+1, info, finalHandler)) return interceptors[curr+1](ctx, req, info, getChainUnaryHandler(interceptors, curr+1, info, finalHandler))
} }
} }
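The interceptor plumbing now spells its signatures with any; because any is an alias for interface{}, existing interceptors compile unchanged. A small unary interceptor in the new spelling, registered the same way chainUnaryInterceptors consumes it:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
    )

    // timing logs each unary RPC's method and duration; note the `any`
    // request and response types, interchangeable with interface{}.
    func timing(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
        start := time.Now()
        resp, err := handler(ctx, req)
        log.Printf("%s took %s (err=%v)", info.FullMethod, time.Since(start), err)
        return resp, err
    }

    func main() {
        // Chained in registration order, exactly as getChainUnaryHandler walks them.
        _ = grpc.NewServer(grpc.ChainUnaryInterceptor(timing))
    }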
@ -1187,7 +1226,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
defer func() { defer func() {
if trInfo != nil { if trInfo != nil {
if err != nil && err != io.EOF { if err != nil && err != io.EOF {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true)
trInfo.tr.SetError() trInfo.tr.SetError()
} }
trInfo.tr.Finish() trInfo.tr.Finish()
@ -1294,7 +1333,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
if len(shs) != 0 || len(binlogs) != 0 { if len(shs) != 0 || len(binlogs) != 0 {
payInfo = &payloadInfo{} payInfo = &payloadInfo{}
} }
d, err := recvAndDecompress(&parser{r: stream}, stream, dc, s.opts.maxReceiveMessageSize, payInfo, decomp) d, err := recvAndDecompress(&parser{r: stream, recvBufferPool: s.opts.recvBufferPool}, stream, dc, s.opts.maxReceiveMessageSize, payInfo, decomp)
if err != nil { if err != nil {
if e := t.WriteStatus(stream, status.Convert(err)); e != nil { if e := t.WriteStatus(stream, status.Convert(err)); e != nil {
channelz.Warningf(logger, s.channelzID, "grpc: Server.processUnaryRPC failed to write status: %v", e) channelz.Warningf(logger, s.channelzID, "grpc: Server.processUnaryRPC failed to write status: %v", e)
@ -1304,7 +1343,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
if channelz.IsOn() { if channelz.IsOn() {
t.IncrMsgRecv() t.IncrMsgRecv()
} }
df := func(v interface{}) error { df := func(v any) error {
if err := s.getCodec(stream.ContentSubtype()).Unmarshal(d, v); err != nil { if err := s.getCodec(stream.ContentSubtype()).Unmarshal(d, v); err != nil {
return status.Errorf(codes.Internal, "grpc: error unmarshalling request: %v", err) return status.Errorf(codes.Internal, "grpc: error unmarshalling request: %v", err)
} }
@ -1468,7 +1507,7 @@ func chainStreamServerInterceptors(s *Server) {
} }
func chainStreamInterceptors(interceptors []StreamServerInterceptor) StreamServerInterceptor { func chainStreamInterceptors(interceptors []StreamServerInterceptor) StreamServerInterceptor {
return func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error { return func(srv any, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error {
return interceptors[0](srv, ss, info, getChainStreamHandler(interceptors, 0, info, handler)) return interceptors[0](srv, ss, info, getChainStreamHandler(interceptors, 0, info, handler))
} }
} }
@ -1477,7 +1516,7 @@ func getChainStreamHandler(interceptors []StreamServerInterceptor, curr int, inf
if curr == len(interceptors)-1 { if curr == len(interceptors)-1 {
return finalHandler return finalHandler
} }
return func(srv interface{}, stream ServerStream) error { return func(srv any, stream ServerStream) error {
return interceptors[curr+1](srv, stream, info, getChainStreamHandler(interceptors, curr+1, info, finalHandler)) return interceptors[curr+1](srv, stream, info, getChainStreamHandler(interceptors, curr+1, info, finalHandler))
} }
} }
@ -1504,7 +1543,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
ctx: ctx, ctx: ctx,
t: t, t: t,
s: stream, s: stream,
p: &parser{r: stream}, p: &parser{r: stream, recvBufferPool: s.opts.recvBufferPool},
codec: s.getCodec(stream.ContentSubtype()), codec: s.getCodec(stream.ContentSubtype()),
maxReceiveMessageSize: s.opts.maxReceiveMessageSize, maxReceiveMessageSize: s.opts.maxReceiveMessageSize,
maxSendMessageSize: s.opts.maxSendMessageSize, maxSendMessageSize: s.opts.maxSendMessageSize,
@ -1518,7 +1557,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
if trInfo != nil { if trInfo != nil {
ss.mu.Lock() ss.mu.Lock()
if err != nil && err != io.EOF { if err != nil && err != io.EOF {
ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true)
ss.trInfo.tr.SetError() ss.trInfo.tr.SetError()
} }
ss.trInfo.tr.Finish() ss.trInfo.tr.Finish()
@ -1621,7 +1660,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
trInfo.tr.LazyLog(&trInfo.firstLine, false) trInfo.tr.LazyLog(&trInfo.firstLine, false)
} }
var appErr error var appErr error
var server interface{} var server any
if info != nil { if info != nil {
server = info.serviceImpl server = info.serviceImpl
} }
@ -1687,13 +1726,13 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str
pos := strings.LastIndex(sm, "/") pos := strings.LastIndex(sm, "/")
if pos == -1 { if pos == -1 {
if trInfo != nil { if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"Malformed method name %q", []interface{}{sm}}, true) trInfo.tr.LazyLog(&fmtStringer{"Malformed method name %q", []any{sm}}, true)
trInfo.tr.SetError() trInfo.tr.SetError()
} }
errDesc := fmt.Sprintf("malformed method name: %q", stream.Method()) errDesc := fmt.Sprintf("malformed method name: %q", stream.Method())
if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil { if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil {
if trInfo != nil { if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true)
trInfo.tr.SetError() trInfo.tr.SetError()
} }
channelz.Warningf(logger, s.channelzID, "grpc: Server.handleStream failed to write status: %v", err) channelz.Warningf(logger, s.channelzID, "grpc: Server.handleStream failed to write status: %v", err)
@ -1734,7 +1773,7 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str
} }
if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil { if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil {
if trInfo != nil { if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) trInfo.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true)
trInfo.tr.SetError() trInfo.tr.SetError()
} }
channelz.Warningf(logger, s.channelzID, "grpc: Server.handleStream failed to write status: %v", err) channelz.Warningf(logger, s.channelzID, "grpc: Server.handleStream failed to write status: %v", err)
@ -2054,12 +2093,12 @@ func validateSendCompressor(name, clientCompressors string) error {
// atomicSemaphore implements a blocking, counting semaphore. acquire should be // atomicSemaphore implements a blocking, counting semaphore. acquire should be
// called synchronously; release may be called asynchronously. // called synchronously; release may be called asynchronously.
type atomicSemaphore struct { type atomicSemaphore struct {
n int64 n atomic.Int64
wait chan struct{} wait chan struct{}
} }
func (q *atomicSemaphore) acquire() { func (q *atomicSemaphore) acquire() {
if atomic.AddInt64(&q.n, -1) < 0 { if q.n.Add(-1) < 0 {
// We ran out of quota. Block until a release happens. // We ran out of quota. Block until a release happens.
<-q.wait <-q.wait
} }
@ -2070,12 +2109,14 @@ func (q *atomicSemaphore) release() {
// concurrent calls to acquire, but also note that with synchronous calls to // concurrent calls to acquire, but also note that with synchronous calls to
// acquire, as our system does, n will never be less than -1. There are // acquire, as our system does, n will never be less than -1. There are
// fairness issues (queuing) to consider if this was to be generalized. // fairness issues (queuing) to consider if this was to be generalized.
if atomic.AddInt64(&q.n, 1) <= 0 { if q.n.Add(1) <= 0 {
// An acquire was waiting on us. Unblock it. // An acquire was waiting on us. Unblock it.
q.wait <- struct{}{} q.wait <- struct{}{}
} }
} }
func newHandlerQuota(n uint32) *atomicSemaphore { func newHandlerQuota(n uint32) *atomicSemaphore {
return &atomicSemaphore{n: int64(n), wait: make(chan struct{}, 1)} a := &atomicSemaphore{wait: make(chan struct{}, 1)}
a.n.Store(int64(n))
return a
} }
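As the comments above state, acquire must be called synchronously while release may be called from any goroutine, and n never drops below -1. A minimal standalone sketch of the same counting-semaphore pattern (not the grpc-internal type, which is unexported):

// Illustrative only: mirrors the acquire/release pattern of atomicSemaphore.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type semaphore struct {
	n    atomic.Int64
	wait chan struct{}
}

func newSemaphore(n int64) *semaphore {
	s := &semaphore{wait: make(chan struct{}, 1)}
	s.n.Store(n)
	return s
}

// acquire must be called from a single goroutine (synchronously).
func (s *semaphore) acquire() {
	if s.n.Add(-1) < 0 {
		<-s.wait // out of quota: block until a release happens
	}
}

// release may be called from any goroutine.
func (s *semaphore) release() {
	if s.n.Add(1) <= 0 {
		s.wait <- struct{}{} // an acquire is waiting: unblock it
	}
}

func main() {
	quota := newSemaphore(2) // at most two workers in flight
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		quota.acquire()
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			defer quota.release()
			fmt.Println("worker", id)
		}(i)
	}
	wg.Wait()
}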

154 vendor/google.golang.org/grpc/shared_buffer_pool.go generated vendored Normal file

@ -0,0 +1,154 @@
/*
*
* Copyright 2023 gRPC authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package grpc
import "sync"
// SharedBufferPool is a pool of buffers that can be shared, resulting in
// decreased memory allocation. Currently, in gRPC-go, it is only utilized
// for parsing incoming messages.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
type SharedBufferPool interface {
// Get returns a buffer with specified length from the pool.
//
// The returned byte slice may not be zero-initialized.
Get(length int) []byte
// Put returns a buffer to the pool.
Put(*[]byte)
}
// NewSharedBufferPool creates a simple SharedBufferPool with buckets
// of different sizes to optimize memory usage. This prevents the pool from
// wasting large amounts of memory, even when handling messages of varying sizes.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func NewSharedBufferPool() SharedBufferPool {
return &simpleSharedBufferPool{
pools: [poolArraySize]simpleSharedBufferChildPool{
newBytesPool(level0PoolMaxSize),
newBytesPool(level1PoolMaxSize),
newBytesPool(level2PoolMaxSize),
newBytesPool(level3PoolMaxSize),
newBytesPool(level4PoolMaxSize),
newBytesPool(0),
},
}
}
// simpleSharedBufferPool is a simple implementation of SharedBufferPool.
type simpleSharedBufferPool struct {
pools [poolArraySize]simpleSharedBufferChildPool
}
func (p *simpleSharedBufferPool) Get(size int) []byte {
return p.pools[p.poolIdx(size)].Get(size)
}
func (p *simpleSharedBufferPool) Put(bs *[]byte) {
p.pools[p.poolIdx(cap(*bs))].Put(bs)
}
func (p *simpleSharedBufferPool) poolIdx(size int) int {
switch {
case size <= level0PoolMaxSize:
return level0PoolIdx
case size <= level1PoolMaxSize:
return level1PoolIdx
case size <= level2PoolMaxSize:
return level2PoolIdx
case size <= level3PoolMaxSize:
return level3PoolIdx
case size <= level4PoolMaxSize:
return level4PoolIdx
default:
return levelMaxPoolIdx
}
}
const (
level0PoolMaxSize = 16 // 16 B
level1PoolMaxSize = level0PoolMaxSize * 16 // 256 B
level2PoolMaxSize = level1PoolMaxSize * 16 // 4 KB
level3PoolMaxSize = level2PoolMaxSize * 16 // 64 KB
level4PoolMaxSize = level3PoolMaxSize * 16 // 1 MB
)
const (
level0PoolIdx = iota
level1PoolIdx
level2PoolIdx
level3PoolIdx
level4PoolIdx
levelMaxPoolIdx
poolArraySize
)
type simpleSharedBufferChildPool interface {
Get(size int) []byte
Put(any)
}
type bufferPool struct {
sync.Pool
defaultSize int
}
func (p *bufferPool) Get(size int) []byte {
bs := p.Pool.Get().(*[]byte)
if cap(*bs) < size {
p.Pool.Put(bs)
return make([]byte, size)
}
return (*bs)[:size]
}
func newBytesPool(size int) simpleSharedBufferChildPool {
return &bufferPool{
Pool: sync.Pool{
New: func() any {
bs := make([]byte, size)
return &bs
},
},
defaultSize: size,
}
}
// nopBufferPool is a buffer pool that just makes a new buffer without pooling.
type nopBufferPool struct {
}
func (nopBufferPool) Get(length int) []byte {
return make([]byte, length)
}
func (nopBufferPool) Put(*[]byte) {
}
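The pool above is plugged into message parsing through the recvBufferPool fields referenced earlier (s.opts.recvBufferPool on the server, dopts.recvBufferPool on the client). A hedged usage sketch, assuming the experimental grpc.RecvBufferPool server option and grpc.WithRecvBufferPool dial option that accompany this API (both experimental and subject to change):

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Server side: reuse receive buffers across RPCs. RecvBufferPool is the
	// experimental ServerOption assumed to populate s.opts.recvBufferPool.
	srv := grpc.NewServer(grpc.RecvBufferPool(grpc.NewSharedBufferPool()))

	lis, err := net.Listen("tcp", "localhost:50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	go srv.Serve(lis)
	defer srv.Stop()

	// Client side: the matching experimental DialOption assumed to fill
	// dopts.recvBufferPool, used by the client-side parsers above.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithRecvBufferPool(grpc.NewSharedBufferPool()),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
}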

vendor/google.golang.org/grpc/stats/stats.go vendored

@ -59,12 +59,22 @@ func (s *Begin) IsClient() bool { return s.Client }
func (s *Begin) isRPCStats() {} func (s *Begin) isRPCStats() {}
// PickerUpdated indicates that the LB policy provided a new picker while the
// RPC was waiting for one.
type PickerUpdated struct{}
// IsClient indicates if the stats information is from client side. Only Client
// Side interfaces with a Picker, thus always returns true.
func (*PickerUpdated) IsClient() bool { return true }
func (*PickerUpdated) isRPCStats() {}
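The new event should reach registered stats handlers through HandleRPC like any other RPCStats value. A small client-side sketch, assuming the standard stats.Handler interface and grpc.WithStatsHandler registration:

package example

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/stats"
)

// pickerLogger is a stats.Handler that reports the new PickerUpdated event.
type pickerLogger struct{}

func (pickerLogger) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context { return ctx }

func (pickerLogger) HandleRPC(_ context.Context, s stats.RPCStats) {
	if _, ok := s.(*stats.PickerUpdated); ok {
		log.Println("LB policy delivered a new picker while an RPC was waiting")
	}
}

func (pickerLogger) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context { return ctx }

func (pickerLogger) HandleConn(context.Context, stats.ConnStats) {}

// dialOption registers the handler on a client connection, where
// PickerUpdated (a client-side-only event) can be observed.
func dialOption() grpc.DialOption { return grpc.WithStatsHandler(pickerLogger{}) }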
// InPayload contains the information for an incoming payload. // InPayload contains the information for an incoming payload.
type InPayload struct { type InPayload struct {
// Client is true if this InPayload is from client side. // Client is true if this InPayload is from client side.
Client bool Client bool
// Payload is the payload with original type. // Payload is the payload with original type.
Payload interface{} Payload any
// Data is the serialized message payload. // Data is the serialized message payload.
Data []byte Data []byte
@ -134,7 +144,7 @@ type OutPayload struct {
// Client is true if this OutPayload is from client side. // Client is true if this OutPayload is from client side.
Client bool Client bool
// Payload is the payload with original type. // Payload is the payload with original type.
Payload interface{} Payload any
// Data is the serialized message payload. // Data is the serialized message payload.
Data []byte Data []byte
// Length is the size of the uncompressed payload data. Does not include any // Length is the size of the uncompressed payload data. Does not include any

vendor/google.golang.org/grpc/status/status.go vendored

@ -50,7 +50,7 @@ func New(c codes.Code, msg string) *Status {
} }
// Newf returns New(c, fmt.Sprintf(format, a...)). // Newf returns New(c, fmt.Sprintf(format, a...)).
func Newf(c codes.Code, format string, a ...interface{}) *Status { func Newf(c codes.Code, format string, a ...any) *Status {
return New(c, fmt.Sprintf(format, a...)) return New(c, fmt.Sprintf(format, a...))
} }
@ -60,7 +60,7 @@ func Error(c codes.Code, msg string) error {
} }
// Errorf returns Error(c, fmt.Sprintf(format, a...)). // Errorf returns Error(c, fmt.Sprintf(format, a...)).
func Errorf(c codes.Code, format string, a ...interface{}) error { func Errorf(c codes.Code, format string, a ...any) error {
return Error(c, fmt.Sprintf(format, a...)) return Error(c, fmt.Sprintf(format, a...))
} }
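For reference, Errorf is the usual way handlers surface gRPC errors; a brief usage sketch (the lookup function and messages are illustrative):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func lookup(id string) (string, error) {
	if id == "" {
		// With no format verbs this is equivalent to
		// status.Error(codes.InvalidArgument, "empty id").
		return "", status.Errorf(codes.InvalidArgument, "empty id")
	}
	return "", status.Errorf(codes.NotFound, "no record for %q", id)
}

func main() {
	_, err := lookup("abc-123")
	fmt.Println(status.Code(err), err)
	// prints: NotFound rpc error: code = NotFound desc = no record for "abc-123"
}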
@ -99,25 +99,27 @@ func FromError(err error) (s *Status, ok bool) {
} }
type grpcstatus interface{ GRPCStatus() *Status } type grpcstatus interface{ GRPCStatus() *Status }
if gs, ok := err.(grpcstatus); ok { if gs, ok := err.(grpcstatus); ok {
if gs.GRPCStatus() == nil { grpcStatus := gs.GRPCStatus()
if grpcStatus == nil {
// Error has status nil, which maps to codes.OK. There // Error has status nil, which maps to codes.OK. There
// is no sensible behavior for this, so we turn it into // is no sensible behavior for this, so we turn it into
// an error with codes.Unknown and discard the existing // an error with codes.Unknown and discard the existing
// status. // status.
return New(codes.Unknown, err.Error()), false return New(codes.Unknown, err.Error()), false
} }
return gs.GRPCStatus(), true return grpcStatus, true
} }
var gs grpcstatus var gs grpcstatus
if errors.As(err, &gs) { if errors.As(err, &gs) {
if gs.GRPCStatus() == nil { grpcStatus := gs.GRPCStatus()
if grpcStatus == nil {
// Error wraps an error that has status nil, which maps // Error wraps an error that has status nil, which maps
// to codes.OK. There is no sensible behavior for this, // to codes.OK. There is no sensible behavior for this,
// so we turn it into an error with codes.Unknown and // so we turn it into an error with codes.Unknown and
// discard the existing status. // discard the existing status.
return New(codes.Unknown, err.Error()), false return New(codes.Unknown, err.Error()), false
} }
p := gs.GRPCStatus().Proto() p := grpcStatus.Proto()
p.Message = err.Error() p.Message = err.Error()
return status.FromProto(p), true return status.FromProto(p), true
} }
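The rewritten branches call GRPCStatus() once and reuse the result, guarding against implementations that return a nil *Status. A sketch of the caller-facing behavior, assuming a hypothetical application error type that implements GRPCStatus():

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// quotaErr is a hypothetical application error carrying its own Status.
type quotaErr struct{ user string }

func (e *quotaErr) Error() string { return "quota exceeded for " + e.user }

func (e *quotaErr) GRPCStatus() *status.Status {
	return status.New(codes.ResourceExhausted, e.Error())
}

func main() {
	err := fmt.Errorf("calling backend: %w", &quotaErr{user: "alice"})

	// FromError unwraps via errors.As, finds GRPCStatus(), and (per the code
	// above) returns that status with the outer error text as its message.
	if s, ok := status.FromError(err); ok {
		fmt.Println(s.Code(), s.Message())
		// prints: ResourceExhausted calling backend: quota exceeded for alice
	}
}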

vendor/google.golang.org/grpc/stream.go vendored

@ -31,6 +31,7 @@ import (
"google.golang.org/grpc/balancer" "google.golang.org/grpc/balancer"
"google.golang.org/grpc/codes" "google.golang.org/grpc/codes"
"google.golang.org/grpc/encoding" "google.golang.org/grpc/encoding"
"google.golang.org/grpc/internal"
"google.golang.org/grpc/internal/balancerload" "google.golang.org/grpc/internal/balancerload"
"google.golang.org/grpc/internal/binarylog" "google.golang.org/grpc/internal/binarylog"
"google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/channelz"
@ -54,7 +55,7 @@ import (
// status package, or be one of the context errors. Otherwise, gRPC will use // status package, or be one of the context errors. Otherwise, gRPC will use
// codes.Unknown as the status code and err.Error() as the status message of the // codes.Unknown as the status code and err.Error() as the status message of the
// RPC. // RPC.
type StreamHandler func(srv interface{}, stream ServerStream) error type StreamHandler func(srv any, stream ServerStream) error
// StreamDesc represents a streaming RPC service's method specification. Used // StreamDesc represents a streaming RPC service's method specification. Used
// on the server when registering services and on the client when initiating // on the server when registering services and on the client when initiating
@ -79,9 +80,9 @@ type Stream interface {
// Deprecated: See ClientStream and ServerStream documentation instead. // Deprecated: See ClientStream and ServerStream documentation instead.
Context() context.Context Context() context.Context
// Deprecated: See ClientStream and ServerStream documentation instead. // Deprecated: See ClientStream and ServerStream documentation instead.
SendMsg(m interface{}) error SendMsg(m any) error
// Deprecated: See ClientStream and ServerStream documentation instead. // Deprecated: See ClientStream and ServerStream documentation instead.
RecvMsg(m interface{}) error RecvMsg(m any) error
} }
// ClientStream defines the client-side behavior of a streaming RPC. // ClientStream defines the client-side behavior of a streaming RPC.
@ -90,7 +91,9 @@ type Stream interface {
// status package. // status package.
type ClientStream interface { type ClientStream interface {
// Header returns the header metadata received from the server if there // Header returns the header metadata received from the server if there
// is any. It blocks if the metadata is not ready to read. // is any. It blocks if the metadata is not ready to read. If the metadata
// is nil and the error is also nil, then the stream was terminated without
// headers, and the status can be discovered by calling RecvMsg.
Header() (metadata.MD, error) Header() (metadata.MD, error)
// Trailer returns the trailer metadata from the server, if there is any. // Trailer returns the trailer metadata from the server, if there is any.
// It must only be called after stream.CloseAndRecv has returned, or // It must only be called after stream.CloseAndRecv has returned, or
@ -126,7 +129,7 @@ type ClientStream interface {
// //
// It is not safe to modify the message after calling SendMsg. Tracing // It is not safe to modify the message after calling SendMsg. Tracing
// libraries and stats handlers may use the message lazily. // libraries and stats handlers may use the message lazily.
SendMsg(m interface{}) error SendMsg(m any) error
// RecvMsg blocks until it receives a message into m or the stream is // RecvMsg blocks until it receives a message into m or the stream is
// done. It returns io.EOF when the stream completes successfully. On // done. It returns io.EOF when the stream completes successfully. On
// any other error, the stream is aborted and the error contains the RPC // any other error, the stream is aborted and the error contains the RPC
@ -135,7 +138,7 @@ type ClientStream interface {
// It is safe to have a goroutine calling SendMsg and another goroutine // It is safe to have a goroutine calling SendMsg and another goroutine
// calling RecvMsg on the same stream at the same time, but it is not // calling RecvMsg on the same stream at the same time, but it is not
// safe to call RecvMsg on the same stream in different goroutines. // safe to call RecvMsg on the same stream in different goroutines.
RecvMsg(m interface{}) error RecvMsg(m any) error
} }
// NewStream creates a new Stream for the client side. This is typically // NewStream creates a new Stream for the client side. This is typically
@ -155,11 +158,6 @@ type ClientStream interface {
// If none of the above happen, a goroutine and a context will be leaked, and grpc // If none of the above happen, a goroutine and a context will be leaked, and grpc
// will not call the optionally-configured stats handler with a stats.End message. // will not call the optionally-configured stats handler with a stats.End message.
func (cc *ClientConn) NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error) { func (cc *ClientConn) NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error) {
if err := cc.idlenessMgr.onCallBegin(); err != nil {
return nil, err
}
defer cc.idlenessMgr.onCallEnd()
// allow interceptor to see all applicable call options, which means those // allow interceptor to see all applicable call options, which means those
// configured as defaults from dial option as well as per-call options // configured as defaults from dial option as well as per-call options
opts = combine(cc.dopts.callOptions, opts) opts = combine(cc.dopts.callOptions, opts)
@ -176,6 +174,16 @@ func NewClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
} }
func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (_ ClientStream, err error) { func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (_ ClientStream, err error) {
// Start tracking the RPC for idleness purposes. This is where a stream is
// created for both streaming and unary RPCs, and hence is a good place to
// track active RPC count.
if err := cc.idlenessMgr.OnCallBegin(); err != nil {
return nil, err
}
// Add a calloption, to decrement the active call count, that gets executed
// when the RPC completes.
opts = append([]CallOption{OnFinish(func(error) { cc.idlenessMgr.OnCallEnd() })}, opts...)
if md, added, ok := metadata.FromOutgoingContextRaw(ctx); ok { if md, added, ok := metadata.FromOutgoingContextRaw(ctx); ok {
// validate md // validate md
if err := imetadata.Validate(md); err != nil { if err := imetadata.Validate(md); err != nil {
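The idleness bookkeeping above reuses the OnFinish call option, which is also public (experimental) API; a hedged sketch of how application code can use the same hook (the method name in the comment is a placeholder):

package example

import (
	"log"
	"time"

	"google.golang.org/grpc"
)

// timedCall returns a CallOption that logs how long an RPC took. Pass it to
// any generated client method, e.g.
//
//	resp, err := client.GetFeature(ctx, req, timedCall("GetFeature"))
//
// (client, GetFeature, ctx and req are placeholders for generated-stub code.)
func timedCall(name string) grpc.CallOption {
	start := time.Now()
	// OnFinish runs once when the RPC completes — the same hook the stream
	// code above uses to decrement the connection's active-call count.
	return grpc.OnFinish(func(err error) {
		log.Printf("%s finished in %v, err=%v", name, time.Since(start), err)
	})
}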
@ -433,7 +441,7 @@ func (cs *clientStream) newAttemptLocked(isTransparent bool) (*csAttempt, error)
ctx = trace.NewContext(ctx, trInfo.tr) ctx = trace.NewContext(ctx, trInfo.tr)
} }
if cs.cc.parsedTarget.URL.Scheme == "xds" { if cs.cc.parsedTarget.URL.Scheme == internal.GRPCResolverSchemeExtraMetadata {
// Add extra metadata (metadata that will be added by transport) to context // Add extra metadata (metadata that will be added by transport) to context
// so the balancer can see them. // so the balancer can see them.
ctx = grpcutil.WithExtraMetadata(ctx, metadata.Pairs( ctx = grpcutil.WithExtraMetadata(ctx, metadata.Pairs(
@ -507,7 +515,7 @@ func (a *csAttempt) newStream() error {
return toRPCErr(nse.Err) return toRPCErr(nse.Err)
} }
a.s = s a.s = s
a.p = &parser{r: s} a.p = &parser{r: s, recvBufferPool: a.cs.cc.dopts.recvBufferPool}
return nil return nil
} }
@ -788,23 +796,24 @@ func (cs *clientStream) withRetry(op func(a *csAttempt) error, onSuccess func())
func (cs *clientStream) Header() (metadata.MD, error) { func (cs *clientStream) Header() (metadata.MD, error) {
var m metadata.MD var m metadata.MD
noHeader := false
err := cs.withRetry(func(a *csAttempt) error { err := cs.withRetry(func(a *csAttempt) error {
var err error var err error
m, err = a.s.Header() m, err = a.s.Header()
if err == transport.ErrNoHeaders {
noHeader = true
return nil
}
return toRPCErr(err) return toRPCErr(err)
}, cs.commitAttemptLocked) }, cs.commitAttemptLocked)
if err != nil { if m == nil && err == nil {
cs.finish(err) // The stream ended with success. Finish the clientStream.
return nil, err err = io.EOF
} }
if len(cs.binlogs) != 0 && !cs.serverHeaderBinlogged && !noHeader { if err != nil {
cs.finish(err)
// Do not return the error. The user should get it by calling Recv().
return nil, nil
}
if len(cs.binlogs) != 0 && !cs.serverHeaderBinlogged && m != nil {
// Only log if binary log is on and header has not been logged, and // Only log if binary log is on and header has not been logged, and
// there is actually headers to log. // there is actually headers to log.
logEntry := &binarylog.ServerHeader{ logEntry := &binarylog.ServerHeader{
@ -820,6 +829,7 @@ func (cs *clientStream) Header() (metadata.MD, error) {
binlog.Log(cs.ctx, logEntry) binlog.Log(cs.ctx, logEntry)
} }
} }
return m, nil return m, nil
} }
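Under the revised contract, Header may return (nil, nil) when the stream ended before any headers arrived, and the caller learns the outcome from RecvMsg. A hedged client-side sketch (the stream would normally come from a generated stub):

package example

import (
	"log"

	"google.golang.org/grpc"
)

// readHeader shows how callers should treat Header() under the behavior
// implemented above; cs would normally come from a generated streaming stub.
func readHeader(cs grpc.ClientStream, resp any) {
	md, err := cs.Header()
	switch {
	case err != nil:
		log.Printf("header error: %v", err)
	case md == nil:
		// The stream terminated before sending headers; the final status
		// (or io.EOF on clean termination) is reported by RecvMsg.
		log.Printf("no headers; RecvMsg reported: %v", cs.RecvMsg(resp))
	default:
		log.Printf("headers: %v", md)
	}
}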
@ -860,7 +870,7 @@ func (cs *clientStream) bufferForRetryLocked(sz int, op func(a *csAttempt) error
cs.buffer = append(cs.buffer, op) cs.buffer = append(cs.buffer, op)
} }
func (cs *clientStream) SendMsg(m interface{}) (err error) { func (cs *clientStream) SendMsg(m any) (err error) {
defer func() { defer func() {
if err != nil && err != io.EOF { if err != nil && err != io.EOF {
// Call finish on the client stream for errors generated by this SendMsg // Call finish on the client stream for errors generated by this SendMsg
@ -904,7 +914,7 @@ func (cs *clientStream) SendMsg(m interface{}) (err error) {
return err return err
} }
func (cs *clientStream) RecvMsg(m interface{}) error { func (cs *clientStream) RecvMsg(m any) error {
if len(cs.binlogs) != 0 && !cs.serverHeaderBinlogged { if len(cs.binlogs) != 0 && !cs.serverHeaderBinlogged {
// Call Header() to binary log header if it's not already logged. // Call Header() to binary log header if it's not already logged.
cs.Header() cs.Header()
@ -928,24 +938,6 @@ func (cs *clientStream) RecvMsg(m interface{}) error {
if err != nil || !cs.desc.ServerStreams { if err != nil || !cs.desc.ServerStreams {
// err != nil or non-server-streaming indicates end of stream. // err != nil or non-server-streaming indicates end of stream.
cs.finish(err) cs.finish(err)
if len(cs.binlogs) != 0 {
// finish will not log Trailer. Log Trailer here.
logEntry := &binarylog.ServerTrailer{
OnClientSide: true,
Trailer: cs.Trailer(),
Err: err,
}
if logEntry.Err == io.EOF {
logEntry.Err = nil
}
if peer, ok := peer.FromContext(cs.Context()); ok {
logEntry.PeerAddr = peer.Addr
}
for _, binlog := range cs.binlogs {
binlog.Log(cs.ctx, logEntry)
}
}
} }
return err return err
} }
@ -1001,18 +993,30 @@ func (cs *clientStream) finish(err error) {
} }
} }
} }
cs.mu.Unlock() cs.mu.Unlock()
// For binary logging. only log cancel in finish (could be caused by RPC ctx // Only one of cancel or trailer needs to be logged.
// canceled or ClientConn closed). Trailer will be logged in RecvMsg. if len(cs.binlogs) != 0 {
// switch err {
// Only one of cancel or trailer needs to be logged. In the cases where case errContextCanceled, errContextDeadline, ErrClientConnClosing:
// users don't call RecvMsg, users must have already canceled the RPC. c := &binarylog.Cancel{
if len(cs.binlogs) != 0 && status.Code(err) == codes.Canceled { OnClientSide: true,
c := &binarylog.Cancel{ }
OnClientSide: true, for _, binlog := range cs.binlogs {
} binlog.Log(cs.ctx, c)
for _, binlog := range cs.binlogs { }
binlog.Log(cs.ctx, c) default:
logEntry := &binarylog.ServerTrailer{
OnClientSide: true,
Trailer: cs.Trailer(),
Err: err,
}
if peer, ok := peer.FromContext(cs.Context()); ok {
logEntry.PeerAddr = peer.Addr
}
for _, binlog := range cs.binlogs {
binlog.Log(cs.ctx, logEntry)
}
} }
} }
if err == nil { if err == nil {
@ -1028,7 +1032,7 @@ func (cs *clientStream) finish(err error) {
cs.cancel() cs.cancel()
} }
func (a *csAttempt) sendMsg(m interface{}, hdr, payld, data []byte) error { func (a *csAttempt) sendMsg(m any, hdr, payld, data []byte) error {
cs := a.cs cs := a.cs
if a.trInfo != nil { if a.trInfo != nil {
a.mu.Lock() a.mu.Lock()
@ -1055,7 +1059,7 @@ func (a *csAttempt) sendMsg(m interface{}, hdr, payld, data []byte) error {
return nil return nil
} }
func (a *csAttempt) recvMsg(m interface{}, payInfo *payloadInfo) (err error) { func (a *csAttempt) recvMsg(m any, payInfo *payloadInfo) (err error) {
cs := a.cs cs := a.cs
if len(a.statsHandlers) != 0 && payInfo == nil { if len(a.statsHandlers) != 0 && payInfo == nil {
payInfo = &payloadInfo{} payInfo = &payloadInfo{}
@ -1270,7 +1274,7 @@ func newNonRetryClientStream(ctx context.Context, desc *StreamDesc, method strin
return nil, err return nil, err
} }
as.s = s as.s = s
as.p = &parser{r: s} as.p = &parser{r: s, recvBufferPool: ac.dopts.recvBufferPool}
ac.incrCallsStarted() ac.incrCallsStarted()
if desc != unaryStreamDesc { if desc != unaryStreamDesc {
// Listen on stream context to cleanup when the stream context is // Listen on stream context to cleanup when the stream context is
@ -1348,7 +1352,7 @@ func (as *addrConnStream) Context() context.Context {
return as.s.Context() return as.s.Context()
} }
func (as *addrConnStream) SendMsg(m interface{}) (err error) { func (as *addrConnStream) SendMsg(m any) (err error) {
defer func() { defer func() {
if err != nil && err != io.EOF { if err != nil && err != io.EOF {
// Call finish on the client stream for errors generated by this SendMsg // Call finish on the client stream for errors generated by this SendMsg
@ -1393,7 +1397,7 @@ func (as *addrConnStream) SendMsg(m interface{}) (err error) {
return nil return nil
} }
func (as *addrConnStream) RecvMsg(m interface{}) (err error) { func (as *addrConnStream) RecvMsg(m any) (err error) {
defer func() { defer func() {
if err != nil || !as.desc.ServerStreams { if err != nil || !as.desc.ServerStreams {
// err != nil or non-server-streaming indicates end of stream. // err != nil or non-server-streaming indicates end of stream.
@ -1512,7 +1516,7 @@ type ServerStream interface {
// //
// It is not safe to modify the message after calling SendMsg. Tracing // It is not safe to modify the message after calling SendMsg. Tracing
// libraries and stats handlers may use the message lazily. // libraries and stats handlers may use the message lazily.
SendMsg(m interface{}) error SendMsg(m any) error
// RecvMsg blocks until it receives a message into m or the stream is // RecvMsg blocks until it receives a message into m or the stream is
// done. It returns io.EOF when the client has performed a CloseSend. On // done. It returns io.EOF when the client has performed a CloseSend. On
// any non-EOF error, the stream is aborted and the error contains the // any non-EOF error, the stream is aborted and the error contains the
@ -1521,7 +1525,7 @@ type ServerStream interface {
// It is safe to have a goroutine calling SendMsg and another goroutine // It is safe to have a goroutine calling SendMsg and another goroutine
// calling RecvMsg on the same stream at the same time, but it is not // calling RecvMsg on the same stream at the same time, but it is not
// safe to call RecvMsg on the same stream in different goroutines. // safe to call RecvMsg on the same stream in different goroutines.
RecvMsg(m interface{}) error RecvMsg(m any) error
} }
// serverStream implements a server side Stream. // serverStream implements a server side Stream.
@ -1602,7 +1606,7 @@ func (ss *serverStream) SetTrailer(md metadata.MD) {
ss.s.SetTrailer(md) ss.s.SetTrailer(md)
} }
func (ss *serverStream) SendMsg(m interface{}) (err error) { func (ss *serverStream) SendMsg(m any) (err error) {
defer func() { defer func() {
if ss.trInfo != nil { if ss.trInfo != nil {
ss.mu.Lock() ss.mu.Lock()
@ -1610,7 +1614,7 @@ func (ss *serverStream) SendMsg(m interface{}) (err error) {
if err == nil { if err == nil {
ss.trInfo.tr.LazyLog(&payload{sent: true, msg: m}, true) ss.trInfo.tr.LazyLog(&payload{sent: true, msg: m}, true)
} else { } else {
ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true)
ss.trInfo.tr.SetError() ss.trInfo.tr.SetError()
} }
} }
@ -1677,7 +1681,7 @@ func (ss *serverStream) SendMsg(m interface{}) (err error) {
return nil return nil
} }
func (ss *serverStream) RecvMsg(m interface{}) (err error) { func (ss *serverStream) RecvMsg(m any) (err error) {
defer func() { defer func() {
if ss.trInfo != nil { if ss.trInfo != nil {
ss.mu.Lock() ss.mu.Lock()
@ -1685,7 +1689,7 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) {
if err == nil { if err == nil {
ss.trInfo.tr.LazyLog(&payload{sent: false, msg: m}, true) ss.trInfo.tr.LazyLog(&payload{sent: false, msg: m}, true)
} else if err != io.EOF { } else if err != io.EOF {
ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true) ss.trInfo.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true)
ss.trInfo.tr.SetError() ss.trInfo.tr.SetError()
} }
} }
@ -1757,7 +1761,7 @@ func MethodFromServerStream(stream ServerStream) (string, bool) {
// prepareMsg returns the hdr, payload and data // prepareMsg returns the hdr, payload and data
// using the compressors passed or using the // using the compressors passed or using the
// passed preparedmsg // passed preparedmsg
func prepareMsg(m interface{}, codec baseCodec, cp Compressor, comp encoding.Compressor) (hdr, payload, data []byte, err error) { func prepareMsg(m any, codec baseCodec, cp Compressor, comp encoding.Compressor) (hdr, payload, data []byte, err error) {
if preparedMsg, ok := m.(*PreparedMsg); ok { if preparedMsg, ok := m.(*PreparedMsg); ok {
return preparedMsg.hdr, preparedMsg.payload, preparedMsg.encodedData, nil return preparedMsg.hdr, preparedMsg.payload, preparedMsg.encodedData, nil
} }

vendor/google.golang.org/grpc/trace.go vendored

@ -97,8 +97,8 @@ func truncate(x string, l int) string {
// payload represents an RPC request or response payload. // payload represents an RPC request or response payload.
type payload struct { type payload struct {
sent bool // whether this is an outgoing payload sent bool // whether this is an outgoing payload
msg interface{} // e.g. a proto.Message msg any // e.g. a proto.Message
// TODO(dsymonds): add stringifying info to codec, and limit how much we hold here? // TODO(dsymonds): add stringifying info to codec, and limit how much we hold here?
} }
@ -111,7 +111,7 @@ func (p payload) String() string {
type fmtStringer struct { type fmtStringer struct {
format string format string
a []interface{} a []any
} }
func (f *fmtStringer) String() string { func (f *fmtStringer) String() string {

vendor/google.golang.org/grpc/version.go vendored

@ -19,4 +19,4 @@
package grpc package grpc
// Version is the current grpc version. // Version is the current grpc version.
const Version = "1.56.3" const Version = "1.58.3"

vendor/google.golang.org/grpc/vet.sh vendored

@ -84,6 +84,9 @@ not git grep -l 'x/net/context' -- "*.go"
# thread safety. # thread safety.
git grep -l '"math/rand"' -- "*.go" 2>&1 | not grep -v '^examples\|^stress\|grpcrand\|^benchmark\|wrr_test' git grep -l '"math/rand"' -- "*.go" 2>&1 | not grep -v '^examples\|^stress\|grpcrand\|^benchmark\|wrr_test'
# - Do not use "interface{}"; use "any" instead.
git grep -l 'interface{}' -- "*.go" 2>&1 | not grep -v '\.pb\.go\|protoc-gen-go-grpc'
# - Do not call grpclog directly. Use grpclog.Component instead. # - Do not call grpclog directly. Use grpclog.Component instead.
git grep -l -e 'grpclog.I' --or -e 'grpclog.W' --or -e 'grpclog.E' --or -e 'grpclog.F' --or -e 'grpclog.V' -- "*.go" | not grep -v '^grpclog/component.go\|^internal/grpctest/tlogger_test.go' git grep -l -e 'grpclog.I' --or -e 'grpclog.W' --or -e 'grpclog.E' --or -e 'grpclog.F' --or -e 'grpclog.V' -- "*.go" | not grep -v '^grpclog/component.go\|^internal/grpctest/tlogger_test.go'
@ -106,7 +109,7 @@ for MOD_FILE in $(find . -name 'go.mod'); do
goimports -l . 2>&1 | not grep -vE "\.pb\.go" goimports -l . 2>&1 | not grep -vE "\.pb\.go"
golint ./... 2>&1 | not grep -vE "/grpc_testing_not_regenerate/.*\.pb\.go:" golint ./... 2>&1 | not grep -vE "/grpc_testing_not_regenerate/.*\.pb\.go:"
go mod tidy -compat=1.17 go mod tidy -compat=1.19
git status --porcelain 2>&1 | fail_on_output || \ git status --porcelain 2>&1 | fail_on_output || \
(git status; git --no-pager diff; exit 1) (git status; git --no-pager diff; exit 1)
popd popd
@ -168,8 +171,6 @@ proto.RegisteredExtension is deprecated
proto.RegisteredExtensions is deprecated proto.RegisteredExtensions is deprecated
proto.RegisterMapType is deprecated proto.RegisterMapType is deprecated
proto.Unmarshaler is deprecated proto.Unmarshaler is deprecated
resolver.Backend
resolver.GRPCLB
Target is deprecated: Use the Target field in the BuildOptions instead. Target is deprecated: Use the Target field in the BuildOptions instead.
xxx_messageInfo_ xxx_messageInfo_
' "${SC_OUT}" ' "${SC_OUT}"

7 vendor/modules.txt vendored

@ -368,11 +368,11 @@ golang.org/x/tools/internal/pkgbits
golang.org/x/tools/internal/tokeninternal golang.org/x/tools/internal/tokeninternal
golang.org/x/tools/internal/typeparams golang.org/x/tools/internal/typeparams
golang.org/x/tools/internal/typesinternal golang.org/x/tools/internal/typesinternal
# google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 # google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98
## explicit; go 1.19 ## explicit; go 1.19
google.golang.org/genproto/googleapis/rpc/status google.golang.org/genproto/googleapis/rpc/status
# google.golang.org/grpc v1.56.3 # google.golang.org/grpc v1.58.3
## explicit; go 1.17 ## explicit; go 1.19
google.golang.org/grpc google.golang.org/grpc
google.golang.org/grpc/attributes google.golang.org/grpc/attributes
google.golang.org/grpc/backoff google.golang.org/grpc/backoff
@ -402,6 +402,7 @@ google.golang.org/grpc/internal/grpclog
google.golang.org/grpc/internal/grpcrand google.golang.org/grpc/internal/grpcrand
google.golang.org/grpc/internal/grpcsync google.golang.org/grpc/internal/grpcsync
google.golang.org/grpc/internal/grpcutil google.golang.org/grpc/internal/grpcutil
google.golang.org/grpc/internal/idle
google.golang.org/grpc/internal/metadata google.golang.org/grpc/internal/metadata
google.golang.org/grpc/internal/pretty google.golang.org/grpc/internal/pretty
google.golang.org/grpc/internal/resolver google.golang.org/grpc/internal/resolver