the current grpclb-specific load reporting from the implementation.
The changes here also mean that the plugin will get only the initial
response from the balancer and no subsequent responses.
and get rid of the dependency loop on the grpc shutdown path. Make sure
all background closures are complete before shutting down the other grpc
modules.
Avoid using the backup poller in TCP endpoints if using the background
poller.
resolution tracking to remove references to the policy type (RR):
- Change RoundRobin to Child and rr_ to child_
- Change the function names to remove references to RR.
- Change 'grpclb' to 'xds' where the parent policy is referenced.
Since retry_dispatched is next to the metadata bitfields, the compiler
will generate a 2-byte store when we set this bitfield, which also
rewrites the adjacent metadata bitfields and therefore races with any
thread touching them concurrently.
Move this field to the end of the structure to avoid such a data race.
Fixes #16896
This commit resolves an interop test currently failing on master. Over
the past couple of weeks, bazel-based tests have been introduced. The
current setup does not appear to automatically handle transitive
dependencies. Instead, transitive dependencies such as `requests` have
been manually added to `requirements.bazel.txt`. It appears that the
build server happened to have the dependencies of the `requests` library
installed already, but later had a configuration wipe.
This was compounded by the google-auth library erroneously reporting
that the `requests` module itself was not installed. In fact, it was the
transitive dependencies of `requests` (third-order dependencies of our
test) that were not being installed by the build file.
I consider this a quick fix to get the build passing. In the long run,
we need to automatically resolve and install transitive dependencies in
our bazel builds.
Resolves: #17170
I was trying to get a feel for what the rest of the python ecosystem
does with its logging, so I looked into the top few libraries on pypi:
urllib3 maintains a logger for not quite every module, but for each
one that does heavy lifting. The logger name is `__name__`, and no
handlers, not even a `NullHandler`, are registered on any module-level
logger. Their documentation spells out how to configure logging
for the library.
They explicitly configure a library root-level logger called `urllib3`
to which they attach a `NullHandler`. This addresses the "no handlers
could be found" problem.
Their tests explicitly configure handlers, just like ours do.
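A rough sketch of the urllib3-style arrangement described above (where exactly each line lives is my assumption; urllib3's real code may differ):

```python
import logging

# Library root logger: attach a NullHandler once so applications that never
# configure logging don't see "No handlers could be found" warnings.
logging.getLogger("urllib3").addHandler(logging.NullHandler())

# In any module that does heavy lifting: a plain module-level logger named
# __name__, with no handlers of its own; records propagate up to "urllib3".
log = logging.getLogger(__name__)
log.debug("Starting new connection")  # silent unless the app configures logging
```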
scrapy is more hands-on. It provides a configuration module for its
logging and a whole document on how to handle logging with scrapy. It
looks like log.py's whole reason for existence is making sure that a
handler is attached to the scrapy logger at startup.
I think the extra complexity here is because scrapy also offers a CLI,
so there has to be some way to configure logging without resorting to
writing Python. I didn't feel we needed to take on that added
complexity.
---
Based on all of the libraries I've looked at, I think our current
approach is reasonable. The one change I would make is to explicitly
configure a `grpc` logger and attach a `NullHandler` only to it,
instead of putting the burden on the author of each new module to
configure one there.
With this change, I have
- Configured a logger in each module that cares about logging
- Removed all `NullHandler`s attached to module-level loggers
- Explicitly configured a `grpc` logger with a `NullHandler` attached (see the sketch after this list)
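A minimal sketch of that layout, assuming the `NullHandler` is attached where the `grpc` package is initialized (module names are illustrative):

```python
import logging

# grpc/__init__.py: the only place a handler is attached. The NullHandler on
# the package root logger suppresses "no handlers could be found" warnings
# while leaving real logging configuration to the application.
logging.getLogger("grpc").addHandler(logging.NullHandler())

# grpc/_channel.py (or any other module that logs): just create a
# module-level logger; it propagates up to the "grpc" logger, so no
# per-module NullHandler is needed.
_LOGGER = logging.getLogger(__name__)
_LOGGER.debug("channel created")
```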
Resolves: #16572
Related: #17064
In TensorFlow, the RPC client thread is not actively released; it
relies on process exit for cleanup. If the process has already cleaned
up the global variable (g_default_client_callbacks) and the client then
issues an RPC call that contains a ClientContext, a pure virtual
function call error is reported once the ClientContext destructor runs.