This commit fixes an interop test that is currently failing on master. Over
the past couple of weeks, bazel-based tests have been introduced. The
current setup does not appear to automatically handle transitive
dependencies. Instead, transitive dependencies such as `requests` have
been manually added to `requirements.bazel.txt`. It appears that the
build server happened to have the dependencies of the `requests` library
installed already, but later had a configuration wipe.
This was compounded by the google-auth library erroneously reporting
that the `requests` module itself was not installed. In fact, it was the
transitive dependencies of `requests` (third-order dependencies of our
test) that were not being installed by the build file.
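Concretely, the stopgap is to list the dependencies of `requests` in
`requirements.bazel.txt` alongside it. A rough illustration of what the
manual entries look like (the pins shown here are illustrative, not the
exact versions in the file):

```
# requirements.bazel.txt (illustrative pins)
requests>=2.14.2
# transitive dependencies of requests, spelled out by hand for now
certifi>=2017.4.17
chardet>=3.0.2
idna>=2.5
urllib3>=1.21.1
```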
I consider this a quick fix to get the build passing. In the long run,
we need to automatically resolve and install transitive dependencies in
our bazel builds.
Resolves: #17170
I was trying to get a feel for what the rest of the Python ecosystem
does with its logging, so I looked into the top few libraries on PyPI:
urllib3 maintains a logger for not quite every module, but for each
one that does heavy lifting. The logger name is `__name__`, and no
handlers, not even a `NullHandler`, are registered for the module-level
loggers. Their documentation spells out how to configure logging
for the library.
They explicitly configure a library root-level logger called `urllib3`
to which they attach a `NullHandler`. This addresses the "no handlers
could be found" problem.
Their tests explicitly configure handlers, just like ours do.
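A minimal sketch of that pattern, using a hypothetical `somepackage`
library (the names are placeholders, not urllib3's actual modules):

```python
import logging

# What the package __init__ does: attach the one and only NullHandler
# to the library's root logger, so applications that never configure
# logging don't see "no handlers could be found".
logging.getLogger("somepackage").addHandler(logging.NullHandler())

# What each heavy-lifting module does: create a logger named __name__
# (hard-coded here for the sketch) and register no handlers on it.
log = logging.getLogger("somepackage.connectionpool")

log.debug("silent unless the application configures logging")

# An application that wants the library's output opts in explicitly:
logging.basicConfig(level=logging.DEBUG)
log.debug("now this line is emitted")
```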
scrapy is more hands-on. It provides a configuration module for its
logging and a whole document on how to handle logging with scrapy. It
looks like log.py's whole reason for existence is making sure that a
handler is attached to the scrapy logger at startup.
I think the extra complexity here is because scrapy also offers a CLI,
so there has to be some way to configure logging without resorting to
writing Python. I felt we didn't need to take on that added
complexity.
---
Based on all of the libraries I've looked at, I think our current
approach is reasonable. The one change I would make is to explicitly
configure a `grpc` logger and attach a `NullHandler` only to it,
instead of putting the burden on the author of each new module to
configure one there.
With this change, I have
- Configured a logger in each module that cares about logging
- Removed all NullHandlers attached to module-level loggers
- Explicitly configured a `grpc` logger with a `NullHandler` attached (sketched below)
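A before/after sketch of what that looks like; the module name is
illustrative and the exact logger wiring in the tree may differ
slightly:

```python
import logging

# Before: each module attached its own NullHandler.
#     _LOGGER = logging.getLogger(__name__)
#     _LOGGER.addHandler(logging.NullHandler())   # now removed

# After: only grpc/__init__.py attaches a handler, to the `grpc` logger.
logging.getLogger("grpc").addHandler(logging.NullHandler())

# Each module that cares about logging just creates its own logger
# (i.e. logging.getLogger(__name__)) and leaves handlers alone.
_LOGGER = logging.getLogger("grpc._channel")
_LOGGER.debug("silent unless the application configures logging")
```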
Resolves: #16572
Related: #17064
In TensorFlow, the RPC client thread is not actively released; it
relies on the process to clean it up. If the process has already
cleaned up the global variable (g_default_client_callbacks) and the
client then issues an RPC call that contains a ClientContext, a pure
virtual function call error is reported once the ClientContext
destructor runs.
Specifically: if a request-handling thread has already passed the
shutdown check but is scheduled out when shutdown is called on the
server, then when it resumes it will add a grpc_call to the completion
queue that is leaked. We fix this by explicitly freeing such calls
after all worker threads have shut down.
To reproduce the leak, run the end2end::ClientCancelsRequestStream
test repeatedly against the unpatched server implementation. About
0.5% of the time, the leak will manifest.
* Use templates instead of generating them every time
* Theme changed
* Add grpc_* modules
* APIs grouped
* No documentation for class members without a docstring (see the sketch below)
* Add docstring for status code
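Assuming the docs are generated with Sphinx, the bullets above map
roughly onto `conf.py` settings like the following; the concrete values
are illustrative, not the exact ones committed:

```python
# docs conf.py -- illustrative Sphinx settings
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon']

templates_path = ['_templates']     # reuse checked-in templates
html_theme = 'sphinx_rtd_theme'     # theme changed (illustrative choice)

# Document members, but skip class members that have no docstring
# ('undoc-members' is deliberately not enabled).
autodoc_default_options = {
    'members': True,
}
```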
* Include unit tests covering the default, disabled, and enabled behaviour of the wait-for-ready mechanism (usage sketched below)
* Import flags constants from grpc_types.h
* Use WaitGroup to wait for TRANSIENT_FAILURE state in unit test
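For context, a minimal sketch of how the flag is exercised through the
Python API; the address, method path, and payload are placeholders, and
this is simpler than the actual unit tests:

```python
import grpc

channel = grpc.insecure_channel('localhost:50051')  # placeholder address

# A bare unary-unary callable; the method path and raw bytes payload
# stand in for a generated stub.
unary_call = channel.unary_unary('/example.Echo/UnaryCall')

# Default behaviour: the RPC fails fast while the channel is in
# TRANSIENT_FAILURE. With wait_for_ready=True it is queued until the
# channel becomes READY (or the deadline expires).
response = unary_call(b'payload', timeout=10, wait_for_ready=True)
```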
Using a conditional branch to set a function pointer prevents the
compiler from inlining the function and, worse, it would jump based on
a register value, killing the pipeline.
Here is a short example to demonstrate the variants:
https://godbolt.org/z/radfZg
Remove `add_nothing` and add a lambda when necessary to make sure
everything is inlined.
Make sure call_element[0..count] is filled before actually calling
`init_call_elem` on the filters, because those calls would otherwise
trash the cache.
This improves BM_StreamingPingPong<TCP, ...> by 3%.