I was trying to get a feel for what the rest of the Python ecosystem
does with its logging, so I looked into the top few libraries on PyPI:
urllib3 maintains a logger for not quite every module, but for each
one that does heavy lifting. The logger name is `__name__`, and no
handlers (not even a `NullHandler`) are registered on any
module-level logger. Their documentation spells out how to configure
logging for the library.
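For illustration, a minimal sketch of that per-module pattern (the function and message here are hypothetical, not urllib3's code):

```python
import logging

# Module-level logger named after the module, e.g. "urllib3.connectionpool".
# No handlers are attached here; output appears only if the application
# configures logging for the "urllib3" hierarchy.
log = logging.getLogger(__name__)

def fetch(url):
    log.debug("fetching %s", url)
```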
They explicitly configure a root logger for the library, named
`urllib3`, and attach a `NullHandler` to it. This addresses the "no
handlers could be found" problem.
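The idiom itself is one line; this is the standard library-logging pattern from the `logging` docs, shown here with urllib3's logger name:

```python
import logging

# Attaching a NullHandler to the library's top-level logger means an
# application that never configures logging won't see the old
# "No handlers could be found for logger 'urllib3'" warning.
logging.getLogger("urllib3").addHandler(logging.NullHandler())
```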
Their tests explicitly configure handlers, just like ours do.
scrapy is more hands-on. It provides a configuration module for its
logging and a whole document on how to handle logging with scrapy. It
looks like log.py's whole reason for existence is making sure that a
handler is attached to the scrapy logger at startup.
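A hedged sketch of that startup pattern (this is the general shape, not scrapy's actual code; the settings keys are illustrative):

```python
import logging

def configure_logging(settings):
    # Attach a real handler to the framework's top-level logger at startup,
    # driven by user settings, so CLI users get output without writing Python.
    logger = logging.getLogger("scrapy")
    if not logger.handlers:  # attach at most once
        handler = logging.StreamHandler()
        handler.setLevel(settings.get("LOG_LEVEL", "DEBUG"))
        handler.setFormatter(logging.Formatter(
            settings.get("LOG_FORMAT",
                         "%(asctime)s [%(name)s] %(levelname)s: %(message)s")))
        logger.addHandler(handler)

configure_logging({"LOG_LEVEL": "INFO"})
```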
I think the extra complexity here is because scrapy also offers a CLI,
so there has to be some way to configure logging without writing
Python. Since we don't have that constraint, I don't think we need
this added complexity.
---
Based on all of the libraries I've looked at, I think our current
approach is reasonable. The one change I would make is to explicitly
configure a `grpc` logger and attach a `NullHandler` to it alone,
rather than putting the burden on the author of each new module to
do so.
With this change (see the sketch after this list), I have:
- Configured a logger in each module that cares about logging
- Removed all NullHandlers attached to module-level loggers
- Explicitly configured a `grpc` logger with a `NullHandler` attached
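A minimal sketch of the resulting layout (the file paths are illustrative):

```python
# grpc/__init__.py (illustrative path): the single NullHandler for the
# whole package, so importing grpc never emits the "no handlers could
# be found" warning and nothing is printed unless the application opts in.
import logging

logging.getLogger("grpc").addHandler(logging.NullHandler())

# grpc/_channel.py (illustrative path): each module that cares about
# logging just creates its own logger; no handlers attached here.
_LOGGER = logging.getLogger(__name__)
```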
Resolves: #16572
Related: #17064
---
In TensorFlow, the RPC client thread is not actively released; it
relies on process teardown for cleanup. If the process has already
cleaned up the global variable (g_default_client_callbacks) and the
client then issues an RPC call containing a ClientContext, a pure
virtual function call error is reported when the ClientContext
destructor runs.
Specifically: if a request-handling thread is in flight but scheduled
out when shutdown is called on the server, and it has already passed
the shutdown check, then when it resumes it will add a grpc_call to
the completion queue that is leaked. We fix this by explicitly freeing
such calls after all worker threads have shut down.
To reproduce the leak, run the end2end::ClientCancelsRequestStream
test repeatedly against the unpatched server implementation; the leak
manifests roughly 0.5% of the time.
---
* Use templates instead of generating them every time
* Change the theme
* Add grpc_* modules
* Group APIs
* Omit documentation for class members without a docstring
* Add a docstring for the status code