* Expose the C-Core API in Cython layer
* Handle the object translation
* Create a separate package for Channelz specifically
* Handle nullptr and raise an exception when one is encountered
* Translate C++ Channelz unit tests
* Add 5 more invalid-query unit tests
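As a rough sketch of how the new package is meant to be consumed (assuming the `grpc_channelz.v1.channelz` module name this change introduces), the servicer is attached to an existing server and then queried like any other gRPC service:

```python
from concurrent import futures

import grpc
from grpc_channelz.v1 import channelz

# Attach the Channelz servicer to a server so its runtime debug data
# (channels, subchannels, sockets, servers) can be queried over gRPC.
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
channelz.add_channelz_servicer(server)
server.add_insecure_port("[::]:50051")
server.start()
```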
Add peripheral utilities for the grpcio-channelz package
* Add to `pylint_code.sh`
* Add to Python build script
* Add to artifact build script
* Add to Bazel
* Add to Sphinx module list
Beta code elements are going to be deprecated, and Bazel support is
much newer, so Bazel users should not end up depending on beta code
elements by accident. Preventing Bazel from building and including
beta code elements lets our tests pass without the grpcio target
depending on beta, and helps avoid pulling in that dependency
accidentally when using Bazel.
This commit resolves an interop test currently failing on master. Over
the past couple of weeks, bazel-based tests have been introduced. The
current setup does not appear to automatically handle transitive
dependencies. Instead, transitive dependencies such as `requests` have
been manually added to `requirements.bazel.txt`. It appears that the
build server happened to have the dependencies of the `requests` library
installed already, but later had a configuration wipe.
This was compounded by the google-auth library erroneously reporting
that the `requests` module itself was not installed. In fact, it was the
transitive dependencies of `requests` (third-order dependencies of our
test) that were not being installed by the build file.
I consider this a quick fix to get the build passing. In the long run,
we need to automatically resolve and install transitive dependencies in
our bazel builds.
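Concretely, the quick fix amounts to listing requests' usual transitive dependencies directly in `requirements.bazel.txt`, along the lines of the snippet below (the exact set and pins depend on the requests version in use):

```
requests
certifi
chardet
idna
urllib3
```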
Resolves: #17170
I was trying to get a feel for what the rest of the Python ecosystem
does with its logging, so I looked into the top few libraries on PyPI:
urllib3 maintains a logger for not quite every module, but for each
one that does heavy lifting. The logger name is `__name__`, and no
handlers, not even a `NullHandler`, are registered for any of the
module-level loggers. Their documentation spells out how to configure
logging for the library.
They explicitly configure a library root-level logger called `urllib3`
to which they attach a `NullHandler`. This addresses the "no handlers
could be found" problem.
Their tests explicitly configure handlers, just like ours do.
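For context, the application-side counterpart of that approach looks roughly like the following; this is the kind of opt-in configuration urllib3's documentation describes, not code from this repository:

```python
import logging

# The application decides where log records go; the library only attaches
# a NullHandler to its root logger and otherwise stays silent.
logging.basicConfig(level=logging.INFO)
logging.getLogger("urllib3").setLevel(logging.DEBUG)
```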
scrapy is more hands-on. It provides a configuration module for its
logging and a whole document on how to handle logging with scrapy. It
looks like log.py's whole reason for existence is making sure that a
handler is attached to the scrapy logger at startup.
I think the extra complexity here is because scrapy also offers a CLI,
so there has to be some way to configure logging without writing
Python; I felt we didn't need to take on that added complexity.
---
Based on all of the libraries I've looked at, I think our current
approach is reasonable. The one change I would make is to explicitly
configure a `grpc` logger and attach a `NullHandler` only to it,
instead of putting the burden on the author of each new module to
configure one (see the sketch below).
With this change, I have
- Configured a logger in each module that cares about logging
- Removed all NullHandlers attached to module-level loggers
- Explicitly configured a `grpc` logger with a `NullHandler` attached
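A minimal sketch of the resulting layout (module names are illustrative, not the actual grpc source tree):

```python
import logging

# In the package's __init__.py: the library root logger gets the only
# NullHandler, so "no handlers could be found" warnings never appear.
logging.getLogger("grpc").addHandler(logging.NullHandler())

# In any module under the grpc package that cares about logging: a plain
# module-level logger with no handler attached; records propagate up to
# the "grpc" logger, and the application decides whether and how to emit
# them.
_LOGGER = logging.getLogger(__name__)


def _example():
    _LOGGER.debug("an example record")
```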
Resolves: #16572
Related: #17064
* Use templates instead of generating them every time
* Change the theme
* Add grpc_* modules
* Group APIs
* Skip documentation for class members without a docstring
* Add docstring for status code
* Include unit tests covering the default, disabled, and enabled behaviour of the wait-for-ready mechanism
* Import flags constants from grpc_types.h
* Use WaitGroup to wait for TRANSIENT_FAILURE state in unit test
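For reference, this is roughly how the mechanism is exercised from application code; the service and stub names here are hypothetical generated code, not part of this change:

```python
import grpc
from helloworld_pb2 import HelloRequest          # hypothetical generated code
from helloworld_pb2_grpc import GreeterStub      # hypothetical generated code

channel = grpc.insecure_channel("localhost:50051")
stub = GreeterStub(channel)

# By default the RPC fails fast while the channel is in TRANSIENT_FAILURE;
# with wait_for_ready=True it waits (up to the deadline) for the channel to
# become ready before sending the request.
response = stub.SayHello(HelloRequest(name="wait-for-ready"),
                         timeout=10,
                         wait_for_ready=True)
```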