Rather than single classes, they are now broken up into class families,
with each class containing only the fields and methods needed in the
context in which it is used.
It is now a family of classes conforming to a common interface, rather
than a single class of which no single instance makes use of all the
behavior scoped to it.
It also now holds gRPC Core memory only for the duration of a single
batch rather than for the entire lifetime of the instance.
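A minimal sketch of the shape of this refactoring, with illustrative names rather than the actual cygrpc types:

```python
import abc
import contextlib


class Call(abc.ABC):
    """The shared interface to which every member of the family conforms."""

    @abc.abstractmethod
    def start_batch(self, operations):
        """Execute one batch of operations."""


@contextlib.contextmanager
def _core_memory():
    """Stand-in for a gRPC Core allocation."""
    yield object()  # acquired here, released when the block exits


class UnaryUnaryCall(Call):
    """Holds only the fields a unary-unary call actually needs."""

    def start_batch(self, operations):
        # Core memory is held for a single batch, not for the lifetime
        # of the call object.
        with _core_memory():
            return [operation() for operation in operations]
```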
When we set the call state to "CANCELLED" after
grpc_cancel_all_calls, we would block subsequent start-batch
operations from happening. The rpc_state for the cancelled
call would still be in the server's rpc_states set, but it
would never get removed, because there were no active batches
for the call and the only place we remove from rpc_states is
when a batch completes.
It is better to rely on c-core's cancellation. Once a call
is cancelled, all subsequent ops on that call will return
immediately with a cancellation error.
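A toy illustration of why the Python-side guard leaked state, and how relying on c-core's fail-fast behavior fixes it (names are hypothetical):

```python
rpc_states = set()


def _on_batch_complete(call_tag, error):
    # Batch completion is the only place a call leaves rpc_states, so a
    # cancelled call must still have its batches complete to be cleaned up.
    rpc_states.discard(call_tag)


class _Call:
    """Toy call mimicking c-core: once cancelled, every batch completes
    immediately with a cancellation error."""

    def __init__(self, tag):
        self.tag = tag
        self.cancelled = False

    def start_batch(self, on_complete):
        # No Python-side "CANCELLED" guard blocks the batch; c-core
        # simply fails it fast.
        on_complete(self.tag, 'CANCELLED' if self.cancelled else None)


call = _Call('call-1')
rpc_states.add(call.tag)
call.cancelled = True                  # grpc_cancel_all_calls
call.start_batch(_on_batch_complete)   # completes immediately with an error
assert call.tag not in rpc_states      # cleanup still ran
```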
The RLock() change is due to the possibility that _on_call_completed
is invoked immediately, at the time the rpc_future callback is
created, if the call has already completed.
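A simplified illustration of the re-entrancy, not the actual implementation: with a plain Lock, the immediate invocation below would deadlock.

```python
import threading


class _RPCFuture:
    """Simplified future whose done-callback may fire synchronously."""

    def __init__(self):
        self._lock = threading.RLock()  # a plain Lock would deadlock below
        self._done = True               # the call has already completed
        self._callbacks = []

    def add_done_callback(self, callback):
        with self._lock:
            if self._done:
                callback(self)          # invoked immediately, lock held
            else:
                self._callbacks.append(callback)


def _on_call_completed(future):
    with future._lock:                  # re-enters the lock just acquired
        print('cleaning up the completed call')


_RPCFuture().add_done_callback(_on_call_completed)
```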
gRPC Python required RPCs terminating with a non-OK status code to
still return a valid response value after calling set_code, since
returning None was considered a programming error, even though the
response value was never communicated to the client.
This commit introduces an alternative mechanism to terminate RPCs by
calling the `abort` method on `ServicerContext` passed to the handler,
which raises an exception and signals to the gRPC runtime to abort the
RPC with the specified status code and details.
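For example, in terms of the canonical helloworld service from the gRPC examples:

```python
import grpc
import helloworld_pb2
import helloworld_pb2_grpc


class Greeter(helloworld_pb2_grpc.GreeterServicer):

    def SayHello(self, request, context):
        if not request.name:
            # abort raises an exception, so no placeholder response value
            # has to be fabricated the way set_code required.
            context.abort(grpc.StatusCode.INVALID_ARGUMENT, 'name is required')
        return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)
```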
A GenericRpcHandler registered on a gRPC Server is not supposed to raise
an exception, and if it does so, it is considered a programming defect.
However, gRPC is supposed to respond to the client with an UNKNOWN
status code. Previously, this situation was left unhandled, and the
client ended up receiving a response with a CANCELLED status code.
This commit fixes https://github.com/grpc/grpc/issues/13629.
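A sketch of the now-expected behavior, again assuming the helloworld example types:

```python
from concurrent import futures

import grpc
import helloworld_pb2
import helloworld_pb2_grpc


class DefectiveGreeter(helloworld_pb2_grpc.GreeterServicer):

    def SayHello(self, request, context):
        raise ValueError('programming defect in the handler')


server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
helloworld_pb2_grpc.add_GreeterServicer_to_server(DefectiveGreeter(), server)
port = server.add_insecure_port('localhost:0')
server.start()

stub = helloworld_pb2_grpc.GreeterStub(grpc.insecure_channel('localhost:%d' % port))
try:
    stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
except grpc.RpcError as rpc_error:
    # Previously the client observed CANCELLED here.
    assert rpc_error.code() == grpc.StatusCode.UNKNOWN
server.stop(None)
```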
Rather than allocating gRPC Core memory when instantiated and
retaining it until deleted, gRPC Python's credentials objects now
offer methods to create gRPC Core structures on demand.
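A sketch of the pattern with hypothetical names (the real classes live in the Cython layer):

```python
def _core_ssl_credentials_stub(pem_root_certificates):
    """Stand-in for the cygrpc call that allocates Core memory."""
    return ('core-ssl-credentials', pem_root_certificates)


class _ChannelCredentialsSketch:
    """Holds only Python-level configuration; owns no Core memory."""

    def __init__(self, pem_root_certificates):
        self._pem_root_certificates = pem_root_certificates

    def _create_core_credentials(self):
        # Called on demand, e.g. at channel creation; the caller owns
        # the returned Core structure and releases it when finished.
        return _core_ssl_credentials_stub(self._pem_root_certificates)
```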
"Do not make undocumented and unsupported use of one part of an API
from within the implementation of another part of the same API" is one
of the rules of Abstraction Club. Not sure if it's important enough to
be the first two rules, but it's up there.
Using the presence of the `*_pb2_grpc` module, as opposed to the absence
of the build script (the `*_commands` module), as the signal to skip
code generation is a problematic choice, because even if a generated
file is present, the test infrastructure may want to regenerate it under
a different environment (e.g., a different Python/proto package
version). This change ensures the protos always get recompiled when a
`*_commands` module is present, signaling that we are in a build
environment and thereby making the process hermetic.
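A hypothetical sketch of the check for the health-checking package, assuming its build script is importable as `health_commands`:

```python
try:
    import health_commands             # the `*_commands` build script
except ImportError:
    health_commands = None             # installed artifact: nothing to build

if health_commands is not None:
    # Build environment: always regenerate, even if stale `*_pb2_grpc`
    # files are already on the path.
    from grpc_tools import command
    command.build_package_protos('.')
```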
Previously, a secure server was configured with SSL credentials during
initialization, and those credentials would be used for the lifetime of
the server. If the user wanted the server to use new credentials, the
user had to restart the server, resulting in server downtime. This
change enables the user to optionally configure the server with a
"certificate config fetcher," such that on every new client
connection, the server will call the config fetcher before performing
the handshake, allowing the user application to optionally specify new
certificate configuration for the server to use (the fetcher can
return a "no change" and the server continues to use its current
certificate configuration).
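For example (file paths and the `_rotation_pending` check are application-specific placeholders):

```python
import grpc


def _load(path):
    with open(path, 'rb') as f:
        return f.read()


def _rotation_pending():
    """Hypothetical application-side check for new certificate files."""
    return False


def _fetch_certificate_configuration():
    # Called by the server before each client handshake; returning None
    # means "no change" and the current configuration stays in use.
    if not _rotation_pending():
        return None
    return grpc.ssl_server_certificate_configuration(
        [(_load('server.key'), _load('server.crt'))],
        root_certificates=_load('roots.pem'))


server_credentials = grpc.dynamic_ssl_server_credentials(
    grpc.ssl_server_certificate_configuration(
        [(_load('server.key'), _load('server.crt'))]),
    _fetch_certificate_configuration,
    require_client_authentication=False)
```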
The previous packaging structure exhibited strange slowness when pip
was used to install grpcio-reflection or grpcio-health-checking on the
same command line as grpcio-tools.
The root cause seems to be the complicated interaction between
pip and setuptools, and the fact that we ship a single .tar.gz
"source" archive for the `grpcio_reflection` and
`grpcio_health_checking` packages. `pip` tries to build this
"source" package, and our build process wants to generate
code for the `.proto` files in the package. However, we have
already processed the `.proto` files into `_pb2.py` files in
our artifact build process, and installing `grpcio_tools`
to get `grpcio_{reflection,health_checking}` seems excessive.
The behavior gets worse since `setuptools`, while building
the package from source, tries to fetch `grpcio_tools` from
source and build that too. This takes a while, since it
involves compiling a bunch of native code from `protobuf` and
`grpc` and requires a C compiler to boot.
This commit modifies the Python artifacts for the two packages
so that they do not include the raw `.proto` files in the
distribution uploaded to PyPI, nor the Python module that
performs the code-generation preprocessing from the respective
`.proto` files. Instead, a specific code path is taken when the
generated `_pb2_grpc` Python module is not present in the package,
providing that functionality when building from the gRPC git
repository (and hence from our CI infrastructure).
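A hedged sketch of that fallback path for the health-checking package (module and proto paths are illustrative):

```python
try:
    # Present in the artifact uploaded to PyPI, where code generation
    # already ran during the artifact build.
    from grpc_health.v1 import health_pb2_grpc
except ImportError:
    # Only reachable when building from the gRPC git repository (and
    # hence on CI), where the `.proto` sources and grpcio-tools exist.
    from grpc_tools import protoc
    protoc.main([
        'grpc_tools.protoc',
        '--proto_path=.',
        '--python_out=.',
        '--grpc_python_out=.',
        'grpc_health/v1/health.proto',
    ])
    from grpc_health.v1 import health_pb2_grpc
```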