Keying on the presence of the generated `*_pb2_grpc` module, rather than
on the presence of the build script (the `*_commands` module), is a
problematic choice: even if a generated file is present, the test
infrastructure may want to regenerate it under a different environment
(e.g. a different Python/proto package version). This change ensures the
protos are always recompiled whenever a `*_commands` module is present,
which signals that we are in a build environment, thereby making the
process hermetic.
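A minimal sketch of the new check, assuming a hypothetical
`reflection_commands` stand-in for the package's `*_commands` module
(the exact names in the repo may differ):

```python
# Sketch of a setup.py fragment: regenerate the protos whenever the
# build-script module is importable, i.e. whenever we are in a build
# environment, regardless of whether generated files already exist.
try:
    import reflection_commands  # noqa: F401  (hypothetical name)
    IS_BUILD_ENVIRONMENT = True
except ImportError:
    IS_BUILD_ENVIRONMENT = False

if IS_BUILD_ENVIRONMENT:
    # Always recompile *_pb2 / *_pb2_grpc from the .proto sources, even
    # if previously generated modules are present, for hermetic builds.
    from grpc_tools import command
    command.build_package_protos('.')
```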
Previously, a secure server was configured with SSL credentials during
initialization, and those credentials were used for the lifetime of the
server. If the user wanted the server to use new credentials, the user
had to restart the server, resulting in server downtime. This change
enables the user to optionally configure the server with a "certificate
config fetcher," so that on every new client connection the server calls
the config fetcher before performing the handshake, allowing the user
application to optionally specify a new certificate configuration for
the server to use (the fetcher can return "no change," in which case the
server continues to use its current certificate configuration).
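The Python layer exposes this feature through
`grpc.dynamic_ssl_server_credentials`; a minimal sketch of the flow,
assuming `server.key` / `server.crt` file paths:

```python
from concurrent import futures

import grpc

def certificate_configuration_fetcher():
    # Invoked by the server before each handshake. Returning None means
    # "no change": the server keeps its current certificate configuration.
    try:
        with open('server.key', 'rb') as f:   # assumed file path
            private_key = f.read()
        with open('server.crt', 'rb') as f:   # assumed file path
            certificate_chain = f.read()
    except OSError:
        return None
    return grpc.ssl_server_certificate_configuration(
        [(private_key, certificate_chain)])

server_credentials = grpc.dynamic_ssl_server_credentials(
    certificate_configuration_fetcher(),  # initial configuration
    certificate_configuration_fetcher)

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
server.add_secure_port('localhost:50051', server_credentials)
```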
Caching the start time for GPR_CLOCK_REALTIME has been causing errors in
cases where the system time is changed after the time is cached. In such
cases, the following functions produce incorrect results (off by however
much the system time was changed):
grpc_millis_to_timespec() and grpc_timespec_to_millis_round_down()
This can cause problems especially when using these functions to compute
timer deadlines or completion queue timeouts. (In the worst case, the
timeouts/deadlines will always occur if the timeout interval / deadline
is less than the system time change delta.)
Ideally we should revert https://github.com/grpc/grpc/pull/11866, but
since that is a large change (which introduced new APIs in exec_ctx.cc),
I am making this change to effectively revert to the old behavior (while
still keeping the new APIs introduced in exec_ctx).
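A toy illustration of the pitfall (Python, not the C core code):
elapsed time derived from a cached wall-clock baseline is off by the
step delta once the system clock changes, while a monotonic clock is
unaffected:

```python
import time

# Cached once at startup, like the old GPR_CLOCK_REALTIME start-time cache.
START_REALTIME = time.time()

def millis_since_start_realtime():
    # Wrong by exactly the step delta if the system clock is changed later.
    return (time.time() - START_REALTIME) * 1000.0

# A monotonic clock never jumps when the wall clock is adjusted, so
# deadlines computed from it remain correct.
START_MONOTONIC = time.monotonic()

def millis_since_start_monotonic():
    return (time.monotonic() - START_MONOTONIC) * 1000.0
```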
Doing this without a lock causes TSAN failures for quic.
There isn't much need to be clever here because this only impacts
shutdown performance, which doesn't really matter.
The previous packaging structure exhibited strange slowness when using
pip to install grpcio-reflection or grpcio-health-checking on the same
command line as grpcio-tools (e.g. `pip install grpcio-tools
grpcio-reflection`). The root cause seems to be the complicated
interaction between pip and setuptools and the fact that we ship a
single .tar.gz "source" archive for the `grpcio_reflection` and
`grpcio_health_checking` packages. `pip` tries to build this "source"
package, and our build process wants to generate code for the `.proto`
files in the package. However, we have already processed the `.proto`
files into `_pb2.py` files in our artifact build process, and installing
`grpcio_tools` just to get `grpcio_{reflection,health_checking}` seems
excessive. The behavior gets worse because `setuptools`, while building
the package from source, tries to fetch `grpcio_tools` from source and
build that too. This takes a while, since it involves compiling a large
amount of native code from `protobuf` and `grpc` and requires a C
compiler to boot.
This commit modifies the Python artifacts for the two packages so that
they no longer include the raw `.proto` files in the distribution
uploaded to PyPI, nor the Python module that performs the code
generation from the respective `.proto` files. Instead, a specific code
path is taken when the generated `_pb2_grpc` Python module is not
present in the package, providing that functionality when building from
the gRPC git repository (and hence from our CI infrastructure).
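A hedged sketch of that code path, using `grpc_health.v1.health_pb2_grpc`
(the generated module of grpcio-health-checking) as the example; the
surrounding logic is illustrative, not the exact setup.py:

```python
# Packaging-time decision: when the generated _pb2_grpc module is absent
# (a git checkout rather than a PyPI artifact), fall back to running the
# proto code generation via grpcio-tools.
try:
    from grpc_health.v1 import health_pb2_grpc  # noqa: F401
    NEED_CODEGEN = False  # PyPI artifact: generated code already shipped
except ImportError:
    NEED_CODEGEN = True   # git checkout / CI build: must run protoc

if NEED_CODEGEN:
    from grpc_tools import command
    command.build_package_protos('.')
```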