The logic is straightforward: attempt to read the
`GRPC_ALTS_MAX_CONCURRENT_HANDSHAKES` environment variable and, if it is
set to an integer, size the handshake queues based on that value.
Based on go/grpc-alts-concurrent-handshake-cap.
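For illustration, a minimal sketch of that environment-variable lookup; the default value and function name are illustrative, not the actual ALTS handshaker code:

```cpp
#include <cstdlib>
#include <string>

// Illustrative default; the real default lives in the ALTS implementation.
constexpr size_t kDefaultMaxConcurrentHandshakes = 100;

size_t GetMaxConcurrentHandshakes() {
  const char* value = std::getenv("GRPC_ALTS_MAX_CONCURRENT_HANDSHAKES");
  if (value != nullptr) {
    char* end = nullptr;
    long parsed = std::strtol(value, &end, 10);
    // Only honor the variable if it parses cleanly to a positive integer.
    if (end != value && *end == '\0' && parsed > 0) {
      return static_cast<size_t>(parsed);
    }
  }
  return kDefaultMaxConcurrentHandshakes;
}
```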
Protobuf's HEAD has changed the layout of the Bazel targets we use for
getting the list of source files for our Python grpcio-tools build.
With this patch, make_grpcio_tools.py finishes successfully and gives
reasonable results.
Change was created by the release automation script. See
go/grpc-release.
---------
Co-authored-by: Mark D. Roth <roth@google.com>
Co-authored-by: Craig Tiller <ctiller@google.com>
Co-authored-by: Esun Kim <veblush@google.com>
Co-authored-by: Jan Tattermusch <jtattermusch@google.com>
The aim here is to allow adding custom metadata types to the internal build.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Implement listeners, connections, and endpoints for `FuzzingEventEngine`.
This allows the fuzzer to select write sizes and delays, connection
delays, and port assignments.
I made a few modifications to the test suite to allow this event engine
to pass the client & server tests:
1. The test factories return a shared_ptr<> so that we can return the same
event engine for both the oracle and the implementation (see the sketch
below) - necessary because FuzzingEventEngine forms a closed world of
addresses & ports.
2. Removed the WaitForSingleOwner calls - these seem unnecessary, and we
don't ask our users to do this. I tested the existing Linux tests 1000x
across debug, ASAN, and TSAN with this change.
Additionally, the event engine overrides the global port picker logic so
that port assignments are made by the fuzzer too.
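A rough sketch of the shared-factory idea from point 1, with illustrative names rather than the actual test suite API:

```cpp
#include <memory>

// Stand-in for the real FuzzingEventEngine; only default construction is
// needed for this sketch.
class FuzzingEventEngine {};

// Returning shared_ptr lets both the oracle and the implementation under
// test be handed the same engine instance, so they live in one closed
// world of addresses and ports. (Illustrative only; the real factories
// are scoped per test, not a global static.)
std::shared_ptr<FuzzingEventEngine> GetSharedFuzzingEventEngine() {
  static std::shared_ptr<FuzzingEventEngine> engine =
      std::make_shared<FuzzingEventEngine>();
  return engine;
}
```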
This PR is a step along a longer journey, and has some outstanding
sibling PRs and some follow-up work:
* #32603 will convert all the core e2e tests into a more malleable form
* we'll then use #32667 to turn all of these into fuzzers
* finally, we'll integrate this into that work and turn all core e2e
tests into fuzzers over timer & callback reorderings and I/O
sizes/spacings
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
This reverts commit 7bd9267f32.
(hopefully last try)
Add a new channel arg `GRPC_ARG_ABSOLUTE_MAX_METADATA_SIZE` as a hard
limit for metadata, and change `GRPC_ARG_MAX_METADATA_SIZE` to be a soft
limit. Behavior is as follows:
Hard limit:
(1) If the hard limit is explicitly set, it is used.
(2) If the hard limit is not explicitly set, the maximum of the default
and 1.25 * the soft limit (if the soft limit is set) is used.
Soft limit:
(1) If the soft limit is explicitly set, it is used.
(2) If the soft limit is not explicitly set, the maximum of the default
and 0.8 * the hard limit (if the hard limit is set) is used.
Requests with metadata between the soft and hard limits are rejected
randomly; requests with metadata above the hard limit are always rejected.
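A hedged sketch of how the effective limits could be derived under those rules; the constant, type, and function names here are illustrative, not the actual channel-args plumbing:

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>

// Single illustrative default; the real code may use different defaults
// for the soft and hard limits.
constexpr uint32_t kDefaultMetadataSizeLimit = 16 * 1024;

struct MetadataLimits {
  uint32_t soft;
  uint32_t hard;
};

MetadataLimits ComputeMetadataLimits(std::optional<uint32_t> soft_arg,
                                     std::optional<uint32_t> hard_arg) {
  MetadataLimits limits;
  // Hard limit: explicit value wins; otherwise max(default, soft * 1.25).
  if (hard_arg.has_value()) {
    limits.hard = *hard_arg;
  } else if (soft_arg.has_value()) {
    limits.hard = std::max<uint32_t>(kDefaultMetadataSizeLimit,
                                     static_cast<uint32_t>(*soft_arg * 1.25));
  } else {
    limits.hard = kDefaultMetadataSizeLimit;
  }
  // Soft limit: explicit value wins; otherwise max(default, hard * 0.8).
  if (soft_arg.has_value()) {
    limits.soft = *soft_arg;
  } else if (hard_arg.has_value()) {
    limits.soft = std::max<uint32_t>(kDefaultMetadataSizeLimit,
                                     static_cast<uint32_t>(*hard_arg * 0.8));
  } else {
    limits.soft = kDefaultMetadataSizeLimit;
  }
  return limits;
}
```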
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Adds a simple hello-world example for GCP Observability, along with a
README.
The microservices observability user guide is not yet published, but this
still refers to it anyway.
Previously we triggered a flow control update when `announced <
target/2`, but if `target == 1` we never send a flow control update
(`announced` is never less than `1/2 == 0`), which breaks our forward
progress guarantees.
b/259780449 internally.
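To make the arithmetic concrete, here is an illustrative sketch (not the actual flow control code): with integer division, `target / 2 == 0` when `target == 1`, so the trigger can never fire; rounding the threshold up is one way to fix it.

```cpp
#include <cstdint>

// Buggy version: with target == 1, target / 2 == 0, and since announced
// is never negative the condition `announced < 0` never holds.
bool ShouldSendUpdateBuggy(int64_t announced, int64_t target) {
  return announced < target / 2;
}

// Illustrative fix: round the threshold up so target == 1 still triggers
// an update once announced drops to 0.
bool ShouldSendUpdateFixed(int64_t announced, int64_t target) {
  return announced < (target + 1) / 2;
}
```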
This adds a small wait for all timers to finish executing on
WindowsEventEngine shutdown. I have seen this wait code triggered on
rare occasions when either tens of tests are running concurrently, or
in an environment where IPv6 is detected but disabled and connection
cancellation takes seconds to complete. I have not gotten to the bottom
of the latter problem, but I feel WindowsEventEngine destruction can
tolerate a small amount of extra time.
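As a hedged sketch of the kind of bounded wait described above, assuming a simple counter of in-flight timer callbacks (this is not the actual WindowsEventEngine code):

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

class TimerDrain {
 public:
  void OnTimerStart() {
    std::lock_guard<std::mutex> lock(mu_);
    ++pending_;
  }
  void OnTimerDone() {
    std::lock_guard<std::mutex> lock(mu_);
    if (--pending_ == 0) cv_.notify_all();
  }
  // Wait a short, bounded amount of time for outstanding timer callbacks
  // to finish before tearing the engine down. Returns false on timeout.
  bool WaitForQuiescence(std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> lock(mu_);
    return cv_.wait_for(lock, timeout, [this] { return pending_ == 0; });
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  int pending_ = 0;
};
```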
Earlier, we were simply using a 64-bit random number, but the spec
actually calls for UUIDv4.
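For reference, UUIDv4 per RFC 4122 is 122 random bits plus fixed version and variant bits. A hedged, self-contained sketch of generating one (not necessarily the code used in this PR):

```cpp
#include <cstdint>
#include <cstdio>
#include <random>
#include <string>

std::string GenerateUuidV4() {
  std::random_device rd;
  std::mt19937_64 gen(rd());
  uint64_t hi = gen();
  uint64_t lo = gen();
  // Set the version (4) and variant (10xx) bits required by RFC 4122.
  hi = (hi & 0xffffffffffff0fffULL) | 0x0000000000004000ULL;
  lo = (lo & 0x3fffffffffffffffULL) | 0x8000000000000000ULL;
  char buf[37];
  std::snprintf(buf, sizeof(buf), "%08x-%04x-%04x-%04x-%012llx",
                static_cast<uint32_t>(hi >> 32),
                static_cast<uint32_t>((hi >> 16) & 0xffff),
                static_cast<uint32_t>(hi & 0xffff),
                static_cast<uint32_t>(lo >> 48),
                static_cast<unsigned long long>(lo & 0xffffffffffffULL));
  return std::string(buf);
}
```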
Change the end timestamp of `grpc.io/client/roundtrip_latency` from the
time when we clean up the call attempt to the time when we receive the
trailers.
The reason for this is to make sure that `grpc.io/client/api_latency`
(the end-to-end latency for the call) is always greater than
`grpc.io/client/roundtrip_latency`, and to fix the bug found by
@stanley-cheung.
This allows us to replace `absl::optional<TaskHandle>` with checks
against the invalid handle.
This PR also replaces the differently-named invalid handle instances
with a uniform way of accessing static invalid instances across all
handle types, which aids a bit in testing.
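A hedged sketch of the pattern, with illustrative member and accessor names (the real handle types and their invalid sentinels live in the EventEngine API):

```cpp
#include <cstdint>

struct TaskHandle {
  intptr_t keys[2];
  // Uniform accessor for the sentinel "invalid" handle.
  static constexpr TaskHandle Invalid() { return TaskHandle{{-1, -1}}; }
  friend bool operator==(const TaskHandle& a, const TaskHandle& b) {
    return a.keys[0] == b.keys[0] && a.keys[1] == b.keys[1];
  }
  friend bool operator!=(const TaskHandle& a, const TaskHandle& b) {
    return !(a == b);
  }
};

// Instead of wrapping the handle in absl::optional<TaskHandle>, callers can:
//   TaskHandle handle = TaskHandle::Invalid();
//   ...
//   if (handle != TaskHandle::Invalid()) { /* a task is pending */ }
```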
Add an event engine that spawns as many threads as it possibly
can... to expose TSAN to the myriad thread ordering problems in our code
base.
Next steps for this will be to add a new test mode for TSAN + the thready
event engine + a few other doodads to increase threads in the system
(party.cc in particular has a good place for a hook).
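The core trick can be sketched in a few lines; the class name is illustrative and the real engine has considerably more machinery:

```cpp
#include <functional>
#include <thread>

// Run every closure in its own freshly spawned thread so that TSAN gets to
// observe as many distinct thread interleavings as possible.
class ThreadSpawningRunner {
 public:
  void Run(std::function<void()> closure) {
    std::thread(std::move(closure)).detach();
  }
};
```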
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
(part of removing support for VS2017)
Also see https://github.com/grpc/grpc/pull/32649
Also see https://github.com/grpc/grpc/pull/32615
The switch to grpc-win2019 Windows workers has already happened
(cl/517400022).
Once this PR lands, I'll backport to 1.53.x branch as well (since that
release removes the VS2017 support).
This PR is a small code change with a lot of new test data.
[In OpenSSL, there are two flags that configure CRL checks. Copying the
relevant
section:](https://www.openssl.org/docs/man1.0.2/man3/X509_VERIFY_PARAM_get_depth.html)
> - X509_V_FLAG_CRL_CHECK enables CRL checking for the certificate chain
leaf certificate. An error occurs if a suitable CRL cannot be found.
> - X509_V_FLAG_CRL_CHECK_ALL enables CRL checking for the entire
certificate chain.
We currently only set `X509_V_FLAG_CRL_CHECK`, so we will only ever
check if the leaf certificate is revoked. We should check the whole
chain. I am open to making this a user configuration if we want to do it
that way, but we certainly need to be able to check the whole chain.
So, this PR contains the small code change in
`ssl_transport_security.cc` to use the `X509_V_FLAG_CRL_CHECK_ALL` flag.
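The gist of the change, sketched against the OpenSSL API (the wrapper function here is illustrative; the real change is inside `ssl_transport_security.cc`):

```cpp
#include <openssl/ssl.h>
#include <openssl/x509_vfy.h>

// Enable CRL checking for the whole chain, not just the leaf certificate.
void EnableFullChainCrlChecks(SSL_CTX* ssl_context) {
  X509_STORE* cert_store = SSL_CTX_get_cert_store(ssl_context);
  X509_STORE_set_flags(cert_store,
                       X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);
}
```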
Then the rest of the changes are in tests. I've added all the necessary
files to have a chain built that looks as follows
`Root CA -> Revoked Intermediate CA -> Leaf Certificate`, and added a
test for this case as well.
You can verify that on master this new test will fail (i.e. the
handshake will succeed even though the intermediate CA is revoked) by
checking out this branch, running `git checkout master --
./src/core/tsi/ssl_transport_security.cc`, then running the test.
I also slightly reorganized test/core/tsi/test_creds/ so that the CRLs
are in their own directory, which is the way our API intends to accept
CRLs.
Added documentation on how to use the `Grpc.Tools` MSBuild integration
with *unsupported* architectures - i.e. with third-party-built `protoc`
and `grpc_csharp_plugin`.
Fixing TSAN data races of the kind seen in
https://source.cloud.google.com/results/invocations/c3f02253-0d0b-44e6-917d-07e4bc0d3d62/targets/%2F%2Ftest%2Fcpp%2Fperformance:writes_per_rpc_test@poller%3Dpoll/log
I think the original issue was the non-atomic load in
writes_per_rpc_test.cc, but I'm also making the change to use barriers
instead of relaxed atomics (probably unimportant, but I prefer safety in
the absence of comments saying otherwise).
We could probably also use std::atomic, but I don't want to make that
change throughout right now.
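For illustration, the same distinction expressed with std::atomic memory orderings (the PR itself does not use std::atomic, as noted above):

```cpp
#include <atomic>
#include <cstdint>

std::atomic<int64_t> counter{0};

// Relaxed operations order nothing besides the counter itself; TSAN can
// still flag races on data that is published "around" the counter.
int64_t RelaxedRead() { return counter.load(std::memory_order_relaxed); }

// Acquire/release (barrier) semantics also order the surrounding reads and
// writes, which is the safer default when in doubt.
void ReleaseWrite(int64_t value) {
  counter.store(value, std::memory_order_release);
}
int64_t AcquireRead() { return counter.load(std::memory_order_acquire); }
```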
Update to 5501a1a255
This PR adds the view `grpc.io/client/api_latency` for GCP Observability,
which aims to collect the end-to-end time taken by a call.
Changes made to support this:
1) A global interceptor factory registration is created for stats
plugins.
2) The OpenCensus plugin now provides a new interceptor that's responsible
for collecting the new latency (sketched below).
3) GCP Observability registers this plugin.
4) A new OpenCensus measurement and view are created for API latency.
Note that this is internal as of now, since it's not clear if it should
be exposed as a public experimental API. Leaving that decision for the
future.
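A very rough sketch of the measurement such an interceptor performs: record a start time when the call begins and the elapsed time when it completes. Names here are illustrative, not the OpenCensus plugin's actual API.

```cpp
#include <chrono>

// Hypothetical recorder: created when the call starts, reports api_latency
// when the call (including all attempts) completes.
class ApiLatencyRecorder {
 public:
  ApiLatencyRecorder() : start_(std::chrono::steady_clock::now()) {}

  // Would be called once the final status has been received.
  double FinishAndGetLatencySeconds() const {
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start_;
    return elapsed.count();
  }

 private:
  std::chrono::steady_clock::time_point start_;
};
```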
This PR adds annotations to client attempt spans and server spans for
messages, of the form:
* `Send message: 1026 bytes`
* `Send compressed message: 31 bytes` (if message was compressed)
* `Received message: 31 bytes`
* `Received decompressed message: 1026 bytes` (if message needed to be
decompressed)
Note that the compressed and decompressed annotations are not present if
compression/decompression was not performed.