This PR adds delegating call tracers that work as follows:
- If this is the first call tracer being added to the call context,
just add it directly, as before.
- If this is the second call tracer being added to the call context,
create a delegating call tracer that contains a list of call tracers
(including the first call tracer).
- Any further call tracers are simply appended to the list of tracers
in the delegating call tracer.
(This is not yet used other than through tests.)
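A minimal sketch of the delegation scheme described above; the names here (`CallTracer`, `DelegatingCallTracer`, `RecordAnnotation`) are illustrative stand-ins, not the actual gRPC API:

```cpp
#include <string_view>
#include <vector>

// Stand-in for the tracer interface; the real gRPC interface has many
// more event hooks than this single method.
class CallTracer {
 public:
  virtual ~CallTracer() = default;
  virtual void RecordAnnotation(std::string_view annotation) = 0;
};

// Holds a list of tracers and fans every event out to each of them.
class DelegatingCallTracer final : public CallTracer {
 public:
  void AddTracer(CallTracer* tracer) { tracers_.push_back(tracer); }
  void RecordAnnotation(std::string_view annotation) override {
    for (CallTracer* tracer : tracers_) tracer->RecordAnnotation(annotation);
  }

 private:
  std::vector<CallTracer*> tracers_;  // includes the first tracer added
};
```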
This doesn't matter for gtest (it does its own sorting later), but for
the fuzzers this will probably save a bit of churn in the corpus and
lead to faster discovery of interesting cases.
There's *very* little difference in cost between a pooled arena
allocation and a tcmalloc allocation - but the pooled allocation strands
the memory for the lifetime of the call, whereas the tcmalloc allocation
allows that memory to be shared between calls - leading to a much lower
overall cost.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Rolls forward https://github.com/grpc/grpc/pull/33871
The second and third commits here fix internal build issues.
In particular, add a `// IWYU pragma: no_include <ares_build.h>`, since
`ares.h` [includes that
anyway](bad62225b7/include/ares.h (L23))
(and that seems unlikely to change, since removing it would be breaking).
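For reference, a sketch of how such a pragma is typically placed in the including file (exact placement in the PR may differ):

```cpp
// IWYU pragma: no_include <ares_build.h>
#include <ares.h>  // transitively pulls in ares_build.h, so IWYU should
                   // not suggest including it directly
```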
Implement DNS resolution using the iOS DNS service API.
Current limitations:
1. Using a custom name server is not supported.
2. Only `LookupHostname` is supported; `LookupSRV` and `LookupTXT` are
not implemented.
3. Not tested in a single-stack (IPv4-only or IPv6-only) environment.
4. ~Not tested with multiple IP records per stack~ manually tested with
wsj.com.
5. Not tested in a multiple-interface environment.
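A minimal sketch of the underlying API in play here, assuming it is `DNSServiceGetAddrInfo` from `dns_sd.h`; the callback body, flags, and host name are illustrative, not the actual resolver code:

```cpp
#include <dns_sd.h>

// Illustrative completion callback: invoked once per resolved address.
static void OnResolved(DNSServiceRef /*ref*/, DNSServiceFlags /*flags*/,
                       uint32_t /*interface_index*/,
                       DNSServiceErrorType error, const char* /*hostname*/,
                       const struct sockaddr* address, uint32_t /*ttl*/,
                       void* /*context*/) {
  if (error != kDNSServiceErr_NoError) return;
  (void)address;  // the real code would append this to a result list
}

void LookupExample() {
  DNSServiceRef ref = nullptr;
  DNSServiceErrorType err = DNSServiceGetAddrInfo(
      &ref, kDNSServiceFlagsTimeout, /*interfaceIndex=*/0,
      kDNSServiceProtocol_IPv4 | kDNSServiceProtocol_IPv6, "wsj.com",
      OnResolved, /*context=*/nullptr);
  if (err != kDNSServiceErr_NoError) return;
  // The real resolver would register ref's fd with the event loop, call
  // DNSServiceProcessResult(ref) when it becomes readable, and then
  // DNSServiceRefDeallocate(ref) when the lookup completes.
}
```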
We need the ability to override the server-side default for the
keepalive permit-without-calls setting without affecting client-side
settings.
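As an illustration of the setting in question: a server can set the standard `GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS` channel arg directly, as in the sketch below; how this PR actually plumbs the override may differ.

```cpp
#include <grpc/grpc.h>
#include <grpcpp/server_builder.h>

void ConfigureServer(grpc::ServerBuilder& builder) {
  // Allow clients to send keepalive pings even when there are no active
  // calls on this server; client channels are unaffected by this arg.
  builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
}
```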
Normally, c-ares related fds are destroyed after all DNS resolution is
finished, in [this code
path](c82d31677a/src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc (L210)).
Also, there are some fds that c-ares may fail to open or write to
initially; c-ares will close those internally before gRPC ever knows
about them.
But if:
1) c-ares opens a socket and successfully writes a request on it, and
2) a subsequent read then fails,
then c-ares will close the fd in [this code
path](bad62225b7/src/lib/ares_process.c (L740)),
but gRPC will still hold a reference to the fd and will continue to use
it afterwards.
The fix here is to leverage the c-ares socket-override API to properly
track fd ownership between c-ares and gRPC.
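A minimal sketch of what using that override API can look like, assuming `ares_set_socket_functions()` is the API in question; the tracking hooks (`OnFdOpened`/`OnFdClosed`) are hypothetical stand-ins for gRPC's bookkeeping:

```cpp
#include <ares.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical bookkeeping hooks; the real wrapper updates its fd tables.
static void OnFdOpened(ares_socket_t fd, void* user_data) {}
static void OnFdClosed(ares_socket_t fd, void* user_data) {}

static ares_socket_t WrappedSocket(int af, int type, int protocol,
                                   void* user_data) {
  ares_socket_t fd = socket(af, type, protocol);
  if (fd != ARES_SOCKET_BAD) OnFdOpened(fd, user_data);
  return fd;
}

static int WrappedClose(ares_socket_t fd, void* user_data) {
  // Called even when c-ares closes the fd internally (e.g. after a failed
  // read), so gRPC can drop its own reference instead of using a dead fd.
  OnFdClosed(fd, user_data);
  return close(fd);
}

void InstallSocketOverrides(ares_channel channel, void* tracker) {
  // Unset entries fall back to c-ares defaults (hedged: check the c-ares
  // docs for the exact semantics of NULL members).
  static const struct ares_socket_functions kFuncs = {
      WrappedSocket, WrappedClose,
      /*aconnect=*/nullptr, /*arecvfrom=*/nullptr, /*asendv=*/nullptr};
  ares_set_socket_functions(channel, &kFuncs, tracker);
}
```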
Related: internal issue b/292203138
Could not reproduce (CNR) a WindowsEventEngine listener flake in:
* 10k local Windows development machine runs
* 50k Windows RBE runs
* 10k Windows VM runs
Yet it fails roughly 5 times per day on the master CI jobs.
This PR adds some logging to try to see if an edge is missed, and
switches the thread pool implementation to see if that makes the flake
go away. If the flakes disappear, I'll try removing one change or the
other to see if either independently fixes the problem (hopefully not
the logging).
---------
Co-authored-by: drfloob <drfloob@users.noreply.github.com>
Reverts grpc/grpc#33819
Verified that it passed these jobs:
`grpc/core/master/linux/grpc_basictests_c_cpp_dbg`
`grpc/core/master/linux/grpc_basictests_c_cpp_opt`
`grpc/core/master/linux/grpc_portability`
Why: Cleanup for chttp2_transport ahead of the promise conversion - lots
of logic has become interleaved throughout chttp2, so some effort to
isolate that logic is warranted before the conversion.
What: Split configuration and policy tracking for each of ping rate
throttling and abuse detection into their own modules. Add tests for
them.
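A rough sketch of what pulling the ping-rate policy into its own independently testable module can look like; the class name and interface are illustrative, not the actual gRPC code:

```cpp
#include <chrono>
#include <optional>

// Illustrative policy object: the transport asks it whether a ping may
// be sent now, instead of inlining the throttling logic.
class PingRatePolicy {
 public:
  explicit PingRatePolicy(std::chrono::milliseconds min_interval)
      : min_interval_(min_interval) {}

  // Returns true if a ping may be sent now, recording the send time if so.
  bool RequestSendPing(std::chrono::steady_clock::time_point now) {
    if (last_ping_.has_value() && now - *last_ping_ < min_interval_) {
      return false;  // too soon: throttle this ping
    }
    last_ping_ = now;
    return true;
  }

 private:
  std::chrono::milliseconds min_interval_;
  std::optional<std::chrono::steady_clock::time_point> last_ping_;
};
```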
Incidentally: Split channel args into their own header so that we can
split the policy stuff into separate build targets.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
This PR is expected to fix the flakes of
`//test/core/tsi:ssl_transport_security_test` when built under UBSAN.
Why is this needed? There are several tests in
`ssl_transport_security_test.cc` that involve doing many expensive
operations and PR #33638 recently added one more (namely, repeatedly
signing with an ECDSA key). The slow tests are already altered for MSAN
and TSAN, and now we need to do the same for UBSAN.
Those tests are failing on CIs that do not have twisted installed. Skip
them for now; the Docker images will be fixed next.
This PR implements a c-ares based DNS resolver for EventEngine, using
the original
[grpc_ares_wrapper.h](../blob/master/src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.h)
as a reference.
The PosixEventEngine DNSResolver is implemented on top of that. Tests
which use the client channel resolver API
([resolver.h](../blob/master/src/core/lib/resolver/resolver.h#L54)) are
ported, namely the
[resolver_component_test.cc](../blob/master/test/cpp/naming/resolver_component_test.cc)
and the
[cancel_ares_query_test.cc](../blob/master/test/cpp/naming/cancel_ares_query_test.cc).
The WindowsEventEngine DNSResolver will build on the same EventEngine
grpc_ares_wrapper and will be worked on next.
The
[resolve_address_test.cc](https://github.com/grpc/grpc/blob/master/test/core/iomgr/resolve_address_test.cc)
which uses the iomgr
[DNSResolver](../blob/master/src/core/lib/iomgr/resolve_address.h#L44)
API has been ported to EventEngine's dns_test.cc. That leaves only two
tests which use iomgr's API, namely the
[dns_resolver_cooldown_test.cc](../blob/master/test/core/client_channel/resolvers/dns_resolver_cooldown_test.cc)
and the
[goaway_server_test.cc](../blob/master/test/core/end2end/goaway_server_test.cc),
which will probably need to be restructured to use the EventEngine
DNSResolver (for one thing, they override the original
grpc_ares_wrapper's free functions). I will try to tackle these in the
next step.
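For orientation, a minimal sketch of driving a lookup through the EventEngine DNS API, assuming an interface along the lines of the one in event_engine.h (exact signatures, and how one obtains an engine, may differ):

```cpp
#include <memory>
#include <vector>

#include "absl/status/statusor.h"

#include <grpc/event_engine/event_engine.h>

using ::grpc_event_engine::experimental::EventEngine;

void ResolveExample(EventEngine* engine) {
  auto resolver =
      engine->GetDNSResolver(EventEngine::DNSResolver::ResolverOptions());
  if (!resolver.ok()) return;
  (*resolver)->LookupHostname(
      [](absl::StatusOr<std::vector<EventEngine::ResolvedAddress>> addrs) {
        // Consume the resolved addresses, or handle the error status.
      },
      /*name=*/"grpc.io", /*default_port=*/"443");
}
```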
Promises code can prevent these bad requests from even reaching the
application, which is beneficial, but this test needs a minor update to
handle it.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Based on the discussion at:
595a75cc5d..e3b402a8fa (r1244325752)
There is a bug in the SSL stack that was only partially fixed in #29176:
if more than 17KB is written to the BIO buffer, then everything over
17KB will be discarded, and the SSL handshake will fail with a bad
record mac error, or hang if not enough bytes have arrived yet.
It's relatively uncommon to hit this bug, because the TLS handshake
messages need to be much larger than normal for you to have a chance of
hitting it. However, there was a separate bug in the SSL stack
(recently fixed in #33558) that caused the ServerHello produced by a
gRPC-C++ TLS server to grow linearly in size with the size of the trust
bundle; these two bugs combined to cause a large number of TLS handshake
failures for gRPC-C++ clients talking to gRPC-C++ servers when the
server had a large trust bundle.
This PR fixes the bug by ensuring that all bytes are successfully
written to the BIO buffer. An initial quick fix for this bug was planned
in #33611, but abandoned because we were worried about temporarily
doubling the memory footprint of all SSL channels.
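A minimal sketch of the "ensure everything is written" idea, assuming OpenSSL's `BIO_write()`; the real fix lives in gRPC's SSL transport and its retry/growth handling is more involved:

```cpp
#include <openssl/bio.h>

#include <cstddef>
#include <cstdint>

// Keep calling BIO_write() until the BIO has accepted every byte, rather
// than assuming a single call writes the whole buffer.
bool WriteAllToBio(BIO* bio, const uint8_t* data, size_t len) {
  size_t written = 0;
  while (written < len) {
    int r = BIO_write(bio, data + written, static_cast<int>(len - written));
    // A sketch only: real code must distinguish retryable "buffer full"
    // conditions (and drain or grow the BIO) from hard errors.
    if (r <= 0) return false;
    written += static_cast<size_t>(r);
  }
  return true;
}
```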
The complexity in this PR is mostly in the test: it is fairly tricky to
force gRPC-C++'s SSL stack to generate a sufficiently large ServerHello
to trigger this bug.
This reverts the following PRs: #32692, #33087, #33093, #33427, #33568
These changes seem to have introduced some flaky crashes. Reverting
while I investigate.
As per gRFC A58, when WRR sees a subchannel report READY, it resets the
non_empty_since value, thus restarting the blackout period. However,
there were two cases where we were incorrectly triggering this code:
1. When WRR got an updated address list that contained addresses that
were already present on the old list and whose subchannels were already
in READY state, the initial notification for those subchannels on the
new list was READY, which incorrectly triggered resetting the
non_empty_since value.
2. Due to a bug in the outlier_detection policy, whenever an update was
propagated down through the OD policy without actually enabling OD, it
would incorrectly send a duplicate connectivity state notification for
the subchannels. This meant that a subchannel that was already in state
READY would report READY again, which would also incorrectly trigger
resetting the non_empty_since value.
This PR makes two changes:
1. Fix the bug in outlier_detection that caused it to generate the
spurious duplicate READY updates.
2. Fix WRR to reset the non_empty_since value when a subchannel goes
READY only if the subchannel has seen a previous state update and only
if that previous state was not READY (see the sketch below). The
duplicate READY notifications should not actually happen anymore now
that the OD policy has been fixed, but it's better to be defensive.
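A minimal sketch of the intended condition for change 2; the names (`last_seen_state_`, `non_empty_since_`, the struct itself) are illustrative, not the actual WRR code:

```cpp
#include <optional>

#include <grpc/grpc.h>

// Hypothetical per-subchannel state inside the WRR picker machinery.
struct SubchannelWeightState {
  std::optional<grpc_connectivity_state> last_seen_state_;
  std::optional<long> non_empty_since_;  // start of usable weight data

  void OnConnectivityStateChange(grpc_connectivity_state new_state) {
    // Restart the blackout period only on a genuine transition into READY:
    // there must be a previously seen state, and it must not be READY.
    if (new_state == GRPC_CHANNEL_READY && last_seen_state_.has_value() &&
        *last_seen_state_ != GRPC_CHANNEL_READY) {
      non_empty_since_.reset();
    }
    last_seen_state_ = new_state;
  }
};
```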
Fixes b/290983884.
Adds access token lifetime configuration for workload identity
federation with service account impersonation for both explicit and
implicit flows.
Changes:
1. Adds a new member, "service_account_impersonation", to the
ExternalAccountCredentials class; "token_lifetime_seconds" is a member
of "service_account_impersonation".
2. Adds validation checks, such as that token_lifetime_seconds is
between the minimum and maximum accepted values, during the creation of
an ExternalAccountCredentials object.
3. Appends "lifetime" to the body of the service account impersonation
request.
Tests:
1. Modifies a test to check that the default value is passed when
"service_account_impersonation" is empty.
2. Adds tests to check that the token_lifetime_seconds value is
propagated to the request body.
3. Adds tests to verify that an error is thrown when
token_lifetime_seconds is invalid.
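As an illustration of the new configuration knob, a sketch of a credentials JSON carrying the field, fed through the C-core creation API; the audience, URLs, file path, and 3600-second lifetime are placeholder values:

```cpp
#include <grpc/grpc_security.h>

grpc_call_credentials* MakeExternalAccountCreds() {
  // Placeholder config; "service_account_impersonation" /
  // "token_lifetime_seconds" is the new part exercised here.
  const char* json = R"({
    "type": "external_account",
    "audience": "//iam.googleapis.com/projects/123/locations/global/workloadIdentityPools/pool/providers/provider",
    "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/sa@example.iam.gserviceaccount.com:generateAccessToken",
    "service_account_impersonation": {
      "token_lifetime_seconds": 3600
    },
    "token_url": "https://sts.googleapis.com/v1/token",
    "credential_source": {"file": "/path/to/subject/token"}
  })";
  return grpc_external_account_credentials_create(json, /*scopes_string=*/"");
}
```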
- Support call finalizers in the filter test.
- Add an accessor to the filter implementation from the channel, so that
it can be interrogated by tests.
- Add a matcher to ensure that some metadata is *not* in a metadata
batch (functionality needed to support the additional testing we
discussed this morning; see the sketch below).
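A minimal sketch of such a "key absent" matcher; the real one operates on grpc_metadata_batch, but a plain map stands in here to show the shape:

```cpp
#include <map>
#include <string>

#include "gmock/gmock.h"
#include "gtest/gtest.h"

// Matches any associative container that does NOT contain `key`.
MATCHER_P(LacksMetadataKey, key, "") { return arg.find(key) == arg.end(); }

TEST(MetadataMatcherSketch, AbsentKey) {
  std::map<std::string, std::string> metadata = {{"user-agent", "grpc-c++"}};
  EXPECT_THAT(metadata, LacksMetadataKey("grpc-status-details-bin"));
}
```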
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
In real services most of our time ends up in the `Read1()` function,
which feeds one byte into the bit buffer.
Change this to read in as many bytes as possible at a time into that
buffer.
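A minimal sketch of the refill idea; the struct and field names are illustrative, not the generated parser's actual code:

```cpp
#include <cstdint>

// Instead of a Read1() that adds a single byte per call, refill the bit
// buffer with as many whole bytes as currently fit.
struct BitReader {
  const uint8_t* cur;
  const uint8_t* end;
  uint64_t buffer = 0;  // newest byte occupies the low bits
  int bits = 0;         // number of valid bits currently in `buffer`

  void Refill() {
    while (bits <= 56 && cur != end) {  // room for at least one more byte
      buffer = (buffer << 8) | *cur++;
      bits += 8;
    }
  }
};
```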
Additionally, generate all possible parser geometries (to some depth),
and add a benchmark for them. Run that benchmark and select the best
geometry for decoding base64 strings (since this is the main use case).
(This gives about a 30% speed boost when parsing
base64-then-huffman-encoded random binary strings.)
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
tcp_posix_test incorrectly assumes that all endpoint writes with
timestamps enabled will be successfully traced. Remove the
timestamp-checking tests to prevent flakes when the test is enabled
internally.
Most of the time parsing succeeds, and only rarely do we see an error.
This change reduces the parse memento size from 120 bytes to 56 bytes.
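A hedged sketch of one way a reduction like this can be achieved (the actual layout in this PR may differ): since errors are rare, keep the error details behind a pointer so the common success case pays only for the pointer:

```cpp
#include <cstdint>
#include <memory>
#include <string>

// Rarely populated diagnostics, heap-allocated only when parsing fails.
struct ParseErrorDetails {
  std::string message;
  // ... other diagnostic fields that only matter on error ...
};

struct ParseMemento {
  uint32_t parsed_value = 0;                 // hot, always-used state
  std::unique_ptr<ParseErrorDetails> error;  // nullptr on success
};
```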
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>