* [flake] Fix max connection age
If the thread sending the request gets descheduled for too long (suppose
CI is under duress!), then the request will not get sent before the max
connection age limit hits, and we'll see the client request fail *without*
ever reaching the server.
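A minimal sketch of the knob involved here, assuming the test keeps exercising max connection age but with enough headroom that a descheduled client thread can't miss the window; the helper name and the values below are illustrative, not taken from the actual fix:
```
// Sketch only: a test server whose connection-age settings leave room for CI
// scheduling delays. GRPC_ARG_MAX_CONNECTION_AGE_MS and
// GRPC_ARG_MAX_CONNECTION_AGE_GRACE_MS are the standard channel args; the
// values are illustrative.
#include <memory>

#include <grpcpp/grpcpp.h>

std::unique_ptr<grpc::Server> BuildTestServer(grpc::Service* service) {
  grpc::ServerBuilder builder;
  builder.AddListeningPort("localhost:0", grpc::InsecureServerCredentials());
  builder.RegisterService(service);
  // Keep the connection alive well past any plausible scheduling hiccup.
  builder.AddChannelArgument(GRPC_ARG_MAX_CONNECTION_AGE_MS, 30 * 1000);
  // And give in-flight RPCs a grace period once the age limit is reached.
  builder.AddChannelArgument(GRPC_ARG_MAX_CONNECTION_AGE_GRACE_MS, 10 * 1000);
  return builder.BuildAndStart();
}
```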
* further fix
* WIP
* use the same lock for queue and picker -- still need to figure out how to drain queue safely
* working!
* always unref pickers in WorkSerializer
* simplify tracking of queued calls
* fix sanity
* simplify resolver queueing code
* fix tsan failure
* simplify queued LB pick reprocessing
* minor cleanups
* remove unnecessary wrap-around checks
* clang-format
* generate_projects
* fix pollset bug
* use absl::flat_hash_set<> instead of std::set<>
* fix use-after-free in retry code
* add missing BUILD dep
* Add info about ca cert used to verify chain.
The tsi_peer object will now contain the subject of the root/ca cert
that was used to verify the peer's chain during a handshake.
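A rough sketch of how a consumer of the handshake result might read that new property; the property name string below is an assumed placeholder, not necessarily the exact constant this change introduces:
```
// Sketch only: walk the tsi_peer properties produced by the handshake and pull
// out the verified-root-subject entry. The property name here is an assumed
// placeholder; the real constant lives in the TSI headers.
#include <string.h>

#include <string>

#include "src/core/tsi/transport_security_interface.h"

std::string GetVerifiedRootSubject(const tsi_peer* peer) {
  for (size_t i = 0; i < peer->property_count; ++i) {
    const tsi_peer_property* prop = &peer->properties[i];
    if (prop->name != nullptr &&
        strcmp(prop->name, "x509_verified_root_cert_subject") == 0) {
      return std::string(prop->value.data, prop->value.length);
    }
  }
  return "";
}
```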
* temp investigation
* Fix issues relating to overlapping CRL callback
* formatting on ssl_transport_security.cc
* Swap ca_cert naming
* Use preverify_ok instead of numbers
* Continue some renaming, addressing pr comments
* Removed early return if peer property setting fails
* Continue renaming
* clang-tidy
* Fix clang problem
* clang fixes
* Add null check in tests
* More PR changes. Behavior change: include root cert extraction when TSI_REQUEST_CLIENT_CERTIFICATE_AND_VERIFY is used
* Add intermediate ca, leaf cert, and test with them
* clang-tidy
* Basic formatting
* Add new keys to build for export
* Add new cert files to test BUILD
* build file style fix
* changes for chain test
* clang-format
* build clean
* Add $ to lines of code in README
* Add directive about X509_STORE_CTX_get0_chain
* formatting
These tests are failing because they're running with too few threads;
however, if we give them enough threads to catch bugs, they're flaky.
Remove them and get the team some bandwidth back.
* [http] Don't drop connections on metadata limit exceeded
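For context, the limit in question is the one configured via the standard metadata-size channel args; a hedged sketch of a server setting them explicitly (values illustrative). With this change, a request that blows past the limit should fail on its own rather than taking the whole connection down with it:
```
// Sketch only: configure the soft and hard metadata-size limits on a server.
// GRPC_ARG_MAX_METADATA_SIZE and GRPC_ARG_ABSOLUTE_MAX_METADATA_SIZE are the
// standard channel args; the values are illustrative.
#include <grpcpp/grpcpp.h>

void ConfigureMetadataLimits(grpc::ServerBuilder& builder) {
  // Soft limit: requests above this may be rejected.
  builder.AddChannelArgument(GRPC_ARG_MAX_METADATA_SIZE, 16 * 1024);
  // Hard limit: requests above this are always rejected.
  builder.AddChannelArgument(GRPC_ARG_ABSOLUTE_MAX_METADATA_SIZE, 64 * 1024);
}
```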
* remove bad test
* Automated change: Fix sanity tests
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
* Support python 3.11 on aarch64
Fixes https://github.com/grpc/grpc/issues/30927
* Change base tag to something more specific
* Update current version
---------
Co-authored-by: Richard Belleville <rbellevi@google.com>
* Revert "Revert "Revert "Revert "server: introduce ServerMetricRecorder API and move per-call reporting from a C++ interceptor to a C-core filter (#32106)" (#32272)" (#32279)" (#32293)"
This reverts commit 1f960697c5.
* Do not create CallMetricRecorder if call is null.
* [channel_args] Use c++ channel args during channel init
Previously we were converting to C and then back to C++ for each
filter... this ought to save some CPU time during connection
establishment.
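A toy illustration of what this buys, assuming the internal grpc_core::ChannelArgs value type with its Set()/GetInt() accessors; the helper names are hypothetical and the key strings are the standard channel args:
```
// Sketch only: ChannelArgs is an immutable C++ value type, so code running
// during channel init can build and query it directly instead of materializing
// a C grpc_channel_args array for each filter and parsing it back.
#include <climits>

#include "src/core/lib/channel/channel_args.h"

grpc_core::ChannelArgs BuildArgsForChannelInit() {
  return grpc_core::ChannelArgs()
      .Set("grpc.max_connection_age_ms", 30000)
      .Set("grpc.primary_user_agent", "demo-agent");
}

bool FilterWantsAggressiveKeepalive(const grpc_core::ChannelArgs& args) {
  // Read straight from the C++ representation -- no C round trip per filter.
  return args.GetInt("grpc.keepalive_time_ms").value_or(INT_MAX) < 60000;
}
```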
* Automated change: Fix sanity tests
* cpp channel filters
* Automated change: Fix sanity tests
* iwyu
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
* Add timeout example
* Add pb2 file to example
* Remove .proto file
* Add keep_alive example
* Add reflection client
* fixes
* Add example for health_check
* Changes based on comments
* Fix pylint
* Revert "Revert "server: introduce ServerMetricRecorder API and move per-call reporting from a C++ interceptor to a C-core filter (#32106)" (#32272)"
This reverts commit deb1e25543.
* Fix by caching call metric recording stuff in async request
PR #32106 caused MSAN errors in some tests by dereferencing the server
object while async calls were still active after the server was
destroyed. Instead, cache the ServerMetricRecorder pointer.
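The shape of that fix, in a deliberately made-up sketch (none of these type names are the real gRPC ones): grab the recorder pointer while the server is known to be alive, so the async completion path never has to touch the server object again.
```
// Sketch only, with made-up types: the async request captures the metric
// recorder pointer up front instead of dereferencing the server later, which
// is the use-after-destruction the MSAN reports pointed at. The recorder is
// owned by the application and outlives the server.
class FakeMetricRecorder {
 public:
  void RecordCpu(double utilization) { last_cpu_ = utilization; }

 private:
  double last_cpu_ = 0.0;
};

class FakeServer {
 public:
  explicit FakeServer(FakeMetricRecorder* recorder) : recorder_(recorder) {}
  FakeMetricRecorder* metric_recorder() const { return recorder_; }

 private:
  FakeMetricRecorder* recorder_;  // not owned
};

class FakeAsyncRequest {
 public:
  // Capture the pointer while `server` is known to be alive...
  explicit FakeAsyncRequest(const FakeServer* server)
      : recorder_(server->metric_recorder()) {}

  // ...so the completion path never touches the server object, which may
  // already be gone by the time the async call finishes.
  void OnRpcFinished(double cpu_utilization) {
    if (recorder_ != nullptr) recorder_->RecordCpu(cpu_utilization);
  }

 private:
  FakeMetricRecorder* recorder_;  // not owned
};
```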
* copyright headers fixed
* clang fixes.
Our test clusters are in a broken/dirty state (except the URL map's
"basic" cluster, which isn't affected).
This PR switches to the newly created clusters to:
1. Get a data point on whether newly created clusters are affected
by the same issues.
2. Allow for descriptive work on the old clusters.
3. Hopefully, bring our tests back to green.
Bonus: more sensible cluster names.
* WIP. A seemingly properly failing test
* WIP. Pre-fork handlers now work
* Roughly working.
* Clean up
* Clean up more
* Add to CI
* Format
* Ugh. Remove swap file
* And another
* clean up
* Add copyright
* Format
* Remove another debug line
* Add stub forkable methods
* Remove use of 3.9+ function
* Remove unintentional double copyright
* drfloob review comments
* Only hold lock during Close once
* Create separate job for fork test
* Bump up gdb timeout
* Format
There was a ~1% flake in grpclb end2end tests that was reproducible in opt builds, manifesting as a hang, usually in the SingleBalancerTest.Fallback test. Through experimentation, I found that by skipping the death test in the grpclb end2end test suite, the hang was no longer reproducible in 10,000 runs. Similarly, moving this test to the end of the suite, or making it run first (as is the case in this PR), resulted in 0 failures in 3000 runs.
It's not yet clear to me why the death test causes things to be unstable in this way. It's clear from the logs that one test does affect the rest: grpc_init is done once for all tests, so all tests utilize the same EventEngine ... until the death test completes and a new EventEngine is created for the next test.
I think this death test is sufficiently artificial that it's fine to change the test ordering itself, and ignore the wonky intermediate state that results from it.
Reproducing the flake:
```
tools/bazel --bazelrc=tools/remote_build/linux.bazelrc test \
-c opt \
--test_env=GRPC_TRACE=event_engine \
--runs_per_test=5000 \
--test_output=summary \
test/cpp/end2end/grpclb_end2end_test@poller=epoll1
```
* see if experiments can lose weight
* test
* test
* test
* test
* contra test
* contra test
* add explainer
* Automated change: Fix sanity tests
* fixes
* fix
* strict-bs
* comments
* fixes
* iwyu
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>