This test is flaky in CI and is being replaced by the new EventEngine-based implementation, so it is being disabled.
Will clean these tests up after everything has switched to EventEngine.
Closes #37636
Closes #37572
Final piece of gRFC A83 (https://github.com/grpc/proposal/pull/438): the GCP authentication filter itself.
Infrastructure changes include:
- Added a general-purpose LRU cache library that can be reused elsewhere (sketched after this list).
- Fixed the client channel code to use the channel args returned by the resolver for the dynamic filters. This was necessary so that the GCP auth filter could access the `XdsConfig` object, which is passed via a channel arg.
- Unlike the other xDS HTTP filters we support, the GCP auth filter does not support config overrides, and its configuration includes a cache size parameter that we always need at the channel level, not per-call. As a result, I had to change the xDS HTTP filter API to give it the ability to set top-level fields in the service config, not just per-method fields. (We use the service config as a way of passing configuration down into xDS HTTP filters.) Note that for now, this works only on the client side, because we don't have machinery for a top-level service config on the server side.
- The GCP auth filter is also the first case where the filter needs to know its instance name from the xDS config, so I changed the xDS HTTP filter API to plumb that through.
- Fixed a bug in the HTTP client library that prevented the override functions from declining to override a particular request.
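The LRU cache mentioned in the first bullet above can be pictured roughly as follows. This is a minimal sketch assuming a map-plus-recency-list design; the class and method names are hypothetical, not the actual gRPC library API.
```cpp
// Minimal sketch of a general-purpose LRU cache (hypothetical names,
// not gRPC's API): an unordered_map indexes into a recency-ordered list.
#include <cstddef>
#include <list>
#include <optional>
#include <unordered_map>
#include <utility>

template <typename Key, typename Value>
class LruCache {
 public:
  explicit LruCache(size_t max_size) : max_size_(max_size) {}

  // Returns the cached value and marks the entry as most recently used.
  std::optional<Value> Get(const Key& key) {
    auto it = index_.find(key);
    if (it == index_.end()) return std::nullopt;
    // Move the entry to the front of the recency list.
    entries_.splice(entries_.begin(), entries_, it->second);
    return it->second->second;
  }

  // Inserts or updates an entry, evicting the least recently used entry
  // if the cache is full.
  void Put(const Key& key, Value value) {
    auto it = index_.find(key);
    if (it != index_.end()) {
      it->second->second = std::move(value);
      entries_.splice(entries_.begin(), entries_, it->second);
      return;
    }
    if (entries_.size() >= max_size_) {
      index_.erase(entries_.back().first);
      entries_.pop_back();
    }
    entries_.emplace_front(key, std::move(value));
    index_[key] = entries_.begin();
  }

 private:
  size_t max_size_;
  std::list<std::pair<Key, Value>> entries_;  // front == most recently used
  std::unordered_map<Key, typename std::list<std::pair<Key, Value>>::iterator>
      index_;
};
```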
Closes #37550
[Gpr_To_Absl_Logging] Remove gpr_log; add absl LOG wrappers
List of changes in this PR:
1. Replacing all instances of gpr_log in PHP and Ruby with the new absl wrapper APIs. The replacement mapping is given below:
gpr_log(GPR_ERROR, ...)
=> grpc_absl_log_error
gpr_log(GPR_INFO, ...)
=> grpc_absl_log_info - printing a simple message
=> grpc_absl_log_info_int - printing a message and a number
=> grpc_absl_log_info_str - printing two strings
gpr_log(GPR_DEBUG, ...)
=> grpc_absl_vlog - printing a simple message
=> grpc_absl_vlog_int - printing a message and a number
=> grpc_absl_vlog_str - printing two strings
Also adding a grpc_absl_vlog2_enabled() check around each converted gpr_log(GPR_DEBUG, ...) call.
2. In src/python/grpcio_observability/grpc_observability/observability_util.cc, one instance of gpr_log was missed in the earlier absl LOG migration; fixing that here.
3. Deleting deprecated gpr constructs: gpr_log_severity, GPR_DEBUG, GPR_INFO, GPR_ERROR, gpr_log.
4. Adding new APIs for Ruby and PHP. These APIs are very simple wrappers around absl logging (a rough sketch is shown after this list).
5. Removing the legacy functions from the platform-specific log.cc files; these files are now safe to delete.
6. Fixing the allow list in banned_functions.py. This makes sure the new wrappers don't spread throughout the codebase: only the PHP and Ruby files are allow-listed to use them, and using these wrappers anywhere else should fail the sanity tests.
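To give a rough idea of what these wrappers look like, here is a hedged sketch: the function names come from this PR's description, but the signatures and bodies below are illustrative assumptions, not the actual gRPC implementation.
```cpp
// Hedged sketch of thin wrappers over absl logging. The function names
// match this PR's description; the signatures and bodies are assumptions.
#include <cstdint>

#include "absl/log/log.h"
#include "absl/log/vlog_is_on.h"

// Error- and info-level messages map directly onto absl LOG.
inline void grpc_absl_log_error(const char* msg) { LOG(ERROR) << msg; }
inline void grpc_absl_log_info(const char* msg) { LOG(INFO) << msg; }

// Prints a message followed by a number.
inline void grpc_absl_log_info_int(const char* msg, intptr_t n) {
  LOG(INFO) << msg << n;
}

// Prints two strings back to back.
inline void grpc_absl_log_info_str(const char* msg1, const char* msg2) {
  LOG(INFO) << msg1 << msg2;
}

// Debug-level messages map onto VLOG(2).
inline void grpc_absl_vlog(const char* msg) { VLOG(2) << msg; }
inline void grpc_absl_vlog_int(const char* msg, intptr_t n) {
  VLOG(2) << msg << n;
}
inline void grpc_absl_vlog_str(const char* msg1, const char* msg2) {
  VLOG(2) << msg1 << msg2;
}

// Guard used around converted gpr_log(GPR_DEBUG, ...) call sites.
inline bool grpc_absl_vlog2_enabled() { return VLOG_IS_ON(2); }
```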
Closes #37431
Add validation of the `Audience` cluster metadata type, as per gRFC A83 (https://github.com/grpc/proposal/pull/438).
I had previously changed the metadata to be represented as JSON in #37468. However, while working on the GCP Authentication filter implementation, I realized that that's not an ideal representation, because it would have required us to validate the JSON on a per-RPC basis, which would be bad for performance. So I've changed the representation of metadata to be an abstract type, and we now store the `Audience` metadata as a simple string. I've also moved metadata into its own type with its own validation code, so that in the future we can use it in places other than CDS (many xDS resource types have metadata fields).
While I was at it, I also added some helper functions for validating the `UInt32Value` and `UInt64Value` wrapper protos (sketched below).
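For illustration, a wrapper-proto validation helper along these lines could look like the sketch below. The function name, range parameters, and use of the generated C++ proto type are assumptions for the example, not the actual helpers added in this PR (which operate on the xDS parsing path).
```cpp
// Hedged sketch of a helper that validates a UInt32Value wrapper proto
// against a caller-supplied range; names are hypothetical.
#include <cstdint>

#include "absl/status/status.h"
#include "absl/status/statusor.h"
#include "absl/strings/str_cat.h"
#include "google/protobuf/wrappers.pb.h"

absl::StatusOr<uint32_t> ValidateUInt32Value(
    const google::protobuf::UInt32Value& wrapper, uint32_t min_value,
    uint32_t max_value) {
  const uint32_t value = wrapper.value();
  if (value < min_value || value > max_value) {
    return absl::InvalidArgumentError(absl::StrCat(
        "value ", value, " out of range [", min_value, ", ", max_value, "]"));
  }
  return value;
}
```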
Closes #37566
Just thought I'd contribute some typo fixes I stumbled upon. Nothing controversial (hopefully), just 74 simple fixes.
Use the following command to get a quick and dirty summary of the specific corrections made:
```shell
git diff HEAD^! --word-diff-regex='\w+' -U0 \
| grep -E '\[\-.*\-\]\{\+.*\+\}' \
| sed -r 's/.*\[\-(.*)\-\]\{\+(.*)\+\}.*/\1 \2/' \
| sort | uniq -c | sort -n
```
FWIW, the top typos are:
* satisifed (8)
* uncommited (7)
* tranparent (7)
* expecially (3)
* recieves (3)
* correponding (2)
* slighly (2)
* wierdly (2)
Closes #37450
[Gpr_To_Absl_Logging] Removing absl_vlog2_enabled.
@apolcyn: Please review the Ruby code.
@yashykt: Please review the C++ code and the Python sanity test.
Closes #37476
Fix for grpc_build_protobuf_at_head -> python_linux_opt_native_buildonly timeout
Found the following issue during the Python build:
When we mount and reuse the existing repo from the host machine inside the docker container, the `tools/bazel.rc` file is shared into the container. The Bazel override host location written to `tools/bazel.rc` by tools/.../grpc_build_submodule_at_head.sh (outside the docker container) forces Bazel to look for that same host location inside the container, which doesn't exist.
Overriding it again with the working directory inside the container should solve this issue.
Closes #37404
CentOS 7 reached EOL on June 30th.
The test was still working until google-protobuf released 3.25.4 about a week ago, which seems to no longer compile on CentOS 7 because it needs a new header, `stdatomic.h`.
Closes #37401
This adds two new benchmarks.
Benchmark 1: `bm_picker`
------
Measures the pick performance of various load balancing policies. For now we cover `pick_first` and `weighted_round_robin` at 1, 10, 100, 1000, 10000, and 100000 backends. (A rough sketch of the benchmark shape follows the output below.)
Today's output:
```
------------------------------------------------------------------------------
Benchmark                                    Time             CPU   Iterations
------------------------------------------------------------------------------
BM_Pick/pick_first/1                      20.4 ns         20.4 ns        68285
BM_Pick/pick_first/10                     20.6 ns         20.6 ns        68274
BM_Pick/pick_first/100                    20.5 ns         20.5 ns        67817
BM_Pick/pick_first/1000                   20.6 ns         20.6 ns        67347
BM_Pick/pick_first/10000                  20.7 ns         20.7 ns        67317
BM_Pick/pick_first/100000                 20.9 ns         20.9 ns        67385
BM_Pick/weighted_round_robin/1            54.7 ns         54.7 ns        26641
BM_Pick/weighted_round_robin/10           54.2 ns         54.2 ns        25828
BM_Pick/weighted_round_robin/100          55.2 ns         55.2 ns        26210
BM_Pick/weighted_round_robin/1000         54.1 ns         54.1 ns        25678
BM_Pick/weighted_round_robin/10000        77.3 ns         76.6 ns        15776
BM_Pick/weighted_round_robin/100000        148 ns          148 ns         9882
```
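As a rough idea of the benchmark shape (not the actual `bm_picker` code; the picker type and names below are placeholders), a picker micro-benchmark constructs the policy outside the timed loop and measures only the per-pick cost:
```cpp
// Hedged sketch of a pick micro-benchmark in the spirit of bm_picker.
// ToyRoundRobinPicker is a placeholder, not gRPC's SubchannelPicker.
#include <atomic>
#include <cstddef>

#include <benchmark/benchmark.h>

class ToyRoundRobinPicker {
 public:
  explicit ToyRoundRobinPicker(size_t num_backends)
      : num_backends_(num_backends) {}

  // Returns the index of the chosen backend.
  size_t Pick() {
    return next_.fetch_add(1, std::memory_order_relaxed) % num_backends_;
  }

 private:
  size_t num_backends_;
  std::atomic<size_t> next_{0};
};

static void BM_ToyPick(benchmark::State& state) {
  // Picker construction happens outside the timed loop.
  ToyRoundRobinPicker picker(static_cast<size_t>(state.range(0)));
  for (auto _ : state) {
    benchmark::DoNotOptimize(picker.Pick());
  }
}
// Sweep backend counts 1, 10, 100, ..., 100000 as in the output above.
BENCHMARK(BM_ToyPick)->RangeMultiplier(10)->Range(1, 100000);

BENCHMARK_MAIN();
```
The real benchmark substitutes actual `pick_first` and `weighted_round_robin` pickers for the toy picker shown here.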
Benchmark 2: `bm_load_balanced_call_destination`
-----
This benchmark measures call performance when a call spine passes through a `LoadBalancedCallDestination`; `BM_LoadBalancedCallDestination` additionally measures the construction/destruction cost of that object.
We do not consider picker performance in this benchmark as it's separately covered by `bm_picker` above.
Today's output:
```
-----------------------------------------------------------------------------------------------------------------------------------------
Benchmark                                                                                          Time             CPU   Iterations
-----------------------------------------------------------------------------------------------------------------------------------------
BM_UnaryWithSpawnPerEnd<UnstartedCallDestinationFixture<LoadBalancedCallDestinationTraits>>     1255 ns         1255 ns         1076
BM_UnaryWithSpawnPerOp<UnstartedCallDestinationFixture<LoadBalancedCallDestinationTraits>>      1459 ns         1459 ns          939
BM_ClientToServerStreaming<UnstartedCallDestinationFixture<LoadBalancedCallDestinationTraits>>    209 ns          209 ns         6775
BM_LoadBalancedCallDestination                                                                   92.8 ns         92.8 ns        15063
```
Notes
------
There's some duplicated code between the benchmarks and tests; this is OK. As the tests evolve we'll likely want to add more checks to the fixtures, whereas as the benchmarks evolve we may well want to optimize the fixtures so that the performance of the systems under test dominates more. That is, the duplicated code is expected to have different evolutionary tracks.
Closes #37052
The Ruby artifact build is timing out at 1hr30m, specifically `build_artifact.ruby_native_gem_linux_aarch64-linux` in the `tools/internal_ci/linux/grpc_distribtests_ruby.sh` job. Most of the other Ruby builds already take around 1hr15m, so build times are increasing across the board.
@stanley-cheung this should probably be investigated. In the meantime, to hopefully unblock the v1.66 release, let's increase the build timeout.
Closes #37341
The oldest gcc version that gRPC supports as of today is gcc 7, but gcc 7 has an issue with template support that gRPC has already hit. We recently managed to fix it in the gRPC library code, but we still have some instances in our test code. Fixing those is not easy, since it takes a lot of trial and error to find a form that satisfies gcc 7, and gcc 7 will eventually be dropped from our supported compilers. So let's add this mitigation: only the main grpc++ target is tested with gcc 7, so that users can keep using gRPC with it without our having to fix this hairy issue.
Fixes https://github.com/grpc/grpc/issues/36751
Closes #37257
- add a benchmark for various metadata creation styles
- add factory functions for status + message - these are 3-10x faster than going via absl::Status
- add a `MakePooledForOverwrite` function to Arena and use it everywhere. This naming matches `std::make_unique_for_overwrite` in C++20 (see the standard-library illustration after this list) and avoids some language-mandated initialization in `Table` (underlying `MetadataMap<>`), speeding creation of metadata handles by 30%
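To illustrate the naming analogy with standard C++20 (this is standard-library behavior, not gRPC's Arena code): `std::make_unique` value-initializes, zero-filling trivial members, while `std::make_unique_for_overwrite` default-initializes and skips that work; `MakePooledForOverwrite` applies the same idea to arena-pooled objects.
```cpp
// Standard-library illustration of the naming analogy only; this is not
// gRPC's Arena or MakePooledForOverwrite code.
#include <memory>

struct Slot {
  int value;  // trivial member: initialization behavior differs below
};

int main() {
  // Value-initialization: value is zero-filled before use.
  auto zeroed = std::make_unique<Slot>();
  // Default-initialization (C++20): value is left indeterminate, so the
  // language-mandated zero-fill is skipped for callers that will
  // overwrite the contents anyway.
  auto uninitialized = std::make_unique_for_overwrite<Slot>();
  uninitialized->value = 42;  // must write before reading
  return zeroed->value;       // 0
}
```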
For `bm_call_spine` we see before:
```
BM_UnaryWithSpawnPerEnd<CallSpineFixture>_median 745 ns 745 ns
```
and after:
```
BM_UnaryWithSpawnPerEnd<CallSpineFixture>_median 699 ns 699 ns
```
Closes #37111