---------
Co-authored-by: Bradley Hess <bdhess@google.com>
Co-authored-by: AJ Heller <hork@google.com>
This only affects pull requests for the `Bazel RBE Build Tests` job. The
equivalent master CI job will still build all end2end tests, including
experiments.
1. Changes the resource retention period to 2 days for all resources
(previously 7 days for TD resources, 6 hours for k8s). This solved a
problem with k8s resources being stuck because corresponding TD
resources weren't deleted.
2. Resume on namespace cleanup failures
3. Add secondary lb cluster cleanup logic
4. Modularize `grpc_xds_resource_cleanup.sh`
5. Make `KubernetesNamespace`'s methods `pretty_format_status` and
`pretty_format_metadata` public
6. `pretty_format_status`: also print the resource kind and the creation and
deletion-requested dates (a rough sketch follows below)
ref b/259724370, cl/517235715
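A minimal sketch of the now-public `pretty_format_status` helper from item 6, showing the resource kind and creation/deletion timestamps; the field handling is illustrative and not the actual framework code:

```python
# Minimal sketch, not the actual KubernetesNamespace implementation: a public
# pretty_format_status that also surfaces the resource kind and the
# creation/deletion-requested timestamps (standard k8s API metadata fields).
from typing import Any, Mapping


def pretty_format_status(resource: Mapping[str, Any]) -> str:
    """Formats a k8s resource for logging during cleanup."""
    metadata = resource.get("metadata", {})
    lines = [
        f"kind: {resource.get('kind', '<unknown>')}",
        f"name: {metadata.get('name', '<unknown>')}",
        f"creationTimestamp: {metadata.get('creationTimestamp', '<none>')}",
        f"deletionTimestamp: {metadata.get('deletionTimestamp', '<none>')}",
        f"status: {resource.get('status', '<none>')}",
    ]
    return "\n".join(lines)
```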
Split the nonbazel test into two: one for the bazel build, which will be labeled
as required, and the other for the remaining nonbazel tests.
Corresponding internal CL: cl/572652950
Bazelify tests from "linux/grpc_bazel_build" kokoro job by creating 3
bazelified tests - "build with strict warning", "build with no_xds=True"
and "build with no_xds=True negative test".
- also make the original "linux/grpc_bazel_build" kokoro job a no-op
(since bazelified tests now provide the same coverage).
- make C-core basictests use `--build_only` when running as bazelified
tests. This is because the volume of C core tests is expected to grow
very significantly after https://github.com/grpc/grpc/pull/34419 and
currently the non-bazelified counterpart of the tests (the presubmit
grpc_basictests_c_cpp_build_only job) is also "build only".
- make the linux presubmit job `grpc_basictests_c_cpp_build_only` a
noop, since the bazelified tests already give the same coverage on
presubmit.
Since many tests now run reliably as bazelified tests on RBE, we can
remove them from presubmit runs to speed up testing of PRs.
(For now, these jobs will still run on master; they can be removed from
master as a follow-up.)
- linux/grpc_distribtests_standalone is now fully covered by the bazel test
suite (a3b4c797a7/tools/bazelify_tests/test/BUILD (L202)); setting those
targets to `presubmit=False` will stop the tests from running on PRs.
- stop running tests from grpc_bazel_distribtest on PRs; instead, rely on
bazel distribtests running as bazelified tests.
The `work_stealing` experiment on its own is not very valuable, so let's
delete it and save CI resources. We have a benchmark for
`GRPC_EXPERIMENTS=event_engine_listener,work_stealing`, which is really
what we care about right now.
This should get the benchmarks running again. The dotnet benchmark is
broken (unclear if it's still necessary), and the grpc-go benchmark
build currently fails. The go benchmark should be re-enabled when the
dockerfiles are fixed. The rest of the dotnet benchmark configuration /
artifacts should be deleted or fixed as well. @jtattermusch
Based on https://github.com/grpc/grpc-go/pull/6463
Add "bazelified" non-bazel tests. See tools/bazelify_tests/README.md for
the core idea.
- add a bunch of test targets that run under docker and execute tests
that correspond to `run_tests.py -l LANG ...`
- many more tests can be added in the future
- to enable running some of the C/C++ portability tests easily, added
support for `--cmake_extra_configure_args` in run_tests.py (the change
is fairly small; a rough illustration of the flag plumbing is sketched below).
Example passing build that shows how test results are structured:
https://source.cloud.google.com/results/invocations/21295351-a3e3-4be1-b6e9-aaf52195a044/targets
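For reference, a rough illustration of how a flag like `--cmake_extra_configure_args` can be plumbed into the cmake configure step; the argument names and base configure arguments here are assumptions, not the actual run_tests.py change:

```python
# Rough illustration only, not the actual run_tests.py code: extra arguments
# from --cmake_extra_configure_args are appended to the cmake configure call.
import argparse
import subprocess

parser = argparse.ArgumentParser()
parser.add_argument(
    "--cmake_extra_configure_args",
    nargs="*",
    default=[],
    help="Extra arguments appended to the cmake configure invocation.",
)
args = parser.parse_args()

# -DgRPC_BUILD_TESTS=ON stands in for whatever base configure args the script uses.
configure_cmd = ["cmake", "..", "-DgRPC_BUILD_TESTS=ON"] + args.cmake_extra_configure_args
subprocess.check_call(configure_cmd)
```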
This enables the `event_engine_listener` and `work_stealing` experiments
together, which we expect to perform better. The
benchmark-config-generation script required some light modification to
support running multiple experiments at the same time.
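Running several experiments at once boils down to a single comma-separated `GRPC_EXPERIMENTS` value; a minimal sketch with an illustrative helper name, not the actual generator code:

```python
# Minimal sketch, not the actual benchmark-config-generation code: multiple
# experiments are enabled via one comma-separated GRPC_EXPERIMENTS value.
from typing import Dict, Iterable


def experiment_env(experiments: Iterable[str]) -> Dict[str, str]:
    """Builds the env var entry enabling all of the given gRPC experiments."""
    return {"GRPC_EXPERIMENTS": ",".join(experiments)}


# experiment_env(["event_engine_listener", "work_stealing"])
# -> {"GRPC_EXPERIMENTS": "event_engine_listener,work_stealing"}
```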
This adds a new GKE benchmark job, which runs the set of "dashboard"
scenarios for every gRPC experiment configured in the script. Results
are published to BigQuery at
`e2e_benchmarks.ci_cxx_experiment_results_${N}core.${experiment}`
See https://github.com/grpc/grpc/pull/33907 for the scenario config.
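A small sketch of how the per-experiment table path above can be composed; the helper name and the example core count are hypothetical:

```python
# Hypothetical helper, names are illustrative: composes the BigQuery table
# path e2e_benchmarks.ci_cxx_experiment_results_${N}core.${experiment}.
def experiment_results_table(num_cores: int, experiment: str) -> str:
    return f"e2e_benchmarks.ci_cxx_experiment_results_{num_cores}core.{experiment}"


# experiment_results_table(8, "work_stealing")
# -> "e2e_benchmarks.ci_cxx_experiment_results_8core.work_stealing"
```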
Scenarios for language `node` specify the server language as `node`
(instead of leaving it blank), so the flag `--allow_server_language=node`
must be added.
Scenarios for language `node_purejs` differ in name and in scenario
settings, but otherwise run on identical clients and servers. This
change treats `node_purejs` as `node` for the purpose of generating load
test configurations.
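A minimal sketch of treating `node_purejs` as `node` when resolving the client/server language for a load test config; the names here are illustrative, not the actual generator code:

```python
# Minimal sketch, not the actual load test config generator: node_purejs
# scenarios reuse node clients and servers, so both map to "node" here.
_RUNTIME_LANGUAGE_OVERRIDES = {"node_purejs": "node"}


def runtime_language(scenario_language: str) -> str:
    """Maps a scenario language to the client/server language that runs it."""
    return _RUNTIME_LANGUAGE_OVERRIDES.get(scenario_language, scenario_language)


# runtime_language("node_purejs") -> "node"
# runtime_language("node") -> "node"
```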
https://github.com/grpc/grpc/pull/33699 incorrectly changed the legacy
builds to not just use the test driver from master, but also to build
from it. This PR fixes the issue, and also updates the python job to use
the driver from master.
We are seeing `g++: fatal error: Killed signal terminated program
cc1plus` on PHP distribtest builds. In case it's an OOM, let's try
reducing the build parallelism to see if it helps.
Makes the changes necessary to run the new PSM interop framework on
Ubuntu 22.04:
- Install dependencies via apt: `kubectl`,
`google-cloud-sdk-gke-gcloud-auth-plugin` (previously were
pre-provisioned or available as part of gcloud distribution)
- Use venv instead of pyenv
- Use Python 3.10 instead of Python 3.9
Other changes:
- Do not update gcloud components - the one provisioned is relatively
recent, and expected to be updated as new base images are released
- Unpin pip from `21.0.1`. Not sure if we're OK with using the latest
one via `venv --upgrade-deps`, or if we should just pin it to something more
recent (currently it resolves to `pip 22.0.2` and `setuptools 68.0.0`)
ref b/274944592, cl/547690787