This change ensures that a valid driver configuration is always included in generated load test configurations, and that the driver pod is named with an index (`0`, since there is only one driver), in the same way as client and server pods.
Generated examples can be found in https://github.com/grpc/test-infra/pull/189.
With this change, it is no longer necessary to specify a driver image in order to set a driver name and pool, so the driver image is removed from the kokoro jobs.
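For illustration, a sketch of the resulting pod names, assuming a hypothetical test named `my-test` with two clients and one server (the exact prefix format is whatever the generator emits):

```bash
# Hypothetical pod names for a test named "my-test" after this change:
#   my-test-driver-0    <- the driver now carries an index (always 0)
#   my-test-client-0
#   my-test-client-1
#   my-test-server-0
kubectl get pods -l loadtest=my-test -o name
```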
* add no_arm64 tag to resolver_component_tests_runner_invoker tests
* skip no_arm64 tests when running on arm64
* increase kokoro jobs timeout for ARM64 C/C++ bazel tests
* use 8 core instance for arm64 bazel C/C++ tests
* Update load test template and config generation.
This change includes the following features and fixes:
* Add a script to generate load test examples.
* Update template generation logic to support a round trip from configs to templates (handling repeated clients and servers for the same language, and named clients and servers in source configs); see the sketch after this list.
* Integrate safe language names from scenario config.
* Update template and config formatting (now that generation is round-trip).
* Fix shellcheck lint warnings.
* Update README.md.
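For illustration, a hypothetical invocation of the round trip (`loadtest_template.py` appears elsewhere in this log; `loadtest_config.py`, the flags, and the paths are assumptions, not the real CLI):

```bash
# Hypothetical round trip; script names, flags, and paths are assumptions.
./loadtest_template.py -i examples/*.yaml -o loadtest_template.yaml
./loadtest_config.py -t loadtest_template.yaml -o regenerated/
diff -r examples/ regenerated/   # a lossless round trip produces no diff
```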
This change fixes the format and location of the images in the driver configuration, so tests run with the proper driver configuration instead of a default.
* List the full status of tests in the Errored state that still have running pods.
This change lists the full status, including failure reason and start and end times, to help with debugging.
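A minimal sketch of such a listing, assuming the tests are exposed as a `loadtests` resource and that `.status` carries `state`, `reason`, and start/stop times (the field names are assumptions about the CRD):

```bash
# List name, state, failure reason, and start/stop times for every test;
# the .status field names are assumptions about the LoadTest CRD.
kubectl get loadtests -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.state}{"\t"}{.status.reason}{"\t"}{.status.startTime}{"\t"}{.status.stopTime}{"\n"}{end}'
```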
* milestone 1: static build instance, cmake
* on-demand vm per build, and use bazel
* PR cleanup
* pr cleanup: use builtin bazel wrapper
* pr: misc cleanup
* less verbose unzip
* small cleanup of shell scripts and config file
* using rsync to copy the workspace is much faster (see the composite sketch after this list)
* simpler way to increase worker disk size
* simplify bazel build
* increase job timeout
* make max instance lifespan setting more obvious
* refactor the exitcode logic
* shut down AWS instance as soon as possible
* sudo shutdown is required
* add useful AWS instance tags
* move aws integration scripts under AWS folder
* adjust scripts
* make sure ssh session closes even if stdout/stderr remains open
* add test scripts for multiple languages
* improvements to the run_remote_test script
* add cfg files for aws kokoro jobs
Co-authored-by: Alexander Midlash <amidlash@google.com>
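As a rough composite of the patterns above (rsync for the workspace copy, an ssh invocation that cannot hang on open streams, explicit exit-code capture, and shutting the instance down as soon as the run ends), a minimal sketch; `$WORKER_HOST`, `run_tests.sh`, and all paths are assumptions:

```bash
#!/bin/bash
# Minimal sketch of the remote-run flow; host, script, and paths are assumptions.
set -u

# Copy the workspace with rsync: incremental, so much faster than scp or tar.
rsync -az --delete ./ "$WORKER_HOST":workspace/

# Run the tests. Redirecting the remote command's output means the ssh
# session can close even if a leftover child keeps stdout/stderr open.
ssh "$WORKER_HOST" 'cd workspace && ./run_tests.sh >run.log 2>&1'
exitcode=$?  # capture the exit code before any cleanup overwrites $?

# Fetch the log, then shut the instance down as soon as possible
# (shutdown requires sudo on the worker).
scp "$WORKER_HOST":workspace/run.log . || true
ssh "$WORKER_HOST" 'sudo shutdown -h now' || true

exit "$exitcode"
```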
* List annotations of tests that have running pods and are in Errored state.
* Fix format.
* Use pod owner reference instead of "loadtest" label.
* Delete loadtests that have running pods and are in Errored state.
* Improve jsonpath expressions.
* Add comment.
* List tests but do not delete them.
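A sketch of the listing side, assuming the resources are exposed as `loadtests` and that worker pods carry an owner reference naming their test; `my-test` is an assumed name:

```bash
# Print the annotations of one test; "my-test" is an assumed name.
kubectl get loadtest my-test -o jsonpath='{.metadata.annotations}'

# Find the pods owned by that test via the owner reference rather than
# the "loadtest" label.
kubectl get pods -o json |
  jq -r '.items[] | select(.metadata.ownerReferences[]?.name == "my-test") | .metadata.name'
```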
* Run all kokoro performance tests on dedicated node pools.
Both official and experimental tests run with kokoro are set to run on separate node pools with the suffix "-ci" (drivers-ci, workers-8core-ci, workers-32core-ci), separate from the default pools used for manual runs.
This change sets the deadline of the master and experimental kokoro jobs to match their run intervals (4 hours and 12 hours, respectively), and marks runs initiated by kokoro as 'kokoro' for the master job and 'kokoro-test' for the experimental job.
Experiments show that four tests running concurrently (two on 8-core nodes and two on 32-core nodes) are enough to run all tests within two hours, with time to spare.
* Removes optional flag -a, allowing it to be changed later to a long-form flag.
* Updates concurrency levels to one more than what each worker node pool can support (each test requires two workers and there are nine nodes in each pool, so each pool can support four concurrent tests); see the worked numbers below.
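The worked numbers behind that setting (pool sizes taken from the paragraph above):

```bash
# 9 nodes per pool, 2 worker nodes per test.
echo $(( 9 / 2 ))      # 4: tests one pool can run at once
echo $(( 9 / 2 + 1 ))  # 5: the configured concurrency level (one more)
```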
* Employ prebuilt images in continuous build.
This commit updates the Kokoro build job to use prebuilt images to
run tests. The load test template was generated using
loadtest_template.py.
pip 21.1, released on Apr 24, introduced a regression for Python 3.6.1.
The regression was identified on Apr 24 and the fix was merged on Apr 25.
The fix is expected to be delivered in the 21.1.1 patch release.
There is no clear date for when 21.1.1 will be released.
Until then, pip is temporarily pinned to the previous release, 21.0.1.
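For reference, the pin amounts to installing the previous release explicitly (where exactly the pin lives depends on the job):

```bash
# Pin pip to the last known-good release until 21.1.1 ships.
python -m pip install pip==21.0.1
```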