In #35384 these two lines were accidentally dropped, introducing a bug in the
script.
Sorry for the mistake.
Closes #35601
COPYBARA_INTEGRATE_REVIEW=https://github.com/grpc/grpc/pull/35601 from lepistone:fixup-bigquery-project 100d4d68ad
PiperOrigin-RevId: 599951984
Continue supporting the current grpc-testing project, which I assume is used
inside of Google, but also allow configuring a different project to upload
results to.
The format "project_id.dataset_id.table_id" is common for BigQuery, so it
seems idiomatic to do it this way. Adding a separate command line option
would be more complicated, because it would require changes all the way down
the chain (at least in the entry point for the test driver and in the
LoadTest controller).
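As a minimal sketch of what the dotted format enables (the helper name and fallback behavior are assumptions, not the actual script's logic):

```python
# Hypothetical helper: split "project_id.dataset_id.table_id" into parts,
# falling back to the grpc-testing project when no project is given.
def parse_bq_table(name: str, default_project: str = "grpc-testing"):
    parts = name.split(".")
    if len(parts) == 2:  # "dataset_id.table_id" -> use the default project
        return (default_project, parts[0], parts[1])
    if len(parts) == 3:  # fully qualified "project.dataset.table"
        return (parts[0], parts[1], parts[2])
    raise ValueError(f"Expected [project_id.]dataset_id.table_id, got {name!r}")
```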
Closes #35384
COPYBARA_INTEGRATE_REVIEW=https://github.com/grpc/grpc/pull/35384 from lepistone:choose_bigquery_project 2355fea28c
PiperOrigin-RevId: 595523944
This adds a new GKE benchmark job, which runs the set of "dashboard"
scenarios for every gRPC experiment configured in the script. Results
are published to BigQuery at
`e2e_benchmarks.ci_cxx_experiment_results_${N}core.${experiment}`
See https://github.com/grpc/grpc/pull/33907 for the scenario config.
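For clarity, the destination table is assembled per experiment and core count along these lines (an illustrative sketch; the helper name and example experiment are made up):

```python
# Illustrative: build the BigQuery destination for one experiment's results.
def experiment_results_table(num_cores: int, experiment: str) -> str:
    return (f"e2e_benchmarks.ci_cxx_experiment_results_{num_cores}core"
            f".{experiment}")

# e.g. experiment_results_table(8, "my_experiment")
# -> "e2e_benchmarks.ci_cxx_experiment_results_8core.my_experiment"
```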
This PR covers C++ only. Other language owners: feel free to ping me if
this would be useful for you.
This allows scenario_config to produce a small superset of the tests
required to generate the performance dashboard
https://grafana-dot-grpc-testing.appspot.com/. This only adds a category
to some existing scenarios and, to my knowledge, should not affect any
current automation that uses scenario_config.
I plan to use this category to run gRPC-core experiments and produce
equivalent dashboards. It seems worth landing this independently of
those job configurations, but I'm flexible.
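To make the change concrete, here is a sketch of what tagging and filtering by category looks like in scenario_config-style code (the category constant and scenario name are illustrative, not the actual additions):

```python
DASHBOARD = "dashboard"  # the new category (name assumed)

# A scenario opts in by listing the category alongside its existing ones.
scenario = {
    "name": "cpp_protobuf_async_streaming_qps_unconstrained_secure",
    "CATEGORIES": ["scalable", DASHBOARD],
}

def scenarios_in_category(scenarios, category):
    """Yield only the scenarios tagged with the requested category."""
    return (s for s in scenarios if category in s.get("CATEGORIES", []))
```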
---------
Co-authored-by: drfloob <drfloob@users.noreply.github.com>
Scenarios for language `node` specify the server language as `node`
(instead of leaving it blank), so the flag `--allow_server_language=node`
must be added.
Scenarios for language `node_purejs` differ in name and in scenario
settings, but otherwise run on identical clients and servers. This
change treats `node_purejs` as `node` for the purpose of generating load
test configurations.
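A minimal sketch of that mapping (assumed helper, not the actual code):

```python
# Treat node_purejs scenarios as node when selecting the workers that run
# them; every other language maps to itself.
_WORKER_LANGUAGE = {"node_purejs": "node"}

def worker_language(scenario_language: str) -> str:
    return _WORKER_LANGUAGE.get(scenario_language, scenario_language)

assert worker_language("node_purejs") == "node"
assert worker_language("node") == "node"
```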
- Switched from yapf to black
- Reconfigure isort for black
- Resolve black/pylint idiosyncrasies
Note: I used `--experimental-string-processing` because black was
producing "implicit string concatenation", similar to what is described
here: https://github.com/psf/black/issues/1837. While this feature is
currently experimental, it will eventually be enabled by default:
https://github.com/psf/black/issues/2188. After running black with the
new string processing so that the generated code merges these `"hello" "
world"` string concatenations, I removed
`--experimental-string-processing` for stability and regenerated the
code again.
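For illustration, the transformation in question looks like this (a made-up snippet, not code from this repository):

```python
# What black emits without the experimental flag: an implicit
# concatenation of adjacent string literals.
message = "hello " "world"

# With --experimental-string-processing, the literals are merged:
message = "hello world"
```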
To the reviewer: don't even try to open the "Files Changed" tab 😄 It's
better to review commit-by-commit and ignore `run black and isort`.
* Add query delay flag to Prometheus query script.
This PR adds a flag that allows the user to configure the delay before
querying Prometheus.
* Update the help message of the flag to be clearer.
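The flag presumably looks something like the following (the flag name and wiring are assumptions, not the script's actual interface):

```python
import argparse
import time

parser = argparse.ArgumentParser()
parser.add_argument(
    "--delay_seconds",
    type=float,
    default=0,
    help="Seconds to wait before querying Prometheus, giving it time "
    "to scrape the final metrics of the test run.",
)
args = parser.parse_args()
time.sleep(args.delay_seconds)  # delay before issuing any queries
```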
* Add enablePrometheus annotation.
The PR adds the enablePrometheus annotation to load tests that are
part of the PSM data plane performance tests. This annotation enables
all PSM-related tests to obtain data from Prometheus, even the
regular tests.
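In the generated configuration this amounts to setting an annotation on the LoadTest resource; a sketch under an assumed config structure:

```python
# Assumed shape of a generated LoadTest config dict: add the
# enablePrometheus annotation so metrics are collected for this test.
def enable_prometheus(config: dict) -> dict:
    annotations = config.setdefault("metadata", {}).setdefault("annotations", {})
    annotations["enablePrometheus"] = "true"
    return config
```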
* Add templates for PSM test.
The commit adds PSM-related templates generated from
loadtest_template.py. This commit also updates
loadtest_example.sh to generate examples based on the templates.
This commit adds the PSM base scenario to each language in
scenario_config.py, allowing per-language generation of PSM
base scenarios. The PSM base scenarios are further updated
in loadtest_config.py with the number of client channels and the
number of async server threads. After these two fields are updated,
scenarios varying offered_load are generated from the base scenarios.
loadtest_config.py is updated to take the number of client channels,
the number of async servers, and a list of targeted offered_load values.
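A sketch of the expansion step described above (the field names follow standard scenario JSON, but the helper itself is an assumption):

```python
import copy

def expand_offered_loads(base_scenario, client_channels,
                         async_server_threads, offered_loads):
    """Generate one scenario per targeted offered_load value."""
    for load in offered_loads:
        s = copy.deepcopy(base_scenario)
        s["client_config"]["client_channels"] = client_channels
        s["server_config"]["async_server_threads"] = async_server_threads
        s["client_config"]["load_params"] = {"poisson": {"offered_load": load}}
        s["name"] = "%s_%s" % (base_scenario["name"], load)
        yield s
```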
* GKE benchmarks: add support for benchmarking grpc-dotnet
* add dotnet to loadtest basic templates
* print full path for generated examples
* add grpc-dotnet scenario to loadtest_example.sh generator
* add grpc-dotnet to the experimental kokoro job
* yapf format code
* Run 2to3 on tools directory
* Delete github_stats_tracking
* Re-run 2to3
* Remove unused script
* Remove unused script
* Remove unused line count utility
* Yapf. Isort
* Remove accidentally included file
* Migrate tools/distrib directory to python 3
* Remove unnecessary shebang
* Restore line_count directory
* Immediately convert subprocess.check_output output to string (see the sketch at the end of this list)
* Take care of Python 2 shebangs
* Invoke scripts using a Python 3 interpreter
* Yapf. Isort
* Try installing Python 3 first
* See if we have any Python 3 versions installed
* Add Python 3.7 to Windows path
* Try adding a symlink
* Try to symlink differently
* Install six for Python 3
* Run run_interop_tests with python 3
* Try installing six in python3.7 explicitly
* Revert "Try installing six in python3.7 explicitly"
This reverts commit 2cf60d72f3.
* And debug some more
* Fix issue with jobset.py
* Add debug for CI failure
* Revert microbenchmark changes
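The check_output change above boils down to the Python 3 bytes/str split; a minimal example:

```python
import subprocess

# In Python 3, check_output returns bytes; decode immediately so the
# rest of the script keeps working with str (or pass text=True instead).
output = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode("utf-8")
print(output.strip())
```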
* make it obvious which gke benchmark script is experimental
* add cfg file for new kokoro jobs with a more fitting name
* update performance readme after renaming _v2 benchmark job
* fix typo
* Reorganize docs for end2end benchmarks
* Update README.md
* Restore heading "gRPC OSS benchmarks" (#27646)
This heading is used in cross-references from other repositories and sites. Also fix formatting.
Co-authored-by: Paulo Castello da Costa <6579971+paulosjca@users.noreply.github.com>