Run example benchmarks v2 test in continuous build. (#25976)

This commit includes the following changes:

1. A new load test template generator (loadtest_template.py) is added. The template generator combines existing configurations or templates for several languages into a single template that can be used to generate configurations for different languages or combinations of languages.

2. A basic template generated from the example tests in grpc/test-infra (loadtest_template_basic_all_languages.yaml) is added.

3. The load test config generator is updated to use the combined template.

4. An example run consisting of a single test (generated from the combined template) is added and set up to run continuously.
Branch: pull/26036/head
Author: Paulo Castello da Costa, committed by GitHub
Parent: 57cb063fb7
Commit: bb418da2b5
Files changed:

1. tools/internal_ci/linux/grpc_e2e_performance_v2.sh (32 lines changed)
2. tools/run_tests/performance/README.md (226 lines changed)
3. tools/run_tests/performance/loadtest_concat_yaml.py (4 lines changed)
4. tools/run_tests/performance/loadtest_config.py (245 lines changed)
5. tools/run_tests/performance/loadtest_template.py (203 lines changed)
6. tools/run_tests/performance/scenario_config.py (7 lines changed)
7. tools/run_tests/performance/scenario_config_exporter.py (74 lines changed)
8. tools/run_tests/performance/templates/loadtest_template_basic_all_languages.yaml (258 lines changed)

@ -19,5 +19,35 @@ cd $(dirname $0)/../../..
source tools/internal_ci/helper_scripts/prepare_build_linux_rc
echo "TODO: Add gRPC OSS Benchmarks here..."
# This is to ensure we can push and pull images from gcr.io. We do not
# necessarily need it to run load tests, but will need it once we start using
# pre-built images as an optimization.
gcloud auth configure-docker
# Connect to benchmarks-prod cluster.
gcloud config set project grpc-testing
gcloud container clusters get-credentials benchmarks-prod \
--zone us-central1-b --project grpc-testing
# This is subject to change. Runs a single test and does not wait for the
# result.
tools/run_tests/performance/loadtest_config.py -l go \
-t ./tools/run_tests/performance/templates/loadtest_template_basic_all_languages.yaml \
-s client_pool=workers-8core -s server_pool=workers-8core \
-s big_query_table=grpc-testing.e2e_benchmarks.experimental_results \
-s timeout_seconds=900 --prefix="kokoro-test" -u "$(date +%Y%m%d%H%M%S)" \
-r go_generic_sync_streaming_ping_pong_secure -o ./loadtest.yaml
# The preinstalled kubectl client is a bit old; update it to release version
# v1.21.0.
kubectl version --client
curl -LO https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
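# Note: the following two lines also overwrite the kubectl binary found first
# in PATH, in case `which kubectl` resolves to a path other than
# /usr/local/bin/kubectl.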
chmod +x kubectl
sudo mv kubectl $(which kubectl)
kubectl version --client
kubectl apply -f ./loadtest.yaml
echo "TODO: Add more gRPC OSS Benchmarks here..."

@ -22,9 +22,9 @@ The [run_performance_test.py](../run_performance_tests.py) top-level runner
script can also be used with remote machines, but for e.g., profiling the
server, it might be useful to run workers manually.
1. You'll need a "driver" and separate "worker" machines. For example, you
might use one GCE "driver" machine and 3 other GCE "worker" machines that are
in the same zone.
1. You'll need a "driver" and separate "worker" machines. For example, you might
use one GCE "driver" machine and 3 other GCE "worker" machines that are in
the same zone.
2. Connect to each worker machine and start up a benchmark worker with a
"driver_port".
@ -45,7 +45,7 @@ server, it might be useful to run workers manually.
- These are simpler since they all live in the main grpc repo.
```shell
```
$ cd <grpc_repo_root>
$ tools/run_tests/performance/build_performance.sh
$ tools/run_tests/performance/run_worker_<language>.sh
@ -58,7 +58,7 @@ $ tools/run_tests/performance/run_worker_<language>.sh
- You'll need the [grpc-java](https://github.com/grpc/grpc-java) repo.
```shell
```
$ cd <grpc-java-repo>
$ ./gradlew -PskipCodegen=true -PskipAndroid=true :grpc-benchmarks:installDist
$ benchmarks/build/install/grpc-benchmarks/bin/benchmark_worker --driver_port <driver_port>
@ -68,7 +68,7 @@ $ benchmarks/build/install/grpc-benchmarks/bin/benchmark_worker --driver_port <d
- You'll need the [grpc-go repo](https://github.com/grpc/grpc-go)
```shell
```
$ cd <grpc-go-repo>/benchmark/worker && go install
$ # if profiling, it might be helpful to turn off inlining by building with "-gcflags=-l"
$ $GOPATH/bin/worker --driver_port <driver_port>
@ -79,7 +79,7 @@ $ $GOPATH/bin/worker --driver_port <driver_port>
- Connect to the driver machine (if using a remote driver) and from the grpc
repo root:
```shell
```
$ tools/run_tests/performance/build_performance.sh
```
@ -89,8 +89,8 @@ $ tools/run_tests/performance/build_performance.sh
json" configs are generated from [scenario_config.py](./scenario_config.py).
The [driver](../../../test/cpp/qps/qps_json_driver.cc) takes a list of these
configs as a json string of the form: `{scenario: <json_list_of_scenarios> }`
in its `--scenarios_json` command argument. One quick way to get a valid json
string to pass to the driver is by running the
[run_performance_tests.py](./run_performance_tests.py) locally and copying
the logged scenario json command arg.
@ -103,7 +103,7 @@ $ tools/run_tests/performance/build_performance.sh
Example running and profiling of go benchmark server:
```shell
```
$ export QPS_WORKERS=<host1>:<10000>,<host2>,10000,<host3>:10000
$ bins/opt/qps_json_driver --scenario_json='<scenario_json_scenario_config_string>'
```
@ -116,7 +116,7 @@ Example to count syscalls in grpc-go server during a benchmark:
- Connect to server machine and run:
```shell
```
$ netstat -tulpn | grep <driver_port> # to get pid of worker
$ perf stat -p <worker_pid> -e syscalls:sys_enter_write # stop after test complete
```
@ -125,7 +125,7 @@ Example memory profile of grpc-go server, with `go tools pprof`:
- After a run is done on the server, see its alloc profile with:
```shell
```
$ go tool pprof --text --alloc_space http://localhost:<pprof_port>/debug/heap
```
@ -173,30 +173,33 @@ the tests. Continuous runs will typically run tests in the `scalable` category.
The following example counts scenarios in the `scalable` category:
```shell
```
$ ./tools/run_tests/performance/scenario_config_exporter.py --count_scenarios --category=scalable
Scenario count for all languages (category: scalable):
Count Language Client Server Categories
77 c++ None None scalable
19 python_asyncio None None scalable
16 java None None scalable
12 go None None scalable
12 node None node scalable
12 node_purejs None node scalable
9 csharp None None scalable
7 python None None scalable
5 ruby None None scalable
4 csharp None c++ scalable
4 php7 None c++ scalable
4 php7_protobuf_c None c++ scalable
3 python_asyncio None c++ scalable
2 ruby None c++ scalable
2 python None c++ scalable
1 csharp c++ None scalable
77 c++ scalable
19 python_asyncio scalable
16 java scalable
12 go scalable
12 node node scalable
12 node_purejs node scalable
9 csharp scalable
7 python scalable
5 ruby scalable
4 csharp c++ scalable
4 php7 c++ scalable
4 php7_protobuf_c c++ scalable
3 python_asyncio c++ scalable
2 ruby c++ scalable
2 python c++ scalable
1 csharp c++ scalable
189 total scenarios (category: scalable)
```
Client and server languages are only set for cross-language scenarios, where the
client or server language does not match the scenario language.
### Generating load test configurations
The benchmarks framework uses LoadTest resources configured by YAML files. Each
@ -209,29 +212,172 @@ https://github.com/grpc/test-infra/tree/master/config/samples
The script [loadtest_config.py](./loadtest_config.py) generates LoadTest
configurations for tests running a set of scenarios. The configurations are
written in multipart YAML format, either to a file or to stdout.
written in multipart YAML format, either to a file or to stdout. Each
configuration contains a single embedded scenario.
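The multipart stream is produced with PyYAML's `dump_all` (see the
`yaml.dump_all` call in `loadtest_config.py` further down in this diff). A
minimal Python sketch of what that output format amounts to, with placeholder
data:

```
import sys

import yaml

# Placeholder LoadTest configurations; real configurations carry a full spec
# with clients, servers and an embedded scenario.
configs = [
    {
        'apiVersion': 'e2etest.grpc.io/v1',
        'kind': 'LoadTest',
        'metadata': {'name': 'example-config-%d' % i},
    } for i in range(2)
]

# dump_all writes one YAML document per configuration, separated by '---'.
yaml.dump_all(configs, stream=sys.stdout)
```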
The LoadTest configurations are generated from a template. The example
configurations above can be used as templates.
The LoadTest configurations are generated from a template. Any configuration can
be used as a template, as long as it contains the languages required by the set
of scenarios we intend to run (for instance, if we are generating configurations
to run go scenarios, the template must contain a go client and a go server; if
we are generating configurations for cross-language scenarios that need a go
client and a C++ server, the template must also contain a C++ server; and the
same for all other languages).
A template does not need to contain any substitution keys; any load test configuration can be used as a template, provided it contains the client and server languages required by the set of scenarios we intend to run.
The LoadTests specified in the script output all have unique names and can be
run by applying the test to a cluster running the LoadTest controller with
`kubectl apply`:
```shell
```
$ kubectl apply -f loadtest_config.yaml
```
<!-- TODO(paulosjca): add more details on scripts and running tests. -->
A basic template for generating tests in various languages can be found here:
[loadtest_template_basic_all_languages.yaml](./templates/loadtest_template_basic_all_languages.yaml).
The following example generates configurations for Go and Java tests using this
template, including tests against C++ clients and servers, and running each test
twice:
```
$ ./tools/run_tests/performance/loadtest_config.py -l go -l java \
-t ./tools/run_tests/performance/templates/loadtest_template_basic_all_languages.yaml \
-s client_pool=workers-8core -s server_pool=workers-8core \
-s big_query_table=grpc-testing.e2e_benchmarks.experimental_results \
-s timeout_seconds=3600 --category=scalable \
-d --allow_client_language=c++ --allow_server_language=c++ \
--runs_per_test=2 -o ./loadtest.yaml
```
The script `loadtest_config.py` takes the following options:
- `-l`, `--language`<br> Language to benchmark. May be repeated.
- `-t`, `--template`<br> Template file. A template is a configuration file that
may contain multiple client and server configuration, and may also include
substitution keys.
- `-p`, `--prefix`<br> Test names consist of a prefix joined with a uuid by a
dash. Test names are stored in `metadata.name`. The prefix is also added as the
`prefix` label in `metadata.labels`. The prefix defaults to the user name if not
set.
- `-u`, `--uniquifier_element`<br> Uniquifier elements may be passed to the test
to make the test name unique. This option may be repeated to add multiple
elements. The uniquifier elements (plus a date string and a run index, if
applicable) are joined with a dash to form a _uniquifier_. The test name uuid
is derived from the scenario name and the uniquifier (see the name-derivation
sketch after this option list). The uniquifier is also added as the
`uniquifier` annotation in `metadata.annotations`.
- `-d`<br> This option is a shorthand for the addition of a date string as a
uniquifier element.
- `-a`, `--annotation`<br> Metadata annotation to be stored in
`metadata.annotations`, in the form key=value. May be repeated.
- `-r`, `--regex`<br> Regex to select scenarios to run. Each scenario is
embedded in a LoadTest configuration containing a client and server of the
language(s) required for the test. Defaults to `.*`, i.e., select all
scenarios.
- `--category`<br> Select scenarios of a specified _category_, or of all
categories. Defaults to `all`. Continuous runs typically run tests in the
`scalable` category.
- `--allow_client_language`<br> Allows cross-language scenarios where the client
is of a specified language, different from the scenario language. This is
typically `c++`. This flag may be repeated.
- `--allow_server_language`<br> Allows cross-language scenarios where the server
is of a specified language, different from the scenario language. This is
typically `node` or `c++`. This flag may be repeated.
- `--runs_per_test`<br> This option specifies that each test should be repeated
`n` times, where `n` is the value of the flag. If `n` > 1, the index of each
test run is added as a uniquifier element for that run.
- `-o`, `--output`<br> Output file name. The LoadTest configurations are added
to this file, in multipart YAML format. Output is streamed to `sys.stdout` if
not set.
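To make the naming rules above concrete, the following sketch mirrors the name
derivation performed by `loadtest_name` in `loadtest_config.py` (shown later in
this diff); the input values are placeholders:

```
import uuid

# Placeholder inputs.
prefix = 'kokoro-test'
scenario_name = 'go_generic_sync_streaming_ping_pong_secure'
uniquifier_elements = ['20210101120000']  # e.g. the date string added by -d

# Scenario name elements and uniquifier elements are joined with dashes to
# form the base name, which seeds a deterministic uuid5.
base_name = '-'.join(scenario_name.split('_') + uniquifier_elements)

# The final name is the prefix joined with the uuid by a dash, and is stored
# in metadata.name.
name = '-'.join([prefix, str(uuid.uuid5(uuid.NAMESPACE_DNS, base_name))])
print(name)
```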
The script adds labels and annotations to the metadata of each LoadTest
configuration:
The following labels are added to `metadata.labels`:
- `language`<br> The language of the LoadTest scenario.
- `prefix`<br> The prefix used in `metadata.name`.
The following annotations are added to `metadata.annotations`:
- `scenario`<br> The name of the LoadTest scenario.
- `uniquifier`<br> The uniquifier used to generate the LoadTest name, including
the run index if applicable.
[Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
can be used in selectors in resource queries. Adding the prefix, in particular,
allows the user (or an automation script) to select the resources started from a
given run of the config generator.
[Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
contain additional information that is available to the user (or an automation
script) but is not indexed and cannot be used to select objects. Scenario name
and uniquifier are added to provide the elements of the LoadTest name uuid in
human-readable form. Additional annotations may be added later for automation.
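Concretely, the generator builds this metadata as a Python dictionary before
dumping it to YAML (see `gen_loadtest_configs` later in this diff). A sketch of
the result for one test, with placeholder values:

```
# Placeholder values; the uuid is derived as described above.
metadata = {
    'name': 'kokoro-test-<uuid>',
    'labels': {
        'language': 'go',          # language of the LoadTest scenario
        'prefix': 'kokoro-test',   # prefix used in metadata.name
    },
    'annotations': {
        'scenario': 'go_generic_sync_streaming_ping_pong_secure',
        'uniquifier': '20210101120000-0',  # includes run index if applicable
    },
}
```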
### Concatenating load test configurations
The LoadTest configuration generator can process multiple languages at a time,
assuming that they are supported by the template. The convenience script
[loadtest_concat_yaml.py](./loadtest_concat_yaml.py) is provided to concatenate
several YAML files into one, so configurations generated by multiple generator
invocations can be concatenated into one and run with a single command. The
script can be invoked as follows:
```shell
```
$ loadtest_concat_yaml.py -i infile1.yaml infile2.yaml -o outfile.yaml
```
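Since the input files are treated as plain strings rather than parsed YAML (see
the header comment of `loadtest_concat_yaml.py` later in this diff), the
concatenation is essentially a string join with YAML document separators. A
minimal sketch, with placeholder file names:

```
# Placeholder file names.
input_files = ['infile1.yaml', 'infile2.yaml']

parts = []
for file_name in input_files:
    with open(file_name) as f:
        parts.append(f.read().strip())

# '---' starts a new document in a multipart YAML stream.
with open('outfile.yaml', 'w') as out:
    out.write('\n---\n'.join(parts) + '\n')
```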
### Generating configuration templates
The script [loadtest_template.py](./loadtest_template.py) generates a load test
configuration template from a set of load test configurations. The source files
may be load test configurations or load test configuration templates. The
generated template supports all languages supported in any of the input
configurations or templates.
The example template in
[loadtest_template_basic_all_languages.yaml](./templates/loadtest_template_basic_all_languages.yaml)
was generated from the example configurations in
[grpc/test-infra](https://github.com/grpc/test-infra) by the following command:
```
$ ./tools/run_tests/performance/loadtest_template.py \
-i ../test-infra/config/samples/*.yaml \
--inject_client_pool --inject_server_pool --inject_big_query_table \
--inject_timeout_seconds \
-o ./tools/run_tests/performance/templates/loadtest_template_basic_all_languages.yaml \
--name basic_all_languages
```
The script `loadtest_template.py` takes the following options:
- `-i`, `--inputs`<br> Space-separated list of the names of input files
containing LoadTest configurations. May be repeated.
- `-o`, `--output`<br> Output file name. Outputs to `sys.stdout` if not set.
- `--inject_client_pool`<br> If this option is set, the pool attribute of all
clients in `spec.clients` is set to `${client_pool}`, for later substitution.
- `--inject_server_pool`<br> If this option is set, the pool attribute of all
servers in `spec.servers` is set to `${server_pool}`, for later substitution.
- `--inject_big_query_table`<br> If this option is set, `spec.bigQueryTable` is
set to `${big_query_table}`.
- `--inject_timeout_seconds`<br> If this option is set, `spec.timeoutSeconds` is
set to `${timeout_seconds}`.
- `--inject_ttl_seconds`<br> If this option is set, `spec.ttlSeconds` is set to
`${ttl_seconds}`.
- `-n`, `--name`<br> Name to be set in `metadata.name`.
- `-a`, `--annotation`<br> Metadata annotation to be stored in
`metadata.annotations`, in the form key=value. May be repeated.
The options that inject substitution keys are the most useful for template
reuse. When running tests on different node pools, it becomes necessary to set
the pool, and usually also to store the data in a different table. When running
as part of a larger collection of tests, it may also be necessary to adjust the
test timeout and time-to-live, to ensure that all tests have time to complete.
The template name is overwritten by `loadtest_config.py` when configurations are
generated, so it serves only as a human-readable memo.

Annotations, on the other hand, are passed on to the test configurations, and
may be set to values or to substitution keys themselves, allowing future
automation scripts to process the tests generated from these configurations in
different ways.
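The injected substitution keys are ordinary `string.Template` placeholders;
`loadtest_config.py` fills them from its `-s key=value` arguments (see the
`string.Template(...).substitute(...)` call in that script below). A small
sketch of the mechanism, with placeholder values:

```
import string

# Fragment of a template containing injected substitution keys, as produced
# by loadtest_template.py.
template_text = '\n'.join([
    'pool: ${client_pool}',
    'big_query_table: ${big_query_table}',
    'timeoutSeconds: ${timeout_seconds}',
])

# Values that would normally come from -s key=value command line options.
substitutions = {
    'client_pool': 'workers-8core',
    'big_query_table': 'grpc-testing.e2e_benchmarks.experimental_results',
    'timeout_seconds': '900',
}

print(string.Template(template_text).substitute(substitutions))
```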

@ -17,8 +17,8 @@
#
# This script concatenates multiple YAML files into a single multipart file.
# Input files are not parsed but processed as strings. This is a convenience
# script to concatenate the output files generated by loadtest_config.py for
# each individual language.
# script to concatenate output files generated by multiple runs of
# loadtest_config.py.
import argparse
import sys

@ -17,23 +17,44 @@
#
# This script filters test scenarios and generates uniquely named configurations
# for each test. Configurations are dumped in multipart YAML format.
#
# See documentation below:
# https://github.com/grpc/grpc/blob/master/tools/run_tests/performance/README.md#grpc-oss-benchmarks
import argparse
import copy
import datetime
import json
import itertools
import os
import string
import sys
import uuid
from typing import Any, Dict, Iterable, List, Mapping, Optional
from typing import Any, Dict, Iterable, Mapping, Optional, Type
import json
import yaml
import scenario_config
import scenario_config_exporter
CONFIGURATION_FILE_HEADER_COMMENT = """
# Load test configurations generated from a template by loadtest_config.py.
# See documentation below:
# https://github.com/grpc/grpc/blob/master/tools/run_tests/performance/README.md#grpc-oss-benchmarks
"""
def image_language(language: str) -> str:
"""Convert scenario languages to image languages."""
return {
'c++': 'cxx',
'node_purejs': 'node',
'php7': 'php',
'php7_protobuf_c': 'php',
'python_asyncio': 'python',
}.get(language, language)
def default_prefix() -> str:
"""Constructs and returns a default prefix for LoadTest names."""
@ -41,6 +62,7 @@ def default_prefix() -> str:
def now_string() -> str:
"""Returns the current date and time in string format."""
return datetime.datetime.now().strftime('%Y%m%d%H%M%S')
@ -53,22 +75,23 @@ def validate_loadtest_name(name: str) -> None:
raise ValueError('Invalid elements in LoadTest name: %s' % name)
def loadtest_base_name(scenario_name: str, uniquifiers: Iterable[str]) -> str:
def loadtest_base_name(scenario_name: str,
uniquifier_elements: Iterable[str]) -> str:
"""Constructs and returns the base name for a LoadTest resource."""
elements = scenario_name.split('_')
elements.extend(uniquifiers)
return '-'.join(elements)
name_elements = scenario_name.split('_')
name_elements.extend(uniquifier_elements)
return '-'.join(name_elements)
def loadtest_name(prefix: str, scenario_name: str,
uniquifiers: Iterable[str]) -> str:
uniquifier_elements: Iterable[str]) -> str:
"""Constructs and returns a valid name for a LoadTest resource."""
base_name = loadtest_base_name(scenario_name, uniquifiers)
elements = []
base_name = loadtest_base_name(scenario_name, uniquifier_elements)
name_elements = []
if prefix:
elements.append(prefix)
elements.append(str(uuid.uuid5(uuid.NAMESPACE_DNS, base_name)))
name = '-'.join(elements)
name_elements.append(prefix)
name_elements.append(str(uuid.uuid5(uuid.NAMESPACE_DNS, base_name)))
name = '-'.join(name_elements)
validate_loadtest_name(name)
return name
@ -78,7 +101,7 @@ def validate_annotations(annotations: Dict[str, str]) -> None:
These names are automatically added by the config generator.
"""
names = set(('scenario', 'uniquifiers')).intersection(annotations)
names = set(('scenario', 'uniquifier')).intersection(annotations)
if names:
raise ValueError('Annotations contain reserved names: %s' % names)
@ -94,35 +117,78 @@ def gen_run_indices(runs_per_test: int) -> Iterable[str]:
yield prefix_fmt.format(i)
def gen_loadtest_configs(base_config: yaml.YAMLObject,
scenarios: Iterable[Mapping[str, Any]],
loadtest_name_prefix: str,
uniquifiers: Iterable[str],
annotations: Mapping[str, str],
runs_per_test: int = 1) -> Iterable[yaml.YAMLObject]:
"""Generates LoadTest configurations as YAML objects."""
validate_annotations(annotations),
def gen_loadtest_configs(
base_config: Mapping[str, Any],
base_config_clients: Iterable[Mapping[str, Any]],
base_config_servers: Iterable[Mapping[str, Any]],
scenario_name_regex: str,
language_config: scenario_config_exporter.LanguageConfig,
loadtest_name_prefix: str,
uniquifier_elements: Iterable[str],
annotations: Mapping[str, str],
runs_per_test: int = 1) -> Iterable[Dict[str, Any]]:
"""Generates LoadTest configurations for a given language config.
The LoadTest configurations are generated as YAML objects.
"""
validate_annotations(annotations)
prefix = loadtest_name_prefix or default_prefix()
cl = image_language(language_config.client_language or
language_config.language)
sl = image_language(language_config.server_language or
language_config.language)
scenario_filter = scenario_config_exporter.scenario_filter(
scenario_name_regex=scenario_name_regex,
category=language_config.category,
client_language=language_config.client_language,
server_language=language_config.server_language)
scenarios = scenario_config_exporter.gen_scenarios(language_config.language,
scenario_filter)
for scenario in scenarios:
for run_index in gen_run_indices(runs_per_test):
uniq = uniquifiers + [run_index] if run_index else uniquifiers
uniq = (uniquifier_elements +
[run_index] if run_index else uniquifier_elements)
name = loadtest_name(prefix, scenario['name'], uniq)
scenario_str = json.dumps({'scenarios': scenario}, indent=' ')
config = copy.deepcopy(base_config)
metadata = config['metadata']
metadata['name'] = name
if 'labels' not in metadata:
metadata['labels'] = dict()
metadata['labels']['language'] = language_config.language
metadata['labels']['prefix'] = prefix
if 'annotations' not in metadata:
metadata['annotations'] = dict()
metadata['annotations'].update(annotations)
metadata['annotations'].update({
'scenario': scenario['name'],
'uniquifiers': uniq,
'uniquifier': '-'.join(uniq),
})
config['spec']['scenariosJSON'] = scenario_str
spec = config['spec']
# Select clients with the required language.
spec['clients'] = [
client for client in base_config_clients
if client['language'] == cl
]
if not spec['clients']:
raise IndexError('Client language not found in template: %s' %
cl)
# Select servers with the required language.
spec['servers'] = [
server for server in base_config_servers
if server['language'] == sl
]
if not spec['servers']:
raise IndexError('Server language not found in template: %s' %
sl)
spec['scenariosJSON'] = scenario_str
yield config
@ -140,8 +206,16 @@ def parse_key_value_args(args: Optional[Iterable[str]]) -> Dict[str, str]:
return d
def configure_yaml() -> None:
"""Configures the YAML library to dump data in the expected format."""
def config_dumper(header_comment: str) -> Type[yaml.Dumper]:
"""Returns a custom dumper to dump configurations in the expected format."""
class ConfigDumper(yaml.Dumper):
def expect_stream_start(self):
super().expect_stream_start()
if isinstance(self.event, yaml.StreamStartEvent):
self.write_indent()
self.write_indicator(header_comment, need_whitespace=False)
def str_presenter(dumper, data):
if '\n' in data:
@ -150,55 +224,55 @@ def configure_yaml() -> None:
style='|')
return dumper.represent_scalar('tag:yaml.org,2002:str', data)
yaml.add_representer(str, str_presenter)
ConfigDumper.add_representer(str, str_presenter)
return ConfigDumper
def main() -> None:
language_choices = sorted(scenario_config.LANGUAGES.keys())
argp = argparse.ArgumentParser(description='Generates load test configs.')
argp = argparse.ArgumentParser(
description='Generates load test configs from a template.',
fromfile_prefix_chars='@')
argp.add_argument('-l',
'--language',
action='append',
choices=language_choices,
required=True,
help='Language to benchmark.')
help='Language(s) to benchmark.',
dest='languages')
argp.add_argument('-t',
'--template',
type=str,
required=True,
help='LoadTest configuration yaml file template.')
argp.add_argument('-s',
'--substitutions',
action='extend',
nargs='+',
'--substitution',
action='append',
default=[],
type=str,
help='Template substitutions, in the form key=value.')
help='Template substitution(s), in the form key=value.',
dest='substitutions')
argp.add_argument('-p',
'--prefix',
default='',
type=str,
help='Test name prefix.')
argp.add_argument('-u',
'--uniquifiers',
action='extend',
nargs='+',
'--uniquifier_element',
action='append',
default=[],
type=str,
help='One or more strings to make the test name unique.')
help='String element(s) to make the test name unique.',
dest='uniquifier_elements')
argp.add_argument(
'-d',
nargs='?',
const=True,
default=False,
type=bool,
help='Use creation date and time as an addditional uniquifier.')
action='store_true',
help='Use creation date and time as an additional uniquifier element.')
argp.add_argument('-a',
'--annotations',
action='extend',
nargs='+',
'--annotation',
action='append',
default=[],
type=str,
help='Test annotations, in the form key=value.')
help='metadata.annotation(s), in the form key=value.',
dest='annotations')
argp.add_argument('-r',
'--regex',
default='.*',
@ -210,13 +284,19 @@ def main() -> None:
default='all',
help='Select a category of tests to run.')
argp.add_argument(
'--client_language',
'--allow_client_language',
action='append',
choices=language_choices,
help='Select only scenarios with a specified client language.')
default=[],
help='Allow cross-language scenarios with this client language.',
dest='allow_client_languages')
argp.add_argument(
'--server_language',
'--allow_server_language',
action='append',
choices=language_choices,
help='Select only scenarios with a specified server language.')
default=[],
help='Allow cross-language scenarios with this server language.',
dest='allow_server_languages')
argp.add_argument('--runs_per_test',
default=1,
type=int,
@ -229,36 +309,49 @@ def main() -> None:
substitutions = parse_key_value_args(args.substitutions)
with open(args.template) as f:
base_config = yaml.safe_load(
string.Template(f.read()).substitute(substitutions))
scenario_filter = scenario_config_exporter.scenario_filter(
scenario_name_regex=args.regex,
category=args.category,
client_language=args.client_language,
server_language=args.server_language)
scenarios = scenario_config_exporter.gen_scenarios(args.language,
scenario_filter)
uniquifiers = args.uniquifiers
uniquifier_elements = args.uniquifier_elements
if args.d:
uniquifiers.append(now_string())
uniquifier_elements.append(now_string())
annotations = parse_key_value_args(args.annotations)
configs = gen_loadtest_configs(base_config,
scenarios,
loadtest_name_prefix=args.prefix,
uniquifiers=uniquifiers,
annotations=annotations,
runs_per_test=args.runs_per_test)
with open(args.template) as f:
base_config = yaml.safe_load(
string.Template(f.read()).substitute(substitutions))
configure_yaml()
spec = base_config['spec']
base_config_clients = spec['clients']
del spec['clients']
base_config_servers = spec['servers']
del spec['servers']
client_languages = [''] + args.allow_client_languages
server_languages = [''] + args.allow_server_languages
config_generators = []
for l, cl, sl in itertools.product(args.languages, client_languages,
server_languages):
language_config = scenario_config_exporter.LanguageConfig(
category=args.category,
language=l,
client_language=cl,
server_language=sl)
config_generators.append(
gen_loadtest_configs(base_config,
base_config_clients,
base_config_servers,
args.regex,
language_config,
loadtest_name_prefix=args.prefix,
uniquifier_elements=uniquifier_elements,
annotations=annotations,
runs_per_test=args.runs_per_test))
configs = (config for config in itertools.chain(*config_generators))
with open(args.output, 'w') if args.output else sys.stdout as f:
yaml.dump_all(configs, stream=f)
yaml.dump_all(configs,
stream=f,
Dumper=config_dumper(
CONFIGURATION_FILE_HEADER_COMMENT.strip()))
if __name__ == '__main__':

@ -0,0 +1,203 @@
#!/usr/bin/env python3
# Copyright 2021 The gRPC Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script generates a load test configuration template from a collection of
# load test configurations.
#
# Configuration templates contain client and server configurations for multiple
# languages, and may contain template substitution keys. These templates are
# used to generate load test configurations by selecting clients and servers for
# the required languages. The source files for template generation may be load
# test configurations or load test configuration templates. Load test
# configuration generation is performed by loadtest_config.py. See documentation
# below:
# https://github.com/grpc/grpc/blob/master/tools/run_tests/performance/README.md
import argparse
import sys
from typing import Any, Dict, Iterable, Mapping, Type
import yaml
import loadtest_config
TEMPLATE_FILE_HEADER_COMMENT = """
# Template generated from load test configurations by loadtest_template.py.
#
# Configuration templates contain client and server configurations for multiple
# languages, and may contain template substitution keys. These templates are
# used to generate load test configurations by selecting clients and servers for
# the required languages. The source files for template generation may be load
# test configurations or load test configuration templates. Load test
# configuration generation is performed by loadtest_config.py. See documentation
# below:
# https://github.com/grpc/grpc/blob/master/tools/run_tests/performance/README.md
"""
def loadtest_template(
input_file_names: Iterable[str],
metadata: Mapping[str, Any],
inject_client_pool: bool,
inject_server_pool: bool,
inject_big_query_table: bool,
inject_timeout_seconds: bool,
inject_ttl_seconds: bool) -> Dict[str, Any]: # yapf: disable
"""Generates the load test template."""
clients = list()
servers = list()
spec = dict()
client_languages = set()
server_languages = set()
template = {
'apiVersion': 'e2etest.grpc.io/v1',
'kind': 'LoadTest',
'metadata': metadata,
}
for input_file_name in input_file_names:
with open(input_file_name) as f:
input_config = yaml.safe_load(f.read())
if input_config.get('apiVersion') != template['apiVersion']:
raise ValueError('Unexpected api version in file {}: {}'.format(
input_file_name, input_config.get('apiVersion')))
if input_config.get('kind') != template['kind']:
raise ValueError('Unexpected kind in file {}: {}'.format(
input_file_name, input_config.get('kind')))
for client in input_config['spec']['clients']:
if client['language'] in client_languages:
continue
if inject_client_pool:
client['pool'] = '${client_pool}'
clients.append(client)
client_languages.add(client['language'])
for server in input_config['spec']['servers']:
if server['language'] in server_languages:
continue
if inject_server_pool:
server['pool'] = '${server_pool}'
servers.append(server)
server_languages.add(server['language'])
input_spec = input_config['spec']
del input_spec['clients']
del input_spec['servers']
del input_spec['scenariosJSON']
spec.update(input_config['spec'])
clients.sort(key=lambda x: x['language'])
servers.sort(key=lambda x: x['language'])
spec.update({
'clients': clients,
'servers': servers,
})
if inject_big_query_table:
spec['big_query_table'] = '${big_query_table}'
if inject_timeout_seconds:
spec['timeoutSeconds'] = '${timeout_seconds}'
if inject_ttl_seconds:
spec['ttlSeconds'] = '${ttl_seconds}'
template['spec'] = spec
return template
def template_dumper(header_comment: str) -> Type[yaml.Dumper]:
"""Returns a custom dumper to dump templates in the expected format."""
class TemplateDumper(yaml.Dumper):
def expect_stream_start(self):
super().expect_stream_start()
if isinstance(self.event, yaml.StreamStartEvent):
self.write_indent()
self.write_indicator(header_comment, need_whitespace=False)
return TemplateDumper
def main() -> None:
argp = argparse.ArgumentParser(
description='Creates a load test config generator template.',
fromfile_prefix_chars='@')
argp.add_argument('-i',
'--inputs',
action='extend',
nargs='+',
type=str,
help='Input files.')
argp.add_argument('-o',
'--output',
type=str,
help='Output file. Outputs to stdout if not set.')
argp.add_argument(
'--inject_client_pool',
action='store_true',
help='Set spec.client(s).pool values to \'${client_pool}\'.')
argp.add_argument(
'--inject_server_pool',
action='store_true',
help='Set spec.server(s).pool values to \'${server_pool}\'.')
argp.add_argument('--inject_big_query_table',
action='store_true',
help='Set spec.bigQueryTable to \'${big_query_table}\'.')
argp.add_argument('--inject_timeout_seconds',
action='store_true',
help='Set spec.timeoutSeconds to \'${timeout_seconds}\'.')
argp.add_argument('--inject_ttl_seconds',
action='store_true',
help='Set spec.ttlSeconds to \'${ttl_seconds}\'.')
argp.add_argument('-n',
'--name',
default='',
type=str,
help='metadata.name.')
argp.add_argument('-a',
'--annotation',
action='append',
type=str,
help='metadata.annotation(s), in the form key=value.',
dest='annotations')
args = argp.parse_args()
annotations = loadtest_config.parse_key_value_args(args.annotations)
metadata = {'name': args.name}
if annotations:
metadata['annotations'] = annotations
template = loadtest_template(
input_file_names=args.inputs,
metadata=metadata,
inject_client_pool=args.inject_client_pool,
inject_server_pool=args.inject_server_pool,
inject_big_query_table=args.inject_big_query_table,
inject_timeout_seconds=args.inject_timeout_seconds,
inject_ttl_seconds=args.inject_ttl_seconds)
with open(args.output, 'w') if args.output else sys.stdout as f:
yaml.dump(template,
stream=f,
Dumper=template_dumper(TEMPLATE_FILE_HEADER_COMMENT.strip()))
if __name__ == '__main__':
main()

@ -53,8 +53,11 @@ def _get_secargs(is_secure):
def remove_nonproto_fields(scenario):
"""Remove special-purpose that contains some extra info about the scenario
but don't belong to the ScenarioConfig protobuf message"""
"""Removes special-purpose fields that don't belong in the protobuf.
This function removes additional information about the scenario that is not
included in the ScenarioConfig protobuf message.
"""
scenario.pop('CATEGORIES', None)
scenario.pop('CLIENT_LANGUAGE', None)
scenario.pop('SERVER_LANGUAGE', None)

@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Library to extract scenario definitions from scenario_config.py
# Library to extract scenario definitions from scenario_config.py.
#
# Contains functions to filter, analyze and dump scenario definitions.
#
@ -22,8 +22,9 @@
# field in the format accepted by the OSS benchmarks framework.
# See https://github.com/grpc/test-infra/blob/master/config/samples/cxx_example_loadtest.yaml
#
# It can also be used to dump scenarios to files, and to count scenarios by
# language.
# It can also be used to dump scenarios to files, to count scenarios by
# language, and to export scenario languages in a format that can be used for
# automation.
#
# Example usage:
#
@ -31,6 +32,10 @@
# --category=scalable
#
# scenario_config.py --count_scenarios
#
# scenario_config.py --count_scenarios --category=scalable
#
# For usage of the language config output, see loadtest_config.py.
import argparse
import collections
@ -38,10 +43,21 @@ import json
import re
import sys
from typing import Any, Callable, Dict, Iterable, Optional, Tuple
from typing import Any, Callable, Dict, Iterable, NamedTuple
import scenario_config
# Language parameters for load test config generation.
LanguageConfig = NamedTuple('LanguageConfig', [('category', str),
('language', str),
('client_language', str),
('server_language', str)])
def as_dict_no_empty_values(self):
"""Returns the parameters as a dictionary, ignoring empty values."""
return dict((item for item in self._asdict().items() if item[1]))
def category_string(categories: Iterable[str], category: str) -> str:
"""Converts a list of categories into a single string for counting."""
@ -57,25 +73,27 @@ def category_string(categories: Iterable[str], category: str) -> str:
return ' '.join(c)
def gen_scenario_languages(
category: str) -> Iterable[Tuple[str, str, str, str]]:
def gen_scenario_languages(category: str) -> Iterable[LanguageConfig]:
"""Generates tuples containing the languages specified in each scenario."""
for language in scenario_config.LANGUAGES:
for scenario in scenario_config.LANGUAGES[language].scenarios():
client_language = scenario.get('CLIENT_LANGUAGE')
server_language = scenario.get('SERVER_LANGUAGE')
client_language = scenario.get('CLIENT_LANGUAGE', '')
server_language = scenario.get('SERVER_LANGUAGE', '')
categories = scenario.get('CATEGORIES', [])
if category != 'all' and category not in categories:
continue
yield (language, client_language, server_language,
category_string(categories, category))
cat = category_string(categories, category)
yield LanguageConfig(category=cat,
language=language,
client_language=client_language,
server_language=server_language)
def scenario_filter(
scenario_name_regex: str = '.*',
category: str = 'all',
client_language: Optional[str] = None,
server_language: Optional[str] = None
scenario_name_regex: str = '.*',
category: str = 'all',
client_language: str = '',
server_language: str = '',
) -> Callable[[Dict[str, Any]], bool]:
"""Returns a function to filter scenarios to process."""
@ -91,15 +109,13 @@ def scenario_filter(
if category not in scenario_categories and category != 'all':
return False
scenario_client_language = scenario.get('CLIENT_LANGUAGE')
scenario_client_language = scenario.get('CLIENT_LANGUAGE', '')
if client_language != scenario_client_language:
if scenario_client_language:
return False
return False
scenario_server_language = scenario.get('SERVER_LANGUAGE')
scenario_server_language = scenario.get('SERVER_LANGUAGE', '')
if server_language != scenario_server_language:
if scenario_client_language:
return False
return False
return True
@ -136,16 +152,10 @@ def main() -> None:
language_choices = sorted(scenario_config.LANGUAGES.keys())
argp = argparse.ArgumentParser(description='Exports scenarios to files.')
argp.add_argument('--export_scenarios',
nargs='?',
const=True,
default=False,
type=bool,
action='store_true',
help='Export scenarios to JSON files.')
argp.add_argument('--count_scenarios',
nargs='?',
const=True,
default=False,
type=bool,
action='store_true',
help='Count scenarios for all test languages.')
argp.add_argument('-l',
'--language',
@ -168,10 +178,12 @@ def main() -> None:
help='Select scenarios for a category of tests.')
argp.add_argument(
'--client_language',
default='',
choices=language_choices,
help='Select only scenarios with a specified client language.')
argp.add_argument(
'--server_language',
default='',
choices=language_choices,
help='Select only scenarios with a specified server language.')
args = argp.parse_args()
@ -197,10 +209,10 @@ def main() -> None:
'Server', 'Categories'))
c = collections.Counter(gen_scenario_languages(args.category))
total = 0
for ((l, cl, sl, cat), count) in c.most_common():
for ((cat, l, cl, sl), count) in c.most_common():
print('{count:5} {l:16} {cl:8} {sl:8} {cat}'.format(l=l,
cl=str(cl),
sl=str(sl),
cl=cl,
sl=sl,
count=count,
cat=cat))
total += count

@ -0,0 +1,258 @@
# Template generated from load test configurations by loadtest_template.py.
#
# Configuration templates contain client and server configurations for multiple
# languages, and may contain template substitution keys. These templates are
# used to generate load test configurations by selecting clients and servers for
# the required languages. The source files for template generation may be load
# test configurations or load test configuration templates. Load test
# configuration generation is performed by loadtest_config.py. See documentation
# below:
# https://github.com/grpc/grpc/blob/master/tools/run_tests/performance/README.md
apiVersion: e2etest.grpc.io/v1
kind: LoadTest
metadata:
name: basic_all_languages
spec:
big_query_table: ${big_query_table}
clients:
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: csharp
pool: ${client_pool}
run:
args:
- exec
- qps_worker/Grpc.IntegrationTesting.QpsWorker.dll
command:
- dotnet
- build:
args:
- build
- //test/cpp/qps:qps_worker
command:
- bazel
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: cxx
pool: ${client_pool}
run:
command:
- bazel-bin/test/cpp/qps/qps_worker
- build:
args:
- build
- -o
- /src/workspace/bin/worker
- ./benchmark/worker
command:
- go
clone:
gitRef: master
repo: https://github.com/grpc/grpc-go.git
language: go
pool: ${client_pool}
run:
command:
- /src/workspace/bin/worker
- build:
args:
- -PskipAndroid=true
- -PskipCodegen=true
- :grpc-benchmarks:installDist
command:
- gradle
clone:
gitRef: master
repo: https://github.com/grpc/grpc-java.git
language: java
pool: ${client_pool}
run:
command:
- benchmarks/build/install/grpc-benchmarks/bin/benchmark_worker
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc-node.git
language: node
pool: ${client_pool}
run:
args:
- -r
- ./test/fixtures/native_native.js
- test/performance/worker.js
- --benchmark_impl=grpc
command:
- node
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: php
pool: ${client_pool}
run:
command:
- bash
- /run_scripts/run_worker.sh
- build:
args:
- build
- //src/python/grpcio_tests/tests/qps:qps_worker
command:
- bazel
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: python
pool: ${client_pool}
run:
command:
- bazel-bin/src/python/grpcio_tests/tests/qps/qps_worker
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc
language: ruby
pool: ${client_pool}
run:
args:
- src/ruby/qps/worker.rb
command:
- ruby
servers:
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: csharp
pool: ${server_pool}
run:
args:
- exec
- qps_worker/Grpc.IntegrationTesting.QpsWorker.dll
command:
- dotnet
- build:
args:
- build
- //test/cpp/qps:qps_worker
command:
- bazel
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: cxx
pool: ${server_pool}
run:
args:
- --server_port=10010
command:
- bazel-bin/test/cpp/qps/qps_worker
- build:
args:
- build
- -o
- /src/workspace/bin/worker
- ./benchmark/worker
command:
- go
clone:
gitRef: master
repo: https://github.com/grpc/grpc-go.git
language: go
pool: ${server_pool}
run:
command:
- /src/workspace/bin/worker
- build:
args:
- -PskipAndroid=true
- -PskipCodegen=true
- :grpc-benchmarks:installDist
command:
- gradle
clone:
gitRef: master
repo: https://github.com/grpc/grpc-java.git
language: java
pool: ${server_pool}
run:
command:
- benchmarks/build/install/grpc-benchmarks/bin/benchmark_worker
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc-node.git
language: node
pool: ${server_pool}
run:
args:
- -r
- ./test/fixtures/native_native.js
- test/performance/worker.js
- --benchmark_impl=grpc
command:
- node
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: php
pool: ${server_pool}
run:
command:
- bash
- /run_scripts/run_worker.sh
- build:
args:
- build
- //src/python/grpcio_tests/tests/qps:qps_worker
command:
- bazel
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: python
pool: ${server_pool}
run:
command:
- bazel-bin/src/python/grpcio_tests/tests/qps/qps_worker
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc
language: ruby
pool: ${server_pool}
run:
args:
- src/ruby/qps/worker.rb
command:
- ruby
timeoutSeconds: ${timeout_seconds}
ttlSeconds: 86400