LoadTest generator for OSS benchmarks framework. (#25781)

* LoadTest generator for OSS benchmarks framework.

This change adds a LoadTest configuration generator for the OSS
benchmarks framework. The output of the generator is a multipart
YAML file that specifies uniquely named LoadTest resources that
can be applied to a kubernetes cluster.

For the benchmarks framework, see https://github.com/grpc/test-infra.
Commit 3ad2e3185f (parent 122af200e7), authored by Paulo Castello da Costa and committed by GitHub.
Files changed:

1. tools/run_tests/performance/README.md
2. tools/run_tests/performance/loadtest_concat_yaml.py
3. tools/run_tests/performance/loadtest_config.py
4. tools/run_tests/performance/scenario_config_exporter.py

tools/run_tests/performance/README.md:

# Overview of performance test suite, with steps for manual runs:
For design of the tests, see https://grpc.io/docs/guides/benchmarking.
For scripts related to the GKE-based performance test suite (in development),
see [gRPC OSS benchmarks](#grpc-oss-benchmarks).
## Pre-reqs for running these manually:
In general the benchmark workers and driver build scripts expect
[linux_performance_worker_init.sh](../../gce/linux_performance_worker_init.sh)
to have been run already.
### To run benchmarks locally:
- From the grpc repo root, start the
  [run_performance_tests.py](../run_performance_tests.py) runner script.
### On remote machines, to start the driver and workers manually:
The [run_performance_tests.py](../run_performance_tests.py) top-level runner
script can also be used with remote machines, but for tasks such as profiling
the server, it might be useful to run workers manually.

1. You'll need a "driver" and separate "worker" machines. For example, you
   might use one GCE "driver" machine and 3 other GCE "worker" machines that
   are in the same zone.
2. Connect to each worker machine and start up a benchmark worker with a
   "driver_port".
   - For example, to start the grpc-go benchmark worker: [grpc-go worker
     main.go](https://github.com/grpc/grpc-go/blob/master/benchmark/worker/main.go)
     --driver_port <driver_port>
#### Commands to start workers in different languages:
- Note that these commands are what the top-level
  [run_performance_tests.py](../run_performance_tests.py) script uses to build
  and run different workers through the
  [build_performance.sh](./build_performance.sh) script and "run worker"
  scripts (such as [run_worker_java.sh](./run_worker_java.sh)).
##### Running benchmark workers for C-core wrapped languages (C++, Python, C#, Node, Ruby):
- These are simpler since they all live in the main grpc repo.

```shell
$ cd <grpc_repo_root>
$ tools/run_tests/performance/build_performance.sh
$ tools/run_tests/performance/run_worker_<language>.sh
```
- Note that there is one "run_worker" script per language, e.g.,
  [run_worker_csharp.sh](./run_worker_csharp.sh) for C#.
##### Running benchmark workers for gRPC-Java:
- You'll need the [grpc-java](https://github.com/grpc/grpc-java) repo.

```shell
$ cd <grpc-java-repo>
$ ./gradlew -PskipCodegen=true -PskipAndroid=true :grpc-benchmarks:installDist
$ benchmarks/build/install/grpc-benchmarks/bin/benchmark_worker --driver_port <driver_port>
```
##### Running benchmark workers for gRPC-Go:
- You'll need the [grpc-go repo](https://github.com/grpc/grpc-go).

```shell
$ cd <grpc-go-repo>/benchmark/worker && go install
$ # if profiling, it might be helpful to turn off inlining by building with "-gcflags=-l"
$ $GOPATH/bin/worker --driver_port <driver_port>
```
#### Build the driver:
- Connect to the driver machine (if using a remote driver) and from the grpc
  repo root:

```shell
$ tools/run_tests/performance/build_performance.sh
```
#### Run the driver:
1. Get the 'scenario_json' relevant for the scenario to run. Note that "scenario
json" configs are generated from [scenario_config.py](./scenario_config.py).
The [driver](../../../test/cpp/qps/qps_json_driver.cc) takes a list of these
configs as a json string of the form: `{scenario: <json_list_of_scenarios> }`
in its `--scenarios_json` command argument. One quick way to get a valid
json string to pass to the driver is by running the
[run_performance_tests.py](./run_performance_tests.py) locally and copying
the logged scenario json command arg.
2. From the grpc repo root:
   - Set `QPS_WORKERS` environment variable to a comma separated list of worker
     machines. Note that the driver will start the "benchmark server" on the
     first entry in the list, and the rest will be told to run as clients
     against the benchmark server.
Example running and profiling of go benchmark server:
```shell
$ export QPS_WORKERS=<host1>:10000,<host2>:10000,<host3>:10000
$ bins/opt/qps_json_driver --scenario_json='<scenario_json_scenario_config_string>'
```
While running the benchmark, a profiler can be attached to the server.
Example to count syscalls in grpc-go server during a benchmark:
- Connect to server machine and run:

```shell
$ netstat -tulpn | grep <driver_port> # to get pid of worker
$ perf stat -p <worker_pid> -e syscalls:sys_enter_write # stop after test complete
```
Example memory profile of grpc-go server, with `go tools pprof`:
- After a run is done on the server, see its alloc profile with:

```shell
$ go tool pprof --text --alloc_space http://localhost:<pprof_port>/debug/heap
```
### Configuration environment variables:
- QPS_WORKER_CHANNEL_CONNECT_TIMEOUT

  Consuming process: qps_worker

  Type: integer (number of seconds)
  This can be used to configure the amount of time that benchmark clients wait
  for channels to the benchmark server to become ready. This is useful in
  certain benchmark environments in which the server can take a long time to
  become ready. Note: if setting this to a high value, then the scenario config
  under test should probably also have a large "warmup_seconds".
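For example, before starting a worker against a slow-starting server, the timeout could be raised (the value 300 is illustrative):

```shell
# Give benchmark clients up to 300 seconds for channels to become ready.
# Illustrative value; pair a high timeout with a larger "warmup_seconds"
# in the scenario config under test.
export QPS_WORKER_CHANNEL_CONNECT_TIMEOUT=300
```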
- QPS_WORKERS

  Consuming process: qps_json_driver

  Type: comma separated list of host:port
  Set this to a comma separated list of QPS worker processes/machines. Each
  scenario in a scenario config specifies a certain number of servers,
  `num_servers`, and the driver will start "benchmark servers" on the first
  `num_servers` `host:port` pairs in the comma separated list. The rest will
  be told to run as clients against the benchmark server.
## gRPC OSS benchmarks
The scripts in this section generate LoadTest configurations for the GKE-based
gRPC OSS benchmarks framework. This framework is stored in a separate
repository, [grpc/test-infra](https://github.com/grpc/test-infra).
### Generating scenarios
The benchmarks framework uses the same test scenarios as the legacy one. The
script [scenario_config_exporter.py](./scenario_config_exporter.py) can be used
to export these scenarios to files, and also to count and analyze existing
scenarios.
The language(s) and category of the scenarios are of particular importance to
the tests. Continuous runs will typically run tests in the `scalable` category.
The following example counts scenarios in the `scalable` category:
```shell
$ ./tools/run_tests/performance/scenario_config_exporter.py --count_scenarios --category=scalable
Scenario count for all languages (category: scalable):
Count Language Client Server Categories
77 c++ None None scalable
19 python_asyncio None None scalable
16 java None None scalable
12 go None None scalable
12 node None node scalable
12 node_purejs None node scalable
9 csharp None None scalable
7 python None None scalable
5 ruby None None scalable
4 csharp None c++ scalable
4 php7 None c++ scalable
4 php7_protobuf_c None c++ scalable
3 python_asyncio None c++ scalable
2 ruby None c++ scalable
2 python None c++ scalable
1 csharp c++ None scalable
189 total scenarios (category: scalable)
```
### Generating load test configurations
The benchmarks framework uses LoadTest resources configured by YAML files. Each
LoadTest resource specifies a driver, a server, and one or more clients to run
the test. Each test runs one scenario. The scenario configuration is embedded in
the LoadTest configuration. Example configurations for various languages can be
found here:
https://github.com/grpc/test-infra/tree/master/config/samples
The script [loadtest_config.py](./loadtest_config.py) generates LoadTest
configurations for tests running a set of scenarios. The configurations are
written in multipart YAML format, either to a file or to stdout.
The LoadTest configurations are generated from a template. The example
configurations above can be used as templates.
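For example, a hypothetical invocation (the template path and the substitution keys `driver_pool` and `worker_pool` are illustrative, not defined by the script; the flags correspond to the script's argument parser):

```shell
$ ./tools/run_tests/performance/loadtest_config.py \
    -l go \
    -t ./loadtest_template.yaml \
    -s driver_pool=drivers worker_pool=workers \
    -u batch1 -d \
    --category=scalable \
    -o ./loadtest_config.yaml
```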
The LoadTests specified in the script output all have unique names and can be
run by applying the test to a cluster running the LoadTest controller with
`kubectl apply`:
```shell
$ kubectl apply -f loadtest_config.yaml
```
<!-- TODO(paulosjca): add more details on scripts and running tests. -->
### Concatenating load test configurations
The LoadTest configuration generator processes one language at a time, with a
given set of options. The convenience script
[loadtest_concat_yaml.py](./loadtest_concat_yaml.py) is provided to concatenate
several YAML files into one, so they can be run with a single command. It can be
invoked as follows:
```shell
$ loadtest_concat_yaml.py -i infile1.yaml infile2.yaml -o outfile.yaml
```

tools/run_tests/performance/loadtest_concat_yaml.py:

#!/usr/bin/env python3
# Copyright 2021 The gRPC Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Helper script to concatenate YAML files.
#
# This script concatenates multiple YAML files into a single multipart file.
# Input files are not parsed but processed as strings. This is a convenience
# script to concatenate the output files generated by loadtest_config.py for
# each individual language.
import argparse
import sys
from typing import Iterable
def gen_content_strings(input_files: Iterable[str]) -> Iterable[str]:
    if not input_files:
        return
    with open(input_files[0]) as f:
        content = f.read()
    yield content
    for input_file in input_files[1:]:
        with open(input_file) as f:
            content = f.read()
        yield '---\n'
        yield content

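The concatenation logic amounts to joining document strings with the `---` document separator. A self-contained sketch (the names here are illustrative, not part of the script):

```python
from typing import Iterable, List


def concat_yaml_documents(documents: Iterable[str]) -> str:
    """Joins YAML document strings into one multipart YAML string."""
    # '---\n' starts each document after the first, mirroring the generator
    # above, which yields a separator before every file except the first.
    return '---\n'.join(documents)


parts: List[str] = ['a: 1\n', 'b: 2\n']
print(concat_yaml_documents(parts))
```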
def main() -> None:
    argp = argparse.ArgumentParser(description='Concatenates YAML files.')
    argp.add_argument('-i',
                      '--inputs',
                      action='extend',
                      nargs='+',
                      type=str,
                      required=True,
                      help='Input files.')
    argp.add_argument(
        '-o',
        '--output',
        type=str,
        help='Concatenated output file. Output to stdout if not set.')
    args = argp.parse_args()
    with open(args.output, 'w') if args.output else sys.stdout as f:
        for content in gen_content_strings(args.inputs):
            print(content, file=f, sep='', end='')


if __name__ == '__main__':
    main()

tools/run_tests/performance/loadtest_config.py:

#!/usr/bin/env python3
# Copyright 2021 The gRPC Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Script to generate test configurations for the OSS benchmarks framework.
#
# This script filters test scenarios and generates uniquely named configurations
# for each test. Configurations are dumped in multipart YAML format.
import argparse
import copy
import datetime
import json
import os
import string
import sys
import uuid
from typing import Any, Dict, Iterable, List, Mapping, Optional
import yaml
import scenario_config
import scenario_config_exporter
def default_prefix() -> str:
    """Constructs and returns a default prefix for LoadTest names."""
    return os.environ.get('USER', 'loadtest')


def now_string() -> str:
    return datetime.datetime.now().strftime('%Y%m%d%H%M%S')

def validate_loadtest_name(name: str) -> None:
    """Validates that a LoadTest name is in the expected format."""
    if len(name) > 63:
        raise ValueError(
            'LoadTest name must be at most 63 characters long: %s' % name)
    if not all((s.isalnum() for s in name.split('-'))):
        raise ValueError('Invalid elements in LoadTest name: %s' % name)

def loadtest_base_name(scenario_name: str, uniquifiers: Iterable[str]) -> str:
    """Constructs and returns the base name for a LoadTest resource."""
    elements = scenario_name.split('_')
    elements.extend(uniquifiers)
    return '-'.join(elements)

def loadtest_name(prefix: str, scenario_name: str,
                  uniquifiers: Iterable[str]) -> str:
    """Constructs and returns a valid name for a LoadTest resource."""
    base_name = loadtest_base_name(scenario_name, uniquifiers)
    elements = []
    if prefix:
        elements.append(prefix)
    elements.append(str(uuid.uuid5(uuid.NAMESPACE_DNS, base_name)))
    name = '-'.join(elements)
    validate_loadtest_name(name)
    return name

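A property worth noting: `uuid.uuid5` is a name-based (SHA-1) UUID, so the generated LoadTest names are deterministic for a given scenario name and uniquifiers; re-running the generator with the same inputs reproduces the same name. A standalone illustration (the base name is made up):

```python
import uuid

# Illustrative base name, as loadtest_base_name() might produce it.
base_name = 'cpp-protobuf-async-unary-qps-unconstrained-20210101'
name_a = str(uuid.uuid5(uuid.NAMESPACE_DNS, base_name))
name_b = str(uuid.uuid5(uuid.NAMESPACE_DNS, base_name))

# Name-based UUIDs are deterministic for a given namespace and name.
assert name_a == name_b
assert len(name_a) == 36  # standard 8-4-4-4-12 hex representation
print(name_a)
```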
def validate_annotations(annotations: Dict[str, str]) -> None:
    """Validates that annotations do not contain reserved names.

    These names are automatically added by the config generator.
    """
    names = set(('scenario', 'uniquifiers')).intersection(annotations)
    if names:
        raise ValueError('Annotations contain reserved names: %s' % names)

def gen_run_indices(runs_per_test: int) -> Iterable[str]:
    """Generates run indices for multiple runs, as formatted strings."""
    if runs_per_test < 2:
        yield ''
        return
    prefix_length = len('{:d}'.format(runs_per_test - 1))
    # Zero-pad indices to a fixed width; a space-padded index would add a
    # space to the name, which validate_loadtest_name rejects.
    prefix_fmt = '{{:0{:d}d}}'.format(prefix_length)
    for i in range(runs_per_test):
        yield prefix_fmt.format(i)

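The index formatting can be sketched standalone; this illustrative reimplementation zero-pads each index to the width of the largest one, so the generated names have a fixed width and sort in run order:

```python
from typing import List


def run_indices(runs_per_test: int) -> List[str]:
    """Returns fixed-width run indices, e.g. '00'..'11' for 12 runs."""
    if runs_per_test < 2:
        return ['']
    width = len(str(runs_per_test - 1))
    # '{:0{}d}' zero-pads each index to the given width.
    return ['{:0{}d}'.format(i, width) for i in range(runs_per_test)]


print(run_indices(12))  # ['00', '01', ..., '11']
print(run_indices(1))   # ['']
```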
def gen_loadtest_configs(base_config: yaml.YAMLObject,
                         scenarios: Iterable[Mapping[str, Any]],
                         loadtest_name_prefix: str,
                         uniquifiers: Iterable[str],
                         annotations: Mapping[str, str],
                         runs_per_test: int = 1) -> Iterable[yaml.YAMLObject]:
    """Generates LoadTest configurations as YAML objects."""
    validate_annotations(annotations)
    prefix = loadtest_name_prefix or default_prefix()
    for scenario in scenarios:
        for run_index in gen_run_indices(runs_per_test):
            uniq = uniquifiers + [run_index] if run_index else uniquifiers
            name = loadtest_name(prefix, scenario['name'], uniq)
            # The embedded JSON has {"scenarios": []} as its top-level
            # element, matching the format dumped by
            # scenario_config_exporter.dump_to_json_files.
            scenario_str = json.dumps({'scenarios': [scenario]}, indent=' ')
            config = copy.deepcopy(base_config)
            metadata = config['metadata']
            metadata['name'] = name
            if 'labels' not in metadata:
                metadata['labels'] = dict()
            metadata['labels']['prefix'] = prefix
            if 'annotations' not in metadata:
                metadata['annotations'] = dict()
            metadata['annotations'].update(annotations)
            metadata['annotations'].update({
                'scenario': scenario['name'],
                'uniquifiers': uniq,
            })
            config['spec']['scenariosJSON'] = scenario_str
            yield config

def parse_key_value_args(args: Optional[Iterable[str]]) -> Dict[str, str]:
    """Parses arguments in the form key=value into a dictionary."""
    d = dict()
    if args is None:
        return d
    for arg in args:
        key, equals, value = arg.partition('=')
        if equals != '=':
            raise ValueError('Expected key=value: ' + arg)
        d[key] = value
    return d

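`str.partition` splits on the first `=` only, so values may themselves contain `=`. An illustrative standalone version:

```python
from typing import Dict, Iterable


def parse_kv(args: Iterable[str]) -> Dict[str, str]:
    """Parses 'key=value' strings; the value may contain further '=' signs."""
    result = {}
    for arg in args:
        # partition() returns (head, separator, tail); the tail keeps any
        # later '=' characters intact.
        key, sep, value = arg.partition('=')
        if sep != '=':
            raise ValueError('Expected key=value: ' + arg)
        result[key] = value
    return result


print(parse_kv(['pool=drivers', 'note=a=b']))
# {'pool': 'drivers', 'note': 'a=b'}
```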
def configure_yaml() -> None:
    """Configures the YAML library to dump data in the expected format."""

    def str_presenter(dumper, data):
        if '\n' in data:
            return dumper.represent_scalar('tag:yaml.org,2002:str',
                                           data,
                                           style='|')
        return dumper.represent_scalar('tag:yaml.org,2002:str', data)

    yaml.add_representer(str, str_presenter)

def main() -> None:
    language_choices = sorted(scenario_config.LANGUAGES.keys())
    argp = argparse.ArgumentParser(description='Generates load test configs.')
    argp.add_argument('-l',
                      '--language',
                      choices=language_choices,
                      required=True,
                      help='Language to benchmark.')
    argp.add_argument('-t',
                      '--template',
                      type=str,
                      required=True,
                      help='LoadTest configuration yaml file template.')
    argp.add_argument('-s',
                      '--substitutions',
                      action='extend',
                      nargs='+',
                      default=[],
                      type=str,
                      help='Template substitutions, in the form key=value.')
    argp.add_argument('-p',
                      '--prefix',
                      default='',
                      type=str,
                      help='Test name prefix.')
    argp.add_argument('-u',
                      '--uniquifiers',
                      action='extend',
                      nargs='+',
                      default=[],
                      type=str,
                      help='One or more strings to make the test name unique.')
    argp.add_argument(
        '-d',
        nargs='?',
        const=True,
        default=False,
        type=bool,
        help='Use creation date and time as an additional uniquifier.')
    argp.add_argument('-a',
                      '--annotations',
                      action='extend',
                      nargs='+',
                      default=[],
                      type=str,
                      help='Test annotations, in the form key=value.')
    argp.add_argument('-r',
                      '--regex',
                      default='.*',
                      type=str,
                      help='Regex to select scenarios to run.')
    argp.add_argument(
        '--category',
        choices=['all', 'inproc', 'scalable', 'smoketest', 'sweep'],
        default='all',
        help='Select a category of tests to run.')
    argp.add_argument(
        '--client_language',
        choices=language_choices,
        help='Select only scenarios with a specified client language.')
    argp.add_argument(
        '--server_language',
        choices=language_choices,
        help='Select only scenarios with a specified server language.')
    argp.add_argument('--runs_per_test',
                      default=1,
                      type=int,
                      help='Number of copies to generate for each test.')
    argp.add_argument('-o',
                      '--output',
                      type=str,
                      help='Output file name. Output to stdout if not set.')
    args = argp.parse_args()
    substitutions = parse_key_value_args(args.substitutions)
    with open(args.template) as f:
        base_config = yaml.safe_load(
            string.Template(f.read()).substitute(substitutions))
    scenario_filter = scenario_config_exporter.scenario_filter(
        scenario_name_regex=args.regex,
        category=args.category,
        client_language=args.client_language,
        server_language=args.server_language)
    scenarios = scenario_config_exporter.gen_scenarios(args.language,
                                                       scenario_filter)
    uniquifiers = args.uniquifiers
    if args.d:
        uniquifiers.append(now_string())
    annotations = parse_key_value_args(args.annotations)
    configs = gen_loadtest_configs(base_config,
                                   scenarios,
                                   loadtest_name_prefix=args.prefix,
                                   uniquifiers=uniquifiers,
                                   annotations=annotations,
                                   runs_per_test=args.runs_per_test)
    configure_yaml()
    with open(args.output, 'w') if args.output else sys.stdout as f:
        yaml.dump_all(configs, stream=f)


if __name__ == '__main__':
    main()

tools/run_tests/performance/scenario_config_exporter.py:

# See the License for the specific language governing permissions and
# limitations under the License.
# Library to extract scenario definitions from scenario_config.py
#
# Contains functions to filter, analyze and dump scenario definitions.
#
# This library is used in loadtest_config.py to generate the "scenariosJSON"
# field in the format accepted by the OSS benchmarks framework.
# See https://github.com/grpc/test-infra/blob/master/config/samples/cxx_example_loadtest.yaml
#
# It can also be used to dump scenarios to files, and to count scenarios by
# language.
#
# Example usage:
#
# scenario_config.py --export_scenarios -l cxx -f cxx_scenario_ -r '.*' \
# --category=scalable
#
# scenario_config.py --count_scenarios
import argparse
import collections
import json
import re
import sys
from typing import Any, Callable, Dict, Iterable, Optional, Tuple

import scenario_config

def category_string(categories: Iterable[str], category: str) -> str:
    """Converts a list of categories into a single string for counting."""
    if category != 'all':
        return category if category in categories else ''
    main_categories = ('scalable', 'smoketest')
    s = set(categories)
    c = [m for m in main_categories if m in s]
    s.difference_update(main_categories)
    c.extend(s)
    return ' '.join(c)

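The counting logic can be exercised standalone; here is a self-contained copy of the function with sample inputs:

```python
from typing import Iterable


def category_string(categories: Iterable[str], category: str) -> str:
    """Converts a list of categories into a single string for counting."""
    if category != 'all':
        return category if category in categories else ''
    # For 'all', list the main categories first, then any remaining ones.
    main_categories = ('scalable', 'smoketest')
    s = set(categories)
    c = [m for m in main_categories if m in s]
    s.difference_update(main_categories)
    c.extend(s)
    return ' '.join(c)


print(category_string(['sweep', 'scalable'], 'all'))      # 'scalable sweep'
print(category_string(['inproc'], 'scalable'))            # ''
print(category_string(['scalable', 'smoketest'], 'all'))  # 'scalable smoketest'
```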
def gen_scenario_languages(
        category: str) -> Iterable[Tuple[str, str, str, str]]:
    """Generates tuples containing the languages specified in each scenario."""
    for language in scenario_config.LANGUAGES:
        for scenario in scenario_config.LANGUAGES[language].scenarios():
            client_language = scenario.get('CLIENT_LANGUAGE')
            server_language = scenario.get('SERVER_LANGUAGE')
            categories = scenario.get('CATEGORIES', [])
            if category != 'all' and category not in categories:
                continue
            yield (language, client_language, server_language,
                   category_string(categories, category))

def scenario_filter(
        scenario_name_regex: str = '.*',
        category: str = 'all',
        client_language: Optional[str] = None,
        server_language: Optional[str] = None
) -> Callable[[Dict[str, Any]], bool]:
    """Returns a function to filter scenarios to process."""

    def filter_scenario(scenario: Dict[str, Any]) -> bool:
        """Filters scenarios that match specified criteria."""
        if not re.search(scenario_name_regex, scenario["name"]):
            return False
        # if the 'CATEGORIES' key is missing, treat scenario as part of
        # 'scalable' and 'smoketest'. This matches the behavior of
        # run_performance_tests.py.
        scenario_categories = scenario.get('CATEGORIES',
                                           ['scalable', 'smoketest'])
        if category not in scenario_categories and category != 'all':
            return False
        scenario_client_language = scenario.get('CLIENT_LANGUAGE')
        if client_language != scenario_client_language:
            if scenario_client_language:
                return False
        scenario_server_language = scenario.get('SERVER_LANGUAGE')
        if server_language != scenario_server_language:
            if scenario_server_language:
                return False
        return True

    return filter_scenario

def gen_scenarios(
        language_name: str,
        scenario_filter_function: Callable[[Dict[str, Any]], bool]
) -> Iterable[Dict[str, Any]]:
    """Generates scenarios that match a given filter function."""
    return map(
        scenario_config.remove_nonproto_fields,
        filter(scenario_filter_function,
               scenario_config.LANGUAGES[language_name].scenarios()))

def dump_to_json_files(scenarios: Iterable[Dict[str, Any]],
                       filename_prefix: str) -> None:
    """Dumps a list of scenarios to JSON files."""
    count = 0
    for scenario in scenarios:
        filename = '{}{}.json'.format(filename_prefix, scenario['name'])
        print('Writing file {}'.format(filename), file=sys.stderr)
        with open(filename, 'w') as outfile:
            # The dump file should have {"scenarios" : []} as the top level
            # element, when embedded in a LoadTest configuration YAML file.
            json.dump({'scenarios': [scenario]}, outfile, indent=2)
        count += 1
    print('Wrote {} scenarios'.format(count), file=sys.stderr)

def main() -> None:
    language_choices = sorted(scenario_config.LANGUAGES.keys())
    argp = argparse.ArgumentParser(description='Exports scenarios to files.')
    argp.add_argument('--export_scenarios',
                      nargs='?',
                      const=True,
                      default=False,
                      type=bool,
                      help='Export scenarios to JSON files.')
    argp.add_argument('--count_scenarios',
                      nargs='?',
                      const=True,
                      default=False,
                      type=bool,
                      help='Count scenarios for all test languages.')
    argp.add_argument('-l',
                      '--language',
                      choices=language_choices,
                      help='Language to export.')
    argp.add_argument('-f',
                      '--filename_prefix',
                      default='scenario_dump_',
                      type=str,
                      help='Prefix for exported JSON file names.')
    argp.add_argument('-r',
                      '--regex',
                      default='.*',
                      type=str,
                      help='Regex to select scenarios to run.')
    argp.add_argument(
        '--category',
        default='all',
        choices=['all', 'inproc', 'scalable', 'smoketest', 'sweep'],
        help='Select scenarios for a category of tests.')
    argp.add_argument(
        '--client_language',
        choices=language_choices,
        help='Select only scenarios with a specified client language.')
    argp.add_argument(
        '--server_language',
        choices=language_choices,
        help='Select only scenarios with a specified server language.')
    args = argp.parse_args()
    if args.export_scenarios and not args.language:
        print('Dumping scenarios requires a specified language.',
              file=sys.stderr)
        argp.print_usage(file=sys.stderr)
        return
    if args.export_scenarios:
        s_filter = scenario_filter(scenario_name_regex=args.regex,
                                   category=args.category,
                                   client_language=args.client_language,
                                   server_language=args.server_language)
        scenarios = gen_scenarios(args.language, s_filter)
        dump_to_json_files(scenarios, args.filename_prefix)
    if args.count_scenarios:
        print('Scenario count for all languages (category: {}):'.format(
            args.category))
        print('{:>5} {:16} {:8} {:8} {}'.format('Count', 'Language', 'Client',
                                                'Server', 'Categories'))
        c = collections.Counter(gen_scenario_languages(args.category))
        total = 0
        for ((l, cl, sl, cat), count) in c.most_common():
            print('{count:5} {l:16} {cl:8} {sl:8} {cat}'.format(l=l,
                                                                cl=str(cl),
                                                                sl=str(sl),
                                                                count=count,
                                                                cat=cat))
            total += count
        print('\n{:>5} total scenarios (category: {})'.format(
            total, args.category))

if __name__ == '__main__':
    main()
