Update load test template and config generation. (#26398)

* Update load test template and config generation.

This change includes the following features and fixes:

* Add a script to generate load test examples.
* Update template generation logic to support a round trip from configs to templates (handling repeated clients and servers for the same language, and named clients and servers in source configs).
* Integrate safe language names from scenario config.
* Update template and config formatting (now that configs and templates are generated in a round trip).
* Fix shellcheck lint warnings.
* Update README.md.
Paulo Castello da Costa 4 years ago committed by GitHub
parent e66943b006
commit e76afe5457
Changed files:

  1. tools/internal_ci/linux/grpc_e2e_performance_gke.sh (6 lines changed)
  2. tools/internal_ci/linux/grpc_e2e_performance_v2.sh (6 lines changed)
  3. tools/run_tests/performance/README.md (43 lines changed)
  4. tools/run_tests/performance/loadtest_config.py (40 lines changed)
  5. tools/run_tests/performance/loadtest_examples.sh (120 lines changed)
  6. tools/run_tests/performance/loadtest_template.py (87 lines changed)
  7. tools/run_tests/performance/scenario_config.py (67 lines changed)
  8. tools/run_tests/performance/scenario_config_exporter.py (6 lines changed)
  9. tools/run_tests/performance/templates/loadtest_template_basic_all_languages.yaml (16 lines changed)
  10. tools/run_tests/performance/templates/loadtest_template_prebuilt_all_languages.yaml (7 lines changed)

@@ -15,7 +15,7 @@
set -ex
# Enter the gRPC repo root.
cd $(dirname $0)/../../..
cd "$(dirname "$0")/../../.."
source tools/internal_ci/helper_scripts/prepare_build_linux_rc
@@ -72,8 +72,8 @@ popd
# Build test configurations.
buildConfigs() {
local pool="$1"
local table="$2"
local -r pool="$1"
local -r table="$2"
shift 2
tools/run_tests/performance/loadtest_config.py "$@" \
-t ./tools/run_tests/performance/templates/loadtest_template_prebuilt_all_languages.yaml \

@@ -15,7 +15,7 @@
set -ex
# Enter the gRPC repo root.
cd $(dirname $0)/../../..
cd "$(dirname "$0")/../../.."
source tools/internal_ci/helper_scripts/prepare_build_linux_rc
@@ -73,8 +73,8 @@ popd
# Build test configurations.
buildConfigs() {
local pool="$1"
local table="$2"
local -r pool="$1"
local -r table="$2"
shift 2
tools/run_tests/performance/loadtest_config.py "$@" \
-t ./tools/run_tests/performance/templates/loadtest_template_prebuilt_all_languages.yaml \

@@ -163,7 +163,7 @@ repository, [grpc/test-infra](https://github.com/grpc/test-infra).
### Generating scenarios
The benchmarks framework uses the same test scenarios as the legacy one. These
The benchmarks framework uses the same test scenarios as the legacy one. The
script [scenario_config_exporter.py](./scenario_config_exporter.py) can be used
to export these scenarios to files, and also to count and analyze existing
scenarios.
@@ -231,6 +231,9 @@ run by applying the test to a cluster running the LoadTest controller with
$ kubectl apply -f loadtest_config.yaml
```
> Note: The most common way of running tests generated by this script is to use
> a _test runner_. For details, see [running tests](#running-tests).
A basic template for generating tests in various languages can be found here:
[loadtest_template_basic_all_languages.yaml](./templates/loadtest_template_basic_all_languages.yaml).
The following example generates configurations for C# and Java tests using this
@@ -254,7 +257,7 @@ The script `loadtest_config.py` takes the following options:
- `-t`, `--template`<br> Template file. A template is a configuration file that
may contain multiple client and server configuration, and may also include
substitution keys.
- `p`, `--prefix`<br> Test names consist of a prefix joined with a uuid with a
- `-p`, `--prefix`<br> Test names consist of a prefix joined with a uuid with a
dash. Test names are stored in `metadata.name`. The prefix is also added as
the `prefix` label in `metadata.labels`. The prefix defaults to the user name
if not set.
@@ -281,7 +284,7 @@ The script `loadtest_config.py` takes the following options:
- `--allow_server_language`<br> Allows cross-language scenarios where the server
is of a specified language, different from the scenario language. This is
typically `node` or `c++`. This flag may be repeated.
- `--instances_per_client`<br>This option generates multiple instances of the
- `--instances_per_client`<br> This option generates multiple instances of the
clients for each test. The instances are named with the name of the client
combined with an index (or only an index, if no name is specified). If the
template specifies more than one client for a given language, it must also
@@ -333,6 +336,19 @@ script can be invoked as follows:
$ loadtest_concat_yaml.py -i infile1.yaml infile2.yaml -o outfile.yaml
```
### Generating load test examples
The script [loadtest_examples.sh](./loadtest_examples.sh) is provided to
generate example load test configurations in all supported languages. This
script takes only one argument, which is the output directory where the
configurations will be created. The script produces a set of basic
configurations, as well as a set of template configurations intended to be used
with prebuilt images.
The [examples](https://github.com/grpc/test-infra/tree/master/config/samples)
in the repository [grpc/test-infra](https://github.com/grpc/test-infra) are
generated by this script.
### Generating configuration templates
The script [loadtest_template.py](./loadtest_template.py) generates a load test
@@ -349,7 +365,7 @@ was generated from the example configurations in
```
$ ./tools/run_tests/performance/loadtest_template.py \
-i ../test-infra/config/samples/*_example_loadtest.yaml \
--inject_client_pool --inject_driver_pool --inject_server_pool \
--inject_client_pool --inject_server_pool \
--inject_big_query_table --inject_timeout_seconds \
-o ./tools/run_tests/performance/templates/loadtest_template_basic_all_languages.yaml \
--name basic_all_languages
@@ -361,7 +377,7 @@ was generated by the following command:
```
$ ./tools/run_tests/performance/loadtest_template.py \
-i ../test-infra/config/samples/*_example_loadtest_with_pre_built_workers.yaml \
-i ../test-infra/config/samples/templates/*_example_loadtest_with_prebuilt_workers.yaml \
--inject_client_pool --inject_driver_image --inject_driver_pool \
--inject_server_pool --inject_big_query_table --inject_timeout_seconds \
-o ./tools/run_tests/performance/templates/loadtest_template_prebuilt_all_languages.yaml \
@@ -405,3 +421,20 @@ Annotations, on the other hand, are passed on to the test configurations, and
may be set to values or to substitution keys in themselves, allowing future
automation scripts to process the tests generated from these configurations in
different ways.
### Running tests
Collections of tests generated by `loadtest_config.py` are intended to be run
with a test runner. The code for the test runner is stored in a separate
repository, [grpc/test-infra](https://github.com/grpc/test-infra).
The test runner applies the tests to the cluster, and monitors the tests for
completion while they are running. The test runner can also be set up to run
collections of tests in parallel on separate node pools, and to limit the number
of tests running in parallel on each pool.
The test runner is used in the continuous integration setup defined in
[grpc_e2e_performance_gke.sh] and [grpc_e2e_performance_v2.sh].
[grpc_e2e_performance_gke.sh]: ../../internal_ci/linux/grpc_e2e_performance_gke.sh
[grpc_e2e_performance_v2.sh]: ../../internal_ci/linux/grpc_e2e_performance_v2.sh

@@ -46,11 +46,9 @@ CONFIGURATION_FILE_HEADER_COMMENT = """
"""
def label_language(language: str) -> str:
"""Convert scenario language to place in a resource label."""
return {
'c++': 'cxx',
}.get(language, language)
def safe_name(language: str) -> str:
"""Returns a name that is safe to use in labels and file names."""
return scenario_config.LANGUAGES[language].safename
def default_prefix() -> str:
@@ -136,10 +134,8 @@ def gen_loadtest_configs(
"""
validate_annotations(annotations)
prefix = loadtest_name_prefix or default_prefix()
cl = label_language(language_config.client_language or
language_config.language)
sl = label_language(language_config.server_language or
language_config.language)
cl = safe_name(language_config.client_language or language_config.language)
sl = safe_name(language_config.server_language or language_config.language)
scenario_filter = scenario_config_exporter.scenario_filter(
scenario_name_regex=scenario_name_regex,
category=language_config.category,
@@ -162,8 +158,7 @@ def gen_loadtest_configs(
metadata['name'] = name
if 'labels' not in metadata:
metadata['labels'] = dict()
metadata['labels']['language'] = label_language(
language_config.language)
metadata['labels']['language'] = safe_name(language_config.language)
metadata['labels']['prefix'] = prefix
if 'annotations' not in metadata:
metadata['annotations'] = dict()
@@ -194,7 +189,7 @@ def gen_loadtest_configs(
'unique names, name counts for language %s: %s') %
(cl, c.most_common()))
# Name client instances with an index starting from 0.
# Name client instances with an index starting from zero.
client_instances = []
for i in range(instances_per_client):
client_instances.extend(copy.deepcopy(clients))
@@ -222,12 +217,14 @@ def gen_loadtest_configs(
# Set servers to named instances.
spec['servers'] = servers
# Name driver with an index for consistency with workers.
if 'driver' not in spec:
spec['driver'] = dict()
# Name the driver with an index for consistency with workers.
# There is only one driver, so the index is zero.
if 'driver' in spec and 'run' in spec['driver']:
driver = spec['driver']
driver['language'] = 'cxx'
driver['name'] = component_name((driver.get('name', ''), str(i)))
if 'language' not in driver:
driver['language'] = safe_name('c++')
if 'name' not in driver or not driver['name']:
driver['name'] = '0'
spec['scenariosJSON'] = scenario_str
@@ -284,15 +281,6 @@ def config_dumper(header_comment: str) -> Type[yaml.SafeDumper]:
self.write_indent()
self.write_indicator(header_comment, need_whitespace=False)
def expect_block_sequence(self):
super().expect_block_sequence()
self.increase_indent()
def expect_block_sequence_item(self, first=False):
if isinstance(self.event, yaml.SequenceEndEvent):
self.indent = self.indents.pop()
super().expect_block_sequence_item(first)
def str_presenter(dumper, data):
if '\n' in data:
return dumper.represent_scalar('tag:yaml.org,2002:str',

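The new `safe_name` helper in `loadtest_config.py` above delegates to `scenario_config.LANGUAGES`, where each language object exposes a `safename`. A minimal self-contained sketch of the lookup (the `_Lang` stub and the `LANGUAGES` entries here are illustrative stand-ins, not the real table):

```python
# Sketch of the safe_name lookup. The _Lang stub and LANGUAGES entries are
# illustrative; the real table lives in scenario_config.py, where each
# Language object exposes a .safename attribute.
class _Lang:

    def __init__(self, safename: str):
        self.safename = safename


LANGUAGES = {'c++': _Lang('cxx'), 'go': _Lang('go'), 'php7': _Lang('php7')}


def safe_name(language: str) -> str:
    """Returns a name that is safe to use in labels and file names."""
    return LANGUAGES[language].safename


print(safe_name('c++'))  # cxx
print(safe_name('go'))   # go
```

The same value is then used for the `language` label in `metadata.labels`, so label values never contain characters such as `+` that Kubernetes labels disallow.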
@@ -0,0 +1,120 @@
#!/bin/bash
# Copyright 2021 The gRPC Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script generates a set of load test examples from templates.
LOADTEST_CONFIG=tools/run_tests/performance/loadtest_config.py
if (( $# < 1 )); then
echo "Usage: ${0} <output directory>" >&2
exit 1
fi
if [[ ! -x "${LOADTEST_CONFIG}" ]]; then
echo "${LOADTEST_CONFIG} not found." >&2
exit 1
fi
outputbasedir="${1}"
mkdir -p "${outputbasedir}/templates"
example_file() {
local scenario="${1}"
local suffix="${2}"
if [[ "${scenario#cpp_}" != "${scenario}" ]]; then
echo "cxx${suffix}"
return
fi
if [[ "${scenario#python_asyncio_}" != "${scenario}" ]]; then
echo "python_asyncio${suffix}"
return
fi
echo "${scenario%%_*}${suffix}"
}
example_language() {
local filename="${1}"
if [[ "${filename#cxx_}" != "${filename}" ]]; then
echo "c++"
return
fi
if [[ "${filename#python_asyncio_}" != "${filename}" ]]; then
echo "python_asyncio"
return
fi
echo "${filename%%_*}"
}
scenarios=(
"cpp_generic_async_streaming_ping_pong_secure"
"csharp_protobuf_async_unary_ping_pong"
"go_generic_sync_streaming_ping_pong_secure"
"java_generic_async_streaming_ping_pong_secure"
"node_to_node_generic_async_streaming_ping_pong_secure"
"php7_protobuf_php_extension_to_cpp_protobuf_sync_unary_ping_pong"
"python_generic_sync_streaming_ping_pong"
"python_asyncio_generic_async_streaming_ping_pong"
"ruby_protobuf_sync_streaming_ping_pong"
)
# Basic examples are intended to be runnable _as is_, so substitution keys
# are stripped. Fields can be inserted manually following the pattern of the
# prebuilt examples.
basic_example() {
local -r scenario="${1}"
local -r outputdir="${2}"
local -r outputfile="$(example_file "${scenario}" _example_loadtest.yaml)"
local -r language="$(example_language "${outputfile}")"
${LOADTEST_CONFIG} \
-l "${language}" \
-t ./tools/run_tests/performance/templates/loadtest_template_basic_all_languages.yaml \
-s client_pool= -s server_pool= -s big_query_table= \
-s timeout_seconds=900 --prefix=examples -u basic -r "^${scenario}$" \
--allow_client_language=c++ --allow_server_language=c++ \
--allow_server_language=node \
-o "${outputdir}/${outputfile}"
echo "Created example: ${outputfile}"
}
# Prebuilt examples contain substitution keys, so must be processed before
# running.
prebuilt_example() {
local -r scenario="${1}"
local -r outputdir="${2}"
local -r outputfile="$(example_file "${scenario}" _example_loadtest_with_prebuilt_workers.yaml)"
local -r language="$(example_language "${outputfile}")"
${LOADTEST_CONFIG} \
-l "${language}" \
-t ./tools/run_tests/performance/templates/loadtest_template_prebuilt_all_languages.yaml \
-s driver_pool="\${driver_pool}" -s driver_image="\${driver_image}" \
-s client_pool="\${workers_pool}" -s server_pool="\${workers_pool}" \
-s big_query_table="\${big_query_table}" -s timeout_seconds=900 \
-s prebuilt_image_prefix="\${prebuilt_image_prefix}" \
-s prebuilt_image_tag="\${prebuilt_image_tag}" --prefix=examples -u prebuilt \
-a pool="\${workers_pool}" -r "^${scenario}$" \
--allow_client_language=c++ --allow_server_language=c++ \
--allow_server_language=node \
-o "${outputdir}/${outputfile}"
echo "Created example: ${outputfile}"
}
for scenario in "${scenarios[@]}"; do
basic_example "${scenario}" "${outputbasedir}"
done
for scenario in "${scenarios[@]}"; do
prebuilt_example "${scenario}" "${outputbasedir}/templates"
done

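The prefix-stripping dispatch in `example_file` above (`${scenario#cpp_}` and friends) can be sketched in Python. This is a hypothetical port of the shell logic for illustration, not code from this change:

```python
def example_file(scenario: str, suffix: str) -> str:
    """Maps a scenario name to an example file name, mirroring the shell
    dispatch: 'cpp' becomes 'cxx', 'python_asyncio' is kept whole, and
    otherwise the first underscore-separated component is used."""
    if scenario.startswith('cpp_'):
        return 'cxx' + suffix
    if scenario.startswith('python_asyncio_'):
        return 'python_asyncio' + suffix
    return scenario.split('_', 1)[0] + suffix


print(example_file('cpp_generic_async_streaming_ping_pong_secure',
                   '_example_loadtest.yaml'))
# cxx_example_loadtest.yaml
print(example_file('go_generic_sync_streaming_ping_pong_secure',
                   '_example_loadtest.yaml'))
# go_example_loadtest.yaml
```

The special cases matter because `cpp` scenarios must use the label-safe name `cxx`, and `python_asyncio` would otherwise be truncated to `python` by the first-component rule.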
@@ -28,7 +28,7 @@
import argparse
import sys
from typing import Any, Dict, Iterable, Mapping, Type
from typing import Any, Dict, Iterable, List, Mapping, Type
import yaml
@@ -48,6 +48,28 @@ TEMPLATE_FILE_HEADER_COMMENT = """
"""
def insert_worker(worker: Dict[str, Any], workers: List[Dict[str,
Any]]) -> None:
"""Inserts client or server into a list, without inserting duplicates."""
def dump(w):
return yaml.dump(w, Dumper=yaml.SafeDumper, default_flow_style=False)
worker_str = dump(worker)
if any((worker_str == dump(w) for w in workers)):
return
workers.append(worker)
def uniquify_workers(workermap: Dict[str, List[Dict[str, Any]]]) -> None:
"""Name workers if there is more than one for the same map key."""
for workers in workermap.values():
if len(workers) <= 1:
continue
for i, worker in enumerate(workers):
worker['name'] = str(i)
def loadtest_template(
input_file_names: Iterable[str],
metadata: Mapping[str, Any],
@@ -59,11 +81,9 @@ def loadtest_template(
inject_timeout_seconds: bool,
inject_ttl_seconds: bool) -> Dict[str, Any]: # yapf: disable
"""Generates the load test template."""
clients = list()
servers = list()
spec = dict()
client_languages = set()
server_languages = set()
spec = dict() # type: Dict[str, Any]
clientmap = dict() # Dict[str, List[Dict[str, Any]]]
servermap = dict() # Dict[str, List[Dict[str, Any]]]
template = {
'apiVersion': 'e2etest.grpc.io/v1',
'kind': 'LoadTest',
@@ -81,20 +101,20 @@
input_file_name, input_config.get('kind')))
for client in input_config['spec']['clients']:
if client['language'] in client_languages:
continue
del client['name']
if inject_client_pool:
client['pool'] = '${client_pool}'
clients.append(client)
client_languages.add(client['language'])
if client['language'] not in clientmap:
clientmap[client['language']] = []
insert_worker(client, clientmap[client['language']])
for server in input_config['spec']['servers']:
if server['language'] in server_languages:
continue
del server['name']
if inject_server_pool:
server['pool'] = '${server_pool}'
servers.append(server)
server_languages.add(server['language'])
if server['language'] not in servermap:
servermap[server['language']] = []
insert_worker(server, servermap[server['language']])
input_spec = input_config['spec']
del input_spec['clients']
@@ -102,21 +122,35 @@
del input_spec['scenariosJSON']
spec.update(input_config['spec'])
clients.sort(key=lambda x: x['language'])
servers.sort(key=lambda x: x['language'])
uniquify_workers(clientmap)
uniquify_workers(servermap)
spec.update({
'clients': clients,
'servers': servers,
'clients':
sum((clientmap[language] for language in sorted(clientmap)),
start=[]),
'servers':
sum((servermap[language] for language in sorted(servermap)),
start=[]),
})
if inject_driver_image or inject_driver_pool:
driver = {'language': 'cxx'}
if 'driver' not in spec:
spec['driver'] = {'language': 'cxx'}
driver = spec['driver']
if 'name' in driver:
del driver['name']
if inject_driver_image:
driver['run'] = {'image': '${driver_image}'}
if 'run' not in driver:
driver['run'] = {}
driver['run']['image'] = '${driver_image}'
if inject_driver_pool:
driver['pool'] = '${driver_pool}'
spec['driver'] = driver
if 'run' not in driver:
if inject_driver_pool:
raise ValueError('Cannot inject driver.pool: missing driver.run.')
del spec['driver']
if inject_big_query_table:
if 'results' not in spec:
@@ -143,15 +177,6 @@ def template_dumper(header_comment: str) -> Type[yaml.SafeDumper]:
self.write_indent()
self.write_indicator(header_comment, need_whitespace=False)
def expect_block_sequence(self):
super().expect_block_sequence()
self.increase_indent()
def expect_block_sequence_item(self, first=False):
if isinstance(self.event, yaml.SequenceEndEvent):
self.indent = self.indents.pop()
super().expect_block_sequence_item(first)
return TemplateDumper

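The deduplication and naming logic added to `loadtest_template.py` above can be sketched end to end. The diff canonicalizes workers with `yaml.dump` before comparing; this sketch uses `json.dumps(..., sort_keys=True)` instead so it stays dependency-free, which is an assumption of the sketch, not the committed code:

```python
import json
from typing import Any, Dict, List


def insert_worker(worker: Dict[str, Any],
                  workers: List[Dict[str, Any]]) -> None:
    """Inserts a client or server into a list, skipping exact duplicates."""
    canon = lambda w: json.dumps(w, sort_keys=True)
    if any(canon(worker) == canon(w) for w in workers):
        return
    workers.append(worker)


def uniquify_workers(workermap: Dict[str, List[Dict[str, Any]]]) -> None:
    """Names workers if there is more than one for the same map key."""
    for workers in workermap.values():
        if len(workers) <= 1:
            continue
        for i, worker in enumerate(workers):
            worker['name'] = str(i)


clientmap: Dict[str, List[Dict[str, Any]]] = {}
for client in [{'language': 'go'}, {'language': 'go'},
               {'language': 'go', 'pool': 'big'}, {'language': 'java'}]:
    clientmap.setdefault(client['language'], [])
    insert_worker(client, clientmap[client['language']])

uniquify_workers(clientmap)

# Flatten the per-language lists in sorted language order, as the template
# generator does with sum(..., start=[]).
clients = sum((clientmap[language] for language in sorted(clientmap)),
              start=[])
print(clients)
```

The exact-duplicate `go` client is dropped, the two remaining `go` clients are named `0` and `1`, and the lone `java` client stays unnamed.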
@@ -24,7 +24,7 @@ SMOKETEST = 'smoketest'
SCALABLE = 'scalable'
INPROC = 'inproc'
SWEEP = 'sweep'
DEFAULT_CATEGORIES = [SCALABLE, SMOKETEST]
DEFAULT_CATEGORIES = (SCALABLE, SMOKETEST)
SECURE_SECARGS = {
'use_test_ca': True,
@@ -127,13 +127,13 @@ def _ping_pong_scenario(name,
server_threads_per_cq=0,
client_threads_per_cq=0,
warmup_seconds=WARMUP_SECONDS,
categories=DEFAULT_CATEGORIES,
categories=None,
channels=None,
outstanding=None,
num_clients=None,
resource_quota_size=None,
messages_per_stream=None,
excluded_poll_engines=[],
excluded_poll_engines=None,
minimal_stack=False,
offered_load=None):
"""Creates a basic ping pong scenario."""
@@ -162,7 +162,9 @@
'channel_args': [],
},
'warmup_seconds': warmup_seconds,
'benchmark_seconds': BENCHMARK_SECONDS
'benchmark_seconds': BENCHMARK_SECONDS,
'CATEGORIES': list(DEFAULT_CATEGORIES),
'EXCLUDED_POLL_ENGINES': [],
}
if resource_quota_size:
scenario['server_config']['resource_quota_size'] = resource_quota_size
@@ -232,10 +234,18 @@
return scenario
class CXXLanguage:
class Language(object):
def __init__(self):
self.safename = 'cxx'
@property
def safename(self):
return str(self)
class CXXLanguage(Language):
@property
def safename(self):
return 'cxx'
def worker_cmdline(self):
return ['cmake/build/qps_worker']
@@ -619,10 +629,7 @@ class CXXLanguage:
return 'c++'
class CSharpLanguage:
def __init__(self):
self.safename = str(self)
class CSharpLanguage(Language):
def worker_cmdline(self):
return ['tools/run_tests/performance/run_worker_csharp.sh']
@@ -747,10 +754,7 @@ class CSharpLanguage:
return 'csharp'
class PythonLanguage:
def __init__(self):
self.safename = 'python'
class PythonLanguage(Language):
def worker_cmdline(self):
return ['tools/run_tests/performance/run_worker_python.sh']
@@ -824,10 +828,7 @@ class PythonLanguage:
return 'python'
class PythonAsyncIOLanguage:
def __init__(self):
self.safename = 'python_asyncio'
class PythonAsyncIOLanguage(Language):
def worker_cmdline(self):
return ['tools/run_tests/performance/run_worker_python_asyncio.sh']
@@ -972,11 +973,7 @@ class PythonAsyncIOLanguage:
return 'python_asyncio'
class RubyLanguage:
def __init__(self):
pass
self.safename = str(self)
class RubyLanguage(Language):
def worker_cmdline(self):
return ['tools/run_tests/performance/run_worker_ruby.sh']
@@ -1037,12 +1034,11 @@ class RubyLanguage:
return 'ruby'
class Php7Language:
class Php7Language(Language):
def __init__(self, php7_protobuf_c=False):
pass
super().__init__()
self.php7_protobuf_c = php7_protobuf_c
self.safename = str(self)
def worker_cmdline(self):
if self.php7_protobuf_c:
@@ -1108,11 +1104,7 @@ class Php7Language:
return 'php7'
class JavaLanguage:
def __init__(self):
pass
self.safename = str(self)
class JavaLanguage(Language):
def worker_cmdline(self):
return ['tools/run_tests/performance/run_worker_java.sh']
@@ -1212,11 +1204,7 @@ class JavaLanguage:
return 'java'
class GoLanguage:
def __init__(self):
pass
self.safename = str(self)
class GoLanguage(Language):
def worker_cmdline(self):
return ['tools/run_tests/performance/run_worker_go.sh']
@@ -1297,12 +1285,11 @@ class GoLanguage:
return 'go'
class NodeLanguage:
class NodeLanguage(Language):
def __init__(self, node_purejs=False):
pass
super().__init__()
self.node_purejs = node_purejs
self.safename = str(self)
def worker_cmdline(self):
fixture = 'native_js' if self.node_purejs else 'native_native'

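The `scenario_config.py` refactoring above hoists `safename` into a `Language` base class: the base property defaults to `str(self)`, and a subclass overrides it only when its display name is not label-safe. A minimal sketch of the pattern (the `__str__` bodies here stand in for the real classes' methods):

```python
class Language(object):

    @property
    def safename(self):
        # Default: the language's display name is already label-safe.
        return str(self)


class CXXLanguage(Language):

    @property
    def safename(self):
        # 'c++' is not safe in labels or file names, so override.
        return 'cxx'

    def __str__(self):
        return 'c++'


class GoLanguage(Language):

    def __str__(self):
        return 'go'


print(CXXLanguage().safename)  # cxx
print(GoLanguage().safename)   # go
```

This removes the per-class `self.safename = str(self)` boilerplate that the diff deletes from `CSharpLanguage`, `PythonLanguage`, `RubyLanguage`, and the rest.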
@@ -48,17 +48,13 @@ from typing import Any, Callable, Dict, Iterable, NamedTuple
import scenario_config
# Language parameters for load test config generation.
LanguageConfig = NamedTuple('LanguageConfig', [('category', str),
('language', str),
('client_language', str),
('server_language', str)])
def as_dict_no_empty_values(self):
"""Returns the parameters as a dictionary, ignoring empty values."""
return dict((item for item in self._asdict().items() if item[1]))
def category_string(categories: Iterable[str], category: str) -> str:
"""Converts a list of categories into a single string for counting."""
if category != 'all':

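The `as_dict_no_empty_values` helper attached to `LanguageConfig` above can be sketched with a class-based `NamedTuple`; the field defaults here are assumptions for the sketch:

```python
from typing import NamedTuple


class LanguageConfig(NamedTuple):
    """Language parameters for load test config generation."""
    category: str = ''
    language: str = ''
    client_language: str = ''
    server_language: str = ''

    def as_dict_no_empty_values(self):
        """Returns the parameters as a dictionary, ignoring empty values."""
        return dict(item for item in self._asdict().items() if item[1])


config = LanguageConfig(category='scalable', language='go')
print(config.as_dict_no_empty_values())
# {'category': 'scalable', 'language': 'go'}
```

Dropping empty values keeps generated test metadata free of placeholder fields when `client_language` or `server_language` is unset.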
@@ -146,9 +146,6 @@ spec:
- src/ruby/qps/worker.rb
command:
- ruby
driver:
language: cxx
pool: ${driver_pool}
results:
bigQueryTable: ${big_query_table}
servers:
@@ -231,19 +228,6 @@ spec:
- --benchmark_impl=grpc
command:
- node
- build:
command:
- bash
- /build_scripts/build_qps_worker.sh
clone:
gitRef: master
repo: https://github.com/grpc/grpc.git
language: php7
pool: ${server_pool}
run:
command:
- bash
- /run_scripts/run_worker.sh
- build:
args:
- build

@@ -119,13 +119,6 @@ spec:
- /execute/worker-linux
- --benchmark_impl=grpc
image: ${prebuilt_image_prefix}/node:${prebuilt_image_tag}
- language: php7
pool: ${server_pool}
run:
command:
- bash
- /run_scripts/run_worker.sh
image: ${prebuilt_image_prefix}/php:${prebuilt_image_tag}
- language: python
pool: ${server_pool}
run:
