Overview of performance test suite, with steps for manual runs:

For design of the tests, see https://grpc.io/docs/guides/benchmarking.

For scripts related to the GKE-based performance test suite (in development), see gRPC OSS benchmarks below.

Pre-reqs for running these manually:

In general, the benchmark worker and driver build scripts expect linux_performance_worker_init.sh to have been run already.

To run benchmarks locally:

  • From the grpc repo root, start the run_performance_tests.py runner script.
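
For example, a minimal local run might look like the following sketch (the -l and -r flags are assumptions for illustration; check the runner script's --help for the actual options):

$ tools/run_tests/run_performance_tests.py -l c++ -r '.*protobuf_async_streaming_qps_unconstrained.*'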

On remote machines, to start the driver and workers manually:

The run_performance_tests.py top-level runner script can also be used with remote machines, but, for example when profiling the server, it might be useful to run workers manually.

  1. You'll need a "driver" and separate "worker" machines. For example, you might use one GCE "driver" machine and 3 other GCE "worker" machines that are in the same zone.

  2. Connect to each worker machine and start up a benchmark worker with a "driver_port".

Commands to start workers in different languages:

Running benchmark workers for C-core wrapped languages (C++, Python, C#, Node, Ruby):
  • These are simpler since they all live in the main grpc repo.
$ cd <grpc_repo_root>
$ tools/run_tests/performance/build_performance.sh
$ tools/run_tests/performance/run_worker_<language>.sh
Running benchmark workers for gRPC-Java:
$ cd <grpc-java-repo>
$ ./gradlew -PskipCodegen=true -PskipAndroid=true :grpc-benchmarks:installDist
$ benchmarks/build/install/grpc-benchmarks/bin/benchmark_worker --driver_port <driver_port>
Running benchmark workers for gRPC-Go:
$ cd <grpc-go-repo>/benchmark/worker && go install
$ # if profiling, it might be helpful to turn off inlining by building with "-gcflags=-l"
$ $GOPATH/bin/worker --driver_port <driver_port>

Build the driver:

  • Connect to the driver machine (if using a remote driver) and from the grpc repo root:
$ tools/run_tests/performance/build_performance.sh

Run the driver:

  1. Get the 'scenario_json' relevant for the scenario to run. Note that "scenario json" configs are generated from scenario_config.py. The driver takes a list of these configs as a json string of the form: {scenario: <json_list_of_scenarios> } in its --scenarios_json command argument. One quick way to get a valid json string to pass to the driver is to run run_performance_tests.py locally and copy the logged scenario json command arg (see the sketch after this list).

  2. From the grpc repo root:

  • Set the QPS_WORKERS environment variable to a comma-separated list of worker machines. Note that the driver will start the "benchmark server" on the first entry in the list, and the rest will be told to run as clients against the benchmark server.
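
As a sketch of the "copy it from the logs" approach mentioned in step 1 (the -l flag and the grep pattern are assumptions; the exact log format may differ):

$ tools/run_tests/run_performance_tests.py -l c++ 2>&1 | tee perf_run.log
$ grep -- "--scenarios_json" perf_run.log | head -1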

Example of running and profiling a go benchmark server:

$ export QPS_WORKERS=<host1>:10000,<host2>:10000,<host3>:10000
$ bins/opt/qps_json_driver --scenarios_json='<scenario_json_scenario_config_string>'

Example profiling commands

While running the benchmark, a profiler can be attached to the server.

Example to count syscalls in grpc-go server during a benchmark:

  • Connect to server machine and run:
$ netstat -tulpn | grep <driver_port> # to get pid of worker
$ perf stat -p <worker_pid> -e syscalls:sys_enter_write # stop after test complete
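
To additionally capture a CPU flame graph of the server during the run, a common approach is the following sketch (it assumes Brendan Gregg's FlameGraph scripts, stackcollapse-perf.pl and flamegraph.pl, are on the PATH; the repo's process_local_perf_flamegraphs.sh automates something similar):

$ perf record -F 99 -g -p <worker_pid> -- sleep 30   # sample the worker for 30 seconds
$ perf script | stackcollapse-perf.pl | flamegraph.pl > server_flamegraph.svg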

Example memory profile of grpc-go server, with go tool pprof:

  • After a run is done on the server, see its alloc profile with:
$ go tool pprof --text --alloc_space http://localhost:<pprof_port>/debug/heap

Configuration environment variables:

  • QPS_WORKER_CHANNEL_CONNECT_TIMEOUT

    Consuming process: qps_worker

    Type: integer (number of seconds)

    This can be used to configure the amount of time that benchmark clients wait for channels to the benchmark server to become ready. This is useful in certain benchmark environments in which the server can take a long time to become ready. Note: if setting this to a high value, then the scenario config under test should probably also have a large "warmup_seconds".

  • QPS_WORKERS

    Consuming process: qps_json_driver

    Type: comma separated list of host:port

    Set this to a comma-separated list of QPS worker processes/machines. Each scenario in a scenario config specifies a certain number of servers, num_servers, and the driver will start "benchmark servers" on the first num_servers host:port pairs in the comma-separated list. The rest will be told to run as clients against the benchmark server. A combined usage sketch for both variables follows this list.
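
As an illustrative sketch only (host names, ports, and the timeout value are placeholders, and the worker scripts are assumed to forward --driver_port to the worker binary):

$ # On each worker machine: allow up to 5 minutes for channels to become ready.
$ export QPS_WORKER_CHANNEL_CONNECT_TIMEOUT=300
$ tools/run_tests/performance/run_worker_<language>.sh --driver_port 10000

$ # On the driver machine: the first entry hosts the benchmark server, the rest run as clients.
$ export QPS_WORKERS=<host1>:10000,<host2>:10000,<host3>:10000
$ bins/opt/qps_json_driver --scenarios_json='<scenario_json_scenario_config_string>'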

gRPC OSS benchmarks

The scripts in this section generate LoadTest configurations for the GKE-based gRPC OSS benchmarks framework. This framework is stored in a separate repository, grpc/test-infra.

Generating scenarios

The benchmarks framework uses the same test scenarios as the legacy one. The script scenario_config_exporter.py can be used to export these scenarios to files, and also to count and analyze existing scenarios.
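
For example, exporting the scenarios for a single language to files might look like the following (the --export_scenarios and -l flags are assumptions for illustration; run the script with -h to see the actual options):

$ ./tools/run_tests/performance/scenario_config_exporter.py --export_scenarios -l c++ --category=scalable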

The language(s) and category of the scenarios are of particular importance to the tests. Continuous runs will typically run tests in the scalable category.

The following example counts scenarios in the scalable category:

$ ./tools/run_tests/performance/scenario_config_exporter.py --count_scenarios --category=scalable
Scenario count for all languages (category: scalable):
Count  Language         Client   Server   Categories
   77  c++              None     None     scalable
   19  python_asyncio   None     None     scalable
   16  java             None     None     scalable
   12  go               None     None     scalable
   12  node             None     node     scalable
   12  node_purejs      None     node     scalable
    9  csharp           None     None     scalable
    7  python           None     None     scalable
    5  ruby             None     None     scalable
    4  csharp           None     c++      scalable
    4  php7             None     c++      scalable
    4  php7_protobuf_c  None     c++      scalable
    3  python_asyncio   None     c++      scalable
    2  ruby             None     c++      scalable
    2  python           None     c++      scalable
    1  csharp           c++      None     scalable

  189  total scenarios (category: scalable)

Generating load test configurations

The benchmarks framework uses LoadTest resources configured by YAML files. Each LoadTest resource specifies a driver, a server, and one or more clients to run the test. Each test runs one scenario. The scenario configuration is embedded in the LoadTest configuration. Example configurations for various languages can be found here:

https://github.com/grpc/test-infra/tree/master/config/samples

The script loadtest_config.py generates LoadTest configurations for tests running a set of scenarios. The configurations are written in multipart YAML format, either to a file or to stdout.

The LoadTest configurations are generated from a template. The example configurations above can be used as templates.
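
As a hedged illustration (the flag names and the template file name below are assumptions; run the script with -h to see the actual options), generating configurations for one language might look like:

$ ./tools/run_tests/performance/loadtest_config.py -l go \
    -t loadtest_template.yaml --category=scalable \
    -o loadtest_config.yaml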

The LoadTests specified in the script output all have unique names and can be run by applying the configuration to a cluster running the LoadTest controller with kubectl apply:

$ kubectl apply -f loadtest_config.yaml

Concatenating load test configurations

The LoadTest configuration generator processes one language at a time, with a given set of options. The convenience script loadtest_concat_yaml.py is provided to concatenate several YAML files into one, so they can be run with a single command. It can be invoked as follows:

$ loadtest_concat_yaml.py -i infile1.yaml infile2.yaml -o outfile.yaml
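
The concatenated output can then be applied to the cluster in a single command, as above:

$ kubectl apply -f outfile.yaml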