Overview

This directory contains scripts that facilitate building and running tests. We use Python scripts as the entry point for our tests because that lets us run tests with the same command line regardless of the platform you are on.

Unit tests (run_tests.py)

Builds gRPC in a given language and runs its unit tests. Run tools/run_tests/run_tests.py --help for the full list of options.

Example

tools/run_tests/run_tests.py -l csharp -c dbg

Useful options (among many others)
  • --use_docker Builds a Docker container with all the prerequisites for the given language and runs the tests inside that container.
  • --build_only Only builds, does not run the tests (see the combined example after this list).
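
These options can be combined with the language and config flags shown above. The following is a hedged sketch: the flag names come from this README, while the choice of Python as the language and opt as the config is only illustrative.

tools/run_tests/run_tests.py -l python -c opt --use_docker --build_only (build the Python tests inside a Docker container without running them)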

Interop tests (run_interop_tests.py)

Runs tests for cross-platform/cross-language interoperability. For more details, see the interop test descriptions. The script can also run interop tests for grpc-java and grpc-go, using sources checked out alongside the grpc repository.

Example

tools/run_tests/run_interop_tests.py -l csharp -s c++ --use_docker (run interop tests with C# client and C++ server)
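
If grpc-java and grpc-go sources are checked out next to the grpc repository, the same -l/-s flags can select those implementations. This is a hedged sketch; the exact language identifiers accepted by the script may differ.

tools/run_tests/run_interop_tests.py -l java -s go --use_docker (run interop tests with a Java client and a Go server, assuming sibling checkouts of grpc-java and grpc-go)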

Note: if you see an error like "no space left on device" when running the interop tests using Docker, make sure that Docker is building the image files in a location with sufficient disk space.

Performance benchmarks (run_performance_tests.py)

Runs predefined benchmark scenarios for the given languages. Besides simply running all the scenarios locally, the script also supports orchestrating test runs with the client and server on different machines and uploading the results to BigQuery.

Example

tools/run_tests/run_performance_tests.py -l c++ node

Useful options
  • --regex Use a regular expression to select particular scenarios to run (see the example after this list).
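
For example, to run only a subset of scenarios by name (a hedged sketch; the pattern is illustrative and assumes some scenario names contain the word "streaming"):

tools/run_tests/run_performance_tests.py -l c++ --regex '.*streaming.*' (run only the C++ scenarios whose names match the pattern)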

Stress tests (run_stress_tests.py)

Runs modified interop test clients and servers under heavy load for an extended period of time to discover potential stability issues. Internally, the tests use Kubernetes to run the clients and servers on GKE and upload statistics to BigQuery.

tools/run_tests/stress_test/run_on_gke.py --gcp_project_id=<google-cloud-platform-project-id> --config_file=<path-to-config-file>

The directory tools/run_tests/stress_test/configs/ contains the config files for several scenarios.
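
A run might then look like the following hedged sketch; the project id and the config file name are placeholders, not specific files guaranteed to exist in the repository:

tools/run_tests/stress_test/run_on_gke.py --gcp_project_id=my-gcp-project --config_file=tools/run_tests/stress_test/configs/<scenario-config>.json (launch the scenario described by the chosen config file on GKE under the given project)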

Artifacts & Packages (task_runner.py)

A generalized framework for running predefined tasks based on their labels. We use it to build binary artifacts & distribution packages and to test them.

Example

tools/run_tests/task_runner.py -f python artifact linux x64 (build tasks with labels python, artifact, linux, and x64)