Overview

This directory contains scripts that facilitate building and running tests. We use Python scripts as the entry point for our tests because they let us run the same command line regardless of the platform you are using.

Unit tests (run_tests.py)

Builds gRPC in the given language and runs its unit tests. Run tools/run_tests/run_tests.py --help for the full list of options.

Example

tools/run_tests/run_tests.py -l python -c dbg

Useful options (among many others)
  • --use_docker Builds a Docker container with all the prerequisites for the given language and runs the tests inside that container.
  • --build_only Only build; do not run the tests (the two options combine, as the example after this list shows).
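
For example, the sketch below (using only the flags documented above) builds the Python targets inside a Docker container without running any tests:

tools/run_tests/run_tests.py -l python -c opt --use_docker --build_only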

Note: If you get an error such as ImportError: No module named httplib2, then you may be missing some Python modules. Install the module listed in the error and try again.
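
For example, assuming a pip-managed Python environment, the module named in the error message can be installed with:

python3 -m pip install httplib2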

Note: some tests may be flaky. Check the "Issues" tab for known flakes and other issues.

The full suite of unit tests will take many minutes to run.
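
To iterate faster during development, you can restrict a run to a subset of tests with the regex filter; the -r flag below is an assumption about the current revision, so confirm the exact name with --help:

tools/run_tests/run_tests.py -l c++ -c dbg -r <test name regex> (run only the tests whose name matches the regex)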

Interop tests (run_interop_tests.py)

Runs tests for cross-platform/cross-language interoperability. For more details, see the interop test descriptions. The script can also run interop tests for grpc-java and grpc-go, using sources checked out alongside the grpc repository.

Example

tools/run_tests/run_interop_tests.py -l python -s c++ --use_docker (run interop tests with Python client and C++ server)
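
The cross-language setup mentioned above follows the same pattern; the sketch below assumes grpc-java and grpc-go are checked out next to this repository and that these language names are accepted (confirm with --help):

tools/run_tests/run_interop_tests.py -l java -s go --use_docker (run interop tests with Java client and Go server)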

Note: if you see an error like no space left on device when running the interop tests using Docker, make sure that Docker is building the image files in a location with sufficient disk space.
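
Assuming a standard Docker installation, these stock Docker commands can help diagnose and reclaim disk space:

docker system df (show how much space images, containers, and volumes use)
docker system prune (remove stopped containers, dangling images, and unused networks)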

Performance benchmarks (run_performance_tests.py)

The run_performance_tests.py script is deprecated. See end-to-end benchmarking documentation for more info.

Artifacts & Packages (task_runner.py)

A generalized framework for running predefined tasks based on their labels. We use this to build binary artifacts and distribution packages and to test them.

Example

tools/run_tests/task_runner.py -f python artifact linux x64 (build tasks with labels python, artifact, linux, and x64)
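
The -f filter appears to select tasks matching all of the given labels, so dropping a label widens the selection; for example, this sketch (reusing only labels from the example above) builds the linux x64 artifacts for every language:

tools/run_tests/task_runner.py -f artifact linux x64 (build all artifact tasks for linux x64, regardless of language)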