* initial attempts to speed up qps tests
* make json_run_localhost finish without a lag of up to 5 seconds
* cap number of client channels for qps tests
* regenerate bazel qps scenarios
* add a todo for driver.cc
* adjust max channel count for streaming_from_server
* regenerate scenarios
These tests (especially the unconstrained versions) can get very
backlogged and may take a while to finish. We sometimes see flakes
while waiting for them. This is not hazardous (IMO), as the scripts
that run these tests already have timeouts to make sure they don't
truly go on forever.
2. Make the time spent in the benchmark phase actually be
benchmark_seconds, rather than benchmark_seconds - warmup_seconds
as it is currently (see the sketch below).
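
A sketch of the intended timing; ResetStats/CollectStats are
hypothetical stand-ins for the driver's stats plumbing, not the real
driver.cc API:

    #include <chrono>
    #include <iostream>
    #include <thread>

    // Hypothetical stand-ins for the driver's stats plumbing.
    static void ResetStats() { std::cout << "stats reset\n"; }
    static void CollectStats() { std::cout << "stats collected\n"; }

    void RunScenario(int warmup_seconds, int benchmark_seconds) {
      // Warmup phase: traffic flows, but its stats are discarded.
      std::this_thread::sleep_for(std::chrono::seconds(warmup_seconds));
      ResetStats();
      // The measured phase starts only now, so it lasts the full
      // benchmark_seconds instead of benchmark_seconds - warmup_seconds.
      std::this_thread::sleep_for(std::chrono::seconds(benchmark_seconds));
      CollectStats();
    }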
The tsan documentation says 2-20x slowdown, so set it at 20x.
The asan documentation says 1.2-2.7x, so set it at 3x.
The msan documentation says 2-4x, so set it at 4x.
These factors are now much less optimistic than before.
2. Reactivate tsan tests for qps_test.
3. Set the CPU load for qps_openloop_test.
4. Divide the qps_openloop_test Poisson rate by the slowdown factor of
the configuration (see the sketch after this list).
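
A sketch of how those factors could be applied; the multipliers come
from the sanitizer docs quoted above, while the function names are
illustrative:

    #include <string>

    // Multipliers taken from the sanitizer documentation:
    // tsan 2-20x -> 20, asan 1.2-2.7x -> 3, msan 2-4x -> 4.
    int SlowdownFactor(const std::string& config) {
      if (config == "tsan") return 20;
      if (config == "asan") return 3;
      if (config == "msan") return 4;
      return 1;
    }

    // Open-loop tests divide their Poisson offered load by the factor,
    // so a sanitized build is not asked to sustain an unsanitized rate.
    double ScaledOfferedLoad(double offered_load,
                             const std::string& config) {
      return offered_load / SlowdownFactor(config);
    }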
It now allows plugging in "reporter" instances to process the benchmark results arbitrarily.
This would allow, for example, sending results to a leaderboard and/or other systems for tracking performance metrics (a sketch follows below).
This allows us to get back to single binary tests where appropriate, which will help in-depth profiling efforts.
I've built this atop my smoke_test changes as they inspired me to get this done.
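
A minimal sketch of such a pluggable interface; the type and method
names here (Reporter, BenchmarkResult, ReportQPS) are illustrative,
not necessarily the ones in the tree:

    #include <memory>
    #include <utility>
    #include <vector>

    // Illustrative result record; real results carry more fields.
    struct BenchmarkResult {
      double qps;
      double latency_99th_us;
    };

    // Implementations decide what to do with results: print them,
    // post them to a leaderboard, feed a metrics tracker, etc.
    class Reporter {
     public:
      virtual ~Reporter() {}
      virtual void ReportQPS(const BenchmarkResult& result) = 0;
      virtual void ReportLatency(const BenchmarkResult& result) = 0;
    };

    // The driver fans each result out to every registered reporter.
    class CompositeReporter : public Reporter {
     public:
      void Register(std::unique_ptr<Reporter> reporter) {
        reporters_.push_back(std::move(reporter));
      }
      void ReportQPS(const BenchmarkResult& result) override {
        for (auto& r : reporters_) r->ReportQPS(result);
      }
      void ReportLatency(const BenchmarkResult& result) override {
        for (auto& r : reporters_) r->ReportLatency(result);
      }

     private:
      std::vector<std::unique_ptr<Reporter>> reporters_;
    };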
-) using dupenv_s (_dupenv_s in the MSVC CRT) instead of calling getenv_s and strdup'ing the result ourselves (see the first sketch after this list).
-) removed a few if checks for impossible-to-obtain conditions.
-) fixed various signed/unsigned casts.
-) using time_t instead of time32_t.
-) checking the output of FormatMessage for failures (second sketch after this list).
-) don't redefine _WIN32_WINNT without undefining it first (third sketch after this list).
-) fixed MSVC's interlocked casting.
-) renamed AddPort to AddListeningPort.
-) added protobuf's third_party includes to the search path.
-) added a missing definition for inet_ntop in mingw32.
-) removed useless declarations.
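
First sketch: an environment lookup using _dupenv_s, which allocates
the copy itself so no manual strdup is needed; env_dup is an
illustrative name, not the actual helper in the tree:

    #include <stdlib.h>
    #include <string.h>

    // Returns a malloc'd copy of the variable's value, or NULL if the
    // variable is unset. The caller frees the result with free().
    char* env_dup(const char* name) {
    #ifdef _MSC_VER
      char* value = NULL;
      size_t size = 0;
      // _dupenv_s allocates the copy for us; value stays NULL if the
      // variable is not set.
      if (_dupenv_s(&value, &size, name) != 0) return NULL;
      return value;
    #else
      const char* value = getenv(name);
      return value == NULL ? NULL : strdup(value);
    #endif
    }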
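
Second sketch: checking FormatMessage's return value before using the
buffer, with a generic fallback when it fails; log_last_error is an
illustrative helper:

    #include <windows.h>
    #include <stdio.h>

    void log_last_error(void) {
      DWORD code = GetLastError();
      LPSTR message = NULL;
      DWORD len = FormatMessageA(
          FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM |
              FORMAT_MESSAGE_IGNORE_INSERTS,
          NULL, code, 0, (LPSTR)&message, 0, NULL);
      // FormatMessage returns 0 on failure; the buffer must not be
      // used in that case.
      if (len == 0 || message == NULL) {
        fprintf(stderr, "error %lu (FormatMessage failed)\n",
                (unsigned long)code);
      } else {
        fprintf(stderr, "error %lu: %s\n", (unsigned long)code, message);
        LocalFree(message);
      }
    }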
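
Third sketch: guarding the _WIN32_WINNT definition so a value already
set by the toolchain doesn't trigger a macro-redefinition warning (the
0x0600 value is illustrative):

    /* Undefine any existing value before setting our own, instead of
       redefining the macro and triggering a warning. */
    #ifdef _WIN32_WINNT
    #undef _WIN32_WINNT
    #endif
    #define _WIN32_WINNT 0x0600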