Alexander Polcyn
a6e796f58f
ignore core counts and core lists in qps json driver
8 years ago
Yuxuan Li
ac87a46225
enable uploading server cpu usage data from performance tests to big query
8 years ago
Alexander Polcyn
9f08d110f8
add a "perf" option to benchmarks script, generate profile text reports and flamegraphs
8 years ago
Jan Tattermusch
2de47cba05
enable running windows perf tests
8 years ago
murgatroid99
e8f9cbbd68
Remove some disabled copied Node Express perf tests
8 years ago
Yuxuan Li
d885a27ccb
fix repeated field
8 years ago
Yuxuan Li
317f60b98e
add a blank line back
8 years ago
Yuxuan Li
2b4d372620
hiding totalCpuTime and idleCpuTime from ServerStatus proto message
8 years ago
Yuxuan Li
9ba52e7aa9
unblock the performance tests. hide the cpu usage field for now.
8 years ago
murgatroid99
dabe6bd2af
Enable Node Express benchmarks, fix copy/paste errors
8 years ago
Craig Tiller
c939022dd5
More data massaging
8 years ago
Craig Tiller
ed531b8e82
More data massaging
8 years ago
Craig Tiller
77fbc1ca2a
Fix some typos
8 years ago
Craig Tiller
bd3ab19a95
Add some fields to bigquery
8 years ago
Sree Kuchibhotla
a83ee60b93
New streaming scenario
8 years ago
murgatroid99
b53e5d1f2b
Create benchmark client and server for Node Express
8 years ago
Craig Tiller
20afa3d7c9
BufferPool --> ResourceQuota
8 years ago
Sree Kuchibhotla
bba2f47e8f
remove formatting noise
8 years ago
Jan Tattermusch
d13fbca136
switch C# benchmark to coreclr
8 years ago
Craig Tiller
0c80c7d12c
Collecting failure status progress
8 years ago
Sree Kuchibhotla
923d641c27
Add a new scenario: Async-Client/Sync-Server
8 years ago
Sree Kuchibhotla
81b2bc323b
add async client sync server scenario
8 years ago
Craig Tiller
677966a257
Be explicit as to which tests to run
8 years ago
Craig Tiller
601cff4018
More programmatically generate scenarios
8 years ago
Craig Tiller
bf4939fd22
Extend sweeps further, add streaming
8 years ago
murgatroid99
232ca0275e
Removed unnecessary notes from scenario_config.py
8 years ago
murgatroid99
9030c81f20
Add a libuv endpoint to the C core, for use in the Node library
8 years ago
Craig Tiller
8c985c2866
Allow more sync threads
8 years ago
Craig Tiller
5709469c46
Fix sync/async
8 years ago
Craig Tiller
86cf6dcb1c
Remove some scenarios we can't run yet
8 years ago
Craig Tiller
0c98675e54
Fix outstanding calls
8 years ago
Craig Tiller
1423a63db9
Let's not do payload sweeps just yet
8 years ago
Craig Tiller
b6df24765f
Properly categorize tests to allow sweeps to be run by adhoc
8 years ago
Craig Tiller
c5aa700538
Significantly increase sweep space
8 years ago
Craig Tiller
6c38c1be30
Reduce outstanding request count
8 years ago
Craig Tiller
15372a3f22
Use a # of total outstanding requests to drive benchmark config
Instead of specifying # of outstanding requests/channel, specify a total
outstanding size, and try to fit that to the number of channels.
8 years ago
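The commit above describes driving benchmark config from a single total-outstanding-requests budget rather than a per-channel count. A minimal sketch of that idea in Python, with hypothetical names (this is not the actual scenario_config.py code), assuming the goal is simply to round the per-channel count up so the total budget is covered:

    import math

    def fit_outstanding_to_channels(total_outstanding, channels):
        # Spread a total outstanding-request budget over a channel count,
        # rounding up so channels * per_channel >= total_outstanding.
        channels = max(1, channels)
        per_channel = int(math.ceil(float(total_outstanding) / channels))
        return channels, per_channel

    # Example: a budget of 1000 outstanding requests over 64 channels
    # gives 16 outstanding requests per channel.
    print(fit_outstanding_to_channels(1000, 64))  # -> (64, 16)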
Sree Kuchibhotla
1b22cddc7d
Add clientSuccess and serverSuccess to BQ schema
8 years ago
Craig Tiller
4f2d9ae8e2
Add sync variants of important tests, add sweeps across different channel counts
9 years ago
Sree Kuchibhotla
6dbfce0a9d
Add clientSuccess and serverSuccess to BQ schema
9 years ago
Ken Payson
45c0f2b305
Migrated python performance tests to use GA API
9 years ago
Masood Malekghassemi
3b5b20682b
Make running individual Python tests less painful
Before this change, running Python tests individually required
building a tox environment via the run_tests script and then specifying
long environment variables to filter out just the test we wanted to run
(and then we wouldn't be able to get the output on interrupt, nor would
we have an easy way of determining the PID of the process for debugger
attachment). Now invoking the build_python.sh script creates a workable
python virtual environment that includes all necessary libraries and
tests (s.t. running a single test is now possible by just knowing the
module name). This does not change existing supported means of running
tests (e.g. through run_tests.py).
An additional way of running individual tests has been introduced.
Following invocation of `./tools/run_tests/build_python.sh` (or
run_tests.py), one may invoke
./$VENV/bin/python -m $TEST_MODULE_NAME
and acquire a single running process that *is* the test process (rather
than a parent of the process). $VENV is the virtual environment name
specified to `build_python.sh` (defaults to `py27`) and
$TEST_MODULE_NAME is what it says on the tin.
9 years ago
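The commit above explains that a single test module can be run directly with ./$VENV/bin/python -m $TEST_MODULE_NAME. A minimal sketch of what such a module looks like, assuming a unittest-based test and a hypothetical module name (not taken from the gRPC tree); the __main__ guard is what lets the -m invocation start the test in that very process:

    import unittest

    class ExampleTest(unittest.TestCase):
        # A trivial test case; any unittest.TestCase works the same way.
        def test_passes(self):
            self.assertEqual(1 + 1, 2)

    if __name__ == '__main__':
        # `python -m <module>` reaches this guard, so the test process is the
        # invoked process itself rather than a child spawned by a runner.
        unittest.main(verbosity=2)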
Masood Malekghassemi
1ff429da2a
Organize Python tests to use grpcio-tools directly
Moves all tests into a separate package. This does not change existing
supported means of running tests (e.g. through run_tests.py).
9 years ago
Craig Tiller
a286b040a9
small fixes
9 years ago
Ken Payson
9a36e6c7cd
Changed Python Sync streaming qps to follow spec
9 years ago
Ken Payson
571c12ef18
Remove AsyncStreaming client from python qps tests
9 years ago
Jan Tattermusch
28f6c01d76
Revert "Revive C++ large-message throughput tests"
9 years ago
vjpai
3fe694fa35
Revive throughput tests.
9 years ago
Jan Tattermusch
d27888b6db
mark unconstrained scenarios with scalable category
9 years ago
Jan Tattermusch
5cbccd0781
set client params differently for sync and async client
9 years ago
Jan Tattermusch
37a907e471
add support for CLIENT_LANGUAGE setting
9 years ago