* ConnectionAttemptInjector: fix tsan failures
* Add hold to test
* Add hold to test
* Address comments
* Address comments
* Address comments
* Fix typo
Co-authored-by: Mark D. Roth <roth@google.com>
Previously this test failed about 1 in 1000 runs with a 1s timeout, producing a
`Deadline Exceeded` error. With a 500ms timeout I was able to reproduce the
failure 22 times in 1000 runs. With the timeout raised to 2s in this PR, the
failure did not reproduce in 5000 runs.
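For context, a minimal sketch of the kind of deadline bump described above, using the standard `grpc::ClientContext` API; the helper name and call site are illustrative, not the actual test code.

```cpp
// Illustrative sketch only (not the actual test code): raising a per-RPC
// deadline to 2s via the standard grpc::ClientContext API.
#include <chrono>

#include <grpcpp/grpcpp.h>

// Hypothetical helper; the real test configures its own context and deadline.
void SetTwoSecondDeadline(grpc::ClientContext* context) {
  // 1s flaked ~1/1000 runs and 500ms flaked ~22/1000 runs; with 2s the
  // DEADLINE_EXCEEDED failure did not reproduce in 5000 runs.
  context->set_deadline(std::chrono::system_clock::now() +
                        std::chrono::seconds(2));
}
```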
* [fixit] Scale down large tests
We have many tests that create 100 threads or more, and there is mounting evidence
that this is harmful to our CI environment.
When the original code for many of these tests was written, we ran our tests
under run_tests, which explicitly tracked the number of threads each test needed
and made sure that we weren't oversubscribing the test runner. Bazel has no such
facility (and the facility in run_tests has since been removed), so we need to
adjust.
This PR scales down a single test and is part of a series, so that we can
review and roll back easily if required.
* Mark up CPU usage
* Automated change: Fix sanity tests
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
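As an illustration of the kind of scale-down this series applies, a sketch under assumptions rather than the actual test: reducing a hard-coded worker-thread count so a single test no longer oversubscribes the Bazel test runner, with the test's CPU requirement marked up separately in its build rule. The constant name and worker signature below are hypothetical.

```cpp
// Illustrative sketch only; the constant name and worker body are hypothetical.
#include <thread>
#include <vector>

// Before: tests like these hard-coded 100+ worker threads, which oversubscribes
// a Bazel runner that does not track per-test thread needs. After: a smaller
// count, with the test's CPU usage marked up in its build metadata.
constexpr int kNumWorkerThreads = 10;

void RunWorkers(void (*work)(int worker_index)) {
  std::vector<std::thread> workers;
  workers.reserve(kNumWorkerThreads);
  for (int i = 0; i < kNumWorkerThreads; ++i) {
    workers.emplace_back(work, i);
  }
  for (auto& t : workers) t.join();
}
```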