[grpc][Gpr_To_Absl_Logging] Migrating from gpr to absl logging - BUILD
In this CL we are just editing the BUILD and .bzl files to add dependencies.
This is done to prevent merge conflicts and to avoid constantly having to regenerate the build files with generate_projects.sh for each set of changes.
Closes #36604
COPYBARA_INTEGRATE_REVIEW=https://github.com/grpc/grpc/pull/36604 from tanvi-jagtap:build_test_core_misc_01 8995ba4914
PiperOrigin-RevId: 633519619
[grpc][Gpr_To_Absl_Logging] Migrating from gpr to absl logging GPR_ASSERT
Replacing GPR_ASSERT with absl CHECK.
These changes have been made using string replacement and regex.
We will not be replacing all instances of CHECK with CHECK_EQ, CHECK_NE, etc., because there are too many call sites; only the ones that can be handled with a very simple regex and the least chance of failure will be converted.
Given that we have 5000+ instances of GPR_ASSERT to edit, doing it manually is too much work for both the author and the reviewer.
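As a rough illustration of the kind of mechanical rewrite involved (not taken from the actual diff; the identifiers here are hypothetical), a typical call site changes roughly like this:

```cpp
// Before: the legacy gpr assertion macro (from <grpc/support/log.h>).
GPR_ASSERT(channel != nullptr);
GPR_ASSERT(status == expected_status);

// After: absl's checking macros (from "absl/log/check.h").
CHECK(channel != nullptr);
// Only where a very simple regex can do so safely, a comparison becomes
// CHECK_EQ / CHECK_NE instead of a plain CHECK.
CHECK_EQ(status, expected_status);
```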
Closes #36457
COPYBARA_INTEGRATE_REVIEW=https://github.com/grpc/grpc/pull/36457 from tanvi-jagtap:tjagtap_misc_test 978d0411b8
PiperOrigin-RevId: 628949744
There's *very* little difference in cost between a pooled arena allocation and a tcmalloc allocation, but the pooled allocation strands memory for the lifetime of the call, whereas the tcmalloc allocation allows that memory to be shared between calls, leading to a much lower overall cost.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
We had an infinite recursion in our tracer parsing code... probably not a biggy, but it was blocking the fuzzers, so we're fixing it.
The pooled allocator currently has an ABA issue in the allocation path.
This change should fix that; the algorithm is described reasonably well in the PR.
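The PR text is the authoritative description; as a hedged sketch of the general technique only (not the actual gRPC code), one common way to avoid ABA on a lock-free free list is to pair the head pointer with a version counter, so that a compare-and-swap against a stale snapshot cannot succeed:

```cpp
#include <atomic>
#include <cstdint>

struct Node { Node* next; };

// Head pointer paired with a version counter. Every successful pop bumps the
// counter, so a thread holding a stale snapshot of the head cannot win the
// compare-exchange even if the same node address has been pushed back in the
// meantime (the classic ABA case).
struct TaggedHead {
  Node* node;
  uint64_t version;
};

class FreeList {
 public:
  // Pops a node, or returns nullptr if the list is empty (the caller would
  // then fall back to a fresh allocation).
  Node* Pop() {
    TaggedHead old_head = head_.load(std::memory_order_acquire);
    while (old_head.node != nullptr) {
      // Safe to read ->next: a pooled allocator keeps nodes alive for the
      // lifetime of the pool, even if another thread pops this node first.
      TaggedHead new_head{old_head.node->next, old_head.version + 1};
      if (head_.compare_exchange_weak(old_head, new_head,
                                      std::memory_order_acq_rel,
                                      std::memory_order_acquire)) {
        return old_head.node;
      }
      // CAS failure reloads old_head; loop and retry against the fresh view.
    }
    return nullptr;
  }

 private:
  std::atomic<TaggedHead> head_{TaggedHead{nullptr, 0}};
};
```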
<!--
If you know who should review your pull request, please assign it to
that
person, otherwise the pull request would get assigned randomly.
If your pull request is for a specific language, please add the
appropriate
lang label.
-->
We have many tests that create 100 threads or more, and there is mounting evidence that this is harmful to our CI environment.
When the original code for many of these tests was written we ran our tests under run_tests, which had explicit handling for tracking the number of threads each test needed and making sure that we weren't oversubscribing the test runner. Bazel has no such facility (and the facility in run_tests has since been removed), so we need to adjust.
This PR adjusts down a single test and is part of a series so that we can
review and roll back easily if required.
* [resource_quota] Custom controller
Instead of a PID controller, use a custom binary-search-inspired controller that will eventually converge to a good control value (a rough sketch of the idea follows this commit's notes).
Experiments have shown this can successfully hold 95% memory pressure, optimizing for performance most of the time and gracefully restricting memory usage under severe pressure.
There will be further changes after this one to use the new control value in various systems.
* Automated change: Fix sanity tests
* Update memory_quota.cc
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
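As a rough, hypothetical sketch of the binary-search idea only (the real controller lives in memory_quota.cc and differs in detail): keep low/high bounds on the control value and bisect toward the value that holds the target pressure.

```cpp
// Hypothetical sketch of a binary-search style controller; not the actual
// gRPC implementation, just the converging idea described above.
class PressureController {
 public:
  // `pressure` is current memory pressure in [0, 1]; returns a control value
  // in [0, 1] (roughly, how aggressively to restrict allocation).
  double Update(double pressure, double target_pressure) {
    if (pressure > target_pressure) {
      // Over target: move the lower bound up, tightening restrictions.
      lo_ = control_;
    } else {
      // Under target: move the upper bound down, favoring performance.
      hi_ = control_;
    }
    control_ = (lo_ + hi_) / 2.0;
    return control_;
  }

 private:
  double lo_ = 0.0;
  double hi_ = 1.0;
  double control_ = 0.0;
};
```

The real controller also has to widen its bounds again when conditions change; this sketch only shows the converging step.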
* [arena] Add ManagedNew(), gtest-ify test
Add a ManagedNew() method to Arena that calls the relevant destructor at Arena destruction time (a rough sketch of the idea follows at the end of these notes).
There are some cases coming up in the promise-based call work where this becomes super convenient, and I expect that's likely true in other places too.
* Automated change: Fix sanity tests
* review feedback
* use construct/destruct more
* Automated change: Fix sanity tests
* Automated change: Fix sanity tests
* fix
* Automated change: Fix sanity tests
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
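A minimal, hypothetical sketch of the ManagedNew() idea (not the real grpc_core::Arena, which bump-allocates the storage from the arena itself and therefore only needs to run the destructors):

```cpp
#include <utility>
#include <vector>

// Hypothetical sketch: objects created via ManagedNew() get their destructors
// run when the arena is destroyed, unlike a plain New() which never destructs.
class Arena {
 public:
  ~Arena() {
    // Destroy managed objects in reverse creation order.
    for (auto it = managed_.rbegin(); it != managed_.rend(); ++it) {
      it->destroy(it->ptr);
    }
  }

  template <typename T, typename... Args>
  T* ManagedNew(Args&&... args) {
    // The real arena would carve T's storage out of the arena; here we
    // heap-allocate for simplicity, so the deleter frees as well as destructs.
    T* p = new T(std::forward<Args>(args)...);
    managed_.push_back({p, [](void* q) { delete static_cast<T*>(q); }});
    return p;
  }

 private:
  struct Managed {
    void* ptr;
    void (*destroy)(void*);
  };
  std::vector<Managed> managed_;
};
```

Typical use would then look like `auto* foo = arena->ManagedNew<Foo>(arg);`, with `foo`'s destructor running when the arena goes away.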