This temporarily unblocks a related gtest upgrade. The ultimate goal is
to upgrade our gtest dependencies, but I don't have the cycles to
manage a potentially messy migration until at least next week. This PR
is coordinated with an internal change.
See code commentary for an explanation.
Add an additional constructor to allow `log_linux.cc` to compile with
GPR_PTHREAD_TLS. Without it:
```
../../third_party/grpc/src/core/lib/gpr/log_linux.cc:78:33: error: no viable conversion from 'int' to 'grpc_core::PthreadTlsImpl<long>'
static GPR_THREAD_LOCAL(long) tid = 0;
^ ~
../../third_party/grpc/src/core/lib/gpr/tls.h:64:3: note: candidate constructor not viable: no known conversion from 'int' to 'const grpc_core::PthreadTlsImpl<long> &' for 1st argument
PthreadTlsImpl(const PthreadTlsImpl&) = delete;
^
1 error generated.
```
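For reference, a minimal sketch of the shape of the fix, assuming a pthread-backed wrapper roughly like this (illustrative only, not the actual tls.h):
```
#include <pthread.h>

// Illustrative sketch of a pthread-backed TLS wrapper (not the real
// grpc_core::PthreadTlsImpl). The converting constructor is the added
// piece: it gives `static GPR_THREAD_LOCAL(long) tid = 0;` a viable
// conversion from its `int` initializer.
template <typename T>
class PthreadTlsImpl {
 public:
  PthreadTlsImpl() { pthread_key_create(&key_, nullptr); }
  // Added constructor: accept an initial value of the underlying type.
  PthreadTlsImpl(T t) : PthreadTlsImpl() { *this = t; }
  PthreadTlsImpl(const PthreadTlsImpl&) = delete;

  // Implicit conversion so reads and writes look like a plain variable.
  // Assumes sizeof(T) <= sizeof(void*), which holds for long and pointers.
  operator T() const { return reinterpret_cast<T>(pthread_getspecific(key_)); }
  T operator=(T t) {
    pthread_setspecific(key_, reinterpret_cast<void*>(t));
    return t;
  }

 private:
  pthread_key_t key_;
};
```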
* Implement type safety for TLS
This is mostly free when compiler support is available, but requires
careful templating when implemented using pthread.
Significantly slimmed the tls.h interface; it now only defines the "TLS
keyword" for each supported compiler, delegating enforcement of correct
usage (i.e. must be static) to the compiler itself.
Implemented implicit conversion for the pthread wrapper so it can be
used (mostly) the same as native support. A notable exception is that
static_cast<void*> is needed when printing a pointer stored in TLS
as %p.
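For instance, with the sketch above (variable names hypothetical):
```
#include <cstdio>

static PthreadTlsImpl<int*> tls_ptr;  // hypothetical pointer stored in TLS

void LogPointer() {
  int* p = tls_ptr;  // implicit conversion: reads like native TLS
  // %p is a varargs argument, so the user-defined conversion is not
  // applied implicitly; an explicit cast to void* is required.
  std::printf("p=%p tls=%p\n", static_cast<void*>(p),
              static_cast<void*>(tls_ptr));
}
```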
* Use GPR_THREAD_LOCAL macros consistently
* Buffer HPACK parsing until the end of a header boundary
HTTP2 headers are sent in (potentially) many frames, but all must be
sent sequentially with no traffic intervening.
This was not clear when I wrote the HPACK parser, and was indeed still
quite contentious on the HTTP2 mailing lists at the time.
Now that the matter is well settled (years ago!), we can take advantage
of that fact by delaying parsing until all bytes are available.
A future change will leverage this to avoid having to store and verify
partial parse state, completely eliminating indirect calls within the
parser.
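The idea, as a hedged sketch (names here are illustrative, not the actual chttp2 code): accumulate header-block fragments from HEADERS/CONTINUATION frames and only hand the parser a complete block:
```
#include <cstddef>
#include <cstdint>
#include <vector>

// Accumulates the header-block fragments of one HTTP2 header boundary.
class HeaderBlockBuffer {
 public:
  // Append one frame's fragment; END_HEADERS marks the block's end.
  void Append(const uint8_t* data, size_t len, bool end_headers) {
    bytes_.insert(bytes_.end(), data, data + len);
    complete_ = end_headers;
  }
  // Once true, HPACK decoding can assume every byte is in memory, so the
  // parser never needs to suspend and resume at arbitrary positions.
  bool ReadyToParse() const { return complete_; }
  const std::vector<uint8_t>& bytes() const { return bytes_; }

 private:
  std::vector<uint8_t> bytes_;
  bool complete_ = false;
};
```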
* maybe fixes
* xx
* fix boundary detection
* clang-format
* Revert "xx"
This reverts commit 258d712ed3.
* fix tests
* add missed check
* fixes
* fix
* update tests
* fix benchmark
* properly unref
* optimize final slice refcounting
* cleanup bm_chttp2_hpack
* start
* new parser progress
* refinement
* get it compiling
* bug-fix
* build files
* clang-tidy
* fixes
* fixes
* fixes
* fix-leaks
* clang-tidy
* comments
* fix merge error
* Revert "Buffer HPACK parsing until the end of a header boundary (#26700)"
This reverts commit 8bab3e4bf4.
* streaming hpack parser start
* streaming parser
* clang-format
* Rework HPackTable into C++
* clang-tidy
* fix merge
* actually set the size of the entries array
* better
This exposes a bug in clang, reported upstream as
https://bugs.llvm.org/show_bug.cgi?id=51368.
The clang bug is mitigated using a fake scoped lock, which allows the
current code to compile while also serving as a change detector that
keeps the workaround from going stale: if the compiler bug is fixed, the
compiler will see an overlapping locking requirement and reject this
code, prompting a human being to remove the workaround.
It is not possible for such a function to be implemented in a way that
is understood by annotalysis. Mark it deprecated and replace instances
of its use with direct mutex/condvar usage.
Add a bunch of missing thread safety annotations while I'm here.
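A hedged sketch of the fake-scoped-lock trick, written against Clang's -Wthread-safety attributes directly (the types and names here are illustrative, not gRPC's sync primitives):
```
#include <mutex>

class __attribute__((capability("mutex"))) Mutex {
 public:
  void Lock() __attribute__((acquire_capability())) { mu_.lock(); }
  void Unlock() __attribute__((release_capability())) { mu_.unlock(); }

 private:
  std::mutex mu_;
};

// Claims to acquire `mu` for the analysis but does nothing at runtime:
// the mutex really is held, just via a path the buggy analysis can't see.
class __attribute__((scoped_lockable)) FakeMutexLock {
 public:
  explicit FakeMutexLock(Mutex* mu)
      __attribute__((exclusive_lock_function(mu))) {}
  ~FakeMutexLock() __attribute__((release_capability())) {}
};

Mutex g_mu;
int g_counter __attribute__((guarded_by(g_mu))) = 0;

void BumpAssumingHeld() {      // caller actually holds g_mu already
  FakeMutexLock fake(&g_mu);   // no-op; informs the analysis only
  ++g_counter;                 // OK: analysis now believes g_mu is held
  // If the clang bug is fixed, the real acquisition becomes visible and
  // this fake one turns into a double-lock error, flagging the
  // workaround for removal -- exactly the change detector described above.
}
```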
* LB policy API improvements
* clang-format
* fix build
* a bit more cleanup
* use absl::variant<> for pick result (see the sketch after this run of commits)
* fix retry_lb_drop test
* clang-format
* fix grpclb_end2end_test
* fix xds_end2end_test
* try to make variant code a bit cleaner
* clang-format
* fix memory leak
* fix build
* clang-format
* fix error refcount bug
* remove PickResult factory functions
* clang-format
* add ctors to structs
* clang-format
* fix clang-tidy
* update comments
* move LB recv_trailing_metadata callback instead of copying it
* use Match() instead of providing PickResult::Handle()
* don't use Match() for now, since it breaks lock annotations
* update retry_lb_fail test
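A hedged sketch of the variant-based shape implied by the commits above (struct names and fields are illustrative, not the actual LB policy API):
```
#include <string>
#include "absl/types/variant.h"

// Each pick outcome is a small struct, constructed directly (the factory
// functions were removed in favor of plain ctors).
struct Complete { int subchannel_index; };  // stand-in for a subchannel ref
struct Queue {};
struct Fail { std::string status; };
struct Drop { std::string status; };

using PickResult = absl::variant<Complete, Queue, Fail, Drop>;

// Dispatch with absl::visit; per the commits above, a Match() helper was
// tried and backed out because it broke lock annotations.
struct PickHandler {
  void operator()(const Complete&) { /* proceed with the call */ }
  void operator()(const Queue&) { /* re-queue the pick */ }
  void operator()(const Fail&) { /* fail the call */ }
  void operator()(const Drop&) { /* drop the call */ }
};

void HandlePick(const PickResult& result) {
  absl::visit(PickHandler{}, result);
}
```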
* Use new stats API in open census filter
* Fix time and latency calculation
* Fix parent census context
* Add tests
* Reviewer comments
* Reviewer comments
* Reviewer comments
* Reviewer comments
* Fix error unref
* Add a context object for the overall call
* Remove TODO
* Reviewer comments
* Add isort_code.sh to sanity tests
* Run tools/distrib/isort_code.sh
* Fine tune the import order for relative imports
* Make pylint and project generation happy
* Fix a few corner cases
* Use --check instead of --diff
* The import order somehow impacts test results
* Make isort print diff and check output at the same time
* Let tools/run_tests/python_utils be a firstparty library
* Run isort against latest HEAD
This is a fairly low-effort migration of the current codebase into a C++ class, instead of free-standing C code.
It builds upon #26657 as a necessary first step.
I've tried to minimize changes to semantics or logic, except where required to get a minimal amount of encapsulation, which is the major aim of this change.
A future change in this series will buffer slices until all HPACK headers for a stream are in memory prior to decoding -- it's important to have an encapsulated API to the parser before doing so, however (hence this CL).
The next change after that will be an almost complete rewrite of the parsing functionality -- since we'll have the total set of header bytes, we'll no longer need to support suspending decoding at arbitrary points. This will allow us to move to a simple recursive descent parser, eliminate a bunch of indirection in this code, and end up in a much more malleable place for when we start doing metadata API changes.
(We'll likely also see some good performance wins!)
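To make the end state concrete, here is a hedged sketch of what recursive-descent parsing over a fully buffered block can look like (illustrative API, not the real parser), decoding an HPACK-style prefixed integer per RFC 7541 §5.1 with no resumable state:
```
#include <cstdint>
#include <optional>
#include <vector>

class BlockParser {
 public:
  explicit BlockParser(const std::vector<uint8_t>& block)
      : cur_(block.data()), end_(block.data() + block.size()) {}

  // Decode an integer with an n-bit prefix (RFC 7541 §5.1). Because the
  // whole block is in memory, running out of bytes is simply a parse
  // error (nullopt), not a "suspend and resume later" state.
  std::optional<uint64_t> ParseVarint(int prefix_bits) {
    if (cur_ == end_) return std::nullopt;
    const uint64_t max_prefix = (1u << prefix_bits) - 1;
    uint64_t value = *cur_++ & max_prefix;
    if (value < max_prefix) return value;
    for (int shift = 0;; shift += 7) {
      if (cur_ == end_ || shift > 56) return std::nullopt;  // truncated/huge
      const uint8_t byte = *cur_++;
      value += static_cast<uint64_t>(byte & 0x7f) << shift;
      if ((byte & 0x80) == 0) return value;
    }
  }

 private:
  const uint8_t* cur_;
  const uint8_t* end_;
};
```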
* Tighten the error tolerance requirement by 10x
* Make it 5 sigma instead of 4.5
* Rewrap comments
* Loosen the max concurrent requests in certain test cases