Closes #35153
This commit upgrades gRPC to protobuf v25.0 and makes some fixes to
account for upb changes. One major change is that upb has been merged
into the protobuf repo, so we can now drop the separate `@upb`
dependency. Another is that `.upb.c` files no longer exist and there are
new `.upb_minitable.h` and `.upb_minitable.c` files. The longer
filenames exceeded a Windows restriction, so to work around that I
renamed the `upb-generated` directory to just `upb-gen`, and likewise
for `upbdefs-generated`.
Design is documented at
[go/windows-dns-resolver-issue](http://go/windows-dns-resolver-issue)
(note that the design doc is slightly outdated regarding the shared
ownership model of the virtual socket that was implemented in
13bd2b404e).
Passed `//test/cpp/naming:resolver_component_tests_runner_invoker` and
`//test/cpp/naming:cancel_ares_query_test`:
```
C:\Users\yijiem\projects\grpc>bazel --output_base=C:\bazel6 test --dynamic_mode=off --verbose_failures --test_env=GRPC_EXPERIMENTS=event_engine_dns --test_env=GRPC_VERBOSITY=debug --test_env=GRPC_TRACE=cares_resolver --enable_runfiles=yes --nocache_test_results //test/cpp/naming:resolver_component_tests_runner_invoker
INFO: Analyzed target //test/cpp/naming:resolver_component_tests_runner_invoker (1 packages loaded, 8 targets configured).
INFO: Found 1 test target...
INFO: From Compiling src/core/lib/event_engine/windows/windows_engine.cc:
C:\bazel6\execroot\com_github_grpc_grpc\src/core/lib/channel/channel_args.h(287): warning C4312: 'reinterpret_cast': conversion from 'int' to 'void *' of greater size
Target //test/cpp/naming:resolver_component_tests_runner_invoker up-to-date:
bazel-bin/test/cpp/naming/resolver_component_tests_runner_invoker.exe
INFO: Elapsed time: 230.374s, Critical Path: 228.54s
INFO: 9 processes: 2 internal, 7 local.
INFO: Build completed successfully, 9 total actions
//test/cpp/naming:resolver_component_tests_runner_invoker PASSED in 221.2s
Executed 1 out of 1 test: 1 test passes.
```
```
C:\Users\yijiem\projects\grpc>bazel --output_base=C:\bazel6 test --dynamic_mode=off --verbose_failures --test_env=GRPC_EXPERIMENTS=event_engine_dns --test_env=GRPC_VERBOSITY=debug --test_env=GRPC_TRACE=cares_resolver --enable_runfiles=yes --nocache_test_results //test/cpp/naming:cancel_ares_query_test
INFO: Analyzed target //test/cpp/naming:cancel_ares_query_test (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
Target //test/cpp/naming:cancel_ares_query_test up-to-date:
bazel-bin/test/cpp/naming/cancel_ares_query_test.exe
INFO: Elapsed time: 49.656s, Critical Path: 48.00s
INFO: 6 processes: 2 internal, 4 local.
INFO: Build completed successfully, 6 total actions
//test/cpp/naming:cancel_ares_query_test PASSED in 43.0s
Executed 1 out of 1 test: 1 test passes.
```
This adds the directory-reloader implementation of `CrlProvider`, which
periodically reloads the CRL files in a directory per [gRFC
A69](https://github.com/grpc/proposal/pull/382).
Included in this are the following:
* A public API to create the `DirectoryReloaderCrlProvider`.
* A basic directory interface in gprpp with platform-specific impls for
listing the files in a directory (unfortunately, prior to C++17 there is no
`std::filesystem`, so we need platform-specific impls).
* The implementation of `DirectoryReloaderCrlProvider`, which takes an
event engine and a directory interface. This lets us test with the fuzzing
event engine for time mocking and with a test directory interface, so we
avoid having to create temporary directories and files in the tests. The
implementation is notably not in `include`, and
`CreateDirectoryReloaderCrlProvider` is the only way to construct one
from the public API, so we don't expose the event engine and directory
details to the user.
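A minimal usage sketch of that public factory (the header path and the exact parameter list of `CreateDirectoryReloaderCrlProvider` are assumptions here, not the definitive API): the caller supplies only a directory, a refresh interval, and an error callback, so the event engine and directory interface stay internal.
```cpp
// Sketch only; header path and factory signature are assumptions.
#include <chrono>
#include <memory>

#include <grpc/grpc_crl_provider.h>  // assumed public header

#include "absl/status/status.h"

std::shared_ptr<grpc_core::experimental::CrlProvider> MakeCrlProvider() {
  // Reload CRLs from the given directory every 10 minutes; the callback is
  // invoked when a reload attempt fails (e.g. an unreadable file).
  auto provider = grpc_core::experimental::CreateDirectoryReloaderCrlProvider(
      "/path/to/crl/directory", std::chrono::minutes(10),
      [](absl::Status status) { /* log the reload error */ });
  if (!provider.ok()) return nullptr;  // e.g. the directory could not be read
  return *provider;
}
```
Because only the factory is public, swapping the event engine or directory implementation in tests doesn't leak into the API surface.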
---------
Co-authored-by: gtcooke94 <gtcooke94@users.noreply.github.com>
Add TcpTracer interface for TCP instrumentation. It takes no gRPC
dependencies for use in external TCP implementations. Also add
HttpAnnotation for HTTP transport instrumentation using CallTracer.
Ditch the old priority scheme for ordering filters; instead, explicitly
mark up before/after constraints.
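For illustration only (this is not gRPC's actual filter-registration API): ordering filters from explicit before/after constraints rather than numeric priorities amounts to a topological sort over the constraint graph, roughly like this sketch.
```cpp
#include <map>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Returns an ordering of `filters` that respects each (a, b) constraint
// meaning "a must run before b". If the constraints contain a cycle, the
// result is shorter than the input and registration should be rejected.
std::vector<std::string> OrderFilters(
    const std::vector<std::string>& filters,
    const std::vector<std::pair<std::string, std::string>>& before) {
  std::map<std::string, std::vector<std::string>> successors;
  std::map<std::string, int> in_degree;
  for (const auto& f : filters) in_degree[f] = 0;
  for (const auto& [a, b] : before) {
    successors[a].push_back(b);
    ++in_degree[b];
  }
  std::queue<std::string> ready;
  for (const auto& [f, degree] : in_degree) {
    if (degree == 0) ready.push(f);
  }
  std::vector<std::string> order;
  while (!ready.empty()) {
    std::string f = ready.front();
    ready.pop();
    order.push_back(f);
    for (const auto& g : successors[f]) {
      if (--in_degree[g] == 0) ready.push(g);
    }
  }
  return order;
}
```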
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
The basic APIs for the CRL Reloading features.
This adds external types to represent CRL Providers, CRLs, and
CertificateInfo.
Internally we will use `CrlImpl` - this layer is needed to hide OpenSSL
details from the user.
gRFC: https://github.com/grpc/proposal/pull/382
Things done:
* Add external API for `CrlProvider`, `Crl`, `CertInfo` (`CertInfo` is
used during CRL lookup rather than passing the entire certificate).
* Add code paths in `ssl_transport_security` to utilize CRL providers
* Add `StaticCrlProvider`
* Refactor `crl_ssl_transport_security_test.cc` so it is more extensible
and can be used with providers
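A hedged usage sketch of the external API described above (the header path and the exact signatures of `CreateStaticCrlProvider` and the options setter are assumptions, not the definitive API):
```cpp
#include <memory>
#include <string>
#include <vector>

#include <grpc/grpc_crl_provider.h>  // assumed public header
#include <grpc/grpc_security.h>

// Attach a static CRL provider (built from PEM-encoded CRLs) to TLS
// credentials options; at handshake time, lookups go through CertInfo
// rather than the full certificate.
void ConfigureCrlChecking(grpc_tls_credentials_options* options,
                          const std::vector<std::string>& crl_pem_strings) {
  auto provider =
      grpc_core::experimental::CreateStaticCrlProvider(crl_pem_strings);
  if (!provider.ok()) return;  // e.g. one of the CRLs failed to parse
  grpc_tls_credentials_options_set_crl_provider(options, *provider);
}
```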
Instead of fixing a target size for writes, try to adapt it a little to
observed bandwidth.
The initial algorithm tries to get large writes within 100-1000ms
maximum delay - this range probably wants to be tuned, but let's see.
The hope here is that on slow connections we won't build up as much
buffered data, so when we need to send a ping ack it can go out without
great delay.
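A hypothetical sketch of the sizing idea (the numbers and names are illustrative, not the actual chttp2 code):
```cpp
#include <algorithm>
#include <cstddef>

// Pick a per-write target size so that, at the currently observed bandwidth,
// the write drains within roughly the 100ms-1000ms window mentioned above.
// The window and the clamps are illustrative tuning knobs.
size_t TargetWriteSize(double observed_bytes_per_second) {
  const double kMinDelaySeconds = 0.1;
  const double kMaxDelaySeconds = 1.0;
  const size_t kFloor = 32 * 1024;           // don't shrink below 32 KiB
  const size_t kCeiling = 16 * 1024 * 1024;  // don't grow past 16 MiB
  const double low = observed_bytes_per_second * kMinDelaySeconds;
  const double high = observed_bytes_per_second * kMaxDelaySeconds;
  const double target = (low + high) / 2;  // aim for the middle of the window
  return std::min(kCeiling, std::max(kFloor, static_cast<size_t>(target)));
}
```
On a slow connection the target shrinks, so less data sits back-buffered and a ping ack can get onto the wire sooner.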
Experiment 1: On RST_STREAM: reduce MAX_CONCURRENT_STREAMS for one round
trip.
Experiment 2: If a settings frame is outstanding with a lower
MAX_CONCURRENT_STREAMS than is configured, and we receive a new incoming
stream that would exceed the new cap, randomly reject it.
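A hypothetical sketch of experiment 2's shape (names and the rejection probability are illustrative, not the actual transport code):
```cpp
#include <cstdint>
#include <random>

// While a SETTINGS frame lowering MAX_CONCURRENT_STREAMS is still unacked,
// incoming streams above the pending (lower) cap are rejected with some
// probability instead of all being hard-failed during the race window.
bool ShouldRejectNewStream(uint32_t open_streams, uint32_t pending_max_streams,
                           uint32_t acked_max_streams, std::mt19937& rng) {
  if (pending_max_streams >= acked_max_streams) return false;  // no pending cut
  if (open_streams < pending_max_streams) return false;        // under new cap
  std::bernoulli_distribution reject(0.5);  // probability is illustrative
  return reject(rng);
}
```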
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Isolate ping callback tracking to its own file.
Also takes the opportunity to simplify keepalive code by applying the
ping timeout to all pings.
Adds an experiment to allow multiple pings outstanding too (this was
originally an accidental behavior change of the work, but one that I
think may be useful going forward).
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
More changes as part of the dualstack design:
- Change resolver and LB policy APIs to support multiple addresses per
endpoint. Specifically, replace `ServerAddress` with
`EndpointAddresses`, which encodes more than one address. Per-address
channel args are retained at the same level, so they are now
per-endpoint. For now, `EndpointAddresses` provides a single-address ctor
and a single-address accessor for backward compatibility, and
`ServerAddress` is an alias for `EndpointAddresses`; eventually, this
alias and the single-address methods will be removed.
- Add an `EndpointAddressSet` class, which represents an unordered set
of addresses to be used as a map key. This will be used in a number of
LB policies that need to store per-endpoint state.
- Change the LB policy API's `ChannelControlHelper::CreateSubchannel()`
method to take the address and per-endpoint channel args as separate
parameters, so that we don't need to construct a legacy `ServerAddress`
object as we create a new subchannel for each address in the endpoint.
- Change pick_first to flatten the address list.
- Change ring_hash to use `EndpointAddressSet` as the key for its
endpoint map, and to use the first address of the endpoint as the hash
key.
- Change WRR to use `EndpointAddressSet` as the key for its endpoint
weight map.
Note that support for multiple addresses per endpoint is guarded in RR
by the existing `round_robin_delegate_to_pick_first` experiment and in
WRR by the existing `wrr_delegate_to_pick_first` experiment.
This PR does *not* include support for multiple addresses per endpoint
for the outlier_detection or xds_override_host LB policies; those will
come in subsequent PRs.
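For illustration, a conceptual sketch with simplified stand-in types (not the real `EndpointAddresses`/`EndpointAddressSet` classes) of the shape described above: an endpoint carrying several addresses plus per-endpoint args, and an order-insensitive address set used as a map key for per-endpoint state.
```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// One endpoint: several addresses plus per-endpoint channel args.
struct EndpointAddressesSketch {
  std::vector<std::string> addresses;
  std::map<std::string, std::string> args;
};

// Order-insensitive key: {A, B} and {B, A} identify the same endpoint.
using EndpointKeySketch = std::set<std::string>;

EndpointKeySketch MakeKey(const EndpointAddressesSketch& endpoint) {
  return EndpointKeySketch(endpoint.addresses.begin(),
                           endpoint.addresses.end());
}

// Per-endpoint state map keyed the way WRR's weight map is described above.
using EndpointWeightMapSketch = std::map<EndpointKeySketch, double>;
```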
Expand our fuzzing capabilities by allowing fuzzers to choose the bits
that go into random number distribution generators.
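A small sketch of the technique (class and function names here are illustrative, not gRPC's fuzzing types): a bit generator that satisfies the standard URBG requirements but draws its "random" bits from fuzzer-supplied data, so any distribution layered on top is fully determined by the fuzz input.
```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <utility>
#include <vector>

// Satisfies the UniformRandomBitGenerator requirements, but the "random"
// bits come from fuzzer-supplied data; once the data runs out it returns 0.
class FuzzerBitGen {
 public:
  using result_type = uint64_t;
  explicit FuzzerBitGen(std::vector<uint64_t> bits) : bits_(std::move(bits)) {}
  static constexpr result_type min() { return 0; }
  static constexpr result_type max() { return ~result_type{0}; }
  result_type operator()() {
    if (next_ >= bits_.size()) return 0;
    return bits_[next_++];
  }

 private:
  std::vector<uint64_t> bits_;
  size_t next_ = 0;
};

// Any distribution layered on top is now fully determined by the fuzz input,
// so a failing case reproduces exactly from its corpus entry.
int PickDelayMs(FuzzerBitGen& gen) {
  return std::uniform_int_distribution<int>(0, 1000)(gen);
}
```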
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
This has been stable for a bit, everywhere that the EventEngine is
enabled. Going forward, I think the event_engine_{client|listener}
experiments can probably be used to regulate thread-pool-specific
issues.
---------
Co-authored-by: drfloob <drfloob@users.noreply.github.com>
Most recent attempt was #34320, reverted in #34335.
The first commit here is a pure revert. The second commit fixes the
outlier_detection unit test to pass both with and without the
experiment.
This is a follow-up to https://github.com/grpc/grpc/pull/34103.
That pull request explicitly aimed to introduce shared library builds
for Windows (DLLs) while effecting zero material change to the existing
build pipelines. That aspiration meant that the grpc++_unsecure library
had to be effectively excluded from the build (because including it
would have also included a dependency on openssl, which makes no sense
given its purpose).
This PR addresses that by:
* Extracting the single function in grpc_tls_certificate_provider with a
dependency on openssl into a separate compilation unit
* Including that new .cc file into the grpc library
* Including grpc_tls_certificate_provider and one other source file into
grpc_unsecure for the Windows DLL build only.
* Reinstating the grpc++_unsecure library which is a prerequisite for
many tests.
* Regenerating all files affected by the changes in Bazel BUILD that
introduce the new source file.
This change does affect the operation of other build pipelines - I have
confirmed that it does not break the Linux Bazel build.
Reverts grpc/grpc#34325
… c-ares versions (#34314)"
This reverts commit eb37b91072.
Tested by changing c-ares to `1.14.0` in the Bazel dependency and
verifying that `posix_event_engine_test` builds. The test would still
fail, though, since c-ares versions < `1.16.0` do not come with
address-sorting capability, so the relevant test cases will fail. But
this is probably sufficient to make the Cloud C++ CI job pass and
unblock the 1.58 release.
Building out a new framing layer for chttp2.
The central idea here is to have the framing layer be solely responsible
for serialization of frames, and their deserialization - the framing
layer can reject frames that have invalid syntax - but the enacting of
what that frame means is left to a higher layer.
This class will become foundational for the promise conversion of chttp2
- by eliminating action from the parsing of frames we can reuse this
sensitive code.
Right now the new layer is inactive - there's a test that exercises it
relatively well, and not much more. In the next PRs I'll add an
experiment to enable using this layer or the existing code in the
writing and reading paths.
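A rough sketch of the layering idea under assumptions (simplified types, not the actual chttp2 frame classes): the framing layer only validates syntax and produces plain data, while acting on the frame happens elsewhere.
```cpp
#include <cstdint>
#include <vector>

// A decoded PING frame: plain data, no behavior attached.
struct PingFrameSketch {
  bool ack = false;
  uint64_t opaque = 0;
};

struct PingParseResultSketch {
  bool ok = false;        // false => syntactically invalid frame
  PingFrameSketch frame;  // meaningful only when ok is true
};

// The framing layer validates syntax (here: the fixed 8-byte payload from
// RFC 7540) and produces data; it never acts on the frame.
PingParseResultSketch ParsePingFrame(const std::vector<uint8_t>& payload,
                                     uint8_t flags) {
  PingParseResultSketch result;
  if (payload.size() != 8) return result;
  uint64_t opaque = 0;
  for (uint8_t byte : payload) opaque = (opaque << 8) | byte;
  result.frame.ack = (flags & 0x1) != 0;
  result.frame.opaque = opaque;
  result.ok = true;
  return result;
}
// Deciding what a ping means (e.g. scheduling an ack) is left to the higher
// layer that consumes PingParseResultSketch.
```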
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Proposed alternative to https://github.com/grpc/grpc/pull/34024.
This version has a simpler, faster busy-count implementation based on a
sharded set of atomic counts: fast increment/decrement operations,
relatively slower summation of total counts (which need to happen much
less frequently).
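A sketch of the sharded busy-count idea (shard count and layout are assumptions, not the actual thread-pool code): increments and decrements hit one cache-line-padded shard, while the comparatively rare total sums across all shards.
```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>

class ShardedBusyCount {
 public:
  // Fast paths: touch a single cache-line-padded shard.
  void Increment() { Shard().count.fetch_add(1, std::memory_order_relaxed); }
  void Decrement() { Shard().count.fetch_sub(1, std::memory_order_relaxed); }

  // Slower path: sum every shard. Callers tolerate a slightly stale value,
  // which is why this only needs to be cheap enough for infrequent use.
  int64_t Total() const {
    int64_t total = 0;
    for (const auto& shard : shards_) {
      total += shard.count.load(std::memory_order_relaxed);
    }
    return total;
  }

 private:
  static constexpr size_t kShards = 16;  // shard count is illustrative
  struct alignas(64) PaddedCount {
    std::atomic<int64_t> count{0};
  };
  PaddedCount& Shard() {
    // Pin each thread to one shard so its increments stay uncontended.
    thread_local const size_t index =
        std::hash<std::thread::id>()(std::this_thread::get_id()) % kShards;
    return shards_[index];
  }
  PaddedCount shards_[kShards];
};
```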
- Extract build metadata for some external dependencies from bazel
build. This is achieved by letting extract_metadata_from_bazel_xml.py
analyze some external libraries and sources. The logic is basically the
same as for internal libraries, I only needed to teach
extract_metadata_from_bazel_xml.py which external libraries it is
allowed to analyze.
* currently, the list of source files is automatically determined for
the `z`, `upb`, `re2` and `gtest` dependencies (at least when we're
building in "embedded" mode - e.g. mostly native extensions for
python, php, ruby etc.; cmake has the ability to replace some of these
dependencies with actual cmake dependencies).
- Eliminate the need for manually written gen_build_yaml.py for some
dependencies.
- Make the info on target dependencies in build_autogenerated.yaml more
accurate and complete. Until now, some dependencies were allowed to show
up in build_autogenerated.yaml and some were being skipped. This made
generating the CMakeLists.txt and Makefile quite confusing (since some
dependencies were explicitly mentioned and some had to be assumed by the
build system).
- Overhaul the Makefile
* the Makefile is currently only used internally (e.g. for ruby and PHP
builds)
* until now, the makefile wasn't really using the info about which
targets depend on which libraries; it was effectively hardcoding the
dependency data (by magically "knowing" the list of everything that
e.g. "grpc" depends on).
* After the overhaul, the Makefile.template now actually looks at the
library dependencies and uses them when generating the makefile. This
gives a more correct and easier-to-maintain makefile.
* since csharp is no longer on the master branch, remove all mentions of
"csharp" targets in the Makefile.
Other notable changes:
- make extract_metadata_from_bazel_xml.py capable of resolving workspace
bind() rules (so that it knows the real name of the target that is
referred to as e.g. `//external:xyz`)
TODO:
- [DONE] ~~pkgconfig C++ distribtest~~
- [DONE] ~~update third_party/README to reflect changes in how some deps
get updated now.~~
Planned followups:
- cleanup naming of some targets in build metadata and buildsystem
templates: libssl vs boringssl, ares vs cares etc.
- further cleanup of Makefile
- further cleanup of CMakeLists.txt
- remove the need for manually hardcoding extra metadata for targets in
build_autogenerated.yaml. Either add logic that determines the
properties of targets automatically, or use metadata from bazel BUILD.
Our current implementations of `Seq` and `TrySeq` leverage some
complicated template machinery, which makes them hard to maintain. I've
been thinking about ways to simplify that for some time and had something
like this in mind - using a code generator that's at least a little more
understandable to generate most of the complexity into a file that can be
checked in.
Concurrently - I have a cool optimization in mind - but it requires that
we can move promises after polling, which is a contract change. I'm
going to work through the set of primitives we have in the coming weeks
and change that contract to enable the optimization.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Implement DNS using DNS Service for iOS.
Current limitations:
1. Using a custom name server is not supported.
2. Only supports `LookupHostname`; `LookupSRV` and `LookupTXT` are not
implemented.
3. Not tested in a single-stack (IPv4 or IPv6) environment.
4. ~Not tested with multiple IP records per stack~ manually tested with
wsj.com.
5. Not tested in a multiple-interface environment.
Resolves a set of failures seen rolling out promises - we need to read
all of the incoming payload before doing request matching.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Why: Cleanup for chttp2_transport ahead of promise conversion - lots of
logic has become interleaved throughout chttp2, so some effort to
isolate logic out is warranted ahead of that conversion.
What: Split configuration and policy tracking for each of ping rate
throttling and abuse detection into their own modules. Add tests for
them.
Incidentally: Split channel args into their own header so that we can
split the policy stuff into separate build targets.
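A hypothetical sketch of what a ping-abuse policy module of this shape might look like (thresholds, names, and the reset behavior are illustrative, not the actual extracted module):
```cpp
#include <cstdint>

// Trips when too many pings arrive faster than the configured minimum
// interval; the caller decides what to do (e.g. close the transport).
class PingAbusePolicySketch {
 public:
  PingAbusePolicySketch(int64_t min_ping_interval_ms, int max_strikes)
      : min_ping_interval_ms_(min_ping_interval_ms),
        max_strikes_(max_strikes) {}

  // Record an incoming ping; returns true if the abuse limit is exceeded.
  bool ReceivedPing(int64_t now_ms) {
    if (last_ping_ms_ >= 0 &&
        now_ms - last_ping_ms_ < min_ping_interval_ms_) {
      ++strikes_;    // ping arrived too soon after the previous one
    } else {
      strikes_ = 0;  // a well-spaced ping resets the strike count here
    }
    last_ping_ms_ = now_ms;
    return strikes_ > max_strikes_;
  }

 private:
  const int64_t min_ping_interval_ms_;
  const int max_strikes_;
  int strikes_ = 0;
  int64_t last_ping_ms_ = -1;
};
```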
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>