Why: Cleanup for chttp2_transport ahead of promise conversion - lots of
logic has become interleaved throughout chttp2, so some effort to
isolate logic out is warranted ahead of that conversion.
What: Split configuration and policy tracking for each of ping rate
throttling and abuse detection into their own modules. Add tests for
them.
Incidentally: Split channel args into their own header so that we can
split the policy stuff into separate build targets.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
This PR implements a c-ares based DNS resolver for EventEngine, with
reference to the original
[grpc_ares_wrapper.h](../blob/master/src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.h).
The PosixEventEngine DNSResolver is implemented on top of that. Tests
which use the client channel resolver API
([resolver.h](../blob/master/src/core/lib/resolver/resolver.h#L54)) are
ported, namely the
[resolver_component_test.cc](../blob/master/test/cpp/naming/resolver_component_test.cc)
and the
[cancel_ares_query_test.cc](../blob/master/test/cpp/naming/cancel_ares_query_test.cc).
The WindowsEventEngine DNSResolver will use the same EventEngine
grpc_ares_wrapper and will be worked on next.
The
[resolve_address_test.cc](https://github.com/grpc/grpc/blob/master/test/core/iomgr/resolve_address_test.cc)
which uses the iomgr
[DNSResolver](../blob/master/src/core/lib/iomgr/resolve_address.h#L44)
API has been ported to EventEngine's dns_test.cc. That leaves only 2
tests which use iomgr's API, notably the
[dns_resolver_cooldown_test.cc](../blob/master/test/core/client_channel/resolvers/dns_resolver_cooldown_test.cc)
and the
[goaway_server_test.cc](../blob/master/test/core/end2end/goaway_server_test.cc)
which probably need to be restructured to use EventEngine DNSResolver
(for one thing they override the original grpc_ares_wrapper's free
functions). I will try to tackle these in the next step.
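For orientation, here is a minimal usage sketch of the new resolver driven
through the EventEngine API; the option fields and callback signatures are
illustrative and may differ from the actual headers.

```
#include <memory>
#include <vector>

#include "absl/status/statusor.h"

#include <grpc/event_engine/event_engine.h>

namespace ee = grpc_event_engine::experimental;

// Illustrative only: exact factory/option/callback signatures may differ.
void ResolveExample(ee::EventEngine* engine) {
  auto resolver =
      engine->GetDNSResolver(ee::EventEngine::DNSResolver::ResolverOptions());
  if (!resolver.ok()) return;
  (*resolver)->LookupHostname(
      [](absl::StatusOr<std::vector<ee::EventEngine::ResolvedAddress>>
             addresses) {
        // Use the resolved addresses (or report the error) here.
      },
      /*name=*/"grpc.io", /*default_port=*/"443");
}
```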
Based on the discussion at:
595a75cc5d..e3b402a8fa (r1244325752)
Going forward, `[[nodiscard]]` is the portable way to spell this; it
requires moving a bunch of usages from after the parameter list to before it.
We should further refine the GRPC_MUST_USE_RESULT macro to make it work
uniformly for any compilers that it doesn't today (most likely by making
it expand to nothing).
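For illustration (the function name is hypothetical), the mechanical change
looks like this:

```
#include <string>

// Before: the attribute macro trailed the parameter list:
//   bool ParseConfig(const std::string& input) GRPC_MUST_USE_RESULT;
// After: [[nodiscard]] must appear before the declaration instead.
[[nodiscard]] bool ParseConfig(const std::string& input);
```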
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Detected with gcc 13:
```
In file included from /data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/impl/proto_utils.h:31,
from ./include/generated/gacms.object.grpc.pb.h:18,
from ./include/generated/gacms.object.grpc.pb.cc:6:
/data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/support/proto_buffer_reader.h: In member function 'virtual bool grpc::ProtoBufferReader::ReadCord(absl::lts_20230125::Cord*, int)':
/data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/support/proto_buffer_reader.h:157:24: error: comparison of integer expressions of different signedness: 'uint64_t' {aka 'long unsigned int'} and 'int' [-Werror=sign-compare]
157 | if (slice_length <= count) {
| ~~~~~~~~~~~~~^~~~~~~~
/data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/support/proto_buffer_reader.h: In lambda function:
/data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/support/proto_buffer_reader.h:191:35: warning: unused parameter 'view' [-Wunused-parameter]
191 | [slice](absl::string_view view) { grpc_slice_unref(slice); });
| ~~~~~~~~~~~~~~~~~~^~~~
cc1plus: all warnings being treated as errors
```
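One possible shape of the fix, sketched here for illustration (the actual
change to `proto_buffer_reader.h` may differ): compare like-signed values and
leave the intentionally unused lambda parameter unnamed.

```
// Sketch only: in this context `count` is a non-negative int, so widening it
// makes the comparison like-signed and silences -Wsign-compare.
if (slice_length <= static_cast<uint64_t>(count)) {
  // ... consume the whole slice ...
}

// Leaving the lambda parameter unnamed silences -Wunused-parameter.
[slice](absl::string_view /*view*/) { grpc_slice_unref(slice); };
```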
This adds a new channel argument `GRPC_ARG_DSCP` which allows users to
create classified gRPC streams with a
Differentiated Services Code Point (DSCP) marking on the IP frames.
The channel argument is handled on both clients and servers, but
currently only on posix based systems.
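For illustration, client-side usage might look like the following, assuming an
integer-valued channel argument; the DSCP value 10 (AF11) is arbitrary.

```
#include <memory>
#include <string>

#include <grpcpp/grpcpp.h>

// Sketch only: request a DSCP marking for a channel's connections. The
// argument currently only takes effect on posix based systems.
std::shared_ptr<grpc::Channel> MakeClassifiedChannel(const std::string& target) {
  grpc::ChannelArguments args;
  args.SetInt(GRPC_ARG_DSCP, 10);  // DSCP value in the range 0-63
  return grpc::CreateCustomChannel(target, grpc::InsecureChannelCredentials(),
                                   args);
}
```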
Fixes #17225
**Background**:
In addition to what is already described in #17225, when gRPC is used in
telco systems there is often a need to classify streams by importance.
There can be multiple hops between two endpoints (e.g. between 2 telecom
operators) and some streams are more important than others (e.g.
emergency call related or similar). By marking the IP packets with DSCP,
aware routers can make a sound prioritization decision.
This PR proposes to use DSCP as the configuration value since it's common
to both IPv4 and IPv6; an alternative would be to use a config name that
includes TOS and Traffic Class.
More might be needed regarding documentation and end2end testing, but I
need some advice there.
**References**
https://datatracker.ietf.org/doc/html/rfc2474
https://www.iana.org/assignments/dscp-registry/dscp-registry.xhtml
This PR does the following: for the TLS server credentials, stops
calling `SSL_CTX_set_client_CA_list` by default in
`ssl_transport_security.cc`, and gives users a knob to re-enable calling
this API.
## What does the `SSL_CTX_set_client_CA_list` API do?
When this API is called, a gRPC TLS server sends the following data in
the ServerHello: for each certificate in the server's trust bundle, the
CA name in the certificate.
This API does not change the set of certificates trusted by the server
in any way. Rather, it is just providing a hint to the client about what
client certificate should be sent to the server.
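Conceptually, the (now optional) behavior corresponds to the server doing
something like the following at the OpenSSL layer; the bundle path is
illustrative.

```
#include <openssl/ssl.h>

// Sketch only: advertise the CA names found in the server's trust bundle as
// a hint for client certificate selection. This does not change which
// certificates the server actually trusts.
void AdvertiseClientCAs(SSL_CTX* ssl_ctx) {
  STACK_OF(X509_NAME)* ca_names =
      SSL_load_client_CA_file("/path/to/ca-bundle.pem");
  if (ca_names != nullptr) SSL_CTX_set_client_CA_list(ssl_ctx, ca_names);
}
```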
## Why are we removing the use of `SSL_CTX_set_client_CA_list` by
default for the TLS server credentials?
Removing the use of this API by default has 2 benefits:
1. Calling this API makes gRPC TLS unusable for servers with a
sufficiently large trust bundle. Indeed, if the server trust bundle is
too large, then the server will always fail to build the ServerHello.
2. Calling this API is introducing a huge amount of overhead (1000s of
bytes) to each ServerHello, so removing this feature will improve
connection establishment latency for all users of the TLS server
credentials.
This change simplifies `EventEngine::DNSResolver`'s API based on the
proposal:
[go/event-engine-dns-resolver-api-changes](http://go/event-engine-dns-resolver-api-changes).
Note that this API change + the implementation described in
[go/event-engine-dns-resolver-implementation](http://go/event-engine-dns-resolver-implementation)
have already been tested against our main test suites and are passing.
As the [issue](https://github.com/grpc/grpc/issues/10136) documents, the
behavior of AsyncNotifyWhenDone is documented as:
"The comment on `AsyncNotifyWhenDone` states "Has to be called before
the rpc starts" but it seems that if the request tag is returned with
ok=false (i.e. because the CQ is shutting down) then the async done tag
is never received. Instead, I expect the async done tag to be received
regardless of whether or not an incoming call request was successfully
received."
The TODO item is marked closed as stale, and it seems unlikely this will
be resolved without breaking
existing users whose code is written under the assumption that the tag
is not seen if the call never starts, so it may be time to document
this idiosyncratic corner case and make it the expected behavior.
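A sketch of the now-documented expectation, using the generic async API so no
generated stubs are needed; the tag values and names are illustrative.

```
#include <grpcpp/generic/async_generic_service.h>
#include <grpcpp/grpcpp.h>

void HandleOneCall(grpc::AsyncGenericService* service,
                   grpc::ServerCompletionQueue* cq) {
  grpc::GenericServerContext ctx;
  grpc::GenericServerAsyncReaderWriter stream(&ctx);
  void* done_tag = reinterpret_cast<void*>(1);
  void* request_tag = reinterpret_cast<void*>(2);
  ctx.AsyncNotifyWhenDone(done_tag);  // must be called before the RPC starts
  service->RequestCall(&ctx, &stream, cq, cq, request_tag);
  void* got_tag;
  bool ok;
  if (cq->Next(&got_tag, &ok) && got_tag == request_tag && !ok) {
    // The request failed (e.g. the CQ is shutting down): per the documented
    // behavior, done_tag will never be delivered, so do not wait for it.
  }
}
```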
This makes the JSON API visible as part of the C-core API, but in the
`experimental` namespace. It will be used as part of various
experimental APIs that we will be introducing in the near future, such
as the audit logging API.
One TXT lookup query can return multiple TXT records (see the following
example). `EventEngine::DNSResolver` should return all of them to let
the caller (e.g. `event_engine_client_channel_resolver`) decide which
one to use.
```
$ dig TXT wikipedia.org
; <<>> DiG 9.18.12-1+build1-Debian <<>> TXT wikipedia.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49626
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;wikipedia.org. IN TXT
;; ANSWER SECTION:
wikipedia.org. 600 IN TXT "google-site-verification=AMHkgs-4ViEvIJf5znZle-BSE2EPNFqM1nDJGRyn2qk"
wikipedia.org. 600 IN TXT "yandex-verification: 35c08d23099dc863"
wikipedia.org. 600 IN TXT "v=spf1 include:wikimedia.org ~all"
```
Note that this change also deviates from iomgr's DNSResolver API, which
uses std::string as the result type.
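For reference, a minimal sketch of consuming the new result type; the callback
alias and exact signatures are illustrative and may differ between versions.

```
#include <string>
#include <vector>

#include "absl/status/statusor.h"

#include <grpc/event_engine/event_engine.h>

namespace ee = grpc_event_engine::experimental;

// Illustrative only: all TXT records are handed to the caller, which decides
// which one to use (e.g. the record starting with "grpc_config=").
void LookupServiceConfig(ee::EventEngine::DNSResolver* resolver) {
  resolver->LookupTXT(
      [](absl::StatusOr<std::vector<std::string>> records) {
        if (!records.ok()) return;
        for (const std::string& record : *records) {
          // Inspect each TXT record here.
        }
      },
      /*name=*/"wikipedia.org");
}
```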
See `event_engine.h` for the contract change. All other changes are
cleanup.
I confirmed that both the Posix and Windows implementations comply with
this already.
On Windows, the `WindowsEventEngineListener` will only call
`on_shutdown` after all `SinglePortSocketListener`s have been destroyed,
which ensures that no `on_accept` callback will be executed, even if
there is still trailing overlapped activity on the listening socket.
On Posix, the `PosixEngineListenerImpl` will only call `on_shutdown`
after all `AsyncConnectionAcceptor`s have been destroyed, which ensures
`EventHandle::OrphanHandle` has been called. The `OrphanHandle` contract
indicates that all existing notify closures must have already run. The
implementation appears to comply; if it does not, that's a bug.
3aae08d25e/src/core/lib/event_engine/posix_engine/event_poller.h (L48-L50)
Audit logging APIs for both built-in loggers and third-party logger
implementations.
C++ uses using-declarations referring to C-core APIs.
---------
Co-authored-by: rockspore <rockspore@users.noreply.github.com>
Fix https://github.com/grpc/grpc/issues/32638
As Protobuf is going to support Cord to reduce memory copies when
[de]serializing Cord fields, gRPC is going to leverage it. This
implementation is based on the internal one, but it's slightly modified
to use only the public APIs of Cord.
The very non-trivial upgrade of third_party/protobuf to 22.x
This PR strives to be as small as possible and many changes that were
compatible with protobuf 21.x and didn't have to be merged atomically
with the upgrade were already merged.
Due to the complexity of the upgrade, this PR wasn't created
automatically by a tool, but manually. Subsequent upgrades of
third_party/protobuf with our OSS release script should work again once
this change is merged.
This is best reviewed commit-by-commit; I tried to group changes into
logical areas.
Notable changes:
- the upgrade of third_party/protobuf submodule, the bazel protobuf
dependency itself
- upgrade of UPB dependency to 22.x (in the past, we used to always
upgrade upb to "main", but upb now has a release branch as well). UPB
needs to be upgraded atomically with protobuf since there's a de-facto
circular dependency (new protobuf depends on new upb, which depends on
new protobuf for codegen).
- some protobuf and upb bazel rules are now aliases, so
`extract_metadata_from_bazel_xml.py` and `gen_upb_api_from_bazel_xml.py`
had to be modified to be able to follow aliases and reach the actual
aliased targets.
- some protobuf public headers were renamed, so especially
`src/compiler` needed to be updated to use the new headers.
- protobuf and upb now both depend on utf8_range project, so since we
bundle upb with grpc in some languages, we now have to bundle utf8_range
as well (hence changes in build for python, PHP, objC, cmake etc).
- protoc now depends on absl and utf8_range (previously protobuf had
absl dependency, but not for the codegen part), so python's
make_grpcio_tools.py required partial rewrite to be able to handle those
dependencies in the grpcio_tools build.
- many updates and fixes required for C++ distribtests (currently they
all pass, but we'll probably need to follow up, make protobuf's and
grpc's handling of dependencies more aligned and revisit the
distribtests)
- bunch of other changes mostly due to overhaul of protobuf's and upb's
internal build layout.
TODOs:
- [DONE] make sure IWYU and clang_tidy_code pass
- create a list of followups (e.g. work to reenable the few tests I had
to disable and to remove workarounds I had to use)
- [DONE in cl/523706129] figure out problem(s) with internal import
---------
Co-authored-by: Craig Tiller <ctiller@google.com>
This PR aims to de-experimentalize the APIs for GCP Observability.
We would have ideally wanted public feedback before declaring the APIs
stable, but we need stable APIs for GA.
Changes made after API review with @markdroth, @veblush, @ctiller and the
entire Core/C++ team -
* The old experimental APIs `grpc::experimental::GcpObservabilityInit`
and `grpc::experimental::GcpObservabilityClose` are now deprecated and
will be deleted after the v1.55 release.
* The new API gets rid of the Close method and follows the RAII idiom
with a single `grpc::GcpObservability::Init()` call that returns a
`GcpObservability` object, the lifetime of which controls when
observability data is flushed.
* The `GcpObservability` class could in the future add more methods. For
example, a debug method that shows the current configuration.
* Document that GcpObservability initialization and flushing (on
`GcpObservability` destruction) are blocking calls.
* Document that gRPC is still usable if GcpObservability initialization
fails. (Added a test to verify this.)
* Since we don't have a good way to flush stats and tracing with
OpenCensus, the examples required users to sleep for 25 seconds. This
sleep is now part of `GcpObservability` destruction.
Additional Implementation details -
* `GcpObservability::Init` is now marked with `GRPC_MUST_USE_RESULT` to
make sure that the results are used. We ideally want users to store it,
but this is better than nothing.
* Added a note on GCP Observability lifetime guarantees.
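A minimal usage sketch of the new RAII API, assuming the header path shown;
error handling is elided.

```
#include <grpcpp/ext/gcp_observability.h>

int main() {
  // Init() blocks while reading the observability config; the returned
  // object flushes observability data (also blocking) when destroyed.
  auto observability = grpc::GcpObservability::Init();
  if (!observability.ok()) {
    // gRPC remains usable even if observability failed to initialize.
  }
  // ... create channels/servers and issue RPCs as usual ...
  return 0;
}  // `observability` goes out of scope here and flushes pending data.
```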
This reverts commit 7bd9267f32.
(hopefully last try)
Add new channel arg GRPC_ARG_ABSOLUTE_MAX_METADATA_SIZE as hard limit
for metadata. Change GRPC_ARG_MAX_METADATA_SIZE to be a soft limit.
Behavior is as follows:
Hard limit
(1) if hard limit is explicitly set, this will be used.
(2) if hard limit is not explicitly set, maximum of default and soft
limit * 1.25 (if soft limit is set) will be used.
Soft limit
(1) if soft limit is explicitly set, this will be used.
(2) if soft limit is not explicitly set, maximum of default and hard
limit * 0.8 (if hard limit is set) will be used.
Requests between the soft and hard limits will be rejected randomly;
requests above the hard limit will always be rejected.
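A hypothetical sketch of the limit derivation described above; the names and
structure are illustrative, not the actual chttp2 implementation.

```
#include <algorithm>

#include "absl/types/optional.h"

struct MetadataLimits {
  int soft;
  int hard;
};

MetadataLimits ComputeLimits(absl::optional<int> soft_arg,
                             absl::optional<int> hard_arg, int default_limit) {
  MetadataLimits limits;
  // Hard limit: explicit value wins; otherwise max(default, soft * 1.25).
  if (hard_arg.has_value()) {
    limits.hard = *hard_arg;
  } else if (soft_arg.has_value()) {
    limits.hard = std::max(default_limit, static_cast<int>(*soft_arg * 1.25));
  } else {
    limits.hard = default_limit;
  }
  // Soft limit: explicit value wins; otherwise max(default, hard * 0.8).
  if (soft_arg.has_value()) {
    limits.soft = *soft_arg;
  } else if (hard_arg.has_value()) {
    limits.soft = std::max(default_limit, static_cast<int>(*hard_arg * 0.8));
  } else {
    limits.soft = default_limit;
  }
  return limits;
}
```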
This allows us to replace `absl::optional<TaskHandle>` with checks
against the invalid handle.
This PR also replaces the differently-named invalid handle instances
with a uniform way of accessing static invalid instances across all
handle types, which aids a bit in testing.
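A sketch of the pattern this enables, assuming the handle type exposes a
static invalid instance and equality comparison as described above.

```
#include <grpc/event_engine/event_engine.h>

namespace ee = grpc_event_engine::experimental;

// Instead of absl::optional<TaskHandle>, compare against the invalid handle.
void MaybeCancel(ee::EventEngine* engine, ee::EventEngine::TaskHandle handle) {
  if (handle != ee::EventEngine::TaskHandle::kInvalid) {
    engine->Cancel(handle);
  }
}
```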
This PR adds the view `grpc.io/client/api_latency` for GCP Observability
which aims to collect the end-to-end time taken by a call.
Changes made to support this -
1) A global interceptor factory registration is created for stats
plugins.
2) OpenCensus plugin now provides a new interceptor that's responsible
for collecting the new latency.
3) Gcp Observability registers this plugin.
4) A new OpenCensus measurement and view is created for api latency.
Note that this is internal as of now, since it's not clear if it should
be exposed as public experimental API. Leaving that decision for the
future.
It is reported in https://github.com/grpc/grpc/issues/32356 that there
is a race on the vptr for `UnimplementedAsyncRequest` which would cause
crashes for a multi-threaded server if clients send unimplemented RPC
requests to the server.
The cause is that the server requests a call for
`UnimplementedAsyncRequest` in its base class `GenericAsyncRequest` when
the `vptr` still points to the base class's `vtable`. If a call comes
in and another server thread picks up the tag before the `vptr` points
back to the derived class's `vtable`, the wrong virtual function is
called, and this is also a data race. The fix issues the call request
inside the derived class's constructor.
First step in the modernization of our RBE stack (see
go/rbe-tech-debt-notes).
- Get rid of the deprecated rbe_autoconfig and start using
[rbe_configs_gen](https://github.com/bazelbuild/bazel-toolchains#rbe_configs_gen---cli-tool-to-generate-configs)
+ check in the generated toolchain configs.
- Switch from marketplace.gcr.io/google/rbe-ubuntu16-04 to
marketplace.gcr.io/google/rbe-ubuntu18-04 (this image is still not owned
by us, but at least it's newer and demonstrates how a switch to a newer
docker image is done).
- Provide a script for generating the linux RBE toolchain configs.
- Clean up the RBE configuration in the bazelrc files used for remote builds.
PR #32215 added the verified root cert subject to the lower level
`tsi_peer`. This PR is a companion to that and completes the feature by
bubbling the information up to the `TsiCustomVerificationCheckRequest`
which is part of the user facing API for implementing custom
verification callbacks.
This fixes the problems identified while building with clang-cl on
Windows, with build arguments `/std:c++14 /W4`.
Passes internal checks: cl/511562057
----
We can't yet enable a clang-cl build as part of our continuous
integration tests due to a few issues:
protobuf fails an `unused-parameter` warning check in v4.21 (the current
pinned version) on Windows. The upgrade to v4.22 is evidently painful
and in progress. Without maintaining a patch against protobuf, or
disabling warnings-as-errors somehow for the protobuf code alone, we'll
need to upgrade our dependency before we can automate the clang-cl build
for Windows.
Next, our Windows CI environment does not have clang installed. There
has been some work over the past year to create custom kokoro images,
but that work has apparently stalled after trading hands a few times.
Using our current images, installing clang every time we run the job may
be our best bet (likely from precompiled binaries that we host
ourselves), but it will eat up more CI resources.
Finally, some of the default build configurations are incorrect for
clang-cl. For example `-Wall` in clang-cl translates roughly to
`-Weverything` in clang linux, whereas `-W4` in clang-cl translates more
closely to `-Wall -Wextra`. This configuration in the gRPC bazel build
is not currently platform-specific, so it will need to be updated.
Similarly, `-std=c++14` is an unknown argument on Windows (should be
`/std:c++14`), and should not be in the bazelrc. This will likely need
the same platform-specific support.
This reverts commit 0fc0384b5a.
Major changes: this code calls `GetDefaultEventEngine` once on Alarm
init instead of 7 times throughout.
I will run benchmarks to ensure b/237283941 is not reproduced.
---------
Co-authored-by: drfloob <drfloob@users.noreply.github.com>
This PR adds batching support for GCP Observability logging. Instead of
naively creating a new RPC to Cloud Logging for each logging event, we
now batch the log events and flush when one of the following conditions
is met -
* Batch size of 1000
* Batch memory consumption of 1MB
* A timeout period of 1sec after which we flush the accumulated batch
irrespective of the size.
There can also be cases where for some reason the RPCs fail or the batch
just accumulates to a very large size (100,000 entries or 10MB). In such
cases, we just log the events with gpr_log instead of continuing to
accumulate them.
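A hypothetical illustration of the flush policy; the names and types are
illustrative, not the actual implementation.

```
#include <chrono>
#include <cstddef>

constexpr size_t kMaxBatchEntries = 1000;
constexpr size_t kMaxBatchBytes = 1024 * 1024;  // ~1MB
constexpr std::chrono::seconds kBatchTimeout{1};

// Flush when any of the three conditions described above is met.
bool ShouldFlush(size_t entries, size_t bytes,
                 std::chrono::steady_clock::duration age_of_oldest_entry) {
  return entries >= kMaxBatchEntries || bytes >= kMaxBatchBytes ||
         age_of_oldest_entry >= kBatchTimeout;
}
```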
Additionally, `GcpObservabilityClose()` has been added to gracefully
shut down logging, blocking until all currently logged events are
flushed. (We might be able to gracefully shut off stats and tracing in
the future too.)
* Revert "Revert "Revert "Revert "server: introduce ServerMetricRecorder API and move per-call reporting from a C++ interceptor to a C-core filter (#32106)" (#32272)" (#32279)" (#32293)"
This reverts commit 1f960697c5.
* Do not create CallMetricRecorder if call is null.
* Revert "Revert "server: introduce ServerMetricRecorder API and move per-call reporting from a C++ interceptor to a C-core filter (#32106)" (#32272)"
This reverts commit deb1e25543.
* Fix by caching call metric recording stuff in async request
PR #32106 caused msan errors in some tests by dereferencing the server
object while async calls are still active after the server is destroyed.
Instead, cache the ServerMetricRecorder pointer.
* copyright headers fixed
* clang fixes.