Changes -
* CsmObservability doesn't need `SetTargetSelector`. Removed it.
* Added missing plumbing of `ServiceMeshLabelsInjector` in
`CsmObservability` to actually do the metadata exchange.
This reverts commit 2db446aa9a.
I've added channel args to `CreateNewServerCallTracer` on the
`ServerCallTracerFactory`.
The motivation is for CSM Observability where the OTel plugin will be
configured to only do stats on servers which are xDS enabled, so I plan
to check this via channel args.
In the future, with the new scopes for metrics, I think I'll be able to
change this to only check once per server or server connection instead
of per call.
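For a rough illustration of how the plumbed-through args might be consumed, here is a hypothetical sketch; the factory name, the exact `CreateNewServerCallTracer` signature, the include path, and the channel-arg key are all assumptions rather than the code in this PR:
```
// Hypothetical sketch only; names, signature, and arg key are assumptions.
#include "src/core/lib/channel/call_tracer.h"  // assumed header location

class XdsOnlyServerCallTracerFactory : public grpc_core::ServerCallTracerFactory {
 public:
  grpc_core::ServerCallTracer* CreateNewServerCallTracer(
      grpc_core::Arena* arena, const grpc_core::ChannelArgs& args) override {
    // Gate stats on whether this server is xDS-enabled, read straight from
    // the channel args now that they are available per call.
    if (!args.GetBool("grpc.experimental.xds_enabled_server")  // placeholder key
             .value_or(false)) {
      return nullptr;  // no per-call tracer for non-xDS servers
    }
    // Construct and return the real (OTel) tracer on the call's arena here.
    return nullptr;
  }
};
```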
The "override" is not added on purpose to remain compatible with
Protobuf < 22.x, as already written in the comment on top of these two
functions.
CC @veblush as the author of this code.
Note: I am personally not super enthusiastic about this change. As an
alternative, I can propose selectively adding the `override` keyword
based on the value of the `PROTOBUF_VERSION` macro (comparing it to
`4022000`). Tell me if you prefer this version instead.
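A minimal sketch of that alternative, assuming `PROTOBUF_VERSION` is already visible via the protobuf headers these files include; the macro name below is hypothetical and `ReadCord` is only one plausible example of the two functions mentioned:
```
// Illustrative only; not what this PR does.
#if defined(PROTOBUF_VERSION) && PROTOBUF_VERSION >= 4022000
#define GRPC_PROTOBUF_CORD_OVERRIDE override  // hypothetical macro name
#else
#define GRPC_PROTOBUF_CORD_OVERRIDE
#endif

class ProtoBufferReader : public grpc::protobuf::io::ZeroCopyInputStream {
 public:
  // Virtual only in Protobuf >= 22.x, so `override` has to be conditional.
  bool ReadCord(absl::Cord* cord, int count) GRPC_PROTOBUF_CORD_OVERRIDE;
  // ...
};
```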
Towards https://github.com/grpc/grpc/issues/33032.
Reopened after a botched force-push in #33175 that then got "merged" and
cannot be reopened anymore.
More context in that PR.
---------
Co-authored-by: Isuru Fernando <isuruf@gmail.com>
Co-authored-by: David Chamberlin <david.chamberlin@ln.email.gs.com>
This PR is mainly a set of improvements that allow the C++ Alarm to be
migrated away from legacy iomgr. It cannot be landed without a significant
speedup, because third parties rely on a fast path for immediate timer
execution with deadlines <= now.
Previous EventEngine performance of bm_alarm, compared to baseline iomgr
timers: *0.014%*
This PR: *2.5%*
Regarding previous failures to land this change: The cloud libraries
team agreed to reduce the amount of stress in their alarm stress test
https://github.com/googleapis/google-cloud-cpp/pull/12378
Why: Cleanup for chttp2_transport ahead of the promise conversion - lots of
logic has become interleaved throughout chttp2, so some effort to
isolate that logic is warranted before the conversion.
What: Split configuration and policy tracking for each of ping rate
throttling and abuse detection into their own modules. Add tests for
them.
Incidentally: Split channel args into their own header so that we can
split the policy stuff into separate build targets.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
This PR implements a c-ares based DNS resolver for EventEngine, using the
original
[grpc_ares_wrapper.h](../blob/master/src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.h)
as a reference.
The PosixEventEngine DNSResolver is implemented on top of that. Tests
which use the client channel resolver API
([resolver.h](../blob/master/src/core/lib/resolver/resolver.h#L54)) are
ported, namely the
[resolver_component_test.cc](../blob/master/test/cpp/naming/resolver_component_test.cc)
and the
[cancel_ares_query_test.cc](../blob/master/test/cpp/naming/cancel_ares_query_test.cc).
The WindowsEventEngine DNSResolver will use the same EventEngine's
grpc_ares_wrapper and will be worked on next.
The
[resolve_address_test.cc](https://github.com/grpc/grpc/blob/master/test/core/iomgr/resolve_address_test.cc)
which uses the iomgr
[DNSResolver](../blob/master/src/core/lib/iomgr/resolve_address.h#L44)
API has been ported to EventEngine's dns_test.cc. That leaves only 2
tests which use iomgr's API, notably the
[dns_resolver_cooldown_test.cc](../blob/master/test/core/client_channel/resolvers/dns_resolver_cooldown_test.cc)
and the
[goaway_server_test.cc](../blob/master/test/core/end2end/goaway_server_test.cc)
which probably need to be restructured to use EventEngine DNSResolver
(for one thing they override the original grpc_ares_wrapper's free
functions). I will try to tackle these in the next step.
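For orientation, here is a rough sketch of what a lookup through the EventEngine DNSResolver surface looks like; the signatures are approximated from `event_engine.h` and may differ slightly at this point in the API's evolution:
```
#include <grpc/event_engine/event_engine.h>

using ::grpc_event_engine::experimental::EventEngine;

// Resolve a name asynchronously through an EventEngine-provided resolver.
void LookupExample(EventEngine* engine) {
  auto resolver = engine->GetDNSResolver(
      EventEngine::DNSResolver::ResolverOptions{/*dns_server=*/""});
  if (!resolver.ok()) return;
  (*resolver)->LookupHostname(
      [](absl::StatusOr<std::vector<EventEngine::ResolvedAddress>> addresses) {
        if (!addresses.ok()) return;
        // Hand the resolved addresses to the client channel / test harness.
      },
      /*name=*/"wikipedia.org", /*default_port=*/"https");
}
```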
Based on the discussion at:
595a75cc5d..e3b402a8fa (r1244325752)
Going forward, `[[nodiscard]]` is the portable way to spell this; it
requires yanking a bunch of usages from after the parameter list to before it.
We should further refine the GRPC_MUST_USE_RESULT macro to make it work
uniformly for any compilers where it doesn't today (most likely by making
it expand to nothing).
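For illustration, the mechanical change looks roughly like this (the function shown is hypothetical):
```
// Before: the GNU-style attribute placement let the macro trail the
// parameter list.
bool TryLock() GRPC_MUST_USE_RESULT;

// After: [[nodiscard]] must appear before the declaration, hence moving the
// annotation in front.
[[nodiscard]] bool TryLock();
```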
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Detected with gcc 13:
```
In file included from /data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/impl/proto_utils.h:31,
from ./include/generated/gacms.object.grpc.pb.h:18,
from ./include/generated/gacms.object.grpc.pb.cc:6:
/data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/support/proto_buffer_reader.h: In member function 'virtual bool grpc::ProtoBufferReader::ReadCord(absl::lts_20230125::Cord*, int)':
/data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/support/proto_buffer_reader.h:157:24: error: comparison of integer expressions of different signedness: 'uint64_t' {aka 'long unsigned int'} and 'int' [-Werror=sign-compare]
157 | if (slice_length <= count) {
| ~~~~~~~~~~~~~^~~~~~~~
/data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/support/proto_buffer_reader.h: In lambda function:
/data/mwrep/res/osp/Grpc/23-0-0-0/include/grpcpp/support/proto_buffer_reader.h:191:35: warning: unused parameter 'view' [-Wunused-parameter]
191 | [slice](absl::string_view view) { grpc_slice_unref(slice); });
| ~~~~~~~~~~~~~~~~~~^~~~
cc1plus: all warnings being treated as errors
```
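The fixes these diagnostics call for are small; here is a sketch of the idea (the actual patch to `proto_buffer_reader.h` may differ):
```
#include <cstdint>
#include "absl/strings/string_view.h"

// -Wsign-compare: make both operands of the comparison the same signedness.
bool SliceFits(uint64_t slice_length, int count) {
  return count >= 0 && slice_length <= static_cast<uint64_t>(count);
}

// -Wunused-parameter: leave the lambda's parameter unnamed when it is only
// there to satisfy the callback signature.
auto MakeReleaser() {
  return [](absl::string_view /*view*/) { /* grpc_slice_unref(slice); */ };
}
```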
This adds a new channel argument `GRPC_ARG_DSCP` which allows users to
create classified gRPC streams with a Differentiated Services Code Point
(DSCP) marking on the IP packets.
The channel argument is handled on both clients and servers, but
currently only on posix based systems.
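As a usage sketch from the client side, assuming the new constant is exported through the usual channel-arg headers (the DSCP value 46, Expedited Forwarding, is just an example):
```
#include <grpcpp/grpcpp.h>

// Create a channel whose IP packets carry a DSCP marking.
std::shared_ptr<grpc::Channel> MakeDscpChannel(const std::string& target) {
  grpc::ChannelArguments args;
  args.SetInt(GRPC_ARG_DSCP, 46);
  return grpc::CreateCustomChannel(
      target, grpc::InsecureChannelCredentials(), args);
}
```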
Fixes #17225
**Background**:
In addition to what is already described in #17225, when gRPC is used in
telco systems there is often a need to classify streams by importance.
There can be multiple hops between two endpoints (e.g. between two telecom
operators) and some streams are more important than others (e.g.
emergency-call related or similar). By marking the IP packets with DSCP,
aware routers can make sound prioritization decisions.
This PR proposes using DSCP as the configuration value since it is common
to both IPv4 and IPv6; an alternative would be to use a config name that
includes TOS and Traffic Class.
There might be more needed regarding documentation and end2end testing,
but there I need some advice.
**References**
https://datatracker.ietf.org/doc/html/rfc2474
https://www.iana.org/assignments/dscp-registry/dscp-registry.xhtml
@yashykt
This PR does the following: for the TLS server credentials, stops
calling `SSL_CTX_set_client_CA_list` by default in
`ssl_transport_security.cc`, and gives users a knob to re-enable calling
this API.
## What does the `SSL_CTX_set_client_CA_list` API do?
When this API is called, a gRPC TLS server sends the following data in
the ServerHello: for each certificate in the server's trust bundle, the
CA name in the certificate.
This API does not change the set of certificates trusted by the server
in any way. Rather, it is just providing a hint to the client about what
client certificate should be sent to the server.
## Why are we removing the use of `SSL_CTX_set_client_CA_list` by
default for the TLS server credentials?
Removing the use of this API by default has 2 benefits:
1. Calling this API makes gRPC TLS unusable for servers with a
sufficiently large trust bundle. Indeed, if the server trust bundle is
too large, then the server will always fail to build the ServerHello.
2. Calling this API introduces a large amount of overhead (thousands of
bytes) in each ServerHello, so removing this feature will improve
connection establishment latency for all users of the TLS server
credentials.
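As a sketch of how a server could opt back in from C++: the setter name below is an assumption about the knob this PR adds (the description only says a knob exists), and the rest uses the existing TLS credentials options surface:
```
#include <grpcpp/security/server_credentials.h>
#include <grpcpp/security/tls_credentials_options.h>

std::shared_ptr<grpc::ServerCredentials> MakeTlsServerCreds(
    std::shared_ptr<grpc::experimental::CertificateProviderInterface> provider) {
  grpc::experimental::TlsServerCredentialsOptions options(provider);
  options.set_cert_request_type(
      GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_AND_VERIFY);
  // Hypothetical setter: re-enables sending the CA names from the server's
  // trust bundle during the handshake (the pre-change behavior).
  options.set_send_client_ca_list(true);
  return grpc::experimental::TlsServerCredentials(options);
}
```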
This change simplifies `EventEngine::DNSResolver`'s API based on the
proposal:
[go/event-engine-dns-resolver-api-changes](http://go/event-engine-dns-resolver-api-changes).
Note that this API change + the implementation described in
[go/event-engine-dns-resolver-implementation](http://go/event-engine-dns-resolver-implementation)
has already been tested against our main test suites and are passing
them.
As the [issue](https://github.com/grpc/grpc/issues/10136) documents, the
behavior of AsyncNotifyWhenDone is documented as:
"The comment on `AsyncNotifyWhenDone` states "Has to be called before
the rpc starts" but it seems that if the request tag is returned with
ok=false (i.e. because the CQ is shutting down) then the async done tag
is never received. Instead, I expect the async done tag to be received
regardless of whether or not an incoming call request was successfully
received."
The TODO item is marked closed as stale, and it seems unlikely this will
be resolved without breaking existing users whose code is written under
the assumption that the tag is not seen if the call never starts, so it
may be time to document this idiosyncratic corner case and make it the
expected behavior.
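A sketch of what the documented behavior means for a completion-queue loop (the tag wiring and surrounding server setup are illustrative, not code from this PR):
```
#include <grpcpp/grpcpp.h>

// Drain events for one requested call. `request_tag` was passed to the
// Request* call and `done_tag` to ServerContext::AsyncNotifyWhenDone.
void HandleOneCall(grpc::ServerCompletionQueue* cq, void* request_tag,
                   void* done_tag) {
  void* tag = nullptr;
  bool ok = false;
  while (cq->Next(&tag, &ok)) {
    if (tag == request_tag && !ok) {
      // The call never started (e.g. the CQ is shutting down). Per the
      // behavior documented here, done_tag will NOT be delivered, so release
      // any state tied to it now instead of waiting for it.
      return;
    }
    if (tag == done_tag) {
      // Only delivered when the RPC actually started.
      return;
    }
  }
}
```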
This makes the JSON API visible as part of the C-core API, but in the
`experimental` namespace. It will be used as part of various
experimental APIs that we will be introducing in the near future, such
as the audit logging API.
One TXT lookup query can return multiple TXT records (see the following
example). `EventEngine::DNSResolver` should return all of them to let
the caller (e.g. `event_engine_client_channel_resolver`) decide which
one they would use.
```
$ dig TXT wikipedia.org
; <<>> DiG 9.18.12-1+build1-Debian <<>> TXT wikipedia.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49626
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;wikipedia.org. IN TXT
;; ANSWER SECTION:
wikipedia.org. 600 IN TXT "google-site-verification=AMHkgs-4ViEvIJf5znZle-BSE2EPNFqM1nDJGRyn2qk"
wikipedia.org. 600 IN TXT "yandex-verification: 35c08d23099dc863"
wikipedia.org. 600 IN TXT "v=spf1 include:wikimedia.org ~all"
```
Note that this change also deviates from iomgr's DNSResolver API, which
uses std::string as the result type.
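For illustration, the consumer side now receives every record and picks the one it cares about; the `grpc_config=` prefix is the usual service-config TXT convention, and the exact callback alias in `event_engine.h` may differ:
```
#include <string>
#include <vector>
#include "absl/status/statusor.h"
#include "absl/strings/match.h"

// Called with all TXT records from the answer section, e.g. the three
// wikipedia.org records above.
void OnTxtDone(absl::StatusOr<std::vector<std::string>> records) {
  if (!records.ok()) return;
  for (const std::string& record : *records) {
    if (absl::StartsWith(record, "grpc_config=")) {
      // Parse the service config from this record.
    }
  }
}
```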
See `event_engine.h` for the contract change. All other changes are
cleanup.
I confirmed that both the Posix and Windows implementations comply with
this already.
On Windows, the `WindowsEventEngineListener` will only call
`on_shutdown` after all `SinglePortSocketListener`s have been destroyed,
which ensures that no `on_accept` callback will be executed, even if
there is still trailing overlapped activity on the listening socket.
On Posix, the `PosixEngineListenerImpl` will only call `on_shutdown`
after all `AsyncConnectionAcceptor`s have been destroyed, which ensures
`EventHandle::OrphanHandle` has been called. The `OrphanHandle` contract
indicates that all existing notify closures must have already run. The
implementation appears to comply; if it does not, that's a bug.
3aae08d25e/src/core/lib/event_engine/posix_engine/event_poller.h (L48-L50)
Audit logging APIs for both built-in loggers and third-party logger
implementations.
The C++ API uses using-declarations that refer to the C-core APIs.
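A rough sketch of what a third-party logger might look like against this surface; the header path, class names, and method signatures below are assumptions about the new API, not confirmed signatures:
```
#include <grpcpp/security/audit_logging.h>  // assumed header location

// Hypothetical sketch only; names and signatures are assumptions.
class MyFileAuditLogger : public grpc::experimental::AuditLogger {
 public:
  absl::string_view name() const override { return "my_file_logger"; }
  void Log(const grpc::experimental::AuditContext& context) override {
    // Write the authorization decision somewhere durable.
  }
};
```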
---------
Co-authored-by: rockspore <rockspore@users.noreply.github.com>
Fix https://github.com/grpc/grpc/issues/32638
As Protobuf is going to support Cord to reduce memory copies when
[de]serializing Cord fields, gRPC is going to leverage it. This
implementation is based on the internal one, but it is slightly modified
to use only the public APIs of Cord.
The very non-trivial upgrade of third_party/protobuf to 22.x
This PR strives to be as small as possible and many changes that were
compatible with protobuf 21.x and didn't have to be merged atomically
with the upgrade were already merged.
Due to the complexity of the upgrade, this PR wasn't created
automatically by a tool, but manually. Subsequent upgrades of
third_party/protobuf with our OSS release script should work again once
this change is merged.
This is best reviewed commit by commit; I tried to group changes into
logical areas.
Notable changes:
- the upgrade of third_party/protobuf submodule, the bazel protobuf
dependency itself
- upgrade of UPB dependency to 22.x (in the past, we used to always
upgrade upb to "main", but upb now has release branch as well). UPB
needs to be upgraded atomically with protobuf since there's a de-facto
circular dependency (new protobuf depends on new upb, which depends on
new protobuf for codegen).
- some protobuf and upb bazel rules are now aliases, so
`extract_metadata_from_bazel_xml.py` and `gen_upb_api_from_bazel_xml.py`
had to be modified to be able to follow aliases and reach the actual
aliased targets.
- some protobuf public headers were renamed, so especially
`src/compiler` needed to be updated to use the new headers.
- protobuf and upb now both depend on utf8_range project, so since we
bundle upb with grpc in some languages, we now have to bundle utf8_range
as well (hence changes in build for python, PHP, objC, cmake etc).
- protoc now depends on absl and utf8_range (previously protobuf had
absl dependency, but not for the codegen part), so python's
make_grpcio_tools.py required partial rewrite to be able to handle those
dependencies in the grpcio_tools build.
- many updates and fixes required for C++ distribtests (currently they
all pass, but we'll probably need to follow up, make protobuf's and
grpc's handling of dependencies more aligned, and revisit the
distribtests)
- a bunch of other changes, mostly due to the overhaul of protobuf's and
upb's internal build layout.
TODOs:
- [DONE] make sure IWYU and clang_tidy_code pass
- create a list of followups (e.g. work to reenable the few tests I had
to disable and to remove workarounds I had to use)
- [DONE in cl/523706129] figure out problem(s) with internal import
---------
Co-authored-by: Craig Tiller <ctiller@google.com>
This PR aims to de-experimentalize the APIs for GCP Observability.
We would have ideally wanted public feedback before declaring the APIs
stable, but we need stable APIs for GA.
Changes made after API review with @markdroth, @veblush, @ctiller and the
entire Core/C++ team -
* The old experimental APIs `grpc::experimental::GcpObservabilityInit`
and `grpc::experimental::GcpObservabilityClose` are now deprecated and
will be deleted after the v1.55 release.
* The new API gets rid of the Close method and follows the RAII idiom
with a single `grpc::GcpObservability::Init()` call that returns a
`GcpObservability` object, the lifetime of which controls when
observability data is flushed (see the usage sketch below).
* The `GcpObservability` class could in the future add more methods. For
example, a debug method that shows the current configuration.
* Document that GcpObservability initialization and flushing (on
`GcpObservability` destruction) are blocking calls.
* Document that gRPC is still usable if GcpObservability initialization
fails. (Added a test to prove the same.)
* Since we don't have a good way to flush stats and tracing with
OpenCensus, the examples required users to sleep for 25 seconds. This
sleep is now part of `GcpObservability` destruction.
Additional Implementation details -
* `GcpObservability::Init` is now marked with `GRPC_MUST_USE_RESULT` to
make sure that the result is used. We ideally want users to store it,
but this is better than nothing.
* Added a note on GCP Observability lifetime guarantees.
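A minimal usage sketch of the new RAII surface; assuming `Init()` returns an `absl::StatusOr<GcpObservability>` and lives in `<grpcpp/ext/gcp_observability.h>` (both are assumptions here):
```
#include <grpcpp/ext/gcp_observability.h>
#include <grpcpp/grpcpp.h>

int main() {
  // Blocking call; do this before creating channels/servers whose data
  // should be collected.
  auto observability = grpc::GcpObservability::Init();
  if (!observability.ok()) {
    // gRPC itself remains usable even if observability failed to initialize.
  }
  // ... create channels/servers and issue RPCs as usual ...

  // When `observability` goes out of scope, remaining stats and traces are
  // flushed; the destructor is also blocking. No Close() call is needed.
  return 0;
}
```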
This reverts commit 7bd9267f32.
(hopefully last try)
Add new channel arg GRPC_ARG_ABSOLUTE_MAX_METADATA_SIZE as hard limit
for metadata. Change GRPC_ARG_MAX_METADATA_SIZE to be a soft limit.
Behavior is as follows:
Hard limit
(1) If the hard limit is explicitly set, it will be used.
(2) If the hard limit is not explicitly set, the maximum of the default
and soft limit * 1.25 (if the soft limit is set) will be used.
Soft limit
(1) If the soft limit is explicitly set, it will be used.
(2) If the soft limit is not explicitly set, the maximum of the default
and hard limit * 0.8 (if the hard limit is set) will be used.
Requests with metadata between the soft and hard limits will be rejected
randomly; requests above the hard limit will always be rejected.
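A small sketch of the resolution rules above; the default constants and helper names are illustrative, not the actual implementation:
```
#include <algorithm>
#include <cstdint>
#include "absl/types/optional.h"

constexpr uint32_t kDefaultSoftLimit = 8 * 1024;   // illustrative default
constexpr uint32_t kDefaultHardLimit = 16 * 1024;  // illustrative default

// GRPC_ARG_ABSOLUTE_MAX_METADATA_SIZE resolution.
uint32_t EffectiveHardLimit(absl::optional<uint32_t> hard,
                            absl::optional<uint32_t> soft) {
  if (hard.has_value()) return *hard;  // (1) explicit hard limit wins
  if (soft.has_value()) {              // (2) derive from the soft limit
    return std::max(kDefaultHardLimit, static_cast<uint32_t>(*soft * 1.25));
  }
  return kDefaultHardLimit;
}

// GRPC_ARG_MAX_METADATA_SIZE resolution.
uint32_t EffectiveSoftLimit(absl::optional<uint32_t> soft,
                            absl::optional<uint32_t> hard) {
  if (soft.has_value()) return *soft;  // (1) explicit soft limit wins
  if (hard.has_value()) {              // (2) derive from the hard limit
    return std::max(kDefaultSoftLimit, static_cast<uint32_t>(*hard * 0.8));
  }
  return kDefaultSoftLimit;
}
```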
This allows us to replace `absl::optional<TaskHandle>` with checks
against the invalid handle.
This PR also replaces the differently-named invalid handle instances
with a uniform way of accessing static invalid instances across all
handle types, which aids a bit in testing.
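Roughly, this enables the pattern below; the exact spelling of the shared invalid constant and the equality operator are assumptions:
```
#include <grpc/event_engine/event_engine.h>

using ::grpc_event_engine::experimental::EventEngine;

class TimerHolder {
 public:
  void MaybeCancel(EventEngine* engine) {
    // No absl::optional needed: compare against the shared invalid handle.
    if (handle_ != EventEngine::TaskHandle::kInvalid) {
      engine->Cancel(handle_);
      handle_ = EventEngine::TaskHandle::kInvalid;
    }
  }

 private:
  EventEngine::TaskHandle handle_ = EventEngine::TaskHandle::kInvalid;
};
```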
This PR adds the view `grpc.io/client/api_latency` for GCP Observability
which aims to collect the end-to-end time taken by a call.
Changes made to support this -
1) A global interceptor factory registration is created for stats
plugins.
2) OpenCensus plugin now provides a new interceptor that's responsible
for collecting the new latency.
3) Gcp Observability registers this plugin.
4) A new OpenCensus measurement and view is created for API latency.
Note that this is internal as of now, since it's not clear whether it
should be exposed as a public experimental API. Leaving that decision for
the future.