Changes -
* Remove `csm.remote_workload_pod_name` and
`csm.remote_workload_container_name`.
* Add `csm.remote_workload_name`, whose value is read from the
`CSM_WORKLOAD_NAME` env var and sent through MetadataExchange. (Note
that this is not added to the local labels.)
* Add a local `csm.canonical_service` label (@markdroth, please verify
that this is the key we want), read from the
`CSM_CANONICAL_SERVICE_NAME` env var; we continue to send it over via
MetadataExchange as well.
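A minimal sketch of how the two values could be sourced, assuming plain
`getenv` access (the helper below is illustrative, not gRPC's internal
env-var API):

```cpp
#include <cstdlib>
#include <string>

// Illustrative helper; gRPC's real code uses its own env-var utilities.
std::string EnvOrUnknown(const char* name) {
  const char* value = std::getenv(name);
  return value == nullptr ? "unknown" : std::string(value);
}

void PopulateCsmLabels() {
  // Sent to the peer via MetadataExchange only (not a local label).
  std::string workload_name = EnvOrUnknown("CSM_WORKLOAD_NAME");
  // Recorded as a local label and also sent via MetadataExchange.
  std::string canonical_service = EnvOrUnknown("CSM_CANONICAL_SERVICE_NAME");
  // ... attach to local labels / MetadataExchange here ...
}
```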
Revert the reversion of the `SSL_CTX_new` change (#34355 reverted
#34180) with a fix.
There was an issue with using `strcpy` on a `new[]`-allocated string in
the constructor of `ssl_credentials`. An ASAN test caught this in some
CI down the line: `ERROR: AddressSanitizer: alloc-dealloc-mismatch
(operator new [] vs free)`.
That `strcpy` call was changed to `gpr_strdup`, which duplicates the
string with an allocation that `gpr_free` can release, and should
resolve the ASAN failure.
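A minimal sketch of the mismatch and the fix (the function and variable
names here are illustrative, not the actual `ssl_credentials` code):

```cpp
#include <string.h>

#include <grpc/support/alloc.h>
#include <grpc/support/string_util.h>

void BeforeAndAfter(const char* pem_root_certs) {
  // Before: the buffer is allocated with operator new[], but ownership is
  // later released with gpr_free (i.e. free()), so ASAN reports
  // alloc-dealloc-mismatch (operator new [] vs free).
  char* bad_copy = new char[strlen(pem_root_certs) + 1];
  strcpy(bad_copy, pem_root_certs);
  delete[] bad_copy;  // the buggy code called gpr_free here instead

  // After: gpr_strdup allocates via gpr_malloc, so gpr_free is the
  // matching deallocator.
  char* good_copy = gpr_strdup(pem_root_certs);
  gpr_free(good_copy);
}
```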
This pull request adds another hook service on the maintenance server.
This will enable clients to gradually migrate from the standalone hook
server.
Changes:
1. The hook service can now be used separately.
2. Copied the latest protos and updated the hook service to the new API.
3. Added the hook service to the maintenance server.
Working towards testing against CSM Observability. Added the ability to
register a Prometheus exporter with our OpenTelemetry plugin. This will
allow our metrics to be available at the standard Prometheus port
`:9464`.
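A sketch of what the wiring could look like, assuming opentelemetry-cpp's
Prometheus exporter and gRPC's public `OpenTelemetryPluginBuilder` (exact
builder names and signatures may differ by version):

```cpp
#include <memory>
#include <utility>

#include <grpcpp/ext/otel_plugin.h>

#include "opentelemetry/exporters/prometheus/exporter_factory.h"
#include "opentelemetry/exporters/prometheus/exporter_options.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"

void RegisterOpenTelemetryWithPrometheus() {
  // Serve metrics on the standard Prometheus port.
  opentelemetry::exporter::metrics::PrometheusExporterOptions opts;
  opts.url = "0.0.0.0:9464";
  auto reader =
      opentelemetry::exporter::metrics::PrometheusExporterFactory::Create(
          opts);
  auto provider =
      std::make_shared<opentelemetry::sdk::metrics::MeterProvider>();
  provider->AddMetricReader(std::move(reader));
  // Hand the meter provider to gRPC's OpenTelemetry plugin.
  // (BuildAndRegisterGlobal returns a status in newer versions; error
  // handling is elided in this sketch.)
  grpc::OpenTelemetryPluginBuilder()
      .SetMeterProvider(std::move(provider))
      .BuildAndRegisterGlobal();
}
```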
This test assumed synchronous WorkSerializer execution (or at least
faster async execution than we always get)... make a trivial change to
keep the test semantics but allow the implementation to be more async.
Move the SSL_CTX to the level of the credentials rather than the
subchannel.
The SSL_CTX should only get created once per credential rather than once
per subchannel.
We should observe no behavior change with this PR, only efficiency
gains.
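Conceptually, the change amounts to the following caching pattern (a
minimal OpenSSL sketch with illustrative names, not the actual gRPC
credentials code):

```cpp
#include <openssl/ssl.h>

// Illustrative only: one SSL_CTX owned by the credential object, shared
// by every subchannel connection created from it.
class SslCredentialsSketch {
 public:
  // The expensive SSL_CTX is built once per credential...
  SslCredentialsSketch() : ctx_(SSL_CTX_new(TLS_method())) {}
  ~SslCredentialsSketch() { SSL_CTX_free(ctx_); }

  // ...and per-subchannel setup only creates the cheap per-connection
  // SSL object from the shared context.
  SSL* NewConnection() { return SSL_new(ctx_); }

 private:
  SSL_CTX* ctx_;
};
```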
The most recent attempt was #34320, reverted in #34335.
The first commit here is a pure revert. The second commit fixes the
outlier_detection unit test to pass both with and without the
experiment.
@ctiller PTAL for the core configuration changes. I converted the type
from `std::function` to `absl::AnyInvocable`. Do you think the functor
in `RegisteredBuilder` should be callable just once or multiple times?
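For reference, a small sketch of the distinction the question hinges on:
`absl::AnyInvocable` is move-only (so it can wrap move-only callables
that `std::function` cannot), and its `&&`-qualified form expresses
"callable exactly once":

```cpp
#include <memory>
#include <utility>

#include "absl/functional/any_invocable.h"

void Demo() {
  auto owned = std::make_unique<int>(42);

  // Holds a move-only lambda (std::function could not); may be invoked
  // multiple times.
  absl::AnyInvocable<int()> repeatable = [p = std::move(owned)] {
    return *p;
  };
  repeatable();
  repeatable();

  // The &&-qualified signature documents call-once semantics: invocation
  // consumes the wrapper.
  absl::AnyInvocable<int() &&> once = [] { return 1; };
  int v = std::move(once)();  // `once` must not be invoked again
  (void)v;
}
```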
Changes -
1) Change local mesh labels to not be reported on 'started' metrics at
all (even those that we know about) to be consistent. (xDS labels, at
least on the server side, would not be available on the 'started'
metric.)
2) Add mesh_id as a local label that is populated by reading the xDS
bootstrap. As part of this, also added minimal xDS bootstrap parsing
logic, sketched below.
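A hedged sketch of the extraction: the `mesh:<mesh_id>` segment in the
bootstrap node id is an assumption about the CSM naming scheme, and the
helper below is illustrative rather than gRPC's internal parser:

```cpp
#include <string>

#include "absl/strings/str_split.h"
#include "absl/strings/string_view.h"
#include "absl/strings/strip.h"

// Assumes a node id of the form
// "projects/<project>/networks/mesh:<mesh_id>/nodes/<node_id>".
std::string MeshIdFromNodeId(absl::string_view node_id) {
  for (absl::string_view part : absl::StrSplit(node_id, '/')) {
    if (absl::ConsumePrefix(&part, "mesh:")) return std::string(part);
  }
  return "unknown";  // fall back when the id doesn't match the scheme
}
```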
This should address one of the failures we're seeing in #34224.
The test failure is caused by the changes in timing triggering a race
condition. In the code at head, we delay sending out the subscription
for the first CDS watch until we've already seen the other two CDS
watches, because the previous send_message op has not yet completed, and
by the time it does, we've seen all 3 watches, so we can send a
subscription for all 3 at the same time. With the WorkSerializer change,
the send_message op is complete by the time we see the first CDS watch,
so we subscribe to only that resource, and then later add the other two.
The result is that we'll NACK twice with two different messages, the
first one including only the error about the first resource, and the
second one including all three.
I suspect this same race condition would have been triggered eventually
by the EventEngine migration anyway; the current test basically depends
on the single-thread timing of the iomgr approach. So I'm addressing it
by replacing the e2e test with a unit test that covers the same cases
without the timing issue.
Rolls forward part of the dualstack changes, mostly from #33427 and a
little bit from #32692, both of which were reverted in #33718.
Specifically:
- For petiole policies, unconditionally start health watch on
subchannels, even if client side health checking is not enabled; in this
case, the health watch will report the subchannel's raw connectivity
state.
- Fix edge cases in health check reporting that occur when a watcher is
started before the initial state is reported.
- When client-side health checking fails, add the subchannel's address
to the RPC failure status message.
- Outlier detection now works only via the health checking watch, not
via the raw connectivity state watch.
- Remove now-unnecessary hack to ensure that outlier detection does not
work for pick_first.
A new metadata type, `x-envoy-peer-metadata`, is being introduced. At
the moment we don't have a better way to do this than adding it directly
in `metadata_batch.h`.
The GSM Observability plugin uses this metadata to send topology
information to peers in the form of a serialized and base64-encoded
`google::protobuf::Struct`. The individual keys being used inside the
struct are subject to change.
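A minimal sketch of producing a value of that shape (the key below is a
placeholder, since the real keys are subject to change):

```cpp
#include <string>

#include "absl/strings/escaping.h"
#include "google/protobuf/struct.pb.h"

std::string MakePeerMetadataValue() {
  google::protobuf::Struct md;
  // Placeholder key/value; the actual keys are subject to change.
  (*md.mutable_fields())["workload_name"].set_string_value("my-workload");
  std::string serialized;
  md.SerializeToString(&serialized);
  // Base64-encode the serialized proto; this string becomes the value of
  // the x-envoy-peer-metadata header.
  return absl::Base64Escape(serialized);
}
```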
Based on updates at https://github.com/grpc/proposal/pull/380
---------
Co-authored-by: Mark D. Roth <roth@google.com>
Co-authored-by: markdroth <markdroth@users.noreply.github.com>
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
Old message:
new_cluster_1: UNAVAILABLE: errors validating xds_cluster_resolver LB
policy config: [field:discoveryMechanisms error:must be non-empty]
New message:
new_cluster_1: FAILED_PRECONDITION: aggregate cluster graph has no leaf
clusters
If we get a readable event on an fd and both of the following happen:
- c-ares does *not* read all bytes off the fd
- c-ares removes the fd from the set ARES_GETSOCK_READABLE
... then we have a busy loop here, where we'd keep asking c-ares to
process an fd that it no longer cares about.
This is indirectly related to a change in this code one month ago:
https://github.com/grpc/grpc/pull/33942 - before that change, c-ares
would close the socket when it called
[handle_error](7f3262312f/src/lib/ares_process.c (L707))
and so `IsFdStillReadableLocked` would start returning `false`, causing
us to get away with [this
loop](f6a994229e/src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc (L371)).
Now, because `IsFdStillReadableLocked` will keep returning true (because
of our overridden `close` API), we'll loop forever.
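An illustrative sketch of the loop in question (the readability check is
a hypothetical stand-in for the wrapper's state, not the actual
`grpc_ares_wrapper.cc` code):

```cpp
#include <ares.h>

// Hypothetical stand-in: returns true while the fd has unread bytes.
// With our overridden close() the socket is never actually closed, so
// once c-ares stops draining the fd this stays true forever.
extern bool IsFdStillReadableLocked(ares_socket_t fd);

void ProcessReadableFd(ares_channel channel, ares_socket_t fd) {
  while (IsFdStillReadableLocked(fd)) {
    // If c-ares has already removed `fd` from its ARES_GETSOCK_READABLE
    // set, this call reads nothing, the fd stays readable, and we spin.
    ares_process_fd(channel, fd, ARES_SOCKET_BAD);
  }
}
```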
Based on updates at https://github.com/grpc/proposal/pull/380
We added this as an exploratory measure for a customer that thought they
were using OpenCensus (this turned out to be emphatically false).
Remove it since it's probably not how we ultimately want to do this, and
wait for something better to come along.
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>