The docs change is extracted from
https://github.com/grpc/grpc/pull/31869 and
https://github.com/grpc/grpc/pull/31938.
The actual upgrade of boringssl is in progress, but in the meantime we
can at least make sure the instructions are up to date.
I'll also update the internal counterpart (cl/501499368).
Co-authored-by: Hannah Shi <hannahshisfb@gmail.com>
This filter was originally written only for the C++ wrapped layer, but
we have plans to use it for Python (and possibly other wrapped languages
in the future).
Looks like this was accidentally dropped from our build files in
https://github.com/grpc/grpc/pull/21929, which means that this test
hasn't actually been built or run in almost 3 years. Unsurprisingly,
after all that time, I had to make some changes to the test to get it to
build again.
I've replaced all use of `InternalError` here because none of these
scenarios would necessarily merit a bug or outage report.
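For example, a substitution of roughly this shape (a sketch; the
scenario and names are illustrative, not the actual call sites):
```
#include "absl/status/status.h"

// Hypothetical scenario: an unreachable peer is an operational
// condition, not a bug in gRPC, so kUnavailable fits better than
// kInternal.
absl::Status Connect(bool peer_reachable) {
  if (!peer_reachable) {
    // Before: return absl::InternalError("peer unreachable");
    return absl::UnavailableError("peer unreachable");
  }
  return absl::OkStatus();
}
```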
Identified in the Fuchsia test suite: calling the Listener's
`on_shutdown` method with anything other than `absl::OkStatus()` would
fail some assertions in the POSIX-specialized client test suite if the
Oracle were implemented similarly. It _should_ fail the same way in the
listener test suite, but the statuses were being ignored. I've fixed that.
This fixes the problems identified while building with clang-cl on
Windows with build arguments `/std:c++14 /W4`.
Passes internal checks: cl/511562057
----
We can't yet enable a clang-cl build as part of our continuous
integration tests due to a few issues:
First, protobuf fails an `unused-parameter` warning check in v4.21 (the
current pinned version) on Windows. The upgrade to v4.22 is evidently
painful and in progress. Short of maintaining a patch against protobuf,
or somehow disabling warnings-as-errors for the protobuf code alone,
we'll need to upgrade our dependency before we can automate the clang-cl
build for Windows.
Next, our Windows CI environment does not have clang installed. There
has been some work over the past year to create custom kokoro images,
but that work has apparently stalled after trading hands a few times.
Using our current images, installing clang every time we run the job may
be our best bet (likely from precompiled binaries that we host
ourselves), but it will eat up more CI resources.
Finally, some of the default build configurations are incorrect for
clang-cl. For example, `-Wall` in clang-cl translates roughly to
`-Weverything` in clang on Linux, whereas `-W4` in clang-cl translates
more closely to `-Wall -Wextra`. This configuration in the gRPC bazel
build is not currently platform-specific; it will need to be updated.
Similarly, `-std=c++14` is an unknown argument on Windows (it should be
`/std:c++14`) and should not be in the bazelrc. This will likely need
the same platform-specific support.
With the `--copt="-std=c++14"` setting in the bazelrc file as it is
today, MSVC builds have complained for every cc file:
```
cl : Command line warning D9002 : ignoring unknown option '-std=c++14'
```
This adds thousands of lines of noise to Windows builds, and hides
useful warnings. Using the `/std:c++14` flag on MSVC (and clang-cl) gets
us the desired result.
While debugging another problem, I had some doubts about `Seq`, so I
expanded our tests to try to isolate the problem (so far without
success, so I think the original problem was elsewhere).
Internal Windows builds will catch issues that we cannot yet catch in
OSS. This will be mostly remedied once we have clang-cl in our CI (see a
rough roadmap in https://github.com/grpc/grpc/pull/32448). For now, this
PR identifies folders where most Windows-specific code is developed, and
requires cherrypicks for PRs that touch anything inside those folders.
This PR also refactors gpr and gprpp source files to better isolate all
platform-specific code ~~the Windows-only code~~. ~~I will reorganize the
other platform-specific files using this structure if there are no
objections.~~
This subset of folders covers about half of the `#ifdef GPR_WINDOWS`
usages in gRPC, but nearly all of the actively-developed Windows code
locations.
This reverts commit 0fc0384b5a.
Major change: this code calls `GetDefaultEventEngine` once on Alarm
init instead of 7 times throughout.
I will run benchmarks to ensure b/237283941 is not reproduced.
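A minimal sketch of the caching pattern, assuming `GetDefaultEventEngine()`
returns a `std::shared_ptr<EventEngine>` (the class body is illustrative,
not the actual Alarm internals):
```
#include <memory>

#include <grpc/event_engine/event_engine.h>

namespace grpc_event_engine {
namespace experimental {
// Declared in an internal gRPC header; repeated here so the sketch
// stands alone.
std::shared_ptr<EventEngine> GetDefaultEventEngine();
}  // namespace experimental
}  // namespace grpc_event_engine

class Alarm {
 public:
  Alarm()
      : engine_(grpc_event_engine::experimental::GetDefaultEventEngine()) {}

 private:
  // Fetched once at construction and reused, instead of re-resolving the
  // default engine at each of the seven call sites.
  std::shared_ptr<grpc_event_engine::experimental::EventEngine> engine_;
};
```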
---------
Co-authored-by: drfloob <drfloob@users.noreply.github.com>
For stats, the StackDriver/OpenCensus API allows setting the
MonitoredResource directly, so use that.
For tracing, there is no explicit MonitoredResource to use, so just
insert it into the attributes for a span.
This PR adds batching support for GCP Observability logging. Instead of
naively creating a new RPC to Cloud Logging for each logging event, we
now batch log events until one of the following conditions is met
(sketched in the code below):
* Batch size of 1000 entries
* Batch memory consumption of 1 MB
* A timeout of 1 second, after which we flush the accumulated batch
regardless of its size
There can also be cases where the RPCs fail or the batch accumulates to
a very large size (100,000 entries or 10 MB). In such cases, we log the
events with gpr_log instead of continuing to accumulate.
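A minimal sketch of that flush policy, with illustrative constants and
names (the real logic lives in the observability logging sink):
```
#include <cstddef>

constexpr size_t kMaxBatchEntries = 1000;            // flush at 1000 entries
constexpr size_t kMaxBatchBytes = 1024 * 1024;       // flush at 1 MB
// A 1-second timer (not shown) also triggers a flush so entries never sit
// in the batch for long, regardless of size.

struct Batch {
  size_t entries = 0;
  size_t bytes = 0;
};

bool ShouldFlush(const Batch& b) {
  return b.entries >= kMaxBatchEntries || b.bytes >= kMaxBatchBytes;
}

bool ShouldFallBackToLocalLog(const Batch& b, bool rpcs_failing) {
  // If RPCs keep failing or the batch grows very large (100,000 entries
  // or 10 MB), dump entries via gpr_log instead of accumulating further.
  return rpcs_failing || b.entries >= 100000 ||
         b.bytes >= 10 * 1024 * 1024;
}
```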
Additionally, `GcpObservabilityClose()` has been added to gracefully
shut down logging: we block until all currently logged events are
flushed. (We might be able to gracefully shut down stats and tracing in
the future too.)
core; ref #30979
1. Avoid calling `grpc_dump_slice` if the log level is too low (and the
result would be ignored).
2. Use `GRPC_TRACE_FLAG_ENABLED(x)` over `x.enabled()` in the touched
code. Both changes are sketched below.
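A sketch of both changes, using a hypothetical trace flag and slice
(internal include paths are approximate):
```
#include <grpc/support/alloc.h>
#include <grpc/support/log.h>

#include "src/core/lib/debug/trace.h"
#include "src/core/lib/gpr/string.h"
#include "src/core/lib/slice/slice_string_helpers.h"

void LogSliceIfTracing(grpc_core::TraceFlag& some_trace_flag,
                       const grpc_slice& slice) {
  // Was: if (some_trace_flag.enabled()) { ... }
  if (GRPC_TRACE_FLAG_ENABLED(some_trace_flag)) {
    // Only pay for grpc_dump_slice when the result will actually be logged.
    char* dump = grpc_dump_slice(slice, GPR_DUMP_HEX | GPR_DUMP_ASCII);
    gpr_log(GPR_DEBUG, "slice: %s", dump);
    gpr_free(dump);
  }
}
```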
---------
Co-authored-by: Yash Tibrewal <yashkt@google.com>
Enforce a minimum value for `refresh_interval_sec_` in the
`FileWatcherCertificateProvider`. Issues have been found when it is set
to 0, and the security team discussed and agreed that 0 should not be a
valid value for this use case. A sketch of the enforcement follows.
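A minimal sketch, assuming a constant minimum (the actual minimum value,
constructor signature, and member layout may differ):
```
#include <cstdint>

constexpr int64_t kMinRefreshIntervalSeconds = 1;  // assumed minimum

class FileWatcherCertificateProvider {
 public:
  explicit FileWatcherCertificateProvider(int64_t refresh_interval_sec)
      : refresh_interval_sec_(refresh_interval_sec) {
    // Clamp rather than accept 0, which is no longer considered valid.
    if (refresh_interval_sec_ < kMinRefreshIntervalSeconds) {
      refresh_interval_sec_ = kMinRefreshIntervalSeconds;
    }
  }

  int64_t refresh_interval_sec_;  // public for tests, per the note below
};
```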
I made `refresh_interval_sec_` public to make it easy to test; I didn't
immediately see an easy way around this. I found that `FRIEND_TEST`
exists for accessing private members, but I didn't see it used anywhere
in grpc. If there is a better solution, please let me know.
This test is flaky only with iomgr; this change will likely fix that.
Protobuf `4.22.0` broke backward compatibility in `plugin_pb2.py`, which
is presumably a relatively minor regression, since we have not yet heard
any complaints about it. This PR:
- Excludes `4.22.0` from installation.
- _Includes_ protobuf pre-releases in testing so that such breakages can
be caught more quickly in the future.
When bad pre-releases are caught, we can exclude them in a similar
manner to this PR. We may eventually want to invest in a system where we
can define these bad versions centrally.
Relands #32385 (reverted in #32419) with fixes.
The Windows build is clean on a test cherrypick: cl/511291828
---------
Co-authored-by: drfloob <drfloob@users.noreply.github.com>
This is a step towards enabling `--define=use_strict_warning=true` for
Windows clang-cl.
Return `Timeout(kMaxHours, Unit::kHours)` if the value is about to
overflow in `DivideRoundingUp`.
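For illustration, the guard looks roughly like this, assuming
`DivideRoundingUp` has the usual `(dividend + divisor - 1) / divisor`
shape. `Timeout`, `kMaxHours`, and `Unit::kHours` are the names above;
the stand-in types and the construction are illustrative:
```
#include <cstdint>
#include <limits>

// Illustrative stand-ins for the real types named in this PR:
enum class Unit { kHours };
struct Timeout {
  int64_t value;
  Unit unit;
};
constexpr int64_t kMaxHours = 27000;  // illustrative cap

Timeout EncodeAsHours(int64_t millis) {
  constexpr int64_t kMillisPerHour = 3600 * 1000;
  // The addition inside DivideRoundingUp would overflow for very large
  // values, so clamp to the maximum encodable timeout instead.
  if (millis > std::numeric_limits<int64_t>::max() - (kMillisPerHour - 1)) {
    return Timeout{kMaxHours, Unit::kHours};
  }
  return Timeout{(millis + kMillisPerHour - 1) / kMillisPerHour,
                 Unit::kHours};
}
```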
The `XdsFaultInjectionMaxFault` test has seen a few flakes since #32326
was merged. I believe the flakiness is caused by the fact that when a
large number of RPCs are queued up before the resolver result comes in,
those RPCs are now re-processed in parallel instead of sequentially,
which can cause us to delay more RPCs than we should due to the
`max_faults` setting. To fix this, we change the test to ensure that the
channel is connected (i.e., the resolver result has already been
returned) before we start sending a large number of concurrent RPCs.
Although this is the only test that I've seen flakes in, I've made this
same change consistently to all fault injection tests that are creating
a large number of concurrent RPCs, since the same flake could affect any
of them.
The PHP7 build is failing; removing it from CI while we investigate the failure.
This code is not plumbed through yet, but it provides the core
infrastructure needed to detect the GCP environment resources used to
set up the labels/attributes/resources for stats, tracing, and logging.
Details on how the various environment resources are set up were derived
by looking at Java's Cloud Logging library and OpenTelemetry's future
plans. (This could be better explained in an offline review, since some
links are internal.)
Requesting @veblush for a full review and @markdroth for a structural
review.
This is a prerequisite for converting the client_channel filter to
promises. This refactors two objects:
- `ClientChannel::CallData`, which is primarily responsible for applying
the service config to the call
- `ClientChannel::LoadBalancedCall`, which is responsible for doing the
LB pick for the call attempt
Each of those classes has been split into two pieces:
- a base class with the functionality to be shared between the legacy
filter stack implementation and the new promise-based implementation
- a subclass providing the legacy filter stack implementation
A subsequent PR will add another subclass that provides the
promise-based implementation.
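Roughly, the new shape looks like this (a sketch; the subclass names are
assumptions for illustration, while the base/derived split is what's
described above):
```
class ClientChannel {
  // Shared logic for applying the service config to the call.
  class CallData { /* ... */ };
  // Legacy filter stack implementation.
  class FilterBasedCallData final : public CallData { /* ... */ };

  // Shared logic for doing the LB pick for a call attempt.
  class LoadBalancedCall { /* ... */ };
  class FilterBasedLoadBalancedCall final : public LoadBalancedCall {
    /* ... */
  };
  // A subsequent PR adds promise-based subclasses alongside these.
};
```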
The upb team wants to remove this particular bit of syntactic sugar from
the generated code. So instead of calling `has_foo()` when `foo` is a map
field, we call `foo_size()` and test the result against zero, as in the
sketch below.
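```
#include <cstddef>

// Hypothetical upb-generated accessors for a message with a map field
// `foo` (declared here only so the pattern reads in context).
struct mypkg_MyMessage;
size_t mypkg_MyMessage_foo_size(const mypkg_MyMessage* msg);
void UseFoo(const mypkg_MyMessage* msg);

void Handle(const mypkg_MyMessage* msg) {
  // Before: if (mypkg_MyMessage_has_foo(msg)) { UseFoo(msg); }
  if (mypkg_MyMessage_foo_size(msg) > 0) {
    UseFoo(msg);  // the map is non-empty
  }
}
```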
Original attempt was #31973, reverted in #32324 due to test flakiness.
There were two problems causing test flakiness here.
The first problem was that, upon resolver error, we were dispatching an
async callback to re-process each of the queued picks *before* we
updated the channel's connectivity state, which meant that the queued
picks might be re-processed in another thread before the new
connectivity state was set, so tests that expected the state to be
TRANSIENT_FAILURE once RPCs failed might not see the expected state.
The second problem affected the xDS ring hash tests, and it's a bit more
involved to explain.
We have an e2e test that simulates an aggregate cluster failover from a
primary cluster using ring_hash at startup. The primary cluster has two
addresses, both of which are unreachable when the client starts up, so
the client should immediately fail over to the secondary cluster, which
does have reachable endpoints. The test requires that no RPCs are failed
while this failover occurs. The original PR made this test flaky.
The problem here was caused by a combination of two factors:
1. Prior to the original PR, when the picker was updated (which happens
inside the WorkSerializer), we re-processed previously queued picks
synchronously, so it was not possible for another subchannel
connectivity state update (which also happens in the WorkSerializer) to
be processed between the time that we updated the picker and the time
that we re-processed the previously queued picks. The original PR
changed this such that the queued picks are re-processed asynchronously
(outside of the WorkSerializer), so it is now possible for a subchannel
connectivity state update to be processed between when the picker is
updated and when we re-process the previously queued picks.
2. Unlike most LB policies, where the picker does not see updated
subchannel connectivity states until a new picker is created, the
ring_hash picker gets the subchannel connectivity states from the LB
policy via a lock, so it can wind up seeing the new states before it
gets updated. This means that when a subchannel connectivity state
update is processed by the ring_hash policy in the WorkSerializer, it
will immediately be seen by the existing picker, even without a picker
update.
With those two points in mind, the sequence of events in the failing
test was as follows:
1. The pick is attempted in the ring_hash picker for the primary
cluster. This causes the first subchannel to attempt to connect.
2. The subchannel transitions from IDLE to CONNECTING. A new picker is
returned due to the subchannel connectivity state change, and the
channel retries the queued pick. The retried pick is done
asynchronously, but in this case it does not matter: the call will be
re-queued.
3. The connection attempt fails, and the subchannel reports
TRANSIENT_FAILURE. A new picker is again returned, and the channel
retries the queued pick. The retried pick is done asynchronously, but in
this case it does not matter: this causes the picker to trigger a
connection attempt for the second subchannel.
4. The second subchannel transitions from IDLE to CONNECTING. A new
picker is again returned, and the channel retries the queued pick. The
retried pick is done asynchronously, and in this case it *does* matter.
5. The second subchannel now transitions to TRANSIENT_FAILURE. The
ring_hash policy will now report TRANSIENT_FAILURE, but before it can
finish that...
6. ...In another thread, the channel now tries to re-process the queued
pick using the CONNECTING picker from step 4. However, because the
ring_hash policy has already seen the TRANSIENT_FAILURE report from the
second subchannel, that picker will now fail the pick instead of queuing
it.
After discussion with @ejona86 and @dfawley (since this bug actually
exists in Java and Go as well), we agreed that the right solution is to
change the ring_hash picker to contain its own copy of the subchannel
connectivity state information, rather than sharing that information
with the LB policy using synchronization.
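A minimal sketch of that fix, with all names illustrative: the picker
takes an immutable snapshot of subchannel connectivity state at
construction, instead of reading the LB policy's live state under a lock.
```
#include <map>
#include <string>
#include <utility>

enum class ConnState { kIdle, kConnecting, kReady, kTransientFailure };
using SubchannelKey = std::string;  // stand-in for the real key type

class RingHashPicker {
 public:
  explicit RingHashPicker(std::map<SubchannelKey, ConnState> snapshot)
      : snapshot_(std::move(snapshot)) {}

  // Picks consult snapshot_, so a subchannel state change processed after
  // this picker was created cannot be observed until the policy swaps in
  // a new picker, restoring the ordering the test relies on.

 private:
  const std::map<SubchannelKey, ConnState> snapshot_;
};
```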
---------
Co-authored-by: ctiller <ctiller@users.noreply.github.com>
This applies to all wrapped languages.
Removing a number of unused variables. This has no behaviour change.
These types are not considered "unused variables" by normal
`-Wunused-variable` flags because they have nontrivial destructors, but
these types' destructors are not used for their side effects, so unused
variables of these types should be considered bug-prone.
This PR removes all unused `absl::Status` and `absl::StatusOr<>`
variables I could find in grpc.
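A small example of the pattern (hypothetical function names):
```
#include "absl/status/status.h"

absl::Status DoWork();  // hypothetical fallible operation

void Caller() {
  // No compiler warning fires here (absl::Status has a nontrivial
  // destructor), but the result is silently dropped. This is exactly the
  // bug-prone pattern being removed.
  absl::Status status = DoWork();
}

void FixedCaller() {
  // Either actually handle the result...
  absl::Status s = DoWork();
  if (!s.ok()) {
    // handle or log the error
  }
  // ...or drop it explicitly if that is truly intended:
  (void)DoWork();
}
```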
Currently, the peer name is returned with the completion of the
send_initial_metadata op, which does not make sense, because with
retries, we don't actually know the peer name until we complete the
recv_initial_metadata op. This PR changes our code to return the peer
string as an attribute of the recv_initial_metadata op, so that it is
not available to the application until that point. This change may be
user-visible, but since our API docs don't seem to guarantee exactly
when this data will be available, it's not technically a breaking
change.
Note that in the promise-based stack, we were already assuming that the
peer string would be returned as part of the recv_initial_metadata
batch, so this PR helps reduce risk for the promise conversion by making
this semantic change now, thus decoupling it from the promise
conversion.
I have also changed the representation of the string in the metadata
batch to be a `grpc_core::Slice` instead of a `std::string`, so that we
can just take a ref to the string held in the transport instead of
having to copy the whole string for every call.
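A sketch of why the `grpc_core::Slice` representation is cheaper,
assuming the slice is ref-counted and exposes a `Ref()` that bumps a
refcount rather than copying bytes (class and member names illustrative):
```
#include "src/core/lib/slice/slice.h"  // internal header; path approximate

class Transport {
 public:
  // Before: a std::string peer copied into every call's metadata batch.
  // After: each call's recv_initial_metadata carries a ref to the same
  // underlying slice.
  grpc_core::Slice PeerForCall() const { return peer_.Ref(); }

 private:
  grpc_core::Slice peer_;
};
```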
A handful of problems were identified while writing the
WindowsEventEngine Listener. To make the listener review easier, these
fixes can be landed separately.
This is built upon https://github.com/grpc/grpc/pull/32376
Problems that are fixed in this PR:
* `OnConnectCompleted` held a Mutex while calling the user callback,
which can deadlock (see the sketch below).
* The WinSocket and some associated data need to remain alive after the
Endpoint is destroyed, since Windows IOCP still needs to use some of that
data. Endpoint destruction and socket shutdown are now decoupled, with
the socket managed by a `shared_ptr`.
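A minimal sketch of the deadlock fix pattern, with illustrative names
(not the actual WindowsEventEngine code): take the callback while holding
the lock, release the lock, then invoke the callback.
```
#include <functional>
#include <utility>

#include "absl/base/thread_annotations.h"
#include "absl/synchronization/mutex.h"

class Connector {
 public:
  void OnConnectCompleted() {
    std::function<void()> cb;
    {
      absl::MutexLock lock(&mu_);
      cb = std::move(on_done_);  // grab the callback under the lock
    }
    // Lock released: if cb re-enters this object, it can take mu_ safely.
    cb();
  }

 private:
  absl::Mutex mu_;
  std::function<void()> on_done_ ABSL_GUARDED_BY(mu_);
};
```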
---------
Co-authored-by: drfloob <drfloob@users.noreply.github.com>
While creating an internal CL that depends directly on
tsi_alts_credentials, I was getting linker errors saying `error:
backward reference detected: grpc_channel_credentials_release`, because
`alts_tsi_handshaker.cc` uses the `grpc_channel_credentials_release`
API, which is defined in the `grpc_security_base` target.