Remove trailing-spaces from doc/*

Branch: pull/21959/head
Author: Yushiro FURUKAWA
Parent: 349bc1b945
Commit: 61360c754b
1. doc/core/combiner-explainer.md (10)
2. doc/core/epoll-polling-engine.md (2)
3. doc/core/grpc-client-server-polling-engine-usage.md (2)
4. doc/core/grpc-cq.md (2)
5. doc/core/moving-to-c++.md (6)
6. doc/environment_variables.md (2)
7. doc/fork_support.md (2)
8. doc/http2-interop-test-descriptions.md (26)
9. doc/internationalization.md (2)
10. doc/interop-test-descriptions.md (10)
11. doc/keepalive.md (2)
12. doc/python/server_reflection.md (4)
13. doc/security_audit.md (8)
14. doc/unit_testing.md (2)
15. doc/versioning.md (4)

@@ -41,9 +41,9 @@ Instead, get a new property:
class combiner {
  mpscq q; // multi-producer single-consumer queue can be made non-blocking
  state s; // is it empty or executing
  run(f) {
    if (q.push(f)) {
      // q.push returns true if it's the first thing
      while (q.pop(&f)) { // modulo some extra work to avoid races
        f();
@@ -73,9 +73,9 @@ class combiner {
  mpscq q; // multi-producer single-consumer queue can be made non-blocking
  state s; // is it empty or executing
  queue finally; // you can only do run_finally when you are already running something from the combiner
  run(f) {
    if (q.push(f)) {
      // q.push returns true if it's the first thing
      loop:
      while (q.pop(&f)) { // modulo some extra work to avoid races
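
The pseudocode above captures the core scheduling idea: whichever caller pushes onto an empty queue becomes the single drainer and runs callbacks serially until the queue is empty. Below is a minimal, hedged C++ sketch of that pattern; it uses a mutex-guarded `std::queue` with an `active` flag instead of gRPC's lock-free mpscq, and it omits the `finally` queue and the offload path, so it illustrates the idea rather than the real combiner.

```
#include <functional>
#include <mutex>
#include <queue>

// Simplified combiner sketch (not gRPC's implementation): "active" plays the
// role of q.push() returning "this was the first item".
class Combiner {
 public:
  void Run(std::function<void()> f) {
    {
      std::lock_guard<std::mutex> lock(mu_);
      q_.push(std::move(f));
      if (active_) return;  // another thread is already draining the queue
      active_ = true;       // we pushed onto an empty queue: we drain it
    }
    Drain();
  }

 private:
  void Drain() {
    for (;;) {
      std::function<void()> f;
      {
        std::lock_guard<std::mutex> lock(mu_);
        if (q_.empty()) {
          active_ = false;  // drained; the next Run() becomes the drainer
          return;
        }
        f = std::move(q_.front());
        q_.pop();
      }
      f();  // callbacks run one at a time, so they never race with each other
    }
  }

  std::mutex mu_;
  std::queue<std::function<void()>> q_;
  bool active_ = false;
};
```

Work pushed from other threads while the drainer is still in its loop is picked up by that same loop, which is exactly what the `while (q.pop(&f))` above expresses.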
@@ -127,7 +127,7 @@ tries to spray events onto as many threads as possible to get as much concurrenc
So `offload` really does:
```
workqueue.run(continue_from_while_loop);
break;
```

@@ -104,7 +104,7 @@ void grpc_use_signal(int signal_num)
If the calling application does not provide a signal number, then the gRPC library falls back to a model similar to the current implementation (where every thread does a blocking `poll()` on its `wakeup_fd` and the `epoll_fd`). The function `psi_wait()` in figure 2 implements this logic.
**>>** (**NOTE**: Alternatively, we can implement turnstile polling (i.e. having only one thread call `epoll_wait()` on the epoll set at any time, while all other threads call `poll()` on their `wakeup_fds`)
in case we do not get a signal number from the application.)
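
To make the fallback model above concrete (each polling thread watching a `wakeup_fd` alongside the `epoll_fd`), here is a small, self-contained Linux sketch that uses an `eventfd` as the wakeup fd; it only illustrates the wakeup mechanism and is not gRPC's polling-engine code.

```
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

#include <cstdint>
#include <cstdio>

int main() {
  int epoll_fd = epoll_create1(0);
  int wakeup_fd = eventfd(0, EFD_NONBLOCK);  // used to "kick" the poller

  epoll_event ev{};
  ev.events = EPOLLIN;
  ev.data.fd = wakeup_fd;
  epoll_ctl(epoll_fd, EPOLL_CTL_ADD, wakeup_fd, &ev);

  // In a real engine another thread would do this write to wake the poller.
  uint64_t one = 1;
  (void)write(wakeup_fd, &one, sizeof(one));

  epoll_event events[8];
  int n = epoll_wait(epoll_fd, events, 8, 1000 /* ms timeout */);
  std::printf("epoll_wait returned %d event(s)\n", n);

  close(wakeup_fd);
  close(epoll_fd);
  return 0;
}
```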

@@ -7,7 +7,7 @@ This document talks about how polling engine is used in gRPC core (both on clien
## gRPC client
### Relation between Call, Channel (sub-channels), Completion queue, `grpc_pollset`
- A gRPC Call is tied to a channel (more specifically a sub-channel) and a completion queue for the lifetime of the call.
- Once a _sub-channel_ is picked for the call, the file descriptor (socket fd in the case of TCP channels) is added to the pollset corresponding to the call's completion queue. (Recall that, as per [grpc-cq](grpc-cq.md), a completion queue has a pollset by default.)

@@ -61,4 +61,4 @@ grpc_cq_end_op(cq, tag) {
}
```

@@ -11,7 +11,7 @@ gRPC core was originally written in C89 for several reasons
support, etc). Over time, this was changed to C99 as all relevant
compilers in active use came to support C99 effectively.
gRPC started allowing the use of C++, with a couple of exceptions intended to avoid
having a C++ library such as `libstdc++.so` linked in.
(For more detail, see the [proposal](https://github.com/grpc/proposal/blob/master/L6-core-allow-cpp.md))
@@ -25,12 +25,12 @@ C++ compatible with
## Constraints
- Most of the features available in C++11 may be used, but there are some exceptions
because gRPC should support old systems.
- Should be built with gcc 4.8, clang 3.3, and Visual C++ 2015.
- Should be run on Linux system with libstdc++ 6.0.9 to support
[manylinux1](https://www.python.org/dev/peps/pep-0513).
- This limits us from using parts of the modern C++ standard library such as `filesystem`.
You can easily see whether a PR is free from this issue by checking the result of
the `Artifact Build Linux` test.
- `thread_local` is not allowed on Apple's products because their old OSes

@@ -69,7 +69,7 @@ some configuration as environment variables that can be set.
completion queue
- pick_first - traces the pick first load balancing policy
- plugin_credentials - traces plugin credentials
- pollable_refcount - traces reference counting of 'pollable' objects (only
in DEBUG)
- resource_quota - traces resource quota object internals
- round_robin - traces the round_robin load balancing policy
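
As a hedged usage sketch, the tracers listed above are enabled through the `GRPC_TRACE` environment variable (with `GRPC_VERBOSITY` controlling the log level); the snippet below simply sets both from code instead of the shell, under the assumption that the variables are read when the library is initialized.

```
#include <cstdlib>

#include <grpc/grpc.h>

int main() {
  // Illustrative only: enable two of the tracers listed above. The variables
  // are read when gRPC initializes, so set them before grpc_init().
  setenv("GRPC_TRACE", "pollable_refcount,round_robin", 1 /* overwrite */);
  setenv("GRPC_VERBOSITY", "DEBUG", 1);

  grpc_init();
  // ... create channels / servers; trace output goes to stderr ...
  grpc_shutdown();
  return 0;
}
```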

@@ -25,7 +25,7 @@ A regression was noted in cases where users are doing fork/exec. This
was due to a ```pthread_atfork()``` handler that was added in 1.7 to partially
support forking in gRPC. A deadlock can happen when the pthread_atfork
handler is running, and an application thread is calling into gRPC.
We have provided a workaround for this issue by allowing users to turn
off the handler using env flag ```GRPC_ENABLE_FORK_SUPPORT=False```.
This should be set whenever a user expects to always call exec
immediately following fork. It will disable the fork handlers.
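
A hedged sketch of the fork-then-exec pattern the flag targets, assuming `GRPC_ENABLE_FORK_SUPPORT` is read from the environment when gRPC initializes (setting the flag in the launching shell works just as well):

```
#include <cstdlib>

#include <sys/wait.h>
#include <unistd.h>

int main() {
  // Illustrative only: a process that only ever fork()s in order to exec()
  // immediately can disable gRPC's fork handlers, as described above.
  setenv("GRPC_ENABLE_FORK_SUPPORT", "False", 1);

  pid_t pid = fork();
  if (pid == 0) {
    // Child: exec right away, never touching gRPC state in between.
    execlp("/bin/true", "true", static_cast<char*>(nullptr));
    _exit(127);  // only reached if exec fails
  }
  int status = 0;
  waitpid(pid, &status, 0);
  return 0;
}
```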

@@ -8,7 +8,7 @@ Server
------
The code for the custom http2 server can be found
[here](https://github.com/grpc/grpc/tree/master/test/http2_test).
It is responsible for handling requests and sending responses, and also for
fulfilling the behavior of each particular test case.
Server should accept these arguments:
@@ -51,7 +51,7 @@ the user application having to do a thing.
Client Procedure:
1. Client sends two UnaryCall requests (and sleeps for 1 second in-between).
TODO: resolve [9300](https://github.com/grpc/grpc/issues/9300) and remove the 1 second sleep
```
{
response_size: 314159
@@ -78,7 +78,7 @@ RST_STREAM immediately after sending headers to the client.
Procedure:
1. Client sends UnaryCall with:
```
{
response_size: 314159
@@ -93,7 +93,7 @@ Client asserts:
Server Procedure:
1. Server sends a RST_STREAM with error code 0 after sending headers to the client.
*At the moment the error code and message returned are not standardized throughout all
languages. Those checks will be added once all client languages behave the same way. [#9142](https://github.com/grpc/grpc/issues/9142) is in flight.*
@@ -104,7 +104,7 @@ RST_STREAM halfway through sending data to the client.
Procedure:
1. Client sends UnaryCall with:
```
{
response_size: 314159
@@ -118,7 +118,7 @@ Client asserts:
* Call was not successful.
Server Procedure:
1. Server sends a RST_STREAM with error code 0 after sending half of
the requested data to the client.
### rst_after_data
@@ -128,7 +128,7 @@ RST_STREAM after sending all of the data to the client.
Procedure:
1. Client sends UnaryCall with:
```
{
response_size: 314159
@@ -156,7 +156,7 @@ server.
Procedure:
1. Client sends UnaryCall with:
```
{
response_size: 314159
@@ -165,16 +165,16 @@ Procedure:
}
}
```
Client asserts:
* call was successful.
* response payload body is 314159 bytes in size.
Server Procedure:
1. Server tracks the number of outstanding pings (i.e. +1 when it sends a ping, and -1
when it receives an ack from the client).
2. Server sends pings before and after sending headers, also before and after sending data.
Server Asserts:
* Number of outstanding pings is 0 when the connection is lost.
@@ -185,10 +185,10 @@ This test verifies that the client observes the MAX_CONCURRENT_STREAMS limit set
Client Procedure:
1. Client sends initial UnaryCall to allow the server to update its MAX_CONCURRENT_STREAMS settings.
2. Client concurrently sends 10 UnaryCalls.
Client Asserts:
* All UnaryCalls were successful, and had the correct type and payload size.
Server Procedure:
1. Sets MAX_CONCURRENT_STREAMS to one after the connection is made.

@@ -1,7 +1,7 @@
gRPC Internationalization
=========================
As a universal RPC framework, gRPC needs to be fully usable within/across different international environments.
This document describes gRPC API and behavior specifics when used in a non-English environment.
## API Concepts

@@ -1007,21 +1007,21 @@ languages. Therefore they are not part of our interop matrix.
#### rpc_soak
The client performs many large_unary RPCs in sequence over the same channel.
The number of RPCs is configured by the experimental flag, `soak_iterations`.
#### channel_soak
The client performs many large_unary RPCs in sequence. Before each RPC, it
tears down and rebuilds the channel. The number of RPCs is configured by
the experimental flag, `soak_iterations`.
This test puts stress on several gRPC components: the resolver, the load
balancer, and the RPC hotpath.
#### long_lived_channel
The client performs a number of large_unary RPCs over a single long-lived
channel with a fixed but configurable interval between each RPC.
### TODO Tests

@@ -14,7 +14,7 @@ The keepalive ping is controlled by two important channel arguments -
The above two channel arguments should be sufficient for most users, but the following arguments can also be useful in certain use cases.
* **GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS**
* If set to 1 (0 : false; 1 : true), this channel argument allows keepalive pings to be sent even if there are no calls in flight.
* **GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA**
* This channel argument controls the maximum number of pings that can be sent when there is no other data (data frame or header frame) to be sent. GRPC Core will not continue sending pings if we run over the limit. Setting it to 0 allows sending pings without sending data.
* **GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS**
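
A hedged C++ usage sketch: the keepalive channel arguments described in this hunk are typically set when the channel is created. The target address and the numeric values below are arbitrary examples, not recommendations.

```
#include <grpc/grpc.h>
#include <grpcpp/grpcpp.h>

int main() {
  grpc::ChannelArguments args;
  // Send a keepalive ping every 10 seconds, waiting up to 5 seconds for the ack.
  args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 10 * 1000);
  args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 5 * 1000);
  // Allow keepalive pings even when there are no calls in flight.
  args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
  // 0 = do not limit pings sent while there is no outstanding data,
  // per the description of GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA above.
  args.SetInt(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0);

  auto channel = grpc::CreateCustomChannel(
      "localhost:50051", grpc::InsecureChannelCredentials(), args);
  // ... create stubs on `channel` as usual ...
  return 0;
}
```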

@@ -6,7 +6,7 @@ and more examples how to use server reflection.
## Enable server reflection in Python servers
gRPC Python Server Reflection is an add-on library. To use it, first install
the [grpcio-reflection] PyPI package into your project.
Note that with Python you need to manually register the service
@@ -29,7 +29,7 @@ def serve():
server.start()
```
Please see [greeter_server_with_reflection.py] in the examples directory for the full
example, which extends the gRPC [Python `Greeter` example] on a reflection-enabled server.
After starting the server, you can verify that the server reflection

@@ -1,6 +1,6 @@
# gRPC Security Audit
A third-party security audit of the gRPC C++ stack was performed by [Cure53](https://cure53.de) in October 2019. The full report can be found [here](https://github.com/grpc/grpc/tree/master/doc/grpc_security_audit.pdf).
# Addressing grpc_security_audit
@@ -21,7 +21,7 @@ Below is a list of alternatives that gRPC team considered.
### Alternative #1: Rewrite gpr_free to take void\*\*
One solution is to change the API of `gpr_free` so that it automatically nulls the given pointer after freeing it.
```
gpr_free (void** ptr) {
@@ -30,7 +30,7 @@ gpr_free (void** ptr) {
}
```
This defensive programming pattern would help protect gRPC from the potential exploits and latent dangling pointer bugs mentioned in the security report.
However, performance would be a significant concern as we are now unconditionally adding a store to every gpr_free call, and there are potentially hundreds of these per RPC. At the RPC layer, this can add up to prohibitive costs.
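
For illustration only, here is a minimal sketch of what such a nulling free wrapper could look like; `free_and_null` is a hypothetical name and this is not gRPC's actual `gpr_free` API.

```
#include <cstdlib>

// Hypothetical illustration (not a gRPC API): a free-and-null helper in the
// spirit of the "gpr_free taking void**" alternative described above.
template <typename T>
void free_and_null(T** ptr) {
  std::free(*ptr);  // release the allocation
  *ptr = nullptr;   // this extra store on every free is the performance concern
}

int main() {
  char* buf = static_cast<char*>(std::malloc(64));
  free_and_null(&buf);  // buf is now nullptr, so stale reuse is easier to catch
  return buf == nullptr ? 0 : 1;
}
```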
@@ -61,7 +61,7 @@ Because of performance and maintainability concerns, GRP-01-002 will be addresse
## GRP-01-003 Calls to malloc suffer from potential integer overflows
The vulnerability, as defined by the report, is that calls to `gpr_malloc` in the C-core codebase may suffer from potential integer overflow in cases where we multiply the array element size by the size of the array. The penetration testers did not identify a concrete place where this occurred, but rather emphasized that the coding pattern itself had potential to lead to vulnerabilities. The report’s suggested solution for GRP-01-003 was to create a `calloc(size_t nmemb, size_t size)` wrapper that contains integer overflow checks.
However, the gRPC team firmly believes that gRPC Core should only use integer overflow checks in the places where they’re needed; for example, any place where remote input influences the input to `gpr_malloc` in an unverified way. This is because bounds-checking is very expensive at the RPC layer.
Determining exactly where bounds-checking is needed requires an audit tracing each `gpr_malloc` (or `gpr_realloc` or `gpr_zalloc`) call up the stack to determine whether sufficient bounds-checking was performed. This kind of audit, done manually, is fairly expensive engineer-wise.
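
A minimal sketch of the calloc-style overflow check the report suggests, using a hypothetical `checked_alloc` helper (not an existing gRPC API): the multiplication is only performed after verifying it cannot overflow.

```
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <limits>

// Hypothetical sketch (not a gRPC API) of the calloc(nmemb, size)-style
// wrapper: refuse the allocation if nmemb * size would overflow size_t.
void* checked_alloc(size_t nmemb, size_t size) {
  if (size != 0 && nmemb > std::numeric_limits<size_t>::max() / size) {
    return nullptr;  // the multiplication would overflow
  }
  return std::malloc(nmemb * size);
}

int main() {
  void* ok = checked_alloc(1024, sizeof(int));       // normal allocation
  void* bad = checked_alloc(SIZE_MAX, sizeof(int));  // rejected: would overflow
  std::printf("ok=%p bad=%p\n", ok, bad);
  std::free(ok);
  return 0;
}
```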

@@ -75,7 +75,7 @@ grpc_proto_library(
```
By adding such a flag, a header file `echo_mock.grpc.pb.h` containing the mocked stub will also be generated.
This header file can then be included in test files along with a gmock dependency.

@@ -3,7 +3,7 @@
## Versioning Overview
All gRPC implementations use a three-part version number (`vX.Y.Z`) and follow [semantic versioning](https://semver.org/), which defines the semantics of major, minor and patch components of the version number. In addition to that, gRPC versions evolve according to these rules:
- **Major version bumps** only happen on rare occasions. In order to qualify for a major version bump, certain criteria described later in this document need to be met. Most importantly, a major version increase must not break wire compatibility with other gRPC implementations so that existing gRPC libraries remain fully interoperable.
- **Minor version bumps** happen approx. every 6 weeks as part of the normal release cycle as defined by the gRPC release process. A new release branch (named vMAJOR.MINOR.PATCH) is cut every 6 weeks based on the [release schedule](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md).
- **Patch version bump** corresponds to bugfixes done on release branch.
@@ -24,7 +24,7 @@ There are also a few extra rules regarding adding new gRPC implementations (e.g.
To avoid user confusion and simplify reasoning, the gRPC releases in different languages try to stay synchronized in terms of major and minor version (all languages follow the same release schedule). Nevertheless, because we also strictly follow semantic versioning, there are circumstances in which a gRPC implementation needs to break the version synchronicity and do a major version bump independently of other languages.
### Situations when it's ok to do a major version bump
- **change forced by the language ecosystem:** when the language itself or its standard libraries that we depend on make a breaking change (something which is out of our control), reacting with updating gRPC APIs may be the only adequate response.
- **voluntary change:** Even in non-forced situations, there might be circumstances in which a breaking API change makes sense and represents a net win; but as a rule of thumb, breaking changes are very disruptive for users, cause user fragmentation, and incur high maintenance costs. Therefore, breaking API changes should be very rare events that need to be considered with extreme care, and the bar for accepting such changes is intentionally set very high.
Example scenarios where a breaking API change might be adequate:
- fixing a security problem which requires changes to API (need to consider the non-breaking alternatives first)
