Merge pull request #16724 from sreecha/sreek-pe-usages-doc

Polling engine usage in client server
Sree Kuchibhotla authored 6 years ago, committed by GitHub
commit 262a5efd78
  1. doc/core/grpc-client-server-polling-engine-usage.md (+32)
  2. doc/images/grpc-call-channel-cq.png (BIN)
  3. doc/images/grpc-client-lb-pss.png (BIN)
  4. doc/images/grpc-server-cq-fds.png (BIN)
  5. tools/doxygen/Doxyfile.core (+1)
  6. tools/doxygen/Doxyfile.core.internal (+1)

@@ -0,0 +1,32 @@
# Polling Engine Usage on gRPC Client and Server
_Author: Sree Kuchibhotla (@sreecha) - Sep 2018_
This document describes how the polling engine is used in gRPC core, on both the client and server code paths.
## gRPC client
### Relation between Call, Channel (sub-channels), Completion queue, `grpc_pollset`
- A gRPC Call is tied to a channel (more specifically a sub-channel) and a completion queue for the lifetime of the call.
- Once a _sub-channel_ is picked for the call, the file descriptor (the socket fd in the case of TCP channels) is added to the pollset corresponding to the call's completion queue. (Recall that, as per [grpc-cq](grpc-cq.md), a completion queue has a pollset by default.) A minimal sketch of this call/channel/completion-queue binding follows the diagram below.
![image](../images/grpc-call-channel-cq.png)
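To make the diagram above concrete, here is a minimal sketch using the public gRPC core C API (as it existed around v1.15, e.g. `grpc_insecure_channel_create`): a call is created against a channel and bound to a completion queue for its lifetime. The target `localhost:50051` and the method `/foo.Bar/Method` are placeholders, and error handling and completion-queue draining are omitted; this illustrates the binding described above, not the internal pollset plumbing itself.

```cpp
#include <grpc/grpc.h>
#include <grpc/slice.h>
#include <grpc/support/time.h>

int main() {
  grpc_init();

  // Each completion queue owns a pollset by default (see grpc-cq.md).
  grpc_completion_queue* cq = grpc_completion_queue_create_for_next(nullptr);

  // An insecure channel to a placeholder target; the resolver may turn this
  // target into one or more sub-channels.
  grpc_channel* channel =
      grpc_insecure_channel_create("localhost:50051", nullptr, nullptr);

  // The call is bound to this channel and completion queue for its lifetime.
  // Once a sub-channel is picked, its fd is added to cq's pollset.
  grpc_slice method = grpc_slice_from_static_string("/foo.Bar/Method");
  grpc_call* call = grpc_channel_create_call(
      channel, /*parent_call=*/nullptr, GRPC_PROPAGATE_DEFAULTS, cq, method,
      /*host=*/nullptr, gpr_inf_future(GPR_CLOCK_REALTIME), nullptr);

  // ... start batches on the call and poll cq via grpc_completion_queue_next ...

  grpc_call_unref(call);
  grpc_channel_destroy(channel);
  grpc_completion_queue_shutdown(cq);  // draining after shutdown omitted for brevity
  grpc_completion_queue_destroy(cq);
  grpc_shutdown();
  return 0;
}
```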
### Making progress on async `connect()` on sub-channels (`grpc_pollset_set` use case)
- A gRPC channel is created between a client and a 'target'. The 'target' may resolve to one or more backend servers.
- A sub-channel is the 'connection' from a client to a backend server.
- While establishing sub-channels (i.e., connections) to the backends, gRPC issues async [`connect()`](https://github.com/grpc/grpc/blob/v1.15.1/src/core/lib/iomgr/tcp_client_posix.cc#L296) calls which may not complete right away. When the `connect()` eventually succeeds, the socket fd becomes 'writable'
- This means the polling engine must monitor all these sub-channel `fd`s for writable events, and we need to make sure there is a polling thread that monitors all these fds
- To accomplish this, `grpc_pollset_set` is used in the following way (see the picture below; a plain POSIX sketch of the async `connect()` mechanism follows the diagram)
![image](../images/grpc-client-lb-pss.png)
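To illustrate the mechanism the bullets above rely on, here is a plain POSIX sketch (not gRPC code) of a non-blocking `connect()`: the call typically returns `EINPROGRESS`, and the fd later becomes writable once the connection is established, which is exactly the event the polling engine (via the `grpc_pollset_set`) waits for. The function names are made up for illustration.

```cpp
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

// Start a non-blocking connect(); returns the fd, or -1 on immediate failure.
int start_async_connect(const sockaddr_in* addr) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return -1;
  fcntl(fd, F_SETFL, O_NONBLOCK);

  int rc = connect(fd, reinterpret_cast<const sockaddr*>(addr), sizeof(*addr));
  if (rc == 0) return fd;       // connected immediately (rare)
  if (errno != EINPROGRESS) {   // real failure
    close(fd);
    return -1;
  }
  // Connection is in progress: the fd must now be monitored for writability.
  // In gRPC this is where the fd is added to a grpc_pollset_set so that
  // whichever polling thread is active sees the 'writable' event.
  return fd;
}

// A stand-in for what the polling engine does: wait until the fd is writable,
// then check SO_ERROR to see whether the connect() actually succeeded.
bool finish_async_connect(int fd) {
  pollfd pfd;
  pfd.fd = fd;
  pfd.events = POLLOUT;
  pfd.revents = 0;
  if (poll(&pfd, 1, /*timeout_ms=*/-1) <= 0) return false;
  int err = 0;
  socklen_t len = sizeof(err);
  getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
  return err == 0;
}
```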
## gRPC server
- The listening fd (i.e., the socket fd corresponding to the server's listening port) is added to each of the server completion queues. Note that gRPC uses the `SO_REUSEPORT` socket option and creates multiple listening fds, but all of them map to the same listening port (a POSIX sketch of this pattern follows the diagram below)
- A new incoming channel is assigned to one of the server completion queues (we currently [round-robin](https://github.com/grpc/grpc/blob/v1.15.1/src/core/lib/iomgr/tcp_server_posix.cc#L231) over the server completion queues)
![image](../images/grpc-server-cq-fds.png)
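For illustration only, here is a minimal POSIX sketch of the `SO_REUSEPORT` pattern mentioned above: several listening fds bound to the same port, each of which can then be added to the server completion queues' pollsets as in the diagram. This is not gRPC's actual `tcp_server_posix.cc` code, and the helper name is hypothetical.

```cpp
#include <cstdint>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

// Create `n` listening fds bound to the same port via SO_REUSEPORT.
// The kernel distributes incoming connections across these sockets.
std::vector<int> create_reuseport_listeners(uint16_t port, int n) {
  std::vector<int> fds;
  for (int i = 0; i < n; ++i) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) break;
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0 ||
        listen(fd, SOMAXCONN) != 0) {
      close(fd);
      break;
    }
    fds.push_back(fd);
  }
  return fds;
}
```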

doc/images/grpc-call-channel-cq.png: binary file added (45 KiB), not shown
doc/images/grpc-client-lb-pss.png: binary file added (55 KiB), not shown
doc/images/grpc-server-cq-fds.png: binary file added (41 KiB), not shown

tools/doxygen/Doxyfile.core
@@ -771,6 +771,7 @@ doc/compression_cookbook.md \
doc/connection-backoff-interop-test-description.md \
doc/connection-backoff.md \
doc/connectivity-semantics-and-api.md \
doc/core/grpc-client-server-polling-engine-usage.md \
doc/core/grpc-cq.md \
doc/core/grpc-error.md \
doc/core/moving-to-c++.md \

tools/doxygen/Doxyfile.core.internal
@@ -771,6 +771,7 @@ doc/compression_cookbook.md \
doc/connection-backoff-interop-test-description.md \
doc/connection-backoff.md \
doc/connectivity-semantics-and-api.md \
doc/core/grpc-client-server-polling-engine-usage.md \
doc/core/grpc-cq.md \
doc/core/grpc-error.md \
doc/core/moving-to-c++.md \
