Mirror of https://github.com/grpc/grpc.git
Commit fdd4496117: 51 changed files with 974 additions and 1609 deletions
@@ -1,44 +1,13 @@

# gRPC in 3 minutes (C++)

# gRPC C++ Examples

## Installation

- **[Hello World][]!** Eager to run your first gRPC example? You'll find
  instructions for building gRPC and running a simple "Hello World" app in [Quick Start][].
- **[Route Guide][].** For a basic tutorial on gRPC, see [gRPC Basics][].

To install gRPC on your system, follow the instructions to build from source
[here](../../BUILDING.md). This also installs the protocol buffer compiler
`protoc` (if you don't have it already), and the C++ gRPC plugin for `protoc`.
For information about the other examples in this directory, see their respective
README files.

## Hello C++ gRPC!

Here's how to build and run the C++ implementation of the [Hello
World](../protos/helloworld.proto) example used in [Getting started](..).

### Client and server implementations

The client implementation is at [greeter_client.cc](helloworld/greeter_client.cc).

The server implementation is at [greeter_server.cc](helloworld/greeter_server.cc).

### Try it!

Build the client and server:

```sh
$ make
```

Run the server, which will listen on port 50051:

```sh
$ ./greeter_server
```

Run the client (in a different terminal):

```sh
$ ./greeter_client
```

If things go smoothly, you will see "Greeter received: Hello world" in the
client's output.

## Tutorial

You can find a more detailed tutorial in [gRPC Basics: C++](cpptutorial.md).

[gRPC Basics]: https://grpc.io/docs/tutorials/basic/cpp
[Hello World]: helloworld
[Quick Start]: https://grpc.io/docs/quickstart/cpp
[Route Guide]: route_guide

@@ -1,488 +0,0 @@

# gRPC Basics: C++

This tutorial provides a basic C++ programmer's introduction to working with
gRPC. By walking through this example you'll learn how to:

- Define a service in a `.proto` file.
- Generate server and client code using the protocol buffer compiler.
- Use the C++ gRPC API to write a simple client and server for your service.

It assumes that you are familiar with
[protocol buffers](https://developers.google.com/protocol-buffers/docs/overview).
Note that the example in this tutorial uses the proto3 version of the protocol
buffers language, which is currently in alpha release: you can find out more in
the [proto3 language guide](https://developers.google.com/protocol-buffers/docs/proto3)
and see the [release notes](https://github.com/google/protobuf/releases) for the
new version in the protocol buffers GitHub repository.

## Why use gRPC?

Our example is a simple route mapping application that lets clients get
information about features on their route, create a summary of their route, and
exchange route information such as traffic updates with the server and other
clients.

With gRPC we can define our service once in a `.proto` file and implement clients
and servers in any of gRPC's supported languages, which in turn can be run in
environments ranging from servers inside Google to your own tablet - all the
complexity of communication between different languages and environments is
handled for you by gRPC. We also get all the advantages of working with protocol
buffers, including efficient serialization, a simple IDL, and easy interface
updating.

## Example code and setup

The example code for our tutorial is in [examples/cpp/route_guide](route_guide).
You also should have the relevant tools installed to generate the server and
client interface code - if you don't already, follow the setup instructions in
[BUILDING.md](../../BUILDING.md).

## Defining the service

Our first step is to define the gRPC *service* and the method *request* and
*response* types using
[protocol buffers](https://developers.google.com/protocol-buffers/docs/overview).
You can see the complete `.proto` file in
[`examples/protos/route_guide.proto`](../protos/route_guide.proto).

To define a service, you specify a named `service` in your `.proto` file:

```protobuf
service RouteGuide {
   ...
}
```

Then you define `rpc` methods inside your service definition, specifying their
request and response types. gRPC lets you define four kinds of service method,
all of which are used in the `RouteGuide` service:

- A *simple RPC* where the client sends a request to the server using the stub
  and waits for a response to come back, just like a normal function call.

  ```protobuf
  // Obtains the feature at a given position.
  rpc GetFeature(Point) returns (Feature) {}
  ```

- A *server-side streaming RPC* where the client sends a request to the server
  and gets a stream to read a sequence of messages back. The client reads from
  the returned stream until there are no more messages. As you can see in our
  example, you specify a server-side streaming method by placing the `stream`
  keyword before the *response* type.

  ```protobuf
  // Obtains the Features available within the given Rectangle. Results are
  // streamed rather than returned at once (e.g. in a response message with a
  // repeated field), as the rectangle may cover a large area and contain a
  // huge number of features.
  rpc ListFeatures(Rectangle) returns (stream Feature) {}
  ```

- A *client-side streaming RPC* where the client writes a sequence of messages
  and sends them to the server, again using a provided stream. Once the client
  has finished writing the messages, it waits for the server to read them all
  and return its response. You specify a client-side streaming method by placing
  the `stream` keyword before the *request* type.

  ```protobuf
  // Accepts a stream of Points on a route being traversed, returning a
  // RouteSummary when traversal is completed.
  rpc RecordRoute(stream Point) returns (RouteSummary) {}
  ```

- A *bidirectional streaming RPC* where both sides send a sequence of messages
  using a read-write stream. The two streams operate independently, so clients
  and servers can read and write in whatever order they like: for example, the
  server could wait to receive all the client messages before writing its
  responses, or it could alternately read a message then write a message, or
  some other combination of reads and writes. The order of messages in each
  stream is preserved. You specify this type of method by placing the `stream`
  keyword before both the request and the response.

  ```protobuf
  // Accepts a stream of RouteNotes sent while a route is being traversed,
  // while receiving other RouteNotes (e.g. from other users).
  rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
  ```

Our `.proto` file also contains protocol buffer message type definitions for all
the request and response types used in our service methods - for example, here's
the `Point` message type:

```protobuf
// Points are represented as latitude-longitude pairs in the E7 representation
// (degrees multiplied by 10**7 and rounded to the nearest integer).
// Latitudes should be in the range +/- 90 degrees and longitude should be in
// the range +/- 180 degrees (inclusive).
message Point {
  int32 latitude = 1;
  int32 longitude = 2;
}
```
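As an aside, the E7 encoding described in the comment above can be sketched in a few lines of plain C++. The helper names here are ours for illustration, not part of the example code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// E7 representation: degrees scaled by 10^7 and rounded to the nearest
// integer, so a coordinate fits in an int32 (as in the Point message above).
int32_t DegreesToE7(double degrees) {
  return static_cast<int32_t>(std::llround(degrees * 1e7));
}

double E7ToDegrees(int32_t e7) { return e7 / 1e7; }
```

The client code later in this tutorial divides by a `kCoordFactor_` constant to undo exactly this scaling.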

## Generating client and server code

Next we need to generate the gRPC client and server interfaces from our `.proto`
service definition. We do this using the protocol buffer compiler `protoc` with
a special gRPC C++ plugin.

For simplicity, we've provided a [Makefile](route_guide/Makefile) that runs
`protoc` for you with the appropriate plugin, input, and output (if you want to
run this yourself, make sure you've installed protoc and followed the gRPC code
[installation instructions](../../BUILDING.md) first):

```shell
$ make route_guide.grpc.pb.cc route_guide.pb.cc
```

which actually runs:

```shell
$ protoc -I ../../protos --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_cpp_plugin` ../../protos/route_guide.proto
$ protoc -I ../../protos --cpp_out=. ../../protos/route_guide.proto
```

Running this command generates the following files in your current directory:
- `route_guide.pb.h`, the header which declares your generated message classes
- `route_guide.pb.cc`, which contains the implementation of your message classes
- `route_guide.grpc.pb.h`, the header which declares your generated service
  classes
- `route_guide.grpc.pb.cc`, which contains the implementation of your service
  classes

These contain:
- All the protocol buffer code to populate, serialize, and retrieve our request
  and response message types
- A class called `RouteGuide` that contains
  - a remote interface type (or *stub*) for clients to call with the methods
    defined in the `RouteGuide` service.
  - two abstract interfaces for servers to implement, also with the methods
    defined in the `RouteGuide` service.

<a name="server"></a>
## Creating the server

First let's look at how we create a `RouteGuide` server. If you're only
interested in creating gRPC clients, you can skip this section and go straight
to [Creating the client](#client) (though you might find it interesting
anyway!).

There are two parts to making our `RouteGuide` service do its job:
- Implementing the service interface generated from our service definition:
  doing the actual "work" of our service.
- Running a gRPC server to listen for requests from clients and return the
  service responses.

You can find our example `RouteGuide` server in
[route_guide/route_guide_server.cc](route_guide/route_guide_server.cc). Let's
take a closer look at how it works.

### Implementing RouteGuide

As you can see, our server has a `RouteGuideImpl` class that implements the
generated `RouteGuide::Service` interface:

```cpp
class RouteGuideImpl final : public RouteGuide::Service {
  ...
};
```

In this case we're implementing the *synchronous* version of `RouteGuide`, which
provides our default gRPC server behaviour. It's also possible to implement an
asynchronous interface, `RouteGuide::AsyncService`, which allows you to further
customize your server's threading behaviour, though we won't look at this in
this tutorial.

`RouteGuideImpl` implements all our service methods. Let's look at the simplest
type first, `GetFeature`, which just gets a `Point` from the client and returns
the corresponding feature information from its database in a `Feature`.

```cpp
Status GetFeature(ServerContext* context, const Point* point,
                  Feature* feature) override {
  feature->set_name(GetFeatureName(*point, feature_list_));
  feature->mutable_location()->CopyFrom(*point);
  return Status::OK;
}
```

The method is passed a context object for the RPC, the client's `Point` protocol
buffer request, and a `Feature` protocol buffer to fill in with the response
information. In the method we populate the `Feature` with the appropriate
information, and then `return` with an `OK` status to tell gRPC that we've
finished dealing with the RPC and that the `Feature` can be returned to the
client.

Now let's look at something a bit more complicated - a streaming RPC.
`ListFeatures` is a server-side streaming RPC, so we need to send back multiple
`Feature`s to our client.

```cpp
Status ListFeatures(ServerContext* context, const Rectangle* rectangle,
                    ServerWriter<Feature>* writer) override {
  auto lo = rectangle->lo();
  auto hi = rectangle->hi();
  long left = std::min(lo.longitude(), hi.longitude());
  long right = std::max(lo.longitude(), hi.longitude());
  long top = std::max(lo.latitude(), hi.latitude());
  long bottom = std::min(lo.latitude(), hi.latitude());
  for (const Feature& f : feature_list_) {
    if (f.location().longitude() >= left &&
        f.location().longitude() <= right &&
        f.location().latitude() >= bottom &&
        f.location().latitude() <= top) {
      writer->Write(f);
    }
  }
  return Status::OK;
}
```

As you can see, instead of getting simple request and response objects in our
method parameters, this time we get a request object (the `Rectangle` in which
our client wants to find `Feature`s) and a special `ServerWriter` object. In the
method, we populate as many `Feature` objects as we need to return, writing them
to the `ServerWriter` using its `Write()` method. Finally, as in our simple RPC,
we `return Status::OK` to tell gRPC that we've finished writing responses.
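The bounding-box test above can also be read in isolation. Here is the same containment logic as a standalone helper over raw E7 integers; the `E7Point` struct is ours, standing in for the generated message types:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Stand-in for the generated Point message: raw E7 coordinates.
struct E7Point {
  int32_t latitude;
  int32_t longitude;
};

// The containment check used by ListFeatures: is p inside the rectangle
// spanned by corners lo and hi?
bool InRectangle(const E7Point& p, const E7Point& lo, const E7Point& hi) {
  // Normalize the corners so the caller may pass them in either order.
  int32_t left = std::min(lo.longitude, hi.longitude);
  int32_t right = std::max(lo.longitude, hi.longitude);
  int32_t bottom = std::min(lo.latitude, hi.latitude);
  int32_t top = std::max(lo.latitude, hi.latitude);
  return p.longitude >= left && p.longitude <= right &&
         p.latitude >= bottom && p.latitude <= top;
}
```

Normalizing the corners with `std::min`/`std::max` is what lets the client send the rectangle's corners in either order.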

If you look at the client-side streaming method `RecordRoute` you'll see it's
quite similar, except this time we get a `ServerReader` instead of a request
object and a single response. We use the `ServerReader`'s `Read()` method to
repeatedly read in our client's requests to a request object (in this case a
`Point`) until there are no more messages: the server needs to check the return
value of `Read()` after each call. If `true`, the stream is still good and it
can continue reading; if `false` the message stream has ended.

```cpp
while (stream->Read(&point)) {
  ... // process client input
}
```
Finally, let's look at our bidirectional streaming RPC `RouteChat()`.

```cpp
Status RouteChat(ServerContext* context,
                 ServerReaderWriter<RouteNote, RouteNote>* stream) override {
  std::vector<RouteNote> received_notes;
  RouteNote note;
  while (stream->Read(&note)) {
    for (const RouteNote& n : received_notes) {
      if (n.location().latitude() == note.location().latitude() &&
          n.location().longitude() == note.location().longitude()) {
        stream->Write(n);
      }
    }
    received_notes.push_back(note);
  }

  return Status::OK;
}
```

This time we get a `ServerReaderWriter` that can be used to read *and* write
messages. The syntax for reading and writing here is exactly the same as for our
client-streaming and server-streaming methods. Although each side will always
get the other's messages in the order they were written, both the client and
server can read and write in any order: the streams operate completely
independently.
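The echo-matching step inside the loop above can be pulled out into a standalone helper; this sketch uses a plain struct in place of the generated `RouteNote` type:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Stand-in for the generated RouteNote message.
struct Note {
  int32_t latitude;
  int32_t longitude;
  std::string message;
};

// RouteChat's matching rule: collect every previously seen note that was
// left at the same location as the incoming one.
std::vector<Note> NotesAtSameLocation(const std::vector<Note>& received,
                                      const Note& incoming) {
  std::vector<Note> matches;
  for (const Note& n : received) {
    if (n.latitude == incoming.latitude && n.longitude == incoming.longitude) {
      matches.push_back(n);
    }
  }
  return matches;
}
```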

### Starting the server

Once we've implemented all our methods, we also need to start up a gRPC server
so that clients can actually use our service. The following snippet shows how we
do this for our `RouteGuide` service:

```cpp
void RunServer(const std::string& db_path) {
  std::string server_address("0.0.0.0:50051");
  RouteGuideImpl service(db_path);

  ServerBuilder builder;
  builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
  builder.RegisterService(&service);
  std::unique_ptr<Server> server(builder.BuildAndStart());
  std::cout << "Server listening on " << server_address << std::endl;
  server->Wait();
}
```

As you can see, we build and start our server using a `ServerBuilder`. To do this, we:

1. Create an instance of our service implementation class `RouteGuideImpl`.
1. Create an instance of the factory `ServerBuilder` class.
1. Specify the address and port we want to use to listen for client requests
   using the builder's `AddListeningPort()` method.
1. Register our service implementation with the builder.
1. Call `BuildAndStart()` on the builder to create and start an RPC server for
   our service.
1. Call `Wait()` on the server to do a blocking wait until the process is killed
   or `Shutdown()` is called.

<a name="client"></a>
## Creating the client

In this section, we'll look at creating a C++ client for our `RouteGuide`
service. You can see our complete example client code in
[route_guide/route_guide_client.cc](route_guide/route_guide_client.cc).

### Creating a stub

To call service methods, we first need to create a *stub*.

First we need to create a gRPC *channel* for our stub, specifying the server
address and port we want to connect to without SSL:

```cpp
grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
```

Now we can use the channel to create our stub using the `NewStub` method
provided in the `RouteGuide` class we generated from our `.proto`.

```cpp
public:
  RouteGuideClient(std::shared_ptr<Channel> channel, const std::string& db)
      : stub_(RouteGuide::NewStub(channel)) {
    ...
  }
```

### Calling service methods

Now let's look at how we call our service methods. Note that in this tutorial
we're calling the *blocking/synchronous* versions of each method: this means
that the RPC call waits for the server to respond, and will either return a
response or an error status.

#### Simple RPC

Calling the simple RPC `GetFeature` is nearly as straightforward as calling a
local method.

```cpp
Point point;
Feature feature;
point = MakePoint(409146138, -746188906);
GetOneFeature(point, &feature);

...

bool GetOneFeature(const Point& point, Feature* feature) {
  ClientContext context;
  Status status = stub_->GetFeature(&context, point, feature);
  ...
}
```

As you can see, we create and populate a request protocol buffer object (in our
case `Point`), and create a response protocol buffer object for the server to
fill in. We also create a `ClientContext` object for our call - you can
optionally set RPC configuration values on this object, such as deadlines,
though for now we'll use the default settings. Note that you cannot reuse this
object between calls. Finally, we call the method on the stub, passing it the
context, request, and response. If the method returns `OK`, then we can read the
response information from the server from our response object.

```cpp
std::cout << "Found feature called " << feature->name() << " at "
          << feature->location().latitude()/kCoordFactor_ << ", "
          << feature->location().longitude()/kCoordFactor_ << std::endl;
```

#### Streaming RPCs

Now let's look at our streaming methods. If you've already read [Creating the
server](#server) some of this may look very familiar - streaming RPCs are
implemented in a similar way on both sides. Here's where we call the server-side
streaming method `ListFeatures`, which returns a stream of geographical
`Feature`s:

```cpp
std::unique_ptr<ClientReader<Feature> > reader(
    stub_->ListFeatures(&context, rect));
while (reader->Read(&feature)) {
  std::cout << "Found feature called "
            << feature.name() << " at "
            << feature.location().latitude()/kCoordFactor_ << ", "
            << feature.location().longitude()/kCoordFactor_ << std::endl;
}
Status status = reader->Finish();
```

Instead of passing the method a context, request, and response, we pass it a
context and request and get a `ClientReader` object back. The client can use the
`ClientReader` to read the server's responses. We use the `ClientReader`'s
`Read()` method to repeatedly read in the server's responses to a response
protocol buffer object (in this case a `Feature`) until there are no more
messages: the client needs to check the return value of `Read()` after each
call. If `true`, the stream is still good and it can continue reading; if
`false` the message stream has ended. Finally, we call `Finish()` on the stream
to complete the call and get our RPC status.

The client-side streaming method `RecordRoute` is similar, except there we pass
the method a context and response object and get back a `ClientWriter`.

```cpp
std::unique_ptr<ClientWriter<Point> > writer(
    stub_->RecordRoute(&context, &stats));
for (int i = 0; i < kPoints; i++) {
  const Feature& f = feature_list_[feature_distribution(generator)];
  std::cout << "Visiting point "
            << f.location().latitude()/kCoordFactor_ << ", "
            << f.location().longitude()/kCoordFactor_ << std::endl;
  if (!writer->Write(f.location())) {
    // Broken stream.
    break;
  }
  std::this_thread::sleep_for(std::chrono::milliseconds(
      delay_distribution(generator)));
}
writer->WritesDone();
Status status = writer->Finish();
if (status.ok()) {
  std::cout << "Finished trip with " << stats.point_count() << " points\n"
            << "Passed " << stats.feature_count() << " features\n"
            << "Travelled " << stats.distance() << " meters\n"
            << "It took " << stats.elapsed_time() << " seconds"
            << std::endl;
} else {
  std::cout << "RecordRoute rpc failed." << std::endl;
}
```

Once we've finished writing our client's requests to the stream using `Write()`,
we need to call `WritesDone()` on the stream to let gRPC know that we've
finished writing, then `Finish()` to complete the call and get our RPC status.
If the status is `OK`, our response object that we initially passed to
`RecordRoute()` will be populated with the server's response.
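The `Write()`/`WritesDone()`/`Finish()` sequence can be sketched with a toy stand-in for `ClientWriter` in plain C++ (no gRPC; the real class sends each message over the wire, and the class name here is ours):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy stand-in for ClientWriter<Point>: Write() buffers a value and reports
// whether the "stream" is still usable; WritesDone() closes the write side;
// Finish() yields a final result the way RecordRoute yields a RouteSummary.
class ToyPointWriter {
 public:
  bool Write(int point) {
    if (done_) return false;  // writing after WritesDone() fails
    points_.push_back(point);
    return true;
  }
  void WritesDone() { done_ = true; }
  std::size_t Finish() const { return points_.size(); }  // the "summary"

 private:
  std::vector<int> points_;
  bool done_ = false;
};
```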

Finally, let's look at our bidirectional streaming RPC `RouteChat()`. In this
case, we just pass a context to the method and get back a `ClientReaderWriter`,
which we can use to both write and read messages.

```cpp
std::shared_ptr<ClientReaderWriter<RouteNote, RouteNote> > stream(
    stub_->RouteChat(&context));
```

The syntax for reading and writing here is exactly the same as for our
client-streaming and server-streaming methods. Although each side will always
get the other's messages in the order they were written, both the client and
server can read and write in any order: the streams operate completely
independently.

## Try it out!

Build the client and server:

```shell
$ make
```

Run the server, which will listen on port 50051:

```shell
$ ./route_guide_server
```

Run the client (in a different terminal):

```shell
$ ./route_guide_client
```
@@ -1,264 +1,6 @@

# gRPC C++ Hello World Tutorial

# gRPC C++ Hello World Example

### Install gRPC

Make sure you have installed gRPC on your system. Follow the
[BUILDING.md](../../../BUILDING.md) instructions.
You can find a complete set of instructions for building gRPC and running the
Hello World app in the [C++ Quick Start][].

### Get the tutorial source code

The example code for this and our other examples lives in the `examples`
directory. Clone this repository at the [latest stable release tag](https://github.com/grpc/grpc/releases)
to your local machine by running the following command:

```sh
$ git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc
```

Change your current directory to `examples/cpp/helloworld`:

```sh
$ cd examples/cpp/helloworld/
```

### Defining a service

The first step in creating our example is to define a *service*: an RPC
service specifies the methods that can be called remotely with their parameters
and return types. As you saw in the
[overview](#protocolbuffers) above, gRPC does this using [protocol
buffers](https://developers.google.com/protocol-buffers/docs/overview). We
use the protocol buffers interface definition language (IDL) to define our
service methods, and define the parameters and return
types as protocol buffer message types. Both the client and the
server use interface code generated from the service definition.

Here's our example service definition, defined using the protocol buffers IDL in
[helloworld.proto](../../protos/helloworld.proto). The `Greeter`
service has one method, `SayHello`, that lets the server receive a single
`HelloRequest`
message from the remote client containing the user's name, then send back
a greeting in a single `HelloReply`. This is the simplest type of RPC you
can specify in gRPC - we'll look at some other types later in this document.

```protobuf
syntax = "proto3";

option java_package = "ex.grpc";

package helloworld;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}
```

<a name="generating"></a>
### Generating gRPC code

Once we've defined our service, we use the protocol buffer compiler
`protoc` to generate the special client and server code we need to create
our application. The generated code contains both stub code for clients to
use and an abstract interface for servers to implement, both with the method
defined in our `Greeter` service.

To generate the client and server side interfaces:

```sh
$ make helloworld.grpc.pb.cc helloworld.pb.cc
```

This internally invokes the protocol buffer compiler as:

```sh
$ protoc -I ../../protos/ --grpc_out=. --plugin=protoc-gen-grpc=grpc_cpp_plugin ../../protos/helloworld.proto
$ protoc -I ../../protos/ --cpp_out=. ../../protos/helloworld.proto
```

### Writing a client

- Create a channel. A channel is a logical connection to an endpoint. A gRPC
  channel can be created with the target address, credentials to use, and
  arguments as follows:

  ```cpp
  auto channel = CreateChannel("localhost:50051", InsecureChannelCredentials());
  ```

- Create a stub. A stub implements the RPC methods of a service; in the
  generated code, a method is provided to create a stub with a channel:

  ```cpp
  auto stub = helloworld::Greeter::NewStub(channel);
  ```

- Make a unary RPC, with `ClientContext` and request/response proto messages.

  ```cpp
  ClientContext context;
  HelloRequest request;
  request.set_name("hello");
  HelloReply reply;
  Status status = stub->SayHello(&context, request, &reply);
  ```

- Check the returned status and response.

  ```cpp
  if (status.ok()) {
    // check reply.message()
  } else {
    // rpc failed.
  }
  ```

For a working example, refer to [greeter_client.cc](greeter_client.cc).

### Writing a server

- Implement the service interface:

  ```cpp
  class GreeterServiceImpl final : public Greeter::Service {
    Status SayHello(ServerContext* context, const HelloRequest* request,
                    HelloReply* reply) override {
      std::string prefix("Hello ");
      reply->set_message(prefix + request->name());
      return Status::OK;
    }
  };
  ```

- Build a server exporting the service:

  ```cpp
  GreeterServiceImpl service;
  ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  builder.RegisterService(&service);
  std::unique_ptr<Server> server(builder.BuildAndStart());
  ```

For a working example, refer to [greeter_server.cc](greeter_server.cc).

### Writing asynchronous client and server

gRPC uses the `CompletionQueue` API for asynchronous operations. The basic
workflow is:

- bind a `CompletionQueue` to an RPC call
- do something, like a read or write, presenting it with a unique `void*` tag
- call `CompletionQueue::Next` to wait for operations to complete. If a tag
  appears, it indicates that the corresponding operation is complete.
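The tag-matching contract above can be sketched with a toy queue in plain C++. This is not gRPC (the class name is ours), and the real `CompletionQueue::Next` blocks until an asynchronous operation finishes rather than returning immediately:

```cpp
#include <cassert>
#include <queue>
#include <utility>

// Toy completion queue illustrating the tag-matching workflow: operations
// are identified only by the opaque void* tag handed in when they started.
class ToyCompletionQueue {
 public:
  // An operation completes: its tag becomes available, with a success flag.
  void Complete(void* tag, bool ok) { done_.push({tag, ok}); }

  // Mirrors the shape of CompletionQueue::Next: hand back the next finished
  // operation's tag and whether it succeeded.
  bool Next(void** tag, bool* ok) {
    if (done_.empty()) return false;  // real gRPC would block here
    *tag = done_.front().first;
    *ok = done_.front().second;
    done_.pop();
    return true;
  }

 private:
  std::queue<std::pair<void*, bool>> done_;
};
```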

#### Async client

The channel and stub creation code is the same as for the sync client.

- Initiate the RPC and create a handle for it. Bind the RPC to a
  `CompletionQueue`:

  ```cpp
  CompletionQueue cq;
  auto rpc = stub->AsyncSayHello(&context, request, &cq);
  ```

- Ask for the reply and final status, with a unique tag:

  ```cpp
  Status status;
  rpc->Finish(&reply, &status, (void*)1);
  ```

- Wait for the completion queue to return the next tag. The reply and status are
  ready once the tag passed into the corresponding `Finish()` call is returned.

  ```cpp
  void* got_tag;
  bool ok = false;
  cq.Next(&got_tag, &ok);
  if (ok && got_tag == (void*)1) {
    // check reply and status
  }
  ```

For a working example, refer to [greeter_async_client.cc](greeter_async_client.cc).

#### Async server

The server implementation requests an RPC call with a tag and then waits for the
completion queue to return the tag. The basic flow is:

- Build a server exporting the async service:

  ```cpp
  helloworld::Greeter::AsyncService service;
  ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051", InsecureServerCredentials());
  builder.RegisterService(&service);
  auto cq = builder.AddCompletionQueue();
  auto server = builder.BuildAndStart();
  ```

- Request one RPC (note that `AddCompletionQueue()` returns a
  `std::unique_ptr`, so we pass `cq.get()`):

  ```cpp
  ServerContext context;
  HelloRequest request;
  ServerAsyncResponseWriter<HelloReply> responder(&context);
  service.RequestSayHello(&context, &request, &responder, cq.get(), cq.get(), (void*)1);
  ```

- Wait for the completion queue to return the tag. The context, request, and
  responder are ready once the tag is retrieved.

  ```cpp
  HelloReply reply;
  Status status;
  void* got_tag;
  bool ok = false;
  cq->Next(&got_tag, &ok);
  if (ok && got_tag == (void*)1) {
    // set reply and status
    responder.Finish(reply, status, (void*)2);
  }
  ```

- Wait for the completion queue to return the tag. The RPC is finished when the
  tag comes back.

  ```cpp
  void* got_tag;
  bool ok = false;
  cq->Next(&got_tag, &ok);
  if (ok && got_tag == (void*)2) {
    // clean up
  }
  ```

To handle multiple RPCs, the async server creates a `CallData` object to
maintain the state of each RPC, and uses the object's address as the unique tag.
For simplicity, the server uses only one completion queue for all events, and
runs a main loop in `HandleRpcs` to query the queue.
|
||||
For a working example, refer to [greeter_async_server.cc](greeter_async_server.cc).

#### Flags for the client

```sh
./greeter_client --target="a target string used to create a GRPC client channel"
```

The default value for `--target` is `"localhost:50051"`.

[C++ Quick Start]: https://grpc.io/docs/quickstart/cpp
@ -0,0 +1,356 @@ |
/*
 *
 * Copyright 2020 gRPC authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 */

/// Event engine based on Apple's CFRunLoop API family. If the CFRunLoop engine
/// is enabled (see iomgr_posix_cfstream.cc), a global thread is started to
/// handle and trigger all the CFStream events. The CFStream streams register
/// themselves with the run loop with functions grpc_apple_register_read_stream
/// and grpc_apple_register_write_stream. Pollsets are dummy and block on a
/// condition variable in pollset_work().

#include <grpc/support/port_platform.h>

#include "src/core/lib/iomgr/port.h"

#ifdef GRPC_APPLE_EV

#include <CoreFoundation/CoreFoundation.h>

#include <list>

#include "src/core/lib/gprpp/thd.h"
#include "src/core/lib/iomgr/ev_apple.h"

grpc_core::DebugOnlyTraceFlag grpc_apple_polling_trace(false, "apple_polling");

#ifndef NDEBUG
#define GRPC_POLLING_TRACE(format, ...)                    \
  if (GRPC_TRACE_FLAG_ENABLED(grpc_apple_polling_trace)) { \
    gpr_log(GPR_DEBUG, "(polling) " format, __VA_ARGS__);  \
  }
#else
#define GRPC_POLLING_TRACE(...)
#endif  // NDEBUG

#define GRPC_POLLSET_KICK_BROADCAST ((grpc_pollset_worker*)1)

struct GlobalRunLoopContext {
  grpc_core::CondVar init_cv;
  grpc_core::CondVar input_source_cv;

  grpc_core::Mutex mu;

  // Whether an input source registration is pending. Protected by mu.
  bool input_source_registered = false;

  // The reference to the global run loop object. Protected by mu.
  CFRunLoopRef run_loop;

  // Whether the pollset has been globally shut down. Protected by mu.
  bool is_shutdown = false;
};

struct GrpcAppleWorker {
  // The condition variable to kick the worker. Works with the pollset's lock
  // (GrpcApplePollset.mu).
  grpc_core::CondVar cv;

  // Whether the worker is kicked. Protected by the pollset's lock
  // (GrpcApplePollset.mu).
  bool kicked = false;
};

struct GrpcApplePollset {
  grpc_core::Mutex mu;

  // Tracks the current workers in the pollset. Protected by mu.
  std::list<GrpcAppleWorker*> workers;

  // Whether the pollset is shut down. Protected by mu.
  bool is_shutdown = false;

  // Closure to call when shutdown is done. Protected by mu.
  grpc_closure* shutdown_closure;

  // Whether there's an outstanding kick that was not processed. Protected by
  // mu.
  bool kicked_without_poller = false;
};

static GlobalRunLoopContext* gGlobalRunLoopContext = nullptr;
static grpc_core::Thread* gGlobalRunLoopThread = nullptr;

/// Register the stream with the dispatch queue. Callbacks of the stream will be
/// issued to the dispatch queue when a network event happens and will be
/// managed by Grand Central Dispatch.
static void grpc_apple_register_read_stream_queue(
    CFReadStreamRef read_stream, dispatch_queue_t dispatch_queue) {
  CFReadStreamSetDispatchQueue(read_stream, dispatch_queue);
}

/// Register the stream with the dispatch queue. Callbacks of the stream will be
/// issued to the dispatch queue when a network event happens and will be
/// managed by Grand Central Dispatch.
static void grpc_apple_register_write_stream_queue(
    CFWriteStreamRef write_stream, dispatch_queue_t dispatch_queue) {
  CFWriteStreamSetDispatchQueue(write_stream, dispatch_queue);
}

/// Register the stream with the global run loop. Callbacks of the stream will
/// be issued to the run loop when a network event happens and will be driven by
/// the global run loop thread gGlobalRunLoopThread.
static void grpc_apple_register_read_stream_run_loop(
    CFReadStreamRef read_stream, dispatch_queue_t dispatch_queue) {
  GRPC_POLLING_TRACE("Register read stream: %p", read_stream);
  grpc_core::MutexLock lock(&gGlobalRunLoopContext->mu);
  CFReadStreamScheduleWithRunLoop(read_stream, gGlobalRunLoopContext->run_loop,
                                  kCFRunLoopDefaultMode);
  gGlobalRunLoopContext->input_source_registered = true;
  gGlobalRunLoopContext->input_source_cv.Signal();
}

/// Register the stream with the global run loop. Callbacks of the stream will
/// be issued to the run loop when a network event happens, and will be driven
/// by the global run loop thread gGlobalRunLoopThread.
static void grpc_apple_register_write_stream_run_loop(
    CFWriteStreamRef write_stream, dispatch_queue_t dispatch_queue) {
  GRPC_POLLING_TRACE("Register write stream: %p", write_stream);
  grpc_core::MutexLock lock(&gGlobalRunLoopContext->mu);
  CFWriteStreamScheduleWithRunLoop(
      write_stream, gGlobalRunLoopContext->run_loop, kCFRunLoopDefaultMode);
  gGlobalRunLoopContext->input_source_registered = true;
  gGlobalRunLoopContext->input_source_cv.Signal();
}

/// The default implementation of stream registration is to register the stream
/// to a dispatch queue. However, if the CFRunLoop based pollset is enabled (by
/// macro and environment variable, see docs in iomgr_posix_cfstream.cc), the
/// CFStream streams are registered with the global run loop instead (see
/// pollset_global_init below).
static void (*grpc_apple_register_read_stream_impl)(
    CFReadStreamRef, dispatch_queue_t) = grpc_apple_register_read_stream_queue;
static void (*grpc_apple_register_write_stream_impl)(CFWriteStreamRef,
                                                     dispatch_queue_t) =
    grpc_apple_register_write_stream_queue;

void grpc_apple_register_read_stream(CFReadStreamRef read_stream,
                                     dispatch_queue_t dispatch_queue) {
  grpc_apple_register_read_stream_impl(read_stream, dispatch_queue);
}

void grpc_apple_register_write_stream(CFWriteStreamRef write_stream,
                                      dispatch_queue_t dispatch_queue) {
  grpc_apple_register_write_stream_impl(write_stream, dispatch_queue);
}

/// Drive the run loop in a global singleton thread until the global run loop is
/// shut down.
static void GlobalRunLoopFunc(void* arg) {
  grpc_core::ReleasableMutexLock lock(&gGlobalRunLoopContext->mu);
  gGlobalRunLoopContext->run_loop = CFRunLoopGetCurrent();
  gGlobalRunLoopContext->init_cv.Signal();

  while (!gGlobalRunLoopContext->is_shutdown) {
    // CFRunLoopRun() will return immediately if no stream is registered on it.
    // So we wait on a condition variable until a stream is registered;
    // otherwise we'll be running a spinning loop.
    while (!gGlobalRunLoopContext->input_source_registered) {
      gGlobalRunLoopContext->input_source_cv.Wait(&gGlobalRunLoopContext->mu);
    }
    gGlobalRunLoopContext->input_source_registered = false;
    lock.Unlock();
    CFRunLoopRun();
    lock.Lock();
  }
  lock.Unlock();
}

// pollset implementation

static void pollset_global_init(void) {
  gGlobalRunLoopContext = new GlobalRunLoopContext;

  grpc_apple_register_read_stream_impl =
      grpc_apple_register_read_stream_run_loop;
  grpc_apple_register_write_stream_impl =
      grpc_apple_register_write_stream_run_loop;

  grpc_core::MutexLock lock(&gGlobalRunLoopContext->mu);
  gGlobalRunLoopThread =
      new grpc_core::Thread("apple_ev", GlobalRunLoopFunc, nullptr);
  gGlobalRunLoopThread->Start();
  while (gGlobalRunLoopContext->run_loop == NULL)
    gGlobalRunLoopContext->init_cv.Wait(&gGlobalRunLoopContext->mu);
}

static void pollset_global_shutdown(void) {
  {
    grpc_core::MutexLock lock(&gGlobalRunLoopContext->mu);
    gGlobalRunLoopContext->is_shutdown = true;
    CFRunLoopStop(gGlobalRunLoopContext->run_loop);
  }
  gGlobalRunLoopThread->Join();
  delete gGlobalRunLoopThread;
  delete gGlobalRunLoopContext;
}

/// The caller must acquire the lock GrpcApplePollset.mu before calling this
/// function. The lock may be temporarily released when waiting on the condition
/// variable but will be re-acquired before the function returns.
///
/// The Apple pollset simply waits on a condition variable until it is kicked.
/// The network events are handled in the global run loop thread. Processing of
/// these events will eventually trigger the kick.
static grpc_error* pollset_work(grpc_pollset* pollset,
                                grpc_pollset_worker** worker,
                                grpc_millis deadline) {
  GRPC_POLLING_TRACE("pollset work: %p, worker: %p, deadline: %" PRIu64,
                     pollset, worker, deadline);
  GrpcApplePollset* apple_pollset =
      reinterpret_cast<GrpcApplePollset*>(pollset);
  GrpcAppleWorker actual_worker;
  if (worker) {
    *worker = reinterpret_cast<grpc_pollset_worker*>(&actual_worker);
  }

  if (apple_pollset->kicked_without_poller) {
    // Process the outstanding kick and reset the flag. Do not block.
    apple_pollset->kicked_without_poller = false;
  } else {
    // Block until kicked, timed out, or the pollset shuts down.
    apple_pollset->workers.push_front(&actual_worker);
    auto it = apple_pollset->workers.begin();

    while (!actual_worker.kicked && !apple_pollset->is_shutdown) {
      if (actual_worker.cv.Wait(
              &apple_pollset->mu,
              grpc_millis_to_timespec(deadline, GPR_CLOCK_REALTIME))) {
        // timed out
        break;
      }
    }

    apple_pollset->workers.erase(it);

    // If the pollset is shut down asynchronously and this is the last pending
    // worker, the shutdown process is complete at this moment and the shutdown
    // callback will be called.
    if (apple_pollset->is_shutdown && apple_pollset->workers.empty()) {
      grpc_core::ExecCtx::Run(DEBUG_LOCATION, apple_pollset->shutdown_closure,
                              GRPC_ERROR_NONE);
    }
  }

  return GRPC_ERROR_NONE;
}

/// Kick a specific worker. The caller must acquire the lock GrpcApplePollset.mu
/// before calling this function.
static void kick_worker(GrpcAppleWorker* worker) {
  worker->kicked = true;
  worker->cv.Signal();
}

/// The caller must acquire the lock GrpcApplePollset.mu before calling this
/// function. The kick action simply signals the condition variable of the
/// worker.
static grpc_error* pollset_kick(grpc_pollset* pollset,
                                grpc_pollset_worker* specific_worker) {
  GrpcApplePollset* apple_pollset =
      reinterpret_cast<GrpcApplePollset*>(pollset);

  GRPC_POLLING_TRACE("pollset kick: %p, worker:%p", pollset, specific_worker);

  if (specific_worker == nullptr) {
    if (apple_pollset->workers.empty()) {
      apple_pollset->kicked_without_poller = true;
    } else {
      GrpcAppleWorker* actual_worker = apple_pollset->workers.front();
      kick_worker(actual_worker);
    }
  } else if (specific_worker == GRPC_POLLSET_KICK_BROADCAST) {
    for (auto& actual_worker : apple_pollset->workers) {
      kick_worker(actual_worker);
    }
  } else {
    GrpcAppleWorker* actual_worker =
        reinterpret_cast<GrpcAppleWorker*>(specific_worker);
    kick_worker(actual_worker);
  }

  return GRPC_ERROR_NONE;
}

static void pollset_init(grpc_pollset* pollset, gpr_mu** mu) {
  GRPC_POLLING_TRACE("pollset init: %p", pollset);
  GrpcApplePollset* apple_pollset = new (pollset) GrpcApplePollset();
  *mu = apple_pollset->mu.get();
}

/// The caller must acquire the lock GrpcApplePollset.mu before calling this
/// function.
static void pollset_shutdown(grpc_pollset* pollset, grpc_closure* closure) {
  GRPC_POLLING_TRACE("pollset shutdown: %p", pollset);

  GrpcApplePollset* apple_pollset =
      reinterpret_cast<GrpcApplePollset*>(pollset);
  apple_pollset->is_shutdown = true;
  pollset_kick(pollset, GRPC_POLLSET_KICK_BROADCAST);

  // If there is any worker blocked, shutdown will be done asynchronously.
  if (apple_pollset->workers.empty()) {
    grpc_core::ExecCtx::Run(DEBUG_LOCATION, closure, GRPC_ERROR_NONE);
  } else {
    apple_pollset->shutdown_closure = closure;
  }
}

static void pollset_destroy(grpc_pollset* pollset) {
  GRPC_POLLING_TRACE("pollset destroy: %p", pollset);
  GrpcApplePollset* apple_pollset =
      reinterpret_cast<GrpcApplePollset*>(pollset);
  apple_pollset->~GrpcApplePollset();
}

size_t pollset_size(void) { return sizeof(GrpcApplePollset); }

grpc_pollset_vtable grpc_apple_pollset_vtable = {
    pollset_global_init, pollset_global_shutdown,
    pollset_init,        pollset_shutdown,
    pollset_destroy,     pollset_work,
    pollset_kick,        pollset_size};

// pollset_set implementation

grpc_pollset_set* pollset_set_create(void) { return nullptr; }
void pollset_set_destroy(grpc_pollset_set* pollset_set) {}
void pollset_set_add_pollset(grpc_pollset_set* pollset_set,
                             grpc_pollset* pollset) {}
void pollset_set_del_pollset(grpc_pollset_set* pollset_set,
                             grpc_pollset* pollset) {}
void pollset_set_add_pollset_set(grpc_pollset_set* bag,
                                 grpc_pollset_set* item) {}
void pollset_set_del_pollset_set(grpc_pollset_set* bag,
                                 grpc_pollset_set* item) {}

grpc_pollset_set_vtable grpc_apple_pollset_set_vtable = {
    pollset_set_create,          pollset_set_destroy,
    pollset_set_add_pollset,     pollset_set_del_pollset,
    pollset_set_add_pollset_set, pollset_set_del_pollset_set};

#endif
@ -0,0 +1,43 @@ |
||||
/*
|
||||
* |
||||
* Copyright 2020 gRPC authors. |
||||
* |
||||
* Licensed under the Apache License, Version 2.0 (the "License"); |
||||
* you may not use this file except in compliance with the License. |
||||
* You may obtain a copy of the License at |
||||
* |
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
* |
||||
* Unless required by applicable law or agreed to in writing, software |
||||
* distributed under the License is distributed on an "AS IS" BASIS, |
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
||||
* See the License for the specific language governing permissions and |
||||
* limitations under the License. |
||||
* |
||||
*/ |
||||
|
||||
#ifndef GRPC_CORE_LIB_IOMGR_EV_APPLE_H |
||||
#define GRPC_CORE_LIB_IOMGR_EV_APPLE_H |
||||
|
||||
#include <grpc/support/port_platform.h> |
||||
|
||||
#ifdef GRPC_APPLE_EV |
||||
|
||||
#include <CoreFoundation/CoreFoundation.h> |
||||
|
||||
#include "src/core/lib/iomgr/pollset.h" |
||||
#include "src/core/lib/iomgr/pollset_set.h" |
||||
|
||||
void grpc_apple_register_read_stream(CFReadStreamRef read_stream, |
||||
dispatch_queue_t dispatch_queue); |
||||
|
||||
void grpc_apple_register_write_stream(CFWriteStreamRef write_stream, |
||||
dispatch_queue_t dispatch_queue); |
||||
|
||||
extern grpc_pollset_vtable grpc_apple_pollset_vtable; |
||||
|
||||
extern grpc_pollset_set_vtable grpc_apple_pollset_set_vtable; |
||||
|
||||
#endif |
||||
|
||||
#endif |