I went through and reorganized things to make the v2 docs more
human-browsable, and did a few miscellaneous cleanups along the way.
There is a lot more to do here, which I'm hoping to find a paid
contractor to work on, but this is a step in the right direction.
Signed-off-by: Matt Klein <mklein@lyft.com>
There are several main changes in this PR:
* Create the envoy.api.v2.core package to break circular dependencies from xDS onto subpackages onto base protos.
* Create an individual package for each filter and add independent versioning to each filter.
* Add visibility constraints to prevent the formation of dependency cycles.
* Add gogoproto annotations to improve Go code generation.
After moving the xDS service definitions and top-level resource protos back to envoy.api.v2, dependency cycles appeared: the second-level definitions depend on the base protobuf definitions and are in turn included from xDS, yet xDS and the base definitions were in the same package.
The solution is to split the base protos into a separate package, envoy.api.v2.core. That eliminates the dependency cycles (validated using go-control-plane).
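As a rough illustration (the file layout and message contents here are assumptions, not the actual API), the dependency now points in one direction only: protos in envoy.api.v2 import from envoy.api.v2.core, and nothing in core imports back.

  // envoy/api/v2/core/address.proto (illustrative)
  syntax = "proto3";
  package envoy.api.v2.core;

  message SocketAddress {
    string address = 1;
    uint32 port_value = 2;
  }

  // envoy/api/v2/cds.proto (illustrative)
  syntax = "proto3";
  package envoy.api.v2;

  import "envoy/api/v2/core/address.proto";

  message Cluster {
    // Base types come from the core subpackage; core never needs to
    // import the xDS-level protos, so no cycle can form.
    core.SocketAddress upstream_address = 1;
  }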
Added a few gogoproto annotations to improve golang code generation.
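For example (a minimal sketch; the actual annotations applied may differ), field-level gogoproto options change the generated Go types:

  syntax = "proto3";
  package envoy.api.v2.core;

  import "google/protobuf/duration.proto";
  import "gogoproto/gogo.proto";

  message HealthCheck {
    // (gogoproto.stdduration) maps the field to a native Go time.Duration,
    // and (gogoproto.nullable) = false makes it a value instead of a pointer.
    google.protobuf.Duration timeout = 1
        [(gogoproto.stdduration) = true, (gogoproto.nullable) = false];
  }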
Signed-off-by: Kuat Yessenov <kuat@google.com>
In support of https://github.com/envoyproxy/envoy/issues/2200 and some
Google internal needs, we are planning on adding support to Envoy for a
configuration (or possibly build) driven decision on whether to use the
existing in-built Grpc::AsyncClient or the Google C++ gRPC client
library (https://grpc.io/grpc/cpp/index.html).
To move in this direction, the idea is to have the xDS ApiConfigSource,
the rate limit service config, and other filter configurations point at
a GrpcService object. This can be configured to use an Envoy cluster,
in which case Grpc::AsyncClient will orchestrate communication, or to
contain the config needed to establish a channel with the Google C++
gRPC client library.
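A rough sketch of the shape this could take (message and field names are illustrative assumptions, not the final API):

  message GrpcService {
    message EnvoyGrpc {
      // Name of an upstream cluster; the in-built Grpc::AsyncClient
      // orchestrates communication over it.
      string cluster_name = 1;
    }
    message GoogleGrpc {
      // Everything needed to establish a channel with the Google C++
      // gRPC client library.
      string target_uri = 1;
      // ... credentials, channel arguments, etc.
    }
    // Exactly one client implementation is selected per service.
    oneof target_specifier {
      EnvoyGrpc envoy_grpc = 1;
      GoogleGrpc google_grpc = 2;
    }
  }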
Signed-off-by: Harvey Tuch <htuch@google.com>
This is code movement only, with no other changes. There should be
no namespace changes or effects on consumers.
Signed-off-by: Matt Klein <mklein@lyft.com>
This patch adds an overview page introducing the v2 API concepts via a
worked example. It brings in the entire transitive dependency set of
protos from bootstrap.proto; none of these have been cleaned up beyond
the minimum required to have them build under Sphinx.
Also added the ability to link to the underlying proto from
messages/enums in protodoc.py-generated RST.
Signed-off-by: Harvey Tuch <htuch@google.com>
It turns out that files with only service methods don't get loaded into
the descriptor pool automatically in C++, so ads.proto needed to contain
some messages. This turned out to be a good opportunity to move some of
the discovery-related messages out of base.proto.
This shouldn't break the API, since everything remains in the
envoy.api.v2 package space.
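A simplified sketch of the resulting file (the real messages carry more fields):

  syntax = "proto3";
  package envoy.api.v2;

  import "google/protobuf/any.proto";

  // Discovery-related messages moved here out of base.proto; having at
  // least one message in the file ensures the C++ descriptor pool loads it.
  message DiscoveryRequest {
    string version_info = 1;
    repeated string resource_names = 2;
    string type_url = 3;
  }

  message DiscoveryResponse {
    string version_info = 1;
    repeated google.protobuf.Any resources = 2;
  }

  service AggregatedDiscoveryService {
    // A single bidirectional stream carries all xDS resource types.
    rpc StreamAggregatedResources(stream DiscoveryRequest)
        returns (stream DiscoveryResponse) {
    }
  }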
This is a design-level update to bootstrap.proto that plumbs in the
remaining top-level config from v1. It will probably see some small
changes beyond this as we implement.
Notable differences from v1:
* Static/dynamic resources are clearly delineated at the top level; clusters no longer belong to the ClusterManager object.
* Stats sinks are a repeated list of opaque configs, similar to filters (see the sketch after this list).
* Some simplifications to object types, e.g. RLS no longer specifies a type (do we want to preserve the v1 generality here?).
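A condensed sketch of the resulting top-level layout (field names and numbers are approximations of the design, not the final proto):

  message Bootstrap {
    // Resources fully specified inline at startup.
    message StaticResources {
      repeated Listener listeners = 1;
      repeated Cluster clusters = 2;
    }
    // Resources discovered at runtime via xDS.
    message DynamicResources {
      ConfigSource lds_config = 1;
      ConfigSource cds_config = 2;
      ApiConfigSource ads_config = 3;
    }
    StaticResources static_resources = 1;
    DynamicResources dynamic_resources = 2;
    // Opaque per-sink config, mirroring how filters are configured.
    repeated StatsSink stats_sinks = 3;
  }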
Also renamed RLDS back to RLS; I'll admit it didn't make sense to cram
it into the xDS namespace, since it's really a very distinct service on
the data plane and shouldn't be bundled with the control-plane services.
For envoy/#1411
Adding it as a structured proto because, in the long run, I anticipate us wanting per-cluster port ranges to bind to; if we're eventually going to allow the bootstrap and cluster config to interact, it makes sense to keep all the interacting bits in a separate message.
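Something along these lines (a hypothetical sketch; the message and field names are my assumptions):

  // Keeping the bind settings in their own message leaves room to add
  // fields later (e.g. per-cluster port ranges) without churn in the
  // surrounding bootstrap or cluster config.
  message BindConfig {
    // Address that upstream connections are bound to before connecting.
    SocketAddress source_address = 1;
  }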
This patch adds a static bootstrap proto that is expected to be provided
on the filesystem or the command line. This should enable Envoy to then
either fetch the rest of its config from disk or reach out to the
various management servers for the remaining APIs.
Fixes #93.