docs: migrate main Envoy repo docs to data-plane-api. (#232)

Signed-off-by: Harvey Tuch <htuch@google.com>
htuch committed via GitHub
parent 735db49401
commit 320b5afbd0
Changed files (lines changed, path):

24  docs/README.md
2  docs/build.sh
6  docs/conf.py
8  docs/index.rst
32  docs/publish.sh
4  docs/root/_static/docker_compose_v0.1.svg
4  docs/root/_static/double_proxy.svg
4  docs/root/_static/front_proxy.svg
0  docs/root/_static/placeholder
4  docs/root/_static/service_to_service.svg
19  docs/root/about_docs.rst
12  docs/root/api/api.rst
308  docs/root/configuration/access_log.rst
66  docs/root/configuration/cluster_manager/cds.rst
201  docs/root/configuration/cluster_manager/cluster.rst
73  docs/root/configuration/cluster_manager/cluster_circuit_breakers.rst
140  docs/root/configuration/cluster_manager/cluster_hc.rst
67  docs/root/configuration/cluster_manager/cluster_manager.rst
85  docs/root/configuration/cluster_manager/cluster_outlier_detection.rst
130  docs/root/configuration/cluster_manager/cluster_runtime.rst
82  docs/root/configuration/cluster_manager/cluster_ssl.rst
193  docs/root/configuration/cluster_manager/cluster_stats.rst
15  docs/root/configuration/cluster_manager/outlier.rst
24  docs/root/configuration/cluster_manager/sds.rst
60  docs/root/configuration/cluster_manager/sds_api.rst
17  docs/root/configuration/configuration.rst
21  docs/root/configuration/http_conn_man/filters.rst
35  docs/root/configuration/http_conn_man/header_sanitizing.rst
276  docs/root/configuration/http_conn_man/headers.rst
226  docs/root/configuration/http_conn_man/http_conn_man.rst
86  docs/root/configuration/http_conn_man/rds.rst
255  docs/root/configuration/http_conn_man/route_config/rate_limits.rst
509  docs/root/configuration/http_conn_man/route_config/route.rst
90  docs/root/configuration/http_conn_man/route_config/route_config.rst
14  docs/root/configuration/http_conn_man/route_config/route_matching.rst
136  docs/root/configuration/http_conn_man/route_config/traffic_splitting.rst
47  docs/root/configuration/http_conn_man/route_config/vcluster.rst
84  docs/root/configuration/http_conn_man/route_config/vhost.rst
25  docs/root/configuration/http_conn_man/runtime.rst
85  docs/root/configuration/http_conn_man/stats.rst
23  docs/root/configuration/http_conn_man/tracing.rst
38  docs/root/configuration/http_filters/buffer_filter.rst
65  docs/root/configuration/http_filters/cors_filter.rst
82  docs/root/configuration/http_filters/dynamodb_filter.rst
177  docs/root/configuration/http_filters/fault_filter.rst
55  docs/root/configuration/http_filters/grpc_http1_bridge_filter.rst
85  docs/root/configuration/http_filters/grpc_json_transcoder_filter.rst
16  docs/root/configuration/http_filters/grpc_web_filter.rst
35  docs/root/configuration/http_filters/health_check_filter.rst
20  docs/root/configuration/http_filters/http_filters.rst
47  docs/root/configuration/http_filters/ip_tagging_filter.rst
353  docs/root/configuration/http_filters/lua_filter.rst
79  docs/root/configuration/http_filters/rate_limit_filter.rst
307  docs/root/configuration/http_filters/router_filter.rst
21  docs/root/configuration/listeners/filters.rst
84  docs/root/configuration/listeners/lds.rst
107  docs/root/configuration/listeners/listeners.rst
8  docs/root/configuration/listeners/runtime.rst
125  docs/root/configuration/listeners/ssl.rst
24  docs/root/configuration/listeners/stats.rst
98  docs/root/configuration/network_filters/client_ssl_auth_filter.rst
12  docs/root/configuration/network_filters/echo_filter.rst
212  docs/root/configuration/network_filters/mongo_proxy_filter.rst
18  docs/root/configuration/network_filters/network_filters.rst
71  docs/root/configuration/network_filters/rate_limit_filter.rst
107  docs/root/configuration/network_filters/redis_proxy_filter.rst
146  docs/root/configuration/network_filters/tcp_proxy_filter.rst
27  docs/root/configuration/overview/admin.rst
120  docs/root/configuration/overview/overview.rst
37  docs/root/configuration/overview/rate_limit.rst
107  docs/root/configuration/overview/runtime.rst
69  docs/root/configuration/overview/tracing.rst
170  docs/root/configuration/tools/router_check.rst
10  docs/root/extending/extending.rst
BIN  docs/root/favicon.ico
13  docs/root/index.rst
8  docs/root/install/building.rst
14  docs/root/install/install.rst
6  docs/root/install/installation.rst
70  docs/root/install/ref_configs.rst
37  docs/root/install/requirements.rst
228  docs/root/install/sandboxes/front_proxy.rst
68  docs/root/install/sandboxes/grpc_bridge.rst
81  docs/root/install/sandboxes/jaeger_tracing.rst
35  docs/root/install/sandboxes/local_docker_build.rst
17  docs/root/install/sandboxes/sandboxes.rst
82  docs/root/install/sandboxes/zipkin_tracing.rst
30  docs/root/install/tools/config_load_check_tool.rst
65  docs/root/install/tools/route_table_check_tool.rst
33  docs/root/install/tools/schema_validator_check_tool.rst
9  docs/root/install/tools/tools.rst
19  docs/root/intro/arch_overview/access_logging.rst
37  docs/root/intro/arch_overview/arch_overview.rst
38  docs/root/intro/arch_overview/circuit_breaking.rst
26  docs/root/intro/arch_overview/cluster_manager.rst
37  docs/root/intro/arch_overview/connection_pooling.rst
35  docs/root/intro/arch_overview/draining.rst
81  docs/root/intro/arch_overview/dynamic_configuration.rst
18  docs/root/intro/arch_overview/dynamo.rst
31  docs/root/intro/arch_overview/global_rate_limiting.rst
Some files were not shown because too many files have changed in this diff.

@@ -0,0 +1,24 @@
# Developer-local docs build
```bash
./docs/build.sh
```
The output can be found in `generated/docs`.
# How the Envoy website and docs are updated
The Envoy website and docs are automatically built and pushed on every commit
to master. This process is handled by Travis CI with the
[`publish.sh`](https://github.com/envoyproxy/envoy/blob/master/docs/publish.sh) script.
To support this automatic process, there is an encrypted SSH key at the root
of the Envoy repo (`.publishdocskey.enc`). This key was encrypted with the Travis CLI
and can only be decrypted by commits initiated in the Envoy repo, not by PRs
submitted from forks. This is because only PRs initiated in the Envoy
repo have access to the secure environment variables (`encrypted_b1a4cc52fa4a_iv`,
`encrypted_b1a4cc52fa4a_key`) [used to decrypt the key.](https://docs.travis-ci.com/user/pull-requests#Pull-Requests-and-Security-Restrictions)
The key only has write access to the Envoy repo. If the key or the variables
used to decrypt it are ever compromised, delete the key immediately from the
Envoy repo in `Settings > Deploy keys`.

@@ -12,7 +12,7 @@ mkdir -p "${DOCS_OUTPUT_DIR}"
rm -rf "${GENERATED_RST_DIR}"
mkdir -p "${GENERATED_RST_DIR}"
cp -f "${SCRIPT_DIR}"/{conf.py,index.rst} "${GENERATED_RST_DIR}"
rsync -av "${SCRIPT_DIR}"/root/ "${SCRIPT_DIR}"/conf.py "${GENERATED_RST_DIR}"
if [ ! -d "${BUILD_DIR}"/venv ]; then
virtualenv "${BUILD_DIR}"/venv --no-site-packages

@@ -35,7 +35,7 @@ extensions = ['sphinxcontrib.httpdomain', 'sphinx.ext.extlinks']
extlinks = {'repo': ('https://github.com/envoyproxy/envoy/blob/master/%s', '')}
# Add any paths that contain templates here, relative to this directory.
#templates_path = ['_templates']
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
@@ -139,12 +139,12 @@ html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = 'favicon.ico'
html_favicon = 'favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied

@@ -1,8 +0,0 @@
Envoy v2 API documentation
==========================
.. toctree::
:glob:
:maxdepth: 2
**

@@ -0,0 +1,32 @@
#!/bin/bash
set -e
DOCS_DIR=generated/docs
CHECKOUT_DIR=../envoy-docs
PUBLISH_DIR="$CHECKOUT_DIR"/envoy
BUILD_SHA=`git rev-parse HEAD`
if [ -z "$CIRCLE_PULL_REQUEST" ] && [ "$CIRCLE_BRANCH" == "master" ]
then
echo 'cloning'
git clone git@github.com:envoyproxy/envoyproxy.github.io "$CHECKOUT_DIR"
git -C "$CHECKOUT_DIR" fetch
git -C "$CHECKOUT_DIR" checkout -B master origin/master
rm -fr "$PUBLISH_DIR"
mkdir -p "$PUBLISH_DIR"
cp -r "$DOCS_DIR"/* "$PUBLISH_DIR"
cd "$CHECKOUT_DIR"
git config user.name "envoy-docs(travis)"
git config user.email envoy-docs@users.noreply.github.com
echo 'add'
git add .
echo 'commit'
git commit -m "docs @$BUILD_SHA"
echo 'push'
git push origin master
else
echo "Ignoring PR branch for docs push"
fi

(Diffs for the four added SVG images are suppressed because their lines are too long. Sizes: 295 KiB, 67 KiB, 54 KiB, and 52 KiB.)

@@ -0,0 +1,19 @@
About the documentation
=======================
The Envoy documentation is composed of a few major sections:
* :ref:`Introduction <intro>`: This section covers a general overview of what Envoy is, an
architecture overview, how it is typically deployed, etc.
* :ref:`Installation <install>`: How to build/install Envoy using Docker.
* :ref:`Configuration <config>`: Detailed configuration instructions. Where relevant, the
configuration guide also contains information on statistics, runtime configuration, and REST
APIs.
* :ref:`Operations <operations>`: General information on how to operate Envoy including the command
line interface, hot restart wrapper, administration interface, a general statistics overview,
etc.
* :ref:`Extending Envoy <extending>`: Information on how to write custom filters for Envoy.

@@ -0,0 +1,12 @@
Envoy v2 API reference
======================
.. CAUTION::
This documentation subtree is a work-in-progress and does not yet contain
complete documentation for the v2 API. Stay tuned for updates.
.. toctree::
:glob:
:maxdepth: 2
**

@@ -0,0 +1,308 @@
.. _config_access_log:
Access logging
==============
Configuration
-------------------------
Access logs are configured as part of the :ref:`HTTP connection manager config
<config_http_conn_man>` or :ref:`TCP Proxy <config_network_filters_tcp_proxy>`.
.. code-block:: json
{
"access_log": [
{
"path": "...",
"format": "...",
"filter": "{...}"
}
]
}
.. _config_access_log_path_param:
path
*(required, string)* Path the access log is written to.
.. _config_access_log_format_param:
format
*(optional, string)* Access log format. Envoy supports :ref:`custom access log formats
<config_access_log_format>` as well as a :ref:`default format
<config_access_log_default_format>`.
.. _config_access_log_filter_param:
filter
*(optional, object)* :ref:`Filter <config_http_con_manager_access_log_filters>` which is used to
determine if the access log needs to be written.
.. _config_access_log_format:
Format rules
------------
The access log format string contains either command operators or other characters interpreted as a
plain string. The access log formatter does not make any assumptions about a new line separator, so one
has to be specified as part of the format string.
See the :ref:`default format <config_access_log_default_format>` for an example.
Note that the access log line will contain a '-' character for every unset/empty value.
The same format strings are used by different types of access logs (such as HTTP and TCP). Some
fields may have slightly different meanings, depending on what type of log it is. Differences
are noted.
The following command operators are supported:
%START_TIME%
HTTP
Request start time including milliseconds.
TCP
Downstream connection start time including milliseconds.
%BYTES_RECEIVED%
HTTP
Body bytes received.
TCP
Downstream bytes received on connection.
%PROTOCOL%
HTTP
Protocol. Currently either *HTTP/1.1* or *HTTP/2*.
TCP
Not implemented ("-").
%RESPONSE_CODE%
HTTP
HTTP response code. Note that a response code of '0' means that the server never sent the
beginning of a response. This generally means that the (downstream) client disconnected.
TCP
Not implemented ("-").
%BYTES_SENT%
HTTP
Body bytes sent.
TCP
Downstream bytes sent on connection.
%DURATION%
HTTP
Total duration in milliseconds of the request from the start time to the last byte out.
TCP
Total duration in milliseconds of the downstream connection.
%RESPONSE_FLAGS%
Additional details about the response or connection, if any. For TCP connections, the response codes mentioned in
the descriptions do not apply. Possible values are:
HTTP and TCP
* **UH**: No healthy upstream hosts in upstream cluster in addition to 503 response code.
* **UF**: Upstream connection failure in addition to 503 response code.
* **UO**: Upstream overflow (:ref:`circuit breaking <arch_overview_circuit_break>`) in addition to 503 response code.
* **NR**: No :ref:`route configured <arch_overview_http_routing>` for a given request in addition to 404 response code.
HTTP only
* **LH**: Local service failed :ref:`health check request <arch_overview_health_checking>` in addition to 503 response code.
* **UT**: Upstream request timeout in addition to 504 response code.
* **LR**: Connection local reset in addition to 503 response code.
* **UR**: Upstream remote reset in addition to 503 response code.
* **UC**: Upstream connection termination in addition to 503 response code.
* **DI**: The request processing was delayed for a period specified via :ref:`fault injection <config_http_filters_fault_injection>`.
* **FI**: The request was aborted with a response code specified via :ref:`fault injection <config_http_filters_fault_injection>`.
* **RL**: The request was ratelimited locally by the :ref:`HTTP rate limit filter <config_http_filters_rate_limit>` in addition to 429 response code.
%UPSTREAM_HOST%
Upstream host URL (e.g., tcp://ip:port for TCP connections).
%UPSTREAM_CLUSTER%
Upstream cluster to which the upstream host belongs.
%REQ(X?Y):Z%
HTTP
An HTTP request header where X is the main HTTP header, Y is the alternative one, and Z is an
optional parameter denoting string truncation up to Z characters long. The value is taken from
the HTTP request header named X first and, if it is not set, then request header Y is used. If
neither header is present, a '-' symbol will be in the log.
TCP
Not implemented ("-").
%RESP(X?Y):Z%
HTTP
Same as **%REQ(X?Y):Z%** but taken from HTTP response headers.
TCP
Not implemented ("-").
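As an illustrative sketch (the operators chosen here are arbitrary), a short custom format
that logs the start time, method, path, and response code, followed by a newline, could be:

.. code-block:: none

  [%START_TIME%] %REQ(:METHOD)% %REQ(:PATH)% %RESPONSE_CODE%\n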
.. _config_access_log_default_format:
Default format
--------------
If a custom format is not specified, Envoy uses the following default format:
.. code-block:: none
[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%"
%RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION%
%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%"
"%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n
Example of the default Envoy access log format:
.. code-block:: none
[2016-04-15T20:17:00.310Z] "POST /api/v1/locations HTTP/2" 204 - 154 0 226 100 "10.0.35.28"
"nsq2http" "cc21d9b0-cf5c-432b-8c7e-98aeb7988cd2" "locations" "tcp://10.0.2.1:80"
.. _config_http_con_manager_access_log_filters:
Filters
-------
Envoy supports the following access log filters:
.. contents::
:local:
Status code
^^^^^^^^^^^
.. code-block:: json
{
"filter": {
"type": "status_code",
"op": "...",
"value": "...",
"runtime_key": "..."
}
}
Filters on HTTP response/status code.
op
*(required, string)* Comparison operator. Currently *>=* and *=* are the only supported operators.
value
*(required, integer)* Default value to compare against if the runtime value is not available.
runtime_key
*(optional, string)* Runtime key to get the value for comparison. This value is used if defined.
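As a sketch, a status code filter that logs only responses with code 500 or higher (values
illustrative) would look like:

.. code-block:: json

  {
    "filter": {
      "type": "status_code",
      "op": ">=",
      "value": 500
    }
  }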
Duration
^^^^^^^^
.. code-block:: json
{
"filter": {
"type": "duration",
"op": "..",
"value": "...",
"runtime_key": "..."
}
}
Filters on total request duration in milliseconds.
op
*(required, string)* Comparison operator. Currently *>=* and *=* are the only supported operators.
value
*(required, integer)* Default value to compare against if the runtime value is not available.
runtime_key
*(optional, string)* Runtime key to get the value for comparison. This value is used if defined.
Not health check
^^^^^^^^^^^^^^^^
.. code-block:: json
{
"filter": {
"type": "not_healthcheck"
}
}
Filters for requests that are not health check requests. A health check request is marked by
the :ref:`health check filter <config_http_filters_health_check>`.
Traceable
^^^^^^^^^
.. code-block:: json
{
"filter": {
"type": "traceable_request"
}
}
Filters for requests that are traceable. See the :ref:`tracing overview <arch_overview_tracing>` for
more information on how a request becomes traceable.
.. _config_http_con_manager_access_log_filters_runtime:
Runtime
^^^^^^^^^
.. code-block:: json
{
"filter": {
"type": "runtime",
"key" : "..."
}
}
Filters for random sampling of requests. Sampling pivots on the header
:ref:`x-request-id<config_http_conn_man_headers_x-request-id>` being present. If
:ref:`x-request-id<config_http_conn_man_headers_x-request-id>` is present, the filter will
consistently sample across multiple hosts based on the runtime key value and the value extracted
from :ref:`x-request-id<config_http_conn_man_headers_x-request-id>`. If it is missing, the
filter will randomly sample based on the runtime key value.
key
*(required, string)* Runtime key to get the percentage of requests to be sampled.
This runtime control is specified in the range 0-100 and defaults to 0.
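For example, assuming a hypothetical runtime key *access_log.sampling* whose value has been set
to 10, roughly 10% of requests would be logged:

.. code-block:: json

  {
    "filter": {
      "type": "runtime",
      "key": "access_log.sampling"
    }
  }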
And
^^^
.. code-block:: json
{
"filter": {
"type": "logical_and",
"filters": []
}
}
Performs a logical "and" operation on the result of each filter in *filters*. Filters are evaluated
sequentially and if one of them returns false, the filter returns false immediately.
Or
^^
.. code-block:: json
{
"filter": {
"type": "logical_or",
"filters": []
}
}
Performs a logical "or" operation on the result of each individual filter. Filters are evaluated
sequentially and if one of them returns true, the filter returns true immediately.
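As a combined sketch, the following filter (values illustrative) would log only non-health-check
requests that resulted in a 5xx response:

.. code-block:: json

  {
    "filter": {
      "type": "logical_and",
      "filters": [
        {"type": "not_healthcheck"},
        {"type": "status_code", "op": ">=", "value": 500}
      ]
    }
  }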

@@ -0,0 +1,66 @@
.. _config_cluster_manager_cds:
Cluster discovery service
=========================
The cluster discovery service (CDS) is an optional API that Envoy will call to dynamically fetch
cluster manager members. Envoy will reconcile the API response and add, modify, or remove known
clusters depending on what is required.
.. code-block:: json
{
"cluster": "{...}",
"refresh_delay_ms": "..."
}
:ref:`cluster <config_cluster_manager_cluster>`
*(required, object)* A standard definition of an upstream cluster that hosts the cluster
discovery service. The cluster must run a REST service that implements the :ref:`CDS HTTP API
<config_cluster_manager_cds_api>`.
refresh_delay_ms
*(optional, integer)* The delay, in milliseconds, between fetches to the CDS API. Envoy will add
an additional random jitter to the delay that is between zero and *refresh_delay_ms*
milliseconds. Thus the longest possible refresh delay is 2 \* *refresh_delay_ms*. Default value
is 30000ms (30 seconds).
.. _config_cluster_manager_cds_api:
REST API
--------
.. http:get:: /v1/clusters/(string: service_cluster)/(string: service_node)
Asks the discovery service to return all clusters for a particular `service_cluster` and
`service_node`. `service_cluster` corresponds to the :option:`--service-cluster` CLI option.
`service_node` corresponds to the :option:`--service-node` CLI option. Responses use the following
JSON schema:
.. code-block:: json
{
"clusters": []
}
clusters
*(required, array)* A list of :ref:`clusters <config_cluster_manager_cluster>` that will be
dynamically added/modified within the cluster manager. Envoy will reconcile this list with the
clusters that are currently loaded and either add/modify/remove clusters as necessary. Note that
any clusters that are statically defined within the Envoy configuration cannot be modified via
the CDS API.
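For illustration, a response carrying a single static cluster (all names and addresses
hypothetical) might look like:

.. code-block:: json

  {
    "clusters": [
      {
        "name": "backend",
        "type": "static",
        "connect_timeout_ms": 250,
        "lb_type": "round_robin",
        "hosts": [{"url": "tcp://10.0.0.2:1234"}]
      }
    ]
  }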
Statistics
----------
CDS has a statistics tree rooted at *cluster_manager.cds.* with the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
config_reload, Counter, Total API fetches that resulted in a config reload due to a different config
update_attempt, Counter, Total API fetches attempted
update_success, Counter, Total API fetches completed successfully
update_failure, Counter, Total API fetches that failed (either network or schema errors)
version, Gauge, Hash of the contents from the last successful API fetch

@@ -0,0 +1,201 @@
.. _config_cluster_manager_cluster:
Cluster
=======
.. code-block:: json
{
"name": "...",
"type": "...",
"connect_timeout_ms": "...",
"per_connection_buffer_limit_bytes": "...",
"lb_type": "...",
"hosts": [],
"service_name": "...",
"health_check": "{...}",
"max_requests_per_connection": "...",
"circuit_breakers": "{...}",
"ssl_context": "{...}",
"features": "...",
"http2_settings": "{...}",
"cleanup_interval_ms": "...",
"dns_refresh_rate_ms": "...",
"dns_lookup_family": "...",
"dns_resolvers": [],
"outlier_detection": "{...}"
}
.. _config_cluster_manager_cluster_name:
name
*(required, string)* Supplies the name of the cluster which must be unique across all clusters.
The cluster name is used when emitting :ref:`statistics <config_cluster_manager_cluster_stats>`.
By default, the maximum length of a cluster name is limited to 60 characters. This limit can be
increased by setting the :option:`--max-obj-name-len` command line argument to the desired value.
.. _config_cluster_manager_type:
type
*(required, string)* The :ref:`service discovery type <arch_overview_service_discovery_types>` to
use for resolving the cluster. Possible options are *static*, *strict_dns*, *logical_dns*,
:ref:`*original_dst* <arch_overview_service_discovery_types_original_destination>`, and *sds*.
connect_timeout_ms
*(required, integer)* The timeout for new network connections to hosts in the cluster specified
in milliseconds.
.. _config_cluster_manager_cluster_per_connection_buffer_limit_bytes:
per_connection_buffer_limit_bytes
*(optional, integer)* Soft limit on the size of the cluster's connection read and write buffers.
If unspecified, an implementation defined default is applied (1MiB).
.. _config_cluster_manager_cluster_lb_type:
lb_type
*(required, string)* The :ref:`load balancer type <arch_overview_load_balancing_types>` to use
when picking a host in the cluster. Possible options are *round_robin*, *least_request*,
*ring_hash*, *random*, and *original_dst_lb*. Note that :ref:`*original_dst_lb*
<arch_overview_load_balancing_types_original_destination>` must be used with clusters of type
:ref:`*original_dst* <arch_overview_service_discovery_types_original_destination>`, and may not be
used with any other cluster type.
hosts
*(sometimes required, array)* If the service discovery type is *static*, *strict_dns*, or
*logical_dns*, the hosts array is required. The hosts array is not allowed with cluster type
*original_dst*. How hosts are specified depends on the type of service discovery:
static
Static clusters must use fully resolved hosts that require no DNS lookups. Both TCP and Unix
domain socket (UDS) addresses are supported. A TCP address looks like:
``tcp://<ip>:<port>``
A UDS address looks like:
``unix://<file name>``
A list of addresses can be specified as in the following example:
.. code-block:: json
[{"url": "tcp://10.0.0.2:1234"}, {"url": "tcp://10.0.0.3:5678"}]
strict_dns
Strict DNS clusters can specify any number of hostname:port combinations. All names will be
resolved using DNS and grouped together to form the final cluster. If multiple records are
returned for a single name, all will be used. For example:
.. code-block:: json
[{"url": "tcp://foo1.bar.com:1234"}, {"url": "tcp://foo2.bar.com:5678"}]
logical_dns
Logical DNS clusters specify hostnames much like strict DNS clusters; however, only the first
host will be used. For example:
.. code-block:: json
[{"url": "tcp://foo1.bar.com:1234"}]
.. _config_cluster_manager_cluster_service_name:
service_name
*(sometimes required, string)* This parameter is required if the service discovery type is *sds*.
It will be passed to the :ref:`SDS API <config_cluster_manager_sds_api>` when fetching cluster
members.
:ref:`health_check <config_cluster_manager_cluster_hc>`
*(optional, object)* Optional :ref:`active health checking <arch_overview_health_checking>`
configuration for the cluster. If no configuration is specified no health checking will be done
and all cluster members will be considered healthy at all times.
max_requests_per_connection
*(optional, integer)* Optional maximum requests for a single upstream connection. This
parameter is respected by both the HTTP/1.1 and HTTP/2 connection pool implementations. If not
specified, there is no limit. Setting this parameter to 1 will effectively disable keep alive.
:ref:`circuit_breakers <config_cluster_manager_cluster_circuit_breakers>`
*(optional, object)* Optional :ref:`circuit breaking <arch_overview_circuit_break>` settings
for the cluster.
:ref:`ssl_context <config_cluster_manager_cluster_ssl>`
*(optional, object)* The TLS configuration for connections to the upstream cluster. If no TLS
configuration is specified, TLS will not be used for new connections.
.. _config_cluster_manager_cluster_features:
features
*(optional, string)* A comma delimited list of features that the upstream cluster supports.
The currently supported features are:
http2
If *http2* is specified, Envoy will assume that the upstream supports HTTP/2 when making new
HTTP connection pool connections. Currently, Envoy only supports prior knowledge for upstream
connections. Even if TLS is used with ALPN, *http2* must be specified. As an aside, this allows
HTTP/2 connections to happen over plain text.
.. _config_cluster_manager_cluster_http2_settings:
http2_settings
*(optional, object)* Additional HTTP/2 settings that are passed directly to the HTTP/2 codec when
initiating HTTP connection pool connections. These are the same options supported in the HTTP connection
manager :ref:`http2_settings <config_http_conn_man_http2_settings>` option.
.. _config_cluster_manager_cluster_cleanup_interval_ms:
cleanup_interval_ms
*(optional, integer)* The interval for removing stale hosts from an *original_dst* cluster. Hosts
are considered stale if they have not been used as upstream destinations during this interval.
New hosts are added to original destination clusters on demand as new connections are redirected
to Envoy, causing the number of hosts in the cluster to grow over time. Hosts that are not stale
(i.e., they are actively used as destinations) are kept in the cluster, which allows connections
to them to remain open, saving the latency that would otherwise be spent on opening new connections.
If this setting is not specified, the value defaults to 5000. For cluster types other than
*original_dst* this setting is ignored.
.. _config_cluster_manager_cluster_dns_refresh_rate_ms:
dns_refresh_rate_ms
*(optional, integer)* If the DNS refresh rate is specified and the cluster type is either *strict_dns*
or *logical_dns*, this value is used as the cluster's DNS refresh rate. If this setting is not specified,
the value defaults to 5000. For cluster types other than *strict_dns* and *logical_dns* this setting is
ignored.
.. _config_cluster_manager_cluster_dns_lookup_family:
dns_lookup_family
*(optional, string)* The DNS IP address resolution policy. The options are *v4_only*, *v6_only*,
and *auto*. If this setting is not specified, the value defaults to *v4_only*. When *v4_only* is selected,
the DNS resolver will only perform a lookup for addresses in the IPv4 family. If *v6_only* is selected,
the DNS resolver will only perform a lookup for addresses in the IPv6 family. If *auto* is specified,
the DNS resolver will first perform a lookup for addresses in the IPv6 family and fall back to a lookup for
addresses in the IPv4 family. For cluster types other than *strict_dns* and *logical_dns*, this setting
is ignored.
.. _config_cluster_manager_cluster_dns_resolvers:
dns_resolvers
*(optional, array)* If DNS resolvers are specified and the cluster type is either *strict_dns* or
*logical_dns*, this value is used to specify the cluster's DNS resolvers. If this setting is not
specified, the value defaults to the default resolver, which uses /etc/resolv.conf for
configuration. For cluster types other than *strict_dns* and *logical_dns* this setting is
ignored.
.. _config_cluster_manager_cluster_outlier_detection_summary:
:ref:`outlier_detection <config_cluster_manager_cluster_outlier_detection>`
*(optional, object)* If specified, outlier detection will be enabled for this upstream cluster.
See the :ref:`architecture overview <arch_overview_outlier_detection>` for more information on outlier
detection.
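Putting the required parameters together, a minimal *strict_dns* cluster definition (names and
addresses are illustrative) might look like:

.. code-block:: json

  {
    "name": "backend",
    "type": "strict_dns",
    "connect_timeout_ms": 250,
    "lb_type": "round_robin",
    "hosts": [{"url": "tcp://backend.example.com:8080"}]
  }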
.. toctree::
:hidden:
cluster_hc
cluster_circuit_breakers
cluster_ssl
cluster_stats
cluster_runtime
cluster_outlier_detection

@@ -0,0 +1,73 @@
.. _config_cluster_manager_cluster_circuit_breakers:
Circuit breakers
================
* Circuit breaking :ref:`architecture overview <arch_overview_circuit_break>`.
* Priority routing :ref:`architecture overview <arch_overview_http_routing_priority>`.
Circuit breaking settings can be specified individually for each defined priority. How the
different priorities are used is documented in the sections of the configuration guide that use
them.
.. code-block:: json
{
"default": "{...}",
"high": "{...}"
}
default
*(optional, object)* Settings object for default priority.
high
*(optional, object)* Settings object for high priority.
Per priority settings
---------------------
.. code-block:: json
{
"max_connections": "...",
"max_pending_requests": "...",
"max_requests": "...",
"max_retries": "...",
}
.. _config_cluster_manager_cluster_circuit_breakers_max_connections:
max_connections
*(optional, integer)* The maximum number of connections that Envoy will make to the upstream
cluster. If not specified, the default is 1024. See the :ref:`circuit breaking overview
<arch_overview_circuit_break>` for more information.
.. _config_cluster_manager_cluster_circuit_breakers_max_pending_requests:
max_pending_requests
*(optional, integer)* The maximum number of pending requests that Envoy will allow to the upstream
cluster. If not specified, the default is 1024. See the :ref:`circuit breaking overview
<arch_overview_circuit_break>` for more information.
.. _config_cluster_manager_cluster_circuit_breakers_max_requests:
max_requests
*(optional, integer)* The maximum number of parallel requests that Envoy will make to the upstream
cluster. If not specified, the default is 1024. See the :ref:`circuit breaking overview
<arch_overview_circuit_break>` for more information.
.. _config_cluster_manager_cluster_circuit_breakers_max_retries:
max_retries
*(optional, integer)* The maximum number of parallel retries that Envoy will allow to the upstream
cluster. If not specified, the default is 3. See the :ref:`circuit breaking overview
<arch_overview_circuit_break>` for more information.
Runtime
-------
All four circuit breaking settings are runtime configurable for all defined priorities based on cluster
name. They follow the naming scheme ``circuit_breakers.<cluster_name>.<priority>.<setting>``.
``cluster_name`` is the name field in each cluster's configuration, which is set in the Envoy
:ref:`config file <config_cluster_manager_cluster_name>`. Available runtime settings will override
settings set in the Envoy config file.
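For example, for a hypothetical cluster named *backend*, the default priority connection limit
could be overridden at runtime via:

.. code-block:: none

  circuit_breakers.backend.default.max_connections: 500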

@@ -0,0 +1,140 @@
.. _config_cluster_manager_cluster_hc:
Health checking
===============
* Health checking :ref:`architecture overview <arch_overview_health_checking>`.
* If health checking is configured for a cluster, additional statistics are emitted. They are
documented :ref:`here <config_cluster_manager_cluster_stats>`.
.. code-block:: json
{
"type": "...",
"timeout_ms": "...",
"interval_ms": "...",
"unhealthy_threshold": "...",
"healthy_threshold": "...",
"path": "...",
"send": [],
"receive": [],
"interval_jitter_ms": "...",
"service_name": "..."
}
type
*(required, string)* The type of health checking to perform. Currently supported types are
*http*, *redis*, and *tcp*. See the :ref:`architecture overview <arch_overview_health_checking>`
for more information.
timeout_ms
*(required, integer)* The time in milliseconds to wait for a health check response. If the
timeout is reached the health check attempt will be considered a failure.
.. _config_cluster_manager_cluster_hc_interval:
interval_ms
*(required, integer)* The interval between health checks in milliseconds.
unhealthy_threshold
*(required, integer)* The number of unhealthy health checks required before a host is marked
unhealthy. Note that for *http* health checking, if a host responds with 503, this threshold is
ignored and the host is considered unhealthy immediately.
healthy_threshold
*(required, integer)* The number of healthy health checks required before a host is marked
healthy. Note that during startup, only a single successful health check is required to mark
a host healthy.
path
*(sometimes required, string)* This parameter is required if the type is *http*. It specifies the
HTTP path that will be requested during health checking. For example, */healthcheck*.
send
*(sometimes required, array)* This parameter is required if the type is *tcp*. It specifies
the bytes to send for a health check request. It is an array of hex byte strings specified
as in the following example:
.. code-block:: json
[
{"binary": "01"},
{"binary": "000000FF"}
]
The array is allowed to be empty in the case of "connect only" health checking.
receive
*(sometimes required, array)* This parameter is required if the type is *tcp*. It specifies the
bytes that are expected in a successful health check response. It is an array of hex byte strings
specified similarly to the *send* parameter. The array is allowed to be empty in the case of
"connect only" health checking.
interval_jitter_ms
*(optional, integer)* An optional jitter amount in milliseconds. If specified, during every
interval Envoy will add 0 to *interval_jitter_ms* milliseconds to the wait time.
.. _config_cluster_manager_cluster_hc_service_name:
service_name
*(optional, string)* An optional service name parameter which is used to validate the identity of
the health checked cluster. See the :ref:`architecture overview
<arch_overview_health_checking_identity>` for more information.
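As a sketch, an HTTP health check that probes */healthcheck* every 5 seconds (all values
illustrative) could be configured as:

.. code-block:: json

  {
    "type": "http",
    "timeout_ms": 2000,
    "interval_ms": 5000,
    "unhealthy_threshold": 3,
    "healthy_threshold": 2,
    "path": "/healthcheck"
  }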
.. _config_cluster_manager_cluster_hc_tcp_health_checking:
TCP health checking
-------------------
The type of matching performed is the following (this is the MongoDB health check request and
response):
.. code-block:: json
{
"send": [
{"binary": "39000000"},
{"binary": "EEEEEEEE"},
{"binary": "00000000"},
{"binary": "d4070000"},
{"binary": "00000000"},
{"binary": "746573742e"},
{"binary": "24636d6400"},
{"binary": "00000000"},
{"binary": "FFFFFFFF"},
{"binary": "13000000"},
{"binary": "01"},
{"binary": "70696e6700"},
{"binary": "000000000000f03f"},
{"binary": "00"}
],
"receive": [
{"binary": "EEEEEEEE"},
{"binary": "01000000"},
{"binary": "00000000"},
{"binary": "0000000000000000"},
{"binary": "00000000"},
{"binary": "11000000"},
{"binary": "01"},
{"binary": "6f6b"},
{"binary": "00000000000000f03f"},
{"binary": "00"}
]
}
During each health check cycle, all of the "send" bytes are sent to the target server. Each
binary block can be of arbitrary length and is just concatenated together when sent. (Separating
into multiple blocks can be useful for readability).
When checking the response, "fuzzy" matching is performed such that each binary block must be found,
and in the order specified, but not necessarily contiguously. Thus, in the example above,
"FFFFFFFF" could be inserted in the response between "EEEEEEEE" and "01000000" and the check
would still pass. This is done to support protocols that insert non-deterministic data, such as
time, into the response.
Health checks that require a more complex pattern such as send/receive/send/receive are not
currently possible.
If "receive" is an empty array, Envoy will perform "connect only" TCP health checking. During each
cycle, Envoy will attempt to connect to the upstream host, and consider it a success if the
connection succeeds. A new connection is created for each health check cycle.
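Accordingly, a minimal "connect only" TCP health check (timing values illustrative) is simply:

.. code-block:: json

  {
    "type": "tcp",
    "timeout_ms": 2000,
    "interval_ms": 5000,
    "unhealthy_threshold": 2,
    "healthy_threshold": 2,
    "send": [],
    "receive": []
  }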

@@ -0,0 +1,67 @@
.. _config_cluster_manager:
Cluster manager
===============
.. toctree::
:hidden:
cluster
sds
sds_api
outlier
cds
Cluster manager :ref:`architecture overview <arch_overview_cluster_manager>`.
.. code-block:: json
{
"clusters": [],
"sds": "{...}",
"local_cluster_name": "...",
"outlier_detection": "{...}",
"cds": "{...}"
}
.. _config_cluster_manager_clusters:
:ref:`clusters <config_cluster_manager_cluster>`
*(required, array)* A list of upstream clusters that the cluster manager performs
:ref:`service discovery <arch_overview_service_discovery>`,
:ref:`health checking <arch_overview_health_checking>`, and
:ref:`load balancing <arch_overview_load_balancing>` on.
:ref:`sds <config_cluster_manager_sds>`
*(sometimes required, object)* If any defined clusters use the :ref:`sds
<arch_overview_service_discovery_sds>` cluster type, a global SDS configuration must be specified.
.. _config_cluster_manager_local_cluster_name:
local_cluster_name
*(optional, string)* Name of the local cluster (i.e., the cluster that owns the Envoy running this
configuration). In order to enable
:ref:`zone aware routing <arch_overview_load_balancing_zone_aware_routing>` this option must be
set. If *local_cluster_name* is defined then :ref:`clusters <config_cluster_manager_clusters>`
must contain a definition of a cluster with the same name.
:ref:`outlier_detection <config_cluster_manager_outlier_detection>`
*(optional, object)* Optional global configuration for outlier detection.
:ref:`cds <config_cluster_manager_cds>`
*(optional, object)* Optional configuration for the cluster discovery service (CDS) API.
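Tying these fields together, a minimal cluster manager configuration with one static cluster
(contents illustrative) might be:

.. code-block:: json

  {
    "clusters": [
      {
        "name": "local_service",
        "type": "static",
        "connect_timeout_ms": 250,
        "lb_type": "round_robin",
        "hosts": [{"url": "tcp://127.0.0.1:8080"}]
      }
    ]
  }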
Statistics
----------
The cluster manager has a statistics tree rooted at *cluster_manager.* with the following
statistics. Any ``:`` character in the stats name is replaced with ``_``.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
cluster_added, Counter, Total clusters added (either via static config or CDS)
cluster_modified, Counter, Total clusters modified (via CDS)
cluster_removed, Counter, Total clusters removed (via CDS)
total_clusters, Gauge, Number of currently loaded clusters

@@ -0,0 +1,85 @@
.. _config_cluster_manager_cluster_outlier_detection:
Outlier detection
=================
.. code-block:: json
{
"consecutive_5xx": "...",
"interval_ms": "...",
"base_ejection_time_ms": "...",
"max_ejection_percent": "...",
"enforcing_consecutive_5xx" : "...",
"enforcing_success_rate" : "...",
"success_rate_minimum_hosts" : "...",
"success_rate_request_volume" : "...",
"success_rate_stdev_factor" : "..."
}
.. _config_cluster_manager_cluster_outlier_detection_consecutive_5xx:
consecutive_5xx
*(optional, integer)* The number of consecutive 5xx responses before a consecutive 5xx ejection occurs. Defaults to 5.
.. _config_cluster_manager_cluster_outlier_detection_interval_ms:
interval_ms
*(optional, integer)* The time interval between ejection analysis sweeps. Each sweep can result both in
new ejections and in hosts being returned to service. Defaults to 10000ms or 10s.
.. _config_cluster_manager_cluster_outlier_detection_base_ejection_time_ms:
base_ejection_time_ms
*(optional, integer)* The base time that a host is ejected for. The real time is equal to the base time multiplied by
the number of times the host has been ejected. Defaults to 30000ms or 30s.
.. _config_cluster_manager_cluster_outlier_detection_max_ejection_percent:
max_ejection_percent
*(optional, integer)* The maximum % of hosts in an upstream cluster that can be ejected due to outlier detection.
Defaults to 10%.
.. _config_cluster_manager_cluster_outlier_detection_enforcing_consecutive_5xx:
enforcing_consecutive_5xx
*(optional, integer)* The % chance that a host will actually be ejected when an outlier status is detected through
consecutive 5xx. This setting can be used to disable ejection or to ramp it up slowly.
Defaults to 100 with 1% granularity.
.. _config_cluster_manager_cluster_outlier_detection_enforcing_success_rate:
enforcing_success_rate
*(optional, integer)* The % chance that a host will actually be ejected when an outlier status is detected through
success rate statistics. This setting can be used to disable ejection or to ramp it up slowly.
Defaults to 100 with 1% granularity.
.. _config_cluster_manager_cluster_outlier_detection_success_rate_minimum_hosts:
success_rate_minimum_hosts
*(optional, integer)* The number of hosts in a cluster that must have enough request volume to detect success rate outliers.
If the number of hosts is less than this setting, outlier detection via success rate statistics is not
performed for any host in the cluster. Defaults to 5.
.. _config_cluster_manager_cluster_outlier_detection_success_rate_request_volume:
success_rate_request_volume
*(optional, integer)* The minimum number of total requests that must be collected in one interval
(as defined by :ref:`interval_ms <config_cluster_manager_cluster_outlier_detection_interval_ms>` above)
to include this host in success rate based outlier detection. If the volume is lower than this setting,
outlier detection via success rate statistics is not performed for that host. Defaults to 100.
.. _config_cluster_manager_cluster_outlier_detection_success_rate_stdev_factor:
success_rate_stdev_factor
*(optional, integer)* This factor is used to determine the ejection threshold for success rate outlier ejection.
The ejection threshold is used as a measure to determine when a particular host has fallen below an acceptable
success rate.
The ejection threshold is the difference between the mean success rate, and the product of
this factor and the standard deviation of the mean success rate:
``mean - (stdev * success_rate_stdev_factor)``. This factor is divided by a thousand to
get a ``double``. That is, if the desired factor is ``1.9``, the runtime value should be ``1900``.
Defaults to 1900.
Each of the above configuration values can be overridden via
:ref:`runtime values <config_cluster_manager_cluster_runtime_outlier_detection>`.
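For example, a sketch that ejects a host after 3 consecutive 5xx responses, with a one minute base
ejection time and up to half the cluster ejectable (values illustrative), would be:

.. code-block:: json

  {
    "consecutive_5xx": 3,
    "base_ejection_time_ms": 60000,
    "max_ejection_percent": 50
  }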

@@ -0,0 +1,130 @@
.. _config_cluster_manager_cluster_runtime:
Runtime
=======
Upstream clusters support the following runtime settings:
Active health checking
----------------------
health_check.min_interval
Min value for the health checking :ref:`interval <config_cluster_manager_cluster_hc_interval>`.
Default value is 0. The health checking interval will be between *min_interval* and
*max_interval*.
health_check.max_interval
Max value for the health checking :ref:`interval <config_cluster_manager_cluster_hc_interval>`.
Default value is MAX_INT. The health checking interval will be between *min_interval* and
*max_interval*.
health_check.verify_cluster
What % of health check requests will be verified against the :ref:`expected upstream service
<config_cluster_manager_cluster_hc_service_name>` as the :ref:`health check filter
<arch_overview_health_checking_filter>` will write the remote service cluster into the response.
.. _config_cluster_manager_cluster_runtime_outlier_detection:
Outlier detection
-----------------
See the outlier detection :ref:`architecture overview <arch_overview_outlier_detection>` for more
information on outlier detection. The runtime parameters supported by outlier detection are the
same as the :ref:`static configuration parameters <config_cluster_manager_cluster_outlier_detection>`, namely:
outlier_detection.consecutive_5xx
:ref:`consecutive_5xx
<config_cluster_manager_cluster_outlier_detection_consecutive_5xx>`
setting in outlier detection
outlier_detection.interval_ms
:ref:`interval_ms
<config_cluster_manager_cluster_outlier_detection_interval_ms>`
setting in outlier detection
outlier_detection.base_ejection_time_ms
:ref:`base_ejection_time_ms
<config_cluster_manager_cluster_outlier_detection_base_ejection_time_ms>`
setting in outlier detection
outlier_detection.max_ejection_percent
:ref:`max_ejection_percent
<config_cluster_manager_cluster_outlier_detection_max_ejection_percent>`
setting in outlier detection
outlier_detection.enforcing_consecutive_5xx
:ref:`enforcing_consecutive_5xx
<config_cluster_manager_cluster_outlier_detection_enforcing_consecutive_5xx>`
setting in outlier detection
outlier_detection.enforcing_success_rate
:ref:`enforcing_success_rate
<config_cluster_manager_cluster_outlier_detection_enforcing_success_rate>`
setting in outlier detection
outlier_detection.success_rate_minimum_hosts
:ref:`success_rate_minimum_hosts
<config_cluster_manager_cluster_outlier_detection_success_rate_minimum_hosts>`
setting in outlier detection
outlier_detection.success_rate_request_volume
:ref:`success_rate_request_volume
<config_cluster_manager_cluster_outlier_detection_success_rate_request_volume>`
setting in outlier detection
outlier_detection.success_rate_stdev_factor
:ref:`success_rate_stdev_factor
<config_cluster_manager_cluster_outlier_detection_success_rate_stdev_factor>`
setting in outlier detection
Core
----
upstream.healthy_panic_threshold
Sets the :ref:`panic threshold <arch_overview_load_balancing_panic_threshold>` percentage.
Defaults to 50%.
upstream.use_http2
Whether the cluster utilizes the *http2* :ref:`feature <config_cluster_manager_cluster_features>`
if configured. Set to 0 to disable HTTP/2 even if the feature is configured. Defaults to enabled.
upstream.weight_enabled
Binary switch to turn weighted load balancing on or off. If set to non-0, weighted load balancing
is enabled. Defaults to enabled.
.. _config_cluster_manager_cluster_runtime_ring_hash:
Ring hash load balancing
------------------------
upstream.ring_hash.min_ring_size
The minimum size of the hash ring for the :ref:`ring hash load balancer
<arch_overview_load_balancing_types>`. The default is 1024.
.. _config_cluster_manager_cluster_runtime_zone_routing:
Zone aware load balancing
-------------------------
upstream.zone_routing.enabled
% of requests that will be routed to the same upstream zone. Defaults to 100% of requests.
upstream.zone_routing.min_cluster_size
Minimum size of the upstream cluster for which zone aware routing can be attempted. Default value
is 6. If the upstream cluster size is smaller than *min_cluster_size*, zone aware routing will not
be performed.
Circuit breaking
----------------
circuit_breakers.<cluster_name>.<priority>.max_connections
:ref:`Max connections circuit breaker setting <config_cluster_manager_cluster_circuit_breakers_max_connections>`
circuit_breakers.<cluster_name>.<priority>.max_pending_requests
:ref:`Max pending requests circuit breaker setting <config_cluster_manager_cluster_circuit_breakers_max_pending_requests>`
circuit_breakers.<cluster_name>.<priority>.max_requests
:ref:`Max requests circuit breaker setting <config_cluster_manager_cluster_circuit_breakers_max_requests>`
circuit_breakers.<cluster_name>.<priority>.max_retries
:ref:`Max retries circuit breaker setting <config_cluster_manager_cluster_circuit_breakers_max_retries>`

@@ -0,0 +1,82 @@
.. _config_cluster_manager_cluster_ssl:
TLS context
===========
.. code-block:: json
{
"alpn_protocols": "...",
"cert_chain_file": "...",
"private_key_file": "...",
"ca_cert_file": "...",
"verify_certificate_hash": "...",
"verify_subject_alt_name": [],
"cipher_suites": "...",
"ecdh_curves": "...",
"sni": "..."
}
alpn_protocols
*(optional, string)* Supplies the list of ALPN protocols that connections should request. In
practice this is likely to be set to a single value or not set at all:
* "h2" If upstream connections should use HTTP/2. In the current implementation this must be set
alongside the *http2* cluster :ref:`features <config_cluster_manager_cluster_features>` option.
The two options together will use ALPN to tell a server that expects ALPN that Envoy supports
HTTP/2. Then the *http2* feature will cause new connections to use HTTP/2.
cert_chain_file
*(optional, string)* The certificate chain file that should be served by the connection. This is
used to provide a client side TLS certificate to an upstream host.
private_key_file
*(optional, string)* The private key that corresponds to the certificate chain file.
ca_cert_file
*(optional, string)* A file containing certificate authority certificates to use in verifying
a presented server certificate.
verify_certificate_hash
*(optional, string)* If specified, Envoy will verify (pin) the hash of the presented server
certificate.
verify_subject_alt_name
*(optional, array)* An optional list of subject alt names. If specified, Envoy will verify
that the server certificate's subject alt name matches one of the specified values.
cipher_suites
*(optional, string)* If specified, the TLS connection will only support the specified `cipher list
<https://commondatastorage.googleapis.com/chromium-boringssl-docs/ssl.h.html#Cipher-suite-configuration>`_.
If not specified, the default list:
.. code-block:: none
[ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-ECDSA-CHACHA20-POLY1305]
[ECDHE-RSA-AES128-GCM-SHA256|ECDHE-RSA-CHACHA20-POLY1305]
ECDHE-ECDSA-AES128-SHA256
ECDHE-RSA-AES128-SHA256
ECDHE-ECDSA-AES128-SHA
ECDHE-RSA-AES128-SHA
AES128-GCM-SHA256
AES128-SHA256
AES128-SHA
ECDHE-ECDSA-AES256-GCM-SHA384
ECDHE-RSA-AES256-GCM-SHA384
ECDHE-ECDSA-AES256-SHA384
ECDHE-RSA-AES256-SHA384
ECDHE-ECDSA-AES256-SHA
ECDHE-RSA-AES256-SHA
AES256-GCM-SHA384
AES256-SHA256
AES256-SHA
will be used.
ecdh_curves
*(optional, string)* If specified, the TLS connection will only support the specified ECDH curves.
If not specified, the default curves (X25519, P-256) will be used.
sni
*(optional, string)* If specified, the string will be presented as the SNI during the TLS
handshake.
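As an illustrative sketch, an upstream TLS context that presents a client certificate and verifies
the server against a CA bundle (file paths and names are hypothetical) might be:

.. code-block:: json

  {
    "cert_chain_file": "/etc/envoy/client.crt",
    "private_key_file": "/etc/envoy/client.key",
    "ca_cert_file": "/etc/ssl/certs/ca-bundle.crt",
    "verify_subject_alt_name": ["backend.example.com"]
  }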

@@ -0,0 +1,193 @@
.. _config_cluster_manager_cluster_stats:
Statistics
==========
.. contents::
:local:
General
-------
Every cluster has a statistics tree rooted at *cluster.<name>.* with the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
upstream_cx_total, Counter, Total connections
upstream_cx_active, Gauge, Total active connections
upstream_cx_http1_total, Counter, Total HTTP/1.1 connections
upstream_cx_http2_total, Counter, Total HTTP/2 connections
upstream_cx_connect_fail, Counter, Total connection failures
upstream_cx_connect_timeout, Counter, Total connection timeouts
upstream_cx_overflow, Counter, Total times that the cluster's connection circuit breaker overflowed
upstream_cx_connect_ms, Histogram, Connection establishment milliseconds
upstream_cx_length_ms, Histogram, Connection length milliseconds
upstream_cx_destroy, Counter, Total destroyed connections
upstream_cx_destroy_local, Counter, Total connections destroyed locally
upstream_cx_destroy_remote, Counter, Total connections destroyed remotely
upstream_cx_destroy_with_active_rq, Counter, Total connections destroyed with 1+ active request
upstream_cx_destroy_local_with_active_rq, Counter, Total connections destroyed locally with 1+ active request
upstream_cx_destroy_remote_with_active_rq, Counter, Total connections destroyed remotely with 1+ active request
upstream_cx_close_notify, Counter, Total connections closed via HTTP/1.1 connection close header or HTTP/2 GOAWAY
upstream_cx_rx_bytes_total, Counter, Total received connection bytes
upstream_cx_rx_bytes_buffered, Gauge, Received connection bytes currently buffered
upstream_cx_tx_bytes_total, Counter, Total sent connection bytes
upstream_cx_tx_bytes_buffered, Gauge, Sent connection bytes currently buffered
upstream_cx_protocol_error, Counter, Total connection protocol errors
upstream_cx_max_requests, Counter, Total connections closed due to maximum requests
upstream_cx_none_healthy, Counter, Total times connection not established due to no healthy hosts
upstream_rq_total, Counter, Total requests
upstream_rq_active, Gauge, Total active requests
upstream_rq_pending_total, Counter, Total requests pending a connection pool connection
upstream_rq_pending_overflow, Counter, Total requests that overflowed connection pool circuit breaking and were failed
upstream_rq_pending_failure_eject, Counter, Total requests that were failed due to a connection pool connection failure
upstream_rq_pending_active, Gauge, Total active requests pending a connection pool connection
upstream_rq_cancelled, Counter, Total requests cancelled before obtaining a connection pool connection
upstream_rq_maintenance_mode, Counter, Total requests that resulted in an immediate 503 due to :ref:`maintenance mode<config_http_filters_router_runtime_maintenance_mode>`
upstream_rq_timeout, Counter, Total requests that timed out waiting for a response
upstream_rq_per_try_timeout, Counter, Total requests that hit the per try timeout
upstream_rq_rx_reset, Counter, Total requests that were reset remotely
upstream_rq_tx_reset, Counter, Total requests that were reset locally
upstream_rq_retry, Counter, Total request retries
upstream_rq_retry_success, Counter, Total request retry successes
upstream_rq_retry_overflow, Counter, Total requests not retried due to circuit breaking
upstream_flow_control_paused_reading_total, Counter, Total number of times flow control paused reading from upstream.
upstream_flow_control_resumed_reading_total, Counter, Total number of times flow control resumed reading from upstream.
upstream_flow_control_backed_up_total, Counter, Total number of times the upstream connection backed up and paused reads from downstream.
upstream_flow_control_drained_total, Counter, Total number of times the upstream connection drained and resumed reads from downstream.
membership_change, Counter, Total cluster membership changes
membership_healthy, Gauge, Current cluster healthy total (inclusive of both health checking and outlier detection)
membership_total, Gauge, Current cluster membership total
retry_or_shadow_abandoned, Counter, Total number of times shadowing or retry buffering was canceled due to buffer limits.
config_reload, Counter, Total API fetches that resulted in a config reload due to a different config
update_attempt, Counter, Total cluster membership update attempts
update_success, Counter, Total cluster membership update successes
update_failure, Counter, Total cluster membership update failures
version, Gauge, Hash of the contents from the last successful API fetch
max_host_weight, Gauge, Maximum weight of any host in the cluster
bind_errors, Counter, Total errors binding the socket to the configured source address.
Health check statistics
-----------------------
If health check is configured, the cluster has an additional statistics tree rooted at
*cluster.<name>.health_check.* with the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
attempt, Counter, Number of health checks
success, Counter, Number of successful health checks
failure, Counter, Number of immediately failed health checks (e.g. HTTP 503) as well as network failures
passive_failure, Counter, Number of health check failures due to passive events (e.g. x-envoy-immediate-health-check-fail)
network_failure, Counter, Number of health check failures due to network error
verify_cluster, Counter, Number of health checks that attempted cluster name verification
healthy, Gauge, Number of healthy members
.. _config_cluster_manager_cluster_stats_outlier_detection:
Outlier detection statistics
----------------------------
If :ref:`outlier detection <arch_overview_outlier_detection>` is configured for a cluster,
statistics will be rooted at *cluster.<name>.outlier_detection.* and contain the following:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
ejections_total, Counter, Number of ejections due to any outlier type
ejections_active, Gauge, Number of currently ejected hosts
ejections_overflow, Counter, Number of ejections aborted due to the max ejection %
ejections_consecutive_5xx, Counter, Number of consecutive 5xx ejections
.. _config_cluster_manager_cluster_stats_dynamic_http:
Dynamic HTTP statistics
-----------------------
If HTTP is used, dynamic HTTP response code statistics are also available. These are emitted by
various internal systems as well as some filters such as the :ref:`router filter
<config_http_filters_router>` and :ref:`rate limit filter <config_http_filters_rate_limit>`. They
are rooted at *cluster.<name>.* and contain the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
upstream_rq_<\*xx>, Counter, "Aggregate HTTP response codes (e.g., 2xx, 3xx, etc.)"
upstream_rq_<\*>, Counter, "Specific HTTP response codes (e.g., 201, 302, etc.)"
upstream_rq_time, Histogram, Request time milliseconds
canary.upstream_rq_<\*xx>, Counter, Upstream canary aggregate HTTP response codes
canary.upstream_rq_<\*>, Counter, Upstream canary specific HTTP response codes
canary.upstream_rq_time, Histogram, Upstream canary request time milliseconds
internal.upstream_rq_<\*xx>, Counter, Internal origin aggregate HTTP response codes
internal.upstream_rq_<\*>, Counter, Internal origin specific HTTP response codes
internal.upstream_rq_time, Histogram, Internal origin request time milliseconds
external.upstream_rq_<\*xx>, Counter, External origin aggregate HTTP response codes
external.upstream_rq_<\*>, Counter, External origin specific HTTP response codes
external.upstream_rq_time, Histogram, External origin request time milliseconds
.. _config_cluster_manager_cluster_stats_alt_tree:
Alternate tree dynamic HTTP statistics
--------------------------------------
If alternate tree statistics are configured, they will be present in the
*cluster.<name>.<alt name>.* namespace. The statistics produced are the same as documented in
the dynamic HTTP statistics section :ref:`above
<config_cluster_manager_cluster_stats_dynamic_http>`.
.. _config_cluster_manager_cluster_per_az_stats:
Per service zone dynamic HTTP statistics
----------------------------------------
If the service zone is available for both the local service (via :option:`--service-zone`)
and the :ref:`upstream cluster <arch_overview_service_discovery_sds>`,
Envoy will track the following statistics in the *cluster.<name>.zone.<from_zone>.<to_zone>.* namespace.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
upstream_rq_<\*xx>, Counter, "Aggregate HTTP response codes (e.g., 2xx, 3xx, etc.)"
upstream_rq_<\*>, Counter, "Specific HTTP response codes (e.g., 201, 302, etc.)"
upstream_rq_time, Histogram, Request time milliseconds
Load balancer statistics
------------------------
Statistics for monitoring load balancer decisions. Stats are rooted at *cluster.<name>.* and contain
the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
lb_healthy_panic, Counter, Total requests load balanced with the load balancer in panic mode
lb_zone_cluster_too_small, Counter, No zone aware routing because of small upstream cluster size
lb_zone_routing_all_directly, Counter, Sending all requests directly to the same zone
lb_zone_routing_sampled, Counter, Sending some requests to the same zone
lb_zone_routing_cross_zone, Counter, Zone aware routing mode but have to send cross zone
lb_local_cluster_not_ok, Counter, Local host set is not set or it is in panic mode for the local cluster
lb_zone_number_differs, Counter, Number of zones in local and upstream cluster different
Load balancer subset statistics
-------------------------------
Statistics for monitoring :ref:`load balancer subset <arch_overview_load_balancer_subsets>`
decisions. Stats are rooted at *cluster.<name>.* and contain the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
lb_subsets_active, Gauge, Number of currently available subsets.
lb_subsets_created, Counter, Number of subsets created.
lb_subsets_removed, Counter, Number of subsets removed due to no hosts.
lb_subsets_selected, Counter, Number of times any subset was selected for load balancing.
lb_subsets_fallback, Counter, Number of times the fallback policy was invoked.

@@ -0,0 +1,15 @@
.. _config_cluster_manager_outlier_detection:
Outlier detection
=================
Outlier detection :ref:`architecture overview <arch_overview_outlier_detection>`.
.. code-block:: json
{
"event_log_path": "..."
}
event_log_path
*(optional, string)* Specifies the path to the outlier event log.
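For illustration, a minimal sketch of this configuration; the log path shown is hypothetical and
must be writable by Envoy:

.. code-block:: json

  {
    "event_log_path": "/var/log/envoy/outlier_events.log"
  }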

@@ -0,0 +1,24 @@
.. _config_cluster_manager_sds:
Service discovery service
=========================
Service discovery service :ref:`architecture overview <arch_overview_service_discovery_sds>`.
.. code-block:: json
{
"cluster": "{...}",
"refresh_delay_ms": "{...}"
}
:ref:`cluster <config_cluster_manager_cluster>`
*(required, object)* A standard definition of an upstream cluster that hosts the service
discovery service. The cluster must run a REST service that implements the :ref:`SDS HTTP API
<config_cluster_manager_sds_api>`.
refresh_delay_ms
*(required, integer)* The delay, in milliseconds, between fetches to the SDS API for each
configured SDS cluster. Envoy will add an additional random jitter to the delay that is between
zero and *refresh_delay_ms* milliseconds. Thus the longest possible refresh delay is
2 \* *refresh_delay_ms*.
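Putting the two fields together, a hedged example of a complete SDS configuration; the discovery
host name is hypothetical and the embedded cluster uses the standard :ref:`cluster
<config_cluster_manager_cluster>` fields:

.. code-block:: json

  {
    "cluster": {
      "name": "sds",
      "connect_timeout_ms": 250,
      "type": "strict_dns",
      "lb_type": "round_robin",
      "hosts": [{"url": "tcp://discovery.example.com:8080"}]
    },
    "refresh_delay_ms": 30000
  }

With *refresh_delay_ms* set to 30000, jitter places each fetch between 30 and 60 seconds after the
previous one.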

@@ -0,0 +1,60 @@
.. _config_cluster_manager_sds_api:
Service discovery service REST API
==================================
Envoy expects the service discovery service to expose the following API (See Lyft's
`reference implementation <https://github.com/lyft/discovery>`_):
.. http:get:: /v1/registration/(string: service_name)
Asks the discovery service to return all hosts for a particular `service_name`. `service_name`
corresponds to the :ref:`service_name <config_cluster_manager_cluster_service_name>` cluster
parameter. Responses use the following JSON schema:
.. code-block:: json
{
"hosts": []
}
hosts
*(required, array)* A list of :ref:`hosts <config_cluster_manager_sds_api_host>` that make up
the service.
.. _config_cluster_manager_sds_api_host:
Host JSON
---------
.. code-block:: json
{
"ip_address": "...",
"port": "...",
"tags": {
"az": "...",
"canary": "...",
"load_balancing_weight": "..."
}
}
ip_address
*(required, string)* The IP address of the upstream host.
port
*(required, integer)* The port of the upstream host.
.. _config_cluster_manager_sds_api_host_az:
az
*(optional, string)* The optional zone of the upstream host. Envoy uses the zone for various
statistics and load balancing tasks documented elsewhere.
canary
*(optional, boolean)* The optional canary status of the upstream host. Envoy uses the canary
status for various statistics and load balancing tasks documented elsewhere.
load_balancing_weight
*(optional, integer)* The optional load balancing weight of the upstream host, in the range
1 - 100. Envoy uses the load balancing weight in some of the built-in load balancers.
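Tying the fields together, a sketch of a complete registration response for a service with two
hosts; all addresses and tag values are hypothetical:

.. code-block:: json

  {
    "hosts": [
      {
        "ip_address": "10.0.0.5",
        "port": 8080,
        "tags": {"az": "us-east-1a", "canary": false, "load_balancing_weight": 50}
      },
      {
        "ip_address": "10.0.0.6",
        "port": 8080,
        "tags": {"az": "us-east-1b"}
      }
    ]
  }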

@@ -0,0 +1,17 @@
.. _config:
Configuration reference
=======================
.. toctree::
:maxdepth: 2
:includehidden:
overview/overview
listeners/listeners
network_filters/network_filters
http_conn_man/http_conn_man
http_filters/http_filters
cluster_manager/cluster_manager
access_log
tools/router_check

@@ -0,0 +1,21 @@
.. _config_http_conn_man_filters:
Filters
=======
HTTP filter :ref:`architecture overview <arch_overview_http_filters>`.
.. code-block:: json
{
"name": "...",
"config": "{...}"
}
name
*(required, string)* The name of the filter to instantiate. The name must match a :ref:`supported
filter <config_http_filters>`.
config
*(required, object)* Filter specific configuration which depends on the filter being
instantiated. See the :ref:`supported filters <config_http_filters>` for further documentation.
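For example, a minimal sketch that instantiates the :ref:`router filter
<config_http_filters_router>` with an empty filter specific configuration:

.. code-block:: json

  {
    "name": "router",
    "config": {}
  }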

@@ -0,0 +1,35 @@
.. _config_http_conn_man_header_sanitizing:
HTTP header sanitizing
======================
For security reasons, Envoy will "sanitize" various incoming HTTP headers depending on whether the
request is an internal or external request. The sanitizing action depends on the header and may
result in addition, removal, or modification. Ultimately, whether the request is considered internal
or external is governed by the :ref:`x-forwarded-for <config_http_conn_man_headers_x-forwarded-for>`
header (please read the linked section carefully as how Envoy populates the header is complex and
depends on the :ref:`use_remote_address <config_http_conn_man_use_remote_address>` setting).
Envoy will potentially sanitize the following headers:
* :ref:`x-envoy-decorator-operation <config_http_filters_router_x-envoy-decorator-operation>`
* :ref:`x-envoy-downstream-service-cluster
<config_http_conn_man_headers_downstream-service-cluster>`
* :ref:`x-envoy-downstream-service-node <config_http_conn_man_headers_downstream-service-node>`
* :ref:`x-envoy-expected-rq-timeout-ms <config_http_filters_router_x-envoy-expected-rq-timeout-ms>`
* :ref:`x-envoy-external-address <config_http_conn_man_headers_x-envoy-external-address>`
* :ref:`x-envoy-force-trace <config_http_conn_man_headers_x-envoy-force-trace>`
* :ref:`x-envoy-internal <config_http_conn_man_headers_x-envoy-internal>`
* :ref:`x-envoy-max-retries <config_http_filters_router_x-envoy-max-retries>`
* :ref:`x-envoy-retry-grpc-on <config_http_filters_router_x-envoy-retry-grpc-on>`
* :ref:`x-envoy-retry-on <config_http_filters_router_x-envoy-retry-on>`
* :ref:`x-envoy-upstream-alt-stat-name <config_http_filters_router_x-envoy-upstream-alt-stat-name>`
* :ref:`x-envoy-upstream-rq-per-try-timeout-ms
<config_http_filters_router_x-envoy-upstream-rq-per-try-timeout-ms>`
* :ref:`x-envoy-upstream-rq-timeout-alt-response
<config_http_filters_router_x-envoy-upstream-rq-timeout-alt-response>`
* :ref:`x-envoy-upstream-rq-timeout-ms <config_http_filters_router_x-envoy-upstream-rq-timeout-ms>`
* :ref:`x-forwarded-client-cert <config_http_conn_man_headers_x-forwarded-client-cert>`
* :ref:`x-forwarded-for <config_http_conn_man_headers_x-forwarded-for>`
* :ref:`x-forwarded-proto <config_http_conn_man_headers_x-forwarded-proto>`
* :ref:`x-request-id <config_http_conn_man_headers_x-request-id>`

@@ -0,0 +1,276 @@
.. _config_http_conn_man_headers:
HTTP header manipulation
========================
The HTTP connection manager manipulates several HTTP headers both during decoding (when the request
is being received) as well as during encoding (when the response is being sent).
.. contents::
:local:
.. _config_http_conn_man_headers_user-agent:
user-agent
----------
The *user-agent* header may be set by the connection manager during decoding if the
:ref:`add_user_agent <config_http_conn_man_add_user_agent>` option is enabled. The header is only
modified if it is not already set. If the connection manager does set the header, the value is
determined by the :option:`--service-cluster` command line option.
.. _config_http_conn_man_headers_server:
server
------
The *server* header will be set during encoding to the value in the :ref:`server_name
<config_http_conn_man_server_name>` option.
.. _config_http_conn_man_headers_x-client-trace-id:
x-client-trace-id
-----------------
If an external client sets this header, Envoy will join the provided trace ID with the internally
generated :ref:`config_http_conn_man_headers_x-request-id`. x-client-trace-id needs to be globally
unique and generating a uuid4 is recommended. If this header is set, it has a similar effect to
:ref:`config_http_conn_man_headers_x-envoy-force-trace`. See the :ref:`tracing.client_enabled
<config_http_conn_man_runtime_client_enabled>` runtime configuration setting.
.. _config_http_conn_man_headers_downstream-service-cluster:
x-envoy-downstream-service-cluster
----------------------------------
Internal services often want to know which service is calling them. This header is cleaned from
external requests, but for internal requests will contain the service cluster of the caller. Note
that in the current implementation, this should be considered a hint as it is set by the caller and
could be easily spoofed by any internal entity. In the future Envoy will support a mutual
authentication TLS mesh which will make this header fully secure. Like *user-agent*, the value
is determined by the :option:`--service-cluster` command line option. In order to enable this
feature you need to set the :ref:`add_user_agent <config_http_conn_man_add_user_agent>` option to true.
.. _config_http_conn_man_headers_downstream-service-node:
x-envoy-downstream-service-node
-------------------------------
Internal services may want to know which downstream node the request comes from. This header
is quite similar to :ref:`config_http_conn_man_headers_downstream-service-cluster`, except the value is taken from
the :option:`--service-node` option.
.. _config_http_conn_man_headers_x-envoy-external-address:
x-envoy-external-address
------------------------
It is a common case where a service wants to perform analytics based on the client IP address. Per
the lengthy discussion on :ref:`XFF <config_http_conn_man_headers_x-forwarded-for>`, this can get
quite complicated. A proper implementation involves forwarding XFF, and then choosing the first
non-RFC1918 address *from the right*. Since this is such a common occurrence, Envoy simplifies this by
setting *x-envoy-external-address* during decoding if and only if the request ingresses externally
(i.e., it's from an external client). *x-envoy-external-address* is not set or overwritten for
internal requests. This header can be safely forwarded between internal services for analytics
purposes without having to deal with the complexities of XFF.
.. _config_http_conn_man_headers_x-envoy-force-trace:
x-envoy-force-trace
-------------------
If an internal request sets this header, Envoy will modify the generated
:ref:`config_http_conn_man_headers_x-request-id` such that it forces traces to be collected.
This also forces :ref:`config_http_conn_man_headers_x-request-id` to be returned in the response
headers. If this request ID is then propagated to other hosts, traces will also be collected on
those hosts which will provide a consistent trace for an entire request flow. See the
:ref:`tracing.global_enabled <config_http_conn_man_runtime_global_enabled>` and
:ref:`tracing.random_sampling <config_http_conn_man_runtime_random_sampling>` runtime
configuration settings.
.. _config_http_conn_man_headers_x-envoy-internal:
x-envoy-internal
----------------
It is a common case where a service wants to know whether a request is of internal origin or not.
Envoy uses :ref:`XFF <config_http_conn_man_headers_x-forwarded-for>` to determine this and, if the
request is internal, sets the header value to *true*.
This is a convenience to avoid having to parse and understand XFF.
.. _config_http_conn_man_headers_x-forwarded-client-cert:
x-forwarded-client-cert
-----------------------
*x-forwarded-client-cert* (XFCC) is a proxy header which indicates certificate information of part
or all of the clients or proxies that a request has flowed through, on its way from the client to the
server. A proxy may choose to sanitize/append/forward the XFCC header before proxying the request.
The XFCC header value is a comma (",") separated string. Each substring is an XFCC element, which
holds information added by a single proxy. A proxy can append the current client certificate
information as an XFCC element, to the end of the request's XFCC header after a comma.
Each XFCC element is a semicolon ";" separated string. Each substring is a key-value pair, grouped
together by an equals ("=") sign. The keys are case-insensitive, the values are case-sensitive. If
",", ";" or "=" appear in a value, the value should be double-quoted. Double-quotes in the value
should be replaced by backslash-double-quote (\").
The following keys are supported:
1. ``By`` The Subject Alternative Name (SAN) of the current proxy's certificate.
2. ``Hash`` The SHA 256 digest of the current client certificate.
3. ``SAN`` The SAN field (URI type) of the current client certificate.
4. ``Subject`` The Subject field of the current client certificate. The value is always double-quoted.
Some examples of the XFCC header are:
1. ``x-forwarded-client-cert: By=http://frontend.lyft.com;Hash=468ed33be74eee6556d90c0149c1309e9ba61d6425303443c0748a02dd8de688;Subject="/C=US/ST=CA/L=San Francisco/OU=Lyft/CN=Test Client";SAN=http://testclient.lyft.com``
2. ``x-forwarded-client-cert: By=http://frontend.lyft.com;Hash=468ed33be74eee6556d90c0149c1309e9ba61d6425303443c0748a02dd8de688;SAN=http://testclient.lyft.com,By=http://backend.lyft.com;Hash=9ba61d6425303443c0748a02dd8de688468ed33be74eee6556d90c0149c1309e;SAN=http://frontend.lyft.com``
How Envoy processes XFCC is specified by the
:ref:`forward_client_cert<config_http_conn_man_forward_client_cert>` and the
:ref:`set_current_client_cert_details<config_http_conn_man_set_current_client_cert_details>` HTTP
connection manager options. If *forward_client_cert* is unset, the XFCC header will be sanitized by
default.
.. _config_http_conn_man_headers_x-forwarded-for:
x-forwarded-for
---------------
*x-forwarded-for* (XFF) is a standard proxy header which indicates the IP addresses that a request has
flowed through on its way from the client to the server. A compliant proxy will *append* the IP
address of the nearest client to the XFF list before proxying the request. Some examples of XFF are:
1. ``x-forwarded-for: 50.0.0.1`` (single client)
2. ``x-forwarded-for: 50.0.0.1, 40.0.0.1`` (external proxy hop)
3. ``x-forwarded-for: 50.0.0.1, 10.0.0.1`` (internal proxy hop)
Envoy will only append to XFF if the :ref:`use_remote_address
<config_http_conn_man_use_remote_address>` HTTP connection manager option is set to true. This means
that if *use_remote_address* is false, the connection manager operates in a transparent mode where
it does not modify XFF. This is needed for certain types of mesh deployments depending on whether
the Envoy in question is an edge node or an internal service node.
Envoy uses the final XFF contents to determine whether a request originated externally or
internally. This influences whether the :ref:`config_http_conn_man_headers_x-envoy-internal` header
is set.
A few very important notes about XFF:
1. Since IP addresses are appended to XFF, only the last address (furthest to the right) can be
trusted. More specifically, the first external (non-RFC1918) address from *the right* is the only
trustworthy address. Anything to the left of that can be spoofed. To make this easier to deal
with for analytics, etc., front Envoy will also set the
:ref:`config_http_conn_man_headers_x-envoy-external-address` header.
2. XFF is what Envoy uses to determine whether a request is internal origin or external origin. It
does this by checking to see if XFF contains a *single* IP address which is an RFC1918 address.
* **NOTE**: If an internal service proxies an external request to another internal service, and
includes the original XFF header, Envoy will append to it on egress if
:ref:`use_remote_address <config_http_conn_man_use_remote_address>` is set. This will cause
the other side to think the request is external. Generally, this is what is intended if XFF is
being forwarded. If it is not intended, do not forward XFF, and forward
:ref:`config_http_conn_man_headers_x-envoy-internal` instead.
* **NOTE**: If an internal service call is forwarded to another internal service (preserving XFF),
Envoy will not consider it internal. This is a known "bug" due to the simplification of how
XFF is parsed to determine if a request is internal. In this scenario, do not forward XFF and
allow Envoy to generate a new one with a single internal origin IP.
.. _config_http_conn_man_headers_x-forwarded-proto:
x-forwarded-proto
-----------------
It is a common case where a service wants to know what the originating protocol (HTTP or HTTPS) was
of the connection terminated by front/edge Envoy. *x-forwarded-proto* contains this information. It
will be set to either *http* or *https*.
.. _config_http_conn_man_headers_x-request-id:
x-request-id
------------
The *x-request-id* header is used by Envoy to uniquely identify a request as well as perform stable
access logging and tracing. Envoy will generate an *x-request-id* header for all external origin
requests (the header is sanitized). It will also generate an *x-request-id* header for internal
requests that do not already have one. This means that *x-request-id* can and should be propagated
between client applications in order to have stable IDs across the entire mesh. Due to the out of
process architecture of Envoy, the header cannot be automatically forwarded by Envoy itself. This
is one of the few areas where a thin client library is needed to perform this duty. How that is done
is out of scope for this documentation. If *x-request-id* is propagated across all hosts, the
following features are available:
* Stable :ref:`access logging <config_access_log>` via the
:ref:`runtime filter<config_http_con_manager_access_log_filters_runtime>`.
* Stable tracing when performing random sampling via the :ref:`tracing.random_sampling
<config_http_conn_man_runtime_random_sampling>` runtime setting or via forced tracing using the
:ref:`config_http_conn_man_headers_x-envoy-force-trace` and
:ref:`config_http_conn_man_headers_x-client-trace-id` headers.
.. _config_http_conn_man_headers_x-ot-span-context:
x-ot-span-context
-----------------
The *x-ot-span-context* HTTP header is used by Envoy to establish proper parent-child relationships
between tracing spans. This header can be used with both LightStep and Zipkin tracers.
For example, an egress span is a child of an ingress
span (if the ingress span was present). Envoy injects the *x-ot-span-context* header on ingress requests and
forwards it to the local service. Envoy relies on the application to propagate *x-ot-span-context* on
the egress call to an upstream. See more on tracing :ref:`here <arch_overview_tracing>`.
.. _config_http_conn_man_headers_x-b3-traceid:
x-b3-traceid
------------
The *x-b3-traceid* HTTP header is used by the Zipkin tracer in Envoy.
The TraceId is 64-bit in length and indicates the overall ID of the
trace. Every span in a trace shares this ID. See more on zipkin tracing
`here <https://github.com/openzipkin/b3-propagation>`_.
.. _config_http_conn_man_headers_x-b3-spanid:
x-b3-spanid
-----------
The *x-b3-spanid* HTTP header is used by the Zipkin tracer in Envoy.
The SpanId is 64-bit in length and indicates the position of the current
operation in the trace tree. The value should not be interpreted: it may or
may not be derived from the value of the TraceId. See more on zipkin tracing
`here <https://github.com/openzipkin/b3-propagation>`_.
.. _config_http_conn_man_headers_x-b3-parentspanid:
x-b3-parentspanid
-----------------
The *x-b3-parentspanid* HTTP header is used by the Zipkin tracer in Envoy.
The ParentSpanId is 64-bit in length and indicates the position of the
parent operation in the trace tree. When the span is the root of the trace
tree, the ParentSpanId is absent. See more on zipkin tracing
`here <https://github.com/openzipkin/b3-propagation>`_.
.. _config_http_conn_man_headers_x-b3-sampled:
x-b3-sampled
------------
The *x-b3-sampled* HTTP header is used by the Zipkin tracer in Envoy.
When the Sampled flag is 1, the span will be reported to the tracing
system. Once Sampled is set to 0 or 1, the same
value should be consistently sent downstream. See more on zipkin tracing
`here <https://github.com/openzipkin/b3-propagation>`_.
.. _config_http_conn_man_headers_x-b3-flags:
x-b3-flags
----------
The *x-b3-flags* HTTP header is used by the Zipkin tracer in Envoy.
They encode one or more options. For example, Debug is encoded as
``X-B3-Flags: 1``. See more on zipkin tracing
`here <https://github.com/openzipkin/b3-propagation>`_.

@@ -0,0 +1,226 @@
.. _config_http_conn_man:
HTTP connection manager
=======================
* HTTP connection manager :ref:`architecture overview <arch_overview_http_conn_man>`.
* HTTP protocols :ref:`architecture overview <arch_overview_http_protocols>`.
.. code-block:: json
{
"name": "http_connection_manager",
"config": {
"codec_type": "...",
"stat_prefix": "...",
"rds": "{...}",
"route_config": "{...}",
"filters": [],
"add_user_agent": "...",
"tracing": "{...}",
"http1_settings": "{...}",
"http2_settings": "{...}",
"server_name": "...",
"idle_timeout_s": "...",
"drain_timeout_ms": "...",
"access_log": [],
"use_remote_address": "...",
"forward_client_cert": "...",
"set_current_client_cert": "...",
"generate_request_id": "..."
}
}
.. _config_http_conn_man_codec_type:
codec_type
*(required, string)* Supplies the type of codec that the connection manager should use. Possible
values are:
http1
The connection manager will assume that the client is speaking HTTP/1.1.
http2
The connection manager will assume that the client is speaking HTTP/2 (Envoy does not require
HTTP/2 to take place over TLS or to use ALPN. Prior knowledge is allowed).
auto
For every new connection, the connection manager will determine which codec to use. This mode
supports both ALPN for TLS listeners as well as protocol inference for plaintext listeners.
If ALPN data is available, it is preferred, otherwise protocol inference is used. In almost
all cases, this is the right option to choose for this setting.
.. _config_http_conn_man_stat_prefix:
stat_prefix
*(required, string)* The human readable prefix to use when emitting statistics for the
connection manager. See the :ref:`statistics <config_http_conn_man_stats>` documentation
for more information.
.. _config_http_conn_man_rds_option:
:ref:`rds <config_http_conn_man_rds>`
*(sometimes required, object)* The connection manager configuration must specify one of *rds* or
*route_config*. If *rds* is specified, the connection manager's route table will be dynamically
loaded via the RDS API. See the :ref:`documentation <config_http_conn_man_rds>` for more
information.
.. _config_http_conn_man_route_config:
:ref:`route_config <config_http_conn_man_route_table>`
*(sometimes required, object)* The connection manager configuration must specify one of *rds* or
*route_config*. If *route_config* is specified, the :ref:`route table <arch_overview_http_routing>`
for the connection manager is static and is specified in this property.
:ref:`filters <config_http_conn_man_filters>`
*(required, array)* A list of individual :ref:`HTTP filters <arch_overview_http_filters>` that
make up the filter chain for requests made to the connection manager. Order matters as the filters
are processed sequentially as request events happen.
.. _config_http_conn_man_add_user_agent:
add_user_agent
*(optional, boolean)* Whether the connection manager manipulates the
:ref:`config_http_conn_man_headers_user-agent` and
:ref:`config_http_conn_man_headers_downstream-service-cluster` headers. See the linked
documentation for more information. Defaults to false.
:ref:`tracing <config_http_conn_man_tracing>`
*(optional, object)* Presence of the object defines whether the connection manager
emits :ref:`tracing <arch_overview_tracing>` data to the :ref:`configured tracing provider <config_tracing>`.
.. _config_http_conn_man_http1_settings:
http1_settings
*(optional, object)* Additional HTTP/1 settings that are passed to the HTTP/1 codec.
allow_absolute_url
*(optional, boolean)* Handle HTTP requests with absolute URLs. Such requests are generally sent
by clients to forward/explicit proxies. This allows clients to configure Envoy as their HTTP
proxy. On Unix, for example, this is typically done by setting the *http_proxy* environment
variable.
.. _config_http_conn_man_http2_settings:
http2_settings
*(optional, object)* Additional HTTP/2 settings that are passed directly to the HTTP/2 codec.
Currently supported settings are:
hpack_table_size
*(optional, integer)* `Maximum table size <http://httpwg.org/specs/rfc7541.html#rfc.section.4.2>`_
(in octets) that the encoder is permitted to use for
the dynamic HPACK table. Valid values range from 0 to 4294967295 (2^32 - 1) and defaults to 4096.
0 effectively disables header compression.
max_concurrent_streams
*(optional, integer)* `Maximum concurrent streams
<http://httpwg.org/specs/rfc7540.html#rfc.section.5.1.2>`_
allowed for peer on one HTTP/2 connection.
Valid values range from 1 to 2147483647 (2^31 - 1) and defaults to 2147483647.
.. _config_http_conn_man_http2_settings_initial_stream_window_size:
initial_stream_window_size
*(optional, integer)* `Initial stream-level flow-control window
<http://httpwg.org/specs/rfc7540.html#rfc.section.6.9.2>`_ size. Valid values range from 65535
(2^16 - 1, HTTP/2 default) to 2147483647 (2^31 - 1, HTTP/2 maximum) and defaults to 268435456
(256 * 1024 * 1024).
NOTE: 65535 is the initial window size from the HTTP/2 specification. Envoy currently only
supports increasing the default window size, so it is also the minimum.
This field also acts as a soft limit on the number of bytes Envoy will buffer per-stream in the
HTTP/2 codec buffers. Once the buffer reaches this limit, watermark callbacks will fire to
stop the flow of data to the codec buffers.
initial_connection_window_size
*(optional, integer)* Similar to :ref:`initial_stream_window_size
<config_http_conn_man_http2_settings_initial_stream_window_size>`, but for connection-level flow-control
window. Currently, this has the same minimum/maximum/default as :ref:`initial_stream_window_size
<config_http_conn_man_http2_settings_initial_stream_window_size>`.
These are the same options available in the upstream cluster :ref:`http2_settings
<config_cluster_manager_cluster_http2_settings>` option.
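For illustration, a hedged fragment of the connection manager *config* that tunes two of these
settings; the values are arbitrary and only meant to show the shape of the object:

.. code-block:: json

  {
    "http2_settings": {
      "hpack_table_size": 8192,
      "max_concurrent_streams": 1024
    }
  }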
.. _config_http_conn_man_server_name:
server_name
*(optional, string)* An optional override that the connection manager will write to the
:ref:`config_http_conn_man_headers_server` header in responses. If not set, the default is
*envoy*.
idle_timeout_s
*(optional, integer)* The idle timeout in seconds for connections managed by the connection
manager. The idle timeout is defined as the period in which there are no active requests. If not
set, there is no idle timeout. When the idle timeout is reached the connection will be closed. If
the connection is an HTTP/2 connection a drain sequence will occur prior to closing the
connection. See :ref:`drain_timeout_ms <config_http_conn_man_drain_timeout_ms>`.
.. _config_http_conn_man_drain_timeout_ms:
drain_timeout_ms
*(optional, integer)* The time in milliseconds that Envoy will wait between sending an HTTP/2
"shutdown notification" (GOAWAY frame with max stream ID) and a final GOAWAY frame. This is used
so that Envoy provides a grace period for new streams that race with the final GOAWAY frame.
During this grace period, Envoy will continue to accept new streams. After the grace period, a
final GOAWAY frame is sent and Envoy will start refusing new streams. Draining occurs both
when a connection hits the idle timeout or during general server draining. The default grace
period is 5000 milliseconds (5 seconds) if this option is not specified.
:ref:`access_log <config_access_log>`
*(optional, array)* Configuration for :ref:`HTTP access logs <arch_overview_access_logs>`
emitted by the connection manager.
.. _config_http_conn_man_use_remote_address:
use_remote_address
*(optional, boolean)* If set to true, the connection manager will use the real remote address
of the client connection when determining internal versus external origin and manipulating
various headers. If set to false or absent, the connection manager will use the
:ref:`config_http_conn_man_headers_x-forwarded-for` HTTP header. See the documentation for
:ref:`config_http_conn_man_headers_x-forwarded-for`,
:ref:`config_http_conn_man_headers_x-envoy-internal`, and
:ref:`config_http_conn_man_headers_x-envoy-external-address` for more information.
.. _config_http_conn_man_forward_client_cert:
forward_client_cert
*(optional, string)* How to handle the
:ref:`config_http_conn_man_headers_x-forwarded-client-cert` (XFCC) HTTP header.
Possible values are:
1. **sanitize**: Do not send the XFCC header to the next hop. This is the default value.
2. **forward_only**: When the client connection is mTLS (Mutual TLS), forward the XFCC header in the request.
3. **always_forward_only**: Always forward the XFCC header in the request, regardless of whether the client connection is mTLS.
4. **append_forward**: When the client connection is mTLS, append the client certificate information to the request's XFCC header and forward it.
5. **sanitize_set**: When the client connection is mTLS, reset the XFCC header with the client certificate information and send it to the next hop.
For the format of the XFCC header, please refer to
:ref:`config_http_conn_man_headers_x-forwarded-client-cert`.
.. _config_http_conn_man_set_current_client_cert_details:
set_current_client_cert_details
*(optional, array)* A list of strings, possible values are *Subject* and *SAN*. This field is
valid only when *forward_client_cert* is *append_forward* or *sanitize_set* and the client
connection is mTLS. It specifies the fields in the client certificate to be forwarded. Note that
in the :ref:`config_http_conn_man_headers_x-forwarded-client-cert` header, `Hash` is always set,
and `By` is always set when the client certificate presents the SAN value.
generate_request_id
*(optional, boolean)* Whether the connection manager will generate the
:ref:`config_http_conn_man_headers_x-request-id` header if it does not exist. This defaults to
*true*. Generating a random UUID4 is expensive so in high throughput scenarios where this
feature is not desired it can be disabled.
.. toctree::
:hidden:
route_config/route_config
filters
tracing
headers
header_sanitizing
stats
runtime
rds

@@ -0,0 +1,86 @@
.. _config_http_conn_man_rds:
Route discovery service
=======================
The route discovery service (RDS) API is an optional API that Envoy will call to dynamically fetch
:ref:`route configurations <config_http_conn_man_route_table>`. A route configuration includes
HTTP header modifications, virtual hosts, and the individual route entries contained within each
virtual host. Each :ref:`HTTP connection manager filter <config_http_conn_man>` can independently
fetch its own route configuration via the API.
.. code-block:: json
{
"cluster": "...",
"route_config_name": "...",
"refresh_delay_ms": "..."
}
cluster
*(required, string)* The name of an upstream :ref:`cluster <config_cluster_manager_cluster>` that
hosts the route discovery service. The cluster must run a REST service that implements the
:ref:`RDS HTTP API <config_http_conn_man_rds_api>`. NOTE: This is the *name* of a cluster defined
in the :ref:`cluster manager <config_cluster_manager>` configuration, not the full definition of
a cluster as in the case of SDS and CDS.
route_config_name
*(required, string)* The name of the route configuration. This name will be passed to the
:ref:`RDS HTTP API <config_http_conn_man_rds_api>`. This allows an Envoy configuration with
multiple HTTP listeners (and associated HTTP connection manager filters) to use different route
configurations. By default, the maximum length of the name is limited to 60 characters. This
limit can be increased by setting the :option:`--max-obj-name-len` command line argument to the
desired value.
refresh_delay_ms
*(optional, integer)* The delay, in milliseconds, between fetches to the RDS API. Envoy will add
an additional random jitter to the delay that is between zero and *refresh_delay_ms*
milliseconds. Thus the longest possible refresh delay is 2 \* *refresh_delay_ms*. Default
value is 30000ms (30 seconds).
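Combining the fields above, a hedged example of an RDS configuration; the cluster and route
configuration names are hypothetical:

.. code-block:: json

  {
    "cluster": "rds",
    "route_config_name": "ingress_routes",
    "refresh_delay_ms": 30000
  }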
.. _config_http_conn_man_rds_api:
REST API
--------
.. http:get:: /v1/routes/(string: route_config_name)/(string: service_cluster)/(string: service_node)
Asks the route discovery service to return the route configuration for a particular
`route_config_name`, `service_cluster`, and `service_node`. `route_config_name` corresponds to the
RDS configuration parameter above. `service_cluster` corresponds to the :option:`--service-cluster`
CLI option. `service_node` corresponds to the :option:`--service-node` CLI option. Responses are a
single JSON object that contains a route configuration as defined in the :ref:`route configuration
documentation <config_http_conn_man_route_table>`.
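For illustration, a sketch of a minimal response body; the virtual host and cluster names are
hypothetical, and the full schema is described in the :ref:`route configuration documentation
<config_http_conn_man_route_table>`:

.. code-block:: json

  {
    "virtual_hosts": [
      {
        "name": "backend",
        "domains": ["*"],
        "routes": [{"prefix": "/", "cluster": "local_service"}]
      }
    ]
  }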
A new route configuration will be gracefully swapped in such that existing requests are not
affected. This means that when a request starts, it sees a consistent snapshot of the route
configuration that does not change for the duration of the request. Thus, if an update changes a
timeout for example, only new requests will use the updated timeout value.
As a performance optimization, Envoy hashes the route configuration it receives from the RDS API and
will only perform a full reload if the hash value changes.
.. attention::
Route configurations that are loaded via RDS are *not* checked to see if referenced clusters are
known to the :ref:`cluster manager <config_cluster_manager>`. The RDS API has been designed to
work alongside the :ref:`CDS API <config_cluster_manager_cds>` such that Envoy assumes eventually
consistent updates. If a route references an unknown cluster a 404 response will be returned by
the router filter.
Statistics
----------
RDS has a statistics tree rooted at *http.<stat_prefix>.rds.<route_config_name>.*.
Any ``:`` character in the ``route_config_name`` name gets replaced with ``_`` in the
stats tree. The stats tree contains the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
config_reload, Counter, Total API fetches that resulted in a config reload due to a different config
update_attempt, Counter, Total API fetches attempted
update_success, Counter, Total API fetches completed successfully
update_failure, Counter, Total API fetches that failed (either network or schema errors)
version, Gauge, Hash of the contents from the last successful API fetch

@@ -0,0 +1,255 @@
.. _config_http_conn_man_route_table_rate_limit_config:
Rate limit configuration
========================
Global rate limiting :ref:`architecture overview <arch_overview_rate_limit>`.
.. code-block:: json
{
"stage": "...",
"disable_key": "...",
"actions": []
}
stage
*(optional, integer)* Refers to the stage set in the filter. The rate limit configuration
only applies to filters with the same stage number. The default stage number is 0.
**NOTE:** The filter supports a range of 0 - 10 inclusively for stage numbers.
disable_key
*(optional, string)* The key to be set in runtime to disable this rate limit configuration.
actions
*(required, array)* A list of actions that are to be applied for this rate limit configuration.
Order matters as the actions are processed sequentially and the descriptor is composed by
appending descriptor entries in that sequence. If an action cannot append a descriptor entry,
no descriptor is generated for the configuration. See :ref:`composing actions
<config_http_conn_man_route_table_rate_limit_composing_actions>` for additional documentation.
.. _config_http_conn_man_route_table_rate_limit_actions:
Actions
-------
.. code-block:: json
{
"type": "..."
}
type
*(required, string)* The type of rate limit action to perform. The currently supported action
types are *source_cluster*, *destination_cluster*, *request_headers*, *remote_address*,
*generic_key* and *header_value_match*.
Source Cluster
^^^^^^^^^^^^^^
.. code-block:: json
{
"type": "source_cluster"
}
The following descriptor entry is appended to the descriptor:
.. code-block:: cpp
("source_cluster", "<local service cluster>")
<local service cluster> is derived from the :option:`--service-cluster` option.
Destination Cluster
^^^^^^^^^^^^^^^^^^^
.. code-block:: json
{
"type": "destination_cluster"
}
The following descriptor entry is appended to the descriptor:
.. code-block:: cpp
("destination_cluster", "<routed target cluster>")
Once a request matches against a route table rule, a routed cluster is determined by one of the
following :ref:`route table configuration <config_http_conn_man_route_table_route_cluster>`
settings:
* :ref:`cluster <config_http_conn_man_route_table_route_cluster>` indicates the upstream cluster
to route to.
* :ref:`weighted_clusters <config_http_conn_man_route_table_route_config_weighted_clusters>`
chooses a cluster randomly from a set of clusters with attributed weight.
* :ref:`cluster_header<config_http_conn_man_route_table_route_cluster_header>` indicates which
header in the request contains the target cluster.
Request Headers
^^^^^^^^^^^^^^^
.. code-block:: json
{
"type": "request_headers",
"header_name": "...",
"descriptor_key" : "..."
}
header_name
*(required, string)* The header name to be queried from the request headers. The header's value is
used to populate the value of the descriptor entry for the descriptor_key.
descriptor_key
*(required, string)* The key to use in the descriptor entry.
The following descriptor entry is appended when a header contains a key that matches the
*header_name*:
.. code-block:: cpp
("<descriptor_key>", "<header_value_queried_from_header>")
Remote Address
^^^^^^^^^^^^^^
.. code-block:: json
{
"type": "remote_address"
}
The following descriptor entry is appended to the descriptor and is populated using the trusted
address from :ref:`x-forwarded-for <config_http_conn_man_headers_x-forwarded-for>`:
.. code-block:: cpp
("remote_address", "<trusted address from x-forwarded-for>")
Generic Key
^^^^^^^^^^^
.. code-block:: json
{
"type": "generic_key",
"descriptor_value" : "..."
}
descriptor_value
*(required, string)* The value to use in the descriptor entry.
The following descriptor entry is appended to the descriptor:
.. code-block:: cpp
("generic_key", "<descriptor_value>")
Header Value Match
^^^^^^^^^^^^^^^^^^
.. code-block:: json
{
"type": "header_value_match",
"descriptor_value" : "...",
"expect_match" : "...",
"headers" : []
}
descriptor_value
*(required, string)* The value to use in the descriptor entry.
expect_match
*(optional, boolean)* If set to true, the action will append a descriptor entry when the request
matches the :ref:`headers<config_http_conn_man_route_table_route_headers>`. If set to false,
the action will append a descriptor entry when the request does not match the
:ref:`headers<config_http_conn_man_route_table_route_headers>`. The default value is true.
:ref:`headers<config_http_conn_man_route_table_route_headers>`
*(required, array)* Specifies a set of headers that the rate limit action should match on. The
action will check the request's headers against all the specified headers in the config. A match
will happen if all the headers in the config are present in the request with the same values (or
based on presence if the ``value`` field is not in the config).
The following descriptor entry is appended to the descriptor:
.. code-block:: cpp
("header_match", "<descriptor_value>")
.. _config_http_conn_man_route_table_rate_limit_composing_actions:
Composing Actions
-----------------
Each action populates a descriptor entry. A vector of descriptor entries composes a descriptor. To
create more complex rate limit descriptors, actions can be composed in any order. The descriptor
will be populated in the order the actions are specified in the configuration.
Example 1
^^^^^^^^^
For example, to generate the following descriptor:
.. code-block:: cpp
("generic_key", "some_value")
("source_cluster", "from_cluster")
The configuration would be:
.. code-block:: json
{
"actions" : [
{
"type" : "generic_key",
"descriptor_value" : "some_value"
},
{
"type" : "source_cluster"
}
]
}
Example 2
^^^^^^^^^
If an action doesn't append a descriptor entry, no descriptor is generated for
the configuration.
For the following configuration:
.. code-block:: json
{
"actions" : [
{
"type" : "generic_key",
"descriptor_value" : "some_value"
},
{
"type" : "remote_address"
},
{
"type" : "souce_cluster"
}
]
}
If a request did not set :ref:`x-forwarded-for<config_http_conn_man_headers_x-forwarded-for>`,
no descriptor is generated.
If a request sets :ref:`x-forwarded-for<config_http_conn_man_headers_x-forwarded-for>`,
the following descriptor is generated:
.. code-block:: cpp
("generic_key", "some_value")
("remote_address", "<trusted address from x-forwarded-for>")
("source_cluster", "from_cluster")

@@ -0,0 +1,509 @@
.. _config_http_conn_man_route_table_route:
Route
=====
A route is both a specification of how to match a request as well as an indication of what to do
next (e.g., redirect, forward, rewrite, etc.).
.. attention::
Envoy supports routing on HTTP method via :ref:`header matching
<config_http_conn_man_route_table_route_headers>`.
.. code-block:: json
{
"prefix": "...",
"path": "...",
"regex": "...",
"cluster": "...",
"cluster_header": "...",
"weighted_clusters" : "{...}",
"host_redirect": "...",
"path_redirect": "...",
"prefix_rewrite": "...",
"host_rewrite": "...",
"auto_host_rewrite": "...",
"case_sensitive": "...",
"use_websocket": "...",
"timeout_ms": "...",
"runtime": "{...}",
"retry_policy": "{...}",
"shadow": "{...}",
"priority": "...",
"headers": [],
"rate_limits": [],
"include_vh_rate_limits" : "...",
"hash_policy": "{...}",
"request_headers_to_add" : [],
"opaque_config": [],
"cors": "{...}",
"decorator" : "{...}"
}
prefix
*(sometimes required, string)* If specified, the route is a prefix rule meaning that the prefix
must match the beginning of the :path header. One of *prefix*, *path*, or *regex* must be specified.
path
*(sometimes required, string)* If specified, the route is an exact path rule meaning that the path
must exactly match the :path header once the query string is removed. One of *prefix*, *path*, or
*regex* must be specified.
regex
*(sometimes required, string)* If specified, the route is a regular expression rule meaning that the
regex must match the :path header once the query string is removed. The entire path (without the
query string) must match the regex. The rule will not match if only a subsequence of the :path header
matches the regex. The regex grammar is defined `here
<http://en.cppreference.com/w/cpp/regex/ecmascript>`_. One of *prefix*, *path*, or
*regex* must be specified.
Examples:
* The regex */b[io]t* matches the path */bit*
* The regex */b[io]t* matches the path */bot*
* The regex */b[io]t* does not match the path */bite*
* The regex */b[io]t* does not match the path */bit/bot*
:ref:`cors <config_http_filters_cors>`
*(optional, object)* Specifies the route's CORS policy.
.. _config_http_conn_man_route_table_route_cluster:
cluster
*(sometimes required, string)* If the route is not a redirect (*host_redirect* and/or
*path_redirect* is not specified), one of *cluster*, *cluster_header*, or *weighted_clusters* must
be specified. When *cluster* is specified, its value indicates the upstream cluster to which the
request should be forwarded.
.. _config_http_conn_man_route_table_route_cluster_header:
cluster_header
*(sometimes required, string)* If the route is not a redirect (*host_redirect* and/or
*path_redirect* is not specified), one of *cluster*, *cluster_header*, or *weighted_clusters* must
be specified. When *cluster_header* is specified, Envoy will determine the cluster to route to
by reading the value of the HTTP header named by *cluster_header* from the request headers.
If the header is not found or the referenced cluster does not exist, Envoy will return a 404
response.
.. attention::
Internally, Envoy always uses the HTTP/2 *:authority* header to represent the HTTP/1 *Host*
header. Thus, if attempting to match on *Host*, match on *:authority* instead.
.. _config_http_conn_man_route_table_route_config_weighted_clusters:
:ref:`weighted_clusters <config_http_conn_man_route_table_route_weighted_clusters>`
*(sometimes required, object)* If the route is not a redirect (*host_redirect* and/or
*path_redirect* is not specified), one of *cluster*, *cluster_header*, or *weighted_clusters* must
be specified. With the *weighted_clusters* option, multiple upstream clusters can be specified for
a given route. The request is forwarded to one of the upstream clusters based on weights assigned
to each cluster. See :ref:`traffic splitting <config_http_conn_man_route_table_traffic_splitting_split>`
for additional documentation.
.. _config_http_conn_man_route_table_route_host_redirect:
host_redirect
*(sometimes required, string)* Indicates that the route is a redirect rule. If there is a match,
a 301 redirect response will be sent which swaps the host portion of the URL with this value.
*path_redirect* can also be specified along with this option.
.. _config_http_conn_man_route_table_route_path_redirect:
path_redirect
*(sometimes required, string)* Indicates that the route is a redirect rule. If there is a match,
a 301 redirect response will be sent which swaps the path portion of the URL with this value.
*host_redirect* can also be specified along with this option. The router filter will place
the original path before rewrite into the :ref:`x-envoy-original-path
<config_http_filters_router_x-envoy-original-path>` header.
.. _config_http_conn_man_route_table_route_prefix_rewrite:
prefix_rewrite
*(optional, string)* Indicates that during forwarding, the matched prefix (or path) should be
swapped with this value. When using regex path matching, the entire path (not including
the query string) will be swapped with this value. This option allows application URLs to be
rooted at a different path from those exposed at the reverse proxy layer.
.. _config_http_conn_man_route_table_route_host_rewrite:
host_rewrite
*(optional, string)* Indicates that during forwarding, the host header will be swapped with this
value.
.. _config_http_conn_man_route_table_route_auto_host_rewrite:
auto_host_rewrite
*(optional, boolean)* Indicates that during forwarding, the host header will be swapped with the
hostname of the upstream host chosen by the cluster manager. This option is applicable only when
the destination cluster for a route is of type *strict_dns* or *logical_dns*. Setting this to true
with other cluster types has no effect. *auto_host_rewrite* and *host_rewrite* are mutually exclusive
options. Only one can be specified.
.. _config_http_conn_man_route_table_route_case_sensitive:
case_sensitive
*(optional, boolean)* Indicates that prefix/path matching should be case sensitive. The default
is true.
.. _config_http_conn_man_route_table_route_use_websocket:
use_websocket
*(optional, boolean)* Indicates that an HTTP/1.1 client connection to this particular route
should be allowed to upgrade to a WebSocket connection. The default is false.
.. attention::
If set to true, Envoy will expect the first request matching this route to contain WebSocket
upgrade headers. If the headers are not present, the connection will be processed as a normal
HTTP/1.1 connection. If the upgrade headers are present, Envoy will set up plain TCP proxying
between the client and the upstream server. Hence, an upstream server that rejects the WebSocket
upgrade request is also responsible for closing the associated connection. Until then, Envoy will
continue to proxy data from the client to the upstream server.
Redirects, timeouts and retries are not supported on requests with WebSocket upgrade headers.
.. _config_http_conn_man_route_table_route_timeout:
timeout_ms
*(optional, integer)* Specifies the timeout for the route. If not specified, the default is 15s.
Note that this timeout includes all retries. See also
:ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms`,
:ref:`config_http_filters_router_x-envoy-upstream-rq-per-try-timeout-ms`, and the
:ref:`retry overview <arch_overview_http_routing_retry>`.
:ref:`runtime <config_http_conn_man_route_table_route_runtime>`
*(optional, object)* Indicates that the route should additionally match on a runtime key.
:ref:`retry_policy <config_http_conn_man_route_table_route_retry>`
*(optional, object)* Indicates that the route has a retry policy.
:ref:`shadow <config_http_conn_man_route_table_route_shadow>`
*(optional, object)* Indicates that the route has a shadow policy.
priority
*(optional, string)* Optionally specifies the :ref:`routing priority
<arch_overview_http_routing_priority>`.
:ref:`headers <config_http_conn_man_route_table_route_headers>`
*(optional, array)* Specifies a set of headers that the route should match on. The router will
check the request's headers against all the specified headers in the route config. A match will
happen if all the headers in the route are present in the request with the same values (or based
on presence if the ``value`` field is not in the config).
:ref:`request_headers_to_add <config_http_conn_man_route_table_route_add_req_headers>`
*(optional, array)* Specifies a set of headers that will be added to requests matching this route.
:ref:`opaque_config <config_http_conn_man_route_table_opaque_config>`
*(optional, array)* Specifies a set of optional route configuration values that can be accessed by filters.
.. _config_http_conn_man_route_table_route_rate_limits:
:ref:`rate_limits <config_http_conn_man_route_table_rate_limit_config>`
*(optional, array)* Specifies a set of rate limit configurations that could be applied to the
route.
.. _config_http_conn_man_route_table_route_include_vh:
include_vh_rate_limits
*(optional, boolean)* Specifies if the rate limit filter should include the virtual host rate
limits. By default, if the route has configured rate limits, the virtual host
:ref:`rate_limits <config_http_conn_man_route_table_rate_limit_config>` are not applied to the
request.
:ref:`hash_policy <config_http_conn_man_route_table_hash_policy>`
*(optional, object)* Specifies the route's hashing policy if the upstream cluster uses a hashing
:ref:`load balancer <arch_overview_load_balancing_types>`.
:ref:`decorator <config_http_conn_man_route_table_decorator>`
*(optional, object)* Specifies the route's decorator used to enhance information reported about
the matched request.
.. _config_http_conn_man_route_table_route_runtime:
Runtime
-------
A :ref:`runtime <arch_overview_runtime>` route configuration can be used to roll out route changes
in a gradual manner without full code/config deploys. Refer to
:ref:`traffic shifting <config_http_conn_man_route_table_traffic_splitting_shift>` docs
for additional documentation.
.. code-block:: json
{
"key": "...",
"default": "..."
}
key
*(required, string)* Specifies the runtime key name that should be consulted to determine whether
the route matches or not. See the :ref:`runtime documentation <operations_runtime>` for how key
names map to the underlying implementation.
.. _config_http_conn_man_route_table_route_runtime_default:
default
*(required, integer)* An integer between 0-100. Every time the route is considered for a match,
a random number between 0-99 is selected. If the number is <= the value found in the *key*
(checked first) or, if the key is not present, the default value, the route is a match (assuming
everything else about the route matches).
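For example, a sketch using a hypothetical runtime key; with these values, roughly 25% of requests
match the route whenever the key is absent from runtime:

.. code-block:: json

  {
    "key": "routing.shift_to_v2",
    "default": 25
  }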
.. _config_http_conn_man_route_table_route_retry:
Retry policy
------------
HTTP retry :ref:`architecture overview <arch_overview_http_routing_retry>`.
.. code-block:: json
{
"retry_on": "...",
"num_retries": "...",
"per_try_timeout_ms" : "..."
}
retry_on
*(required, string)* specifies the conditions under which retry takes place. These are the same
conditions documented for :ref:`config_http_filters_router_x-envoy-retry-on` and
:ref:`config_http_filters_router_x-envoy-retry-grpc-on`.
num_retries
*(optional, integer)* specifies the allowed number of retries. This parameter is optional and
defaults to 1. These are the same conditions documented for
:ref:`config_http_filters_router_x-envoy-max-retries`.
per_try_timeout_ms
*(optional, integer)* specifies a non-zero timeout per retry attempt. This parameter is optional.
The same conditions documented for
:ref:`config_http_filters_router_x-envoy-upstream-rq-per-try-timeout-ms` apply.
**Note:** If left unspecified, Envoy will use the global
:ref:`route timeout <config_http_conn_man_route_table_route_timeout>` for the request.
Consequently, when using a :ref:`5xx <config_http_filters_router_x-envoy-retry-on>` based
retry policy, a request that times out will not be retried as the total timeout budget
would have been exhausted.
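Combining these fields, a hedged example that retries 5xx responses up to three times, bounding
each attempt at one second:

.. code-block:: json

  {
    "retry_on": "5xx",
    "num_retries": 3,
    "per_try_timeout_ms": 1000
  }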
.. _config_http_conn_man_route_table_route_shadow:
Shadow
------
The router is capable of shadowing traffic from one cluster to another. The current implementation
is "fire and forget," meaning Envoy will not wait for the shadow cluster to respond before returning
the response from the primary cluster. All normal statistics are collected for the shadow
cluster making this feature useful for testing.
During shadowing, the host/authority header is altered such that *-shadow* is appended. This is
useful for logging. For example, *cluster1* becomes *cluster1-shadow*.
.. code-block:: json
{
"cluster": "...",
"runtime_key": "..."
}
cluster
*(required, string)* Specifies the cluster that requests will be shadowed to. The cluster must
exist in the :ref:`cluster manager configuration <config_cluster_manager>`.
runtime_key
*(optional, string)* If not specified, **all** requests to the target cluster will be shadowed.
If specified, Envoy will look up the runtime key to get the % of requests to shadow. Valid values are
from 0 to 10000, allowing for increments of 0.01% of requests to be shadowed. If the runtime key
is specified in the configuration but not present in runtime, 0 is the default and thus 0% of
requests will be shadowed.
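As an example, the following sketch shadows a runtime-controlled fraction of traffic (the cluster
names and runtime key are illustrative):

.. code-block:: json

  {
    "prefix": "/",
    "cluster": "production",
    "shadow": {
      "cluster": "production_shadow",
      "runtime_key": "routing.shadow.production"
    }
  }

Setting the runtime key to 100 would shadow 1% of requests (100 out of 10000) to the
*production_shadow* cluster.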
.. _config_http_conn_man_route_table_route_headers:
Headers
-------
.. code-block:: json
{
"name": "...",
"value": "...",
"regex": "..."
}
name
*(required, string)* Specifies the name of the header in the request.
value
*(optional, string)* Specifies the value of the header. If the value is absent a request that has
the *name* header will match, regardless of the header's value.
regex
*(optional, boolean)* Specifies whether the header value is a regular
expression or not. Defaults to false. The entire request header value must match the regex. The
rule will not match if only a subsequence of the request header value matches the regex. The
regex grammar used in the value field is defined
`here <http://en.cppreference.com/w/cpp/regex/ecmascript>`_.
Examples:
* The regex *\d{3}* matches the value *123*
* The regex *\d{3}* does not match the value *1234*
* The regex *\d{3}* does not match the value *123.456*
.. attention::
Internally, Envoy always uses the HTTP/2 *:authority* header to represent the HTTP/1 *Host*
header. Thus, if attempting to match on *Host*, match on *:authority* instead.
.. attention::
To route on HTTP method, use the special HTTP/2 *:method* header. This works for both
HTTP/1 and HTTP/2 as Envoy normalizes headers. E.g.,
.. code-block:: json
{
"name": ":method",
"value": "POST"
}
.. _config_http_conn_man_route_table_route_weighted_clusters:
Weighted Clusters
-----------------
Compared to the ``cluster`` field that specifies a single upstream cluster as the target
of a request, the ``weighted_clusters`` option allows for specification of multiple upstream clusters
along with weights that indicate the **percentage** of traffic to be forwarded to each cluster.
The router selects an upstream cluster based on the weights.
.. code-block:: json
{
"clusters": [],
"runtime_key_prefix" : "..."
}
clusters
*(required, array)* Specifies one or more upstream clusters associated with the route.
.. code-block:: json
{
"name" : "...",
"weight": "..."
}
name
*(required, string)* Name of the upstream cluster. The cluster must exist in the
:ref:`cluster manager configuration <config_cluster_manager>`.
weight
*(required, integer)* An integer between 0-100. When a request matches the route,
the choice of an upstream cluster is determined by its weight. The sum of
weights across all entries in the ``clusters`` array must add up to 100.
runtime_key_prefix
*(optional, string)* Specifies the runtime key prefix that should be used to construct the runtime
keys associated with each cluster. When the ``runtime_key_prefix`` is specified, the router will
look for weights associated with each upstream cluster under the key
``runtime_key_prefix + "." + cluster[i].name`` where ``cluster[i]`` denotes an entry in the
``clusters`` array field. If the runtime key for the cluster does not exist, the value specified
in the configuration file will be used as the default weight.
See the :ref:`runtime documentation <operations_runtime>` for how key names map to the
underlying implementation.
**Note:** If the sum of runtime weights exceeds 100, the traffic splitting behavior
is undefined (although the request will be routed to one of the clusters).
.. _config_http_conn_man_route_table_hash_policy:
Hash policy
-----------
Specifies the route's hashing policy if the upstream cluster uses a hashing :ref:`load balancer
<arch_overview_load_balancing_types>`.
.. code-block:: json
{
"header_name": "..."
}
header_name
*(required, string)* The name of the request header that will be used to obtain the hash key. If
the request header is not present, the load balancer will use a random number as the hash,
effectively making the load balancing policy random.
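For example, a sketch of a route that hashes on an assumed session header, so that requests
carrying the same header value are sent to the same upstream host (given a hashing load balancer
on the cluster):

.. code-block:: json

  {
    "prefix": "/",
    "cluster": "backend",
    "hash_policy": {
      "header_name": "x-session-id"
    }
  }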
.. _config_http_conn_man_route_table_decorator:
Decorator
---------
Specifies the route's decorator.
.. code-block:: json
{
"operation": "..."
}
operation
*(required, string)* The operation name associated with the request matched to this route. If tracing is
enabled, this information will be used as the span name reported for this request. NOTE: For ingress
(inbound) requests, or egress (outbound) responses, this value may be overridden by the
:ref:`x-envoy-decorator-operation <config_http_filters_router_x-envoy-decorator-operation>` header.
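For example, a sketch of a route whose matched requests are reported under an illustrative
operation name:

.. code-block:: json

  {
    "prefix": "/rides",
    "cluster": "rides",
    "decorator": {
      "operation": "checkAvailability"
    }
  }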
.. _config_http_conn_man_route_table_route_add_req_headers:
Adding custom request headers
-----------------------------
Custom request headers can be added to a request that matches a specific route. The headers are
specified in the following form:
.. code-block:: json
[
{"key": "header1", "value": "value1"},
{"key": "header2", "value": "value2"}
]
Envoy supports adding static and dynamic values to the request headers. Supported dynamic values are:
%CLIENT_IP%
The original client IP which is already added by Envoy as a
:ref:`X-Forwarded-For <config_http_conn_man_headers_x-forwarded-for>` request header.
%PROTOCOL%
The original protocol which is already added by Envoy as a
:ref:`X-Forwarded-Proto <config_http_conn_man_headers_x-forwarded-proto>` request header.
An example for adding a dynamic value to the request headers is as follows:
.. code-block:: json
[
{"key": "X-Client-IP", "value":"%CLIENT_IP%"}
]
*Note:* Headers are appended to requests in the following order:
route-level headers, :ref:`virtual host level <config_http_conn_man_route_table_vhost_add_req_headers>`
headers and finally global :ref:`route_config <config_http_conn_man_route_table_add_req_headers>`
level headers.
.. _config_http_conn_man_route_table_opaque_config:
Opaque Config
-------------
Additional configuration can be provided to filters through the "Opaque Config" mechanism. A
list of properties is specified in the route config. The configuration is uninterpreted
by Envoy and can be accessed within a user-defined filter. The configuration is a generic
string map. Nested objects are not supported.
.. code-block:: json
[
{"...": "..."}
]

@ -0,0 +1,90 @@
.. _config_http_conn_man_route_table:
Route configuration
===================
* Routing :ref:`architecture overview <arch_overview_http_routing>`.
* HTTP :ref:`router filter <config_http_filters_router>`.
.. code-block:: json
{
"validate_clusters": "...",
"virtual_hosts": [],
"internal_only_headers": [],
"response_headers_to_add": [],
"response_headers_to_remove": [],
"request_headers_to_add": []
}
.. _config_http_conn_man_route_table_validate_clusters:
validate_clusters
*(optional, boolean)* An optional boolean that specifies whether the clusters that the route
table refers to will be validated by the cluster manager. If set to true and a route refers to
a non-existent cluster, the route table will not load. If set to false and a route refers to a
non-existent cluster, the route table will load and the router filter will return a 404 if the
route is selected at runtime. This setting defaults to true if the route table is statically
defined via the :ref:`route_config <config_http_conn_man_route_config>` option. This setting
defaults to false if the route table is loaded dynamically via the :ref:`rds
<config_http_conn_man_rds_option>` option. Users may wish to override the default behavior in
certain cases (for example when using :ref:`cds <config_cluster_manager_cds>` with a static
route table).
:ref:`virtual_hosts <config_http_conn_man_route_table_vhost>`
*(required, array)* An array of virtual hosts that make up the route table.
internal_only_headers
*(optional, array)* Optionally specifies a list of HTTP headers that the connection manager
will consider to be internal only. If they are found on external requests they will be cleaned
prior to filter invocation. See :ref:`config_http_conn_man_headers_x-envoy-internal` for more
information. Headers are specified in the following form:
.. code-block:: json
["header1", "header2"]
response_headers_to_add
*(optional, array)* Optionally specifies a list of HTTP headers that should be added to each
response that the connection manager encodes. Headers are specified in the following form:
.. code-block:: json
[
{"key": "header1", "value": "value1"},
{"key": "header2", "value": "value2"}
]
response_headers_to_remove
*(optional, array)* Optionally specifies a list of HTTP headers that should be removed from each
response that the connection manager encodes. Headers are specified in the following form:
.. code-block:: json
["header1", "header2"]
.. _config_http_conn_man_route_table_add_req_headers:
request_headers_to_add
*(optional, array)* Specifies a list of HTTP headers that should be added to each
request forwarded by the HTTP connection manager. Headers are specified in the following form:
.. code-block:: json
[
{"key": "header1", "value": "value1"},
{"key": "header2", "value": "value2"}
]
*Note:* In the presence of duplicate header keys,
:ref:`precedence rules <config_http_conn_man_route_table_route_add_req_headers>` apply.
.. toctree::
:hidden:
vhost
route
vcluster
rate_limits
route_matching
traffic_splitting

@ -0,0 +1,14 @@
.. _config_http_conn_man_route_table_route_matching:
Route matching
==============
When Envoy matches a route, it uses the following procedure:
#. The HTTP request's *host* or *:authority* header is matched to a :ref:`virtual host
<config_http_conn_man_route_table_vhost>`.
#. Each :ref:`route entry <config_http_conn_man_route_table_route>` in the virtual host is checked,
*in order*. If there is a match, the route is used and no further route checks are made.
#. Independently, each :ref:`virtual cluster <config_http_conn_man_route_table_vcluster>` in the
virtual host is checked, *in order*. If there is a match, the virtual cluster is used and no
further virtual cluster checks are made.
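For example, in the following sketch (cluster names are illustrative), a request for */api/users*
is routed to *generic_service* by the first entry; the second, more specific entry is never
evaluated because route checks stop at the first match:

.. code-block:: json

  {
    "virtual_hosts": [
      {
        "name": "example",
        "domains": ["*"],
        "routes": [
          {"prefix": "/api", "cluster": "generic_service"},
          {"prefix": "/api/users", "cluster": "users_service"}
        ]
      }
    ]
  }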

@ -0,0 +1,136 @@
.. _config_http_conn_man_route_table_traffic_splitting:
Traffic Shifting/Splitting
===========================================
.. contents::
:local:
Envoy's router can split traffic to a route in a virtual host across
two or more upstream clusters. There are two common use cases.
1. Version upgrades: traffic to a route is shifted gradually
from one cluster to another. The
:ref:`traffic shifting <config_http_conn_man_route_table_traffic_splitting_shift>`
section describes this scenario in more detail.
2. A/B testing or multivariate testing: ``two or more versions`` of
the same service are tested simultaneously. The traffic to the route has to
be *split* between clusters running different versions of the same
service. The
:ref:`traffic splitting <config_http_conn_man_route_table_traffic_splitting_split>`
section describes this scenario in more detail.
.. _config_http_conn_man_route_table_traffic_splitting_shift:
Traffic shifting between two upstreams
--------------------------------------
The :ref:`runtime <config_http_conn_man_route_table_route_runtime>` object
in the route configuration determines the probability of selecting a
particular route (and hence its cluster). By using the runtime
configuration, traffic to a particular route in a virtual host can be
gradually shifted from one cluster to another. Consider the following
example configuration, where two versions ``helloworld_v1`` and
``helloworld_v2`` of a service named ``helloworld`` are declared in the
Envoy configuration file.
.. code-block:: json
{
"route_config": {
"virtual_hosts": [
{
"name": "helloworld",
"domains": ["*"],
"routes": [
{
"prefix": "/",
"cluster": "helloworld_v1",
"runtime": {
"key": "routing.traffic_shift.helloworld",
"default": 50
}
},
{
"prefix": "/",
"cluster": "helloworld_v2",
}
]
}
]
}
}
Envoy matches routes with a :ref:`first match <config_http_conn_man_route_table_route_matching>` policy.
If the route has a runtime object, the request will be additionally matched based on the runtime
:ref:`value <config_http_conn_man_route_table_route_runtime_default>`
(or the default, if no value is specified). Thus, by placing routes
back-to-back in the above example and specifying a runtime object in the
first route, traffic shifting can be accomplished by changing the runtime
value. The following is the approximate sequence of actions required to
accomplish the task.
1. In the beginning, set ``routing.traffic_shift.helloworld`` to ``100``,
so that all requests to the ``helloworld`` virtual host would match with
the v1 route and be served by the ``helloworld_v1`` cluster.
2. To start shifting traffic to ``helloworld_v2`` cluster, set
``routing.traffic_shift.helloworld`` to values ``0 < x < 100``. For
instance at ``90``, 1 out of every 10 requests to the ``helloworld``
virtual host will not match the v1 route and will fall through to the v2
route.
3. Gradually decrease the value set in ``routing.traffic_shift.helloworld``
so that a larger percentage of requests match the v2 route.
4. When ``routing.traffic_shift.helloworld`` is set to ``0``, no requests
to the ``helloworld`` virtual host will match the v1 route. All
traffic would now fall through to the v2 route and be served by the
``helloworld_v2`` cluster.
.. _config_http_conn_man_route_table_traffic_splitting_split:
Traffic splitting across multiple upstreams
-------------------------------------------
Consider the ``helloworld`` example again, now with three versions (v1, v2 and
v3) instead of two. To split traffic evenly across the three versions
(i.e., ``33%, 33%, 34%``), the ``weighted_clusters`` option can be used to
specify the weight for each upstream cluster.
Unlike the previous example, a **single** :ref:`route
<config_http_conn_man_route_table_route>` entry is sufficient. The
:ref:`weighted_clusters <config_http_conn_man_route_table_route_weighted_clusters>`
configuration block in a route can be used to specify multiple upstream clusters
along with weights that indicate the **percentage** of traffic to be sent
to each upstream cluster.
.. code-block:: json
{
"route_config": {
"virtual_hosts": [
{
"name": "helloworld",
"domains": ["*"],
"routes": [
{
"prefix": "/",
"weighted_clusters": {
"runtime_key_prefix" : "routing.traffic_split.helloworld",
"clusters" : [
{ "name" : "helloworld_v1", "weight" : 33 },
{ "name" : "helloworld_v2", "weight" : 33 },
{ "name" : "helloworld_v3", "weight" : 34 }
]
}
}
]
}
]
}
}
The weights assigned to each cluster can be dynamically adjusted using the
following runtime variables: ``routing.traffic_split.helloworld.helloworld_v1``,
``routing.traffic_split.helloworld.helloworld_v2`` and
``routing.traffic_split.helloworld.helloworld_v3``.

@ -0,0 +1,47 @@
.. _config_http_conn_man_route_table_vcluster:
Virtual cluster
===============
A virtual cluster is a way of specifying a regex matching rule against certain important endpoints
such that statistics are generated explicitly for the matched requests. The reason this is useful is
that when doing prefix/path matching Envoy does not always know what the application considers to
be an endpoint. Thus, it's impossible for Envoy to generically emit per endpoint statistics.
However, often systems have highly critical endpoints that they wish to get "perfect" statistics on.
Virtual cluster statistics are perfect in the sense that they are emitted on the downstream side
such that they include network level failures.
.. note::
Virtual clusters are a useful tool, but we do not recommend setting up a virtual cluster for
every application endpoint. This is not easily maintainable, and the matching and
statistics output are not free.
.. code-block:: json
{
"pattern": "...",
"name": "...",
"method": "..."
}
pattern
*(required, string)* Specifies a regex pattern to use for matching requests. The entire path of the request
must match the regex. The regex grammar used is defined `here <http://en.cppreference.com/w/cpp/regex/ecmascript>`_.
name
*(required, string)* Specifies the name of the virtual cluster. The virtual cluster name as well
as the virtual host name are used when emitting statistics. The statistics are emitted by the
router filter and are documented :ref:`here <config_http_filters_router_stats>`.
method
*(optional, string)* Optionally specifies the HTTP method to match on. For example *GET*, *PUT*,
etc.
Examples:
* The regex */rides/\d+* matches the path */rides/0*
* The regex */rides/\d+* matches the path */rides/123*
* The regex */rides/\d+* does not match the path */rides/123/456*
Documentation for :ref:`virtual cluster statistics <config_http_filters_router_stats>`.
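As an example, a sketch of a virtual cluster matching the ride retrieval endpoint from the
examples above (the name is illustrative; note that the regex backslash must be escaped in JSON):

.. code-block:: json

  {
    "pattern": "/rides/\\d+",
    "name": "get_ride",
    "method": "GET"
  }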

@ -0,0 +1,84 @@
.. _config_http_conn_man_route_table_vhost:
Virtual host
============
The top level element in the routing configuration is a virtual host. Each virtual host has
a logical name as well as a set of domains that get routed to it based on the incoming request's
host header. This allows a single listener to service multiple top level domain path trees. Once a
virtual host is selected based on the domain, the routes are processed in order to see which
upstream cluster to route to or whether to perform a redirect.
.. code-block:: json
{
"name": "...",
"domains": [],
"routes": [],
"require_ssl": "...",
"virtual_clusters": [],
"rate_limits": [],
"request_headers_to_add": []
}
name
*(required, string)* The logical name of the virtual host. This is used when emitting certain
statistics but is not relevant for forwarding. By default, the maximum length of the name is
limited to 60 characters. This limit can be increased by setting the
:option:`--max-obj-name-len` command line argument to the desired value.
domains
*(required, array)* A list of domains (host/authority header) that will be matched to this
virtual host. Wildcard hosts are supported in the form of "\*.foo.com" or "\*-bar.foo.com".
Note that the wildcard will not match the empty string. e.g. "\*-bar.foo.com" will match
"baz-bar.foo.com" but not "-bar.foo.com". Additionally, a special entry "\*" is allowed
which will match any host/authority header. Only a single virtual host in the entire route
configuration can match on "\*". A domain must be unique across all virtual hosts or the config
will fail to load.
:ref:`routes <config_http_conn_man_route_table_route>`
*(required, array)* The list of routes that will be matched, in order, for incoming requests.
The first route that matches will be used.
:ref:`cors <config_http_filters_cors>`
*(optional, object)* Specifies the virtual host's CORS policy.
.. _config_http_conn_man_route_table_vhost_require_ssl:
require_ssl
*(optional, string)* Specifies the type of TLS enforcement the virtual host expects. Possible
values are:
all
All requests must use TLS. If a request is not using TLS, a 302 redirect will be sent telling
the client to use HTTPS.
external_only
External requests must use TLS. If a request is external and it is not using TLS, a 302 redirect
will be sent telling the client to use HTTPS.
If this option is not specified, there is no TLS requirement for the virtual host.
:ref:`virtual_clusters <config_http_conn_man_route_table_vcluster>`
*(optional, array)* A list of virtual clusters defined for this virtual host. Virtual clusters
are used for additional statistics gathering.
:ref:`rate_limits <config_http_conn_man_route_table_rate_limit_config>`
*(optional, array)* Specifies a set of rate limit configurations that will be applied to the
virtual host.
.. _config_http_conn_man_route_table_vhost_add_req_headers:
request_headers_to_add
*(optional, array)* Specifies a list of HTTP headers that should be added to each
request handled by this virtual host. Headers are specified in the following form:
.. code-block:: json
[
{"key": "header1", "value": "value1"},
{"key": "header2", "value": "value2"}
]
*Note:* In the presence of duplicate header keys,
:ref:`precedence rules <config_http_conn_man_route_table_route_add_req_headers>` apply.
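Putting several of these fields together, the following sketch describes a virtual host that
requires TLS for all requests and adds an illustrative header (all names are illustrative):

.. code-block:: json

  {
    "name": "example",
    "domains": ["example.com", "*.example.com"],
    "require_ssl": "all",
    "routes": [
      {"prefix": "/", "cluster": "example_service"}
    ],
    "request_headers_to_add": [
      {"key": "x-virtual-host", "value": "example"}
    ]
  }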

@ -0,0 +1,25 @@
.. _config_http_conn_man_runtime:
Runtime
=======
The HTTP connection manager supports the following runtime settings:
.. _config_http_conn_man_runtime_client_enabled:
tracing.client_enabled
% of requests that will be force traced if the
:ref:`config_http_conn_man_headers_x-client-trace-id` header is set. Defaults to 100.
.. _config_http_conn_man_runtime_global_enabled:
tracing.global_enabled
% of requests that will be traced after all other checks have been applied (force tracing,
sampling, etc.). Defaults to 100.
.. _config_http_conn_man_runtime_random_sampling:
tracing.random_sampling
% of requests that will be randomly traced. See :ref:`here <arch_overview_tracing>` for more
information. This runtime control is specified in the range 0-10000 and defaults to 10000. Thus,
trace sampling can be specified in 0.01% increments.

@ -0,0 +1,85 @@
.. _config_http_conn_man_stats:
Statistics
==========
Every connection manager has a statistics tree rooted at *http.<stat_prefix>.* with the following
statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
downstream_cx_total, Counter, Total connections
downstream_cx_ssl_total, Counter, Total TLS connections
downstream_cx_http1_total, Counter, Total HTTP/1.1 connections
downstream_cx_websocket_total, Counter, Total WebSocket connections
downstream_cx_http2_total, Counter, Total HTTP/2 connections
downstream_cx_destroy, Counter, Total connections destroyed
downstream_cx_destroy_remote, Counter, Total connections destroyed due to remote close
downstream_cx_destroy_local, Counter, Total connections destroyed due to local close
downstream_cx_destroy_active_rq, Counter, Total connections destroyed with 1+ active request
downstream_cx_destroy_local_active_rq, Counter, Total connections destroyed locally with 1+ active request
downstream_cx_destroy_remote_active_rq, Counter, Total connections destroyed remotely with 1+ active request
downstream_cx_active, Gauge, Total active connections
downstream_cx_ssl_active, Gauge, Total active TLS connections
downstream_cx_http1_active, Gauge, Total active HTTP/1.1 connections
downstream_cx_websocket_active, Gauge, Total active WebSocket connections
downstream_cx_http2_active, Gauge, Total active HTTP/2 connections
downstream_cx_protocol_error, Counter, Total protocol errors
downstream_cx_length_ms, Histogram, Connection length milliseconds
downstream_cx_rx_bytes_total, Counter, Total bytes received
downstream_cx_rx_bytes_buffered, Gauge, Total received bytes currently buffered
downstream_cx_tx_bytes_total, Counter, Total bytes sent
downstream_cx_tx_bytes_buffered, Gauge, Total sent bytes currently buffered
downstream_cx_drain_close, Counter, Total connections closed due to draining
downstream_cx_idle_timeout, Counter, Total connections closed due to idle timeout
downstream_flow_control_paused_reading_total, Counter, Total number of times reads were disabled due to flow control
downstream_flow_control_resumed_reading_total, Counter, Total number of times reads were enabled on the connection due to flow control
downstream_rq_total, Counter, Total requests
downstream_rq_http1_total, Counter, Total HTTP/1.1 requests
downstream_rq_http2_total, Counter, Total HTTP/2 requests
downstream_rq_active, Gauge, Total active requests
downstream_rq_response_before_rq_complete, Counter, Total responses sent before the request was complete
downstream_rq_rx_reset, Counter, Total request resets received
downstream_rq_tx_reset, Counter, Total request resets sent
downstream_rq_non_relative_path, Counter, Total requests with a non-relative HTTP path
downstream_rq_too_large, Counter, Total requests resulting in a 413 due to buffering an overly large body.
downstream_rq_2xx, Counter, Total 2xx responses
downstream_rq_3xx, Counter, Total 3xx responses
downstream_rq_4xx, Counter, Total 4xx responses
downstream_rq_5xx, Counter, Total 5xx responses
downstream_rq_ws_on_non_ws_route, Counter, Total WebSocket upgrade requests rejected by non WebSocket routes
downstream_rq_time, Histogram, Request time milliseconds
rs_too_large, Counter, Total response errors due to buffering an overly large body.
Per user agent statistics
-------------------------
Additional per user agent statistics are rooted at *http.<stat_prefix>.user_agent.<user_agent>.*
Currently Envoy matches user agent for both iOS (*ios*) and Android (*android*) and produces
the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
downstream_cx_total, Counter, Total connections
downstream_cx_destroy_remote_active_rq, Counter, Total connections destroyed remotely with 1+ active requests
downstream_rq_total, Counter, Total requests
Per listener statistics
-----------------------
Additional per listener statistics are rooted at *listener.<address>.http.<stat_prefix>.* with the
following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
downstream_rq_2xx, Counter, Total 2xx responses
downstream_rq_3xx, Counter, Total 3xx responses
downstream_rq_4xx, Counter, Total 4xx responses
downstream_rq_5xx, Counter, Total 5xx responses

@ -0,0 +1,23 @@
.. _config_http_conn_man_tracing:
Tracing
=======
.. code-block:: json
{
"tracing": {
"operation_name": "...",
"request_headers_for_tags": []
}
}
operation_name
*(required, string)* Span name will be derived from operation_name. "ingress" and "egress"
are the only supported values.
request_headers_for_tags
*(optional, array)* A list of header names used to create tags for the active span.
The header name is used to populate the tag name, and the header value is used to populate the tag value.
The tag is created if the specified header name is present in the request's headers.
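For example, a filled-in sketch for an ingress connection manager that tags spans with the value
of an assumed header:

.. code-block:: json

  {
    "tracing": {
      "operation_name": "ingress",
      "request_headers_for_tags": ["x-user-id"]
    }
  }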

@ -0,0 +1,38 @@
.. _config_http_filters_buffer:
Buffer
======
The buffer filter is used to stop filter iteration and wait for a fully buffered complete request.
This is useful in different situations including protecting some applications from having to deal
with partial requests and high network latency.
.. code-block:: json
{
"name": "buffer",
"config": {
"max_request_bytes": "...",
"max_request_time_s": "..."
}
}
max_request_bytes
*(required, integer)* The maximum request size that the filter will buffer before the connection manager
will stop buffering and return a 413 response.
max_request_time_s
*(required, integer)* The maximum amount of time that the filter will wait for a complete request
before returning a 408 response.
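For example, a sketch that buffers up to 5 MiB per request and waits at most 60 seconds for a
complete request (both values are illustrative):

.. code-block:: json

  {
    "name": "buffer",
    "config": {
      "max_request_bytes": 5242880,
      "max_request_time_s": 60
    }
  }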
Statistics
----------
The buffer filter outputs statistics in the *http.<stat_prefix>.buffer.* namespace. The :ref:`stat
prefix <config_http_conn_man_stat_prefix>` comes from the owning HTTP connection manager.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
rq_timeout, Counter, Total requests that timed out waiting for a full request

@ -0,0 +1,65 @@
.. _config_http_filters_cors:
CORS filter
====================
This is a filter which handles Cross-Origin Resource Sharing requests based on route or virtual host settings.
For the meaning of the headers please refer to the pages below.
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
- https://www.w3.org/TR/cors/
.. code-block:: json
{
"name": "cors",
"config": {}
}
Settings
--------
Settings on a route take precedence over settings on the virtual host.
.. code-block:: json
{
"cors": {
"enabled": false,
"allow_origin": ["http://foo.example"],
"allow_methods": "POST, GET, OPTIONS",
"allow_headers": "Content-Type",
"allow_credentials": false,
"expose_headers": "X-Custom-Header",
"max_age": "86400"
}
}
enabled
*(optional, boolean)* Defaults to true. Setting *enabled* to false on a route disables CORS
for this route only. The setting has no effect on a virtual host.
allow_origin
*(optional, array)* The origins that will be allowed to make CORS requests.
Wildcard "\*" will allow any origin.
allow_methods
*(optional, string)* The content for the *access-control-allow-methods* header.
Comma separated list of HTTP methods.
allow_headers
*(optional, string)* The content for the *access-control-allow-headers* header.
Comma separated list of HTTP headers.
allow_credentials
*(optional, boolean)* Whether the resource allows credentials.
expose_headers
*(optional, string)* The content for the *access-control-expose-headers* header.
Comma separated list of HTTP headers.
max_age
*(optional, string)* The content for the *access-control-max-age* header.
Value in seconds for how long the response to the preflight request can be cached.

@ -0,0 +1,82 @@
.. _config_http_filters_dynamo:
DynamoDB
========
DynamoDB :ref:`architecture overview <arch_overview_dynamo>`.
.. code-block:: json
{
"name": "http_dynamo_filter",
"config": {}
}
name
*(required, string)* Filter name. The only supported value is *http_dynamo_filter*.
config
*(required, object)* The filter does not use any configuration.
Statistics
----------
The DynamoDB filter outputs statistics in the *http.<stat_prefix>.dynamodb.* namespace. The
:ref:`stat prefix <config_http_conn_man_stat_prefix>` comes from the owning HTTP connection manager.
Per operation stats can be found in the *http.<stat_prefix>.dynamodb.operation.<operation_name>.*
namespace.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
upstream_rq_total, Counter, Total number of requests with <operation_name>
upstream_rq_time, Histogram, Time spent on <operation_name>
upstream_rq_total_xxx, Counter, Total number of requests with <operation_name> per response code (503/2xx/etc)
upstream_rq_time_xxx, Histogram, Time spent on <operation_name> per response code (400/3xx/etc)
Per table stats can be found in the *http.<stat_prefix>.dynamodb.table.<table_name>.* namespace.
Most of the operations to DynamoDB involve a single table, but BatchGetItem and BatchWriteItem can
include several tables. In that case, Envoy tracks per table stats only if the same table is used
in all operations from the batch.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
upstream_rq_total, Counter, Total number of requests on <table_name> table
upstream_rq_time, Histogram, Time spent on <table_name> table
upstream_rq_total_xxx, Counter, Total number of requests on <table_name> table per response code (503/2xx/etc)
upstream_rq_time_xxx, Histogram, Time spent on <table_name> table per response code (400/3xx/etc)
*Disclaimer: Please note that this is a pre-release Amazon DynamoDB feature that is not yet widely available.*
Per partition and operation stats can be found in the *http.<stat_prefix>.dynamodb.table.<table_name>.*
namespace. For batch operations, Envoy tracks per partition and operation stats only if it is the same
table used in all operations.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
capacity.<operation_name>.__partition_id=<last_seven_characters_from_partition_id>, Counter, Total number of capacity for <operation_name> on <table_name> table for a given <partition_id>
Additional detailed stats:
* For 4xx responses and partial batch operation failures, the total number of failures for a given
table and failure type are tracked in the *http.<stat_prefix>.dynamodb.error.<table_name>.* namespace.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
<error_type>, Counter, Total number of specific <error_type> for a given <table_name>
BatchFailureUnprocessedKeys, Counter, Total number of partial batch failures for a given <table_name>
Runtime
-------
The DynamoDB filter supports the following runtime settings:
dynamodb.filter_enabled
The % of requests for which the filter is enabled. Default is 100%.

@ -0,0 +1,177 @@
.. _config_http_filters_fault_injection:
Fault Injection
===============
The fault injection filter can be used to test the resiliency of
microservices to different forms of failures. The filter can be used to
inject delays and abort requests with user-specified error codes, thereby
providing the ability to stage different failure scenarios such as service
failures, service overloads, high network latency, network partitions,
etc. Fault injection can be limited to a specific set of requests based on
the (destination) upstream cluster of a request and/or a set of pre-defined
request headers.
The scope of failures is restricted to those that are observable by an
application communicating over the network. CPU and disk failures on the
local host cannot be emulated.
Currently, the fault injection filter has the following limitations:
* Abort codes are restricted to HTTP status codes only
* Delays are restricted to a fixed duration.
Future versions will include support for restricting faults to specific
routes, injecting *gRPC* and *HTTP/2* specific error codes and delay
durations based on distributions.
Configuration
-------------
*Note: The fault injection filter must be inserted before any other filter,
including the router filter.*
.. code-block:: json
{
"name" : "fault",
"config" : {
"abort" : "{...}",
"delay" : "{...}",
"upstream_cluster" : "...",
"headers" : [],
"downstream_nodes": []
}
}
:ref:`abort <config_http_filters_fault_injection_abort>`
*(sometimes required, object)* If specified, the filter will abort requests based on
the values in the object. At least *abort* or *delay* must be specified.
:ref:`delay <config_http_filters_fault_injection_delay>`
*(sometimes required, object)* If specified, the filter will inject delays based on the values
in the object. At least *abort* or *delay* must be specified.
upstream_cluster:
*(optional, string)* Specifies the name of the (destination) upstream
cluster that the filter should match on. Fault injection will be
restricted to requests bound to the specific upstream cluster.
:ref:`headers <config_http_conn_man_route_table_route_headers>`
*(optional, array)* Specifies a set of headers that the filter should match on. The fault
injection filter can be applied selectively to requests that match a set of headers specified in
the fault filter config. The chances of actual fault injection further depend on the values of
*abort_percent* and *fixed_delay_percent* parameters. The filter will check the request's headers
against all the specified headers in the filter config. A match will happen if all the headers in
the config are present in the request with the same values (or based on presence if the ``value``
field is not in the config).
TODO: allow runtime configuration on per entry basis for headers match.
downstream_nodes:
*(optional, array)* Faults are injected for the specified list of downstream hosts. If this setting is
not set, faults are injected for all downstream nodes. Downstream node name is taken from
:ref:`the HTTP x-envoy-downstream-service-node <config_http_conn_man_headers_downstream-service-node>`
header and compared against downstream_nodes list.
The abort and delay blocks can be omitted. If they are not specified in the
configuration file, their respective values will be obtained from the
runtime.
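For example, a sketch that aborts 10% of matching requests with a 503 and delays another 5% by
two seconds, restricted to an assumed *backend* upstream cluster:

.. code-block:: json

  {
    "name": "fault",
    "config": {
      "abort": {
        "abort_percent": 10,
        "http_status": 503
      },
      "delay": {
        "type": "fixed",
        "fixed_delay_percent": 5,
        "fixed_duration_ms": 2000
      },
      "upstream_cluster": "backend"
    }
  }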
.. _config_http_filters_fault_injection_abort:
Abort
-----
.. code-block:: json
{
"abort_percent" : "...",
"http_status" : "..."
}
abort_percent
*(required, integer)* The percentage of requests that
should be aborted with the specified *http_status* code. Valid values
range from 0 to 100.
http_status
*(required, integer)* The HTTP status code that will be used as the
response code for the request being aborted.
.. _config_http_filters_fault_injection_delay:
Delay
-----
.. code-block:: json
{
"type" : "...",
"fixed_delay_percent" : "...",
"fixed_duration_ms" : "..."
}
type:
*(required, string)* Specifies the type of delay being
injected. Currently only *fixed* delay type (step function) is supported.
fixed_delay_percent:
*(required, integer)* The percentage of requests that will
be delayed for the duration specified by *fixed_duration_ms*. Valid
values range from 0 to 100.
fixed_duration_ms:
*(required, integer)* The delay duration in milliseconds. Must be greater than 0.
Runtime
-------
The HTTP fault injection filter supports the following global runtime settings:
fault.http.abort.abort_percent
% of requests that will be aborted if the headers match. Defaults to the
*abort_percent* specified in config. If the config does not contain an
*abort* block, then *abort_percent* defaults to 0.
fault.http.abort.http_status
HTTP status code that will be used as the response code for requests that are
aborted if the headers match. Defaults to the HTTP status code specified
in the config. If the config does not contain an *abort* block, then
*http_status* defaults to 0.
fault.http.delay.fixed_delay_percent
% of requests that will be delayed if the headers match. Defaults to the
*fixed_delay_percent* specified in the config or 0 otherwise.
fault.http.delay.fixed_duration_ms
The delay duration in milliseconds. If not specified, the
*fixed_duration_ms* specified in the config will be used. If this field
is missing from both the runtime and the config, no delays will be
injected.
*Note*, fault filter runtime settings for the specific downstream cluster
override the default ones if present. The following are downstream specific
runtime keys:
* fault.http.<downstream-cluster>.abort.abort_percent
* fault.http.<downstream-cluster>.abort.http_status
* fault.http.<downstream-cluster>.delay.fixed_delay_percent
* fault.http.<downstream-cluster>.delay.fixed_duration_ms
Downstream cluster name is taken from
:ref:`the HTTP x-envoy-downstream-service-cluster <config_http_conn_man_headers_downstream-service-cluster>`
header. If these settings are not found in the runtime, they default to the global runtime
settings, which in turn default to the config settings.
Statistics
----------
The fault filter outputs statistics in the *http.<stat_prefix>.fault.* namespace. The :ref:`stat
prefix <config_http_conn_man_stat_prefix>` comes from the owning HTTP connection manager.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
delays_injected, Counter, Total requests that were delayed
aborts_injected, Counter, Total requests that were aborted
<downstream-cluster>.delays_injected, Counter, Total delayed requests for the given downstream cluster
<downstream-cluster>.aborts_injected, Counter, Total aborted requests for the given downstream cluster

@ -0,0 +1,55 @@
.. _config_http_filters_grpc_bridge:
gRPC HTTP/1.1 bridge
====================
gRPC :ref:`architecture overview <arch_overview_grpc>`.
This is a simple filter which enables the bridging of an HTTP/1.1 client which does not support
response trailers to a compliant gRPC server. It works by doing the following:
* When a request is sent, the filter sees if the connection is HTTP/1.1 and the request content type
is *application/grpc*.
* If so, when the response is received, the filter buffers it and waits for trailers and then checks the
*grpc-status* code. If it is not zero, the filter switches the HTTP response code to 503. It also copies
the *grpc-status* and *grpc-message* trailers into the response headers so that the client can look
at them if it wishes.
* The client should send HTTP/1.1 requests that translate to the following pseudo headers:
* *\:method*: POST
* *\:path*: <gRPC-METHOD-NAME>
* *content-type*: application/grpc
* The body should be the serialized gRPC body which is:
* 1 byte of zero (not compressed).
* network order 4 bytes of proto message length.
* serialized proto message.
* Because this scheme must buffer the response to look for the *grpc-status* trailer it will only
work with unary gRPC APIs.
More info: http://www.grpc.io/docs/guides/wire.html
This filter also collects stats for all gRPC requests that transit, even if those requests are
normal gRPC requests over HTTP/2.
.. code-block:: json
{
"name": "grpc_http1_bridge",
"config": {}
}
Statistics
----------
The filter emits statistics in the *cluster.<route target cluster>.grpc.* namespace.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
<grpc service>.<grpc method>.success, Counter, Total successful service/method calls
<grpc service>.<grpc method>.failure, Counter, Total failed service/method calls
<grpc service>.<grpc method>.total, Counter, Total service/method calls

@ -0,0 +1,85 @@
.. _config_http_filters_grpc_json_transcoder:
gRPC-JSON transcoder filter
===========================
gRPC :ref:`architecture overview <arch_overview_grpc>`.
This is a filter which allows a RESTful JSON API client to send requests to Envoy over HTTP
and get proxied to a gRPC service. The HTTP mapping for the gRPC service has to be defined by
`custom options <https://cloud.google.com/service-management/reference/rpc/google.api#http>`_.
Configure gRPC-JSON transcoder
------------------------------
The config for the filter requires the descriptor file as well as a list of the gRPC
services to be transcoded.
.. code-block:: json
{
"name": "grpc_json_transcoder",
"config": {
"proto_descriptor": "proto.pb",
"services": ["grpc.service.Service"],
"print_options": {
"add_whitespace": false,
"always_print_primitive_fields": false,
"always_print_enums_as_ints": false,
"preserve_proto_field_names": false
}
}
}
proto_descriptor
*(required, string)* Supplies the binary protobuf descriptor set for the gRPC services.
The descriptor set has to include all of the types that are used in the services. Make sure
to use the ``--include_import`` option for ``protoc``.
To generate a protobuf descriptor set for the gRPC service, you'll also need to clone the
googleapis repository from GitHub before running protoc, as you'll need annotations.proto
in your include path.
.. code-block:: bash
git clone https://github.com/googleapis/googleapis
GOOGLEAPIS_DIR=<your-local-googleapis-folder>
Then run protoc to generate the descriptor set from bookstore.proto:
.. code-block:: bash
protoc -I${GOOGLEAPIS_DIR} -I. --include_imports --include_source_info \
--descriptor_set_out=proto.pb test/proto/bookstore.proto
If you have more than one proto source file, you can pass all of them in one command.
services
*(required, array)* A list of strings that supplies the service names that the
transcoder will translate. If the service name doesn't exist in ``proto_descriptor``, Envoy
will fail at startup. The ``proto_descriptor`` may contain more services than the service names
specified here, but they won't be translated.
print_options
*(optional, object)* Control options for response json. These options are passed directly to
`JsonPrintOptions <https://developers.google.com/protocol-buffers/docs/reference/cpp/
google.protobuf.util.json_util#JsonPrintOptions>`_. Valid options are:
add_whitespace
*(optional, boolean)* Whether to add spaces, line breaks and indentation to make the JSON
output easy to read. Defaults to false.
always_print_primitive_fields
*(optional, boolean)* Whether to always print primitive fields. By default primitive fields
with default values will be omitted in JSON output. For example, an int32 field set to 0
will be omitted. Setting this flag to true will override the default behavior and print primitive
fields regardless of their values. Defaults to false.
always_print_enums_as_ints
*(optional, boolean)* Whether to always print enums as ints. By default they are rendered as
strings. Defaults to false.
preserve_proto_field_names
*(optional, boolean)* Whether to preserve proto field names. By default protobuf will generate
JSON field names using the ``json_name`` option, or lower camel case, in that order. Setting this
flag will preserve the original field names. Defaults to false.

@ -0,0 +1,16 @@
.. _config_http_filters_grpc_web:
gRPC-Web filter
====================
gRPC :ref:`architecture overview <arch_overview_grpc>`.
This is a filter which enables the bridging of a gRPC-Web client to a compliant gRPC server by
following https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md.
.. code-block:: json
{
"name": "grpc_web",
"config": {}
}

@ -0,0 +1,35 @@
.. _config_http_filters_health_check:
Health check
============
Health check filter :ref:`architecture overview <arch_overview_health_checking_filter>`.
.. code-block:: json
{
"name": "health_check",
"config": {
"pass_through_mode": "...",
"endpoint": "...",
"cache_time_ms": "...",
}
}
Note that the filter will automatically fail health checks and set the
:ref:`x-envoy-immediate-health-check-fail
<config_http_filters_router_x-envoy-immediate-health-check-fail>` header if the
:ref:`/healthcheck/fail <operations_admin_interface_healthcheck_fail>` admin endpoint has been
called. (The :ref:`/healthcheck/ok <operations_admin_interface_healthcheck_ok>` admin endpoint
reverses this behavior).
pass_through_mode
*(required, boolean)* Specifies whether the filter operates in pass through mode or not.
endpoint
*(required, string)* Specifies the incoming HTTP endpoint that should be considered the
health check endpoint. For example */healthcheck*.
cache_time_ms
*(optional, integer)* If operating in pass through mode, the amount of time in milliseconds that
the filter should cache the upstream response.
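For example, a sketch of a pass through configuration that caches the upstream response for 2.5
seconds (the endpoint and cache time are illustrative):

.. code-block:: json

  {
    "name": "health_check",
    "config": {
      "pass_through_mode": true,
      "endpoint": "/healthcheck",
      "cache_time_ms": 2500
    }
  }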

@ -0,0 +1,20 @@
.. _config_http_filters:
HTTP filters
============
.. toctree::
:maxdepth: 2
buffer_filter
cors_filter
fault_filter
dynamodb_filter
grpc_http1_bridge_filter
grpc_json_transcoder_filter
grpc_web_filter
health_check_filter
ip_tagging_filter
rate_limit_filter
router_filter
lua_filter

@ -0,0 +1,47 @@
.. _config_http_filters_ip_tagging:
IP tagging filter
====================
This is an HTTP filter which enables Envoy to tag requests with extra information such as location, cloud source, and any
extra data. This is useful in preventing DDoS attacks.
**Note**: this filter is under active development, and currently does not perform any tagging on requests. In other
words, installing this filter is a no-op in the filter chain.
.. code-block:: json
{
"name": "ip_tagging",
"config": {
"request_type": "...",
"ip_tags": []
}
}
request_type
*(optional, string)* The type of requests the filter should apply to. The supported
types are *internal*, *external* or *both*. A request is considered internal if
:ref:`x-envoy-internal<config_http_conn_man_headers_x-envoy-internal>` is set to true. If
:ref:`x-envoy-internal<config_http_conn_man_headers_x-envoy-internal>` is not set or false, a
request is considered external. The filter defaults to *both*, and it will apply to all request
types.
ip_tags:
*(optional, array)* Specifies the list of IP tags to set for a request.
IP tags
-------
.. code-block:: json
{
"ip_tag_name": "...",
"ip_list": []
}
ip_tag_name:
*(required, string)* Specifies the IP tag name to apply.
ip_list:
*(required, list of strings)* A list of IP addresses and subnet masks that will be tagged with the ``ip_tag_name``. Both
IPv4 and IPv6 CIDR addresses are allowed here.
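For example, a sketch that tags external requests originating from an assumed set of address
ranges:

.. code-block:: json

  {
    "name": "ip_tagging",
    "config": {
      "request_type": "external",
      "ip_tags": [
        {
          "ip_tag_name": "example_ranges",
          "ip_list": ["10.0.0.0/8", "2001:db8::/32"]
        }
      ]
    }
  }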

@ -0,0 +1,353 @@
.. _config_http_filters_lua:
Lua
===
.. attention::
The Lua scripting HTTP filter is **experimental**. Use in production at your own risk. It is
being released for initial feedback on the exposed API and for further development, testing,
and verification. This warning will be removed when we feel that the filter has received enough
testing and API stability to call it generally production ready.
Overview
--------
The HTTP Lua filter allows `Lua <https://www.lua.org/>`_ scripts to be run during both the request
and response flows. `LuaJIT <http://luajit.org/>`_ is used as the runtime. Because of this, the
supported Lua version is mostly 5.1 with some 5.2 features. See the `LuaJIT documentation
<http://luajit.org/extensions.html>`_ for more details.
The filter only supports loading Lua code in-line in the configuration. If local filesystem code
is desired, a trivial in-line script can be used to load the rest of the code from the local
environment.
The design of the filter and Lua support at a high level is as follows:
* All Lua environments are :ref:`per worker thread <arch_overview_threading>`. This means that
there is no truly global data. Any globals created and populated at load time will be visible
from each worker thread in isolation. True global support may be added via an API in the future.
* All scripts are run as coroutines. This means that they are written in a synchronous style even
though they may perform complex asynchronous tasks. This makes the scripts substantially easier
to write. All network/async processing is performed by Envoy via a set of APIs. Envoy will
yield the script as appropriate and resume it when async tasks are complete.
* **Do not perform blocking operations from scripts.** It is critical for performance that
Envoy APIs are used for all IO.
Currently supported high level features
---------------------------------------
**NOTE:** It is expected that this list will expand over time as the filter is used in production.
The API surface has been kept small on purpose. The goal is to make scripts extremely simple and
safe to write. Very complex or high performance use cases are assumed to use the native C++ filter
API.
* Inspection of headers, body, and trailers while streaming in either the request flow, response
flow, or both.
* Modification of headers and trailers.
* Blocking and buffering the full request/response body for inspection.
* Performing an outbound async HTTP call to an upstream host. Such a call can be performed while
buffering body data so that when the call completes upstream headers can be modified.
* Performing a direct response and skipping further filter iteration. For example, a script
could make an upstream HTTP call for authentication, and then directly respond with a 403
response code.
Configuration
-------------
.. code-block:: json
{
"name": "lua",
"config": {
"inline_code": "..."
}
}
inline_code
*(required, string)* The Lua code that Envoy will execute. This can be a very small script that
further loads code from disk if desired. Note that if JSON configuration is used, the code must
be properly escaped. YAML configuration may be easier to read since YAML supports multi-line
strings so complex scripts can be easily expressed inline in the configuration.
Script examples
---------------
This section provides some concrete examples of Lua scripts as a more gentle introduction and quick
start. Please refer to the :ref:`stream handle API <config_http_filters_lua_stream_handle_api>` for
more details on the supported API.
.. code-block:: lua
-- Called on the request path.
function envoy_on_request(request_handle)
-- Wait for the entire request body and add a request header with the body size.
request_handle:headers():add("request_body_size", request_handle:body():length())
end
-- Called on the response path.
function envoy_on_response(response_handle)
-- Wait for the entire response body and add a response header with the body size.
response_handle:headers():add("response_body_size", response_handle:body():length())
-- Remove a response header named 'foo'
response_handle:headers():remove("foo")
end
.. code-block:: lua
function envoy_on_request(request_handle)
-- Make an HTTP call to an upstream host with the following headers, body, and timeout.
local headers, body = request_handle:httpCall(
"lua_cluster",
{
[":method"] = "POST",
[":path"] = "/",
[":authority"] = "lua_cluster"
},
"hello world",
5000)
-- Add information from the HTTP call into the headers that are about to be sent to the next
-- filter in the filter chain.
request_handle:headers():add("upstream_foo", headers["foo"])
request_handle:headers():add("upstream_body_size", #body)
end
.. code-block:: lua
function envoy_on_request(request_handle)
-- Make an HTTP call.
local headers, body = request_handle:httpCall(
"lua_cluster",
{
[":method"] = "POST",
[":path"] = "/",
[":authority"] = "lua_cluster"
},
"hello world",
5000)
-- Respond directly and set a header from the HTTP call. No further filter iteration
-- occurs.
request_handle:respond(
{[":status"] = "403",
["upstream_foo"] = headers["foo"]},
"nope")
end
.. _config_http_filters_lua_stream_handle_api:
Stream handle API
-----------------
When Envoy loads the script in the configuration, it looks for two global functions that the
script defines:
.. code-block:: lua
function envoy_on_request(request_handle)
end
function envoy_on_response(response_handle)
end
A script can define either or both of these functions. During the request path, Envoy will
run *envoy_on_request* as a coroutine, passing an API handle. During the response path, Envoy will
run *envoy_on_response* as a coroutine, passing an API handle.
.. attention::
It is critical that all interaction with Envoy occur through the passed stream handle. The stream
handle should not be assigned to any global variable and should not be used outside of the
coroutine. Envoy will fail your script if the handle is used incorrectly.
The following methods on the stream handle are supported:
headers()
^^^^^^^^^
.. code-block:: lua
headers = handle:headers()
Returns the stream's headers. The headers can be modified as long as they have not been sent to
the next filter in the filter chain. For example, they can be modified after an *httpCall()* or
after a *body()* call returns. The script will fail if the headers are modified in any other
situation.
Returns a :ref:`header object <config_http_filters_lua_header_wrapper>`.
body()
^^^^^^
.. code-block:: lua
body = handle:body()
Returns the stream's body. This call will cause Envoy to yield the script until the entire body
has been buffered. Note that all buffering must adhere to the flow control policies in place.
Envoy will not buffer more data than is allowed by the connection manager.
Returns a :ref:`buffer object <config_http_filters_lua_buffer_wrapper>`.
bodyChunks()
^^^^^^^^^^^^
.. code-block:: lua
iterator = handle:bodyChunks()
Returns an iterator that can be used to iterate through all received body chunks as they arrive.
Envoy will yield the script in between chunks, but *will not buffer* them. This can be used by
a script to inspect data as it is streaming by.
.. code-block:: lua
for chunk in request_handle:bodyChunks() do
request_handle:logTrace(tostring(chunk:length()))
end
Each chunk the iterator returns is a :ref:`buffer object <config_http_filters_lua_buffer_wrapper>`.
trailers()
^^^^^^^^^^
.. code-block:: lua
trailers = handle:trailers()
Returns the stream's trailers. May return nil if there are no trailers. The trailers may be
modified before they are sent to the next filter.
Returns a :ref:`header object <config_http_filters_lua_header_wrapper>`.
log*()
^^^^^^
.. code-block:: lua
handle:logTrace(message)
handle:logDebug(message)
handle:logInfo(message)
handle:logWarn(message)
handle:logErr(message)
handle:logCritical(message)
Logs a message using Envoy's application logging. *message* is a string to log.
httpCall()
^^^^^^^^^^
.. code-block:: lua
headers, body = handle:httpCall(cluster, headers, body, timeout)
Makes an HTTP call to an upstream host. Envoy will yield the script until the call completes or
has an error. *cluster* is a string which maps to a configured cluster manager cluster. *headers*
is a table of key/value pairs to send. Note that the *:method*, *:path*, and *:authority* headers
must be set. *body* is an optional string of body data to send. *timeout* is an integer that
specifies the call timeout in milliseconds.
Returns *headers* which is a table of response headers. Returns *body* which is the string response
body. May be nil if there is no body.
respond()
^^^^^^^^^^
.. code-block:: lua
handle:respond(headers, body)
Respond immediately and do not continue further filter iteration. This call is *only valid in
the request flow*. Additionally, a response is only possible if request headers have not yet been
passed to subsequent filters. Meaning, the following Lua code is invalid:
.. code-block:: lua
function envoy_on_request(request_handle)
for chunk in request_handle:bodyChunks() do
request_handle:respond(
{[":status"] = "100"},
"nope")
end
end
*headers* is a table of key/value pairs to send. Note that the *:status* header
must be set. *body* is a string and supplies the optional response body. May be nil.
.. _config_http_filters_lua_header_wrapper:
Header object API
-----------------
add()
^^^^^
.. code-block:: lua
headers:add(key, value)
Adds a header. *key* is a string that supplies the header key. *value* is a string that supplies
the header value.
get()
^^^^^
.. code-block:: lua
headers:get(key)
Gets a header. *key* is a string that supplies the header key. Returns a string that is the header
value or nil if there is no such header.
__pairs()
^^^^^^^^^
.. code-block:: lua
for key, value in pairs(headers) do
end
Iterates through every header. *key* is a string that supplies the header key. *value* is a string
that supplies the header value.
.. attention::
In the current implementation, headers cannot be modified during iteration. Additionally, if
it is desired to modify headers after iteration, the iteration must first be completed. That is,
do not use `break` or any other mechanism to exit the loop early. This may be relaxed in the future.
remove()
^^^^^^^^
.. code-block:: lua
headers:remove(key)
Removes a header. *key* supplies the header key to remove.
.. _config_http_filters_lua_buffer_wrapper:
Buffer API
----------
length()
^^^^^^^^^^
.. code-block:: lua
size = buffer:length()
Gets the size of the buffer in bytes. Returns an integer.
getBytes()
^^^^^^^^^^
.. code-block:: lua
buffer:getBytes(index, length)
Gets bytes from the buffer. By default, Envoy will not copy all buffer bytes to Lua; this call
causes the requested buffer segment to be copied. *index* is an integer and supplies the buffer
start index to copy. *length* is an integer and supplies the buffer length to copy. *index* +
*length* must be less than the buffer length.
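For example, a sketch that logs a small prefix of each streamed chunk (the 16-byte window is
arbitrary):
.. code-block:: lua

  function envoy_on_request(request_handle)
    for chunk in request_handle:bodyChunks() do
      -- Copy at most the first 16 bytes of each chunk. index + length
      -- must stay within the chunk's length.
      if chunk:length() > 16 then
        request_handle:logInfo(chunk:getBytes(0, 16))
      end
    end
  end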
@ -0,0 +1,79 @@
.. _config_http_filters_rate_limit:
Rate limit
==========
Global rate limiting :ref:`architecture overview <arch_overview_rate_limit>`.
The HTTP rate limit filter will call the rate limit service when the request's route or virtual host
has one or more :ref:`rate limit configurations<config_http_conn_man_route_table_route_rate_limits>`
that match the filter stage setting. The :ref:`route<config_http_conn_man_route_table_route_include_vh>`
can optionally include the virtual host rate limit configurations. More than one configuration can
apply to a request. Each configuration results in a descriptor being sent to the rate limit service.
If the rate limit service is called, and the response for any of the descriptors is over limit, a
429 response is returned.
.. code-block:: json
{
"name": "rate_limit",
"config": {
"domain": "...",
"stage": "...",
"request_type": "...",
"timeout_ms": "..."
}
}
domain
*(required, string)* The rate limit domain to use when calling the rate limit service.
stage
*(optional, integer)* Specifies the rate limit configurations to be applied with the same stage
number. If not set, the default stage number is 0.
**NOTE:** The filter supports stage numbers in the range 0 to 10, inclusive.
request_type
*(optional, string)* The type of requests the filter should apply to. The supported
types are *internal*, *external* or *both*. A request is considered internal if
:ref:`x-envoy-internal<config_http_conn_man_headers_x-envoy-internal>` is set to true. If
:ref:`x-envoy-internal<config_http_conn_man_headers_x-envoy-internal>` is not set or false, a
request is considered external. The filter defaults to *both*, and it will apply to all request
types.
timeout_ms
*(optional, integer)* The timeout in milliseconds for the rate limit service RPC. If not set,
this defaults to 20ms.
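For example, a hypothetical filled-in configuration (the domain value is illustrative and must
match the rate limit service's configuration):
.. code-block:: json

  {
    "name": "rate_limit",
    "config": {
      "domain": "envoy_example",
      "stage": 0,
      "request_type": "both",
      "timeout_ms": 20
    }
  }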
Statistics
----------
The rate limit filter outputs statistics in the *cluster.<route target cluster>.ratelimit.* namespace.
429 responses are emitted to the normal cluster :ref:`dynamic HTTP statistics
<config_cluster_manager_cluster_stats_dynamic_http>`.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
ok, Counter, Total under limit responses from the rate limit service
error, Counter, Total errors contacting the rate limit service
over_limit, Counter, Total over limit responses from the rate limit service
Runtime
-------
The HTTP rate limit filter supports the following runtime settings:
ratelimit.http_filter_enabled
% of requests that will call the rate limit service. Defaults to 100.
ratelimit.http_filter_enforcing
% of requests that will call the rate limit service and enforce the decision. Defaults to 100.
This can be used to test what would happen before fully enforcing the outcome.
ratelimit.<route_key>.http_filter_enabled
% of requests that will call the rate limit service for a given *route_key* specified in the
:ref:`rate limit configuration <config_http_conn_man_route_table_rate_limit_config>`. Defaults to 100.
@ -0,0 +1,307 @@
.. _config_http_filters_router:
Router
======
The router filter implements HTTP forwarding. It will be used in almost all HTTP proxy scenarios
that Envoy is deployed for. The filter's main job is to follow the instructions specified in the
configured :ref:`route table <config_http_conn_man_route_table>`. In addition to forwarding and
redirection, the filter also handles retry, statistics, etc.
.. code-block:: json
{
"name": "router",
"config": {
"dynamic_stats": "...",
"start_child_span": "..."
}
}
dynamic_stats
*(optional, boolean)* Whether the router generates :ref:`dynamic cluster statistics
<config_cluster_manager_cluster_stats_dynamic_http>`. Defaults to *true*. Can be disabled in high
performance scenarios.
.. _config_http_filters_router_start_child_span:
start_child_span
*(optional, boolean)* Whether to start a child :ref:`tracing <arch_overview_tracing>` span for
egress routed calls. This can be useful in scenarios where other filters (auth, ratelimit, etc.)
make outbound calls and have child spans rooted at the same ingress parent. Defaults to *false*.
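As an illustration, the following configuration simply spells out the documented defaults:
.. code-block:: json

  {
    "name": "router",
    "config": {
      "dynamic_stats": true,
      "start_child_span": false
    }
  }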
.. _config_http_filters_router_headers:
HTTP headers
------------
The router consumes and sets various HTTP headers both on the egress/request path as well as on the
ingress/response path. They are documented in this section.
.. contents::
:local:
.. _config_http_filters_router_x-envoy-expected-rq-timeout-ms:
x-envoy-expected-rq-timeout-ms
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is the time in milliseconds the router expects the request to be completed. Envoy sets this
header so that the upstream host receiving the request can make decisions based on the request
timeout, e.g., early exit. This is set on internal requests and is either taken from the
:ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms` header or the :ref:`route timeout
<config_http_conn_man_route_table_route_timeout>`, in that order.
.. _config_http_filters_router_x-envoy-max-retries:
x-envoy-max-retries
^^^^^^^^^^^^^^^^^^^
If a :ref:`retry policy <config_http_conn_man_route_table_route_retry>` is in place, Envoy will default to retrying one
time unless explicitly specified. The number of retries can be explicitly set in the
:ref:`route retry config <config_http_conn_man_route_table_route_retry>` or by using this header.
If a :ref:`retry policy <config_http_conn_man_route_table_route_retry>` is not configured and
:ref:`config_http_filters_router_x-envoy-retry-on` or
:ref:`config_http_filters_router_x-envoy-retry-grpc-on` headers are not specified, Envoy will not retry a failed request.
A few notes on how Envoy does retries:
* The route timeout (set via :ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms` or the
:ref:`route configuration <config_http_conn_man_route_table_route_timeout>`) **includes** all
retries. Thus if the request timeout is set to 3s, and the first request attempt takes 2.7s, the
retry (including backoff) has 0.3s to complete. This is by design to avoid an exponential
retry/timeout explosion.
* Envoy uses a fully jittered exponential backoff algorithm for retries with a base time of 25ms.
The first retry will be delayed randomly between 0-24ms, the 2nd between 0-74ms, the 3rd between
0-174ms and so on.
* If max retries is set both by header as well as in the route configuration, the maximum value is
taken when determining the max retries to use for the request.
.. _config_http_filters_router_x-envoy-retry-on:
x-envoy-retry-on
^^^^^^^^^^^^^^^^
Setting this header on egress requests will cause Envoy to attempt to retry failed requests (number
of retries defaults to 1 and can be controlled by :ref:`x-envoy-max-retries <config_http_filters_router_x-envoy-max-retries>`
header or the :ref:`route config retry policy <config_http_conn_man_route_table_route_retry>`). The
value to which the x-envoy-retry-on header is set indicates the retry policy. One or more policies can be specified
using a ',' delimited list. The supported policies are:
5xx
Envoy will attempt a retry if the upstream server responds with any 5xx response code, or does not
respond at all (disconnect/reset/read timeout). (Includes *connect-failure* and *refused-stream*)
* **NOTE:** Envoy will not retry when a request exceeds
:ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms` (resulting in a 504 error
code). Use :ref:`config_http_filters_router_x-envoy-upstream-rq-per-try-timeout-ms` if you want
to retry when individual attempts take too long.
:ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms` is an outer time limit for a
request, including any retries that take place.
connect-failure
Envoy will attempt a retry if a request is failed because of a connection failure to the upstream
server (connect timeout, etc.). (Included in *5xx*)
* **NOTE:** A connection failure/timeout is at the TCP level, not the request level. This does not
include upstream request timeouts specified via
:ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms` or via :ref:`route
configuration <config_http_conn_man_route_table_route_retry>`.
retriable-4xx
Envoy will attempt a retry if the upstream server responds with a retriable 4xx response code.
Currently, the only response code in this category is 409.
* **NOTE:** Be careful turning on this retry type. There are certain cases where a 409 can indicate
that an optimistic locking revision needs to be updated. Thus, the caller should not retry and
instead needs to read and then attempt another write. If a retry happens in this type of case, it
will always fail with another 409.
refused-stream
Envoy will attempt a retry if the upstream server resets the stream with a REFUSED_STREAM error
code. This reset type indicates that a request is safe to retry. (Included in *5xx*)
The number of retries can be controlled via the
:ref:`config_http_filters_router_x-envoy-max-retries` header or via the :ref:`route
configuration <config_http_conn_man_route_table_route_retry>`.
Note that retry policies can also be applied at the :ref:`route level
<config_http_conn_man_route_table_route_retry>`.
By default, Envoy will *not* perform retries unless you've configured them per above.
.. _config_http_filters_router_x-envoy-retry-grpc-on:
x-envoy-retry-grpc-on
^^^^^^^^^^^^^^^^^^^^^
Setting this header on egress requests will cause Envoy to attempt to retry failed requests (number of
retries defaults to 1, and can be controlled by
:ref:`x-envoy-max-retries <config_http_filters_router_x-envoy-max-retries>`
header or the :ref:`route config retry policy <config_http_conn_man_route_table_route_retry>`).
gRPC retries are currently only supported for gRPC status codes in response headers. gRPC status codes in
trailers will not trigger retry logic. One or more policies can be specified using a ',' delimited
list. The supported policies are:
cancelled
Envoy will attempt a retry if the gRPC status code in the response headers is "cancelled" (1)
deadline-exceeded
Envoy will attempt a retry if the gRPC status code in the response headers is "deadline-exceeded" (4)
resource-exhausted
Envoy will attempt a retry if the gRPC status code in the response headers is "resource-exhausted" (8)
As with the :ref:`config_http_filters_router_x-envoy-retry-on` header, the number of retries can
be controlled via the :ref:`config_http_filters_router_x-envoy-max-retries` header.
Note that retry policies can also be applied at the :ref:`route level
<config_http_conn_man_route_table_route_retry>`.
By default, Envoy will *not* perform retries unless you've configured them per above.
.. _config_http_filters_router_x-envoy-upstream-alt-stat-name:
x-envoy-upstream-alt-stat-name
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting this header on egress requests will cause Envoy to emit upstream response code/timing
statistics to a dual stat tree. This can be useful for application level categories that Envoy
doesn't know about. The output tree is documented :ref:`here
<config_cluster_manager_cluster_stats_alt_tree>`.
x-envoy-upstream-canary
^^^^^^^^^^^^^^^^^^^^^^^
If an upstream host sets this header, the router will use it to generate canary specific statistics.
The output tree is documented :ref:`here <config_cluster_manager_cluster_stats_dynamic_http>`.
.. _config_http_filters_router_x-envoy-upstream-rq-timeout-alt-response:
x-envoy-upstream-rq-timeout-alt-response
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting this header on egress requests will cause Envoy to set a 204 response code (instead of 504)
in the event of a request timeout. The actual value of the header is ignored; only its presence
is considered. See also :ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms`.
.. _config_http_filters_router_x-envoy-upstream-rq-timeout-ms:
x-envoy-upstream-rq-timeout-ms
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting this header on egress requests will cause Envoy to override the :ref:`route configuration
<config_http_conn_man_route_table_route_timeout>`. The timeout must be specified in millisecond
units. See also :ref:`config_http_filters_router_x-envoy-upstream-rq-per-try-timeout-ms`.
.. _config_http_filters_router_x-envoy-upstream-rq-per-try-timeout-ms:
x-envoy-upstream-rq-per-try-timeout-ms
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting this header on egress requests will cause Envoy to set a *per try* timeout on routed
requests. This timeout must be <= the global route timeout (see
:ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms`) or it is ignored. This allows a
caller to set a tight per try timeout to allow for retries while maintaining a reasonable overall
timeout.
x-envoy-upstream-service-time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Contains the time in milliseconds spent by the upstream host processing the request. This is useful
if the client wants to determine service time compared to network latency. This header is set on
responses.
.. _config_http_filters_router_x-envoy-original-path:
x-envoy-original-path
^^^^^^^^^^^^^^^^^^^^^
If the route utilizes :ref:`prefix_rewrite <config_http_conn_man_route_table_route_prefix_rewrite>`,
Envoy will put the original path header in this header. This can be useful for logging and
debugging.
.. _config_http_filters_router_x-envoy-immediate-health-check-fail:
x-envoy-immediate-health-check-fail
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the upstream host returns this header (set to any value), Envoy will immediately assume the
upstream host has failed :ref:`active health checking <arch_overview_health_checking>` (if the
cluster has been :ref:`configured <config_cluster_manager_cluster_hc>` for active health checking).
This can be used to fast fail an upstream host via standard data plane processing without waiting
for the next health check interval. The host can become healthy again via standard active health
checks. See the :ref:`health checking overview <arch_overview_health_checking>` for more
information.
.. _config_http_filters_router_x-envoy-overloaded:
x-envoy-overloaded
^^^^^^^^^^^^^^^^^^
If this header is set by upstream, Envoy will not retry. Currently, only the presence of the
header is considered, not its value. Additionally, Envoy will set this header on the downstream response
if a request was dropped due to either :ref:`maintenance mode
<config_http_filters_router_runtime_maintenance_mode>` or upstream :ref:`circuit breaking
<arch_overview_circuit_break>`.
.. _config_http_filters_router_x-envoy-decorator-operation:
x-envoy-decorator-operation
^^^^^^^^^^^^^^^^^^^^^^^^^^^
If this header is present on ingress requests, its value will override any locally defined
operation (span) name on the server span generated by the tracing mechanism. Similarly, if
this header is present on an egress response, its value will override any locally defined
operation (span) name on the client span.
.. _config_http_filters_router_stats:
Statistics
----------
The router outputs many statistics in the cluster namespace (depending on the cluster specified in
the chosen route). See :ref:`here <config_cluster_manager_cluster_stats>` for more information.
The router filter outputs statistics in the *http.<stat_prefix>.* namespace. The :ref:`stat
prefix <config_http_conn_man_stat_prefix>` comes from the owning HTTP connection manager.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
no_route, Counter, Total requests that had no route and resulted in a 404
no_cluster, Counter, Total requests in which the target cluster did not exist and resulted in a 404
rq_redirect, Counter, Total requests that resulted in a redirect response
rq_total, Counter, Total routed requests
Virtual cluster statistics are output in the
*vhost.<virtual host name>.vcluster.<virtual cluster name>.* namespace and include the following
statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
upstream_rq_<\*xx>, Counter, "Aggregate HTTP response codes (e.g., 2xx, 3xx, etc.)"
upstream_rq_<\*>, Counter, "Specific HTTP response codes (e.g., 201, 302, etc.)"
upstream_rq_time, Histogram, Request time milliseconds
Runtime
-------
The router filter supports the following runtime settings:
upstream.base_retry_backoff_ms
Base exponential retry back off time. See :ref:`here <arch_overview_http_routing_retry>` for more
information. Defaults to 25ms.
.. _config_http_filters_router_runtime_maintenance_mode:
upstream.maintenance_mode.<cluster name>
% of requests that will result in an immediate 503 response. This overrides any routing behavior
for requests that would have been destined for <cluster name>. This can be used for load
shedding, failure injection, etc. Defaults to disabled.
upstream.use_retry
% of requests that are eligible for retry. This configuration is checked before any other retry
configuration and can be used to fully disable retries across all Envoys if needed.
@ -0,0 +1,21 @@
.. _config_listener_filters:
Filters
=======
Network filter :ref:`architecture overview <arch_overview_network_filters>`.
.. code-block:: json
{
"name": "...",
"config": "{...}"
}
name
*(required, string)* The name of the filter to instantiate. The name must match a :ref:`supported
filter <config_network_filters>`.
config
*(required, object)* Filter specific configuration which depends on the filter being instantiated.
See the :ref:`supported filters <config_network_filters>` for further documentation.
@ -0,0 +1,84 @@
.. _config_listeners_lds:
Listener discovery service
==========================
The listener discovery service (LDS) is an optional API that Envoy will call to dynamically fetch
listeners. Envoy will reconcile the API response and add, modify, or remove known listeners
depending on what is required.
The semantics of listener updates are as follows:
* Every listener must have a unique :ref:`name <config_listeners_name>`. If a name is not
provided, Envoy will create a UUID. Listeners that are to be dynamically updated should have a
unique name supplied by the management server.
* When a listener is added, it will be "warmed" before taking traffic. For example, if the listener
references an :ref:`RDS <config_http_conn_man_rds>` configuration, that configuration will be
resolved and fetched before the listener is moved to "active."
* Listeners are effectively constant once created. Thus, when a listener is updated, an entirely
new listener is created (with the same listen socket). This listener goes through the same
warming process described above for a newly added listener.
* When a listener is updated or removed, the old listener will be placed into a "draining" state
much like when the entire server is drained for restart. Connections owned by the listener will
be gracefully closed (if possible) for some period of time before the listener is removed and any
remaining connections are closed. The drain time is set via the :option:`--drain-time-s` option.
.. code-block:: json
{
"cluster": "...",
"refresh_delay_ms": "..."
}
cluster
*(required, string)* The name of an upstream :ref:`cluster <config_cluster_manager_cluster>` that
hosts the listener discovery service. The cluster must run a REST service that implements the
:ref:`LDS HTTP API <config_listeners_lds_api>`. NOTE: This is the *name* of a cluster defined
in the :ref:`cluster manager <config_cluster_manager>` configuration, not the full definition of
a cluster as in the case of SDS and CDS.
refresh_delay_ms
*(optional, integer)* The delay, in milliseconds, between fetches to the LDS API. Envoy will add
an additional random jitter to the delay that is between zero and *refresh_delay_ms*
milliseconds. Thus the longest possible refresh delay is 2 \* *refresh_delay_ms*. Default value
is 30000ms (30 seconds).
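For example, a hypothetical LDS configuration pointing at a statically defined cluster named
*lds_cluster* (the cluster name is an assumption):
.. code-block:: json

  {
    "cluster": "lds_cluster",
    "refresh_delay_ms": 30000
  }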
.. _config_listeners_lds_api:
REST API
--------
.. http:get:: /v1/listeners/(string: service_cluster)/(string: service_node)
Asks the discovery service to return all listeners for a particular `service_cluster` and
`service_node`. `service_cluster` corresponds to the :option:`--service-cluster` CLI option.
`service_node` corresponds to the :option:`--service-node` CLI option. Responses use the following
JSON schema:
.. code-block:: json
{
"listeners": []
}
listeners
*(required, array)* A list of :ref:`listeners <config_listeners>` that will be
dynamically added/modified within the listener manager. The management server is expected to
respond with the complete set of listeners that Envoy should configure during each polling cycle.
Envoy will reconcile this list with the listeners that are currently loaded and either
add/modify/remove listeners as necessary.
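As an illustration, a hypothetical response containing a single plain text listener (the listener
name and address are assumptions; the echo filter stands in for a real filter chain purely for
brevity):
.. code-block:: json

  {
    "listeners": [
      {
        "name": "example_listener",
        "address": "tcp://0.0.0.0:80",
        "filters": [
          {"name": "echo", "config": {}}
        ]
      }
    ]
  }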
Statistics
----------
LDS has a statistics tree rooted at *listener_manager.lds.* with the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
config_reload, Counter, Total API fetches that resulted in a config reload due to a different config
update_attempt, Counter, Total API fetches attempted
update_success, Counter, Total API fetches completed successfully
update_failure, Counter, Total API fetches that failed (either network or schema errors)
version, Gauge, Hash of the contents from the last successful API fetch
@ -0,0 +1,107 @@
.. _config_listeners:
Listeners
=========
.. toctree::
:hidden:
filters
ssl
stats
runtime
lds
The top level Envoy configuration contains a list of :ref:`listeners <arch_overview_listeners>`.
Each individual listener configuration has the following format:
.. code-block:: json
{
"name": "...",
"address": "...",
"filters": [],
"ssl_context": "{...}",
"bind_to_port": "...",
"use_proxy_proto": "...",
"use_original_dst": "...",
"per_connection_buffer_limit_bytes": "...",
"drain_type": "..."
}
.. _config_listeners_name:
name
*(optional, string)* The unique name by which this listener is known. If no name is provided,
Envoy will allocate an internal UUID for the listener. If the listener is to be dynamically
updated or removed via :ref:`LDS <config_listeners_lds>` a unique name must be provided.
By default, the maximum length of a listener's name is limited to 60 characters. This limit can be
increased by setting the :option:`--max-obj-name-len` command line argument to the desired value.
address
*(required, string)* The address that the listener should listen on. Currently only TCP
listeners are supported, e.g., "tcp://127.0.0.1:80". Note, "tcp://0.0.0.0:80" is the wild card
match for any IPv4 address with port 80.
:ref:`filters <config_listener_filters>`
*(required, array)* A list of individual :ref:`network filters <arch_overview_network_filters>`
that make up the filter chain for connections established with the listener. Order matters as the
filters are processed sequentially as connection events happen.
**Note:** If the filter list is empty, the connection will close by default.
:ref:`ssl_context <config_listener_ssl_context>`
*(optional, object)* The :ref:`TLS <arch_overview_ssl>` context configuration for a TLS listener.
If no TLS context block is defined, the listener is a plain text listener.
bind_to_port
*(optional, boolean)* Whether the listener should bind to the port. A listener that doesn't bind
can only receive connections redirected from other listeners that set the *use_original_dst*
parameter to true. Default is true.
use_proxy_proto
*(optional, boolean)* Whether the listener should expect a
`PROXY protocol V1 <http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt>`_ header on new
connections. If this option is enabled, the listener will assume that the remote address of the
connection is the one specified in the header. Some load balancers including the AWS ELB support
this option. If the option is absent or set to false, Envoy will use the physical peer address
of the connection as the remote address.
use_original_dst
*(optional, boolean)* If a connection is redirected using *iptables*, the port on which the proxy
receives it might be different from the original destination port. When this flag is set to true,
the listener hands off redirected connections to the listener associated with the original
destination port. If there is no listener associated with the original destination port, the
connection is handled by the listener that receives it. Default is false.
.. _config_listeners_per_connection_buffer_limit_bytes:
per_connection_buffer_limit_bytes
*(optional, integer)* Soft limit on size of the listener's new connection read and write buffers.
If unspecified, an implementation defined default is applied (1MiB).
.. _config_listeners_drain_type:
drain_type
*(optional, string)* The type of draining that the listener does. Allowed values include *default*
and *modify_only*. See the :ref:`draining <arch_overview_draining>` architecture overview for
more information.
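Putting the fields above together, a hypothetical minimal plain text listener might look like the
following (the name and address are illustrative; the echo filter stands in for a real filter
chain):
.. code-block:: json

  {
    "name": "example_listener",
    "address": "tcp://0.0.0.0:80",
    "filters": [
      {"name": "echo", "config": {}}
    ],
    "bind_to_port": true,
    "drain_type": "default"
  }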
Statistics
----------
The listener manager has a statistics tree rooted at *listener_manager.* with the following
statistics. Any ``:`` character in the stats name is replaced with ``_``.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
listener_added, Counter, Total listeners added (either via static config or LDS)
listener_modified, Counter, Total listeners modified (via LDS)
listener_removed, Counter, Total listeners removed (via LDS)
listener_create_success, Counter, Total listener objects successfully added to workers
listener_create_failure, Counter, Total failed listener object additions to workers
total_listeners_warming, Gauge, Number of currently warming listeners
total_listeners_active, Gauge, Number of currently active listeners
total_listeners_draining, Gauge, Number of currently draining listeners
@ -0,0 +1,8 @@
Runtime
=======
Listeners support the following runtime settings:
ssl.alt_alpn
What % of requests use the configured :ref:`alt_alpn <config_listener_ssl_context_alt_alpn>`
protocol string. Defaults to 0.
@ -0,0 +1,125 @@
.. _config_listener_ssl_context:
TLS context
===========
TLS :ref:`architecture overview <arch_overview_ssl>`.
.. code-block:: json
{
"cert_chain_file": "...",
"private_key_file": "...",
"alpn_protocols": "...",
"alt_alpn_protocols": "...",
"ca_cert_file": "...",
"verify_certificate_hash": "...",
"verify_subject_alt_name": [],
"cipher_suites": "...",
"ecdh_curves": "...",
"session_ticket_key_paths": []
}
cert_chain_file
*(required, string)* The certificate chain file that should be served by the listener.
private_key_file
*(required, string)* The private key that corresponds to the certificate chain file.
alpn_protocols
*(optional, string)* Supplies the list of ALPN protocols that the listener should expose. In
practice this is likely to be set to one of two values (see the
:ref:`codec_type <config_http_conn_man_codec_type>` parameter in the HTTP connection
manager for more information):
* "h2,http/1.1" If the listener is going to support both HTTP/2 and HTTP/1.1.
* "http/1.1" If the listener is only going to support HTTP/1.1
.. _config_listener_ssl_context_alt_alpn:
alt_alpn_protocols
*(optional, string)* An alternate ALPN protocol string that can be switched to via runtime. This
is useful, for example, to disable HTTP/2 without having to deploy a new configuration.
ca_cert_file
*(optional, string)* A file containing certificate authority certificates to use in verifying
a presented client side certificate. If not specified and a client certificate is presented it
will not be verified. By default, a client certificate is optional, unless one of the additional
options (
:ref:`require_client_certificate <config_listener_ssl_context_require_client_certificate>`,
:ref:`verify_certificate_hash <config_listener_ssl_context_verify_certificate_hash>` or
:ref:`verify_subject_alt_name <config_listener_ssl_context_verify_subject_alt_name>`) is also
specified.
.. _config_listener_ssl_context_require_client_certificate:
require_client_certificate
*(optional, boolean)* If specified, Envoy will reject connections without a valid client certificate.
.. _config_listener_ssl_context_verify_certificate_hash:
verify_certificate_hash
*(optional, string)* If specified, Envoy will verify (pin) the hash of the presented client
side certificate.
.. _config_listener_ssl_context_verify_subject_alt_name:
verify_subject_alt_name
*(optional, array)* An optional list of subject alt names. If specified, Envoy will verify
that the client certificate's subject alt name matches one of the specified values.
cipher_suites
*(optional, string)* If specified, the TLS listener will only support the specified `cipher list
<https://commondatastorage.googleapis.com/chromium-boringssl-docs/ssl.h.html#Cipher-suite-configuration>`_.
If not specified, the default list:
.. code-block:: none
[ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-ECDSA-CHACHA20-POLY1305]
[ECDHE-RSA-AES128-GCM-SHA256|ECDHE-RSA-CHACHA20-POLY1305]
ECDHE-ECDSA-AES128-SHA256
ECDHE-RSA-AES128-SHA256
ECDHE-ECDSA-AES128-SHA
ECDHE-RSA-AES128-SHA
AES128-GCM-SHA256
AES128-SHA256
AES128-SHA
ECDHE-ECDSA-AES256-GCM-SHA384
ECDHE-RSA-AES256-GCM-SHA384
ECDHE-ECDSA-AES256-SHA384
ECDHE-RSA-AES256-SHA384
ECDHE-ECDSA-AES256-SHA
ECDHE-RSA-AES256-SHA
AES256-GCM-SHA384
AES256-SHA256
AES256-SHA
will be used.
ecdh_curves
*(optional, string)* If specified, the TLS connection will only support the specified ECDH curves.
If not specified, the default curves (X25519, P-256) will be used.
session_ticket_key_paths
*(optional, array)* Paths to keyfiles for encrypting and decrypting TLS session tickets. The
first keyfile in the array contains the key to encrypt all new sessions created by this context.
All keys are candidates for decrypting received tickets. This allows for easy rotation of keys
by, for example, putting the new keyfile first, and the previous keyfile second.
If `session_ticket_key_paths` is not specified, the TLS library will still support resuming
sessions via tickets, but it will use an internally-generated and managed key, so sessions cannot
be resumed across hot restarts or on different hosts.
Each keyfile must contain exactly 80 bytes of cryptographically-secure random data. For example,
the output of ``openssl rand 80``.
.. attention::
Using this feature has serious security considerations and risks. Improper handling of keys may
result in loss of secrecy in connections, even if ciphers supporting perfect forward secrecy
are used. See https://www.imperialviolet.org/2013/06/27/botchingpfs.html for some discussion.
To minimize the risk, you must:
* Keep the session ticket keys at least as secure as your TLS certificate private keys
* Rotate session ticket keys at least daily, and preferably hourly
* Always generate keys using a cryptographically-secure random data source
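As an illustration, a minimal TLS context that serves HTTP/2 and HTTP/1.1 (the file paths are
assumptions):
.. code-block:: json

  {
    "cert_chain_file": "/etc/envoy/cert.pem",
    "private_key_file": "/etc/envoy/key.pem",
    "alpn_protocols": "h2,http/1.1"
  }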
@ -0,0 +1,24 @@
.. _config_listener_stats:
Statistics
==========
Every listener has a statistics tree rooted at *listener.<address>.* with the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
downstream_cx_total, Counter, Total connections
downstream_cx_destroy, Counter, Total destroyed connections
downstream_cx_active, Gauge, Total active connections
downstream_cx_length_ms, Histogram, Connection length milliseconds
ssl.connection_error, Counter, Total TLS connection errors not including failed certificate verifications
ssl.handshake, Counter, Total successful TLS connection handshakes
ssl.session_reused, Counter, Total successful TLS session resumptions
ssl.no_certificate, Counter, Total successful TLS connections with no client certificate
ssl.fail_verify_no_cert, Counter, Total TLS connections that failed because of missing client certificate
ssl.fail_verify_error, Counter, Total TLS connections that failed CA verification
ssl.fail_verify_san, Counter, Total TLS connections that failed SAN verification
ssl.fail_verify_cert_hash, Counter, Total TLS connections that failed certificate pinning verification
ssl.cipher.<cipher>, Counter, Total TLS connections that used <cipher>
@ -0,0 +1,98 @@
.. _config_network_filters_client_ssl_auth:
Client TLS authentication
=========================
Client TLS authentication filter :ref:`architecture overview <arch_overview_ssl_auth_filter>`.
.. code-block:: json
{
"name": "client_ssl_auth",
"config": {
"auth_api_cluster": "...",
"stat_prefix": "...",
"refresh_delay_ms": "...",
"ip_white_list": []
}
}
auth_api_cluster
*(required, string)* The :ref:`cluster manager <arch_overview_cluster_manager>` cluster that runs
the authentication service. The filter will connect to the service every 60s to fetch the list
of principals. The service must support the expected :ref:`REST API
<config_network_filters_client_ssl_auth_rest_api>`.
stat_prefix
*(required, string)* The prefix to use when emitting :ref:`statistics
<config_network_filters_client_ssl_auth_stats>`.
refresh_delay_ms
*(optional, integer)* Time in milliseconds between principal refreshes from the authentication
service. Default is 60000 (60s). The actual fetch time will be this value plus a random jittered
value between 0-refresh_delay_ms milliseconds.
ip_white_list
*(optional, array)* An optional list of IP address and subnet masks that should be white listed
for access by the filter. If no list is provided, there is no IP white list. The list is
specified as in the following example:
.. code-block:: json
[
"192.168.3.0/24",
"50.1.2.3/32",
"10.15.0.0/16",
"2001:abcd::/64"
]
.. _config_network_filters_client_ssl_auth_stats:
Statistics
----------
Every configured client TLS authentication filter has statistics rooted at
*auth.clientssl.<stat_prefix>.* with the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
update_success, Counter, Total principal update successes
update_failure, Counter, Total principal update failures
auth_no_ssl, Counter, Total connections ignored due to no TLS
auth_ip_white_list, Counter, Total connections allowed due to the IP white list
auth_digest_match, Counter, Total connections allowed due to certificate match
auth_digest_no_match, Counter, Total connections denied due to no certificate match
total_principals, Gauge, Total loaded principals
.. _config_network_filters_client_ssl_auth_rest_api:
REST API
--------
.. http:get:: /v1/certs/list/approved
The authentication filter will call this API every refresh interval to fetch the current list
of approved certificates/principals. The expected JSON response looks like:
.. code-block:: json
{
"certificates": []
}
certificates
*(required, array)* List of approved certificates/principals.
Each certificate object is defined as:
.. code-block:: json
{
"fingerprint_sha256": "...",
}
fingerprint_sha256
*(required, string)* The SHA256 hash of the approved client certificate. Envoy will match this
hash to the presented client certificate to determine whether there is a digest match.
@ -0,0 +1,12 @@
Echo
====
The echo is a trivial network filter mainly meant to demonstrate the network filter API. If
installed it will echo (write) all received data back to the connected downstream client.
.. code-block:: json
{
"name": "echo",
"config": {}
}
@ -0,0 +1,212 @@
.. _config_network_filters_mongo_proxy:
Mongo proxy
===========
MongoDB :ref:`architecture overview <arch_overview_mongo>`.
.. code-block:: json
{
"name": "mongo_proxy",
"config": {
"stat_prefix": "...",
"access_log": "...",
"fault": {}
}
}
stat_prefix
*(required, string)* The prefix to use when emitting :ref:`statistics
<config_network_filters_mongo_proxy_stats>`.
access_log
*(optional, string)* The optional path to use for writing Mongo access logs. If no access log
path is specified, no access logs will be written. Note that access logging is also gated by
:ref:`runtime <config_network_filters_mongo_proxy_runtime>`.
fault
*(optional, object)* If specified, the filter will inject faults based on the values in the object.
Fault configuration
-------------------
Configuration for MongoDB fixed duration delays. Delays are applied to the following MongoDB
operations: Query, Insert, GetMore, and KillCursors. Once an active delay is in progress, all
incoming data up until the timer event fires will be a part of the delay.
.. code-block:: json
{
"fixed_delay": {
"percent": "...",
"duration_ms": "..."
}
}
percent
*(required, integer)* Probability of an eligible MongoDB operation to be affected by the
injected fault when there is no active fault. Valid values are integers in a range of [0, 100].
duration_ms
*(required, integer)* Non-negative delay duration in milliseconds.
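Putting this together, a hypothetical filter configuration that delays half of eligible operations
by 10 milliseconds (the stat prefix and log path are illustrative):
.. code-block:: json

  {
    "name": "mongo_proxy",
    "config": {
      "stat_prefix": "mongo_db1",
      "access_log": "/var/log/envoy/mongo.log",
      "fault": {
        "fixed_delay": {
          "percent": 50,
          "duration_ms": 10
        }
      }
    }
  }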
.. _config_network_filters_mongo_proxy_stats:
Statistics
----------
Every configured MongoDB proxy filter has statistics rooted at *mongo.<stat_prefix>.* with the
following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
decoding_error, Counter, Number of MongoDB protocol decoding errors
delay_injected, Counter, Number of times the delay is injected
op_get_more, Counter, Number of OP_GET_MORE messages
op_insert, Counter, Number of OP_INSERT messages
op_kill_cursors, Counter, Number of OP_KILL_CURSORS messages
op_query, Counter, Number of OP_QUERY messages
op_query_tailable_cursor, Counter, Number of OP_QUERY with tailable cursor flag set
op_query_no_cursor_timeout, Counter, Number of OP_QUERY with no cursor timeout flag set
op_query_await_data, Counter, Number of OP_QUERY with await data flag set
op_query_exhaust, Counter, Number of OP_QUERY with exhaust flag set
op_query_no_max_time, Counter, Number of queries without maxTimeMS set
op_query_scatter_get, Counter, Number of scatter get queries
op_query_multi_get, Counter, Number of multi get queries
op_query_active, Gauge, Number of active queries
op_reply, Counter, Number of OP_REPLY messages
op_reply_cursor_not_found, Counter, Number of OP_REPLY with cursor not found flag set
op_reply_query_failure, Counter, Number of OP_REPLY with query failure flag set
op_reply_valid_cursor, Counter, Number of OP_REPLY with a valid cursor
cx_destroy_local_with_active_rq, Counter, Connections destroyed locally with an active query
cx_destroy_remote_with_active_rq, Counter, Connections destroyed remotely with an active query
cx_drain_close, Counter, Connections gracefully closed on reply boundaries during server drain
Scatter gets
^^^^^^^^^^^^
Envoy defines a *scatter get* as any query that does not use an *_id* field as a query parameter.
Envoy looks in both the top level document as well as within a *$query* field for *_id*.
Multi gets
^^^^^^^^^^
Envoy defines a *multi get* as any query that does use an *_id* field as a query parameter, but
where *_id* is not a scalar value (i.e., a document or an array). Envoy looks in both the top level
document as well as within a *$query* field for *_id*.
.. _config_network_filters_mongo_proxy_comment_parsing:
$comment parsing
^^^^^^^^^^^^^^^^
If a query has a top level *$comment* field (typically in addition to a *$query* field), Envoy will
parse it as JSON and look for the following structure:
.. code-block:: json
{
"callingFunction": "..."
}
callingFunction
*(required, string)* the function that made the query. If available, the function will be used
in :ref:`callsite <config_network_filters_mongo_proxy_callsite_stats>` query statistics.
Per command statistics
^^^^^^^^^^^^^^^^^^^^^^
The MongoDB filter will gather statistics for commands in the *mongo.<stat_prefix>.cmd.<cmd>.*
namespace.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
total, Counter, Number of commands
reply_num_docs, Histogram, Number of documents in reply
reply_size, Histogram, Size of the reply in bytes
reply_time_ms, Histogram, Command time in milliseconds
.. _config_network_filters_mongo_proxy_collection_stats:
Per collection query statistics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The MongoDB filter will gather statistics for queries in the
*mongo.<stat_prefix>.collection.<collection>.query.* namespace.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
total, Counter, Number of queries
scatter_get, Counter, Number of scatter gets
multi_get, Counter, Number of multi gets
reply_num_docs, Histogram, Number of documents in reply
reply_size, Histogram, Size of the reply in bytes
reply_time_ms, Histogram, Query time in milliseconds
.. _config_network_filters_mongo_proxy_callsite_stats:
Per collection and callsite query statistics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the application provides the :ref:`calling function
<config_network_filters_mongo_proxy_comment_parsing>` in the *$comment* field, Envoy will generate
per callsite statistics. These statistics match the :ref:`per collection statistics
<config_network_filters_mongo_proxy_collection_stats>` but are found in the
*mongo.<stat_prefix>.collection.<collection>.callsite.<callsite>.query.* namespace.
.. _config_network_filters_mongo_proxy_runtime:
Runtime
-------
The Mongo proxy filter supports the following runtime settings:
mongo.connection_logging_enabled
% of connections that will have logging enabled. Defaults to 100. This allows only a % of
connections to have logging, but for all messages on those connections to be logged.
mongo.proxy_enabled
% of connections that will have the proxy enabled at all. Defaults to 100.
mongo.logging_enabled
% of messages that will be logged. Defaults to 100. If less than 100, queries may be logged
without replies, etc.
mongo.drain_close_enabled
% of connections that will be drain closed if the server is draining and would otherwise
attempt a drain close. Defaults to 100.
mongo.fault.fixed_delay.percent
Probability of an eligible MongoDB operation to be affected by
the injected fault when there is no active fault.
Defaults to the *percent* specified in the config.
mongo.fault.fixed_delay.duration_ms
The delay duration in milliseconds. Defaults to the *duration_ms* specified in the config.
Access log format
-----------------
The access log format is not customizable and has the following layout:
.. code-block:: json
{"time": "...", "message": "...", "upstream_host": "..."}
time
System time that complete message was parsed, including milliseconds.
message
Textual expansion of the message. Whether the message is fully expanded depends on the context.
Sometimes summary data is presented to avoid extremely large log sizes.
upstream_host
The upstream host that the connection is proxying to, if available. This is populated if the
filter is used along with the :ref:`TCP proxy filter <config_network_filters_tcp_proxy>`.
@ -0,0 +1,18 @@
.. _config_network_filters:
Network filters
===============
In addition to the :ref:`HTTP connection manager <config_http_conn_man>` which is large
enough to have its own section in the configuration guide, Envoy has the following built-in
network filters.
.. toctree::
:maxdepth: 2
client_ssl_auth_filter
echo_filter
mongo_proxy_filter
rate_limit_filter
redis_proxy_filter
tcp_proxy_filter
@ -0,0 +1,71 @@
.. _config_network_filters_rate_limit:
Rate limit
==========
Global rate limiting :ref:`architecture overview <arch_overview_rate_limit>`.
.. code-block:: json
{
"name": "ratelimit",
"config": {
"stat_prefix": "...",
"domain": "...",
"descriptors": [],
"timeout_ms": "..."
}
}
stat_prefix
*(required, string)* The prefix to use when emitting :ref:`statistics
<config_network_filters_rate_limit_stats>`.
domain
*(required, string)* The rate limit domain to use in the rate limit service request.
descriptors
*(required, array)* The rate limit descriptor list to use in the rate limit service request. The
descriptors are specified as in the following example:
.. code-block:: json
[
[{"key": "hello", "value": "world"}, {"key": "foo", "value": "bar"}],
[{"key": "foo2", "value": "bar2"}]
]
timeout_ms
*(optional, integer)* The timeout in milliseconds for the rate limit service RPC. If not set,
this defaults to 20ms.
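For example, a hypothetical filled-in configuration reusing the descriptor list above (the stat
prefix and domain are illustrative):
.. code-block:: json

  {
    "name": "ratelimit",
    "config": {
      "stat_prefix": "ingress_tcp",
      "domain": "envoy_example",
      "descriptors": [
        [{"key": "hello", "value": "world"}, {"key": "foo", "value": "bar"}],
        [{"key": "foo2", "value": "bar2"}]
      ],
      "timeout_ms": 20
    }
  }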
.. _config_network_filters_rate_limit_stats:
Statistics
----------
Every configured rate limit filter has statistics rooted at *ratelimit.<stat_prefix>.* with the
following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
total, Counter, Total requests to the rate limit service
error, Counter, Total errors contacting the rate limit service
over_limit, Counter, Total over limit responses from the rate limit service
ok, Counter, Total under limit responses from the rate limit service
cx_closed, Counter, Total connections closed due to an over limit response from the rate limit service
active, Gauge, Total active requests to the rate limit service
Runtime
-------
The network rate limit filter supports the following runtime settings:
ratelimit.tcp_filter_enabled
% of connections that will call the rate limit service. Defaults to 100.
ratelimit.tcp_filter_enforcing
% of connections that will call the rate limit service and enforce the decision. Defaults to 100.
This can be used to test what would happen before fully enforcing the outcome.
@ -0,0 +1,107 @@
.. _config_network_filters_redis_proxy:
Redis proxy
===========
Redis :ref:`architecture overview <arch_overview_redis>`.
.. code-block:: json
{
"name": "redis_proxy",
"config": {
"cluster_name": "...",
"conn_pool": "{...}",
"stat_prefix": "..."
}
}
cluster_name
*(required, string)* Name of cluster from cluster manager.
See the :ref:`configuration section <arch_overview_redis_configuration>` of the architecture
overview for recommendations on configuring the backing cluster.
conn_pool
*(required, object)* Connection pool configuration.
stat_prefix
*(required, string)* The prefix to use when emitting :ref:`statistics
<config_network_filters_redis_proxy_stats>`.
Connection pool configuration
-----------------------------
.. code-block:: json
{
"op_timeout_ms": "...",
}
op_timeout_ms
*(required, integer)* Per-operation timeout in milliseconds. The timer starts when the first
command of a pipeline is written to the backend connection. Each response received from Redis
resets the timer since it signifies that the next command is being processed by the backend.
The only exception to this behavior is when a connection to a backend is not yet established. In
that case, the connect timeout on the cluster will govern the timeout until the connection is
ready.
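For example, a hypothetical configuration with a 100ms per-operation timeout (the cluster name and
stat prefix are illustrative):
.. code-block:: json

  {
    "name": "redis_proxy",
    "config": {
      "cluster_name": "redis_cluster",
      "conn_pool": {
        "op_timeout_ms": 100
      },
      "stat_prefix": "redis"
    }
  }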
.. _config_network_filters_redis_proxy_stats:
Statistics
----------
Every configured Redis proxy filter has statistics rooted at *redis.<stat_prefix>.* with the
following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
downstream_cx_active, Gauge, Total active connections
downstream_cx_protocol_error, Counter, Total protocol errors
downstream_cx_rx_bytes_buffered, Gauge, Total received bytes currently buffered
downstream_cx_rx_bytes_total, Counter, Total bytes received
downstream_cx_total, Counter, Total connections
downstream_cx_tx_bytes_buffered, Gauge, Total sent bytes currently buffered
downstream_cx_tx_bytes_total, Counter, Total bytes sent
downstream_cx_drain_close, Counter, Number of connections closed due to draining
downstream_rq_active, Gauge, Total active requests
downstream_rq_total, Counter, Total requests
Splitter statistics
-------------------
The Redis filter will gather statistics for the command splitter in the
*redis.<stat_prefix>.splitter.* namespace with the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
invalid_request, Counter, "Number of requests with an incorrect number of arguments"
unsupported_command, Counter, "Number of commands issued which are not recognized by the
command splitter"
Per command statistics
----------------------
The Redis filter will gather statistics for commands in the
*redis.<stat_prefix>.command.<command>.* namespace.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
total, Counter, Number of commands
.. _config_network_filters_redis_proxy_per_command_stats:
Runtime
-------
The Redis proxy filter supports the following runtime settings:
redis.drain_close_enabled
% of connections that will be drain closed if the server is draining and would otherwise
attempt a drain close. Defaults to 100.
@ -0,0 +1,146 @@
.. _config_network_filters_tcp_proxy:
TCP proxy
=========
TCP proxy :ref:`architecture overview <arch_overview_tcp_proxy>`.
.. code-block:: json
{
"name": "tcp_proxy",
"config": {
"stat_prefix": "...",
"route_config": "{...}"
"access_log": "[]"
}
}
:ref:`route_config <config_network_filters_tcp_proxy_route_config>`
*(required, object)* The route table for the filter.
All filter instances must have a route table, even if it is empty.
stat_prefix
*(required, string)* The prefix to use when emitting :ref:`statistics
<config_network_filters_tcp_proxy_stats>`.
:ref:`access_log <config_access_log>`
*(optional, array)* Configuration for :ref:`access logs <arch_overview_access_logs>`
emitted by this tcp_proxy.
.. _config_network_filters_tcp_proxy_route_config:
Route Configuration
-------------------
.. code-block:: json
{
"routes": []
}
:ref:`routes <config_network_filters_tcp_proxy_route>`
*(required, array)* An array of route entries that make up the route table.
.. _config_network_filters_tcp_proxy_route:
Route
-----
A TCP proxy route consists of a set of optional L4 criteria and the name of a
:ref:`cluster <config_cluster_manager_cluster>`. If a downstream connection matches
all the specified criteria, the cluster in the route is used for the corresponding upstream
connection. Routes are tried in the order specified until a match is found. If no match is
found, the connection is closed. A route with no criteria is valid and always produces a match.
.. code-block:: json
{
"cluster": "...",
"destination_ip_list": [],
"destination_ports": "...",
"source_ip_list": [],
"source_ports": "..."
}
cluster
*(required, string)* The :ref:`cluster <config_cluster_manager_cluster>` to connect
to when the downstream network connection matches the specified criteria.
destination_ip_list
*(optional, array)* An optional list of IP address subnets in the form "ip_address/xx".
The criteria is satisfied if the destination IP address of the downstream connection is
contained in at least one of the specified subnets.
If the parameter is not specified or the list is empty, the destination IP address is ignored.
The destination IP address of the downstream connection might be different from the addresses
on which the proxy is listening if the connection has been redirected. Example:
.. code-block:: json
[
"192.168.3.0/24",
"50.1.2.3/32",
"10.15.0.0/16",
"2001:abcd::/64"
]
destination_ports
*(optional, string)* An optional string containing a comma-separated list of port numbers or
ranges. The criteria is satisfied if the destination port of the downstream connection
is contained in at least one of the specified ranges.
If the parameter is not specified, the destination port is ignored. The destination port of the
downstream connection might be different from the port on which the proxy is listening if the
connection has been redirected. Example:
.. code-block:: json
{
"destination_ports": "1-1024,2048-4096,12345"
}
source_ip_list
*(optional, array)* An optional list of IP address subnets in the form "ip_address/xx".
The criteria is satisfied if the source IP address of the downstream connection is contained
in at least one of the specified subnets. If the parameter is not specified or the list is empty,
the source IP address is ignored. Example:
.. code-block:: json
[
"192.168.3.0/24",
"50.1.2.3/32",
"10.15.0.0/16",
"2001:abcd::/64"
]
source_ports
*(optional, string)* An optional string containing a comma-separated list of port numbers or
ranges. The criteria is satisfied if the source port of the downstream connection is contained
in at least one of the specified ranges. If the parameter is not specified, the source port is
ignored. Example:
.. code-block:: json
{
"source_ports": "1-1024,2048-4096,12345"
}
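Putting the criteria together, a hypothetical route table that sends connections destined to port
443 from an internal subnet to one cluster and everything else to a catch-all cluster (the cluster
names and values are illustrative):
.. code-block:: json

  {
    "routes": [
      {
        "cluster": "internal_service",
        "destination_ports": "443",
        "source_ip_list": ["10.15.0.0/16"]
      },
      {
        "cluster": "default_service"
      }
    ]
  }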
.. _config_network_filters_tcp_proxy_stats:
Statistics
----------
The TCP proxy filter emits both its own downstream statistics as well as many of the :ref:`cluster
upstream statistics <config_cluster_manager_cluster_stats>` where applicable. The downstream
statistics are rooted at *tcp.<stat_prefix>.* with the following statistics:
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
downstream_cx_total, Counter, Total number of connections handled by the filter.
downstream_cx_no_route, Counter, Number of connections for which no matching route was found.
downstream_cx_tx_bytes_total, Counter, Total bytes written to the downstream connection.
downstream_cx_tx_bytes_buffered, Gauge, Total bytes currently buffered to the downstream connection.
downstream_flow_control_paused_reading_total, Counter, Total number of times flow control paused reading from downstream.
downstream_flow_control_resumed_reading_total, Counter, Total number of times flow control resumed reading from downstream.
@ -0,0 +1,27 @@
.. _config_admin:
Administration interface
========================
Administration interface :ref:`operations documentation <operations_admin_interface>`.
.. code-block:: json
{
"access_log_path": "...",
"profile_path": "...",
"address": "..."
}
access_log_path
*(required, string)* The path to write the access log for the administration server. If no
access log is desired, specify '/dev/null'.
profile_path
*(optional, string)* The cpu profiler output path for the administration server. If no profile
path is specified, the default is '/var/log/envoy/envoy.prof'.
address
*(required, string)* The TCP address that the administration server will listen on, e.g.,
"tcp://127.0.0.1:1234". Note, "tcp://0.0.0.0:1234" is the wild card match for any IPv4 address
with port 1234.
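For example, a minimal administration interface configuration (the listen port is illustrative):
.. code-block:: json

  {
    "access_log_path": "/dev/null",
    "address": "tcp://127.0.0.1:9901"
  }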
@ -0,0 +1,120 @@
.. _config_overview:
Overview
========
The Envoy configuration format is written in JSON and is validated against a JSON schema. The
schema can be found in :repo:`source/common/json/config_schemas.cc`. The main configuration for the
server is contained within the listeners and cluster manager sections. The other top level elements
specify miscellaneous configuration.
YAML support is also provided as a syntactic convenience for hand-written configurations. Envoy will
internally convert YAML to JSON if a file path ends with .yaml. In the rest of the configuration
documentation, we refer exclusively to JSON. Envoy expects unambiguous YAML scalars, so if a cluster
name (which should be a string) is called *true*, it should be written in the configuration YAML as
*"true"*. The same applies to integer and floating point values (e.g. *1* vs. *1.0* vs. *"1.0"*).
.. code-block:: json
{
"listeners": [],
"lds": "{...}",
"admin": "{...}",
"cluster_manager": "{...}",
"flags_path": "...",
"statsd_udp_ip_address": "...",
"statsd_tcp_cluster_name": "...",
"stats_flush_interval_ms": "...",
"watchdog_miss_timeout_ms": "...",
"watchdog_megamiss_timeout_ms": "...",
"watchdog_kill_timeout_ms": "...",
"watchdog_multikill_timeout_ms": "...",
"tracing": "{...}",
"rate_limit_service": "{...}",
"runtime": "{...}",
}
:ref:`listeners <config_listeners>`
*(required, array)* An array of :ref:`listeners <arch_overview_listeners>` that will be
instantiated by the server. A single Envoy process can contain any number of listeners.
.. _config_overview_lds:
:ref:`lds <config_listeners_lds>`
*(optional, object)* Configuration for the Listener Discovery Service (LDS). If not specified
only static listeners are loaded.
:ref:`admin <config_admin>`
*(required, object)* Configuration for the :ref:`local administration HTTP server
<operations_admin_interface>`.
:ref:`cluster_manager <config_cluster_manager>`
*(required, object)* Configuration for the :ref:`cluster manager <arch_overview_cluster_manager>`
which owns all upstream clusters within the server.
.. _config_overview_flags_path:
flags_path
*(optional, string)* The file system path to search for :ref:`startup flag files
<operations_file_system_flags>`.
.. _config_overview_statsd_udp_ip_address:
statsd_udp_ip_address
*(optional, string)* The UDP address of a running statsd compliant listener. If specified,
:ref:`statistics <arch_overview_statistics>` will be flushed to this address. IPv4 addresses should
have format host:port (ex: 127.0.0.1:855). IPv6 addresses should have URL format [host]:port
(ex: [::1]:855).
statsd_tcp_cluster_name
*(optional, string)* The name of a cluster manager cluster that is running a TCP statsd compliant
listener. If specified, Envoy will connect to this cluster to flush :ref:`statistics
<arch_overview_statistics>`.
.. _config_overview_stats_flush_interval_ms:
stats_flush_interval_ms
*(optional, integer)* The time in milliseconds between flushes to configured stats sinks. For
performance reasons Envoy latches counters and only flushes counters and gauges at a periodic
interval. If not specified the default is 5000ms (5 seconds).
watchdog_miss_timeout_ms
*(optional, integer)* The time in milliseconds after which Envoy counts a nonresponsive thread in the
"server.watchdog_miss" statistic. If not specified the default is 200ms.
watchdog_megamiss_timeout_ms
*(optional, integer)* The time in milliseconds after which Envoy counts a nonresponsive thread in the
"server.watchdog_mega_miss" statistic. If not specified the default is 1000ms.
watchdog_kill_timeout_ms
*(optional, integer)* If a watched thread has been nonresponsive for this many milliseconds assume
a programming error and kill the entire Envoy process. Set to 0 to disable kill behavior. If not
specified the default is 0 (disabled).
watchdog_multikill_timeout_ms
*(optional, integer)* If at least two watched threads have been nonresponsive for at least this many
milliseconds assume a true deadlock and kill the entire Envoy process. Set to 0 to disable this
behavior. If not specified the default is 0 (disabled).
:ref:`tracing <config_tracing>`
*(optional, object)* Configuration for an external :ref:`tracing <arch_overview_tracing>`
provider. If not specified, no tracing will be performed.
:ref:`rate_limit_service <config_rate_limit_service>`
*(optional, object)* Configuration for an external :ref:`rate limit service
<arch_overview_rate_limit>` provider. If not specified, any calls to the rate limit service will
immediately return success.
:ref:`runtime <config_runtime>`
*(optional, object)* Configuration for the :ref:`runtime configuration <arch_overview_runtime>`
provider. If not specified, a "null" provider will be used which will result in all defaults being
used.
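
As a minimal illustrative sketch, a configuration containing only the required top level elements
might look like the following (listener and cluster contents are elided; they are covered in
their own sections):

.. code-block:: json

  {
    "listeners": [],
    "admin": {
      "access_log_path": "/dev/null",
      "address": "tcp://127.0.0.1:8001"
    },
    "cluster_manager": {
      "clusters": []
    }
  }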
.. toctree::
:hidden:
admin
tracing
rate_limit
runtime

@ -0,0 +1,37 @@
.. _config_rate_limit_service:
Rate limit service
==================
The :ref:`rate limit service <arch_overview_rate_limit>` configuration specifies the global rate
limit service Envoy should talk to when it needs to make global rate limit decisions. If no rate
limit service is configured, a "null" service will be used which will always return OK if called.
.. code-block:: json
{
"type": "grpc_service",
"config": {
"cluster_name": "..."
}
}
type
*(required, string)* Specifies the type of rate limit service to call. Currently the only
supported option is *grpc_service* which specifies Lyft's global rate limit service and
associated IDL.
config
*(required, object)* Specifies type specific configuration for the rate limit service.
cluster_name
*(required, string)* Specifies the cluster manager cluster name that hosts the rate limit
service. The client will connect to this cluster when it needs to make rate limit service
requests.
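
For example, a filled in configuration pointing at a hypothetical cluster named ``ratelimit``
would look like:

.. code-block:: json

  {
    "type": "grpc_service",
    "config": {
      "cluster_name": "ratelimit"
    }
  }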
gRPC service IDL
----------------
Envoy expects the rate limit service to support the gRPC IDL specified in
:repo:`/source/common/ratelimit/ratelimit.proto`. See the IDL documentation for more information
on how the API works. See Lyft's reference implementation `here <https://github.com/lyft/ratelimit>`_.

@ -0,0 +1,107 @@
.. _config_runtime:
Runtime
=======
The :ref:`runtime configuration <arch_overview_runtime>` specifies the location of the local file
system tree that contains re-loadable configuration elements. If runtime is not configured, a "null"
provider is used which has the effect of using all defaults built into the code.
.. code-block:: json
{
"symlink_root": "...",
"subdirectory": "...",
"override_subdirectory": "..."
}
symlink_root
*(required, string)* The implementation assumes that the file system tree is accessed via a
symbolic link. An atomic link swap is used when a new tree should be switched to. This
parameter specifies the path to the symbolic link. Envoy will watch the location for changes
and reload the file system tree when they happen.
subdirectory
*(required, string)* Specifies the subdirectory to load within the root directory. This is useful
if multiple systems share the same delivery mechanism. Envoy configuration elements can be
contained in a dedicated subdirectory.
.. _config_runtime_override_subdirectory:
override_subdirectory
*(optional, string)* Specifies an optional subdirectory to load within the root directory. If
specified and the directory exists, configuration values within this directory will override those
found in the primary subdirectory. This is useful when Envoy is deployed across many different
types of servers. Sometimes it is useful to have a per service cluster directory for runtime
configuration. See below for exactly how the override directory is used.
File system layout
------------------
Various sections of the configuration guide describe the runtime settings that are available.
For example, :ref:`here <config_cluster_manager_cluster_runtime>` are the runtime settings for
upstream clusters.
Assume that the folder ``/srv/runtime/v1`` points to the actual file system path where global
runtime configurations are stored. The following would be a typical configuration setting for
runtime:
* *symlink_root*: ``/srv/runtime/current``
* *subdirectory*: ``envoy``
* *override_subdirectory*: ``envoy_override``
Where ``/srv/runtime/current`` is a symbolic link to ``/srv/runtime/v1``.
Each '.' in a runtime key indicates a new directory in the hierarchy, rooted at *symlink_root* +
*subdirectory*. For example, the *health_check.min_interval* key would have the following full
file system path (using the symbolic link):
``/srv/runtime/current/envoy/health_check/min_interval``
The terminal portion of a path is the file. The contents of the file constitute the runtime value.
When reading numeric values from a file, spaces and new lines will be ignored.
The *override_subdirectory* is used along with the :option:`--service-cluster` CLI option. Assume
that :option:`--service-cluster` has been set to ``my-cluster``. Envoy will first look for the
*health_check.min_interval* key in the following full file system path:
``/srv/runtime/current/envoy_override/my-cluster/health_check/min_interval``
If found, the value will override any value found in the primary lookup path. This allows the user
to customize the runtime values for individual clusters on top of global defaults.
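
Putting the pieces together for the example above, with :option:`--service-cluster` set to
``my-cluster``, the lookup for *health_check.min_interval* consults the following paths (a sketch
of the layout; the override file need not exist)::

  /srv/runtime/current -> /srv/runtime/v1
  /srv/runtime/v1/envoy/health_check/min_interval
  /srv/runtime/v1/envoy_override/my-cluster/health_check/min_interval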
Comments
--------
Lines starting with ``#`` as the first character are treated as comments.
Comments can be used to provide context on an existing value. Comments are also useful in an
otherwise empty file to keep a placeholder for deployment in a time of need.
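
For example, a runtime file might pair a comment with its value (both lines are purely
illustrative)::

  # Temporarily lowered during an incident; revert when resolved.
  100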
Updating runtime values via symbolic link swap
----------------------------------------------
There are two steps to update any runtime value. First, create a hard copy of the entire runtime
tree and update the desired runtime values. Second, atomically swap the symbolic link root from the
old tree to the new runtime tree, using the equivalent of the following command:
.. code-block:: console
/srv/runtime:~$ ln -s /srv/runtime/v2 new && mv -Tf new current
It's beyond the scope of this document how the file system data is deployed, garbage collected, etc.
Statistics
----------
The file system runtime provider emits some statistics in the *runtime.* namespace.
.. csv-table::
:header: Name, Type, Description
:widths: 1, 1, 2
load_error, Counter, Total number of load attempts that resulted in an error
override_dir_not_exists, Counter, Total number of loads that did not use an override directory
override_dir_exists, Counter, Total number of loads that did use an override directory
load_success, Counter, Total number of load attempts that were successful
num_keys, Gauge, Number of keys currently loaded

@ -0,0 +1,69 @@
.. _config_tracing:
Tracing
=======
The :ref:`tracing <arch_overview_tracing>` configuration specifies global settings for the HTTP
tracer used by Envoy. The configuration is defined on the :ref:`server's top level configuration
<config_overview>`. Envoy may support other tracers in the future, but right now the HTTP tracer is
the only one supported.
.. code-block:: json
{
"http": {
"driver": "{...}"
}
}
http
*(optional, object)* Provides configuration for the HTTP tracer.
driver
*(optional, object)* Provides the driver that handles trace and span creation.
Currently `LightStep <http://lightstep.com/>`_ and `Zipkin
<http://zipkin.io>`_ drivers are supported.
LightStep driver
----------------
.. code-block:: json
{
"type": "lightstep",
"config": {
"access_token_file": "...",
"collector_cluster": "..."
}
}
access_token_file
*(required, string)* File containing the access token to the `LightStep <http://lightstep.com/>`_
API.
collector_cluster
*(required, string)* The cluster manager cluster that hosts the LightStep collectors.
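
A filled in driver block, embedded in the top level *tracing* element, might look like the
following (the token file path and cluster name are hypothetical):

.. code-block:: json

  {
    "http": {
      "driver": {
        "type": "lightstep",
        "config": {
          "access_token_file": "/etc/envoy/lightstep_access_token",
          "collector_cluster": "lightstep_saas"
        }
      }
    }
  }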
Zipkin driver
-------------
.. code-block:: json
{
"type": "zipkin",
"config": {
"collector_cluster": "...",
"collector_endpoint": "..."
}
}
collector_cluster
*(required, string)* The cluster manager cluster that hosts the Zipkin collectors. Note that the
Zipkin cluster must be defined under `clusters` in the cluster manager configuration section.
collector_endpoint
*(optional, string)* The API endpoint of the Zipkin service where the
spans will be sent. When using a standard Zipkin installation, the
API endpoint is typically `/api/v1/spans`, which is the default value.
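
For instance, a complete *tracing* block for a standard Zipkin installation might look like the
following (assuming a cluster named ``zipkin`` is defined under the cluster manager):

.. code-block:: json

  {
    "http": {
      "driver": {
        "type": "zipkin",
        "config": {
          "collector_cluster": "zipkin",
          "collector_endpoint": "/api/v1/spans"
        }
      }
    }
  }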

@ -0,0 +1,170 @@
.. _config_tools_router_check_tool:
Route table check tool
======================
**NOTE: The following configuration is for the route table check tool only and is not part of the Envoy binary.
The route table check tool is a standalone binary that can be used to verify Envoy's routing for a given configuration
file.**
The following specifies input to the route table check tool. The route table check tool checks if
the route returned by a :ref:`router <config_http_conn_man_route_table>` matches what is expected.
The tool can be used to check cluster name, virtual cluster name,
virtual host name, manual path rewrite, manual host rewrite, path redirect, and
header field matches. Extensions for other test cases can be added. Details about installing the tool
and sample tool input/output can be found at :ref:`installation <install_tools_route_table_check_tool>`.
The route table check tool config is composed of an array of json test objects. Each test object is composed of
three parts.
Test name
This field specifies the name of each test object.
Input values
The input value fields specify the parameters to be passed to the router. Example input fields include
the :authority, :path, and :method header fields. The :authority and :path fields specify the URL
sent to the router and are required. All other input fields are optional.
Validate
The validate fields specify the expected values and test cases to check. At least one test
case is required.
A simple tool configuration JSON has one test case and is written as follows. The test
expects a cluster name match of "instant-server"::
[
{
"test_name: "Cluster_name_test",
"input":
{
":authority":"api.lyft.com",
":path": "/api/locations"
}
"validate"
{
"cluster_name": "instant-server"
}
}
]
.. code-block:: json
[
{
"test_name": "...",
"input":
{
":authority": "...",
":path": "...",
":method": "...",
"internal" : "...",
"random_value" : "...",
"ssl" : "...",
"additional_headers": [
{
"field": "...",
"value": "..."
},
{
"..."
}
]
},
"validate": {
"cluster_name": "...",
"virtual_cluster_name": "...",
"virtual_host_name": "...",
"host_rewrite": "...",
"path_rewrite": "...",
"path_redirect": "...",
"header_fields" : [
{
"field": "...",
"value": "..."
},
{
"..."
}
]
}
},
{
"..."
}
]
test_name
*(required, string)* The name of a test object.
input
*(required, object)* Input values sent to the router that determine the returned route.
:authority
*(required, string)* The URL authority. This value along with the path parameter define
the URL to be matched. An example authority value is "api.lyft.com".
:path
*(required, string)* The URL path. An example path value is "/foo".
:method
*(optional, string)* The request method. If not specified, the default method is GET. The options
are GET, PUT, or POST.
internal
*(optional, boolean)* A flag that determines whether to set x-envoy-internal to "true".
If not specified, or if internal is equal to false, x-envoy-internal is not set.
random_value
*(optional, integer)* An integer used to identify the target for weighted cluster selection.
The default value of random_value is 0.
ssl
*(optional, boolean)* A flag that determines whether to set x-forwarded-proto to https or http.
By setting x-forwarded-proto to a given protocol, the tool is able to simulate the behavior of
a client issuing a request via http or https. By default ssl is false which corresponds to
x-forwarded-proto set to http.
additional_headers
*(optional, array)* Additional headers to be added as input for route determination. The ":authority",
":path", ":method", "x-forwarded-proto", and "x-envoy-internal" fields are specified by the other config
options and should not be set here.
field
*(required, string)* The name of the header field to add.
value
*(required, string)* The value of the header field to add.
validate
*(required, object)* The validate object specifies the returned route parameters to match. At least one
test parameter must be specified. Use "" (empty string) to indicate that no return value is expected.
For example, to test that no cluster match is expected use {"cluster_name": ""}.
cluster_name
*(optional, string)* Match the cluster name.
virtual_cluster_name
*(optional, string)* Match the virtual cluster name.
virtual_host_name
*(optional, string)* Match the virtual host name.
host_rewrite
*(optional, string)* Match the host header field after rewrite.
path_rewrite
*(optional, string)* Match the path header field after rewrite.
path_redirect
*(optional, string)* Match the returned redirect path.
header_fields
*(optional, array)* Match the listed header fields. Example header fields include the ":path", "cookie",
and "date" fields. The header fields are checked after all other test cases. Thus, the header fields checked
will be those of the redirected or rewritten routes when applicable.
field
*(required, string)* The name of the header field to match.
value
*(required, string)* The value of the header field to match.
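
As a fuller hedged example combining several of the fields above, the following test sends a GET
request with one additional header and validates both a host rewrite and a header field (all
names and values are illustrative):

.. code-block:: json

  [
    {
      "test_name": "Host_rewrite_test",
      "input":
      {
        ":authority": "api.lyft.com",
        ":path": "/users/123",
        ":method": "GET",
        "additional_headers": [
          {
            "field": "cookie",
            "value": "user=abc"
          }
        ]
      },
      "validate": {
        "host_rewrite": "users-service",
        "header_fields": [
          {
            "field": ":path",
            "value": "/users/123"
          }
        ]
      }
    }
  ]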

@ -0,0 +1,10 @@
.. _extending:
Extending Envoy for custom use cases
====================================
The Envoy architecture makes it fairly easy to extend via both :ref:`network filters
<arch_overview_network_filters>` and :ref:`HTTP filters <arch_overview_http_filters>`.
An example of how to add a network filter and structure the repository and build dependencies can
be found at `envoy-filter-example <https://github.com/envoyproxy/envoy-filter-example>`_.


@ -0,0 +1,13 @@
Envoy documentation
=================================
.. toctree::
:maxdepth: 2
about_docs
intro/intro
install/install
configuration/configuration
operations/operations
extending/extending
api/api

@ -0,0 +1,8 @@
Building
========
The Envoy build system uses Bazel. In order to ease initial building and for a quick start, we
provide an Ubuntu 16 based docker container that has everything needed inside of it to build
and *statically link* Envoy, see :repo:`ci/README.md`.
In order to build manually, follow the instructions at :repo:`bazel/README.md`.

@ -0,0 +1,14 @@
.. _install:
Building and installation
=========================
.. toctree::
:maxdepth: 2
requirements
building
installation
ref_configs
sandboxes/sandboxes.rst
tools/tools

@ -0,0 +1,6 @@
Installation
============
Currently we do not provide any pre-compiled binaries or startup scripts. Typically Envoy will be
used with the :ref:`hot restart wrapper <operations_hot_restarter>` for launching. In the future we
may provide OS specific deployment packages.

@ -0,0 +1,70 @@
.. _install_ref_configs:
Reference configurations
========================
The source distribution includes a set of example configuration templates for each of the three
major Envoy deployment types:
* :ref:`Service to service <deployment_type_service_to_service>`
* :ref:`Front proxy <deployment_type_front_proxy>`
* :ref:`Double proxy <deployment_type_double_proxy>`
The goal of this set of example configurations is to demonstrate the full capabilities of Envoy in
a complex deployment. Not all features are applicable to all use cases. For full documentation
see the :ref:`configuration reference <config>`.
Configuration generator
-----------------------
Envoy configurations can become relatively complicated. At Lyft we use `jinja
<http://jinja.pocoo.org/>`_ templating to make the configurations easier to create and manage. The
source distribution includes a version of the configuration generator that loosely approximates what
we use at Lyft. We have also included an example configuration template for each of the above
three scenarios.
* Generator script: :repo:`configs/configgen.py`
* Service to service template: :repo:`configs/envoy_service_to_service.template.json`
* Front proxy template: :repo:`configs/envoy_front_proxy.template.json`
* Double proxy template: :repo:`configs/envoy_double_proxy.template.json`
To generate the example configurations run the following from the root of the repo:
.. code-block:: console
mkdir -p generated/configs
bazel build //configs:example_configs
tar xvf $PWD/bazel-genfiles/configs/example_configs.tar -C generated/configs
The previous command will produce three fully expanded configurations using some variables
defined inside of `configgen.py`. See the comments inside of `configgen.py` for detailed
information on how the different expansions work.
A few notes about the example configurations:
* An instance of :ref:`service discovery service <arch_overview_service_discovery_sds>` is assumed
to be running at `discovery.yourcompany.net`.
* DNS for `yourcompany.net` is assumed to be setup for various things. Search the configuration
templates for different instances of this.
* Tracing is configured for `LightStep <http://lightstep.com/>`_. To
disable this or enable `Zipkin <http://zipkin.io>`_ tracing, delete or
change the :ref:`tracing configuration <config_tracing>` accordingly.
* The configuration demonstrates the use of a :ref:`global rate limiting service
<arch_overview_rate_limit>`. To disable this delete the :ref:`rate limit configuration
<config_rate_limit_service>`.
* :ref:`Route discovery service <config_http_conn_man_rds>` is configured for the service to service
reference configuration and it is assumed to be running at `rds.yourcompany.net`.
* :ref:`Cluster discovery service <config_cluster_manager_cds>` is configured for the service to
service reference configuration and it is assumed to be running at `cds.yourcompany.net`.
Smoketest configuration
-----------------------
A very minimal Envoy configuration that can be used to validate basic plain HTTP proxying is
available in :repo:`configs/google_com_proxy.json`. This is not intended to represent a realistic
Envoy deployment. To smoketest Envoy with this, run:
.. code-block:: console
build/source/exe/envoy -c configs/google_com_proxy.json -l debug
curl -v localhost:10000

@ -0,0 +1,37 @@
.. _install_requirements:
Requirements
============
Envoy was initially developed and deployed on Ubuntu 14 LTS. It should work on any reasonably
recent Linux including Ubuntu 16 LTS.
Envoy has the following requirements:
* GCC 5+ (for C++14 support)
* `backward <https://github.com/bombela/backward-cpp>`_ (last tested with 1.3)
* `Bazel <https://github.com/bazelbuild/bazel>`_ (last tested with 0.5.3)
* `BoringSSL <https://boringssl.googlesource.com/boringssl>`_ (last tested with sha ae9f0616c58bddcbe7a6d80d29d796bee9aaff2e)
* `c-ares <https://github.com/c-ares/c-ares>`_ (last tested with 1.13.0)
* `spdlog <https://github.com/gabime/spdlog>`_ (last tested with 0.14.0)
* `fmtlib <https://github.com/fmtlib/fmt/>`_ (last tested with 4.0.0)
* `gperftools <https://github.com/gperftools/gperftools>`_ (last tested with 2.6.1)
* `http-parser <https://github.com/nodejs/http-parser>`_ (last tested with 2.7.1)
* `libevent <http://libevent.org/>`_ (last tested with 2.1.8)
* `lightstep-tracer-cpp <https://github.com/lightstep/lightstep-tracer-cpp/>`_ (last tested with 0.36)
* `luajit <http://luajit.org/>`_ (last tested with 2.0.5)
* `nghttp2 <https://github.com/nghttp2/nghttp2>`_ (last tested with 1.25.0)
* `protobuf <https://github.com/google/protobuf>`_ (last tested with 3.4.0)
* `tclap <http://tclap.sourceforge.net/>`_ (last tested with 1.2.1)
* `rapidjson <https://github.com/miloyip/rapidjson/>`_ (last tested with 1.1.0)
* `xxHash <https://github.com/Cyan4973/xxHash>`_ (last tested with 0.6.3)
* `yaml-cpp <https://github.com/jbeder/yaml-cpp>`_ (last tested with sha e2818c423e5058a02f46ce2e519a82742a8ccac9)
* `zlib <https://github.com/madler/zlib>`_ (last tested with 1.2.11)
In order to compile and run the tests the following is required:
* `googletest <https://github.com/google/googletest>`_ (last tested with sha 43863938377a9ea1399c0596269e0890b5c5515a)
In order to run code coverage the following is required:
* `gcovr <http://gcovr.com/>`_ (last tested with 3.3)

@ -0,0 +1,228 @@
.. _install_sandboxes_front_proxy:
Front Proxy
===========
To get a flavor of what Envoy has to offer as a front proxy, we are releasing a
`docker compose <https://docs.docker.com/compose/>`_ sandbox that deploys a front
envoy and a couple of services (simple flask apps) colocated with a running
service envoy. The three containers will be deployed inside a virtual network
called ``envoymesh``.
Below you can see a graphic showing the docker compose deployment:
.. image:: /_static/docker_compose_v0.1.svg
:width: 100%
All incoming requests are routed via the front envoy, which is acting as a reverse proxy sitting on
the edge of the ``envoymesh`` network. Port ``80`` is mapped to port ``8000`` by docker compose
(see :repo:`/examples/front-proxy/docker-compose.yml`). Moreover, notice
that all traffic routed by the front envoy to the service containers is actually routed to the
service envoys (routes setup in :repo:`/examples/front-proxy/front-envoy.json`). In turn the service
envoys route the request to the flask app via the loopback address (routes setup in
:repo:`/examples/front-proxy/service-envoy.json`). This setup
illustrates the advantage of running service envoys collocated with your services: all requests are
handled by the service envoy, and efficiently routed to your services.
Running the Sandbox
~~~~~~~~~~~~~~~~~~~
The following documentation runs through the setup of an Envoy cluster organized
as described in the image above.
**Step 1: Install Docker**
Ensure that you have recent versions of ``docker``, ``docker-compose`` and
``docker-machine`` installed.
A simple way to achieve this is via the `Docker Toolbox <https://www.docker.com/products/docker-toolbox>`_.
**Step 2: Docker Machine setup**
First let's create a new machine which will hold the containers::
$ docker-machine create --driver virtualbox default
$ eval $(docker-machine env default)
**Step 3: Clone the Envoy repo and start all of our containers**
If you have not cloned the envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy``
or ``git clone https://github.com/envoyproxy/envoy.git``::
$ pwd
envoy/examples/front-proxy
$ docker-compose up --build -d
$ docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------------------------------------
example_service1_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp
example_service2_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp
example_front-envoy_1 /bin/sh -c /usr/local/bin/ ... Up 0.0.0.0:8000->80/tcp, 0.0.0.0:8001->8001/tcp
**Step 4: Test Envoy's routing capabilities**
You can now send a request to both services via the front-envoy.
For service1::
$ curl -v $(docker-machine ip default):8000/service/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
> GET /service/1 HTTP/1.1
> Host: 192.168.99.100:8000
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 89
< x-envoy-upstream-service-time: 1
< server: envoy
< date: Fri, 26 Aug 2016 19:39:19 GMT
< x-envoy-protocol-version: HTTP/1.1
<
Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
* Connection #0 to host 192.168.99.100 left intact
For service2::
$ curl -v $(docker-machine ip default):8000/service/2
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
> GET /service/2 HTTP/1.1
> Host: 192.168.99.100:8000
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 89
< x-envoy-upstream-service-time: 2
< server: envoy
< date: Fri, 26 Aug 2016 19:39:23 GMT
< x-envoy-protocol-version: HTTP/1.1
<
Hello from behind Envoy (service 2)! hostname: 92f4a3737bbc resolvedhostname: 172.19.0.2
* Connection #0 to host 192.168.99.100 left intact
Notice that each request, while sent to the front envoy, was correctly routed
to the respective application.
**Step 5: Test Envoy's load balancing capabilities**
Now let's scale up our service1 nodes to demonstrate the clustering abilities
of Envoy::
$ docker-compose scale service1=3
Creating and starting example_service1_2 ... done
Creating and starting example_service1_3 ... done
Now if we send a request to service1 multiple times, the front envoy will load balance the
requests by doing a round robin of the three service1 machines::
$ curl -v $(docker-machine ip default):8000/service/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
> GET /service/1 HTTP/1.1
> Host: 192.168.99.100:8000
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 89
< x-envoy-upstream-service-time: 1
< server: envoy
< date: Fri, 26 Aug 2016 19:40:21 GMT
< x-envoy-protocol-version: HTTP/1.1
<
Hello from behind Envoy (service 1)! hostname: 85ac151715c6 resolvedhostname: 172.19.0.3
* Connection #0 to host 192.168.99.100 left intact
$ curl -v $(docker-machine ip default):8000/service/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
> GET /service/1 HTTP/1.1
> Host: 192.168.99.100:8000
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 89
< x-envoy-upstream-service-time: 1
< server: envoy
< date: Fri, 26 Aug 2016 19:40:22 GMT
< x-envoy-protocol-version: HTTP/1.1
<
Hello from behind Envoy (service 1)! hostname: 20da22cfc955 resolvedhostname: 172.19.0.5
* Connection #0 to host 192.168.99.100 left intact
$ curl -v $(docker-machine ip default):8000/service/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
> GET /service/1 HTTP/1.1
> Host: 192.168.99.100:8000
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 89
< x-envoy-upstream-service-time: 1
< server: envoy
< date: Fri, 26 Aug 2016 19:40:24 GMT
< x-envoy-protocol-version: HTTP/1.1
<
Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
* Connection #0 to host 192.168.99.100 left intact
**Step 6: Enter containers and curl services**
In addition to using ``curl`` from your host machine, you can also enter the
containers themselves and ``curl`` from inside them. To enter a container you
can use ``docker-compose exec <container_name> /bin/bash``. For example we can
enter the ``front-envoy`` container, and ``curl`` for services locally::
$ docker-compose exec front-envoy /bin/bash
root@81288499f9d7:/# curl localhost:80/service/1
Hello from behind Envoy (service 1)! hostname: 85ac151715c6 resolvedhostname: 172.19.0.3
root@81288499f9d7:/# curl localhost:80/service/1
Hello from behind Envoy (service 1)! hostname: 20da22cfc955 resolvedhostname: 172.19.0.5
root@81288499f9d7:/# curl localhost:80/service/1
Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
root@81288499f9d7:/# curl localhost:80/service/2
Hello from behind Envoy (service 2)! hostname: 92f4a3737bbc resolvedhostname: 172.19.0.2
**Step 7: Enter containers and curl the admin interface**
When Envoy runs it also attaches an ``admin`` interface to your desired port. In the example
configs the admin is bound to port ``8001``. We can ``curl`` it to gain useful information.
For example you can ``curl`` ``/server_info`` to get information about the
Envoy version you are running. Additionally you can ``curl`` ``/stats`` to get
statistics. For example inside ``front-envoy`` we can get::
$ docker-compose exec front-envoy /bin/bash
root@e654c2c83277:/# curl localhost:8001/server_info
envoy 10e00b/RELEASE live 142 142 0
root@e654c2c83277:/# curl localhost:8001/stats
cluster.service1.external.upstream_rq_200: 7
...
cluster.service1.membership_change: 2
cluster.service1.membership_total: 3
...
cluster.service1.upstream_cx_http2_total: 3
...
cluster.service1.upstream_rq_total: 7
...
cluster.service2.external.upstream_rq_200: 2
...
cluster.service2.membership_change: 1
cluster.service2.membership_total: 1
...
cluster.service2.upstream_cx_http2_total: 1
...
cluster.service2.upstream_rq_total: 2
...
Notice that we can get the number of members of upstream clusters, the number of requests
fulfilled by them, information about HTTP ingress, and a plethora of other useful
stats.

@ -0,0 +1,68 @@
.. _install_sandboxes_grpc_bridge:
gRPC Bridge
===========
Envoy gRPC
~~~~~~~~~~
The gRPC bridge sandbox is an example usage of Envoy's
:ref:`gRPC bridge filter <config_http_filters_grpc_bridge>`.
Included in the sandbox is a gRPC in-memory Key/Value store with a Python HTTP
client. The Python client makes HTTP/1 requests through the Envoy sidecar
process which are upgraded into HTTP/2 gRPC requests. Response trailers are then
buffered and sent back to the client as an HTTP/1 header payload.
Another Envoy feature demonstrated in this example is Envoy's ability to do authority-based
routing via its route configuration.
Building the Go service
~~~~~~~~~~~~~~~~~~~~~~~
To build the Go gRPC service run::
$ pwd
envoy/examples/grpc-bridge
$ script/bootstrap
$ script/build
Note: ``build`` requires that your Envoy codebase (or a working copy thereof) is in ``$GOPATH/src/github.com/envoyproxy/envoy``.
Docker compose
~~~~~~~~~~~~~~
To run the docker compose file, and set up both the Python and the gRPC containers
run::
$ pwd
envoy/examples/grpc-bridge
$ docker-compose up --build
Sending requests to the Key/Value store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use the Python service and send gRPC requests::
$ pwd
envoy/examples/grpc-bridge
# set a key
$ docker-compose exec python /client/client.py set foo bar
setf foo to bar
# get a key
$ docker-compose exec python /client/client.py get foo
bar
# modify an existing key
$ docker-compose exec python /client/client.py set foo baz
setf foo to baz
# get the modified key
$ docker-compose exec python /client/client.py get foo
baz
In the running docker-compose container, you should see the gRPC service printing a record of its activity::
grpc_1 | 2017/05/30 12:05:09 set: foo = bar
grpc_1 | 2017/05/30 12:05:12 get: foo
grpc_1 | 2017/05/30 12:05:18 set: foo = baz

@ -0,0 +1,81 @@
.. _install_sandboxes_jaeger_tracing:
Jaeger Tracing
==============
The Jaeger tracing sandbox demonstrates Envoy's :ref:`request tracing <arch_overview_tracing>`
capabilities using `Jaeger <https://uber.github.io/jaeger/>`_ as the tracing provider. This sandbox
is very similar to the front proxy architecture described above, with one difference:
service1 makes an API call to service2 before returning a response.
The three containers will be deployed inside a virtual network called ``envoymesh``.
All incoming requests are routed via the front envoy, which is acting as a reverse proxy
sitting on the edge of the ``envoymesh`` network. Port ``80`` is mapped to port ``8000``
by docker compose (see :repo:`/examples/jaeger-tracing/docker-compose.yml`). Notice that
all envoys are configured to collect request traces (e.g., http_connection_manager/config/tracing setup in
:repo:`/examples/jaeger-tracing/front-envoy-jaeger.json`) and setup to propagate the spans generated
by the Jaeger tracer to a Jaeger cluster (trace driver setup
in :repo:`/examples/jaeger-tracing/front-envoy-jaeger.json`).
Before routing a request to the appropriate service envoy or the application, Envoy will take
care of generating the appropriate spans for tracing (parent/child context spans).
At a high-level, each span records the latency of upstream API calls as well as information
needed to correlate the span with other related spans (e.g., the trace ID).
One of the most important benefits of tracing from Envoy is that it will take care of
propagating the traces to the Jaeger service cluster. However, in order to fully take advantage
of tracing, the application has to propagate trace headers that Envoy generates, while making
calls to other services. In the sandbox we have provided, the simple flask app
(see trace function in :repo:`/examples/front-proxy/service.py`) acting as service1 propagates
the trace headers while making an outbound call to service2.
Running the Sandbox
~~~~~~~~~~~~~~~~~~~
The following documentation runs through the setup of an Envoy cluster organized
as described above.
**Step 1: Build the sandbox**
To build this sandbox example, and start the example apps run the following commands::
$ pwd
envoy/examples/jaeger-tracing
$ docker-compose up --build -d
$ docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------------------------------------
jaegertracing_service1_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp
jaegertracing_service2_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp
jaegertracing_front-envoy_1 /bin/sh -c /usr/local/bin/ ... Up 0.0.0.0:8000->80/tcp, 0.0.0.0:8001->8001/tcp
**Step 2: Generate some load**
You can now send a request to service1 via the front-envoy as follows::
$ curl -v $(docker-machine ip default):8000/trace/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
> GET /trace/1 HTTP/1.1
> Host: 192.168.99.100:8000
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 89
< x-envoy-upstream-service-time: 1
< server: envoy
< date: Fri, 26 Aug 2016 19:39:19 GMT
< x-envoy-protocol-version: HTTP/1.1
<
Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
* Connection #0 to host 192.168.99.100 left intact
**Step 3: View the traces in Jaeger UI**
Point your browser to http://localhost:16686 . You should see the Jaeger dashboard.
Set the service to "front-proxy" and hit 'Find Traces'. You should see traces from the front-proxy.
Click on a trace to explore the path taken by the request from front-proxy to service1
to service2, as well as the latency incurred at each hop.

@ -0,0 +1,35 @@
.. _install_sandboxes_local_docker_build:
Building an Envoy Docker image
==============================
The following steps guide you through building your own Envoy binary, and
putting that in a clean Ubuntu container.
**Step 1: Build Envoy**
Using ``envoyproxy/envoy-build`` you will compile Envoy.
This image has all software needed to build Envoy. From your Envoy directory::
$ pwd
src/envoy
$ ./ci/run_envoy_docker.sh './ci/do_ci.sh bazel.release'
That command will take some time to run because it is compiling an Envoy binary and running tests.
For more information on building and different build targets, please refer to :repo:`ci/README.md`.
**Step 2: Build image with only envoy binary**
In this step we'll build an image that only has the Envoy binary, and none
of the software used to build it::
$ pwd
src/envoy/
$ docker build -f ci/Dockerfile-envoy-image -t envoy .
Now you can use this ``envoy`` image to build any of the sandboxes if you change
the ``FROM`` line in any Dockerfile.
This will be particularly useful if you are interested in modifying Envoy, and testing
your changes.

@ -0,0 +1,17 @@
.. _install_sandboxes:
Sandboxes
=========
The docker-compose sandboxes give you different environments to test out Envoy's
features. As we gauge people's interest we will add more sandboxes demonstrating
different features. The following sandboxes are available:
.. toctree::
:maxdepth: 1
front_proxy
zipkin_tracing
jaeger_tracing
grpc_bridge
local_docker_build

@ -0,0 +1,82 @@
.. _install_sandboxes_zipkin_tracing:
Zipkin Tracing
==============
The Zipkin tracing sandbox demonstrates Envoy's :ref:`request tracing <arch_overview_tracing>`
capabilities using `Zipkin <http://zipkin.io/>`_ as the tracing provider. This sandbox
is very similar to the front proxy architecture described above, with one difference:
service1 makes an API call to service2 before returning a response.
The three containers will be deployed inside a virtual network called ``envoymesh``.
All incoming requests are routed via the front envoy, which is acting as a reverse proxy
sitting on the edge of the ``envoymesh`` network. Port ``80`` is mapped to port ``8000``
by docker compose (see :repo:`/examples/zipkin-tracing/docker-compose.yml`). Notice that
all envoys are configured to collect request traces (e.g., http_connection_manager/config/tracing setup in
:repo:`/examples/zipkin-tracing/front-envoy-zipkin.json`) and setup to propagate the spans generated
by the Zipkin tracer to a Zipkin cluster (trace driver setup
in :repo:`/examples/zipkin-tracing/front-envoy-zipkin.json`).
Before routing a request to the appropriate service envoy or the application, Envoy will take
care of generating the appropriate spans for tracing (parent/child/shared context spans).
At a high-level, each span records the latency of upstream API calls as well as information
needed to correlate the span with other related spans (e.g., the trace ID).
One of the most important benefits of tracing from Envoy is that it will take care of
propagating the traces to the Zipkin service cluster. However, in order to fully take advantage
of tracing, the application has to propagate trace headers that Envoy generates, while making
calls to other services. In the sandbox we have provided, the simple flask app
(see trace function in :repo:`/examples/front-proxy/service.py`) acting as service1 propagates
the trace headers while making an outbound call to service2.
Running the Sandbox
~~~~~~~~~~~~~~~~~~~
The following documentation runs through the setup of an Envoy cluster organized
as described above.
**Step 1: Build the sandbox**
To build this sandbox example, and start the example apps run the following commands::
$ pwd
envoy/examples/zipkin-tracing
$ docker-compose up --build -d
$ docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------------------------------------
zipkintracing_service1_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp
zipkintracing_service2_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp
zipkintracing_front-envoy_1 /bin/sh -c /usr/local/bin/ ... Up 0.0.0.0:8000->80/tcp, 0.0.0.0:8001->8001/tcp
**Step 2: Generate some load**
You can now send a request to service1 via the front-envoy as follows::
$ curl -v $(docker-machine ip default):8000/trace/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
> GET /trace/1 HTTP/1.1
> Host: 192.168.99.100:8000
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 89
< x-envoy-upstream-service-time: 1
< server: envoy
< date: Fri, 26 Aug 2016 19:39:19 GMT
< x-envoy-protocol-version: HTTP/1.1
<
Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
* Connection #0 to host 192.168.99.100 left intact
**Step 3: View the traces in Zipkin UI**
Point your browser to http://localhost:9411 . You should see the Zipkin dashboard.
Set the service to "front-proxy" and set the start time to a few minutes before
the start of the test (step 2) and hit enter. You should see traces from the front-proxy.
Click on a trace to explore the path taken by the request from front-proxy to service1
to service2, as well as the latency incurred at each hop.

@ -0,0 +1,30 @@
.. _install_tools_config_load_check_tool:
Config load check tool
======================
The config load check tool checks that a configuration file in JSON format is written using valid JSON
and conforms to the Envoy JSON schema. This tool leverages the configuration test in
``test/config_test/config_test.cc``. The test loads the JSON configuration file and runs server configuration
initialization with it.
Input
The tool expects a PATH to the root of a directory that holds JSON Envoy configuration files. The tool
will recursively go through the file system tree and run a configuration test for each file found. Keep in mind that
the tool will try to load all files found in the path.
Output
The tool will output Envoy logs as it initializes the server configuration with the config it is currently testing.
If there are configuration files where the JSON file is malformed or does not conform to the Envoy JSON schema, the
tool will exit with status EXIT_FAILURE. If the tool successfully loads all configuration files found it will
exit with status EXIT_SUCCESS.
Building
The tool can be built locally using Bazel. ::
bazel build //test/tools/config_load_check:config_load_check_tool
Running
The tool takes a path as described above. ::
bazel-bin/test/tools/config_load_check/config_load_check_tool PATH

@ -0,0 +1,65 @@
.. _install_tools_route_table_check_tool:
Route table check tool
=======================
The route table check tool checks whether the route parameters returned by a router match what is expected.
The tool can also be used to check whether a path redirect, path rewrite, or host rewrite
match what is expected.
Input
The tool expects two input JSON files:
1. A router config JSON file. The router config JSON file schema is found in
:ref:`config <config_http_conn_man_route_table>`.
2. A tool config JSON file. The tool config JSON file schema is found in
:ref:`config <config_tools_router_check_tool>`.
The tool config input file specifies URLs (composed of authorities and paths)
and expected route parameter values. Additional parameters such as additional headers are optional.
Output
The program exits with status EXIT_FAILURE if any test case does not match the expected route parameter
value.
The ``--details`` option prints out details for each test. The first line indicates the test name.
If a test fails, details of the failed test cases are printed. The first field is the expected
route parameter value. The second field is the actual route parameter value. The third field indicates
the parameter that is compared. In the following example, Test_2 and Test_5 failed while the other tests
passed. In the failed test cases, conflict details are printed. ::
Test_1
Test_2
default other virtual_host_name
Test_3
Test_4
Test_5
locations ats cluster_name
Test_6
Testing with valid :ref:`runtime values <config_http_conn_man_route_table_route>` is not currently supported;
this may be added in future work.
Building
The tool can be built locally using Bazel. ::
bazel build //test/tools/router_check:router_check_tool
Running
The tool takes two input json files and an optional command line parameter ``--details``. The
expected order of command line arguments is:
1. The router configuration json file.
2. The tool configuration json file.
3. The optional details flag. ::
bazel-bin/test/tools/router_check/router_check_tool router_config.json tool_config.json
bazel-bin/test/tools/router_check/router_check_tool router_config.json tool_config.json --details
Testing
A bash shell script test can be run with bazel. The test compares routes using different router and
tool configuration json files. The configuration json files can be found in
test/tools/router_check/test/config/... . ::
bazel test //test/tools/router_check/...

@ -0,0 +1,33 @@
.. _install_tools_schema_validator_check_tool:
Schema Validator check tool
===========================
The schema validator tool validates that the passed in JSON conforms to a schema in
the configuration. To validate the entire config, please refer to the
:ref:`config load check tool<install_tools_config_load_check_tool>`. Currently, only
:ref:`route config<config_http_conn_man_route_table>` schema validation is supported.
Input
The tool expects two inputs:
1. The schema type to check the passed in JSON against. The supported type is:
* `route` - for :ref:`route configuration<config_http_conn_man_route_table>` validation.
2. The path to the JSON.
Output
If the JSON conforms to the schema, the tool will exit with status EXIT_SUCCESS. If the JSON does
not conform to the schema, an error message is output detailing what doesn't conform to the
schema. The tool will exit with status EXIT_FAILURE.
Building
The tool can be built locally using Bazel. ::
bazel build //test/tools/schema_validator:schema_validator_tool
Running
The tool takes a path as described above. ::
bazel-bin/test/tools/schema_validator/schema_validator_tool --schema-type SCHEMA_TYPE --json-path PATH

@ -0,0 +1,9 @@
Tools
=====
.. toctree::
:maxdepth: 2
config_load_check_tool
route_table_check_tool
schema_validator_check_tool

@ -0,0 +1,19 @@
.. _arch_overview_access_logs:
Access logging
===================
The :ref:`HTTP connection manager <arch_overview_http_conn_man>` and
:ref:`tcp proxy <arch_overview_tcp_proxy>` support extensible access logging with the following
features:
* Any number of access logs per connection manager or tcp proxy.
* Asynchronous IO flushing architecture. Access logging will never block the main network processing
threads.
* Customizable access log formats using predefined fields as well as arbitrary HTTP request and
response headers.
* Customizable access log filters that allow different types of requests and responses to be written
to different access logs.
Access log :ref:`configuration <config_access_log>`.
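
As a sketch, an HTTP connection manager might attach a simple file based access log as follows
(the path is illustrative; see the configuration reference above for the full schema including
format strings and filters):

.. code-block:: json

  {
    "access_log": [
      {
        "path": "/var/log/envoy/egress_http.log"
      }
    ]
  }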

@ -0,0 +1,37 @@
Architecture overview
=====================
.. toctree::
:maxdepth: 2
terminology
threading_model
listeners
network_filters
http_connection_management
http_filters
http_routing
grpc
websocket
cluster_manager
service_discovery
health_checking
connection_pooling
load_balancing
outlier
circuit_breaking
global_rate_limiting
ssl
statistics
runtime
tracing
tcp_proxy
access_logging
mongo
dynamo
redis
hot_restart
dynamic_configuration
init
draining
scripting

@ -0,0 +1,38 @@
.. _arch_overview_circuit_break:
Circuit breaking
================
Circuit breaking is a critical component of distributed systems. It’s nearly always better to fail
quickly and apply back pressure downstream as soon as possible. One of the main benefits of an Envoy
mesh is that Envoy enforces circuit breaking limits at the network level as opposed to having to
configure and code each application independently. Envoy supports various types of fully distributed
(not coordinated) circuit breaking:
* **Cluster maximum connections**: The maximum number of connections that Envoy will establish to
all hosts in an upstream cluster. In practice this is only applicable to HTTP/1.1 clusters since
HTTP/2 uses a single connection to each host.
* **Cluster maximum pending requests**: The maximum number of requests that will be queued while
waiting for a ready connection pool connection. In practice this is only applicable to HTTP/1.1
clusters since HTTP/2 connection pools never queue requests. HTTP/2 requests are multiplexed
immediately. If this circuit breaker overflows the :ref:`upstream_rq_pending_overflow
<config_cluster_manager_cluster_stats>` counter for the cluster will increment.
* **Cluster maximum requests**: The maximum number of requests that can be outstanding to all hosts
in a cluster at any given time. In practice this is applicable to HTTP/2 clusters since HTTP/1.1
clusters are governed by the maximum connections circuit breaker. If this circuit breaker
overflows the :ref:`upstream_rq_pending_overflow <config_cluster_manager_cluster_stats>` counter
for the cluster will increment.
* **Cluster maximum active retries**: The maximum number of retries that can be outstanding to all
hosts in a cluster at any given time. In general we recommend aggressively circuit breaking
retries so that retries for sporadic failures are allowed but the overall retry volume cannot
explode and cause large scale cascading failure. If this circuit breaker overflows the
:ref:`upstream_rq_retry_overflow <config_cluster_manager_cluster_stats>` counter for the cluster
will increment.
Each circuit breaking limit is :ref:`configurable <config_cluster_manager_cluster_circuit_breakers>`
and tracked on a per upstream cluster and per priority basis. This allows different components of
the distributed system to be tuned independently and have different limits.
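
As a sketch, the four limits above map onto a cluster's :ref:`circuit breaker settings
<config_cluster_manager_cluster_circuit_breakers>` roughly as follows (illustrative values, not
recommendations):

.. code-block:: json

  {
    "default": {
      "max_connections": 1024,
      "max_pending_requests": 1024,
      "max_requests": 1024,
      "max_retries": 3
    }
  }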
Note that circuit breaking will cause the :ref:`x-envoy-overloaded
<config_http_filters_router_x-envoy-overloaded>` header to be set by the router filter in the
case of HTTP requests.

@ -0,0 +1,26 @@
.. _arch_overview_cluster_manager:
Cluster manager
===============
Envoy’s cluster manager manages all configured upstream clusters. Just as the Envoy configuration
can contain any number of listeners, the configuration can also contain any number of independently
configured upstream clusters.
Upstream clusters and hosts are abstracted from the network/HTTP filter stack given that upstream
clusters and hosts may be used for any number of different proxy tasks. The cluster manager exposes
APIs to the filter stack that allow filters to obtain a L3/L4 connection to an upstream cluster, or
a handle to an abstract HTTP connection pool to an upstream cluster (whether the upstream host
supports HTTP/1.1 or HTTP/2 is hidden). A filter stage determines whether it needs an L3/L4
connection or a new HTTP stream and the cluster manager handles all of the complexity of knowing
which hosts are available and healthy, load balancing, thread local storage of upstream connection
data (since most Envoy code is written to be single threaded), upstream connection type (TCP/IP,
UDS), upstream protocol where applicable (HTTP/1.1, HTTP/2), etc.
Clusters known to the cluster manager can be configured either statically, or fetched dynamically
via the cluster discovery service (CDS) API. Dynamic cluster fetches allow more configuration to
be stored in a central configuration server and thus require fewer Envoy restarts and less
configuration distribution.
* Cluster manager :ref:`configuration <config_cluster_manager>`.
* CDS :ref:`configuration <config_cluster_manager_cds>`.

@ -0,0 +1,37 @@
.. _arch_overview_conn_pool:
Connection pooling
==================
For HTTP traffic, Envoy supports abstract connection pools that are layered on top of the underlying
wire protocol (HTTP/1.1 or HTTP/2). The utilizing filter code does not need to be aware of whether
the underlying protocol supports true multiplexing or not. In practice the underlying
implementations have the following high level properties:
HTTP/1.1
--------
The HTTP/1.1 connection pool acquires connections as needed to an upstream host (up to the circuit
breaking limit). Requests are bound to connections as they become available, either because a
connection is done processing a previous request or because a new connection is ready to receive its
first request. The HTTP/1.1 connection pool does not make use of pipelining so that only a single
downstream request must be reset if the upstream connection is severed.
HTTP/2
------
The HTTP/2 connection pool acquires a single connection to an upstream host. All requests are
multiplexed over this connection. If a GOAWAY frame is received or if the connection reaches the
maximum stream limit, the connection pool will create a new connection and drain the existing one.
HTTP/2 is the preferred communication protocol as connections rarely if ever get severed.
.. _arch_overview_conn_pool_health_checking:
Health checking interactions
----------------------------
If Envoy is configured for either active or passive :ref:`health checking
<arch_overview_health_checking>`, all connection pool connections will be closed on behalf of a host
that transitions from a healthy state to an unhealthy state. If the host reenters the load
balancing rotation it will create fresh connections which will maximize the chance of working
around a bad flow (due to ECMP route or something else).

@ -0,0 +1,35 @@
.. _arch_overview_draining:
Draining
========
Draining is the process by which Envoy attempts to gracefully shed connections in response to
various events. Draining occurs at the following times:
* The server's health check has been manually failed via the :ref:`healthcheck/fail
<operations_admin_interface_healthcheck_fail>` admin endpoint. See the :ref:`health check filter
<arch_overview_health_checking_filter>` architecture overview for more information.
* The server is being :ref:`hot restarted <arch_overview_hot_restart>`.
* Individual listeners are being modified or removed via :ref:`LDS
<arch_overview_dynamic_config_lds>`.
Each :ref:`configured listener <arch_overview_listeners>` has a :ref:`drain_type
<config_listeners_drain_type>` setting which controls when draining takes place. The currently
supported values are:
default
Envoy will drain listeners in response to all three cases above (admin drain, hot restart, and
LDS update/remove). This is the default setting.
modify_only
Envoy will drain listeners only in response to the 2nd and 3rd cases above (hot restart and
LDS update/remove). This setting is useful if Envoy is hosting both ingress and egress listeners.
It may be desirable to set *modify_only* on egress listeners so they only drain during
modifications while relying on ingress listener draining to perform full server draining when
attempting to do a controlled shutdown.
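
As a sketch, an egress listener opting into this behavior carries the setting alongside its other
listener fields (address and filters are elided placeholders):

.. code-block:: json

  {
    "address": "tcp://127.0.0.1:9001",
    "filters": [],
    "drain_type": "modify_only"
  }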
Note that although draining is a per-listener concept, it must be supported at the network filter
level. Currently the only filters that support graceful draining are
:ref:`HTTP connection manager <config_http_conn_man>`,
:ref:`Redis <config_network_filters_redis_proxy>`, and
:ref:`Mongo <config_network_filters_mongo_proxy>`.

@ -0,0 +1,81 @@
.. _arch_overview_dynamic_config:
Dynamic configuration
=====================
Envoy is architected such that different types of configuration management approaches are possible.
The approach taken in a deployment will be dependent on the needs of the implementor. Simple
deployments are possible with a fully static configuration. More complicated deployments can
incrementally add more complex dynamic configuration, the downside being that the implementor must
provide one or more external REST based configuration provider APIs. This document gives an overview
of the options currently available.
* Top level configuration :ref:`reference <config>`.
* :ref:`Reference configurations <install_ref_configs>`.
Fully static
------------
In a fully static configuration, the implementor provides a set of :ref:`listeners
<config_listeners>` (and :ref:`filter chains <config_listener_filters>`), :ref:`clusters
<config_cluster_manager>`, and optionally :ref:`HTTP route configurations
<config_http_conn_man_route_table>`. Dynamic host discovery is only possible via DNS based
:ref:`service discovery <arch_overview_service_discovery>`. Configuration reloads must take place
via the built in :ref:`hot restart <arch_overview_hot_restart>` mechanism.
Though simplistic, fairly complicated deployments can be created using static configurations and
graceful hot restarts.
.. _arch_overview_dynamic_config_sds:
SDS only
--------
The :ref:`service discovery service (SDS) API <config_cluster_manager_sds>` provides a more advanced
mechanism by which Envoy can discover members of an upstream cluster. Layered on top of a static
configuration, SDS allows an Envoy deployment to circumvent the limitations of DNS (maximum records
in a response, etc.) as well as consume more information used in load balancing and routing (e.g.,
canary status, zone, etc.).
.. _arch_overview_dynamic_config_cds:

SDS and CDS
-----------

The :ref:`cluster discovery service (CDS) API <config_cluster_manager_cds>` layers on a mechanism by
which Envoy can discover upstream clusters used during routing. Envoy will gracefully add, update,
and remove clusters as specified by the API. This API allows implementors to build a topology in
which Envoy does not need to be aware of all upstream clusters at initial configuration time.
Typically, when doing HTTP routing along with CDS (but without the route discovery service),
implementors will make use of the router's ability to forward requests to a cluster specified in an
:ref:`HTTP request header <config_http_conn_man_route_table_route_cluster_header>`.
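For instance, a route entry along these lines (the header name is purely illustrative) defers
cluster selection to the request:

.. code-block:: json

  {
    "routes": [
      {
        "prefix": "/",
        "cluster_header": "x-service-cluster"
      }
    ]
  }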
Although it is possible to use CDS without SDS by specifying fully static clusters, we still
recommend using the SDS API for clusters specified via CDS. Internally, when a cluster definition is
updated, the operation is graceful; however, all existing connection pools will be drained and
reconnected. SDS does not suffer from this limitation: when hosts are added and removed via SDS, the
existing hosts in the cluster are unaffected.
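The CDS source is configured much like SDS: a management cluster plus a polling interval. A sketch
(placeholder address; any bootstrap-time clusters would still be listed under *clusters*):

.. code-block:: json

  {
    "cluster_manager": {
      "cds": {
        "cluster": {
          "name": "cds",
          "connect_timeout_ms": 250,
          "type": "static",
          "lb_type": "round_robin",
          "hosts": [{"url": "tcp://127.0.0.1:8082"}]
        },
        "refresh_delay_ms": 30000
      },
      "clusters": []
    }
  }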
.. _arch_overview_dynamic_config_rds:

SDS, CDS, and RDS
-----------------

The :ref:`route discovery service (RDS) API <config_http_conn_man_rds>` layers on a mechanism by
which Envoy can discover the entire route configuration for an HTTP connection manager filter at
runtime. The route configuration will be gracefully swapped in without affecting existing requests.
This API, when used alongside SDS and CDS, allows implementors to build a complex routing topology
(:ref:`traffic shifting <config_http_conn_man_route_table_traffic_splitting>`, blue/green
deployment, etc.) that will not require any Envoy restarts other than to obtain a new Envoy binary.
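Within the HTTP connection manager, RDS is enabled by replacing the inline *route_config* with an
*rds* block; a minimal sketch (the management cluster name, route configuration name, and interval
are placeholders):

.. code-block:: json

  {
    "rds": {
      "cluster": "rds",
      "route_config_name": "main_routes",
      "refresh_delay_ms": 30000
    }
  }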
.. _arch_overview_dynamic_config_lds:

SDS, CDS, RDS, and LDS
----------------------

The :ref:`listener discovery service (LDS) <config_overview_lds>` layers on a mechanism by which
Envoy can discover entire listeners at runtime. This includes all filter stacks, up to and including
HTTP filters with embedded references to :ref:`RDS <config_http_conn_man_rds>`. Adding LDS into the
mix allows almost every aspect of Envoy to be dynamically configured. Hot restart should only be
required for very rare configuration changes (admin, tracing driver, etc.) or binary updates.
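At the top level of the configuration, LDS is enabled by pointing Envoy at an LDS management
cluster; a minimal sketch (the cluster name and interval are placeholders):

.. code-block:: json

  {
    "lds": {
      "cluster": "lds",
      "refresh_delay_ms": 30000
    }
  }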
@@ -0,0 +1,18 @@
.. _arch_overview_dynamo:

DynamoDB
========

Envoy supports an HTTP level DynamoDB sniffing filter with the following features:

* DynamoDB API request/response parser.
* DynamoDB per operation/per table/per partition and operation statistics.
* Failure type statistics for 4xx responses, parsed from response JSON, e.g.,
  ProvisionedThroughputExceededException.
* Batch operation partial failure statistics.

The DynamoDB filter is a good example of Envoy’s extensibility and core abstractions at the HTTP
layer. At Lyft we use this filter for all application communication with DynamoDB. It provides an
invaluable source of data agnostic to the application platform and the specific AWS SDK in use.

DynamoDB filter :ref:`configuration <config_http_filters_dynamo>`.
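Enabling the filter is a one-line addition to the HTTP filter chain ahead of the router filter; a
sketch of the v1 form:

.. code-block:: json

  {
    "filters": [
      {"name": "http_dynamo_filter", "config": {}},
      {"name": "router", "config": {}}
    ]
  }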
@@ -0,0 +1,31 @@
.. _arch_overview_rate_limit:

Global rate limiting
====================

Although distributed :ref:`circuit breaking <arch_overview_circuit_break>` is generally extremely
effective in controlling throughput in distributed systems, there are times when it is not very
effective and global rate limiting is desired. The most common case is when a large number of hosts
are forwarding to a small number of hosts and the average request latency is low (e.g.,
connections/requests to a database server). If the target hosts become backed up, the downstream
hosts will overwhelm the upstream cluster. In this scenario it is extremely difficult to configure a
tight enough circuit breaking limit on each downstream host such that the system will operate
normally during typical request patterns but still prevent cascading failure when the system starts
to fail. Global rate limiting is a good solution for this case.

Envoy integrates directly with a global gRPC rate limiting service. Although any service that
implements the defined RPC/IDL protocol can be used, Lyft provides a `reference implementation <https://github.com/lyft/ratelimit>`_
written in Go which uses a Redis backend. Envoy’s rate limit integration has the following features:

* **Network level rate limit filter**: Envoy will call the rate limit service for every new
  connection on the listener where the filter is installed. The configuration specifies a specific
  domain and descriptor set to rate limit on. This has the ultimate effect of rate limiting the
  connections per second that transit the listener. :ref:`Configuration reference
  <config_network_filters_rate_limit>`.
* **HTTP level rate limit filter**: Envoy will call the rate limit service for every new request on
  the listener where the filter is installed and where the route table specifies that the global
  rate limit service should be called. All requests to the target upstream cluster, as well as all
  requests from the originating cluster to the target cluster, can be rate limited.
  :ref:`Configuration reference <config_http_filters_rate_limit>`.

Rate limit service :ref:`configuration <config_rate_limit_service>`.
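Wiring this up takes two pieces: a top level rate limit service definition pointing at the gRPC
service's cluster, and a rate limit filter on the listener (or route). A v1-style sketch, assuming
a *ratelimit* cluster is defined elsewhere and using an invented domain and descriptor:

.. code-block:: json

  {
    "rate_limit_service": {
      "type": "grpc_service",
      "config": {"cluster_name": "ratelimit"}
    }
  }

.. code-block:: json

  {
    "name": "ratelimit",
    "config": {
      "stat_prefix": "ingress_rl",
      "domain": "envoy",
      "descriptors": [[{"key": "database", "value": "users"}]]
    }
  }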