// Copyright 2024 The gRPC Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
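
// Microbenchmarks for the cost of recording a counter metric through gRPC's
// global stats plugin registry (AddCounter on the stats plugin group returned
// by GetStatsPluginsForChannel), comparing three configurations: a
// FakeStatsPlugin, the OpenTelemetry plugin backed by an OpenTelemetry SDK
// MeterProvider, and no registered plugin at all. Each configuration is
// exercised both without labels and with three required plus three optional
// labels.
//
// Typical invocation, assuming a Bazel build of this target:
//   bazel-bin/test/cpp/microbenchmarks/bm_stats_plugin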
#include <memory>
#include <benchmark/benchmark.h>
#include "absl/log/absl_check.h"
#include "absl/strings/string_view.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include <grpcpp/ext/otel_plugin.h>
#include "src/core/lib/event_engine/channel_args_endpoint_config.h"
#include "src/core/telemetry/metrics.h"
#include "test/core/test_util/fake_stats_plugin.h"
#include "test/core/test_util/test_config.h"
#include "test/cpp/microbenchmarks/helpers.h"
#include "test/cpp/util/test_config.h"
namespace {
constexpr const absl::string_view kMetricName = "test.counter";
constexpr const absl::string_view kMetricNameWithLabels =
"test.counter_with_labels";
const auto kCounterHandle =
grpc_core::GlobalInstrumentsRegistry::RegisterUInt64Counter(
kMetricName, "A simple test counter", "{count}", true)
.Build();
const auto kCounterWithLabelsHandle =
grpc_core::GlobalInstrumentsRegistry::RegisterUInt64Counter(
kMetricNameWithLabels, "A simple test counter with labels", "{count}",
true)
.Labels("key1", "key2", "key3")
.OptionalLabels("opt_key1", "opt_key2", "opt_key3")
.Build();
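
// Measures AddCounter on the label-less counter when a single FakeStatsPlugin
// is the only plugin registered for the channel scope.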
void BM_AddCounterWithFakeStatsPlugin(benchmark::State& state) {
grpc_core::GlobalStatsPluginRegistryTestPeer::
ResetGlobalStatsPluginRegistry();
grpc_core::FakeStatsPluginBuilder().BuildAndRegister();
grpc_event_engine::experimental::ChannelArgsEndpointConfig endpoint_config;
auto stats_plugin_group =
grpc_core::GlobalStatsPluginRegistry::GetStatsPluginsForChannel(
grpc_core::experimental::StatsPluginChannelScope("", "",
endpoint_config));
for (auto _ : state) {
stats_plugin_group.AddCounter(kCounterHandle, 1, {}, {});
}
}
BENCHMARK(BM_AddCounterWithFakeStatsPlugin);
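
// Same as above, but each AddCounter call passes three label values and three
// optional label values.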
void BM_AddCounterWithLabelsWithFakeStatsPlugin(benchmark::State& state) {
grpc_core::GlobalStatsPluginRegistryTestPeer::
ResetGlobalStatsPluginRegistry();
grpc_core::FakeStatsPluginBuilder().BuildAndRegister();
grpc_event_engine::experimental::ChannelArgsEndpointConfig endpoint_config;
auto stats_plugin_group =
grpc_core::GlobalStatsPluginRegistry::GetStatsPluginsForChannel(
grpc_core::experimental::StatsPluginChannelScope("", "",
endpoint_config));
for (auto _ : state) {
stats_plugin_group.AddCounter(kCounterWithLabelsHandle, 1,
{"val1", "val2", "val3"},
{"opt_val1", "opt_val2", "opt_val3"});
}
}
BENCHMARK(BM_AddCounterWithLabelsWithFakeStatsPlugin);
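
// Measures AddCounter against the gRPC OpenTelemetry plugin, configured with
// an OpenTelemetry SDK MeterProvider and with only the label-less test metric
// enabled.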
void BM_AddCounterWithOTelPlugin(benchmark::State& state) {
grpc_core::GlobalStatsPluginRegistryTestPeer::
ResetGlobalStatsPluginRegistry();
auto meter_provider =
std::make_shared<opentelemetry::sdk::metrics::MeterProvider>();
auto status = grpc::OpenTelemetryPluginBuilder()
.EnableMetrics({kMetricName})
.SetMeterProvider(std::move(meter_provider))
.BuildAndRegisterGlobal();
  ABSL_CHECK(status.ok());
grpc_event_engine::experimental::ChannelArgsEndpointConfig endpoint_config;
auto stats_plugin_group =
grpc_core::GlobalStatsPluginRegistry::GetStatsPluginsForChannel(
grpc_core::experimental::StatsPluginChannelScope("", "",
endpoint_config));
for (auto _ : state) {
stats_plugin_group.AddCounter(kCounterHandle, 1, {}, {});
}
}
BENCHMARK(BM_AddCounterWithOTelPlugin);
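
// As above, but enables the labeled test metric and passes three label values
// and three optional label values per call.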
void BM_AddCounterWithLabelsWithOTelPlugin(benchmark::State& state) {
grpc_core::GlobalStatsPluginRegistryTestPeer::
ResetGlobalStatsPluginRegistry();
auto meter_provider =
std::make_shared<opentelemetry::sdk::metrics::MeterProvider>();
auto status = grpc::OpenTelemetryPluginBuilder()
                    .EnableMetrics({kMetricNameWithLabels})
.SetMeterProvider(std::move(meter_provider))
.BuildAndRegisterGlobal();
  ABSL_CHECK(status.ok());
grpc_event_engine::experimental::ChannelArgsEndpointConfig endpoint_config;
auto stats_plugin_group =
grpc_core::GlobalStatsPluginRegistry::GetStatsPluginsForChannel(
grpc_core::experimental::StatsPluginChannelScope("", "",
endpoint_config));
for (auto _ : state) {
stats_plugin_group.AddCounter(kCounterWithLabelsHandle, 1,
{"val1", "val2", "val3"},
{"opt_val1", "opt_val2", "opt_val3"});
}
}
BENCHMARK(BM_AddCounterWithLabelsWithOTelPlugin);
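
// Baseline: AddCounter with no stats plugin registered, measuring the
// dispatch cost through an empty plugin group.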
void BM_AddCounterWithNoPlugin(benchmark::State& state) {
grpc_core::GlobalStatsPluginRegistryTestPeer::
ResetGlobalStatsPluginRegistry();
grpc_event_engine::experimental::ChannelArgsEndpointConfig endpoint_config;
auto stats_plugin_group =
grpc_core::GlobalStatsPluginRegistry::GetStatsPluginsForChannel(
grpc_core::experimental::StatsPluginChannelScope("", "",
endpoint_config));
for (auto _ : state) {
stats_plugin_group.AddCounter(kCounterHandle, 1, {}, {});
}
}
BENCHMARK(BM_AddCounterWithNoPlugin);
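
// Baseline with labels: no plugin registered, but each call still passes
// three label values and three optional label values.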
void BM_AddCounterWithLabelsWithNoPlugin(benchmark::State& state) {
grpc_core::GlobalStatsPluginRegistryTestPeer::
ResetGlobalStatsPluginRegistry();
grpc_event_engine::experimental::ChannelArgsEndpointConfig endpoint_config;
auto stats_plugin_group =
grpc_core::GlobalStatsPluginRegistry::GetStatsPluginsForChannel(
grpc_core::experimental::StatsPluginChannelScope("", "",
endpoint_config));
for (auto _ : state) {
stats_plugin_group.AddCounter(kCounterWithLabelsHandle, 1,
{"val1", "val2", "val3"},
{"opt_val1", "opt_val2", "opt_val3"});
}
}
BENCHMARK(BM_AddCounterWithLabelsWithNoPlugin);
} // namespace
// Some distros have RunSpecifiedBenchmarks under the benchmark namespace,
// and others do not. This allows us to support both modes.
namespace benchmark {
void RunTheBenchmarksNamespaced() { RunSpecifiedBenchmarks(); }
} // namespace benchmark
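
// Sets up the gRPC test environment and the benchmark library, then runs all
// registered benchmarks. Individual benchmarks can be selected with the
// standard Google Benchmark flags, e.g.
// --benchmark_filter=BM_AddCounterWithOTelPlugin.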
int main(int argc, char** argv) {
grpc::testing::TestEnvironment env(&argc, argv);
LibraryInitializer libInit;
::benchmark::Initialize(&argc, argv);
grpc::testing::InitTest(&argc, &argv, false);
benchmark::RunTheBenchmarksNamespaced();
return 0;
}