Until recently, these targets were dependencies of `//:protobuf` and could thus
be accessed through that target. But now that we are adhering to the layering
check, we need to provide proper access in a way that respects that check. This
change uses top-level alias targets following the existing pattern.
PiperOrigin-RevId: 605123979
# Motivation
https://github.com/protocolbuffers/protobuf/issues/7392#issuecomment-1884666885
This is for completeness: I am not blocked by this change in any way, so feel free to close the PR or treat it as low priority.
# Changes
Implement JsonIgnoreUnknownParsingTest in C# conformance test runner.
# Tested
Locally with:
```
bazel test //csharp:conformance_test \
--action_env=DOTNET_CLI_TELEMETRY_OPTOUT=1 --test_env=DOTNET_CLI_HOME=~ \
--action_env=DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
```
Closes #15369
COPYBARA_INTEGRATE_REVIEW=https://github.com/protocolbuffers/protobuf/pull/15369 from noom:issue-7392/show-csharp-failures 3a652202a0
PiperOrigin-RevId: 602537927
This check enforces that each C++ build target has the correct dependencies for
all headers that it includes. We have many targets that were not correct with
respect to this check, so I fixed them up.
I also cleaned up the C++ targets related to the well-known types. I created a
cc_proto_library() target for each one and removed the :wkt_cc_protos target,
since this was necessary to satisfy the layering check. I deleted the
//src/google/protobuf:protobuf_nowkt target and deprecated :protobuf_nowkt,
because the distinction between the :protobuf and :protobuf_nowkt targets was
not really correct. Neither one exposed the headers for the well-known types in
a way that was valid with respect to the layering check, and the idea of
bundling all the well-known types together is not idiomatic in Bazel anyway.
This is a breaking change, because the //:protobuf target no longer bundles the
well-known types. From now on they should be accessed through the new
//:*_cc_proto aliases in our top-level package.
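For illustration, here is a minimal C++ sketch of what this means for downstream code, assuming a target that uses `google.protobuf.Timestamp`; the include is unchanged, but the Bazel dep must now name the well-known type's own target via a `//:*_cc_proto` alias (e.g. `//:timestamp_cc_proto`, following the pattern above) rather than relying on `//:protobuf` to re-export it:
```cpp
// Sketch only: under the layering check, the cc_library containing this file
// must list the well-known type's proto target in its deps, e.g. the
// @com_google_protobuf//:timestamp_cc_proto alias (name assumed from the
// //:*_cc_proto pattern described above); depending on //:protobuf alone is
// no longer sufficient.
#include <cstdint>

#include "google/protobuf/timestamp.pb.h"

google::protobuf::Timestamp MakeTimestamp(int64_t seconds) {
  google::protobuf::Timestamp ts;
  ts.set_seconds(seconds);
  ts.set_nanos(0);
  return ts;
}
```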
I renamed the :port_def target to :port, which simplifies things a bit by
matching our internal name.
The original motivation for this change was that to move utf8_range onto our CI
infrastructure, we needed to make its dependency rules_fuzzing compatible with
Bazel 6. The rules_fuzzing project builds with the layering check, and I found
that the process of upgrading it to Bazel 6 made it take a dependency on
protobuf, which caused it to break due to layering violations. I was able to
work around this, but it would still be nice to comply with the layering check
so that we don't have to worry about this kind of thing in the future.
PiperOrigin-RevId: 595516736
There is no canonical rules_ruby repo today, and we don't want our fork to become one. In order to unblock inclusion of Protobuf in the bzlmod registry, we're making this a dev dependency and dropping support for building Ruby with Bazel.
Fixes #14569
PiperOrigin-RevId: 584393841
This also fixes a few minor bugs in the editions implementation that were caught by the Python conformance tests, and adds a new SetFeatureSetDefaults API to the def pool for consistency with C++ and the other Python implementations.
PiperOrigin-RevId: 581384108
This change only covers pure Python; follow-up changes will handle the C++/upb variants and actually enable editions support. The C++ implementation works (as evidenced by the conformance tests), but needs some APIs added to allow for testing.
PiperOrigin-RevId: 580304039
We transform the proto2/proto3 protos to editions and then run the same set of tests over both. This will verify that migrating to editions preserves the existing proto2/proto3 behavior. These tests will not be enabled by default, and require the flag `--maximum_edition=2023`.
Future changes will:
- add more targeted editions-specific tests
- clean up our conformance test framework to allow for more targeted tests
- add wildcards to failure lists in limited cases to reduce noise
- add feature resolution conformance tests
PiperOrigin-RevId: 574570607
This doesn't change the tests at all, but refactors them to be more reusable in different contexts. Specifically, this will make it easier to add corresponding editions-based tests. This also splits the concept of "uses a proto3 message" and "runs proto3 tests", in preparation for that change.
PiperOrigin-RevId: 574310612
This doesn't change the tests at all, but refactors them to be more reusable in different contexts. Specifically, this will make it easier to add corresponding editions-based tests. This also splits the concept of "uses a proto3 message" and "runs proto3 tests", in preparation for that change.
PiperOrigin-RevId: 574299923
Add some missing endian conversions so that the conformance tests can be run on big-endian platforms.
The message length value created by the conformance test runner is described in the file's comments as little endian, but it was actually sent in the native endianness of the host. I was able to run the Java, Python, Ruby, PHP, and C# test executables, and they all expect little-endian length values, so those tests would hang on big-endian machines. Only the C++ test executable used native endianness, so it has been changed to expect little endian too.
Also change the fixed32 and fixed64 functions in binary_json_conformance_test_suite.cc to send the data as little endian, which fixes some failures in the Python conformance tests on big-endian platforms.
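For reference, a minimal sketch (not the actual patch) of the kind of conversion involved: emitting a fixed32 value as little-endian bytes regardless of the host's native byte order.
```cpp
#include <cstdint>
#include <string>

// Appends `value` to `out` as 4 bytes, least-significant byte first, so the
// encoding is little endian on both little- and big-endian hosts.
void AppendFixed32LittleEndian(uint32_t value, std::string* out) {
  out->push_back(static_cast<char>(value & 0xFF));
  out->push_back(static_cast<char>((value >> 8) & 0xFF));
  out->push_back(static_cast<char>((value >> 16) & 0xFF));
  out->push_back(static_cast<char>((value >> 24) & 0xFF));
}
```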
Closes #13443
COPYBARA_INTEGRATE_REVIEW=https://github.com/protocolbuffers/protobuf/pull/13443 from linux-on-ibm-z:conformance-runner-little-endian-fix 4ef7948997
PiperOrigin-RevId: 553958649
Going forward we will forbid embedded `null` characters in `json_name`, but we will continue to support embedded `null` characters in JSON values. This conformance test will help clarify that this is a supported use case.
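As a rough illustration of the distinction (a sketch under assumptions, not the conformance test itself): an embedded NUL escaped as `\u0000` inside a JSON string value should keep parsing. This uses the existing `google::protobuf::util::JsonStringToMessage` entry point and the `StringValue` wrapper, whose JSON form is a plain string.
```cpp
#include <iostream>
#include <string>

#include "google/protobuf/util/json_util.h"
#include "google/protobuf/wrappers.pb.h"

int main() {
  google::protobuf::StringValue msg;
  // A JSON string value with an embedded NUL character, escaped as \u0000.
  const std::string json = "\"ab\\u0000cd\"";
  const auto status = google::protobuf::util::JsonStringToMessage(json, &msg);
  if (!status.ok()) {
    std::cerr << "parse failed\n";
    return 1;
  }
  // The parsed field holds 5 bytes, including the embedded NUL.
  std::cout << "parsed " << msg.value().size() << " bytes\n";
  return 0;
}
```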
PiperOrigin-RevId: 527918031
This PR removes the DSL from the code generator, in anticipation of splitting the DSL out into a separate package.
Given a .proto file like:
```proto
syntax = "proto3";
package pkg;
message TestMessage {
  optional int32 i32 = 1;
  optional TestMessage msg = 2;
}
```
Generated code before:
```ruby
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: test.proto
require 'google/protobuf'
Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("test.proto", :syntax => :proto3) do
    add_message "pkg.TestMessage" do
      proto3_optional :i32, :int32, 1
      proto3_optional :msg, :message, 2, "pkg.TestMessage"
    end
  end
end
module Pkg
  TestMessage = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("pkg.TestMessage").msgclass
end
```
Generated code after:
```ruby
# frozen_string_literal: true
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: test.proto
require 'google/protobuf'
descriptor_data = "\n\ntest.proto\x12\x03pkg\"S\n\x0bTestMessage\x12\x10\n\x03i32\x18\x01 \x01(\x05H\x00\x88\x01\x01\x12\"\n\x03msg\x18\x02 \x01(\x0b\x32\x10.pkg.TestMessageH\x01\x88\x01\x01\x42\x06\n\x04_i32B\x06\n\x04_msgb\x06proto3"
begin
  Google::Protobuf::DescriptorPool.generated_pool.add_serialized_file(descriptor_data)
rescue TypeError => e
  # <compatibility code, see below>
end
module Pkg
  TestMessage = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("pkg.TestMessage").msgclass
end
```
This change fixes nearly all remaining conformance problems that existed previously. This is a side effect of moving from the DSL (which is lossy) to a serialized descriptor (which preserves all information).
## Backward Compatibility
This change should be 100% compatible with Ruby Protobuf >= 3.18.0, released in Sept 2021. Additionally, it should be compatible with all existing users and deployments. However, there is some special compatibility code I inserted to achieve this level of backward compatibility.
Without the compatibility code, there is an edge case that could break backward compatibility: the existing code is lax in a case where the new code would be stricter.
When we use a full serialized descriptor, it will contain a list of all `.proto` files imported by this file (whereas the DSL never added dependencies properly): dfb71558a2/src/google/protobuf/descriptor.proto (L65-L66)
`add_serialized_file` will verify that all dependencies listed in the descriptor were previously added with `add_serialized_file`. Generally that should be fine, because the generated code will contain Ruby `require` statements for all dependencies, and the descriptor will fail to load anyway if the types we depend on were not previously defined in the DescriptorPool.
But there is a potential for problems if there are ambiguities around file paths. For example, consider the following scenario:
```proto
// foo/bar.proto
syntax = "proto2";
message Bar {}
```
```proto
// foo/baz.proto
syntax = "proto2";
import "bar.proto";
message Baz {
  optional Bar bar = 1;
}
```
If you invoke `protoc` like so, it will work correctly:
```
$ protoc --ruby_out=. -Ifoo foo/bar.proto foo/baz.proto
$ RUBYLIB=. ruby baz_pb.rb
```
However, if you invoke `protoc` like so, then without the compatibility code the generated file would fail to load:
```
$ protoc --ruby_out=. -I. -Ifoo foo/baz.proto
$ protoc --ruby_out=. -I. -Ifoo foo/bar.proto
$ RUBYLIB=foo ruby foo/baz_pb.rb
foo/baz_pb.rb:10:in `add_serialized_file': Unable to build file to DescriptorPool: Depends on file 'bar.proto', but it has not been loaded (Google::Protobuf::TypeError)
from foo/baz_pb.rb:10:in `<main>'
```
The problem is that `bar.proto` is being referred to by two different canonical names: `bar.proto` and `foo/bar.proto`. This is a user error: each import should always be referred to by a consistent full path. Hopefully user errors of this sort are rare, but it is hard to know without trying.
The code in this PR prints a warning using `warn` if we detect that this edge case has occurred. We plan to remove this compatibility code in the next major version.
Closes #12319
COPYBARA_INTEGRATE_REVIEW=https://github.com/protocolbuffers/protobuf/pull/12319 from haberman:ruby-gencode-binary 5c0e8f20b1
PiperOrigin-RevId: 524129023
- General naming update to "Objective-C", which some linters want, so just use it everywhere for consistency.
- Add a Bazel macOS config
  - Includes the conformance test, so drop it from the Xcode flows.
- Tweak the naming on the Xcode flows so they are a little clearer.
- CocoaPods flows:
  - Give them (hopefully) clearer naming
  - Use the same `image` as the Xcode tests
  - Pin the Xcode version to match the Xcode tests
PiperOrigin-RevId: 508156130