This test validates that upb Python targets can be built successfully even if
Python is not installed locally.
I also updated our pinned upb version to pull in some recent fixes needed for
this test run.
PiperOrigin-RevId: 559504790
This will be used to seed feature resolution, which then becomes just an edition lookup followed by proto merges. Runtimes and generators that need to resolve their own features will be able to leverage this to avoid reimplementing all of the reflective logic that goes into the default calculation.
PiperOrigin-RevId: 559271634
This change moves the upb Fastbuild, Optimized, and FastTable test runs over to
the protobuf repo CI in preparation for moving the upb codebase itself. There
are a bunch more test runs to move, but this initial change handles the easy
ones first.
I also updated our pinned upb version to the current head to pick up some
recent fixes.
PiperOrigin-RevId: 557486174
https://github.com/protocolbuffers/protobuf/pull/13204 refactored the Ruby object cache to use a key of `LL2NUM(key_val)` instead of `LL2NUM(key_val >> 2)`. On 32-bit systems, `LL2NUM(key_val)` returns inconsistent results because a large value has to be stored as a Bignum on the heap. This causes cache lookups to fail.
This commit restores the previous behavior of using `ObjectCache_GetKey`, which discards the lower 2 bits (these are always zero, since object pointers are at least 4-byte aligned). This allows the key to be stored as a Fixnum on both 32-bit and 64-bit platforms.
As https://patshaughnessy.net/2014/1/9/how-big-is-a-bignum describes, a Fixnum uses:
* 1 bit for the `FIXNUM_FLAG`.
* 1 bit for the sign flag.
Therefore the largest possible Fixnum value on a 64-bit system is 4611686018427387903 (2^62 - 1). On a 32-bit system, the largest value is 1073741823 (2^30 - 1).
For example, a possible VALUE pointer address on a 32-bit system:
0xff5b4af8 => 4284173048
Dropping the lower 2 bits makes up for the loss of range to these flags. In the example above, we see that shifting by 2 turns the value into a 30-bit number, which can be represented as a Fixnum:
(0xff5b4af8 >> 2) => 1071043262
This bug can also manifest on a 64-bit system if the upper bits are 0xff.
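As a rough illustration of the range argument (plain Ruby arithmetic only; the real key computation lives in the C extension's `ObjectCache_GetKey`):
```ruby
# Illustrative sketch using the example 32-bit VALUE address from above.
address = 0xff5b4af8             # => 4284173048
raise "unexpectedly unaligned" unless (address & 3).zero?

key = address >> 2               # => 1071043262
puts address.bit_length          # => 32; stored as a heap-allocated Bignum on 32-bit Ruby
puts key.bit_length              # => 30; fits in a 32-bit Fixnum (max 2**30 - 1)
```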
Closes#13481
Closes#13494
COPYBARA_INTEGRATE_REVIEW=https://github.com/protocolbuffers/protobuf/pull/13494 from stanhu:sh-fix-ruby-protobuf-32bit d63122a6fc
PiperOrigin-RevId: 557211768
Fixes a class of flaky test failures observed only in the FFI implementation, caused by garbage collection occurring between calls to accessors for a frozen field.
Closes#13420
COPYBARA_INTEGRATE_REVIEW=https://github.com/protocolbuffers/protobuf/pull/13420 from protocolbuffers:rub 0ea91165fb
PiperOrigin-RevId: 552602026
Unblocks execution of these tests on JRuby FFI.
Also applies an automated reformatting of WORKSPACE and adds a documentation comment to ease future local debugging of JRuby tests.
Closes#13293
COPYBARA_INTEGRATE_REVIEW=https://github.com/protocolbuffers/protobuf/pull/13293 from JasonLunn:object_cache_test_update 7db211a342
PiperOrigin-RevId: 547614702
This will be used for tracking the unresolved feature sets from the original proto file. Notable use-cases for this include:
* Code generators that need to validate their own features
* Runtimes that need to be able to accurately round-trip the original protos
PiperOrigin-RevId: 547610367
This represents the future direction of protobuf, replacing proto2/proto3 syntax with editions. These will enable more incremental evolution of protobuf APIs through features, which are individual behaviors (such as whether field presence is explicit or implicit). For more details see https://protobuf.dev/editions/overview/.
This PR contains a working implementation of editions for the protoc frontend and C++ code generation, along with various infrastructure improvements to support it. It gives anyone who wants it an early preview of editions, but has no effect on proto2/proto3 syntax. It is guarded behind the `--experimental_editions` flag and is an experimental feature with no guarantees.
PiperOrigin-RevId: 544805690
Prior to this CL, asserts have no effect for Ruby 3+, because Ruby unconditionally defines `NDEBUG`: https://bugs.ruby-lang.org/issues/18777
We work around this by doing `#undef NDEBUG` right after including Ruby, if `NDEBUG` was not previously defined.
PiperOrigin-RevId: 535359088
An extension range has a verification state of either "UNVERIFIED" or "DECLARATION". If it is "DECLARATION", all extension fields of the range must be declared; otherwise the build fails. The current default is "UNVERIFIED", but we will flip the default later.
Deprecate `is_repeated` in favor of `repeated`.
PiperOrigin-RevId: 526184056
This PR removes the DSL from the code generator, in anticipation of splitting the DSL out into a separate package.
Given a .proto file like:
```proto
syntax = "proto3";
package pkg;
message TestMessage {
  optional int32 i32 = 1;
  optional TestMessage msg = 2;
}
```
Generated code before:
```ruby
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: test.proto
require 'google/protobuf'
Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("test.proto", :syntax => :proto3) do
    add_message "pkg.TestMessage" do
      proto3_optional :i32, :int32, 1
      proto3_optional :msg, :message, 2, "pkg.TestMessage"
    end
  end
end
module Pkg
  TestMessage = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("pkg.TestMessage").msgclass
end
```
Generated code after:
```ruby
# frozen_string_literal: true
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: test.proto
require 'google/protobuf'
descriptor_data = "\n\ntest.proto\x12\x03pkg\"S\n\x0bTestMessage\x12\x10\n\x03i32\x18\x01 \x01(\x05H\x00\x88\x01\x01\x12\"\n\x03msg\x18\x02 \x01(\x0b\x32\x10.pkg.TestMessageH\x01\x88\x01\x01\x42\x06\n\x04_i32B\x06\n\x04_msgb\x06proto3"
begin
  Google::Protobuf::DescriptorPool.generated_pool.add_serialized_file(descriptor_data)
rescue TypeError => e
  # <compatibility code, see below>
end
module Pkg
  TestMessage = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("pkg.TestMessage").msgclass
end
```
This change fixes nearly all of the remaining conformance failures. This is a side effect of moving from the DSL (which is lossy) to a serialized descriptor (which preserves all information).
## Backward Compatibility
This change should be 100% compatible with Ruby Protobuf >= 3.18.0, released in September 2021, and it should be compatible with all existing users and deployments. However, I inserted some special compatibility code to achieve this level of backward compatibility.
Without the compatibility code, there is an edge case that could break backward compatibility: the existing DSL-based code is lax where the new code is strict.
When we use a full serialized descriptor, it will contain a list of all `.proto` files imported by this file (whereas the DSL never added dependencies properly): dfb71558a2/src/google/protobuf/descriptor.proto (L65-L66)
`add_serialized_file` will verify that all dependencies listed in the descriptor were previously added with `add_serialized_file`. Generally that should be fine, because the generated code will contain Ruby `require` statements for all dependencies, and the descriptor will fail to load anyway if the types we depend on were not previously defined in the DescriptorPool.
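For a concrete look at what the serialized descriptor carries, the `descriptor_data` string from the generated code above can be decoded with the descriptor messages shipped in the gem (an illustrative sketch, not generated code):
```ruby
require 'google/protobuf'
require 'google/protobuf/descriptor_pb'

# descriptor_data as emitted for test.proto in the example above.
descriptor_data = "\n\ntest.proto\x12\x03pkg\"S\n\x0bTestMessage\x12\x10\n\x03i32\x18\x01 \x01(\x05H\x00\x88\x01\x01\x12\"\n\x03msg\x18\x02 \x01(\x0b\x32\x10.pkg.TestMessageH\x01\x88\x01\x01\x42\x06\n\x04_i32B\x06\n\x04_msgb\x06proto3"

file = Google::Protobuf::FileDescriptorProto.decode(descriptor_data)
puts file.name                              # => "test.proto"
puts file.dependency.to_a.inspect           # => [] (test.proto has no imports)
puts file.message_type.map(&:name).inspect  # => ["TestMessage"]
```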
But there is a potential for problems if there are ambiguities around file paths. For example, consider the following scenario:
```proto
// foo/bar.proto
syntax = "proto2";
message Bar {}
```
```proto
// foo/baz.proto
syntax = "proto2";
import "bar.proto";
message Baz {
  optional Bar bar = 1;
}
```
If you invoke `protoc` like so, it will work correctly:
```
$ protoc --ruby_out=. -Ifoo foo/bar.proto foo/baz.proto
$ RUBYLIB=. ruby baz_pb.rb
```
However if you invoke `protoc` like so, and didn't have any compatibility code, it would fail to load:
```
$ protoc --ruby_out=. -I. -Ifoo foo/baz.proto
$ protoc --ruby_out=. -I. -Ifoo foo/bar.proto
$ RUBYLIB=foo ruby foo/baz_pb.rb
foo/baz_pb.rb:10:in `add_serialized_file': Unable to build file to DescriptorPool: Depends on file 'bar.proto', but it has not been loaded (Google::Protobuf::TypeError)
from foo/baz_pb.rb:10:in `<main>'
```
The problem is that `bar.proto` is being referred to by two different canonical names: `bar.proto` and `foo/bar.proto`. This is a user error: each import should always be referred to by a consistent full path. Hopefully user errors of this sort are rare, but it is hard to know without trying.
The code in this PR prints a warning using `warn` if we detect that this edge case has occurred. We plan to remove this compatibility code in the next major version.
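A minimal sketch of what that fallback could look like, assuming it is acceptable to drop the dependency list from the decoded descriptor when the imported types are already present in the pool (illustrative only, not the PR's verbatim generated code):
```ruby
begin
  Google::Protobuf::DescriptorPool.generated_pool.add_serialized_file(descriptor_data)
rescue TypeError
  # Fallback for the import-path ambiguity described above (illustrative).
  # `descriptor_data` is the serialized file descriptor defined earlier in the
  # generated file.
  require 'google/protobuf/descriptor_pb'
  parsed = Google::Protobuf::FileDescriptorProto.decode(descriptor_data)
  parsed.clear_dependency                 # drop the import list and retry
  Google::Protobuf::DescriptorPool.generated_pool.add_serialized_file(
    Google::Protobuf::FileDescriptorProto.encode(parsed))
  warn "Imports in #{parsed.name} were registered under inconsistent paths; " \
       "import each .proto file by a single canonical path."
end
```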
Closes#12319
COPYBARA_INTEGRATE_REVIEW=https://github.com/protocolbuffers/protobuf/pull/12319 from haberman:ruby-gencode-binary 5c0e8f20b1
PiperOrigin-RevId: 524129023
This can be useful for avoiding data corruption when an extension field/declaration is deleted from the schema.
It also adds a check for duplicate numbers in the declarations.
PiperOrigin-RevId: 523269837
Syntax will become meaningless once we migrate to editions. In the meantime, we've implemented APIs to expose the differences between proto2 and proto3 in terms of the features we plan to release in edition 2023.
PiperOrigin-RevId: 520433607