Our bootstrapping setup compiles multiple versions of the generated code for `descriptor.proto` and `plugin.proto`, one for each stage of the bootstrap. For source files (`.c`), we can always select the correct version of the file in the BUILD rules, but for header files we need to make sure the correct stage's file is always selected via `#include`.
Previously we used `cc_library(includes=[])` to make it appear as though our bootstrapped headers had the same names as the "real" headers. This allowed a lot of the code to be agnostic to whether a bootstrap header was being used, which simplified things because we did not have to change the code performing the `#include`.
Unfortunately, due to build system limitations, this sometimes led to the incorrect header getting included. That should not have been possible: the BUILD graph was clean and should have removed all ambiguity about which header was available. But in non-sandboxed builds, the compiler was able to find headers that were not actually in `deps=[]`, and worse, it preferred those headers over the ones that actually were in `deps=[]`. This led to unintended results and errors about layering check violations.
This CL fixes the problem by removing all use of `includes=[]`. We now spell a full pathname to all bootstrap headers, so this class of errors is no longer possible. Unfortunately this adds some complexity, as we have to hard-code these full paths in several places.
A nice improvement in this CL is that `bootstrap_upb_proto_library()` can now be used only for bootstrapping; it exposes only the `descriptor_bootstrap.h` / `plugin_bootstrap.h` files. Anyone wanting the normal `net/proto2/proto/descriptor.upb.h` file should depend on the `//net/proto2/proto:descriptor_upb_c_proto` target instead.
PiperOrigin-RevId: 664953196
Using `field.full_name()` instead of constructing a thunk name based on the containing type makes little difference for normal fields, but it is guaranteed to be unique for extension fields, whereas the prior approach would break if an extension field name collided with a regular field name (or if two extensions of the same message had colliding field names).
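The collision can be sketched as follows (illustrative only; the naming scheme and names below are made-up stand-ins, not the actual generated-code format):

```java
// Illustrative only: shows why a containing-type-based thunk name can collide
// for extensions, while a full name stays unique by construction.
public class ThunkNameExample {
  // Old style (hypothetical): thunk name derived from containing type + field name.
  static String containingTypeStyle(String containingType, String fieldName) {
    return containingType + "_" + fieldName;
  }

  public static void main(String[] args) {
    // Two extensions of the same message, declared in different scopes:
    //   pkg.a.ext extends pkg.MyMessage
    //   pkg.b.ext extends pkg.MyMessage
    String a = containingTypeStyle("pkg.MyMessage", "ext");
    String b = containingTypeStyle("pkg.MyMessage", "ext");
    System.out.println(a.equals(b));  // true -> the old-style names collide

    // The fields' full names remain distinct, so full-name-based thunks cannot collide.
    System.out.println("pkg.a.ext".equals("pkg.b.ext"));  // false
  }
}
```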
PiperOrigin-RevId: 664910530
This builds off of our existing pattern for unused imports. We will still warn when any deprecated features are used in a proto file passed directly to protoc, but we will suppress the warnings in the following situations:
1) Transitive imports pulled from the filesystem or descriptors will not trigger warnings. This will keep warnings isolated to the upstream builds instead of alerting all downstream clients.
2) Dynamic pool builds will not log deprecation warnings by default.
PiperOrigin-RevId: 663396953
Since a group GPBUnknownField holds a GPBUnknownFields, and that object is mutable, it needs to be copied so the two instances aren't linked.
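As an analogy, here is a minimal Java sketch of the same defensive-copy pattern (the real fix is in the Objective-C runtime; the class and field names below are made up):

```java
import java.util.ArrayList;
import java.util.List;

// When a copied object holds a mutable member, the copy must duplicate that
// member, otherwise both copies mutate the same underlying state.
public final class CopyMutableMemberExample {
  private final List<String> fields;  // stand-in for the mutable GPBUnknownFields

  CopyMutableMemberExample(List<String> fields) {
    this.fields = fields;
  }

  // Wrong: the copy shares ("is linked to") the same mutable list.
  CopyMutableMemberExample shallowCopy() {
    return new CopyMutableMemberExample(this.fields);
  }

  // Right: duplicate the mutable member so the two instances evolve independently.
  CopyMutableMemberExample deepCopy() {
    return new CopyMutableMemberExample(new ArrayList<>(this.fields));
  }

  public static void main(String[] args) {
    List<String> data = new ArrayList<>(List.of("a"));
    CopyMutableMemberExample original = new CopyMutableMemberExample(data);
    CopyMutableMemberExample linked = original.shallowCopy();
    CopyMutableMemberExample independent = original.deepCopy();
    data.add("b");
    System.out.println(linked.fields);       // [a, b] -- mutation leaked into the copy
    System.out.println(independent.fields);  // [a]    -- deep copy unaffected
  }
}
```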
PiperOrigin-RevId: 663323972
If a wildcard expansion produces an unexpected failure message or an unexpectedly succeeding test, users will be required to remove the wildcarded equivalent. Once it is removed, they must rerun the conformance test to add the specific failures that the removed wildcard entry had covered.
PiperOrigin-RevId: 663040062
For some reason, the current `if` statement also causes the blocking test to be skipped if any of its needs are skipped. We fix this by going back to `if: always()` to ensure that the blocking test always runs.
PiperOrigin-RevId: 663033872
Before this PR, we internally stored a list of tests that must pass on presubmit and tried to keep it up to date.
This PR moves that bookkeeping into GitHub by adding a 'continuous-only' variable to most testing matrices, which lets authors specify which of their tests should be skipped on presubmit. During presubmit, tests marked as continuous-only will not be run and their names will be prefixed with "[SKIPPED]". All continuous-only tests will be suffixed with "(Continuous)".
At the end of running all the tests, we have a single "All Blocking Tests" signal that will tell us whether all of the necessary tests have passed (either for presubmit or continuous based on how the test was triggered).
I've tested this from a different branch [here](https://github.com/protocolbuffers/protobuf/actions/runs/9602443750?pr=17151) and from a different fork [here](https://github.com/protocolbuffers/protobuf/actions/runs/9602554500?pr=17192). These should behave identically, and as far as I can tell they do.
I also have a continuous test run [here](https://github.com/protocolbuffers/protobuf/actions/runs/9603824200) which runs the entire test suite.
Closes #17198
PiperOrigin-RevId: 662940724
The API is annotated so that nil isn't valid, but in case some calling code isn't
annotated correctly and returns nil from another nonnull API, it could still happen,
so make it an explicit failure just to be safe.
PiperOrigin-RevId: 662935009
Pure Python and upb do not support it and have filtered out the test. This API does
not exist in any other language (except protobuf C++).
GetDebugString() for the cpp extension will be removed in January 2025.
PiperOrigin-RevId: 662640110
*** Reason for rollback ***
Rolling back while investigating tsan reports.
*** Original change description ***
Remove a volatile read/write in SmallSortedMap.entrySet()
I don't know why this was volatile.
Volatile semantics are not strong enough for making this read-then-write a
transaction. You'd need a mutex or AtomicReference to achieve that.
***
PiperOrigin-RevId: 662300377
I don't know why this was volatile.
Volatile semantics are not strong enough for making this read-then-write a
transaction. You'd need a mutex or AtomicReference to achieve that.
Maybe it's so that if someone else has already made an EntrySet on another
thread, we can reuse that? But it's probably cheaper to make a new EntrySet
than to do the volatile read-or-write (this is just a guess and may be wrong; I'm
not super experienced with the cost of volatiles).
The EntrySet is stateless (except for the state of its containing
SmallSortedMap) so it's fine to have more than one of them.
The commit introducing this doesn't explain why it's volatile, but the method
comment referring to "Similar to the AbstractMap implementation of {@code
keySet()}" may have some clues.
AbstractMap.keySet uses a transient keySet, not a volatile keySet.
Is it possible that this initial implementation mistook transient for volatile?
Anyway, it's unnecessary, let's rip it out and enjoy a little more performance.
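For reference, a simplified Java sketch of the lazy-caching pattern being discussed (class and method names are simplified; this is not the actual SmallSortedMap source):

```java
import java.util.AbstractMap;
import java.util.Map;
import java.util.Set;

// Simplified illustration of the pattern: a lazily created, cached entry-set view.
// The cached view is stateless apart from its reference to the enclosing map, so
// (per the reasoning above) two threads racing here at worst allocate an extra,
// equivalent EntrySet; one of them wins and is cached.
abstract class LazyEntrySetExample<K, V> extends AbstractMap<K, V> {
  // Previously: private volatile Set<Map.Entry<K, V>> lazyEntrySet;
  // volatile makes the read and the write individually visible to other threads,
  // but the read-then-write below is still not atomic, so volatile never
  // guaranteed that only one EntrySet would ever be created.
  private Set<Map.Entry<K, V>> lazyEntrySet;

  @Override
  public Set<Map.Entry<K, V>> entrySet() {
    if (lazyEntrySet == null) {
      lazyEntrySet = createEntrySet();  // lazy init; duplicates are equivalent
    }
    return lazyEntrySet;
  }

  abstract Set<Map.Entry<K, V>> createEntrySet();
}
```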
PiperOrigin-RevId: 662071950
I've noted the rationale for this in the comment, and in the attached bug.
This should save about 4-8 bytes of memory per object with extensions (on the 64-bit server JVM, depending on whether compressed object pointers are on), and 4 bytes on Android.
PiperOrigin-RevId: 661900014