.index() is dependent on the order specified in the .proto file.
MiniTables create their own ordering, which represents the true index that we're interested in for set_alias.
This can be fetched via .layout_index when we have a upb::FieldDefPtr.
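A minimal sketch of the distinction, assuming the upb C++ reflection wrappers and the public MiniTable lookup helper (exact headers and wrapper methods may differ slightly):
```cpp
#include "upb/mini_table/message.h"
#include "upb/reflection/def.hpp"

// field.index() follows declaration order in the .proto file;
// field.layout_index() is the position within the MiniTable, which is the
// index the generated code actually needs.
const upb_MiniTableField* LookUpLayoutField(upb::MessageDefPtr message,
                                            upb::FieldDefPtr field) {
  const upb_MiniTable* mt = upb_MessageDef_MiniTable(message.ptr());
  return upb_MiniTable_GetFieldByIndex(mt, field.layout_index());
}
```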
PiperOrigin-RevId: 691825782
Before this CL, it was only possible to get CHandles when fetching messages from a map. proto2::cpp does not have this restriction, so we close that gap here.
Also utilize the new Emit() :)
PiperOrigin-RevId: 686475508
We unify hpb's impl to utilize Protobuf's canonical printer, as opposed to a bespoke solution.
- augment hpb::Context with EmitLegacy, which keeps the $* syntax while we convert. This allows for a far simpler transitional pathway, and we can incrementally port to .Emit() as we see fit. Of course, all new calls should use .Emit().
- use the Context to wrap io::Printer and any hpb-specific info (e.g. what backend we're using)
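A rough sketch of the shape this takes. The class body, the `Backend` enum, and the `EmitLegacy` signature here are illustrative stand-ins, not the actual hpb API; only the idea of wrapping `io::Printer` and exposing `Emit()` comes from this CL:
```cpp
#include "absl/strings/string_view.h"
#include "google/protobuf/io/printer.h"

namespace hpb_generator_sketch {

// Illustrative stand-in for the hpb-specific info the Context carries.
enum class Backend { kUpb, kCppCompat };

// Context wraps io::Printer plus hpb-specific generation state so that all
// generators emit code through one object instead of a bespoke printer.
class Context final {
 public:
  Context(google::protobuf::io::Printer* printer, Backend backend)
      : printer_(printer), backend_(backend) {}

  // New call sites use io::Printer's canonical Emit() directly.
  void Emit(absl::string_view format) { printer_->Emit(format); }

  // Transitional helper keeping the old positional $N substitution
  // (declaration only in this sketch).
  void EmitLegacy(absl::string_view format, absl::string_view arg0);

  Backend backend() const { return backend_; }

 private:
  google::protobuf::io::Printer* printer_;
  Backend backend_;
};

}  // namespace hpb_generator_sketch
```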
PiperOrigin-RevId: 686198921
In certain cases, it is useful to share submessages across multiple parent messages.
proto2::cpp has a mechanism for this, so we add the hpb equivalent.
For this initial impl, we stipulate that the arenas must be exactly the same. We may explore broadening the constraint to allow for all fused arenas.
PiperOrigin-RevId: 681169537
The goal of the `names.h` convention is to have a single canonical place where a code generator can define the set of symbols it exports to other code generators, and a canonical place where the name mangling logic is implemented.
Each upb code generator now has its own `names.h` file defining the symbols that it owns & exports:
* `third_party/upb/upb_generator/c/names.h` (for `foo.upb.h` files)
* `third_party/upb/upb_generator/minitable/names.h` (for `foo.upb_minitable.h` files)
* `third_party/upb/upb_generator/reflection/names.h` (for `foo.upbdefs.h` files)
This is a significant improvement over the previous situation where the name mangling functions were co-mingled in `common.h`/`mangle.h`, or sprinkled throughout the generators, with no clear structure for which code generator owns which symbols.
With this structure in place, the visibility lists for the various `names.h` files provide a clear dependency graph for how different generators depend on each other. In general, we want to keep dependencies on the "C" code generator to a minimum, since it is the largest and most complicated of upb's generated APIs, and is also the most prone to symbol name clashes.
Note that upb's `names.h` headers are somewhat unusual, in that we do not want them to depend on C++'s reflection or upb's reflection. Most `names.h` headers in protobuf would use types like `proto2::Descriptor`, but we don't want upb to depend on C++ reflection, especially during its bootstrapping process. We also don't want to force users to build upb defs just to use these name mangling functions. So we use only plain string types like `absl::string_view` and `std::string`.
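To illustrate the flavor, a `names.h` entry point looks roughly like the sketch below, taking only plain string types. The function name is illustrative; the real headers own the exact set of exported manglers:
```cpp
#include <string>

#include "absl/strings/string_view.h"

namespace upb::generator {

// Maps a fully qualified proto message name (e.g. "google.protobuf.Any")
// to the C symbol prefix used in foo.upb.h (e.g. "google_protobuf_Any").
// Illustrative signature; no descriptor or reflection types are involved.
std::string CApiMessageType(absl::string_view full_name);

}  // namespace upb::generator
```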
PiperOrigin-RevId: 672397247
This yields several benefits:
1. The code no longer needs to be bootstrapped (since it no longer depends on upb reflection).
2. The upb code generator no longer depends on libprotobuf at all (except for `code_generator_lite.{h,cc}`, which is just one .cc file and has no deps).
PiperOrigin-RevId: 672280579
In this first stage, we rename the directory from protos_generator to hpb_generator, updating all necessary BUILD files and #includes.
PiperOrigin-RevId: 642600953
This makes the file layout a bit more consistent with the `protos ->
protos_generator` pattern. I also replaced the `upbc` namespace with
`upb::generator`.
PiperOrigin-RevId: 569264372
This change moves almost everything in the `upb/` directory up one level, so
that for example `upb/upb/generated_code_support.h` becomes just
`upb/generated_code_support.h`. The only exceptions I made to this were that I
left `upb/cmake` and `upb/BUILD` where they are, mostly because that avoids
conflict with other files and the current locations seem reasonable for now.
The `python/` directory is a little bit of a challenge because we had to merge
the existing directory there with `upb/python/`. I made `upb/python/BUILD` into
the BUILD file for the merged directory, and it effectively loads the contents
of the other BUILD file via `python/build_targets.bzl`, but I plan to clean
this up soon.
PiperOrigin-RevId: 568651768
A couple weeks ago we moved upb into the protobuf Git repo, and this change
continues the merger of the two repos by making them into a single Bazel repo.
This was mostly a matter of deleting upb's WORKSPACE file and fixing up a bunch
of references to reflect the new structure.
Most of the changes are pretty mechanical, but one thing that needed more
invasive changes was the Python script for generating CMakeLists.txt,
make_cmakelists.py. The WORKSPACE file it relied on no longer exists with this
change, so I updated it to hardcode the information it needed from that file.
PiperOrigin-RevId: 564810016
This is the second attempt to fix our Git history. This should allow
"git blame" to work correctly in the upb/ directory even though our
automation unexpectedly blew away that directory.
Update signatures to use T* / Ptr<T> consistently.
Cross-language blocks utility updated to use GetArena(T*).
Fixes arena_ for Proxy/CProxy: the ::Access class now consistently fills in arena_ (the arena the message was created in).
PiperOrigin-RevId: 543457599
Note: The code looks duplicated, but in the C case this is for performance. C++ and C may diverge in the future for certain methods.
PiperOrigin-RevId: 525826831
This CL changes the upb compiler to no longer depend on C++ protobuf libraries. upb now uses its own reflection libraries to implement its code generator.
# Key Benefits
1. upb can now use its own reflection libraries throughout the compiler. This makes upb more consistent and principled, and gives us more chances to dogfood our own C++ reflection API. This highlighted several parts of the C++ reflection API that were incomplete.
2. This CL removes code duplication that previously existed in the compiler. The upb reflection library has code to build MiniDescriptors and MiniTables out of descriptors, but prior to this CL the upb compiler could not use it. The upb compiler had a separate copy of this logic, and the compiler's copy of this logic was especially tricky and hard to maintain. This CL removes the separate copy of that logic.
3. This CL (mostly) removes upb's dependency on the C++ protobuf library. We still depend on `protoc` (the binary), but the runtime and compiler no longer link against C++'s libraries. This opens up the possibility of speeding up some builds significantly if we can use a prebuilt `protoc` binary.
# Bootstrap Stages
To bootstrap, we check in a copy of our generated code for `descriptor.proto` and `plugin.proto`. This allows the compiler to depend on the generated code for these two protos without creating a circular dependency. This code is checked in to the `stage0` directory.
The bootstrapping process is divided into a few stages. All `cc_library()`, `upb_proto_library()`, and `cc_binary()` targets that would otherwise be circular participate in this staging process. That currently includes:
* `//third_party/upb:descriptor_upb_proto`
* `//third_party/upb:plugin_upb_proto`
* `//third_party/upb:reflection`
* `//third_party/upb:reflection_internal`
* `//third_party/upbc:common`
* `//third_party/upbc:file_layout`
* `//third_party/upbc:plugin`
* `//third_party/upbc:protoc-gen-upb`
For each of these targets, we produce a rule for each stage (the logic for this is nicely encapsulated in Blaze/Bazel macros like `bootstrap_cc_library()` and `bootstrap_upb_proto_library()`, so the `BUILD` file remains readable). For example:
* `//third_party/upb:descriptor_upb_proto_stage0`
* `//third_party/upb:descriptor_upb_proto_stage1`
* `//third_party/upb:descriptor_upb_proto`
The stages are:
1. `stage0`: This uses the checked-in version of the generated code. The stage0 compiler is correct and outputs the same code as all other compilers, but it is unnecessarily slow because its protos were compiled in bootstrap mode. The stage0 compiler is used to generate protos for stage1.
2. `stage1`: The stage1 compiler is correct and fast, and therefore we use it in almost all cases (e.g. `upb_proto_library()`). However its own protos were not generated using `upb_proto_library()`, so its `cc_library()` targets cannot be safely mixed with `upb_proto_library()`, as this would lead to duplicate symbols.
3. final (no stage): The final compiler is identical to the `stage1` compiler. The only difference is that its protos were built with `upb_proto_library()`. This doesn't matter very much for the compiler binary, but for the `cc_library()` targets like `//third_party/upb:reflection`, only the final targets can be safely linked in by other applications.
# "Bootstrap Mode" Protos
The checked-in generated code is generated in a special "bootstrap" mode that is a bit different than normal generated code. Bootstrap mode avoids depending on the internal representation of MiniTables or the messages, at the cost of slower runtime performance.
Bootstrap mode only interacts with MiniTables and messages using public APIs such as `upb_MiniTable_Build()`, `upb_Message_GetInt32()`, etc. This is very important as it allows us to change the internal representation without needing to regenerate our bootstrap protos. This will make it far easier to write CLs that change the internal representation, because it avoids the awkward dance of trying to regenerate the bootstrap protos when the compiler itself is broken due to bootstrap protos being out of date.
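Concretely, a bootstrap-mode accessor looks roughly like the sketch below, going only through public lookup-by-number APIs (the header paths and field number here are illustrative):
```cpp
#include <stdint.h>

#include "upb/message/accessors.h"
#include "upb/mini_table/message.h"

// Bootstrap-mode style: resolve the field by its number at runtime through
// the public API, so the generated code never depends on MiniTable internals.
static int32_t GetFooBootstrap(const upb_Message* msg,
                               const upb_MiniTable* mini_table) {
  const upb_MiniTableField* field =
      upb_MiniTable_FindFieldByNumber(mini_table, /*number=*/1);
  return upb_Message_GetInt32(msg, field, /*default_val=*/0);
}
```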
The bootstrap generated code does have two downsides:
1. The accessors are less efficient, because they look up MiniTable fields by number instead of hard-coding the MiniTableField into the generated code.
2. It requires runtime initialization of the MiniTables, which costs CPU cycles at startup, and also allocates memory which is never freed. Per google3 rules this is not really a leak, since this memory is still reachable via static variables, but it is undesirable in many contexts. We could fix this part by introducing the equivalent of `google::protobuf::ShutdownProtobufLibrary()`.
These downsides are fine for the bootstrapping process, but they are reason enough not to enable bootstrap mode in general for all protos.
# Bootstrapping Always Uses OSS Protos
To enable smooth syncing between Google3 and OSS, we always use an OSS version of the checked-in generated code for `stage0`, even in google3.
This requires that the google3 code can be switched to reference the OSS proto names using a preprocessor define. We introduce the `UPB_DESC(xyz)` macro for this, which will expand into either `proto2_xyz` or `google_protobuf_xyz`. Any libraries used in `stage0` must use `UPB_DESC(xyz)` rather than refer to the symbol names directly.
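A sketch of what the macro amounts to; the guard name and selection mechanism here are hypothetical (in reality the choice is wired up by the build), only the two expansions are from this CL:
```cpp
// Hypothetical guard; the real selection happens via a build-time define.
#ifdef UPB_BOOTSTRAP_PROTO2_NAMES
#define UPB_DESC(sym) proto2_##sym           // google3 symbol names
#else
#define UPB_DESC(sym) google_protobuf_##sym  // OSS symbol names
#endif

// stage0 code refers to descriptor.proto types only through the macro:
// UPB_DESC(FieldDescriptorProto) expands to either
// proto2_FieldDescriptorProto or google_protobuf_FieldDescriptorProto.
```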
PiperOrigin-RevId: 501458451
Example:
```
bazel-out/k8-fastbuild/bin/protos_generator/tests/test_model.upb.proto.cc: In member function 'void protos_generator::test::protos::internal::TestModelV1Access::set_value4(size_t, int32_t)':
bazel-out/k8-fastbuild/bin/protos_generator/tests/test_model.upb.proto.cc:559:14: error: comparison of unsigned expression in '>= 0' is always true [-Werror=type-limits]
559 | assert(0 <= index && index < len);
```
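The warning fires because `index` is an unsigned `size_t`, so `0 <= index` is always true and trips `-Werror=type-limits`. A likely shape of the fix (a sketch, not the exact generated code):
```cpp
#include <cassert>
#include <cstddef>

// `index` and `len` stand in for the generated setter's parameters.
inline void CheckIndex(size_t index, size_t len) {
  // Before: assert(0 <= index && index < len);  // lower bound always true
  assert(index < len);  // After: only the upper bound needs checking.
}
```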
PiperOrigin-RevId: 489086133