* Put oneofs in the same table as fields.
Oneofs and fields are not allowed to have names that conflict,
so we might as well put them all in the same table. This also
allows a single lookup to find either a field or a oneof by
name.
Added support for OneofDef to Lua to allow testing of this.
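A minimal sketch of the idea (hypothetical types, with a linear
scan standing in for upb's real hash table): fields and oneofs
share one name-keyed table, and each entry is tagged so a single
lookup tells the caller which kind of def it found.

  #include <stdio.h>
  #include <string.h>

  typedef enum { ENTRY_FIELD, ENTRY_ONEOF } entry_kind;

  typedef struct {
    const char *name;  /* field or oneof name (no conflicts allowed) */
    entry_kind kind;   /* which kind of def this entry points to */
    void *def;         /* the fielddef or oneofdef itself */
  } entry;

  static int fstub, ostub;  /* stand-ins for a fielddef and a oneofdef */

  /* One probe of the shared table answers "field, oneof, or neither?". */
  static const entry *lookup(const entry *tab, size_t n, const char *name) {
    size_t i;
    for (i = 0; i < n; i++) {
      if (strcmp(tab[i].name, name) == 0) return &tab[i];
    }
    return NULL;
  }

  int main(void) {
    const entry tab[] = { {"id", ENTRY_FIELD, &fstub},
                          {"value", ENTRY_ONEOF, &ostub} };
    const entry *e = lookup(tab, 2, "value");
    if (e) printf("%s is a %s\n", e->name,
                  e->kind == ENTRY_FIELD ? "field" : "oneof");
    return 0;
  }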
* Addressed PR comments.
* Changed schema for JSON test to be defined in a .proto file.
Before, we had a lot of code to build these schemas manually,
which was verbose and made extending the schema difficult.
Now we can just write a .proto file and adding fields is easy.
To avoid making the tests depend on upbc (and thus Lua),
we check in the generated schema.
* Made protobuf-compiler a dependency of "make genfiles."
* For genfiles, download a recent protoc that can handle proto3.
* Only use the new protoc for genfiles.
* Split upb::Arena/upb::Allocator from upb::Environment.
This will allow arenas and allocators to be used
independently of environments, which will be important
for an upcoming change (a message representation).
Overall this design feels cleaner than the previous
Environment/SeededAllocator design.
As part of this change, moved all allocations in upb
to use a global allocator instead of hard-coding
malloc/free. This will allow injecting OOM faults
for more robust testing.
One place that doesn't use the global allocator is
the tracked ref code. Instead of its previous approach
of CHECK_OOM() after every malloc() or table insert, it
simply uses an allocator that does this automatically.
I moved Allocator/Arena/Environment into upb.h.
This seems principled, since these are the only types
in upb whose size is directly exposed to users: they
form the basis of the memory allocation strategy.
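As a rough sketch of the testing this enables, a fault-injecting
allocator can fail after a fixed number of allocations; the single
realloc-style callback below is an assumption about the interface,
not necessarily upb's exact upb_alloc definition:

  #include <stdlib.h>

  /* Hypothetical allocator: behaves like realloc/free until a
     countdown reaches zero, then reports OOM by returning NULL. */
  typedef struct fail_alloc {
    void *(*func)(struct fail_alloc *a, void *ptr,
                  size_t oldsize, size_t size);
    int allocs_until_failure;
  } fail_alloc;

  static void *failing_func(struct fail_alloc *a, void *ptr,
                            size_t oldsize, size_t size) {
    (void)oldsize;
    if (size == 0) { free(ptr); return NULL; }        /* free request */
    if (a->allocs_until_failure-- <= 0) return NULL;  /* injected OOM */
    return realloc(ptr, size);
  }

A test would install this as the global allocator with successively
larger countdowns and verify that upb fails cleanly at each step.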
* Cleaned up some header includes and fixed more malloc -> upb_gmalloc().
* Changes from PR review.
* Don't use UINTPTR_MAX or UINT64_MAX.
* Punt on adding line/file for now.
* We actually can't store (uint64_t)-1, update comment and test.
* JSON parser: always accept both name variants, flag controls which we generate.
My previous commit was based on wrong information about the
proto3 JSON spec. It turns out we need to accept both field
name formats all the time, and a runtime flag should control
which we generate.
* Documented the preserve_proto_fieldnames option.
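For example, a field declared as my_field (name illustrative) is
accepted under either spelling when parsing, and the flag only
affects what the printer emits:

  accepted when parsing:                     {"my_field": 1}  and  {"myField": 1}
  generated by default:                      {"myField": 1}
  generated with preserve_proto_fieldnames:  {"my_field": 1}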
* Fix bugs in PR.
The new FileDef is entirely optional: MessageDef/EnumDef can
still exist on their own. But a FileDef can represent a def's
file when it is desirable to do so (e.g. for code generators).
This approach will require that we change the way we handle
extensions. But I think it will be a good change overall.
Specifically, we previously handled extensions by duplicating
the extended message and then adding the extension as a regular
field to the duplicated message. This required also duplicating
any messages that could reach the extended message.
In the new world we will need a way of declaring and looking up
extensions separately from the message being extended.
This change also involves some notable changes to the generated
code:
- files are now called foo.upbdefs.h instead of foo.upb.h.
This reflects the fact that we may generate several
different output files for a .proto file, and this one is just
for defs.
- we no longer generate selectors in the .h file.
- the upbdefs.c no longer vends a SymbolTable. Now it vends the
individual messages (and possibly a FileDef later). I think this
will compose better once we can generate files where one
generated file imports another.
We also make the descriptor reader vend a list of FileDefs now.
This is the best conceptual match for parsing a FileDescriptorSet.
Prior to this change, a parse error was guaranteed to be
reported with a short byte count. Now the two concepts are
more orthogonal: there are cases where the entire input is
consumed even though an error was encountered.
Prior to this change:
  parse(buf, len) -> len + N
...would indicate that the next N bytes of the input are not
needed, *and* would advance the decoding position by this
much.
After this change:
  parse(buf, len) -> len + N
  parse(NULL, N) -> N
...can be used to achieve the same thing, but skipping the
N bytes is no longer performed implicitly by the parser; the
user does it explicitly (or not at all). A user that doesn't
want/need to skip can just say:
  parsed = parse(buf, len);
  if (parsed < len) {
    // Handle suspend, advance stream by "parsed".
  } else {
    // Stream was advanced by "len" (even if parsed > len).
  }
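A caller that does want to honor the skip can do something like
(same illustrative parse() as above):

  parsed = parse(buf, len);
  if (parsed > len) {
    // The parser doesn't need the next (parsed - len) bytes: seek
    // past them in the underlying stream, then tell the parser
    // they were skipped.
    parse(NULL, parsed - len);
  }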
Updated unit tests to test this new behavior, and refactored
test utility code a bit to support it.
A large part of this C89 port consists of surface-level
changes, like moving variable declarations to the
top of the block.
However there are a few more substantial things too:
- moved internal-only struct definitions to a separate
file (structdefs.int.h), for greater encapsulation
and ABI compatibility.
- removed the UPB_UPCAST macro, since it requires access
to the internal-only struct definitions. Replaced uses
with calls to inline, type-safe casting functions (see
the sketch after this list).
- removed the UPB_DEFINE_CLASS/UPB_DEFINE_STRUCT macros.
Class and struct definitions are now more explicit -- you
get to see the actual class/struct keywords in the source.
The casting convenience functions have been moved into
UPB_DECLARE_DERIVED_TYPE() and UPB_DECLARE_DERIVED_TYPE2().
- the new way that we duplicate base methods in derived types
is also more convenient and requires less duplication.
It is also less greppable, but hopefully that is not
too big a problem.
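A sketch of the casting-function approach referenced above (the
types and layout are illustrative, not the real upb internals):

  /* The derived struct embeds its base as the first member; a small
     casting function replaces the old UPB_UPCAST macro, so the
     compiler checks the argument type instead of accepting any
     pointer. */
  typedef struct { int refcount; } base;
  typedef struct { base b; int extra; } derived;

  static base *derived_upcast(derived *d) { return &d->b; }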
Compiler flags (-std=c89 -pedantic) should help to rigorously
enforce that the code is free of C99-isms.
A few functions are not available in C89 (strtoll). There
are temporary, hacky solutions in place.
Changes the data layout of tables slightly so that string
keys are prefixed with their size, rather than the size
being inline in the table itself.
This has a few benefits:
1. inttables shrink a bit, because there is no longer a wasted
and unused size field sitting in them.
2. This avoids the need to have a union in the table. This is
important for an impending C89 port of upb, since C89 has
literally no way of statically initializing a non-first union
member.
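As a sketch of the new layout (illustrative, not the exact upb
code), the key's length sits immediately in front of its bytes,
so the table entry stores only a pointer:

  #include <stdlib.h>
  #include <string.h>

  /* Allocate a string key with its size stored just before the
     data; the table keeps only the returned pointer. */
  static char *newstrkey(const char *s, size_t len) {
    char *buf = malloc(sizeof(size_t) + len + 1);
    if (!buf) return NULL;
    memcpy(buf, &len, sizeof(len));        /* size prefix */
    memcpy(buf + sizeof(size_t), s, len);  /* key bytes */
    buf[sizeof(size_t) + len] = '\0';
    return buf + sizeof(size_t);
  }

  static size_t strkey_len(const char *key) {
    size_t len;
    memcpy(&len, key - sizeof(size_t), sizeof(len));
    return len;
  }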
This change has several parts:
1. Resurrected tools/upbc. The code was all there but the build was
broken for open-source. Now you can type "make tools/upbc" and
it will build all necessary Lua modules and create a robust shell
script for running upbc.
2. Changed Lua module loading to no longer rely on OS-level .so
dependencies. The net effect of this is that you now only need
to set LUA_PATH and LUA_CPATH; setting LD_LIBRARY_PATH or rpaths
is no longer required. Downside: this drops compatibility with
Lua 5.1, since it depends on a feature that only exists in Lua >=5.2
(and LuaJIT).
3. Since upbc works again, I fixed the re-generation of the descriptor
files (descriptor.upb.h, descriptor.upb.c). "make genfiles" will
re-generate these as well as the JIT code generator.
4. Added a Travis test target that ensures that the checked-in generated
files are not out of date. I would do this for the Ragel generated
file also, but we can't count on all versions of Ragel to necessarily
generate identical output.
5. Changed Makefile to no longer automatically run Ragel to regenerate
the JSON parser. This is unfortunate, because it's convenient
when you're developing the JSON parser. However, "git clone"
sometimes skews timestamps slightly, so "make" thinks it needs
to regenerate these files after a fresh clone. This would normally
be harmless, but if the user doesn't have Ragel installed, it makes
the build fail completely. So now you have to explicitly regenerate
the Ragel output. If you want to you can uncomment the auto-generation
during development.
This is a sync of our internal development of JSON parsing and
serialization. It implements native understanding of MapEntry
submessages, so that map fields with (key, value) pairs are serialized
as JSON maps (objects) natively rather than as arrays of objects with
'key' and 'value' fields. The parser also now understands how to emit
handler calls corresponding to MapEntry objects when processing a map
field.
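For example, given a map field like map<string, int32> counts = 1
(names illustrative), the two serialized forms are:

  as repeated MapEntry objects:  {"counts": [{"key": "a", "value": 1},
                                             {"key": "b", "value": 2}]}
  as a native JSON map:          {"counts": {"a": 1, "b": 2}}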
This sync also picks up a bugfix in `table.c` to handle an alloc-failed
case.
This change adds support for a OneofDef (upb_oneofdef), which represents
a 'oneof' as introduced by Protocol Buffers. This is semantically a
union type that contains fields and in turn may be added to a
MessageDef. This change does not alter parsing or the handler
abstraction in any way, because a oneof has impact only at a higher
semantic level (i.e., any sort of storage of the fields in a message
object), which is user-specific with respect to upb.
- Added a JSON test that round-trips (parses then re-serializes) several
test messages, ensuring that the re-serialized form matches the
original exactly.
- Added support for printing and parsing symbolic enum names (rather than
integer values) in JSON.
- Updated JSON printer to properly handle string fields that come in
multiple pieces. ('bytes' fields still do not support this, and this
work is more challenging because it requires making the base64 encoder
resumable. Base64 encoding is not separable at an input-byte
granularity, unlike string escaping.)
- Fixed a < vs. <= bug in UTF-8 encoding generation (oops).
Checking in this precompiled descriptor avoids requiring
protoc to be installed just to build/run the basic tests.
When upb has its own .proto file parser we should
be able to remove this precompiled version from the
repository.
Notable changes:
- We now only build things by default that require
no dependencies. So you can build upb even if you
don't have Lua or Google protobuf installed.
- Checked in a pre-built version of the JIT, so you
don't need Lua installed at build time to run DynASM.
It will still notice if you change the .dasc file and
attempt to re-run DynASM in that case.
- The build system now builds all modules of upb into
separate libraries, reflecting the modularity that
is already inherent in upb's design. This should
make it easier to trim the fat.
- Removed the GDB JIT interface. I wasn't using it
much; using a .so is easier and more robust.