upb_msg was trying to be general enough that it could either live in
an arena or be allocated with malloc()/free(). This was too much
complexity for too little benefit. We should commit to just saying
that upb_msg is arena-only.
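For illustration, a minimal sketch of the arena-only usage this
implies. The names below (upb_arena_new(), upb_msg_new(),
upb_arena_free()) are assumptions about the API shape, not exact
signatures:

  #include "upb/msg.h"

  void example(const upb_msglayout *layout) {
    /* All allocation goes through the arena. */
    upb_arena *arena = upb_arena_new();

    /* The message's memory comes from the arena; there is no
     * upb_msg_free(). */
    upb_msg *msg = upb_msg_new(layout, arena);

    /* ... parse into or manipulate msg ... */
    (void)msg;

    /* Freeing the arena frees the message and everything allocated
     * alongside it (strings, submessages). */
    upb_arena_free(arena);
  }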
I also ripped out the code to glue upb_msg to the existing
handlers-based encoder/decoder. upb_msg has its own, small, simple
encoder/decoder. I'm trying to whittle down upb_msg to a small
and simple core.
I updated the Lua extension for these changes. Lua needs some more
work to properly create arenas per message. For now I just created
a single global arena.
Also includes an implementation of the conformance tests
to show what the API usage will look like.
There is still a lot to do, and some things are still broken
(oneofs, repeated fields, etc.), but it's a good start.
This involves:
- remove upb_msglayout -> upb_msgfactory dependency.
- remove upb_msglayout -> upb_msgdef dependency (in progress).
- make upb_msglayout use a representation that can be
statically initialized by generated code.
The goal here is that upb_msglayout becomes a kind of "descriptor
lite": it contains enough data to parse and serialize protobufs
and manipulate a upb_msg in memory, while being far smaller and
simpler than a full descriptor. It also does not include field
names, which can be a benefit for applications that do not want
to leak field names.
Generated code can then create a upb_msglayout, and do most things
without ever needing to construct full descriptors/defs if they
don't want to.
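To make the direction concrete, here is a hedged sketch of what
statically initialized layout data emitted by generated code might
look like. The struct and field names are hypothetical (the real
upb_msglayout representation differs); the point is that the layout
is plain data the code generator can emit at compile time, with no
runtime construction, no upb_msgdef, and no field names:

  #include <stdint.h>

  /* Hypothetical per-field layout entry. */
  typedef struct {
    uint32_t number;   /* field number on the wire */
    uint16_t offset;   /* offset of the field within the upb_msg */
    uint8_t  type;     /* descriptor type */
    uint8_t  label;    /* optional / required / repeated */
  } example_fieldlayout;

  /* Hypothetical message layout: just the data needed to parse,
   * serialize, and address fields in memory. */
  typedef struct {
    const example_fieldlayout *fields;
    uint16_t field_count;
    uint16_t size;     /* in-memory size of the message */
  } example_msglayout;

  /* What a generator could emit for a two-field message. */
  static const example_fieldlayout MyMessage_fields[] = {
    {1, 0, 5 /* int32 */, 1 /* optional */},
    {2, 8, 12 /* bytes */, 1 /* optional */},
  };

  static const example_msglayout MyMessage_layout = {
    MyMessage_fields, 2, 24,
  };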
Many things have changed and been simplified.
The memory-management story for upb_def and upb_handlers
is much more robust; upb_def and upb_handlers should be
fairly stable interfaces now. There is still much work
to do for the runtime component (upb_sink).
Many improvements, too many to mention. One significant
perf regression warrants investigation:
omitfp.parsetoproto2_googlemessage1.upb_jit: 343 -> 252 (-26.53)
plain.parsetoproto2_googlemessage1.upb_jit: 334 -> 251 (-24.85)
A 25% regression for this benchmark is bad, but since I don't think
there's any fundamental design issue that caused it, I'm going to
go ahead with the commit anyway. I can investigate and fix it later.
Other benchmarks were neutral or showed slight improvement.
Added a upb_byteregion that tracks a region of
the input buffer; decoders use this instead of
using a upb_bytesrc directly. upb_byteregion
is also used as the way of passing a string to
a upb_handlers callback. This symmetry makes
decoders compose better; if you want to take
a parsed string and decode it as something else,
you can take the string directly from the callback
and feed it as input to another parser.
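A hedged sketch of the composition this enables; the callback and
decoder signatures below are assumptions, not the actual upb API:

  typedef struct upb_byteregion upb_byteregion;  /* provided by upb */

  /* Some other decoder that also takes a upb_byteregion as input. */
  void inner_decoder_decode(void *inner_decoder, upb_byteregion *input);

  /* String-field handler for the outer message: because the callback
   * hands us a upb_byteregion (the same type decoders consume), we
   * can feed the parsed string straight into another decoder without
   * copying it. */
  static void *on_embedded_payload(void *closure, upb_byteregion *str) {
    void *inner_decoder = closure;
    inner_decoder_decode(inner_decoder, str);
    return closure;
  }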
A commented-out version of a pinning interface
is present; I decline to actually implement it
(and accept its extra complexity) until/unless
it is clear that it is actually a win. But it
is included as a proof-of-concept, to show that
it fits well with the existing interface.
This leads to a major (20-40%) improvement in the parsetoproto2
benchmark with small messages. We are now faster than proto2 in all
apples-to-apples comparisons, at least given the (admittedly
limited) set of benchmarks in this source tree.
Includes are now via upb/foo.h.
Files specific to the protobuf format are
now in upb/pb (the core library is concerned
with message definitions, handlers, and
byte streams, but knows nothing about any
particular serialization format).
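For example, a consumer's includes now look roughly like this
(the specific header names are illustrative):

  /* Core library: defs, handlers, byte streams. */
  #include "upb/def.h"
  #include "upb/handlers.h"

  /* Protobuf-format-specific pieces live under upb/pb/. */
  #include "upb/pb/decoder.h"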
I'm realizing that basically all upb objects
will need to be refcounted to be sharable
across languages, but *not* messages, which
are on their way out so that we can get out
of the business of data representations.
Things which must be refcounted:
- encoders, decoders
- handlers objects
- defs
Startseq/endseq handlers are called at the beginning
and end of a sequence of repeated values. Protobuf
does not really have direct support for this (repeated
primitive fields do not delimit "begin" and "end" of
the sequence) but we can infer them from the bytestream.
The benefit of supporting them explicitly is that they
get their own stack frame and closure, so we can avoid
having to find the array's address over and over and
decide whether it needs to be initialized.
This will also pave the way for better support of JSON,
which does have explicit "startseq/endseq" markers: [].
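A hedged sketch of the idea; the handler signatures here are
assumptions rather than the actual upb handler types. The startseq
handler locates (and lazily initializes) the array once and returns
it as the closure, so the per-element handler never looks it up:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdlib.h>

  typedef struct {
    int32_t *data;
    size_t len, cap;
  } int_array;

  typedef struct {
    int_array repeated_ids;   /* a repeated int32 field */
  } my_message;

  /* startseq: runs once at the start of the sequence.  Finds the
   * array, initializes it if needed, and returns it as the closure
   * for the element handlers. */
  static void *start_ids(void *closure) {
    my_message *msg = closure;
    if (!msg->repeated_ids.data) {
      msg->repeated_ids.cap = 8;
      msg->repeated_ids.len = 0;
      msg->repeated_ids.data =
          malloc(msg->repeated_ids.cap * sizeof(int32_t));
    }
    return &msg->repeated_ids;
  }

  /* value handler: runs once per element; the closure is already
   * the array, so there is no per-element lookup. */
  static bool append_id(void *closure, int32_t val) {
    int_array *arr = closure;
    if (arr->len == arr->cap) {
      arr->cap *= 2;
      arr->data = realloc(arr->data, arr->cap * sizeof(int32_t));
    }
    arr->data[arr->len++] = val;
    return true;
  }

  /* endseq: runs once at the end of the sequence. */
  static bool end_ids(void *closure) {
    (void)closure;
    return true;
  }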
It can successfully parse SpeedMessage1.
Preliminary results: 750MB/s on a 2.4GHz Core2.
That is 2.5x proto2's speed.
This isn't apples-to-apples, because
proto2 is parsing to a struct and we are
just doing stream parsing, but for apps
that are currently using proto2, this is the
improvement they would see if they could
move to stream-based processing.
Unfortunately perf-regression-test.py is
broken, and I'm not 100% sure why. It would
be nice to fix it first (to ensure that
there are no performance regressions for
the table-based decoder) but I'm really
impatient to get the JIT checked in.
Simplified some of the semantics around
the decoder's data structures, in anticipation
of sharing them between the regular C decoder
and a JIT-ted decoder.