* Implement JSON decoding for the Any message.
The type URL may not appear as the first value in the JSON object.
As a result, other fields cannot be resolved before the type URL is
resolved. To handle that, this change caches the start and end
positions of unparsed values and resolves them in end_any_object once
the type URL has been resolved.
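A minimal sketch of the caching idea follows; the names (any_frame,
cache_value, resolve_cached_values) are illustrative only and are not
the actual upb parser internals:

  #include <stddef.h>
  #include <stdio.h>

  /* Byte offsets of one unparsed JSON value inside the Any object. */
  typedef struct {
    size_t start;
    size_t end;
  } cached_span;

  typedef struct {
    cached_span spans[64];  /* values seen before the type URL        */
    size_t num_spans;
    int have_type_url;      /* becomes nonzero once "@type" is parsed */
  } any_frame;

  /* Called for a member whose type is not yet known: remember where its
   * raw text lives instead of parsing it now. */
  void cache_value(any_frame *f, size_t start, size_t end) {
    if (f->num_spans < 64) {
      f->spans[f->num_spans].start = start;
      f->spans[f->num_spans].end = end;
      f->num_spans++;
    }
  }

  /* Called from end_any_object: once the type URL has been resolved,
   * re-parse each cached span against the now-known message type. */
  void resolve_cached_values(any_frame *f, const char *buf,
                             void (*parse_value)(const char *, size_t)) {
    size_t i;
    if (!f->have_type_url) return;  /* nothing to do until "@type" is known */
    for (i = 0; i < f->num_spans; i++) {
      parse_value(buf + f->spans[i].start,
                  f->spans[i].end - f->spans[i].start);
    }
    f->num_spans = 0;
  }

  static void print_value(const char *p, size_t len) {
    printf("resolving: %.*s\n", (int)len, p);
  }

  int main(void) {
    const char *json = "{\"value\": 123, \"@type\": \"...\"}";
    any_frame f;
    f.num_spans = 0;
    f.have_type_url = 0;
    cache_value(&f, 1, 13);   /* span of the "value" member's text */
    f.have_type_url = 1;      /* "@type" finally seen              */
    resolve_cached_values(&f, json, print_value);
    return 0;
  }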
* Handle Any in switch
* Update json parser size
* Fix comments
* Sync upstream
* Add dependency on upb_pb for upb_json
* Debug failed test
* Fix cmake
* Update test generated files
* Remove debug tests
* Changed schema for JSON test to be defined in a .proto file.
Before, we had lots of code to build these schemas manually,
but this was verbose and made it difficult to extend the
schema. Now we can just write a .proto file, and
adding fields is easy.
To avoid making the tests depend on upbc (and thus Lua)
we check in the generated schema.
* Made protobuf-compiler a dependency of "make genfiles."
* For genfiles, download a recent protoc that can handle proto3.
* Only use the new protoc for genfiles.
* JSON parser: always accept both name variants; a flag controls which we generate.
My previous commit was based on wrong information about the
proto3 JSON spec. It turns out we need to accept both field
name formats (lowerCamelCase and the original proto field name)
all the time, and a runtime flag should control which we
generate (see the sketch below).
* Documented the preserve_proto_fieldnames option.
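For illustration, here is a standalone sketch of the lowerCamelCase
conversion behind the generated "JSON name" variant; to_json_name is a
hypothetical helper, not the converter that ships in upb, and the
parser accepts either spelling regardless of the flag:

  #include <ctype.h>
  #include <stdio.h>

  /* Convert a proto field name such as "my_field_name" to its JSON
   * (lowerCamelCase) variant "myFieldName".  dst must hold at least
   * strlen(src) + 1 bytes. */
  void to_json_name(const char *src, char *dst) {
    int upper_next = 0;
    for (; *src; src++) {
      if (*src == '_') {
        upper_next = 1;  /* drop the underscore */
      } else if (upper_next) {
        *dst++ = (char)toupper((unsigned char)*src);
        upper_next = 0;
      } else {
        *dst++ = *src;
      }
    }
    *dst = '\0';
  }

  int main(void) {
    char buf[64];
    to_json_name("optional_int32", buf);
    printf("%s\n", buf);  /* prints "optionalInt32" */
    return 0;
  }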
* Fix bugs in PR.
Prior to this change:
  parse(buf, len) -> len + N
...would indicate that the next N bytes of the input are not
needed, *and* would advance the decoding position by this
much.
After this change:
  parse(buf, len) -> len + N
  parse(NULL, N) -> N
...can be used to achieve the same thing, but the skip is no longer
performed implicitly. A user that doesn't want or need to skip can
just write:
  parsed = parse(buf, len);
  if (parsed < len) {
    // Handle suspend, advance stream by "parsed".
  } else {
    // Stream was advanced by "len" (even if parsed > len).
  }
Updated unit tests to test this new behavior, and refactored
test utility code a bit to support it.
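To show the caller's side of the new contract, here is a minimal
sketch; parse() and feed_chunk() are stand-ins for the real entry
points, and the stub's return values are invented purely for
demonstration:

  #include <stddef.h>
  #include <stdio.h>

  /* Stand-in for the parser described above: returns how many bytes were
   * consumed; a value > len means "I also will not need the next
   * (return - len) bytes".  This stub pretends it can always skip 4. */
  static size_t parse(const char *buf, size_t len) {
    if (buf == NULL) {
      return len;  /* skip request: acknowledge the discarded bytes */
    }
    return len + 4;
  }

  /* Feed one chunk, explicitly opting in to the skip.  The caller, not
   * the parser, now decides whether the extra bytes are dropped. */
  static size_t feed_chunk(const char *buf, size_t len) {
    size_t parsed = parse(buf, len);
    if (parsed < len) {
      /* Suspended: advance the stream by "parsed" and retry later. */
      return parsed;
    }
    if (parsed > len) {
      parse(NULL, parsed - len);  /* explicitly skip the unneeded bytes */
    }
    return len;  /* stream advanced by "len" in either case */
  }

  int main(void) {
    const char chunk[] = "example input";
    printf("advanced %zu bytes\n", feed_chunk(chunk, sizeof(chunk) - 1));
    return 0;
  }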
This is a sync of our internal development of JSON parsing and
serialization. It implements native understanding of MapEntry
submessages, so that map fields with (key, value) pairs are serialized
as JSON maps (objects) natively rather than as arrays of objects with
'key' and 'value' fields. The parser also now understands how to emit
handler calls corresponding to MapEntry objects when processing a map
field.
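To make the shape difference concrete, here is a small, self-contained
illustration; the schema and values are invented for the example and
this is not code from the upb printer:

  #include <stdio.h>

  /* Example schema (illustrative only):
   *     message TestMap { map<string, int32> counts = 1; }
   *
   * As an array of MapEntry objects (old behavior):
   *     {"counts": [{"key": "a", "value": 1}, {"key": "b", "value": 2}]}
   * As a native JSON object (new behavior):
   *     {"counts": {"a": 1, "b": 2}}
   */
  int main(void) {
    const char *keys[] = {"a", "b"};
    const int vals[] = {1, 2};
    size_t i;

    /* Emit the map field in the new, native object form. */
    printf("{\"counts\": {");
    for (i = 0; i < 2; i++) {
      printf("%s\"%s\": %d", i ? ", " : "", keys[i], vals[i]);
    }
    printf("}}\n");
    return 0;
  }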
This sync also picks up a bugfix in `table.c` to handle an alloc-failed
case.
- Added a JSON test that round-trips (parses then re-serializes) several
test messages, ensuring that the re-serialized form matches the
original exactly.
- Added support for printing and parsing symbolic enum names (rather than
integer values) in JSON.
- Updated JSON printer to properly handle string fields that come in
multiple pieces. ('bytes' fields still do not support this, and this
work is more challenging because it requires making the base64 encoder
resumable. Base64 encoding is not separable at an input-byte
granularity, unlike string escaping; see the sketch after this list.)
- Fixed a < vs. <= bug in UTF-8 encoding generation (oops).
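As an illustration of why resuming mid-field needs extra state (this is
a generic sketch, not the upb encoder): base64 maps every 3 input bytes
to 4 output characters, so a printer that can be handed a bytes field in
pieces has to carry up to two leftover input bytes between calls.

  #include <stddef.h>
  #include <stdio.h>

  static const char b64[] =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

  /* State carried across pieces: a partially filled 3-byte input group. */
  typedef struct {
    unsigned char group[3];
    size_t filled;  /* how many of group[] are valid (0..2 between calls) */
  } b64_state;

  /* Encode one piece of a field; may be called repeatedly. */
  void b64_push(b64_state *s, const unsigned char *in, size_t len, FILE *out) {
    while (len > 0) {
      s->group[s->filled++] = *in++;
      len--;
      if (s->filled == 3) {
        unsigned v = (s->group[0] << 16) | (s->group[1] << 8) | s->group[2];
        fputc(b64[(v >> 18) & 63], out);
        fputc(b64[(v >> 12) & 63], out);
        fputc(b64[(v >> 6) & 63], out);
        fputc(b64[v & 63], out);
        s->filled = 0;
      }
    }
  }

  /* Emit the final (possibly padded) group once the field really ends. */
  void b64_finish(b64_state *s, FILE *out) {
    if (s->filled > 0) {
      unsigned v = s->group[0] << 16;
      if (s->filled == 2) v |= s->group[1] << 8;
      fputc(b64[(v >> 18) & 63], out);
      fputc(b64[(v >> 12) & 63], out);
      fputc(s->filled == 2 ? b64[(v >> 6) & 63] : '=', out);
      fputc('=', out);
      s->filled = 0;
    }
  }

  int main(void) {
    b64_state s = {{0, 0, 0}, 0};
    /* The same bytes arriving in two pieces still encode correctly. */
    b64_push(&s, (const unsigned char *)"foo", 3, stdout);
    b64_push(&s, (const unsigned char *)"bar", 3, stdout);
    b64_finish(&s, stdout);
    fputc('\n', stdout);  /* prints "Zm9vYmFy" */
    return 0;
  }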