- Tighten up Infinity/NaN handling with respect to whitespace (and test casing)
- Validate that values parsed from a JSON number are genuinely integers (accepting that 1.0000000000000000001 == 1 as a double, which we can't detect...) - see the sketch after this list
- Allow exponents and decimal points in string representations
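To make the integer validation concrete, here's a rough sketch of the kind of check I mean - the class/method names and exception type below are just placeholders, not the actual parser code:

```csharp
using System;

// Sketch only: the real parser's names and exception types may differ.
internal static class JsonNumberChecks
{
    // A value parsed from a JSON number is only acceptable for an integer field
    // if it is finite and has no fractional part. Anything beyond double precision
    // (e.g. 1.0000000000000000001) is indistinguishable from 1, so it passes.
    internal static void ValidateInteger(double value)
    {
        if (double.IsNaN(value) || double.IsInfinity(value) || Math.Floor(value) != value)
        {
            throw new FormatException("Value parsed from JSON is not an integer: " + value);
        }
    }
}
```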
This required reworking the tokenizer to allow for a "replaying" tokenizer, essentially to handle the case where the @type value comes after the data itself. The rework is nice in some ways (all the push-back and object-depth logic is in one place), but token push-back is a little fragile when using the replay tokenizer. It'll be fine for the scenario we need it for, but we should be careful...
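For context, the replay idea is roughly this (a sketch only, not the real JsonTokenizer API): while looking for @type, the parser records the tokens it reads, and if @type turns out to come after the data, those tokens are fed back through a tokenizer that simply replays them:

```csharp
using System.Collections.Generic;

// Sketch of the "replay" idea only; the real JsonTokenizer carries more state
// (push-back buffer, object depth tracking) and has different member names.
internal sealed class ReplayTokenizer<TToken>
{
    private readonly List<TToken> recordedTokens;
    private int nextIndex;

    internal ReplayTokenizer(List<TToken> recordedTokens)
    {
        this.recordedTokens = recordedTokens;
    }

    // Replays tokens that were recorded while hunting for @type,
    // so the value can be parsed again once the message type is known.
    internal TToken Next()
    {
        return recordedTokens[nextIndex++];
    }

    // Push-back is just "rewind by one" here; this is the part that gets
    // fragile if a token is ever pushed back more than once.
    internal void PushBack()
    {
        nextIndex--;
    }
}
```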
This is only thrown directly by JsonTokenizer, but surfaces from JsonParser as well. I've added doc comments to hopefully make everything clear.
The exception is actually thrown by the reader within JsonTokenizer, in anticipation of keeping track of the location within the document; that change isn't part of this PR, though.
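As a usage-level illustration (assuming the exception type discussed here is InvalidJsonException, the entry point is JsonParser.Default.Parse&lt;T&gt;, and MyMessage is some generated message type), callers only ever interact with the parser:

```csharp
using Google.Protobuf;

// Illustration only: InvalidJsonException and JsonParser.Default.Parse<T> are
// assumed names, and MyMessage stands in for any generated message type.
public static class ParseExample
{
    public static MyMessage ParseOrNull(string json)
    {
        try
        {
            return JsonParser.Default.Parse<MyMessage>(json);
        }
        catch (InvalidJsonException e)
        {
            // Thrown by the tokenizer (via its reader) for structurally invalid JSON,
            // but it surfaces from JsonParser, so this is where callers catch it.
            System.Console.Error.WriteLine("Invalid JSON: " + e.Message);
            return null;
        }
    }
}
```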
This includes all the well-known types except Any.
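For example, Duration and Timestamp parse from their JSON string forms (again assuming the JsonParser.Default.Parse&lt;T&gt; entry point; this is an illustration rather than code from the PR):

```csharp
using Google.Protobuf;
using Google.Protobuf.WellKnownTypes;

// Illustration of well-known-type parsing; the entry point is assumed as above.
public static class WellKnownTypeExample
{
    public static void Run()
    {
        Duration duration = JsonParser.Default.Parse<Duration>("\"3.500s\"");
        Timestamp timestamp = JsonParser.Default.Parse<Timestamp>("\"2016-01-01T00:00:00Z\"");
        System.Console.WriteLine($"{duration.Seconds}s {duration.Nanos}ns, {timestamp}");
    }
}
```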
Some aspects are likely to require further work once the JSON parsing expectations have been hammered out in more detail; some of these already have "ignored" tests.
Note that the choice *not* to use Json.NET was made for two reasons:
- Going from 0 dependencies to 1 dependency is a big hit, and there's not much benefit here
- Json.NET parses more leniently than we'd want (see the example below); accommodating that would be nearly as much work as writing the tokenizer
This only really affects the JsonTokenizer, which could be replaced by Json.NET. The JsonParser code would be about the same length with Json.NET... but I wouldn't be as confident in it.
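To illustrate the leniency point (this reflects Json.NET's defaults as I understand them, not code from this PR): Json.NET happily accepts input that a strict proto3 JSON parser should reject, such as unquoted property names, single-quoted strings and comments:

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;

// Demonstrates Json.NET's default leniency (to the best of my knowledge);
// a strict parser should reject all of this input.
public static class LeniencyExample
{
    public static void Run()
    {
        // Unquoted property name, single-quoted string, and a comment:
        // none of this is valid JSON, but Json.NET parses it by default.
        var parsed = JsonConvert.DeserializeObject<Dictionary<string, string>>(
            "{ name: 'foo' /* comment */ }");
        System.Console.WriteLine(parsed["name"]); // "foo"
    }
}
```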