JsonParser should be capable of parsing keys not surrounded by "
JsonParser should be capable of parsing keys without ", treating them as type String (as is done in JS), rather than rejecting them and requiring them to be surrounded by ".
As in:
"{\"10729945\":{2:null},\"10730556\":{1:null},\"10729026\":{3:null}}"
P.S. The benefit is 2 fewer bytes per key (e.g., 1 vs. "1") in network traffic, a small performance & bandwidth optimization that would improve a lot of apps at once (given that this is a very generic thing).
The above example currently results in:
javax.json.stream.JsonParsingException: Invalid token=NUMBER at (line no=1, column no=14, offset=13). Expected tokens are: [STRING]
    at org.glassfish.json.JsonParserImpl.parsingException(JsonParserImpl.java:238)
    at org.glassfish.json.JsonParserImpl.access$1200(JsonParserImpl.java:61)
    at org.glassfish.json.JsonParserImpl$ObjectContext.getNextEvent(JsonParserImpl.java:291)
    at org.glassfish.json.JsonParserImpl$StateIterator.next(JsonParserImpl.java:172)
    at org.glassfish.json.JsonParserImpl.next(JsonParserImpl.java:149)
    at ...
One can achieve equivalent shorter JSON using an array of numbers instead, but if the use case requires only an AbstractMap.SimpleImmutableEntry, then it's better to use that rather than an array of numbers, for maintainability purposes, etc.
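The array-of-numbers alternative could look like the following sketch (the exact [key, value] pair layout is my assumption, not something from the report):

```javascript
// Encode the entries as [key, value] pairs of plain numbers:
// this is valid JSON, and no quoted keys are needed at all.
const compact = '[[10729945,2],[10730556,1],[10729026,3]]';
const entries = JSON.parse(compact);

// Rebuild a key -> value mapping on the consumer side:
const map = new Map(entries);
console.log(map.get(10730556)); // 1
```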
Of course the same should be applicable to the writing side: JSON should be optimized there first.
The approach I like currently requires a custom MessageBodyWriter & MessageBodyReader. But I think this optimization should be the default behavior.
P.S. The current implementation is not optimized enough with regard to both writing & reading.
JsonParser should be capable of parsing keys without ", assuming that they are of type String
But ECMA-404 says: "A string is a sequence of Unicode code points wrapped with quotation marks (U+0022)."
(as is done in JS)
What do you mean by JS? Rhino:
js> JSON.parse('{ name: "Hercule" }')
js: uncaught JavaScript runtime exception: SyntaxError: Unexpected token in object literal
js> JSON.parse('{ "name": "Hercule" }')
[object Object]
Firefox:
> JSON.parse('{ name: "Hercule"}')
Uncaught SyntaxError: JSON.parse: expected property name or '}' at line 1 column 3 of the JSON data
<anonymous> debugger eval code:1
> JSON.parse('{ "name": "Hercule"}')
Object { name: "Hercule" }
Chromium:
> JSON.parse('{ name: "Hercule" }')
VM93:1 Uncaught SyntaxError: Unexpected token n in JSON at position 2
at JSON.parse (<anonymous>)
at <anonymous>:1:6
(anonymous) @ VM92:1
> JSON.parse('{ "name": "Hercule" }')
{name: "Hercule"}
Good point: this enhancement in jsonp would require & depend on a similar enhancement in JSON.parse(...). So I guess this would be a long-term thing, if it ever happens.
What do you mean by JS?
I meant plain JS, not JSON.parse(...) functionality. E.g.:
var m2={
10729945:{null:2},
10730556:{1:null},
10729026:{3:null}
};
console.log(m2);
var keys0 = Object.keys(m2[10729945]);
console.log(keys0); // Array [ "null" ]
console.log(typeof keys0[0]); // string
var keys1 = Object.keys(m2[10730556]);
console.log(keys1); // Array [ "1" ]
console.log(typeof keys1[0]); // string
P.S. My point is: since JSON is based on JS, what's conventional in JS should be acceptable in JSON, & in its parsers. Imagine how many bytes around the known universe could be removed from network traffic & CPU workloads. That would be a good thing.
Of course Java-to-Java communication could start benefiting from this without depending on JSON.parse(...) in JS. But perhaps that would require a config setting like EXPERIMENTAL_QUOTING_OPTIMIZATION, which could default to false initially & perhaps switch to true over time.
P.S. I made the JS example above a little more legible (spacing & 2 more variables).
FWIW, this is a request to generate/accept invalid JSON.
IMHO, one can view this as being about progress vs. status quo. It's not at all natural that JSON requires extra characters that equivalent JS code doesn't (one could also argue that this is a bug in the JSON spec/implementation):
P.S. My point is: since JSON is based on JS, what's conventional in JS should be acceptable in JSON, & in its parsers.
JSON is supposed to be interoperable. There are many things that I dislike in JSON (and would change if I could), but if you really want to change JSON, you'll need to engage with ECMA and IETF.
Experimenting with changes to JSON is fine, but by all means don't call it "JSON" anymore.
See https://json5.org/. I suppose someone could create a configuration flag.
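For reference, JSON5 relaxes the quoting rule only for keys that are ECMAScript IdentifierNames; if I read its grammar correctly, purely numeric keys like the ones in this report would still need quotes even there:

```json5
{
  name: "Hercule",  // unquoted identifier key: allowed in JSON5
  "10729945": null, // a purely numeric key is not an IdentifierName,
                    // so it must stay quoted even in JSON5
}
```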