borc
fix: replace node buffers with uint8arrays
All uses of Node Buffers have been replaced with Uint8Arrays.
BREAKING CHANGES:
- `cbor.encode` used to return a Buffer; it now returns a Uint8Array
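For consumers who still need a Buffer, a zero-copy wrap over the returned Uint8Array should work, since Node Buffers are themselves Uint8Array subclasses. A minimal sketch (the `encode` here is a stand-in stub, not borc's actual encoder):

```javascript
// Sketch: adapting callers to the new Uint8Array return type.
// `encode` is a stand-in for borc's encode, stubbed so this runs on its own.
const encode = (obj) => new TextEncoder().encode(JSON.stringify(obj));

const u8 = encode({ hello: 'world' });

// Uint8Array -> Buffer without copying: wrap the same underlying ArrayBuffer.
const buf = Buffer.from(u8.buffer, u8.byteOffset, u8.byteLength);

console.log(buf instanceof Uint8Array); // true: Buffer subclasses Uint8Array
console.log(buf.buffer === u8.buffer);  // true: same memory, no copy made
```

Because the wrap shares memory, mutating `buf` mutates `u8` as well; use `Buffer.from(u8)` (which copies) if isolation matters.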
Very nice work, but `npm run bench` is a bit sad:
Before
encode - node-cbor - 76 x 1,350 ops/sec ±23.87% (73 runs sampled)
encode - borc - 76 x 20,574 ops/sec ±2.73% (89 runs sampled)
encode - stream - borc - 76 x 7,558 ops/sec ±3.59% (82 runs sampled)
encode - JSON.stringify - 76 x 40,179 ops/sec ±5.96% (69 runs sampled)
decode - node-cbor - 47 x 1,788 ops/sec ±19.72% (82 runs sampled)
decode - borc - 47 x 28,251 ops/sec ±7.06% (81 runs sampled)
decode - JSON.parse - 47 x 52,242 ops/sec ±2.92% (90 runs sampled)
After
encode - node-cbor - 76 x 1,291 ops/sec ±22.84% (76 runs sampled)
encode - borc - 76 x 7,794 ops/sec ±3.46% (77 runs sampled)
encode - stream - borc - 76 x 3,208 ops/sec ±8.85% (79 runs sampled)
encode - JSON.stringify - 76 x 40,007 ops/sec ±7.14% (69 runs sampled)
decode - node-cbor - 47 x 3,814 ops/sec ±11.51% (89 runs sampled)
decode - borc - 47 x 11,816 ops/sec ±21.45% (63 runs sampled)
decode - JSON.parse - 47 x 49,421 ops/sec ±3.76% (81 runs sampled)
I recall hearing third-hand (I haven't tested it myself, and I think it was @mikeal who told me that someone told him) that the various set*() methods are particularly poor performers. I don't know how heavily they feature in the paths these benchmarks exercise, but something is imposing a very high toll on both encode and decode, cutting throughput for the ops in question by more than half.
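For context, the set*() methods in question are presumably the DataView family (`setUint8`, `setUint32`, `setFloat64`, …), which a Uint8Array-based encoder might use where a Buffer-based one called `write*`. A purely illustrative sketch of the two write paths (this is an assumption about the hot path, not borc's actual code):

```javascript
// Two ways to write a big-endian uint32 at offset 0.
// Illustrative only; borc's internals may differ.

// Path 1: Node Buffer's write* methods.
const buf = Buffer.alloc(4);
buf.writeUInt32BE(0xdeadbeef, 0);

// Path 2: DataView set* methods over a plain Uint8Array.
const u8 = new Uint8Array(4);
const view = new DataView(u8.buffer, u8.byteOffset, u8.byteLength);
view.setUint32(0, 0xdeadbeef, false); // false = big-endian

// Both produce the same four bytes; any performance gap is per-call
// overhead, which compounds quickly in a tight encode/decode loop.
console.log(Buffer.from(u8).equals(buf)); // true
```

If the set*() calls really are the bottleneck, one common mitigation is writing bytes directly via index assignment (`u8[i] = x`) and shifting, at the cost of more verbose code.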
Makes me think I need to build more benchmarking into the things I'm currently converting!