I think a neat route would be to use this as an authoring plugin in VS Code, like Prettier: write Duper (or JSON5, or whatever), and have it downleveled to regular JSON automatically when you press cmd-S. You wouldn't get to keep your comments (or they could be transformed to `{ "//": "comment text" }`).
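
A minimal sketch of that downlevel step, assuming the `json5` npm package (comments are dropped here, as noted above):

    import JSON5 from "json5"

    // Downlevel: parse the JSON5 source, re-serialize as plain JSON.
    // JSON5.parse discards comments; preserving them as { "//": "..." }
    // members would need a comment-aware parser.
    const downlevel = (src: string): string =>
      JSON.stringify(JSON5.parse(src), null, 2)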

Outside of that, it's tough to compete with JSON in the "human-readable unschematized serialization format" market, especially when targeting JavaScript:

Use in the browser requires some degree of bundle-size increase, since the parser code has to be loaded before your format can be used, and WebAssembly libraries are usually quite large compared to a pure-JS implementation. According to [bundlejs](https://bundlejs.com/?q=%40duper-js%2Fwasm&treeshake=%5B*%5D), @duper-js/wasm weighs in at about 488 kB uncompressed, 159 kB gzipped.

Use in any JavaScript runtime means you're competing against the runtime's native `JSON.parse` and `JSON.stringify`. In V8 these are very quick and use runtime-level tricks to go faster; for example, see [V8's recent post on making JSON.stringify 2x faster](https://v8.dev/blog/json-stringify) when serializing plain objects with no funny-business `.toJSON` methods, replacer, or indent formatting.
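
Per that post, the plain one-argument call is what's eligible for the fast path; each of the extras it names opts back into the slow path (illustrative):

    const obj = { a: 1, b: [2, 3] }

    JSON.stringify(obj)                    // plain call: fast path eligible
    JSON.stringify(obj, null, 2)           // indentation: slow path
    JSON.stringify(obj, (k, v) => v)       // replacer: slow path
    JSON.stringify({ toJSON: () => "x" })  // .toJSON: slow path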

Besides those points, my major complaint about JSON is how expensive it is to encode binary data for transmission. In JSON I usually use base64; with your format it's transformed to escape sequences that are less efficient than base64, right? `\xNN` is base16 with 2 extra bytes wasted on the `\` and `x`, and `\uNNNN` is likewise base16 with 2 extra bytes per escape. Is there a way to fit binary into the format with no expensive encode/decode step?
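
Back-of-the-envelope on that overhead, assuming every byte of the payload has to be escaped:

    const raw = 1024                    // bytes of binary payload
    const b64 = Math.ceil(raw / 3) * 4  // 1368 chars: ~1.33x overhead
    const hex = raw * 4                 // 4096 chars: 4x ("\xNN" per byte)
    const uni = Math.ceil(raw / 2) * 6  // 3072 chars: 3x ("\uNNNN" per 2 bytes)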

So, for me this seems suitable as a config file format: there you get real benefit from comments, identifiers, and easier string authoring. Not sure I need the binary raw-string thingy in config files that much, but I guess it doesn't hurt.

> I think a neat route would be to use this as an authoring plugin in VS Code, like prettier: write Duper (or JSON5, or whatever),

This actually somewhat works right now. If you pass this JSON5 example through Prettier:

    {
      // comments
      unquoted: 'and you can quote me on that',
      singleQuotes: 'I can use "double quotes" here',
      lineBreaks: "Look, Mom! \
    No \\n's!",
      hexadecimal: 0xdecaf,
      leadingDecimalPoint: .8675309, andTrailing: 8675309.,
      positiveSign: +1,
      trailingComma: 'in objects', andIn: ['arrays',],
      "backwardsCompatible": "with JSON",
    }
You’ll get:

    {
      // comments
      "unquoted": "and you can quote me on that",
      "singleQuotes": "I can use \"double quotes\" here",
      "lineBreaks": "Look, Mom! \
    No \\n's!",
      "hexadecimal": 0xdecaf,
      "leadingDecimalPoint": 0.8675309,
      "andTrailing": 8675309,
      "positiveSign": +1,
      "trailingComma": "in objects",
      "andIn": ["arrays"],
      "backwardsCompatible": "with JSON"
    }
Which is still invalid JSON... but it does fix unquoted keys, the float formats, and trailing commas, and it converts single-quoted strings to double quotes with correct escaping. So if you have “format on save” enabled in your editor, it might just work!

Duper certainly doesn't outperform the native JSON implementation (and it likely never will), though I do think benchmarks would be a great addition. Bundle size and binary representation are definitely things I'll keep in mind!

The config-file transpilation-to-JSON idea is quite interesting. It's pretty similar to how I'm already defining the TextMate grammar used by the website's syntax highlighter, so I'll certainly try to incorporate that into the tooling.

It may be worth piping Duper through your WASM/native code and getting plain JSON back, which you then hand off to the runtime's `JSON.parse`, with a post-processing step to support any special features needed. Something like this:

    // Idea: implement a public duper.parse that leans on the
    // runtime's JSON.parse.
    //
    // Downlevel to JSON, e.g. binary strings become ordinary base64 JSON strings.
    const { jsonString, enhancements } = duper.duperToJSON(data)
    // Let the runtime go fast when decoding.
    const rawObject = JSON.parse(jsonString)
    // `enhance` knows the paths to all the binary base64 strings
    // and replaces them with Uint8Arrays.
    const decoded = duper.enhance(rawObject, enhancements)
Here `enhancements` is something very easy and low-cost to construct over the FFI bridge, like:

    type Path = Array<string | number>
    type TransformFn = (value: unknown) => unknown
    type Transform = TransformFn | Enhancements
    type Enhancements = Array<[path: Path, transform: Transform]>
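For illustration, a minimal sketch of what `enhance` could do with those types (assumed semantics, not an actual Duper API):

    // Walk each path to the value it addresses, then either apply the
    // transform function or recurse into a nested enhancement list.
    function enhance(root: any, enhancements: Enhancements): unknown {
      for (const [path, transform] of enhancements) {
        const parent = path.slice(0, -1).reduce((o, key) => o[key], root)
        const key = path[path.length - 1]
        if (typeof transform === "function") {
          parent[key] = transform(parent[key]) // e.g. base64 -> Uint8Array
        } else {
          enhance(parent[key], transform) // nested Enhancements, paths relative to here
        }
      }
      return root
    }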
Not sure if this would end up faster (it may allocate more), but it's probably better than unoptimized object/array construction going from WASM/native to the runtime. You could also try the `reviver` argument to `JSON.parse`, but I always find the lack of a full path to the key somewhat clunky.
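
For comparison, the reviver route might look like the sketch below, using a hypothetical sentinel prefix that `duperToJSON` would put on downleveled binary strings; it avoids the path bookkeeping, but every string value pays the prefix check, and the reviver still never sees the full path:

    // Hypothetical convention: binary data downleveled to "\u0000b64:<base64>".
    const BINARY = "\u0000b64:"
    const decoded = JSON.parse(jsonString, (_key, value) =>
      typeof value === "string" && value.startsWith(BINARY)
        ? Uint8Array.from(atob(value.slice(BINARY.length)), (c) => c.charCodeAt(0))
        : value
    )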