I've been using JSON Pointer in a project or two.

I implemented it as a trivial walker over a map[string]interface{}, but
that incurs a tremendous amount of parsing overhead for larger objects
when I only want to grab a tiny piece, so I did a new implementation using
the json scanner so that I only have to unmarshal the parts of the input I
actually intend to use (or in some cases, just extract without
unmarshaling).
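For reference, the "trivial walker" approach looks roughly like this. This is a minimal sketch, not the actual project code: it handles only object keys (no array indices), assumes pointers start with "/", and uses `get` as a hypothetical name.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// get unmarshals the whole document into generic maps, then walks it
// along a JSON Pointer path such as "/a/b". The full-document parse is
// exactly the overhead the streaming approach avoids.
func get(doc []byte, pointer string) (interface{}, error) {
	var cur interface{}
	if err := json.Unmarshal(doc, &cur); err != nil {
		return nil, err
	}
	if pointer == "" {
		return cur, nil
	}
	for _, part := range strings.Split(pointer, "/")[1:] {
		// Undo JSON Pointer escaping: ~1 -> "/", ~0 -> "~".
		part = strings.ReplaceAll(part, "~1", "/")
		part = strings.ReplaceAll(part, "~0", "~")
		obj, ok := cur.(map[string]interface{})
		if !ok {
			return nil, fmt.Errorf("not an object at %q", part)
		}
		if cur, ok = obj[part]; !ok {
			return nil, fmt.Errorf("key %q not found", part)
		}
	}
	return cur, nil
}

func main() {
	v, err := get([]byte(`{"a":{"b":3}}`), "/a/b")
	fmt.Println(v, err)
}
```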

The new implementation is really simple
<https://github.com/dustin/go-jsonpointer/blob/master/pointer.go> *except*
that I had to copy the entire encoding/json package source into my tree in
order to get this. I like using the scanner and some of the helpers for
parsing some of the grosser JSON stuff, and I have a few more similar
projects I'd like to do streaming JSON parsing with.

Is it OK to just keep copying around the entire JSON implementation? (It
doesn't *feel* OK, and I can't easily get rid of all the exposed symbols
that come with it.) Are there any plans to capitalize (i.e., export) the
scanner?
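(Editorial note: the stdlib scanner was never exported, but Go 1.5 later added `json.Decoder.Token`, which makes this kind of streaming extraction possible without forking the package. A rough sketch, with `firstField` as a hypothetical helper that decodes only one top-level key and skips the rest:)

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// firstField streams through a JSON object's top-level keys and decodes
// only the value of the requested key. Values for other keys are read
// into a json.RawMessage and discarded, so they are never materialized
// as maps and slices.
func firstField(data []byte, key string) (interface{}, error) {
	dec := json.NewDecoder(bytes.NewReader(data))
	if _, err := dec.Token(); err != nil { // consume the opening '{'
		return nil, err
	}
	for dec.More() {
		t, err := dec.Token() // next object key
		if err != nil {
			return nil, err
		}
		if k, ok := t.(string); ok && k == key {
			var v interface{}
			if err := dec.Decode(&v); err != nil {
				return nil, err
			}
			return v, nil
		}
		var skip json.RawMessage // skip the value we don't care about
		if err := dec.Decode(&skip); err != nil {
			return nil, err
		}
	}
	return nil, fmt.Errorf("key %q not found", key)
}

func main() {
	v, err := firstField([]byte(`{"x":1,"y":{"big":"blob"}}`), "y")
	fmt.Println(v, err)
}
```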



  • Dustin at Oct 28, 2012 at 8:16 pm
    I've had very good luck with this and put up a fork of encoding/json
    with the scanner made public: http://github.com/dustin/gojson

    Does anyone have any interest in integrating this? Here's why I do it:

    This is my code before (using Unmarshal on the testdata/code.json.gz
    byte array and traversing the resulting map to pull a path):

    109240058 ns/op 17.76 MB/s

    And this is after (scanning the byte array directly and pulling the piece
    I want):

    33330 ns/op 58218.91 MB/s

    It's slightly more complicated, but an obvious win, as my program works
    with arbitrary "foreign" JSON and user-supplied paths to pull stuff out
    of it. Besides being significantly faster, it also has an obvious
    reduction in memory footprint, since it doesn't have to materialize all
    of that data.
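(For context, numbers in this `ns/op ... MB/s` form come from Go's `testing` benchmark framework; calling `b.SetBytes` with the input size is what makes it report throughput. A self-contained sketch of the "before" style benchmark, with a tiny inline document standing in for the real testdata/code.json.gz corpus:)

```go
package main

import (
	"encoding/json"
	"fmt"
	"testing"
)

// Inline stand-in for the real testdata/code.json.gz corpus.
var doc = []byte(`{"tree":{"name":"root","kids":[{"name":"child"}]}}`)

// BenchmarkUnmarshalWalk measures the "before" approach: unmarshal the
// whole document, then walk the resulting maps to reach one value.
// b.SetBytes tells the framework the input size so it prints MB/s.
func BenchmarkUnmarshalWalk(b *testing.B) {
	b.SetBytes(int64(len(doc)))
	for i := 0; i < b.N; i++ {
		var m map[string]interface{}
		if err := json.Unmarshal(doc, &m); err != nil {
			b.Fatal(err)
		}
		tree := m["tree"].(map[string]interface{})
		kids := tree["kids"].([]interface{})
		_ = kids[0].(map[string]interface{})["name"]
	}
}

func main() {
	// testing.Benchmark lets us run it outside "go test".
	res := testing.Benchmark(BenchmarkUnmarshalWalk)
	fmt.Println(res.String())
}
```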

  • Jan Mercl at Oct 28, 2012 at 9:57 pm
    58 GB/s must be a bogus measurement; that would mean 58 bytes in 1 ns.


  • Dustin at Oct 29, 2012 at 12:06 am

    On Sunday, October 28, 2012 2:58:01 PM UTC-7, Jan Mercl wrote:
    > 58 GB/s must be a bogus measurement; that would mean 58 bytes in 1 ns.

    "Misleading" would be better than "bogus". I don't have a very broad
    range of tests, so I just pulled one arbitrarily. It's representative of
    some use cases, but not others. It's (due to my laziness) closer to a
    best-case example.

    In this case, it gets everything it needs near the beginning of the
    parse and simply doesn't need to consume the entire document. The worst
    case is when nothing you're looking for matches. I'm lacking a test
    case there, so I'll expand on that and give worst-case measurements as
    well. Thanks for the feedback.

  • Dustin at Oct 29, 2012 at 3:44 am

    On Sunday, October 28, 2012 2:58:01 PM UTC-7, Jan Mercl wrote:
    > 58 GB/s must be a bogus measurement; that would mean 58 bytes in 1 ns.

    Thanks for driving me to look at this more. The miss case didn't do the
    right thing, so I couldn't benchmark it properly. My good case is faster
    now, but my worst case was a good deal slower. Now my worst case is only
    3x or so faster than the previous method (but still without all the
    memory overhead).

    Good Case       100000     26871 ns/op   72212.48 MB/s
    Worst Case          50  34122944 ns/op      56.87 MB/s
    Previous Impl       20 107650901 ns/op      18.03 MB/s

    Unfortunately, the profiler is now giving me nonsense, so I can't make it
    any faster.


Discussion Overview
group: golang-nuts
posted: Oct 27, '12 at 10:32a
active: Oct 29, '12 at 3:44a

2 users in discussion: Dustin (4 posts), Jan Mercl (1 post)


