Hello golang-nuts,

I am playing around with an encoder, something like encoding/gob or
encoding/json, and figured that the fastest way to work out what
type the incoming interface{} is, as long as it is a simple combination
of the builtin types, was a type switch.
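The idea looks something like this (a minimal sketch; the function and
encoding format here are illustrative, not godec's actual API):

```go
package main

import (
	"fmt"
	"strconv"
)

// encodeValue dispatches on the dynamic type of v with a type
// switch, avoiding reflection for the common builtin types.
func encodeValue(v interface{}) (string, error) {
	switch t := v.(type) {
	case int:
		return strconv.Itoa(t), nil
	case *int:
		return strconv.Itoa(*t), nil
	case string:
		return strconv.Quote(t), nil
	case []int:
		return fmt.Sprint(t), nil
	default:
		return "", fmt.Errorf("unhandled type %T", v)
	}
}

func main() {
	n := 42
	for _, v := range []interface{}{7, &n, "hi", []int{1, 2}} {
		s, _ := encodeValue(v)
		fmt.Println(s)
	}
}
```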

So I started writing a type switch, but quickly realized that I wanted
to handle a LOT of cases. Not just all the ints and uints etc, but
also *ints and *uints, and []ints, []uints, *[]ints, []*ints, *[]*ints
etc etc ad nauseam.

So I created a small text/template that generates all these cases for
a HUGE switch statement.

The map cases especially take a lot of cases to express - what
with there being maps, pointers to maps, and maps with pointer values,
pointer keys, or both ...

Anyway, the resulting switch statement is 3001 cases long, and the
file it lives in is 44325 lines, of which most is one function
containing the switch statement.

Compiling this package now takes time.

$ time go build
real 2m12.840s
user 2m9.947s
sys 0m1.776s

Why is this? It's not an uncommonly huge amount of code, even if the
function and switch size is probably fairly uncommon...

For the brave willing to duplicate this, the code can be found at
github.com/zond/godec.

To generate the source, "go run generator/generator.go", then just "go build".

--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

  • Martin Schnabel at Jul 10, 2014 at 9:40 pm
    You could use the reflect package to drastically simplify your program
    logic. I would even expect faster run and build times because of the
    reduced complexity. Also, it would be simpler to install, read and maintain.
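    The reflect-based equivalent of the giant switch collapses to one small
    recursive function, along these lines (a sketch of the approach, not a
    complete encoder):

    ```go
    package main

    import (
    	"fmt"
    	"reflect"
    )

    // encode walks a value with reflection; one small function
    // replaces thousands of type-switch cases, since pointers,
    // slices and maps are handled generically by Kind.
    func encode(v reflect.Value) string {
    	switch v.Kind() {
    	case reflect.Ptr:
    		return encode(v.Elem())
    	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
    		return fmt.Sprint(v.Int())
    	case reflect.String:
    		return v.String()
    	case reflect.Slice:
    		s := "["
    		for i := 0; i < v.Len(); i++ {
    			if i > 0 {
    				s += " "
    			}
    			s += encode(v.Index(i))
    		}
    		return s + "]"
    	default:
    		return fmt.Sprintf("<%s>", v.Kind())
    	}
    }

    func main() {
    	n := 7
    	fmt.Println(encode(reflect.ValueOf([]*int{&n})))
    }
    ```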
  • Martin Bruse at Jul 10, 2014 at 11:52 pm
    Right, whether I NEED to do it this way is irrelevant to my question about
    why Go seems to choke on a long function with a long type switch. But since
    you brought it up...

    What you suggest was my first approach, but when I compared the performance
    of my code with that of encoding/gob, or
    https://github.com/ugorji/go/tree/master/codec, my code was an order of
    magnitude slower.

    When I started digging into why, I saw that the main difference (for the
    case I was benching) between my code and github.com/ugorji/go/codec was
    that ugorji took shortcuts using a type switch for some common cases.

    encoding/gob, of course, does something different and much more clever, and
    uses unsafe which is unacceptable for my implementation.

    Reflection is unfortunately an order of magnitude slower than doing it the
    "regular" way :/
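    The gap is easy to reproduce with a micro-benchmark along these lines
    (illustrative only; the exact ratio depends on Go version and hardware):

    ```go
    package bench

    import (
    	"reflect"
    	"testing"
    )

    // viaSwitch extracts an integer with a plain type switch.
    func viaSwitch(i interface{}) int64 {
    	switch t := i.(type) {
    	case int:
    		return int64(t)
    	case int64:
    		return t
    	}
    	return 0
    }

    // viaReflect does the same through the reflect package.
    func viaReflect(i interface{}) int64 {
    	v := reflect.ValueOf(i)
    	if k := v.Kind(); k == reflect.Int || k == reflect.Int64 {
    		return v.Int()
    	}
    	return 0
    }

    func BenchmarkTypeSwitch(b *testing.B) {
    	for i := 0; i < b.N; i++ {
    		viaSwitch(i)
    	}
    }

    func BenchmarkReflect(b *testing.B) {
    	for i := 0; i < b.N; i++ {
    		viaReflect(i)
    	}
    }
    ```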
  • Dobrosław Żybort at Jul 11, 2014 at 6:32 am

    On Friday, 11 July 2014 at 01:52:57 UTC+2, Martin Bruse wrote:

    encoding/gob, of course, does something different and much more clever,
    and uses unsafe which is unacceptable for my implementation.

    Reflection is unfortunately an order of magnitude slower than doing it the
    "regular" way :/
    In Go 1.4 encoding/gob will stop using unsafe and will use reflection:
    https://codereview.appspot.com/102680045/

    It's possible that there will be some more optimizations to it before
    the Go 1.4 freeze.

  • Martin Bruse at Jul 11, 2014 at 9:15 am
    This is very interesting..

    The reason I am doing this is because gob (last time I checked) was fairly
    slow when encoding using single use/throw away Encoders, which is likely
    explained with gob optimizing for streams.

    If gob is being worked over and possibly optimized, is there any chance
    that it would become more performant in the single-use case?
  • Tomwilde at Jul 11, 2014 at 11:34 am

    is there any chance that gob would become more performant in the single
    use case?
    Quoting the issue's description (https://codereview.appspot.com/102680045/):

    Performance of course suffers, but not too badly.


    So no, performance is more likely to decrease than increase.

    Either way, optimizing encoders with humongous type-switches is not really
    a solution either, for now.

    Maybe with Rob Pike's proposal ("go generate":
    https://groups.google.com/forum/#!topic/golang-dev/ZTD1qtpruA8) such code
    would become viable.
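    For reference, the proposal describes marking generators with a source
    directive, so a package like godec could declare its generation step
    like this (a hypothetical application to this package):

    ```go
    //go:generate go run generator/generator.go
    package godec
    ```

    Running `go generate` would then rebuild the generated switch before
    the usual `go build`.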

  • Martin Bruse at Jul 11, 2014 at 12:07 pm
    Quoting the issue's description (https://codereview.appspot.com/102680045/
    ):
    Performance of course suffers, but not too badly.
    So no, performance is more likely to decrease than increase.
    Yeah, I realized that, but was hoping that gob would get more TLC when
    people were looking at it. But I guess not :/
    Either way, optimizing encoders with humongous type-switches is not
    really a solution either, for now.

    Well, why not? It will never be perfect, but it may improve speeds a lot
    for the cases it matches.
    Maybe with Rob Pike's proposal ("go generate":
    https://groups.google.com/forum/#!topic/golang-dev/ZTD1qtpruA8) such code
    would become viable.

    Oh, that shouldn't make a big difference. When the code stabilizes I can
    commit the generated switch to git, and then only maintainers would need to
    handle the generator mess.
  • Tomwilde at Jul 11, 2014 at 1:43 pm

    Maybe with Rob Pike's proposal ("go generate":
    https://groups.google.com/forum/#!topic/golang-dev/ZTD1qtpruA8) such code
    would become viable.

    Oh, that shouldn't make a big difference. When the code stabilizes I can
    commit the generated switch to git, and then only maintainers would need to
    handle the generator mess.

    I was talking about the standard library encoders, not a 3rd party library.
    Spelling out a big (non-auto-generated) switch in the stdlib is not a
    viable solution; but it might become one with Rob Pike's proposal.


  • Andrewchamberss at Jul 10, 2014 at 10:32 pm
    How much RAM is on your system? It could be one of the function analysis
    passes allocating a lot of memory and slowing down the whole thing.

    Alternatively, it could be something like the liveness analysis
    algorithm needing to iterate a huge number of times because the function is
    just unluckily bad.
    If you are curious: http://en.wikipedia.org/wiki/Live_variable_analysis . I
    think these compilers use the same iterative algorithm, though I may be wrong.
    There are many other compiler passes which may be suffering in the same
    sort of way.

    Neither answer really helps if you didn't ask purely out of curiosity, I
    suppose, though if you aren't regenerating the source it shouldn't need to
    rebuild.
  • Andrewchamberss at Jul 10, 2014 at 11:11 pm
    Just to clarify, the algorithms used by the compiler probably don't scale
    linearly within functions, so 1000 small functions compile faster than 1
    huge equivalent function.

    Again, for the curious:

    One advantage of huge functions is that they can allow more efficient
    allocation, more optimal instruction selection, more constant expressions
    removed from loops, and more comprehensive constant propagation. This is
    part of the reason why function inlining is done (other than removing
    function call overhead), but excessive inlining would both slow
    compilation, as the algorithms process more code and don't scale linearly
    in time, and bloat the binary.

  • Martin Bruse at Jul 10, 2014 at 11:56 pm
    Sounds reasonable, I guess.

    I will try to find a way to break it into smaller chunks.

    Thanks for the explanation!
  • Andrewchamberss at Jul 11, 2014 at 12:08 am
    Yeah, if your template generated smaller functions I would be surprised if
    your build times were still that long. I don't know how long a "normal"
    40k-50k line project takes to compile on your machine, though, as a
    comparison.

    Perhaps test the theory somehow before fully committing to redoing
    everything, since it's just my guess as to what's happening.
  • Martin Bruse at Jul 11, 2014 at 12:28 am
    Success!

    I did the simplest possible thing and just broke out the bodies of
    all the cases into separate functions (reducing the 44k lines of
    code to 32k in the process), so that each case just calls another
    function instead of allocating variables, running for loops etc, and
    ended up with

    $ time go build
    real 0m8.115s
    user 0m7.812s
    sys 0m0.282s

    A big improvement.

    Hopefully my intuition about switch statements (from C experience, and
    how they can reduce a great number of comparisons into a single jmp)
    is relevant to type switches ...
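    The refactoring amounts to turning each fat case body into a one-line
    call, roughly like this (a hypothetical shape, not the literal godec code):

    ```go
    package main

    import "fmt"

    // Before: each case inlined loops and temporaries into one
    // enormous function. After: each case is a single call, so the
    // compiler analyses many small functions instead of one huge one.
    func decodeIntSlice(v []int) string {
    	out := ""
    	for i, n := range v {
    		if i > 0 {
    			out += ","
    		}
    		out += fmt.Sprint(n)
    	}
    	return out
    }

    func decode(i interface{}) string {
    	switch t := i.(type) {
    	case []int:
    		return decodeIntSlice(t) // body moved out of the switch
    	case int:
    		return fmt.Sprint(t)
    	}
    	return ""
    }

    func main() {
    	fmt.Println(decode([]int{1, 2, 3}))
    }
    ```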
  • Andrewchamberss at Jul 11, 2014 at 12:36 am
    I'm also interested in whether type switches can turn into a jump table;
    perhaps someone with knowledge about this will post.
  • Ian Lance Taylor at Jul 11, 2014 at 12:45 am

    On Thu, Jul 10, 2014 at 5:36 PM, wrote:
    I'm also interested if type switches can turn into a jump table, perhaps
    someone with knowledge about this will post.
    Yes, with the gc compiler, a type switch will use a jump table for
    cases that are neither interface types nor nil.

    Ian
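    A sketch of the distinction Ian describes: in a switch like the one
    below, the concrete-type cases are eligible for jump-table dispatch,
    while the nil and interface cases need individual checks (example is
    illustrative, not from the thread's code):

    ```go
    package main

    import "fmt"

    func kind(i interface{}) string {
    	switch i.(type) {
    	case nil: // nil case: checked separately
    		return "nil"
    	case fmt.Stringer: // interface case: needs a method-set check
    		return "stringer"
    	case int, string, []byte: // concrete cases: jump-table eligible
    		return "concrete"
    	}
    	return "other"
    }

    func main() {
    	fmt.Println(kind(7), kind(nil))
    }
    ```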

  • Rogerjd at Jul 11, 2014 at 12:19 am
    Some compilers don't optimize the resulting binary code, so the developer
    doesn't have to wait as long.
    Other compilers try to optimize the resulting binary code for quick
    execution time, and don't care if the developer must wait.
    Does that explain it?
    Roger
  • Minux at Jul 12, 2014 at 3:22 am

    While this is generally true for other optimizing compilers, it's not true
    for gc: if a function is more complex than a threshold, the optimizer (and
    the register allocator) will simply give up.


Discussion Overview
group: golang-nuts
categories: go
posted: Jul 10, '14 at 9:17p
active: Jul 12, '14 at 3:22a
posts: 17
users: 8
website: golang.org
