FAQ
I'm trying to understand the benefit of having a capacity when using make()
with a slice. I've read the documentation, and it still isn't clear to me
what the possible benefit of it is.

If I create a new slice
b := make([]int, 0, 5)

I still have to use append() to add a new element to it, which is exactly
what I would have to do if I never specified a capacity in the first place.

Could someone please show me an example where it would be beneficial to
give a slice a capacity greater than its length?


  • Matthew Kane at Feb 25, 2014 at 11:59 pm
    Appending to a slice that has a capacity greater than its length doesn't
    require reallocating and copying the contents.
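
    For example (a quick sketch, not from the original message), watching len
    and cap shows exactly when append reallocates:

    package main

    import "fmt"

    func main() {
        b := make([]int, 0, 5)
        for i := 0; i < 8; i++ {
            b = append(b, i)
            fmt.Println(len(b), cap(b))
        }
        // cap stays 5 for the first five appends: no reallocation, no
        // copying. The sixth append must allocate a larger array and
        // copy the old contents (the exact growth is a runtime detail).
    }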
  • Caleb Spare at Feb 26, 2014 at 12:00 am

    Your code would look the same, yes, but under the hood append doesn't need
    to allocate more space and copy data as it would if you started with a 0
    capacity (assuming you don't grow the slice to more than 5 items).

    If you're making a slice with 10k items and you know its final size a
    priori (or even just have a good estimate), it's going to be faster to
    create a slice with appropriate capacity from the beginning.
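
    As a sketch of that idiom (the input slice here is illustrative, not from
    the thread):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        input := []string{"a", "b", "c"}
        // The output size is known up front, so give append the full
        // capacity once and it never needs to reallocate.
        out := make([]string, 0, len(input))
        for _, v := range input {
            out = append(out, strings.ToUpper(v))
        }
        fmt.Println(out) // [A B C]
    }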

    This blog post gives more insights into slices and how append works:

    http://blog.golang.org/slices


  • JohnGB at Feb 26, 2014 at 12:12 am
    Thanks for the replies Caleb and mkb.

    So if I understand correctly, it's only an efficiency / performance
    consideration rather than anything functionally different?
  • Caleb Spare at Feb 26, 2014 at 12:23 am
    Capacity of slices is part of the semantics of Go; it is settable in make
    and also by the three-item slice form (s[i:j:k]), and it is exposed by the
    cap() builtin. So capacity is a first-class part of the "functionality" of
    the language; it's not just an implementation detail.
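
    Both are visible in a few lines (a minimal sketch):

    package main

    import "fmt"

    func main() {
        s := make([]int, 2, 10)
        fmt.Println(len(s), cap(s)) // 2 10

        // Three-index slicing s[low:high:max] sets len = high-low
        // and caps the result at max-low.
        t := s[0:1:2]
        fmt.Println(len(t), cap(t)) // 1 2
    }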

    That said, in the simple use case presented in the original post where
    you're only append()ing to the slice there's no difference in semantics
    whether you initialize your slice with a capacity or not. It may be okay to
    ignore it for now if it keeps things simpler for you.

    I do suggest reading that blog post I linked. It should be quite
    illuminating.

    -Caleb

  • JohnGB at Feb 26, 2014 at 12:35 am
    Thanks Caleb, that helps a lot. I'm still reading the blog post :)
  • Rodrigo Kochenburger at Feb 26, 2014 at 12:39 am
    Think of it as pre-allocating memory, preventing subsequent calls to
    append() from allocating memory and having to copy data around until your
    capacity is full.
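
    A minimal sketch that makes the allocation counts visible
    (testing.AllocsPerRun reports the average number of allocations per call;
    n here is illustrative):

    package main

    import (
        "fmt"
        "testing"
    )

    func main() {
        const n = 1000
        noCap := testing.AllocsPerRun(100, func() {
            var s []int
            for i := 0; i < n; i++ {
                s = append(s, i) // reallocates several times as it grows
            }
        })
        withCap := testing.AllocsPerRun(100, func() {
            s := make([]int, 0, n)
            for i := 0; i < n; i++ {
                s = append(s, i) // never exceeds the preallocated capacity
            }
        })
        // Expect many allocations without capacity, at most one with it.
        fmt.Println(noCap, withCap)
    }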

    - RK

  • JohnGB at Feb 26, 2014 at 12:07 pm
    I've done some testing to try and get a feel for how much of a performance
    benefit it is when specifying a capacity, and the results have been
    somewhat disappointing. I used the following code:

    package main

    const n = 1000000

    func withCap() {
        b := make([]int, 0, 100)
        for i := 0; i < n; i++ {
            b = append(b, i)
        }
    }

    func noCap() {
        b := make([]int, 0)
        for i := 0; i < n; i++ {
            b = append(b, i)
        }
    }

    func main() {
    }

    I ran benchmarks on the functions with varying amounts of initial capacity,
    and found that if the initial capacity specified was < ~1000, there was no
    benefit (in fact it was slower) compared to not specifying a capacity. This
    seems to go against what I've read. Am I missing something here, or is
    specifying a capacity really only beneficial with very large capacities?
  • Jan Mercl at Feb 26, 2014 at 12:25 pm

    If you really used the above code as-is then you measured nothing: main()
    is empty, so neither function is ever called.

    In any case, use the stdlib benchmarking facility in "testing" and
    compare preallocating b.N capacity vs no prealloc. Also reset the
    bench time before the b.N times loop.
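
    One way to set that up, as a sketch (the benchmark names are illustrative;
    put this in a _test.go file and run go test -bench=.):

    package main

    import "testing"

    func BenchmarkPrealloc(b *testing.B) {
        s := make([]int, 0, b.N) // preallocate exactly b.N capacity
        b.ResetTimer()           // exclude that setup from the timing
        for i := 0; i < b.N; i++ {
            s = append(s, i)
        }
    }

    func BenchmarkNoPrealloc(b *testing.B) {
        var s []int
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            s = append(s, i)
        }
    }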

    -j

  • JohnGB at Feb 26, 2014 at 12:37 pm
    I did use that code, but I benchmarked each of its functions with the
    benchmarking facility in testing. My _test.go file was:

    package main

    import (
        "testing"
    )

    func Benchmark_withCap(b *testing.B) {
        for i := 0; i < b.N; i++ {
            withCap()
        }
    }

    func Benchmark_noCap(b *testing.B) {
        for i := 0; i < b.N; i++ {
            noCap()
        }
    }

    Was there some error in my methodology here?
  • Tamás Gulácsi at Feb 26, 2014 at 12:39 pm
    withCap only preallocates capacity for 100 elements, but it should be n.

  • JohnGB at Feb 26, 2014 at 12:47 pm
    I wasn't trying to make withCap() as efficient as possible, but rather I
    was testing how much better it would be in situations where I don't know
    the final size of the slice, and trying to understand how much of a
    performance improvement I would get by setting a capacity.
  • Jérôme Champion at Feb 26, 2014 at 12:57 pm
    So you benchmark the optimized allocation of the first 25 or 50 elements,
    but not the final allocation of one million elements. It's not surprising
    that you don't see a performance improvement.

  • JohnGB at Feb 26, 2014 at 1:10 pm
    Jérôme, could you please explain what you mean? The function that is
    called in the benchmark will generate a slice that is at least a million
    bytes long, so I don't see how it is only considering the first 25 or 50
    bytes.

    If you could show me what I'm doing wrong here (preferably with code), I'd
    be grateful.
  • Voidlogic at Feb 26, 2014 at 2:19 pm
    Your problem in understanding here isn't code, it's conceptual.

    Your n is so large that the cap version of the program enjoys an advantage
    for only a very small number of iterations. Try making the initial cap
    within at least 1 or 2 orders of magnitude of n.

    Let's assume that when a slice is full, the new slice that is created is
    2*len(slice)+1 long (and the old contents are then copied into the new one).

    Let's look at the sizes for the zero-cap and 100-cap versions as they need
    to grow:
    Zero cap: 0, 1, 3, 7, 15, 31, 63, 127, 255, 511 ... n >= 1000000
    100 cap: 100, 201, 403 ... n >= 1000000

    As you can see, the 100-cap version only enjoys an advantage for the first
    8 resize/copy operations, and each successive resize is more expensive
    (more bytes to copy), so those initial ones are actually the comparatively
    cheap resizes.
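
    The actual growth factors are a runtime implementation detail (not exactly
    2*len+1), but you can observe them directly (a sketch):

    package main

    import "fmt"

    func main() {
        var s []int
        last := -1
        for i := 0; i < 1000000; i++ {
            s = append(s, i)
            if cap(s) != last { // cap changed, so append reallocated
                fmt.Println("len", len(s), "cap", cap(s))
                last = cap(s)
            }
        }
    }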
  • JohnGB at Feb 26, 2014 at 2:41 pm
    Thanks voidlogic, that explanation helps.

    I've tried running it with the initial cap = n/10, and I only see a ~15%
    improvement, but what I find somewhat odd is that when I set the initial
    cap to n, I see a ~300% improvement in speed. If the initial capacity is
    much larger than n, there is no further performance gain, and memory use
    is higher.

    So what I take from this (performance-wise) is that there is little gain
    in setting a capacity explicitly unless you know, to within an order of
    magnitude, the expected maximum size of the slice.
  • Michael Jones at Feb 26, 2014 at 2:46 pm
    Each magic reallocation of the slice generates garbage (the previous slice)
    that needs to be found and handled by the garbage collector. There is a
    cost to this as well. Thus, preallocating to the necessary size both saves
    all the copying (2x the final number of bytes) and also the garbage
    collection and heap-growth. These are all benefits in big computations and
    insignificant in typical computations.

    --
    Michael T. Jones | Chief Technology Advocate | mtj@google.com | +1 650-335-5765

  • Voidlogic at Feb 26, 2014 at 3:37 pm
    Michael makes good points regarding garbage generation, and the cap of your
    final slice is often much larger than needed (due to the cap growth factor).
    Furthermore, the overhead of this kind of garbage is more noticeable in a
    long-running program than in a quick bench. I've had some programs where
    using tip's sync.Pool to pool []byte (with a max size) has been a huge win
    for these reasons.
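
    A sketch of that pattern (maxPooled is an illustrative cutoff; sync.Pool
    was on tip at the time and shipped in Go 1.3):

    package main

    import "sync"

    const maxPooled = 64 << 10 // don't keep oversized buffers alive

    var bufPool = sync.Pool{
        New: func() interface{} { return make([]byte, 0, 4096) },
    }

    func process(data []byte) {
        buf := bufPool.Get().([]byte)[:0] // reuse a prior allocation, reset length
        buf = append(buf, data...)        // stand-in for real work
        if cap(buf) <= maxPooled {        // cap the size of pooled buffers
            bufPool.Put(buf)
        }
    }

    func main() {
        process([]byte("hello"))
    }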
  • Konstantin Khomoutov at Feb 26, 2014 at 12:31 pm

    I have nothing to say on the nature of your observations, but I'd like
    to point out that append() is not so dumb as to grow/realloc the
    slice's underlying array by a single element when you're appending a
    single element--instead, it is optimistic and overallocates the
    memory using certain heuristics which are, I believe, tuned for
    low-to-moderate amounts of memory. For Go 1.2, see the growslice1()
    function in the pkg/runtime/slice.c file. Also, I heard from
    experienced folks here that memory copying is not *that* costly these
    days on popular H/W platforms.

    I think these facts might explain the behaviour you're observing.

  • JohnGB at Feb 26, 2014 at 12:38 pm
    Thanks Konstantin, that is helpful and would explain a lot.

