FAQ
Hi all.

The go-start *webframework* <http://go-start.org/> has this note in its
documentation page:

Note: Don't use Go on 32-bit systems in production, it has severe memory
leaks.

That's a pretty serious remark, and it's even more odd coming from the
author of a Go framework, not some ranting do-nothing.

How can one counter this remark?

--


  • Joseph Lisee at Oct 21, 2012 at 11:38 pm

    On Sunday, October 21, 2012 1:34:01 PM UTC-4, Dumitru Ungureanu wrote:

    Hi all.

    The go-start *webframework* <http://go-start.org/> has this note in its
    documentation page:

    Note: Don't use Go on 32-bit systems in production, it has severe memory
    leaks.
    That's a pretty serious remark, and it's even more odd coming from the
    author of a Go framework, not some ranting do-nothing.
    I have seen this widely reported, and it's been labeled as a known side
    effect of Go's conservative GC. I would imagine the severity of this
    problem is application dependent, though.

    How can one counter this remark?
    In this specific case, the only way to do this would be to prove the author
    wrong. I would also ask him to make sure he has reports of it being an
    issue.

    -Joe L.

    --
  • Dave Cheney at Oct 22, 2012 at 12:16 am
    This is issue 909.

    It is being actively worked on.

    As someone who spends a lot of time working on 32-bit platforms, I can't
    say this issue is a showstopper; it is sometimes conflated with a failure
    to close response bodies from HTTP requests, and/or with ignoring errors.

    In summary, it's a known issue, but not the end of the world.

    Dave
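    [Editor's note: the client-side pattern Dave alludes to, closing response
    bodies and checking errors, looks roughly like this. The `fetch` helper and
    the local test server are illustrative, not part of the thread.]

    ```go
    package main

    import (
    	"fmt"
    	"io"
    	"io/ioutil"
    	"net/http"
    	"net/http/httptest"
    )

    // fetch performs a GET and returns the body length. The two details that
    // matter for the leak discussion: the error is checked, and the body is
    // always closed so the connection (and its buffers) can be released.
    func fetch(url string) (int, error) {
    	resp, err := http.Get(url)
    	if err != nil { // don't ignore errors
    		return 0, err
    	}
    	defer resp.Body.Close() // always close the body

    	// drain the body so the underlying connection can be reused
    	n, err := io.Copy(ioutil.Discard, resp.Body)
    	return int(n), err
    }

    func main() {
    	// a local stand-in server so the example is self-contained
    	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		fmt.Fprint(w, "hello")
    	}))
    	defer srv.Close()

    	n, err := fetch(srv.URL)
    	fmt.Println(n, err) // prints: 5 <nil>
    }
    ```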



    --
  • Corey Thomasson at Oct 22, 2012 at 2:52 am
    To expound on it not being the end of the world a bit further:

    I did run into this problem when (foolishly) benchmarking a large HTTP
    project; it quickly went away once we became conscious of the garbage we
    were creating. GCs will never be sufficiently smart to handle badly
    written software (speaking of my own).
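    [Editor's note: "being conscious about the garbage" often comes down to
    reusing an allocation across iterations instead of allocating fresh each
    time. A minimal sketch; `render` is a hypothetical helper, not code from
    the thread.]

    ```go
    package main

    import (
    	"bytes"
    	"fmt"
    )

    // render builds a greeting into a caller-supplied buffer instead of
    // allocating a new one, so a hot loop can reuse the same backing array.
    func render(buf *bytes.Buffer, name string) string {
    	buf.Reset() // keep the allocation, drop the old contents
    	buf.WriteString("Hello, ")
    	buf.WriteString(name)
    	return buf.String()
    }

    func main() {
    	var buf bytes.Buffer // one buffer, reused for every iteration
    	for _, name := range []string{"ana", "bob"} {
    		fmt.Println(render(&buf, name))
    	}
    }
    ```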

    --
  • Kevin Gillette at Oct 22, 2012 at 3:42 am
    On a positive note, the stdlib http server does close request bodies for
    you -- it's when you're acting as an http client that this kind of garbage
    becomes a concern.

    --
  • Mail at Oct 24, 2012 at 6:53 am

    On Monday, October 22, 2012 1:30:48 AM UTC+2, Joseph Lisee wrote:

    In this specific case, the only way to do this would be to prove the
    author wrong. I would also ask him to make sure he has reports of it being
    an issue.
    Our server for http://startuplive.in crashed every two hours on 32-bit
    when we got some traffic.

    -Erik

    --
  • Dumitru Ungureanu at Oct 22, 2012 at 6:17 am
    Is Go 1.1 the release that will solve this issue?

    Meanwhile, any pointers on best practices to avoid it?

    Thanks.

    --
  • Bryanturley at Oct 23, 2012 at 4:34 pm
    Best practice is to reuse allocations when possible, or just use 64-bit.
    64-bit brings a number of other benefits over 32-bit x86 as well.
    --
  • Paul at Oct 24, 2012 at 12:39 pm
    I would like to try and understand what the hard requirement is for running
    a production webserver on 32-bit hardware. Are there software requirements
    that mandate running on 32-bit? Are there perhaps additional costs for
    proprietary 64-bit software? Are there financial requirements that mandate
    running on 32-bit vs 64-bit? What is the actual business case?

    I don't know what the answer is, but I would like to understand it.

    --
  • Patrick Mylund Nielsen at Oct 24, 2012 at 12:56 pm
    64-bit programs use much more memory. If you are not actually using 64-bit
    instructions, then all 64-bit gives you is increased RAM costs. If you
    are--and you will be if you are using Go and its standard library--then
    it's usually worthwhile.

    --
  • Norbert Roos at Oct 24, 2012 at 1:40 pm

    On 10/24/2012 02:39 PM, Paul wrote:
    I would like to try and understand what the hard requirement is for running
    a production Webserver on 32 bit Hardware.
    For example, if it is to run on an ARM processor.

    --
  • Minux at Oct 24, 2012 at 2:46 pm

    On Oct 24, 2012 8:56 PM, "Patrick Mylund Nielsen" wrote:
    64-bit programs use much more memory. If you are not actually using
    64-bit instructions, then all 64-bit gives you is increased RAM costs. If
    you are--and you will be if you are using Go and its standard library--then
    it's usually worthwhile.
    no.
    even if you're not using 64-bit instructions, x86_64 provides
    8 more registers to use, and saner SSE2 instructions for
    FP; big benefits for Go at least. you might already know
    this, and if you're planning to use Go on x86, x86_64 is
    highly recommended.

    also, please note we now have an x32 mode where you can
    use 64-bit instructions without much memory overhead.
    this reinforces the point that you should use a 64-bit
    OS. (Go doesn't support the x32 mode, unfortunately.)

    --
  • Patrick Mylund Nielsen at Oct 24, 2012 at 2:51 pm
    I'm not trying to say don't run Go on 64-bit. Go is most certainly
    optimized for, and prefers, amd64. However, memory usage is a popular
    argument against 64-bit archs.
    --
  • Patrick Mylund Nielsen at Oct 24, 2012 at 2:56 pm
    I knew saying "all" would come back to bite me. I didn't mean to imply that
    memory is the only difference between x86 and x64; it's just the most
    obvious reason (to me) why somebody would prefer the former.
    --
  • Bryanturley at Oct 24, 2012 at 5:35 pm
    I would echo what minux said; also, here are some recent benchmarks, if you
    can wade through the incessant ads.

    http://www.phoronix.com/scan.php?page=article&item=ubuntu_1210_3264&num=1

    --
  • Bryanturley at Oct 24, 2012 at 5:37 pm
    http://www.phoronix.com/scan.php?page=article&item=ubuntu_1210_3264&num=1

    Should have noted these are Linux benchmarks, not Go benchmarks; just for
    comparison of 32-bit vs 64-bit in real apps.

    --
  • Patrick Mylund Nielsen at Oct 24, 2012 at 5:46 pm
    I'm not trying to dispute that x64 is faster than x86. That would be silly.
    All I said was that the memory overhead is usually larger.

    Companies like Canonical still refer to their 32-bit edition of Ubuntu as
    the recommended download, and Linode strongly recommends that everyone use
    the 32-bit distributions (in the latter case, specifically because of the
    decrease in memory usage).

    --
  • ⚛ at Oct 24, 2012 at 7:36 pm

    On Wednesday, October 24, 2012 7:46:39 PM UTC+2, Patrick Mylund Nielsen wrote:

    I'm not trying to dispute that x64 is faster than x86. That would be silly.

    It depends on the machine used by the developers of the software. If they
    are mostly using 32-bit x86 machines the developers will make optimization
    choices that work for 32-bit x86 while ignoring the consequences of source
    code modifications on 64-bit x86. The overall process seems similar to
    http://en.wikipedia.org/wiki/Supervised_learning in AI. Developers are
    optimizing based on measured data, so if there are measurements for 32-bit
    and no measurements for 64-bit then it may happen that the software will
    run faster on 32-bit machines.

    Advantages of x86-64 over x86-32 are not clear-cut (not to mention the
    obvious fact that 64-bit pointers consume twice the memory of 32-bit
    pointers):

    - accessing the 8 additional registers on x86-64 requires a prefix in the
    instruction

    - instructions working with 32-bit integers differ from 64-bit versions
    because of a prefix in one form of the instruction

    In general, x86-64 isn't a uniform extension of x86-32 (it is a
    non-uniform extension). An ideal uniform extension would, for example,
    mandate two instruction decoders in the 32+64-bit CPU (and consequently a
    slightly redesigned instruction encoding in 64-bit mode). That said, x86-64
    is very close to being a uniform extension of x86-32.

    Today there do not exist purely 32-bit versions of x86 CPUs (with no 64-bit
    mode in the silicon) that would have a corresponding 32+64-bit version, so
    it is hard to tell whether CPUs without any 64-bit mode would be faster
    thanks to a slightly higher operating frequency.

    I believe that a 32-bit address space is already large enough for almost
    any application, in the sense that extending the address space to 64 bits
    does not improve the application's performance in any way. This requires
    programs working with large datasets to swap data to/from disk at run
    time, or alternatively to use some form of data compression. x86 has no
    architectural support for this programming style. Go, as a safe
    programming language (without cgo), seems compatible with this style
    because it appears possible for a Go implementation to do the swapping
    and compression automatically.
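    [Editor's note: one common way to sidestep the pointer-size doubling that
    ⚛ mentions is to store compact integer indices instead of pointers. A
    hedged sketch; the `node`/`sum` types are illustrative, not from the
    thread.]

    ```go
    package main

    import "fmt"

    // node links to its successor via an int32 index instead of a pointer.
    // On a 64-bit system this halves each link (4 bytes vs 8 for a pointer),
    // and the index stays valid even if the backing slice is reallocated.
    type node struct {
    	value int
    	next  int32 // index into the nodes slice; -1 means "end of list"
    }

    // sum walks the list starting at head, following indices.
    func sum(nodes []node, head int32) int {
    	total := 0
    	for i := head; i >= 0; i = nodes[i].next {
    		total += nodes[i].value
    	}
    	return total
    }

    func main() {
    	// a tiny list: 1 -> 2 -> 3
    	nodes := []node{{1, 1}, {2, 2}, {3, -1}}
    	fmt.Println(sum(nodes, 0)) // prints: 6
    }
    ```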


    --
  • Bryanturley at Oct 24, 2012 at 9:12 pm

    It depends on the machine used by the developers of the software. If they
    are mostly using 32-bit x86 machines the developers will make optimization
    choices that work for 32-bit x86 while ignoring the consequences of source
    code modifications on 64-bit x86. The overall process seems similar to
    http://en.wikipedia.org/wiki/Supervised_learning in AI. Developers are
    optimizing based on measured data, so if there are measurements for 32-bit
    and no measurements for 64-bit then it may happen that the software will
    run faster on 32-bit machines.
    That definitely occurs; I have noticed it in my own code.

    Advantages of x86-64 over x86-32 are not clear (not mentioning the obvious
    fact that 64-bit pointers consume two times the memory of 32-bit pointers):

    - accessing the 8 additional registers on x86-64 requires a prefix in the
    instruction
    That is still 8 * 64 bits less swapping to and from the stack, which does add up.

    - instructions working with 32-bit integers differ from 64-bit versions
    because of a prefix in one form of the instruction

    In general, x86-64 isn't an uniform extension of x86-32 (it is a
    non-uniform extension). An ideal uniform extension would, for example,
    mandate two instruction decoders in the 32+64-bit CPU (and consequently a
    slightly redesigned instruction encoding in 64-bit mode). That said, x86-64
    is very close to being a uniform extension of x86-32.

    Today there do not exist purely 32-bit versions of x86 CPUs (with no
    64-bit mode in the silicon) that would have a corresponding 32+64-bit
    version, so it is hard to tell whether CPUs without any 64-bit mode would
    be faster thanks to a slightly higher operating frequency.

    I believe that a 32-bit address space is already large enough for almost
    any application in the sense that extending the address space to 64 bits
    does not improve the application's performance in any way.
    Well, if you work with large amounts of data, it is simpler to write a
    program when you don't have to worry about running out of memory. So maybe
    it doesn't help application performance, but it does help programmer
    performance.

    This requires programs working with large datasets to swap data to/from
    the disk at run-time, or alternatively to use some form of data
    compression. x86 has no architectural support for this programming style.
    Go as a safe programming language (without cgo) seems compatible with this
    programming style because it appears possible for a Go implementation to do
    the swapping and compression automatically.
    Is there an arch that does lossless memory compression on the fly? I know
    most GPUs can do lossy texture compression on the fly, but you can't use
    that for solid math.
    You could fake it with a compressed swap file, or even compressed
    in-memory storage and page faults, but that might be more work and
    slowdown than it is worth. Though I think some people are extending RAM
    in this way with PCIe-based SSDs, with some success; kind of a multilevel
    swap thing.
    I would say *transparent automagic* compressed memory access is probably
    not a good idea to do at the programming-language level. That would be a
    kernel/hardware thing.

    Back to the OP, though: until the newer garbage collector is finished, you
    should try to focus on 64-bit platforms.

    --
  • Niklas Schnelle at Oct 24, 2012 at 10:16 pm
    Well, I just had to answer the part about 32-bit being large enough not to
    impact performance; that's so utterly wrong I can't even start.
    I might be biased, because most of the applications I'm currently working
    on (research on route planning algorithms) in their current form can't even
    load a standard dataset on 32-bit, and the performance loss of not
    keeping the data completely in RAM would be many orders of magnitude.
    Current route planning software can compute shortest paths on the road
    network of Europe in less than 10 ms; some algorithms even go down to
    hundreds of microseconds.
    If they had to keep the data on disk, they would run on the order of
    seconds.
    Any sane database system will keep as much data in memory as there is RAM,
    and I believe we would also be talking about many orders of magnitude here.
    There is a reason mainframes have been 64-bit for decades.

    --
  • ⚛ at Oct 25, 2012 at 12:08 pm

    On Thursday, October 25, 2012 12:16:54 AM UTC+2, Niklas Schnelle wrote:

    Well I just had to answer the part about 32bit being large enough not to
    impact performance, that's so utterly wrong I can't even start.
    Every program has hotspots in code and also in data.

    I might be biased because most of the applications I'm currently working
    on (research on route planning algorithms) in their current form can't even
    load with a standard dataset on 32bit and the performance loss of not
    keeping the data completely in RAM would many orders of magnitude.
    Current route planning software can compute shortest paths on the road
    network of Europe in less than 10 ms some algorithms even go down to
    hundreth of microseconds.
    If they had to keep the data on disk, they would run on the order of
    seconds.
    A single request to compute the shortest path most likely accesses only a
    limited portion of the data. The same is true when executing 100 typical
    requests concurrently or in sequence. The minimum working-set size that
    has only a small effect on performance equals the amount of data common
    to typical requests.

    Any sane database system will keep as much data in memory as there is RAM
    and I believe we would also be talking about many orders of magnitude here.
    There is a reason mainframes have been 64bit for decades.
    In the past there were no solid state disks.


    --
  • Paul at Oct 25, 2012 at 12:46 pm
    The whole discussion around "64 bit uses much more memory" led me to check
    DDR3 memory prices on Amazon. Currently 4GB of DDR3 memory will set you
    back $20; 8GB will cost $40; that's $5 per GB. It should be noted that
    those are NOT high-volume discount prices.

    So for an additional $30 you get 3X the amount of memory that 32-bit
    architectures would allow you to address.

    At those prices, can't you just have random access memory to your heart's
    desire?

    Next: calculate the average hourly rate of a programmer against the price
    of an additional 6GB of DDR3 memory. Then ask: how many hours of
    programmer time can you afford to pay for, instead of just sending out a
    purchase order for 8GB or 12GB of memory?

    In the above rationalization I have not included any actual benefits from
    exploiting the additional functionality that 64-bit architectures and
    operating systems can provide, like having more registers and maybe SSE2,
    and also being technically enabled to do larger-scale in-memory computing.

    Prove me wrong; it looks like a no-brainer business case to me.
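    [Editor's note: Paul's back-of-the-envelope comparison is a one-line
    calculation. The $50/hour rate below is an assumed, illustrative figure;
    only the $40-for-8GB price comes from the thread.]

    ```go
    package main

    import "fmt"

    // breakEvenHours returns how many programmer-hours the extra RAM costs:
    // ramCost is the price of the additional memory, hourlyRate a
    // (hypothetical) programmer rate.
    func breakEvenHours(ramCost, hourlyRate float64) float64 {
    	return ramCost / hourlyRate
    }

    func main() {
    	// Paul's figures: 8GB at $5/GB is $40; assume a $50/hour rate.
    	fmt.Printf("%.1f hours\n", breakEvenHours(40, 50)) // prints: 0.8 hours
    }
    ```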
    On Thursday, October 25, 2012 12:16:54 AM UTC+2, Niklas Schnelle wrote:

    Well I just had to answer the part about 32bit being large enough not to
    impact performance, that's so utterly wrong I can't even start.
    I might be biased because most of the applications I'm currently working
    on (research on route planning algorithms) in their current form can't even
    load with a standard dataset on 32bit and the performance loss of not
    keeping the data completely in RAM would many orders of magnitude.
    Current route planning software can compute shortest paths on the road
    network of Europe in less than 10 ms some algorithms even go down to
    hundreth of microseconds.
    If they had to keep the data on disk, they would run on the order of
    seconds.
    Any sane database system will keep as much data in memory as there is RAM
    and I believe we would also be talking about many orders of magnitude here.
    There is a reason mainframes have been 64bit for decades.
    On Wednesday, October 24, 2012 9:36:23 PM UTC+2, ⚛ wrote:

    On Wednesday, October 24, 2012 7:46:39 PM UTC+2, Patrick Mylund Nielsen
    wrote:
    I'm not trying to dispute that x64 is faster than x86. That would be
    silly.

    It depends on the machine used by the developers of the software. If they
    are mostly using 32-bit x86 machines the developers will make optimization
    choices that work for 32-bit x86 while ignoring the consequences of source
    code modifications on 64-bit x86. The overall process seems similar to
    http://en.wikipedia.org/wiki/Supervised_learning in AI. Developers are
    optimizing based on measured data, so if there are measurements for 32-bit
    and no measurements for 64-bit then it may happen that the software will
    run faster on 32-bit machines.

    Advantages of x86-64 over x86-32 are not clear (not mentioning the
    obvious fact that 64-bit pointers consume two times the memory of 32-bit
    pointers):

    - accessing the 8 additional registers on x86-64 requires a prefix in the
    instruction

    - instructions working with 32-bit integers differ from 64-bit versions
    because of a prefix in one form of the instruction

    In general, x86-64 isn't a uniform extension of x86-32 (it is a
    non-uniform extension). An ideal uniform extension would, for example,
    mandate two instruction decoders in the 32+64-bit CPU (and consequently a
    slightly redesigned instruction encoding in 64-bit mode). That said, x86-64
    is very close to being a uniform extension of x86-32.

    Today there do not exist purely 32-bit versions of x86 CPUs (with no
    64-bit mode in the silicon) that would have a corresponding 32+64-bit
    version, so it is hard to tell whether CPUs without any 64-bit mode would
    be faster thanks to a slightly higher operating frequency.

    I believe that a 32-bit address space is already large enough for almost
    any application in the sense that extending the address space to 64 bits
    does not improve the application's performance in any way. This requires
    programs working with large datasets to swap data to/from the disk at
    run-time, or alternatively to use some form of data compression. x86 has no
    architectural support for this programming style. Go as a safe
    programming language (without cgo) seems compatible with this programming
    style because it appears possible for a Go implementation to do the
    swapping and compression automatically.

    All I said was that the memory overhead is usually larger.

    Companies like Canonical still refer to their 32-bit edition of Ubuntu
    as the recommended download, and Linode strongly recommends the 32-bit
    distributions to everyone (in the latter case, specifically because of the
    decrease in memory usage.)
    On Wed, Oct 24, 2012 at 7:35 PM, bryanturley wrote:

    I would mirror what minux said, also here are some recent benchmarks if
    you can wade through the incessant ads.


    http://www.phoronix.com/scan.php?page=article&item=ubuntu_1210_3264&num=1

    --

    --
  • Tim Harig at Oct 25, 2012 at 4:25 pm

    On Thu, Oct 25, 2012 at 05:46:11AM -0700, Paul wrote:
    Next: calculate the average hourly rate of a programmer against the price
    of additional 6GB of DDR3 Memory. Then ask: how many hours of programmer
    time can you afford to pay for instead of just sending out a Purchase Order
    for 8GB or 12GB of Memory.
    VPS plans typically start at less than 1GB memory. Some offer the choice
    of 64 bit. Many only give you 32 bit because it keeps their systems
    costs down by allowing them to have more virtual hosts per system and
    gets the job done quite sufficiently with less memory for the customer.
    CPU time isn't a limiting factor, memory is.

    --
  • Patrick Mylund Nielsen at Oct 25, 2012 at 4:42 pm
    Totally agree. For most virtualization providers, additional memory
    capacity is the most expensive resource to offer, simply because most
    virtualized machines are idle most of the time but still consume their
    share of memory.

    --

    --
  • Patrick Mylund Nielsen at Oct 25, 2012 at 4:40 pm
    Prove me wrong, it looks like a no-brainer business case to me.
    Look at VPS plans. Many companies don't host, or colocate, their own
    servers.
    On Thu, Oct 25, 2012 at 2:46 PM, Paul wrote:

    The whole discussion around "64 bit uses much more memory" led me to check
    DDR3 Memory prices on Amazon. Currently 4GB of DDR3 memory will set you
    back $20; 8GB will cost $40; that's $5 per GB. It should be noted that
    those are NOT high-volume discount prices.

    So for an additional $30 you get three times the memory that a 32 bit
    architecture would allow you to address.

    At those prices, can't you just have random access memory to your heart's
    desire?

    Next: calculate the average hourly rate of a programmer against the price
    of additional 6GB of DDR3 Memory. Then ask: how many hours of programmer
    time can you afford to pay for instead of just sending out a Purchase Order
    for 8GB or 12GB of Memory.
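
    Paul's break-even arithmetic is trivial to make concrete. The hourly rate
    below is an assumed figure for illustration, not one from this thread:

```go
package main

import "fmt"

func main() {
	const ramUpgradeUSD = 30.0 // 4GB -> 12GB at the quoted ~$5/GB
	const hourlyRateUSD = 75.0 // assumed programmer rate, illustration only
	hours := ramUpgradeUSD / hourlyRateUSD
	// prints: the RAM upgrade costs 0.4 hours of programmer time
	fmt.Printf("the RAM upgrade costs %.1f hours of programmer time\n", hours)
}
```

    In other words, under these assumptions the upgrade pays for itself if it
    saves even half an hour of engineering effort.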

    In the above rationalization I have not included any actual benefits from
    exploiting the additional functionality that 64-bit architectures and
    operating systems can provide, like having more registers and maybe SSE2.
    And also being technically enabled to do larger-scale in-memory computing.

    Prove me wrong, it looks like a no-brainer business case to me.
    --
  • Bryanturley at Oct 25, 2012 at 4:47 pm
    This discussion is getting silly. No one is stopping anyone from using
    32bit, the docs just state that there is a problem with go's current
    garbage collector on 32bit platforms that is being rectified at this moment.


    --
  • Patrick Mylund Nielsen at Oct 25, 2012 at 4:53 pm
    I guess this discussion arose after the question was asked, "Is there even
    a business case for 32-bit?", and, indeed, two big ones came to mind:
    running on ARM, and using VPSs (reducing memory usage.) I think it would be
    unfortunate to leave the discussion at "You should just use 64-bit all the
    time and everywhere", because a lot of people (who might like to use Go)
    can't do that.
    On Thu, Oct 25, 2012 at 6:47 PM, bryanturley wrote:

    This discussion is getting silly. No one is stopping anyone from using
    32bit, the docs just state that there is a problem with go's current
    garbage collector on 32bit platforms that is being rectified at this moment.


    --

    --
  • Bryanturley at Oct 25, 2012 at 5:13 pm

    On Thursday, October 25, 2012 11:53:19 AM UTC-5, Patrick Mylund Nielsen wrote:

    I guess this discussion arose after the question was asked, "Is there even
    a business case for 32-bit?", and, indeed, two big ones came to mind:
    running on ARM, and using VPSs (reducing memory usage.) I think it would be
    unfortunate to leave the discussion at "You should just use 64-bit all the
    time and everywhere", because a lot of people (who might like to use Go)
    can't do that.
    If the go developers wanted 64bit all the time everywhere there would be no
    arm or x86 versions of the compiler.
    Simple logic, don't panic.

    The real question is when will the real crazies come out and port it to old
    x86 20-bit addressing?
    640KB is enough for everyone....
    I am sure there are a few that would like a 6502 version of Go as well.




    --
  • Patrick Mylund Nielsen at Oct 25, 2012 at 5:22 pm

    If the go developers wanted 64bit all the time everywhere there would be
    no arm or x86 versions of the compiler.
    Simple logic, don't panic.
    The Go developers strongly encourage 64-bit most of the time.
    The real question is when will the real crazies come out and port it to
    old x86 20bit addressing?
    640KB is enough for everyone.... I am sure there are a few that would
    like 6502 version of go as well.

    That is a stupid question, not a real one. The sentiment that 32-bit is to
    64-bit what 6502 is to x86 is the very reason I commented that 32-bit still
    exists, and that there are several reasons why it won't go away soon, so
    dismissing it doesn't help anyone. I'm well aware that the Go developers
    don't share that sentiment, although Go certainly runs best on 64-bit at
    this point.
    --
  • Bryanturley at Oct 25, 2012 at 5:48 pm

    On Thursday, October 25, 2012 12:22:21 PM UTC-5, Patrick Mylund Nielsen wrote:
    If the go developers wanted 64bit all the time everywhere there would be
    no arm or x86 versions of the compiler.
    Simple logic, don't panic.
    The Go developers strongly encourage 64-bit most of the time.
    So.... ?
    Also I don't think you read the DON'T PANIC part.

    The real question is when will the real crazies come out and port it to
    old x86 20bit addressing?
    640KB is enough for everyone.... I am sure there are a few that would
    like 6502 version of go as well.

    That is a stupid question, not a real one. The sentiment that 32-bit is to
    64-bit what 6502 is to x86 is the very reason I commented that 32-bit still
    exists, and that there are several reasons why it won't go away soon, so
    dismissing it doesn't help anyone. I'm well aware that the Go developers
    don't share that sentiment, although Go certainly runs best on 64-bit at
    this point.
    Actually it wasn't a question, it was an attempt to humorously point out
    that we (as software engineers) have had similar problems in the past and
    will probably have them again in the future.
    If you couldn't see that as humor I think you need to settle down a bit; go
    for a walk, pick some flowers. No one is trying to steal your 32bit cpus.
    I myself want to do more Go on ARM, and mainstream ARM is going to be
    predominantly 32bit for easily the next 3+ years, till the (still
    unreleased?) 64bit chips proliferate. They had to start somewhere, and
    x86_64 is the arch for big_iron/cloud/<buzzword missing>/etc... nowadays.

    --
  • Dumitru Ungureanu at Oct 25, 2012 at 6:19 pm
    I'd appreciate best:

    a) polite and documented suggestions and opinions
    b) polite undocumented silence

    Thanks.
    On Thursday, October 25, 2012 8:48:08 PM UTC+3, bryanturley wrote:

    Actually it wasn't a question it was an attempt to humorously point out
    that we (as software engineers) have had similar problems in the past and
    will probably have them again in the future.
    If you couldn't see that as humor I think you need to settle down a bit,
    go for a walk pick some flowers. No one is trying to steal your 32bit
    cpus.
    --
  • Paul at Oct 25, 2012 at 6:52 pm
    OK I just looked at some VPS plans.
    A couple of key observations:
    1. VPS plans are not a technical problem, they are a business problem. And
    they are certainly not a language design problem.
    2. You always get what you pay for.
    3. You NEVER get a free lunch in the business world, the price always has
    to be paid somewhere.

    So what it means in this case: apparently some folks are trying to save on
    their VPS business plans, by purchasing relatively cheap 32 bit plans on
    apparently non-dedicated hardware. What happens though is that a
    necessary expenditure is just being shifted to another level, and it
    just means you pay the price somewhere else, and it looks like the price is
    being paid in additional programming expenditure for supporting a 32 bit
    architecture in order to save some money on a VPS plan. Maybe that's a bit
    speculative, but I do not think I am too far off.

    It's a pretty serious business issue, and I certainly would not say that
    it's a silly one. However it does beg the question: why is such a business
    problem (VPS plan) not fixed on the business and organizational level then?

    I have NOT seen a commitment to develop a new more precise Garbage
    Collector; I have seen that there is some work going on, but I have not
    seen an official commitment or any release date. So the statement that the
    problem is being rectified is not verifiable to me.

    On Thursday, October 25, 2012 6:40:39 PM UTC+2, Patrick Mylund Nielsen
    wrote:
    Prove me wrong, it looks like a no-brainer business case to me.
    Look at VPS plans. Many companies don't host, or colocate, their own
    servers.

    --
  • Bryanturley at Oct 25, 2012 at 7:32 pm

    On Thursday, October 25, 2012 1:52:48 PM UTC-5, Paul wrote:
    OK I just looked at some VPS plans.
    A couple of key observations:
    1. VPS plans are not a technical problem, they are a business problem. And
    they are certainly not a language design problem.
    2. You always get what you pay for.
    3. You NEVER get a free lunch in the business world, the price always has
    to be paid somewhere.
    I have indeed eaten many free lunches, they were usually quite tasty.

    So what it means in this case: apparently some folks are trying to save on
    their VPS business plans, by purchasing relatively cheap 32 bit plans on
    apparently non-dedicated hardware.
    A VPS (VIRTUAL private server) implies non-dedicated hardware.

    What happens though is that a necessary expenditure is just being
    shifted to another level, and it just means you pay the price somewhere
    else, and it looks like the price is being paid in additional programming
    expenditure for supporting a 32 bit architecture in order to save some
    money on a VPS plan. Maybe that's a bit speculative, but I do not think I am
    too far off.

    It's a pretty serious business issue, and I certainly would not say that
    it's a silly one. However it does beg the question: why is such a business
    problem (VPS plan) not fixed on the business and organizational level then?
    Most of the ones I have used have a simple "install a distro of your
    choice" menu with options for 32bit AND 64bit distros.
    I use a 1GB VPS every day; personally I run just random crap on it with a
    group of people from around the US. Around mid/end of 2011 we moved it from
    a 32bit to a 64bit distro so that I could mmap larger files in some C code I
    was using, even though there was only 1GB of memory. It was just easier
    for what I was doing to access this ~4GB file as memory, even if there were
    some performance hits in not controlling the actual disk reads; though in
    the end it was just faster overall.

    I have NOT seen a commitment to develop a new more precise Garbage
    Collector; I have seen that there is some work going on, but I have not
    seen an official commitment or any release date. So the statement that the
    problem is being rectified is not verifiable to me.
    Well I think you know people are working very hard on all aspects of go,
    and the GC on 32bit platforms is a known issue.
    I don't think they are announcing release dates, I think things are just
    done when they are done.
    Dave Cheney already mentioned the issue number in this thread...
    here is a link, don't bother them ;)
    http://code.google.com/p/go/issues/detail?id=909 <--- official comments
    At this moment it does seem to be marked for Go 1.1.

    That issue is the answer to the OP.

    --
  • Tim Harig at Oct 25, 2012 at 7:44 pm

    On Thu, Oct 25, 2012 at 11:52:48AM -0700, Paul wrote:
    1. VPS plans are not a technical problem, they are a business problem. And
    they are certainly not a language design problem.
    Agreed. Unfortunately, many of us who would like to use Go more
    extensively in our organizations are hampered by having to prove business
    cases. It doesn't matter whether the problem lies on the business or
    technical side. If it causes business problems, Go loses.

    --
  • Patrick Mylund Nielsen at Oct 25, 2012 at 8:23 pm
    VPS plans are not a technical problem they are a business problem.
    Indeed. I was responding to your question, "what is the business case [for
    32-bit]?" For a business that intends to outsource their infrastructure,
    for example, there is a very real one.
    And they are certainly not a language design problem.
    Poorly supporting 32-bit is a design problem (or rather an implementation
    problem) for a language that intends to be general-purpose and available on
    many platforms, i.e. used on 32-bit, ARM, in some virtual environments, et
    cetera. It is being worked on, but it _is_ a problem, and because you can't
    always "just use 64-bit", it's not negligible. If 32-bit were obscure, the
    problem would be negligible, but 32-bit is not obscure (yet, anyway.)
    On Thu, Oct 25, 2012 at 8:52 PM, Paul wrote:

    OK I just looked at some VPS plans.
    A couple of key observations:
    1. VPS plans are not a technical problem they are a business problem. And
    they are certainly not a language design problem.
    2. You always get what you pay for.
    3. You NEVER get a free lunch in the business world, the price always has
    to be paid somewhere.

    So what it means in this case: apparently some folks are trying to save on
    their VPS business plans, by purchasing relatively cheap 32 bit plans on
    apparently non-dedicated hardware. What happens though is that a
    necessary expenditure is just just being shifted to another level, and it
    just means you pay the price somewhere else and it looks like the price is
    being paid in additional programming expenditure for supporting a 32 bit
    architecture in order to save some money on a VPS plan. Maybe thats a bit
    speculative, but I do not think I am too far off.

    Its a pretty serious business issue, and I certainly would not say that
    its a silly one. However it does beg the question: why is such a business
    problem (VPS plan) not fixed on the business and organizational level then?

    I have NOT seen a commitment to develop a new more precise Garbage
    Collector, I have seen that there is some work going on, but I have not
    seen an official commitment or any release date. So the statement that the
    problem is being rectified is not reproducable to me.


    On Thursday, October 25, 2012 6:40:39 PM UTC+2, Patrick Mylund Nielsen
    wrote:
    Prove me wrong, it looks like a no-brainer business case to me.
    Look at VPS plans. Many companies don't host, or colocate, their own
    servers.
    On Thu, Oct 25, 2012 at 2:46 PM, Paul wrote:

    The whole discussion around "64 bit uses much more memory" led me to
    check DDR3 Memory prices on Amazon. Currently 4GB of DDR3 memory will set
    you back $20; 8GB will cost $40; thats 5$ per GB. It should be noted that
    those are NOT high-Volume discount prices.

    So for additional $30 you get 3X the additional of memory that 32 bit
    architectures would allow you to address.

    At those prices, can't you just have random access memory to your hearts
    desire?

    Next: calculate the average hourly rate of a programmer against the
    price of additional 6GB of DDR3 Memory. Then ask: how many hours of
    programmer time can you afford to pay for instead of just sending out a
    Purchase Order for 8GB or 12GB of Memory.

    In the above rationalization I have not included any actual benefits
    from exploiting the addtional fuctionality that 64 bit archictecture and
    operating systems can provide, like having more registers and maybe sse2.
    And also being technically enabled to do larger scale in-memory computing.

    Prove me wrong, it looks like a no-brainer business case to me.
    On Thursday, October 25, 2012 12:16:54 AM UTC+2, Niklas Schnelle wrote:

    Well I just had to answer the part about 32bit being large enough not
    to impact performance, that's so utterly wrong I can't even start.
    I might be biased because most of the applications I'm currently
    working on (research on route planning algorithms) in their current form
    can't even load with a standard dataset on 32bit and the performance loss
    of not keeping the data completely in RAM would many orders of magnitude.
    Current route planning software can compute shortest paths on the road
    network of Europe in less than 10 ms some algorithms even go down to
    hundreth of microseconds.
    If they had to keep the data on disk, they would run on the order of
    seconds.
    Any sane database system will keep as much data in memory as there is
    RAM and I believe we would also be talking about many orders of magnitude
    here.
    There is a reason mainframes have been 64bit for decades.
    On Wednesday, October 24, 2012 9:36:23 PM UTC+2, ⚛ wrote:

    On Wednesday, October 24, 2012 7:46:39 PM UTC+2, Patrick Mylund
    Nielsen wrote:
    I'm not trying to dispute that x64 is faster than x86. That would be
    silly.

    It depends on the machine used by the developers of the software. If
    they are mostly using 32-bit x86 machines the developers will make
    optimization choices that work for 32-bit x86 while ignoring the
    consequences of source code modifications on 64-bit x86. The overall
    process seems similar to http://en.wikipedia.org/**wik**
    i/Supervised_learning<http://en.wikipedia.org/wiki/Supervised_learning> in
    AI. Developers are optimizing based on measured data, so if there are
    measurements for 32-bit and no measurements for 64-bit then it may happen
    that the software will run faster on 32-bit machines.

    Advantages of x86-64 over x86-32 are not clear (not to mention the
    obvious fact that 64-bit pointers consume twice the memory of 32-bit
    pointers):

    - accessing the 8 additional registers on x86-64 requires a prefix in
    the instruction

    - instructions working with 32-bit integers differ from 64-bit
    versions because of a prefix in one form of the instruction

    In general, x86-64 isn't a uniform extension of x86-32 (it is a
    non-uniform extension). An ideal uniform extension would, for example,
    mandate two instruction decoders in the 32+64-bit CPU (and consequently a
    slightly redesigned instruction encoding in 64-bit mode). That said, x86-64
    is very close to being a uniform extension of x86-32.

    Today there are no purely 32-bit x86 CPUs (with no 64-bit mode in the
    silicon) that have a corresponding 32+64-bit version, so it is hard to
    tell whether a CPU without any 64-bit mode would be faster thanks to a
    slightly higher operating frequency.

    I believe that a 32-bit address space is already large enough for
    almost any application in the sense that extending the address space to 64
    bits does not improve the application's performance in any way. This
    requires programs working with large datasets to swap data to/from the disk
    at run-time, or alternatively to use some form of data compression. x86 has
    no architectural support for this programming style. Go as a safe
    programming language (without cgo) seems compatible with this programming
    style because it appears possible for a Go implementation to do the
    swapping and compression automatically.

    All I said was that the memory overhead is usually larger.

    Companies like Canonical still refer to their 32-bit edition of
    Ubuntu as the recommended download, and Linode strongly recommends everyone
    to use the 32-bit distributions (in the latter case, specifically because
    of the decrease in memory usage.)
    On Wed, Oct 24, 2012 at 7:35 PM, bryanturley wrote:

    I would mirror what minux said, also here are some recent benchmarks
    if you can wade through the incessant ads.

    http://www.phoronix.com/scan.php?page=article&item=ubuntu_1210_3264&num=1

    --

  • Tim Harig at Oct 24, 2012 at 7:59 pm

    On Wed, Oct 24, 2012 at 07:46:30PM +0200, Patrick Mylund Nielsen wrote:
    Companies like Canonical still refer to their 32-bit edition of Ubuntu as
    the recommended download, and Linode strongly recommends everyone to use
    the 32-bit distributions (in the latter case, specifically because of the
    decrease in memory usage.)
    Lots of VPSs run in 32bit mode because of the memory overhead of 64bit. If
    you are on a physical machine with lots of memory, then by all means, use
    64 bit. When you have many virtual systems running on a single piece of
    hardware, which is all many small/medium businesses actually need, the
    overhead of using 64 bit adds up quickly. From the provider's viewpoint,
    you cannot have as many virtual systems per physical system. From a
    consumer's standpoint, using a 64 bit system requires paying more for
    memory each month.

    Also note once again that x86 embedded processors (Geode, C3, C7, etc.) are
    32 bit. Their requirements have no particular need for the expense (both
    power and monetary) of 64 bit.

    --
  • GreatOdinsRaven at Oct 26, 2012 at 3:51 am
    A precise GC is in the works! That should help (if not solve completely)
    the 32bit issues that are well known. The new GC tracks the type info which
    allows the runtime to differentiate pointers from non-pointers, at the cost
    of increased memory usage.

    Here's a relevant changeset:
    http://codereview.appspot.com/6114046/

    Looks like it's on track for Go 1.1

    As to this whole "who needs 32-bit" thing... I think it's ridiculous. You
    can't discount 32-bit architectures yet, not with a "general purpose"
    programming language at least.

    I remember (can't find the thread now) Russ Cox saying something like
    "32-bit isn't a priority for us", implying that Go, as used inside Google,
    doesn't need an accurate 32-bit GC, *not* that nobody outside Google
    needs it. Huge difference.
    On Sunday, October 21, 2012 11:34:01 AM UTC-6, Dumitru Ungureanu wrote:

    Hi all.

    The go-start *webframework* <http://go-start.org/> has this note in its
    documentation page:

    Note: Don't use Go on 32 bit systems in production, it has severe memory
    leaks.
    That's a pretty serious remark, and it's even more odd coming from the
    author of a Go framework, not some ranting no doer.

    How can one counter this remark?
    --
  • Anssi Porttikivi at Oct 26, 2012 at 4:29 am
    If you want to implement massive hw parallelism, like a million cores, are you sure the future is 64 bit?

    --
  • Bryanturley at Oct 26, 2012 at 9:07 am
    How future? What if we stopped using spinning disks and everything was
    just some future spiffy non-volatile ram?
    In 50 years when all your storage gets mapped into one address space, we
    might need more than 64bits then.

    Probably won't matter when the robots cut us all into 64 bits.... man the
    future sucks...

    ;)



    On Thursday, October 25, 2012 11:29:32 PM UTC-5, Anssi Porttikivi wrote:

    If you want to implement massive hw parallelism, like a million cores, are
    you sure the future is 64 bit?
    --
  • ⚛ at Oct 26, 2012 at 3:17 pm

    On Friday, October 26, 2012 5:51:48 AM UTC+2, GreatOdinsRaven wrote:

    A precise GC is in the works!

    It isn't fully precise, but it is more precise than the current GC.

    That should help (if not solve completely) the 32bit issues that are well
    known. The new GC tracks the type info which allows the runtime to
    differentiate pointers from non-pointers, at the cost of increased memory
    usage.

    The increase in memory usage can be clearly seen in synthetic
    (=unrealistic) benchmarks. In normal applications (for example: godoc) the
    increase is smaller or not measurable.

    Here's a relevant changeset:
    http://codereview.appspot.com/6114046/

    Looks like it's on track for Go 1.1
    --
  • Ethan Burns at Oct 26, 2012 at 3:41 pm

    On Friday, October 26, 2012 9:42:00 AM UTC-4, ⚛ wrote:
    On Friday, October 26, 2012 5:51:48 AM UTC+2, GreatOdinsRaven wrote:

    A precise GC is in the works!

    It isn't fully precise, but it is more precise than the current GC.
    Just out of curiosity, what is still imprecise about it?


    Best,
    Ethan

    --
  • ⚛ at Oct 26, 2012 at 7:39 pm

    On Friday, October 26, 2012 5:41:52 PM UTC+2, Ethan Burns wrote:
    On Friday, October 26, 2012 9:42:00 AM UTC-4, ⚛ wrote:
    On Friday, October 26, 2012 5:51:48 AM UTC+2, GreatOdinsRaven wrote:

    A precise GC is in the works!

    It isn't fully precise, but it is more precise than the current GC.
    Just out of curiosity, what is still imprecise about it?
    The stack frames of all goroutines, closures, some allocations by the Go
    runtime (C code), global variables defined by C, maybe some values which
    are generated by the compiler as a side-effect of compilation, and maybe
    some additional parts of memory.

    Although this looks like a long list, it is only a small fraction of memory
    used by a Go program.

    It is probable that after merging http://codereview.appspot.com/6114046/ there
    will be some additional optimizations to the algorithm, and in time it may
    be extended to handle stack frames and closures.
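As a point of reference for the closure item in that list: a Go closure is a function value plus a heap-allocated environment for its captured variables, and it is that environment the collector has to scan. A minimal sketch, not from the thread:

```go
package main

import "fmt"

// makeCounter returns a closure. The captured variable n escapes to the
// heap; the function value plus that environment is what the collector
// must scan, and it is one of the areas the message above lists as still
// scanned conservatively.
func makeCounter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	c := makeCounter()
	c()
	c()
	fmt.Println(c()) // prints 3
}
```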

    It is hard to formalize the types of variables defined by the C portion of
    Go's run-time, so there will always be some improbable cases where the
    garbage collector fails to free a memory block. A curious case that can
    prevent the GC from freeing memory is the following
    innocent-looking line of C code (link to the source code<http://code.google.com/p/go/source/browse/src/pkg/runtime/thread_linux.c?r=ef1158a7371796bf4823a1ce43e3d01d2a765e14#249>
    ):

    static int8 badcallback[] = "runtime: cgo callback on thread not created
    by Go.\n";

    --
