It seems one of the requirements of creating a Go server that handles a
large number of requests per second is creating little garbage and making
sure that large data structures follow certain rules. This seems to boil
down to eliminating the use of pointers passed between functions or through
channels, using offsets into arrays instead of pointers to elements within
large data structures, and in general not using pointers in large data
structures at all.
I am assuming that the use of an interface results in heap-allocated data
which is then pointed to by the interface value, so interfaces can be
lumped in with pointers here.
This means you have to remove all pointers and interfaces from function
signatures and channels on the critical path. That would be no trivial
task, particularly considering that the standard library makes heavy use
of interfaces, not to mention that interfaces are one of the key
abstractions in Go.
It seems to me that Go tries to be a clean and simple language, but when
you're writing high-performance software, you end up having to write Go in
a rather obtuse way... and I'm mainly talking about avoiding the passing of
pointers and interfaces between functions or across channels. The data
structure restrictions are pretty inconvenient, but ultimately I think you
don't have to sacrifice too much to follow best practices there.
I'd like to see comments on the above relative to go1. Also, how might Go
change in the future regarding improved GC, escape analysis, and maybe
even additions to the language syntax itself to help with any of these
issues?